CSI User Experience Conference 2012 Part 6 – Does technology help or hinder progress?

CSI User Experience Conference 2012: TV Accessibility

For background info on this conference read Part 1.

A panel discussion on technology and access considered whether the provision of access services by broadcasters is helped or hindered by technology. Not surprisingly, the answer is both.

Gareth Ford Williams, Acting Head of Usability and Accessibility for the BBC, discussed some of the technological challenges, explaining that the different types of access service – subtitling, audio description and sign language – face different barriers:

We've moved from a world where you take something that is linear broadcast and try to turn it into catch-up and on demand – we're basically converting content from one set of formats to another. We luckily made the choice very early on with BBC iPlayer to support timed text, which has paid off. It was a little bit of a punt at the time because no-one else was doing it. But that is only one kind of solution; we hope the platforms catch up, and that more platforms support the delivery of that standard, so that we can roll it out further. When you look at the other access services, you have other issues, or none at all. Signed content, for instance, is already packaged up and broadcast as a programme asset. There's no conversion, nothing to be done – which is why I made the point earlier that it was the first access service where we had all the available content on all of the devices and platforms iPlayer is on, because it's just 5% of the content and we simply treated it like another programme. Audio description is a whole other kettle of fish: suddenly we're delivering an additional audio asset, which is even more challenging than trying to deliver subtitles and making that work. We spent a year of effort on iPlayer trying to make that work, and realised we would be better off turning it into another programme asset. Every single time we did it, it broke in many wonderful ways, and now that we have 600 devices to try to support, it has become an even bigger and near-impossible task. So the issue then is how solutions that were built just for online can be completely re-engineered to be scalable across more platforms. It's not straightforward and easy; we're several years into it and are still learning as we go along. But that's where we are.

Regarding the provision of subtitling, Andrew Lambourne from Screen Systems made an excellent point: access provision needs to be considered at the production level to assist broadcasters in providing the service, and it is often this lack of approach, not technology, that hinders access:

There’s a need for some joined-up thinking and a holistic approach. Often these problems are easily solved by stepping back and looking at what we’re doing. If you step back from this industry of providing access to media, you see an industry that is producing media. During the course of that production it creates an awful lot of data which is thrown away or kept to one side, and not necessarily passed on to the people providing the subtitles or the audio description – who then might have to go and research how names are spelled, what the lyrics of songs are, and so on. So if we take a step back, it’s not necessarily a technological issue in and of itself; it’s a product of the speed with which some parts of the industry move compared to others, and a product of commercial motivation. It happened because producers wanted to get their content out in as many ways as possible: DVD worked, it was implemented, it was fine. What they didn’t think about was where the content goes next down the chain. I think a very useful change of attitude would be to start thinking of accessibility as part of the responsibility of the producer, not the deliverer or the broadcaster of the content. If you think more holistically and factor accessibility in at the beginning, you can design your technological chain to make it easy. So if you take a cinema production and recut it for television, let the subtitlers know where the cuts were and the refactoring of the subtitles can be automated. It can be built in; what’s needed is the right motivation to do it, and I think the need to further save on cost is the right kind of motivation.

Later, Lambourne said that the barriers are less technological and more commercial, and gave a passionate reminder that access services are not about punishing or attacking broadcasters:

If the requirement is to have a commercial motivation, I think the justification for that has come out obviously today, when somebody said ‘it’s all about bums on seats’. Access services aren’t attacks on broadcasters or anything; they are a means of reaching more people – you’re talking about 10% extra audience or more, depending on the access you are providing. That’s a huge justification. The other thing that is happening, of course, as we said earlier, is that the number of platforms to which you are targeting your media is increasing. One of the things that Screen have been doing recently is looking at how you can make it easy to take the subtitles you did for broadcast, perhaps as DVB, using STL or whatever files, and make those same subtitles available when you distribute the content on the web… It’s all about looking at the workflows, looking at the way the technology is linked together, and then finding technical solutions. There are not very many barriers left except the barrier of needing budgets.
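The kind of conversion Lambourne describes – reusing broadcast subtitles on the web – is in practice a workflow of file-format translation. As a minimal illustrative sketch (not Screen Systems' actual tooling, and using the simple SubRip format rather than broadcast EBU STL or DVB bitmaps, which are considerably more involved), converting timed text to the web-native WebVTT format can be as small as this:

```python
import re

def srt_to_vtt(srt_text: str) -> str:
    """Convert SubRip (SRT) subtitles to WebVTT.

    The cue structure is identical; WebVTT uses '.' as the
    millisecond separator where SRT uses ',', and requires a
    'WEBVTT' header line at the top of the file.
    """
    # Rewrite timestamps like 00:00:01,500 as 00:00:01.500
    body = re.sub(r"(\d{2}:\d{2}:\d{2}),(\d{3})", r"\1.\2", srt_text)
    return "WEBVTT\n\n" + body.strip() + "\n"

srt = """1
00:00:01,000 --> 00:00:03,500
Hello, and welcome.

2
00:00:04,000 --> 00:00:06,000
Subtitles reach a wider audience.
"""

print(srt_to_vtt(srt))
```

Real broadcast-to-web pipelines also have to carry over positioning, colour and styling information, which is where formats such as EBU-TT-D come in; but the principle is the same – the timing and text already exist, so making them available online is a translation problem, not a re-authoring one.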

It was also lovely to hear someone speak of the advantages of providing subtitles and give a reminder that the benefits are not limited to those who are deaf and hard of hearing – a message that perhaps should be shouted louder. Lambourne continued:

Live subtitling, which is something I have worked on for my entire career, is now reaching a point of maturity: for half a dozen European languages you can train somebody, without huge difficulty, to sit and listen to more or less any kind of live broadcast and respeak it into a speech recognition system with good enough quality to broadcast it immediately as subtitles. But there are not speech recognition systems available for all European languages. The costs are not massive at a national scale, and the benefits are huge, because of course subtitling is not just access for people who can’t hear, or audio description for people who can’t see; it helps people learn the language, it preserves the quality of the language, and it gives a cultural benefit. So for people who are not first-language speakers in a given territory, the subtitles can help them learn to read – the same with children. Subtitles on cartoons are a great motivator for children to learn to read. I think the value-added benefits need to be brought out more sharply, perhaps even by the regulators; they don’t need sticks, they can actually talk about the benefits of providing access.