Tagged: Connected TV

  • iheartsubtitles 10:08 pm on December 3, 2014 Permalink | Reply
Tags: Connected TV

    Access 2020 – Languages & The Media 2014 

Access 2020 was an interesting panel hosted by Alex Varley at the 10th Languages & The Media conference. The theme was for the panellists to discuss what they thought media access might look like in 2020.

Although it is difficult to summarise all of the discussions, Media Access Australia have written a summary of 20 highlights. Below are my two cents.

• Broadcasters have to start thinking about what their role is. The industry still needs content producers, a role broadcasters are likely to keep playing a big part in. Broadcast and IPTV are likely to merge.
• In Europe, there is a keen focus on developing Machine Translation (MT), User Experience (UX), and Big Data.
• Subtitling is becoming a language technology business rather than an editorial one. Greater levels of interest and innovation in technology will lead to greater quality and lower cost.
• The industry is aiming for interoperability by 2020 (if not before) to ensure no technological barriers to access exist.
• Two interesting ideas/questions raised: Will access services become a part of the production process for audio-visual content? Will we start to see closed signing?

    How to achieve all of this:

    1. Talk to end users more.
2. Deal with the complexity (interoperability).
    3. Different jobs will be created by new technology, but we still need humans to provide access.
    4. Regulators are not always the answer and can get it wrong. Target the businesses to provide access.
    Animated gif of the hoverboard from the film Back To The Future

    Personally I’m still waiting for the hoverboard.

     
  • iheartsubtitles 8:24 pm on January 14, 2013 Permalink | Reply
Tags: Connected TV, Sony

Well this is confusing: connected internet smart TV – Sony model discrepancy 

How was everyone’s holiday break? Christmas seems ages away already, but one of the best Christmas presents I got was staying at my parents’ and watching lots of great television on their new internet connected smart TV. Up until this point, every internet connected TV I had tried to use was completely without subtitle support, or if it was there, I had yet to find the subtitled content. So imagine my surprise when I selected BBC iPlayer to find an option to turn the subtitles on. Very exciting! It meant I could watch catch-up services via connected TV on a big, high quality TV screen rather than a smaller PC screen, or an even smaller laptop screen or smartphone. Here’s the photo I took illustrating BBC iPlayer on an internet connected TV with subtitles options:

BBC iPlayer via connected TV with subtitles support – the ‘S’ symbol allows you to switch subtitles on or off

I too have a connected TV, and it too is a Sony model – a different model, but only approximately a year older than the model my parents bought. So I was hopeful that if I went into my connected TV settings and checked for a software upgrade, the next time I logged into BBC iPlayer I too would get the subtitles support. As soon as I got back home, this is exactly what I went to do. So imagine my disappointment when logging into iPlayer after the upgrade to find that I still do not have subtitles support.

Worse than that, how confusing is it to consumers that different models from the same TV manufacturer (in this instance, Sony) appear to have different capabilities? My parents’ connected TV is a brand new model; I bought mine less than a year ago. It wasn’t cheap, and I feel like I want a refund. Or am I doing something wrong? It’s not that easy to know – how do I know I have the latest update? Also, there is no way for me to know prior to buying which models will provide me with subtitle support. Sony cannot provide that information to me as it is probably down to the provider (in this instance, the BBC). All in all it’s a bit of a mess. I don’t see that changing any time soon, and I’m not sure what the solution is. I can only hope that once internet connected or smart TVs become the norm, there is consistent behaviour between models, manufacturers, and the connected TV services provided on them – including the access services.

    What are other people’s experiences so far with using connected TV services and accessing subtitles or captions? Please comment below.

     
  • iheartsubtitles 5:18 pm on January 13, 2013 Permalink | Reply
Tags: Connected TV

    CSI User Experience Conference 2012 Part 6 – Does technology help or hinder progress? 

CSI User Experience Conference 2012: TV Accessibility

For background info on this conference read: Part 1.

A panel discussion on technology and access considered whether the provision of access services by broadcasters is helped or hindered by technology. Not surprisingly, the answer is both.

Gareth Ford Williams, Acting Head of Usability and Accessibility for the BBC, discussed some of the technological challenges, explaining that there are different barriers for different types of access service – subtitling, audio description and sign language:

We move from a world where you are taking something that is linear broadcast and trying to turn it into catch up and on demand – we’re basically taking stuff from one set of formats to another. We luckily made the choice very early on with BBC iPlayer to support timed text, which has paid off. It was a little bit of a punt at the time because no-one else was doing it. But that is one kind of solution; we hope that the platforms catch up, and the more platforms support the delivery of that standard, the more we can roll that out. But I think when you look at the other access services, you have other issues, or not. Signed content for instance is already packaged up and broadcast as a programme asset. There’s no conversion, nothing to be done, which is why I made the point earlier that it was the first access service where we had all the available content on all of the devices and platforms iPlayer is on – because it’s just 5% of the content, we just treated it like another programme. Audio description is a whole other kettle of fish: suddenly we’re delivering an additional audio asset, which is even more challenging than trying to deliver subtitles and making that work. We spent a year of effort on iPlayer trying to make that work, and realised what we would be better off doing was turning it into another programme asset. Every single time we did it, it broke in many wonderful ways; now we have 600 devices to try to support, and that has become an even bigger and more impossible task. So the issue is how the solutions that were built just for online can now be completely re-engineered to be scalable across more platforms. It’s not straightforward and easy; we’re several years into it and are still learning as we go along. But that’s where we are.

Regarding the provision of subtitling, Andrew Lambourne from Screen Systems made an excellent point: access provision needs to be considered at the production level to assist broadcasters in providing the service, and it is often the lack of this approach, not technology, that hinders access:

    There’s a need for some joined up thinking and a holistic approach. Often these problems after easily solved by stepping back and looking at what we’re doing here. If you step back from this industry of providing access to media you see an industry that producing media. During the course of that production they are creating a awful lot of data which is kept or thrown away and kept to one side which is not necessarily passed on to the people providing the subtitles or the audio description. You might have to go and research how they spell, what the lyrics of songs are etc. So if we were to take a step back it’s not necessarily a technological issue in and of itself it’s a product of speed with which some parts of the industry move compared to others. This is a product of commercial motivation. That happen because producers wanted to get their content out as in many ways as possible. DVD worked it was implemented it was fine. What they didn’t think was where it goes next down the chain. I think a very useful change of attitude would be to start to think of accessibility as part of the responsibility of the producer, not the deliverer or the broadcaster of the content so you think more holistically, you are bearing in mind at the beginning you are factoring it in at the beginning you can design your technological chain to make it easy. So if you take a cinema production, you recut it for television, let’s know where the cuts were then they can automate the refactoring of the subtitles. It can be built in, what’s needed is the right motivation to do it. I think the need to further save on cost, is the right kind of motivation.

Later, Lambourne went on to say that the barriers are less technological and more commercial, and gave a passionate reminder that access services are not about punishing or attacking broadcast companies:

If the requirement is to have a commercial motivation, I think the justification for that has come out obviously today, when somebody said ‘it’s all about bums on seats’. Access services aren’t attacks on broadcasters or anything; they are a means of reaching more people, and you’re talking about 10% extra audience or more, depending on the access you are providing. That’s a huge justification. The other thing that is happening, of course, as we said earlier, is that the number of platforms to which you are targeting your media is increasing. One of the things that Screen have been doing recently is looking at how you can make it easy to take the subtitles you did for broadcast, perhaps as DVB, using STL or whatever files, and make those same subtitles available when you distribute the content on the web…It’s all about looking at the workflows, looking at the way that the technology is linked together, and then finding technical solutions. There are not very many barriers left except the barriers of needing budgets.

It was also lovely to hear someone speak of the advantages of providing subtitles, and give a reminder that the benefits are not limited to those who are deaf and hard of hearing – and that perhaps this message should be shouted louder. Lambourne continued:

Live subtitling, which is something I have worked on for my entire career, is now reaching a point of maturity: for half a dozen European languages you can train somebody without huge difficulty to sit and listen to more or less any kind of live broadcast, and to respeak it into a speech recognition system with good enough quality to broadcast it immediately as subtitles. But there are not speech recognition systems available for all European languages. The costs are not massive at a national scale, and the benefits are huge, because of course subtitling is not just access for people who can’t hear, and audio description for people who can’t see; it helps people learn the language, it preserves the quality of the language, and it gives a cultural benefit. So for people who are not first-language speakers in a given territory, the subtitles can help them learn to read, the same with children. Subtitles on cartoons are a great motivator for children to learn to read. I think the value-add benefits need to be brought out more sharply, perhaps even by the regulators; they don’t need sticks, they can actually talk about the benefits of providing access.

     
  • iheartsubtitles 2:35 pm on January 9, 2013 Permalink | Reply
Tags: Connected TV

    CSI User Experience Conference 2012 Part 5 – Broadcast subtitles and captions formats 

CSI User Experience Conference 2012: TV Accessibility

For background info on this conference read: Part 1.

Frans de Jong, a senior engineer for the European Broadcasting Union (EBU), gave a presentation on past and current work to standardise subtitle formats as broadcast technology evolves, whilst ensuring that legacy formats remain supported and compatible. STL, the subtitle format that grew out of teletext technology, has evolved into a format called EBU-TT Part I. De Jong explained:

We have published this year (2012) EBU-TT Part I. This is the follow-up specification for that old format (STL). It takes into account that nowadays we like to define things in XML and not in a binary format, because it’s human readable, and because there are many people who read XML…and of course nowadays [broadcast] is all file-based, networked facilities. If you look at the way that subtitles are produced – this is a very generic sketch – typically it comes from somewhere, an external company or internal department, and can be based on existing formats; then it goes into some central content management system. Afterwards it is archived, and of course it’s broadcast at a certain moment, then provided to several of the platforms on the right. This list of platforms is growing: analogue TV, digital TV, now there’s HDTV, iPlayer, we have IPTV streaming platforms – and all these platforms have their own specific way of doing subtitling. But on the production side we have for a long time been using STL, and also proprietary formats based on it or newly developed. There are several places where this format is useful, but we felt we had to update it to make sure we can fulfil the requirements of today – that is, HDTV and the different web platforms, mainly. So the new format we published was focused on that, very aware of web formats, but focused in our case on production. Our goal is to really optimise production, to help the broadcasters get their infrastructure up-to-date.
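To make the shift from binary STL to XML concrete, here is a small, well-formed document in the W3C TTML family that EBU-TT builds on (see the next paragraph), together with the kind of parsing a production tool might do. This is my own minimal sketch, not a conformant EBU-TT Part I file; the real specification’s elements, attributes and metadata differ.

```python
# A minimal timed-text document in the TTML family, plus parsing.
# Illustrative only: real EBU-TT Part I restricts TTML further and
# carries extra (legacy STL) metadata.
import xml.etree.ElementTree as ET

TTML_DOC = """<?xml version="1.0" encoding="UTF-8"?>
<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
  <body>
    <div>
      <p begin="00:00:01.000" end="00:00:03.500">Hello, and welcome.</p>
      <p begin="00:00:04.000" end="00:00:06.000">Subtitles are timed text.</p>
    </div>
  </body>
</tt>"""

# List the cues, as a downstream production tool might.
NS = {"tt": "http://www.w3.org/ns/ttml"}
for p in ET.fromstring(TTML_DOC).findall(".//tt:p", NS):
    print(p.get("begin"), "->", p.get("end"), ":", p.text)
```

The human-readability de Jong mentions is the point: every field above can be inspected and edited with ordinary text tools, unlike a binary STL file.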

The EBU-TT format is not a stand-alone invention: it is based on W3C Timed Text (TTML), but restricts the featureset, makes default values explicit, and adds (legacy STL) metadata. Similar work has been done in the US by SMPTE with the captioning format SMPTE-TT. This captioning standard received an honor from the Federal Communications Commission (FCC) – a Chairman’s Award for Advancement in Accessibility – last month:

    The FCC declared the SMPTE Timed Text standard a safe harbor interchange and delivery format in February. As a result, captioned video content distributed via the Internet that uses the standard will comply with the 21st Century Communications and Video Accessibility Act, a recently enacted law designed to ensure the accessibility, usability, and affordability of broadband, wireless, and Internet technologies for people with disabilities.

    SOURCE: TV Technology

The EBU are currently working on EBU-TT Part II, which will include a guide to ‘upgrading’ legacy STL subtitle files by converting them to EBU-TT files. This is due to be published early this year. Looking further ahead, de Jong said:

There is also a third part coming up, now in the requirements phase; that’s on live subtitling. Several countries, and the UK is certainly leading, are working with live subtitling. The infrastructure for this and the standards used are not very mature, which means there is room also to use this format to come to a live subtitle specification. We will provide a user guide with examples…One word maybe again about live subtitling that’s coming up. What we did here is we had a workshop in the summer in Geneva at the EBU. We discussed the requirements with many broadcasters: what would you need from this type of format? There are about 30 requirements. One of the things that came up, for example, is that it would be really good if there is a technical solution for routing: if I am subtitling for one channel, maybe 10 minutes later I could be subtitling for another channel, so the system needs to know what channel I am working for and that it’s not the wrong channel, and you need some data for that in the format that is used. Again, the issue of enriching the work you are working on with additional information: description and speaker ID.
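Coming back to the Part II ‘upgrade’ path mentioned above: much of an STL-to-EBU-TT conversion is mechanical, and the timecode handling is easy to illustrate. A hedged sketch follows, assuming 25 frames per second (the usual rate for EBU STL); the function name is mine.

```python
# Sketch: convert an STL-style frame timecode (HH:MM:SS:FF) into the
# HH:MM:SS.mmm media time used by TTML-family documents such as EBU-TT.
# Assumes 25 frames per second, the common rate in EBU STL files.
def stl_tc_to_media_time(tc: str, fps: int = 25) -> str:
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    millis = round(ff * 1000 / fps)
    return f"{hh:02d}:{mm:02d}:{ss:02d}.{millis:03d}"

print(stl_tc_to_media_time("10:02:30:12"))  # 10:02:30.480
```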

To conclude the presentation, de Jong discussed his views on future technology and the next steps for subtitling, including automated subtitles and quality control:

There is an idea that we could be much more abstract in how we author subtitles in the future. We understand that the thought alone can be quite disruptive for a lot of people because it’s far from current practice. Just to say, we’re thinking about the future after this revision. I think later we’ll see more advanced methods for subtitling; there is a lot of talk about automation and semi-automation. I think it was a week ago that YouTube released their automated subtitling with speech recognition, at least in the Dutch language. I am from Holland originally; I was pretty impressed by the amount of errors! …It’s a big paradox. You could argue that Google (owners of YouTube) has the biggest corpus of words and information probably of all of us…if they make so many (automated subtitle/caption) mistakes, how can we ever do better in our world? For the minority languages there is no good automated speech recognition software. If you ask TVP, for example, the Polish broadcaster, how they do live subtitling, they say we would love to use speech recognition but we can’t find good enough software. In the UK it’s a lot better. It’s a real issue when you are talking about very well orchestrated conditions, and even there it doesn’t exist. I am really curious how this will develop.
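De Jong’s “amount of errors” is usually quantified as word error rate (WER), the standard quality measure for speech recognition output and, by extension, for automated subtitles. Here is a minimal sketch of the standard calculation (word-level edit distance); the example sentences are invented.

```python
# Word error rate: (substitutions + deletions + insertions) divided by
# the number of reference words, via word-level Levenshtein distance.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dist[i][j] = edit distance between ref[:i] and hyp[:j]
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,          # deletion
                             dist[i][j - 1] + 1,          # insertion
                             dist[i - 1][j - 1] + cost)   # substitution
    return dist[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on the mat"))  # 0.0
print(wer("the cat sat on the mat", "a cat sat on a hat"))      # 0.5
```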

     
  • iheartsubtitles 5:52 pm on December 20, 2012 Permalink | Reply
Tags: Connected TV, Microsoft, Siri

    CSI User Experience Conference 2012 Part 3 – Live subtitles & voice recognition technology 

CSI User Experience Conference 2012: TV Accessibility

For background info on this conference read: Part 1.

    It’s clear that much of the frustration from many UK TV viewers surrounds live subtitles and so the technology of voice recognition software and the process of respeaking used to achieve this was one of the topics of debate in a panel on the User Experience following Ofcom’s presentation.

    Deluxe Media’s Claude Le Guyader made some interesting points:

In the case of live subtitling…it’s a lot of pressure on the person doing the work, the availability of the resource and the cost; it all means that the advent of voice recognition was embraced by all the service providers as a way to palliate that lack of resource (in this case, stenographers). As we know, voice recognition started out imperfect, and it’s still not perfect – I don’t know if you have seen it on your iPhone, it’s quite funny, and with a French accent it’s even worse! (This is a reference to Siri, which as far as I am aware is not used to create live subtitles, but it is part of the same technology – voice recognition.) With voice recognition you need to train it: each person (in this case, a subtitler or respeaker) needs to train it. Now it’s moved on, and there are people using voice recognition very successfully as well, so it’s evolving. But patience, you know, does run out when you are round the table year after year discussing the same issue. It’s not a lack of will; I think it’s just a difficult thing to achieve, because it involves so many different people.

Voice technology does seem to be constantly evolving, and the fact that it is being implemented in more and more products (the iPhone and Siri is a great example) is, I think, a positive thing. It increases consumer awareness of what this technology can do, and consequently I think people will come to expect this technology to work. There are numerous ways voice technology is being used. To move away from live subtitling and the points made at the conference for a moment, but staying within a broadcast TV context, another use is illustrated by Google TV. In the video below you can see voice recognition technology allowing a viewer to navigate the TV:

Voice recognition technology is also used to create the automatically generated captions on YouTube videos. At the moment this illustrates the technology’s limitations: as most readers here are no doubt aware, the captions created this way are completely inaccurate most of the time and therefore useless. I think we can all agree that respeaking to produce live subtitles creates errors, but it currently produces a much better result than a machine alone. Google recently added automatic captioning support for six new languages. Investment in this technology, even if it is currently imperfect, shouldn’t be discouraged, because surely this is the only way for the technology to improve:

A new research paper out of Google describes in some detail the data science behind the company’s speech recognition applications, such as voice search and adding captions or tags to YouTube videos. And although the math might be beyond most people’s grasp, the concepts are not. The paper underscores why everyone is so excited about the prospect of “big data” and also how important it is to choose the right data set for the right job…No surprise, then, it turns out that more data is also better for training speech-recognition systems…The real key, however – as any data scientist will tell you – is knowing what type of data is best to train your models, whatever they are. For the voice search tests, the Google researchers used 230 billion words that came from “a random sample of anonymized queries from google.com that did not trigger spelling correction.” However, because people speak and write prose differently than they type searches, the YouTube models were fed data from transcriptions of news broadcasts and large web crawls…This research isn’t necessarily groundbreaking, but helps drive home the reasons that topics such as big data and data science get so much attention these days. As consumers demand ever smarter applications and more frictionless user experiences, every last piece of data and every decision about how to analyze it matters.

    SOURCE: GigaOM
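The quote’s core claim – that matching training data to the target domain matters, not just volume – can be shown with a toy model. Everything below (the two tiny “corpora” and the unigram scoring) is invented purely for illustration; real systems use enormously larger n-gram models.

```python
# Toy illustration (invented data): a unigram model trained on typed
# search queries vs. one trained on broadcast-style text, scoring the
# same spoken-style sentence. The domain-matched corpus scores higher.
import math
from collections import Counter

search_queries = ("cheap flights london weather tomorrow "
                  "pizza near me football scores").split()
broadcast_text = ("good evening and welcome to the news "
                  "here are tonight's main stories").split()

def unigram_logprob(sentence: str, corpus, alpha: float = 1.0) -> float:
    """Add-one-smoothed unigram log-probability of a sentence."""
    counts = Counter(corpus)
    vocab = len(counts) + 1  # +1 bucket for unseen words
    total = len(corpus)
    return sum(math.log((counts[w] + alpha) / (total + alpha * vocab))
               for w in sentence.split())

spoken = "good evening here are the main stories"
print(unigram_logprob(spoken, search_queries))  # lower: domain mismatch
print(unigram_logprob(spoken, broadcast_text))  # higher: domain match
```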

Following on from this, a natural question to ask is: will Apple integrate its voice technology Siri into Apple TV? It has been rumoured but not yet confirmed. (Interestingly, it is already confirmed that Siri is being added to Chevrolet cars next year.) If there is competition between companies to innovate using this technology, all the better. I found an interesting blog post pondering the future of Siri for Apple here, although this blogger thinks that Google Voice is better. Voice technology is also being used in the world of translation. Last month Microsoft gave an impressive demo of voice recognition technology translating a speaker’s English speech into Chinese text, as well as speaking it back to him in Chinese in his own voice:

Skip to 4:22 to be able to read captions from this presentation.

All of these examples, I hope, will contribute in some small way to an improvement in live subtitling. Mark Nelson would disagree with me: he wrote an article explaining how he believes a peak has been reached, and that greater reliance on voice technology could lead to the deaf and hard of hearing being left behind.

    What do you think? Do you think live subtitling will improve as a result of voice recognition technology or do you have another view? Please leave your comments below.

     