Tagged: Respeaking

  • iheartsubtitles 12:19 pm on June 27, 2014 Permalink | Reply
    Tags: Respeaking

    CSI TV Accessibility Conference 2014 – Live subtitling, VOD key themes 

    [Photo: CSI TV Accessibility Conference 2014 brochure]

    Earlier this month the CSI TV Accessibility Conference 2014 took place in London. I had hoped to give a more detailed write-up with the help of the transcript of the live captioning that covered the event, but I'm afraid my own notes are all I have, so I will summarise the points I think will be of most interest to readers here. This does not cover all of the presentations, but it does cover the majority.

    i2 Media Research gave some statistics on UK TV viewing and the opportunities that exist in TV accessibility. TV viewing is higher among older and disabled viewers, and with an ageing UK population the audience requiring accessibility features for TV is only going to increase.

    Andrew Lambourne, Business Director for Screen Subtitling Systems, had an interesting title for his presentation: “What if subtitles were part of the programme?” Drawing on his years in the subtitling industry, he questioned why we are still asking the same questions year after year: how should subtitling quality be measured, and is there any incentive to provide good subtitling coverage for children's programming? He pointed out that, in his opinion, funding issues are still not addressed. Subtitling is still not part of the production process and is not often budgeted for. Broadcasters are required to pay subtitling companies, and subtitling costs are under continued pressure (presumably to provide more for less money). It is a sad fact that subtitling is not ascribed the value it deserves. With regards to live subtitling, there is a need to educate the public as to why errors occur. This was a repeated theme in a later presentation from Deluxe Media, and it is one of the reasons I wrote the #subtitlefail! TV page on this blog.

    Peter Bourton, Head of TV Content Policy at Ofcom, gave an update on and summary of the subtitling quality report published at the end of April. This is a continuing process and I'm looking forward to comparing the next report to this first one to see what has changed. The presentation slides are available online.

    Senior BBC R&D Engineer Mike Armstrong gave a presentation on his research into measuring live subtitling quality. (This is different from the quantitative approach used by Pablo Romero and adopted by Ofcom to publish its reports.) What I found most interesting about this research is that the perception of quality by a subtitle user differs markedly depending on whether the audio is switched on whilst watching the subtitled content. Ultimately nearly everyone watches TV with the audio switched on, and this research found that delay has a bigger impact on perception of quality than errors do. The BBC R&D white paper is available online.
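    For readers curious about the quantitative approach mentioned above, the widely cited NER model scores live subtitles by deducting weighted edition and recognition errors from the word count. The sketch below is only my own illustration of the general formula, not Ofcom's or anyone else's actual tooling, and the figures in the example are invented.

```python
def ner_score(words: int, edition_errors: float, recognition_errors: float) -> float:
    """Illustrative NER-style accuracy calculation for live subtitles.

    words              -- N: number of words in the subtitles
    edition_errors     -- E: severity-weighted errors introduced when editing/condensing speech
    recognition_errors -- R: severity-weighted misrecognitions by the speech engine or respeaker

    In practice each error is weighted by severity (commonly minor 0.25,
    standard 0.5, serious 1.0) before being summed.
    """
    if words <= 0:
        raise ValueError("word count must be positive")
    return (words - edition_errors - recognition_errors) / words * 100

# Hypothetical example: 1,000 subtitled words with 6.5 weighted edition errors
# and 12 weighted recognition errors -> (1000 - 18.5) / 1000 * 100 = 98.15
print(round(ner_score(1000, 6.5, 12), 2))
```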

    Live subtitling continued to be a talking point at the conference with a panel discussion titled “Improving subtitling”. On the panel were Gareth Ford-Williams (BBC Future Media), Vanessa Furey (Action On Hearing Loss), Andrew Lambourne (Screen Subtitling Systems), and David Padmore (Red Bee Media). All panelists were encouraged that all parties – regulators, broadcasters, and technology researchers – are working together to continually address subtitling issues. Developments in the speech recognition technology used to produce live subtitles have moved towards language modelling to understand context better. The next generation of speech recognition tools such as Dragon works phrase by phrase rather than word by word, the hope being that this should reduce error rates. There was also positivity that there is now greater interest in speech technology, which should lead to faster advancements over the coming years than we have seen in the past.

    With regards to accessibility and Video on Demand (VOD) services, it was the turn of the UK's regulatory body, the Authority for Television On Demand (ATVOD), to present. For those that are unaware, ATVOD regulates all VOD services operating in the UK except for BBC iPlayer, which is regulated by Ofcom. In addition, because iTunes and Netflix operate from Luxembourg, their services, although available in the UK, are outside ATVOD's jurisdiction. There are no UK regulatory rules that say VOD providers must provide access services, but ATVOD has an access services working party that encourages providers to do so and drafts best practice guidelines. I cannot find anywhere on their website the results of a December 2013 survey, mentioned in the presentation, looking at how much VOD content is subtitled, signed, or audio described. If anyone else finds it please comment below. In the meantime, some of the statistics from this report can be found in Pete Johnson's presentation slides online. What has changed since 2012 is that the survey is now compulsory for providers to complete, to ensure the statistics accurately reflect the provision.

    Another repeated theme, first mentioned in this presentation, is the complexity of the VOD distribution chain. It is very different for different companies, and the increasing number of devices on which we can choose to access content adds to the complexity. One of the key differences between VOD providers is end-to-end control. Few companies control the entire process, from purchasing and/or creating content right through to the consumer watching it on a device. So who is responsible for changing or adapting a workflow to support access features, and who is going to pay for it?

    I should also mention that the success of a recent campaign by hard of hearing subtitling advocates, which finally got Amazon to commit to a response and say that they will start subtitling content, was mentioned positively during this presentation. You may have read my previous blog post discussing my disappointment at the lack of response. Since then, with the help of comedian Mark Thomas, who set up a stunt that involved putting posters up on the windows of Amazon UK's headquarters to drive the message home, Amazon have committed to adding subtitles to their VOD service later this year. See the video below for the stunt. It is not subtitled, but there is no dialogue, just a music track.

    You can read more about this successful advocacy work on Limping Chicken’s blog.

    Susie Buckridge, Director of Product for YouView, gave a presentation on the accessibility features of the product, which are pretty impressive. Much of the focus was on access features for the visually impaired. She reminded the audience that creating an accessible platform actually creates a better user experience for everyone. You can view the presentation slides online.

    Deluxe Media Europe gave a presentation that I think would be really useful for audiences outside the industry. Stuart Campbell, Senior Live Operations Manager, and Margaret Lazenby, Head of Media Access Services, presented clear examples and explanations of the workflow involved in creating live subtitles via respeaking for live television. Given the lack of understanding of, or coverage of, this process in mainstream media, this kind of information is greatly needed – a point also highlighted by the presenters. The presentation is not currently available online, but you can find information about live subtitling processes on this blog's #SubtitleFail TV page.

    A later panel discussed VOD accessibility. The panelists acknowledged that consumer expectations are increasing, as are the volume of content and the scale of complexity. It is hoped that the agreed common subtitle file format, EBU-TT, will resolve a lot of these issues. This format was still being worked on when it was discussed at the 2012 conference, which you can read about on this blog. Earlier this year the UK DPP also published updated common standard subtitling guidelines.

    Were any of my readers at the conference? What did you think? And please do comment if you think I have missed anything important to highlight.

     
    • peterprovins 4:48 pm on July 21, 2014 Permalink | Reply

      Interesting blog. No excuse for TV, film, websites or even theatre not to be captioned… we do it all. Currently captioning university lectures and looking at doctors' surgeries, which are currently limited to BSL only. Keep up the good work.


  • iheartsubtitles 3:11 pm on March 22, 2013 Permalink | Reply
    Tags: Respeaking

    New subtitling technology for TV broadcast and the cinema 

    Last week was a bit of a subtitle-technology-themed week for me, for two reasons. First, I had the opportunity to visit the London offices of Red Bee Media, who showed me their current workflows for providing access to broadcast TV for deaf and hard of hearing viewers, and to learn about the new bespoke software they have been working on and are looking to roll out soon. It is called Subito (Italian for ‘immediately’). The hope is that this will result in a significant improvement in the output of live subtitles, which are currently nearly always produced by the process of respeaking (see #Subtitlefail TV). Most of the complaints people in the UK have about TV subtitling concern live subtitles. The process of live subtitling is not ideal because its accuracy can be inconsistent. It is hoped that this new software will result in much better consistency and accuracy of live subtitles.

    Subito gives the subtitler far more options to prepare text from a number of different sources to use in addition to respeaking the audio output. These sources might be a script or an autocue, or text the subtitler has typed or respoken themselves when the audio/video content is repeated later on (common on 24-hour news channels). This text can then be reused rather than the subtitler having to respeak the same content all over again, and it can be edited if and when required. The existing software does allow some prepared text to be included alongside respoken content, but with very limited options: in particular, the subtitler has little control over the speed at which that prepared text is seen by the viewer at home – it might appear too fast to be read, for example as blocks rather than the scrolling text you see with most live subtitles. The new software gives the subtitler much more control and flexibility to incorporate prepared subtitles. There are also improvements to the speech technology used to convert a subtitler's speech into text with accuracy and speed. (Speech technology was never designed with live subtitling in mind; it is being used in ways few would have thought of when it was first introduced into products in the late 1980s and 1990s.) So why should this new software have a significant impact on live subtitling output? Well, it is still being trialled, but the hope is that the effect will be twofold:

    (1) The skill of respeaking, which is actually very difficult, should become a little easier thanks to improvements in speech recognition technology and further bespoke changes made to the back-end to complement its use for creating broadcast subtitles.
    (2) The greater number of options and the flexibility a subtitler will have to get subtitles out to the viewer with speed and accuracy during live programmes should improve the output.

    The software has been designed with the end user – the subtitler – in mind. This is key for me: who better to know what tools they need to deliver a better output? Thought has also been put into automating some of the options available to subtitlers, such as automatically cueing text to the screen once it has been associated with the video content (for repeated segments on 24-hour live channels, for example). The benefit is to free up the subtitler to work on something else that they can see coming up on the live channel they are subtitling. In theory it should raise job satisfaction and make the work slightly less monotonous. As a viewer I look forward to its roll-out and its impact on the output of live subtitling on some of the UK TV channels.
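    To make that automation idea a little more concrete, here is a minimal sketch of how prepared subtitle blocks might be associated with a repeated segment and cued out at a pace the subtitler has chosen. It is purely hypothetical – the data structures and function names are mine, not Subito's – but it reflects the workflow described above.

```python
import time
from dataclasses import dataclass


@dataclass
class PreparedBlock:
    """A subtitle block prepared ahead of transmission.

    Hypothetical model for illustration only, not Subito's actual data structures.
    """
    text: str
    segment_id: str         # which repeated programme segment the block belongs to
    display_seconds: float  # how long the subtitler wants it held on screen


def cue_prepared_segment(blocks, segment_id, send):
    """Automatically cue out every prepared block associated with a segment.

    `send` stands in for whatever transmits a subtitle into the broadcast chain;
    pacing each block frees the subtitler to prepare the next item.
    """
    for block in blocks:
        if block.segment_id != segment_id:
            continue
        send(block.text)
        time.sleep(block.display_seconds)


# Example: re-cue the prepared headlines when a news bulletin repeats.
blocks = [
    PreparedBlock("Good evening, here are tonight's headlines.", "headlines", 3.0),
    PreparedBlock("Flood warnings remain in place across the south.", "headlines", 3.5),
]
cue_prepared_segment(blocks, "headlines", send=print)
```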

    As a side note, last month I met the manager of STAGETEXT, who kindly showed me the software their captioners use and the process they go through to provide subtitled theatre. They too have gone down the bespoke software route, to ensure that captioners have as much control as possible over the output – both the content and the speed. An awful lot of prep work is done to aid this. In the same way that TV broadcast subtitlers have to react quickly to changes in the audio of a live broadcast, theatre captioners face the same challenge if an actor goes off script or the timing slips, and the software needs to allow for quick reactions. These are issues faced by both organisations, and it is interesting that bespoke software is the solution both have chosen.

    I was also lucky enough to take part in a cinema subtitling technology demo in London at the weekend, organised by the CEA. They asked us not to publicise too much information about what we used, saying that the CEA would publish information about the trial's results soon. I want to respect that request, so the details of the devices we used are deliberately vague in this blog post. I was part of a screening which tested two types of personal device that allow an individual to see subtitles without any being displayed on the cinema screen; I was allocated one of them. I took part in the focus group afterwards, during which the feedback was very mixed for both pieces of technology. For those that don't know, the CEA has already done a lot of work in getting open-subtitle screenings into cinemas across the UK, which I am grateful for – we are one of the few countries to do this. I am of the opinion that the best technological solution is open subtitles. The UK cinema industry currently does not use any other form of technology to provide subtitles (to my knowledge). Several different views were expressed by different people at the focus group, and I hope the CEA publish a summary of the feedback soon so it can be discussed in a more open way. As a reminder, you can find listings for subtitled cinema (as well as audio described screenings for those with visual impairments) in the UK at Your Local Cinema. If a subtitled screening is not taking place near you and you own a smartphone, then why not try these options?

     
    • Richard Turner 4:07 pm on March 22, 2013 Permalink | Reply

      I agree that open subtitles are the best option. However, I feel that in the future the tech we tested will open up accessibility. I will be interested to see the feedback. I would love to go to the cinema tonight but unfortunately there are no subtitled films on a Friday night. This tech will make my wish possible. Great blog!


  • iheartsubtitles 5:14 pm on February 11, 2013 Permalink | Reply
    Tags: Respeaking

    A poem made from live TV subtitling errors #subtitlefail! 

    Here is a blog post from Karen Corbel which contains a #subtitlefail! TV poem in tribute to the errors made in live TV subtitling of weather reports. No one likes seeing errors, but you have to admit they can be a source of hilarity. Enjoy!

    TV subtitles « karencorbel.

     
  • iheartsubtitles 5:18 pm on January 13, 2013 Permalink | Reply
    Tags: Respeaking

    CSI User Experience Conference 2012 Part 6 – Does technology help or hinder progress? 

    [Image: CSI User Experience Conference 2012: TV Accessibility]

    For background info on this conference read: Part 1.

    A panel discussion on technology and access considered whether the provision of access services by broadcasters is helped or hindered by technology. Not surprisingly, the answer is both.

    Gareth Ford Williams, Acting Head of Usability and Accessibility for the BBC, discussed some of the technological challenges, explaining that there are different barriers for different types of access service – subtitling, audio description and sign language:

    We've moved from a world of linear broadcast to trying to turn that into catch-up and on demand – we're basically taking content from one set of formats to another. We luckily made the choice very early on with BBC iPlayer to support timed text, which has paid off. It was a little bit of a punt at the time because no one else was doing it. But that is one kind of solution; we hope that the platforms catch up, and the more platforms support the delivery of that standard, the more we can roll it out. But when you look at the other access services, you have other issues, or none at all. Signed content, for instance, is already packaged up and broadcast as a programme asset. There's no conversion, nothing to be done, which is why I made the point earlier that it was the first access service where we had all the available content on all of the devices and platforms iPlayer is on – because it's just 5% of the content, we just treated it like another programme. Audio description is a whole other kettle of fish: suddenly we're delivering an additional audio asset, which is even more challenging than trying to deliver subtitles and making that work. We spent a year of effort on iPlayer trying to make that work, and realised we would be better off turning it into another programme asset. Every single time we tried, it broke in many wonderful ways, and now we have 600 devices to try to support, which has become an even bigger and more impossible task. So the issue is how the solutions are built online, and how we now completely re-engineer that to make it scalable across more platforms. It's not straightforward or easy; we're several years into it and are still learning as we go along. But that's where we are.

    Regarding the provision of subtitling, Andrew Lambourne from Screen Systems made an excellent point: access provision needs to be considered at the production level to assist broadcasters in providing the service, and it is often the lack of this approach, not technology, that hinders access:

    There's a need for some joined-up thinking and a holistic approach. Often these problems are easily solved by stepping back and looking at what we're doing here. If you step back from this industry of providing access to media, you see an industry that is producing media. During the course of that production they create an awful lot of data which is thrown away or kept to one side and not necessarily passed on to the people providing the subtitles or the audio description. You might have to go and research how names are spelled, what the lyrics of songs are, and so on. So if we take a step back, it's not necessarily a technological issue in and of itself; it's a product of the speed with which some parts of the industry move compared to others, and of commercial motivation. That happened because producers wanted to get their content out in as many ways as possible. DVD worked, it was implemented, it was fine. What they didn't think about was where it goes next down the chain. I think a very useful change of attitude would be to start to think of accessibility as part of the responsibility of the producer, not the deliverer or the broadcaster of the content. Then you think more holistically: if you bear it in mind at the beginning and factor it in at the beginning, you can design your technological chain to make it easy. So if you take a cinema production and recut it for television, let us know where the cuts were and the refactoring of the subtitles can be automated. It can be built in; what's needed is the right motivation to do it. I think the need to further save on cost is the right kind of motivation.

    Later, Lambourne went on to say that the barriers are less technological and more commercial, and gave a passionate reminder that access services are not about punishing or attacking broadcast companies:

    If the requirement is to have a commercial motivation, I think the justification for that has come out clearly today, when somebody said ‘it's all about bums on seats’. Access services aren't attacks on broadcasters or anything; they are a means of reaching more people, and you're talking about 10% extra audience or more, depending on the access you are providing. That's a huge justification. The other thing that is happening, of course, as we said earlier, is that the number of platforms to which you are targeting your media is increasing. One of the things that Screen have been looking at recently is how you can make it easy to take the subtitles you did for broadcast – perhaps as DVB, using STL or whatever files – and make those same subtitles available when you distribute the content on the web… It's all about looking at the workflows, looking at the way the technology is linked together, and then finding technical solutions. There are not very many barriers left except the barrier of needing budgets.

    It was also lovely to hear someone speak of the advantages of providing subtitles and give a reminder that the benefits are not limited to those who are deaf and hard of hearing, and that perhaps this message should be shouted louder. Lambourne continued:

    Live subtitling, which is something I have worked on for my entire career, is now reaching a point of maturity. For half a dozen European languages you can train somebody, without huge difficulty, to sit and listen to more or less any kind of live broadcast and respeak it into a speech recognition system with good enough quality to broadcast it immediately as subtitles. But there are not speech recognition systems available for all European languages. The costs are not massive on a national scale, and the benefits are huge, because of course subtitling is not just access for people who can't hear, and audio description for people who can't see; it helps people learn the language, it preserves the quality of the language, it gives a cultural benefit. So for people who are not first-language speakers in a given territory, subtitles can help them learn to read, and the same goes for children. Subtitles on cartoons are a great motivator for children to learn to read. I think the value-added benefits need to be brought out more sharply, perhaps even by the regulators; they don't need sticks, they can actually talk about the benefits of providing access.

     
  • iheartsubtitles 2:35 pm on January 9, 2013 Permalink | Reply
    Tags: Respeaking

    CSI User Experience Conference 2012 Part 5 – Broadcast subtitles and captions formats 

    [Image: CSI User Experience Conference 2012: TV Accessibility]

    For background info on this conference read: Part 1.

    Frans de Jong, a senior engineer at the European Broadcasting Union (EBU), gave a presentation on past and current work to standardise subtitle formats as broadcast technology evolves, whilst ensuring that legacy formats remain supported and compatible. STL, the subtitle format that grew out of teletext technology, has evolved into a format called EBU-TT Part 1. De Jong explained:

    This year (2012) we published EBU-TT Part 1. This is the follow-up specification for that old format (STL). It takes into account that nowadays we like to define things in XML and not in a binary format, because it's human readable and because many people can read XML… and of course nowadays [broadcast] is all file-based, networked facilities. If you look at the way subtitles are produced – this is a very generic sketch – typically they come from somewhere, an external company or an internal department, possibly based on existing formats, and then go into some central content management system. Afterwards they are archived and of course broadcast at a certain moment, then provided to several platforms, and this list of platforms is growing. Analogue TV, digital TV, now HDTV, iPlayer, IPTV streaming platforms – all these platforms have their own specific way of doing subtitling. But on the production side we have for a long time been using STL, and also proprietary formats based on it or newly developed. There are several places where this format is useful, but we felt we had to update it to make sure we can fulfil the requirements of today – mainly HDTV and the different web platforms. So the new format we published focuses on that: very aware of web formats, but focused in our case on production. Our goal is to really optimise production, to help broadcasters get their infrastructure up to date.

    The EBU-TT format is not a stand-alone invention: it is based on W3C Timed Text (TTML) but restricts the feature set, makes default values explicit, and adds (legacy STL) metadata. Similar work has been done in the US by SMPTE with the captioning format SMPTE-TT, which last month received an honor from the Federal Communications Commission (FCC) – a Chairman's Award for Advancement in Accessibility:

    The FCC declared the SMPTE Timed Text standard a safe harbor interchange and delivery format in February. As a result, captioned video content distributed via the Internet that uses the standard will comply with the 21st Century Communications and Video Accessibility Act, a recently enacted law designed to ensure the accessibility, usability, and affordability of broadband, wireless, and Internet technologies for people with disabilities.

    SOURCE: TV Technology
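    Returning to EBU-TT: to give a feel for the TTML-style structure it builds on, here is a minimal sketch that assembles a tiny subtitle document using only Python's standard library. The element and attribute names follow W3C Timed Text rather than the exact EBU-TT vocabulary (which adds its own metadata namespace and explicit defaults), so treat this as an illustration, not a schema-valid EBU-TT file.

```python
# Minimal sketch of a TTML-style subtitle document, the family of formats
# EBU-TT Part 1 belongs to. Indicative only; not schema-valid EBU-TT.
import xml.etree.ElementTree as ET

TT_NS = "http://www.w3.org/ns/ttml"
XML_NS = "http://www.w3.org/XML/1998/namespace"
ET.register_namespace("tt", TT_NS)


def make_subtitle_doc(cues):
    """cues: list of (begin, end, text) tuples with media timecodes."""
    tt = ET.Element(f"{{{TT_NS}}}tt", {f"{{{XML_NS}}}lang": "en"})
    body = ET.SubElement(tt, f"{{{TT_NS}}}body")
    div = ET.SubElement(body, f"{{{TT_NS}}}div")
    for begin, end, text in cues:
        p = ET.SubElement(div, f"{{{TT_NS}}}p", {"begin": begin, "end": end})
        p.text = text
    return ET.tostring(tt, encoding="unicode")


print(make_subtitle_doc([
    ("00:00:01.000", "00:00:03.500", "Hello and welcome to the programme."),
    ("00:00:04.000", "00:00:06.000", "Tonight: subtitle formats explained."),
]))
```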

    The EBU are currently working on EBU-TT Part 2, which will include a guide to ‘upgrading’ legacy STL subtitle files and converting them to EBU-TT. This is due to be published early this year. Looking further ahead, de Jong said:

    There is also a third part coming up, now in the requirements phase, on live subtitling. Several countries – and the UK is certainly leading – are working with live subtitling. The infrastructure and the standards used for this are not very mature, which means there is room to use this format to come to a live subtitling specification. We will provide a user guide with examples… One word maybe again about live subtitling, which is coming up. What we did is hold a workshop in the summer at the EBU in Geneva. We discussed the requirements with many broadcasters – what would you need from this type of format? There are about 30 requirements. One of the things that came up, for example, is that it would be really good if there were a technical solution for routing: if I am subtitling for one channel, maybe 10 minutes later I could be subtitling for another channel, so the system needs to know what channel I am working for and that it's not the wrong channel. And you need some data in the format for that. Again, there is the issue of enriching the content you are working on with additional information, descriptions and speaker IDs.

    To conclude the presentation, de Jong discussed his views on future technology and the next steps for subtitling, including automated subtitles and quality control:

    There is an idea that we could be much more abstract in how we author subtitles in the future. We understand that the thought alone can be quite disruptive for a lot of people because it's far from current practice. Just to say we're thinking about the future after this revision. I think later we'll see more advanced methods for subtitling; there is a lot of talk about automation and semi-automation. I think it was a week ago that YouTube released their automated subtitling with speech recognition, at least in the Dutch language. I am from Holland originally, and I was pretty impressed by the number of errors! … It's a big paradox. You could argue that Google (owners of YouTube) has probably the biggest corpus of words and information of all of us… if they make so many mistakes (in automated subtitles/captions), how can we ever do better in our world? For minority languages there is no good automated speech recognition software. If you ask TVP, the Polish broadcaster, for example, how they do live subtitling, they say they would love to use speech recognition but they can't find good enough software. In the UK it's a lot better. It's a real issue when you are talking about very well-orchestrated conditions, and even there it doesn't exist. I am really curious how this will develop.

     
  • iheartsubtitles 5:52 pm on December 20, 2012 Permalink | Reply
    Tags: Microsoft, Respeaking, Siri

    CSI User Experience Conference 2012 Part 3 – Live subtitles & voice recognition technology 

    [Image: CSI User Experience Conference 2012: TV Accessibility]

    For background info on this conference read: Part 1.

    It's clear that much of the frustration among UK TV viewers surrounds live subtitles, so voice recognition technology and the respeaking process used to produce them were among the topics of debate in a panel on the user experience following Ofcom's presentation.

    Deluxe Media’s Claude Le Guyader made some interesting points:

    In the case of live subtitling… it's a lot of pressure on the person doing the work; the availability of the resource and the cost all meant that the advent of voice recognition was embraced by all the service providers as a way to palliate that lack of resource (in this case, stenographers). As we know, when voice recognition started it wasn't perfect, and it's still not perfect – I don't know if you have seen it on your iPhone, it's quite funny, and with a French accent it's even worse! (This is in reference to Siri, which as far as I am aware is not used to create live subtitles, but it relies on the same underlying technology – voice recognition.) With voice recognition you need to train the software: each person (in this case, a subtitler or respeaker) needs to train it. Now it's moved on and there are people using voice recognition very successfully as well, so it's evolving, but patience, you know, does run out when you are round the table year after year discussing the same issue. It's not a lack of will; I think it's just a difficult thing to achieve, because it involves so many different people.

    Voice technology does seem to be constantly evolving, and the fact that it is being implemented in more and more products (the iPhone and Siri are a great example) is, I think, a positive thing. It increases consumer awareness of what this technology can do, and consequently I think people will expect it to work. There are numerous ways voice technology is being used. To move away from live subtitling and the conference for a moment, but staying within a broadcast TV context, another use is illustrated by Google TV. In the video below you can see voice recognition technology allowing a viewer to navigate the TV:

    Voice recognition technology is also used to create the automatically generated captions on YouTube videos. At the moment this illustrates the technology's limitations: as most readers here are no doubt aware, captions created this way are inaccurate most of the time and therefore useless. I think we can all agree that respeaking to produce live subtitles creates errors, but it currently produces a much better result than a machine alone. Google recently added automatic captioning support for six new languages. Investment in this technology, even if it is currently imperfect, shouldn't be discouraged, because surely this is the only way for it to improve:

    A new research paper out of Google describes in some detail the data science behind the company’s speech recognition applications, such as voice search and adding captions or tags to YouTube videos. And although the math might be beyond most people’s grasp, the concepts are not. The paper underscores why everyone is so excited about the prospect of “big data” and also how important it is to choose the right data set for the right job….No surprise, then, it turns out that more data is also better for training speech-recognition systems…The real key, however — as any data scientist will tell you — is knowing what type of data is best to train your models, whatever they are. For the voice search tests, the Google researchers used 230 billion words that came from “a random sample of anonymized queries from google.com that did not trigger spelling correction.” However, because people speak and write prose differently than they type searches, the YouTube models were fed data from transcriptions of news broadcasts and large web crawls…This research isn’t necessarily groundbreaking, but helps drive home the reasons that topics such as big data and data science get so much attention these days. As consumers demand ever smarter applications and more frictionless user experiences, every last piece of data and every decision about how to analyze it matters.

    SOURCE: GigaOM
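    As a rough illustration of how accuracy is often quantified in speech recognition work (distinct from the NER approach used for live subtitling quality earlier on this blog), here is a word error rate calculation based on word-level edit distance. It is a generic textbook measure sketched by me, not the specific metric YouTube, Google, or any broadcaster uses.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance (substitutions + insertions + deletions)
    divided by the number of reference words. A generic illustration of how
    caption/transcript accuracy is commonly measured."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution or match
            )
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)


# Example: one substituted word out of six reference words, roughly a 16.7% word error rate.
print(word_error_rate("the cat sat on the mat", "the cat sat on a mat"))
```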

    Following on from this example, the natural question to ask is: will Apple integrate its voice technology, Siri, into Apple TV? It has been rumoured but not yet confirmed. (Interestingly, it is already confirmed that Siri is being added to Chevrolet cars next year.) If there is competition between companies for innovation using this technology, all the better. I found an interesting blog post pondering the future of Siri for Apple here, although this blogger thinks that Google Voice is better. Voice technology is also being used in the world of translation. Last month Microsoft gave an impressive demo of voice recognition technology translating a speaker's English speech into Chinese text, as well as speaking it back to him in Chinese in his own voice:

    Skip to 4:22 to be able to read the captions from this presentation.

    All of these examples I hope will contribute in some small way to an improvement in live subtitling. Mark Nelson would disagree with me. He wrote an article explaining how he believes that a peak has been reached and that greater reliance on voice technology could lead to the deaf and hard of hearing being left behind.

    What do you think? Do you think live subtitling will improve as a result of voice recognition technology or do you have another view? Please leave your comments below.

     