Tagged: Translation

  • iheartsubtitles 4:54 pm on December 22, 2014 Permalink | Reply
    Tags: production, Translation

    Accessible film making or what if subtitles were part of the programme? 

    I was prompted to write this blog post by a recent tweet from director Samuel Dore, who bemoaned that film directors and distributors seem to ‘moan’ about the cost of subtitling content:

    And I’ve seen tweets from others with comments of a similar nature. This is a tricky topic because it would be wrong to label every individual or company out there as having this belief or attitude. However, it’s another repeated theme I’ve seen discussed at access and language conferences this year. That’s a good thing – it means it’s recognised as a potential issue for some companies or individuals, and others in the same industry are challenging this assumption and trying to change it. At the 2014 CSI Accessibility Conference, Screen Subtitling’s John Birch asked the question “What if subtitles were part of the programme?” He pointed out that, in his opinion, funding issues are still not addressed. Subtitling is still not a part of the production process and is not often budgeted for. Broadcasters are required to pay subtitling companies, and subtitling companies are under continued pressure (presumably to provide more, for less money). It is a sad fact that subtitling is not ascribed the value it deserves.

    I would also argue that there is some lost opportunity in the current Ofcom Code on Television Access Services, which gives new TV channels a one-year grace period: regardless of audience reach, if a TV channel is less than one year old it is not required to subtitle/caption any volume of its output at all. Whilst I understand the cost of doing so might be considered a barrier to even launching the channel in the first place, the problem is that it once again promotes an attitude of not budgeting for subtitling/captioning from the start of the business process. So two or three years down the line, when the grace period is over, the risk is that subtitling becomes an additional cost the channel has not budgeted for, and could be perceived as a hindrance or ‘punishment’ rather than something positive that adds value for the channel and its viewers.

    The same is also true for translation subtitling. At the 2014 Languages & The Media Conference, Pablo Romero-Fresco gave this statistic: subtitling and translation make up 57% of the revenue generated by English-speaking movies, yet translation subtitling gets only 0.1% of the budget. He argued that there needs to be a shift in the production process of filmmaking. His suggestion is that film production should recognise and create the role of Producer of Accessibility, who is involved before the final edit is locked.

    Sherlock – text message – on screen typography

    He observed that the text and typography effects seen in recent years in the BBC’s Sherlock and Netflix’s House of Cards (and many, many more), which use text on screen as part of the storytelling and are created in post-production, should also be integrated into this role. I too have observed the increase in recent years in the use of on-screen typography as part of the storytelling process. It’s also being widely used in music videos. For lots of examples of kinetic typography be sure to check out this Vimeo channel.

    Romero repeated this vision and idea at the Future of Subtitling Conference 2014.  You can read more in-depth information in the Journal of Specialised Translation.  I’ve also collated further tweets and information on this topic at Storify: Why subtitles should be part of the production process.

    I think it’s a really interesting idea. I also think that it will require a monumental shift for this to happen in the industry, but never say never. What is good is that collaboration of a sort is certainly happening between broadcast TV production companies and subtitling companies: information and scripts are shared well in advance so that subtitlers can prepare as much as possible ahead of broadcasts. Clearly, Romero’s vision is much more integrated than that.

    Currently, for broadcast TV licensed under Ofcom, responsibility for access and the provision of subtitling lies with the broadcaster/TV channel. If the creation of subtitles and captions were implemented wholly within the production process, should subtitling provision then lie solely with the production company?

    At the moment it would appear that the responsibility shifts between the two depending on a number of factors:

    1. Regulation, if there is any, and who is considered responsible for providing subtitles.
    2. The production company and/or the distribution company making the content (some will provide subtitles, some will not, and a broadcaster may have bought programmes from either of these, or they may be one and the same).
    3. The country broadcasting the content (what language do you need subtitles in and how many languages will a production company be prepared to produce?)
    4. The method by which content is viewed (digital TV, satellite, cable, online, download, streaming subscription, pay per view).

    It really shouldn’t be complicated, but there is no denying that with all these variables it is. A lot of the above is complicated further by distribution rights, which are another topic entirely. I do like the idea a lot though, as it has the potential to simplify some of the above. I also think production companies would benefit greatly from the knowledge and expertise that translation and subtitling companies have gained over years of experience as to the best methods to achieve collaboration and integration. What do you think?

    • Claude Almansi 11:08 pm on December 22, 2014 Permalink | Reply

      Thank you, Dawn: so many creative proposals in your post. It reminded me of a tutorial that Roberto Ellero made for the Italian public administration in 2009, entitled rather sternly – well, due to the target audience – “Accessibilità e qualità dei contenuti audiovisivi”, Accessibility and quality of audiovisual content. It’s in https://www.youtube.com/watch?v=wy34n09tvKo , with Italian captions and English subtitles (1). I think you might agree with the part from 1:47:

      “Every audiovisual product begins with a text, a script, a storyboard, some writing geared towards visualization, which then gets enacted in a series of frames and sequences. Every video always starts from a text and returns to a text (a book, being read, generates images in our mind, and the reverse path leads to audiodescription, which, in turn, is also a text)…”

      (1) Apologies for the typos in the English subs: I translated them on a train journey with TextEdit and sent them from a station where I got a wireless connection: he needed them urgently for some talk he was to give the following day 🙂


  • iheartsubtitles 4:19 pm on November 8, 2013 Permalink | Reply
    Tags: Translation

    Machine Translation & Subtitles – Q&A with Yota Georgakopoulou 

    Something I have not blogged much about to date is machine translation and its use in a subtitling context. Having read about a project titled SUMAT, I was lucky enough to put some questions on this topic to Yota Georgakopoulou:

    Q1: What does SUMAT stand for? (Is it an acronym?)

    Yes, it stands for SUbtitling by MAchine Translation.

    Q2: How is SUMAT funded and what industries/companies are involved?

    SUMAT is funded by the European Commission through Grant Agreement nº 270919 of the funding scheme ICT CIP-PSP – Theme 6, Multilingual Online Services.

    There are a total of nine legal entities involved in the project. Four of them are subtitling companies, four are technical centres in charge of building the MT systems we are using in the project, and the ninth is responsible for integrating all systems in an online interface through which the service will be offered.

    Q3: Can you give us a little bit of information on your background and what your involvement in SUMAT has been to date?

    I have been working in translation and subtitling ever since I was a BA student in the early 90’s. I was working in the UK as a translator/subtitler, teaching and studying for a PhD in subtitling at the time of the DVD ‘revolution’, with all the changes it brought to the subtitling industry. This was when I was asked to join the European Captioning Institute (ECI), to set up the company’s translation department that would handle multi-language subtitling in approximately 40 languages for the DVD releases of major Hollywood studios. That’s how my career in the industry began. It was a very exciting time, as the industry was undergoing major changes, much like what is happening today.

    Due to my background in translation, I was always interested in machine translation and was closely following all attempts to bring it to the subtitling world. At the same time, I was looking for a cost-effective way to make use of ECI’s valuable archive of parallel subtitle files in 40+ languages, and the opportunity came up with the SUMAT consortium. ECI has since been acquired by Deluxe, who saw the value of the SUMAT project and brought further resources to it. Our involvement in the project has been that of data providers, evaluators and end users.

    Q4: Machine Translation (MT) already has some history of being used to translate traditional text. Why has machine translation not been put to use for translating subtitles?

    Actually, it has. There have been at least two other European projects which have attempted to use machine translation as part of a workflow that was meant to automate the subtitling process: MUSA (2002-2004) and eTITLE (2004-2006). Unfortunately, these projects were not commercialized in the end. Part of the reason for this is likely to be that the MT output was not of good enough quality for a commercial setting. As professional quality parallel subtitle data are typically the property of subtitling companies and their clients, this is not surprising. The SUMAT consortium invested a large amount of effort at the beginning of the project harvesting millions of professional parallel subtitles from the archives of partner subtitling companies, then cleaning and otherwise processing them for the training of the Statistical Machine Translation (SMT) systems our Research and Technical Development (RTD) partners have built as part of the project.

    Q5: Some readers might be concerned that a machine could never replace the accuracy of a human subtitler translating material. What is your response to that concern?

    Well, actually, I also believe that a machine will never replace a subtitler – at least not in my lifetime. MT is not meant to replace humans, it is simply meant to be another tool at their disposal. Even if machines were so smart that they could translate between natural languages perfectly, the source text in the case of film is the video as a whole, not just the dialogue. The machine will only ‘see’ the dialogue as source file input, with no contextual information, and will translate just that. Would a human be able to produce great subtitles simply by translating from script without ever watching the film? Of course not. Subtitling is a lot more complex than that. So why would anyone expect that an MT system could be able to do this? I haven’t heard anyone claiming this, so I am continuously surprised to see this coming up as a topic for discussion. I think some translators are so afraid of technology, because they think it will take their jobs away or make their lives hard because they will have to learn how to use it, that they are missing the point altogether: MT is not there to do their job, it is there to help them do their job faster!

    Q6: Is the technology behind SUMAT similar to that used by YouTube for its ‘automatic subtitles’?

    Yes, in a way. YouTube also uses SMT technology to translate subtitles. However, the data YouTube’s SMT engines have been trained with is different. It is not professional quality subtitle data, but vast amounts of amateur quality subtitle data found on the internet, coupled with even larger amounts of any type of parallel text data found on the web and utilized by Google Translate. Also, one should bear in mind that many ‘issues’ found in YouTube subtitles, such as poor subtitle segmentation, are a result of the input text, which in some cases is an automatic transcription of the source audio. Thus, errors in these transcriptions (including segmentation of text in subtitle format) are propagated in the ‘automatic subtitles’ provided by YouTube.

    SUMAT also uses SMT engines built with the Moses toolkit. This is an open source toolkit that has been developed as part of another EU-funded project. In SUMAT, the SMT engines have been trained with professional quality subtitle data in the 14 language pairs we deal with in the project, and supplemented with other freely available data. Various techniques have been used to improve the core SMT systems (e.g. refined data selection, translation model combination, etc.), with the aim of ironing out translation problems and improving the quality of the MT output. Furthermore, the MT output of SUMAT has been evaluated by professional subtitlers. Human evaluation is the most costly and time-consuming part of any MT project, and this is why SUMAT is so special: we are dedicating almost an entire year to such human evaluation. We have already completed the 1st round of this evaluation, where we focused on the quality output of the system, and we have now moved on to the 2nd round which focuses on measuring the productivity gain that the system helps subtitlers achieve.

    Q7: Why do you think machine translation is needed in the field of subtitling?

    I work in the entertainment market, and there alone the work volumes in recent years have skyrocketed, while at the same time clients require subtitle service providers to deliver continuous improvement on turnaround times and cost reduction. The only way I see to meet current client needs is by introducing automation to speed up the work of subtitlers.

    Aside from entertainment material, there is a huge amount of other audiovisual material that needs to be made accessible to speakers of other languages. We have witnessed the rise of crowdsourcing platforms for subtitling purposes in recent years specifically as a result of this. Alternative workflows involving MT could also be used in order to make such material accessible to all. In fact, there are other EU-funded projects, such as transLectures and EU-Bridge, which are trying to achieve this level of automation for material such as academic videolectures, meetings, telephone conversations, etc.

    Q8: How do you control quality of the output if it is translated by a machine?

    The answer is quite simple. The output is not meant to be published as is. It is meant to be post-edited by an experienced translator/subtitler (a post editor) in order for it to reach publishable quality. So nothing changes here: it is still a human who quality-checks the output.

    However, we did go through an extensive evaluation round measuring MT quality in order to finalise the SMT systems to be used in the SUMAT online service, as explained below. The point of this evaluation was to measure MT quality, pinpoint recurrent and time-consuming errors and dedicate time and resources to improving the final system output quality-wise. Retraining cycles of MT systems and other measures to improve system accuracy should also be part of MT system maintenance after system deployment, so that new post-edited data can be used to benefit the system and to ensure that the quality of the system output continues to improve.

    Q9: How do you intend to measure the quality/accuracy of SUMAT?

    We have designed a lengthy evaluation process specifically to measure the quality and accuracy of SUMAT. The first round of this evaluation was focused on quality: we asked the professional translator/subtitlers who participated to rank MT output on a 1-5 scale (1 being incomprehensible MT output that cannot be used, and 5 being near perfect MT output that requires little to no post-editing effort), as well as annotate recurrent MT errors according to a typology we provided, and give us their opinion on the MT output and the post-editing experience itself. The results of this evaluation showed that over 50% of the MT subtitles were ranked as 4 or 5, meaning little post-editing effort is required for the translations to reach publishable quality.

    At the second and final stage of evaluation that is currently under way, we are measuring the benefits of MT in a professional use case scenario, i.e. checking the quality of MT output indirectly, by assessing its usefulness. We will thus measure the productivity gain (or loss) achieved through post-editing MT output as opposed to translating subtitles from a template. We have also planned for a third scenario, whereby the MT output is filtered automatically to remove poor MT output, so that translators’ work is a combination of post-editing and translation from source. One of the recurrent comments translators made during the first round of evaluation was that it was frustrating to have to deal with poor MT output and that there was significant cognitive effort involved in deciding how to treat such output before actually proceeding with post-editing it. We concluded it was important to deal with such translator frustrations as they may have a negative impact on productivity and have designed our second round of experiments accordingly.
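
    As a rough illustration of the arithmetic behind a figure like the one above (a sketch with made-up ratings, not SUMAT’s actual evaluation data), aggregating the 1–5 quality rankings could look like this in Python:

```python
from collections import Counter

def rank_distribution(ratings):
    """Summarise 1-5 MT quality ratings and the share of subtitles
    (ranked 4 or 5) that need little post-editing effort."""
    counts = Counter(ratings)
    share_4_5 = (counts[4] + counts[5]) / len(ratings)
    return counts, share_4_5

# Hypothetical ratings collected from one evaluation round
ratings = [5, 4, 3, 4, 2, 5, 4, 1, 3, 4]
counts, share = rank_distribution(ratings)
print(f"{share:.0%} of MT subtitles ranked 4 or 5")  # prints "60% of MT subtitles ranked 4 or 5"
```

    In the real project this aggregation would of course be done per language pair and alongside the error-typology annotations, but the headline “over 50% ranked 4 or 5” is this kind of simple proportion.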

    Q10: Are there any examples of translation subtitles created by SUMAT?

    Yes, the SUMAT demo is live and can be found on the project website (www.sumat-project.eu). Users can upload subtitle files in various subtitle formats and they will be able to download a machine translated version of their file in the language(s) they have selected. We have decided to limit the number of subtitles that can be translated through the demo, so that people do not abuse it and try to use it for commercial purposes.
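
    As an aside, before any subtitle text can reach an MT engine, the uploaded file has to be parsed into individual cues. Purely as an illustration (the function and sample below are hypothetical, not SUMAT code), here is a minimal sketch of parsing the common SRT format in Python:

```python
import re

def parse_srt(srt_text):
    """Split an SRT file into (index, timing, text) entries.
    A minimal sketch: real subtitle tooling handles many more
    formats, encodings and edge cases."""
    entries = []
    # SRT cues are separated by blank lines
    for block in re.split(r"\n\s*\n", srt_text.strip()):
        lines = block.splitlines()
        if len(lines) >= 3 and "-->" in lines[1]:
            entries.append((int(lines[0]), lines[1], "\n".join(lines[2:])))
    return entries

sample = """1
00:00:01,000 --> 00:00:03,000
Hello there.

2
00:00:04,000 --> 00:00:06,500
How are you?"""

for idx, timing, text in parse_srt(sample):
    print(idx, text)
```

    Only the text of each cue would be sent for translation; the indices and timings are kept so the machine-translated file can be reassembled in the same format for download.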

    Q11: Does SUMAT have a role to play in Same Language Subtitles for Access? (Subtitles for the Deaf and HOH)

    No. SUMAT is a service that offers automation when one needs to translate existing subtitles from one language to another and presupposes the existence of a source subtitle file as input.

    Q12: You recently gave a workshop for SUMAT at the Media For All conference, can you tell us a little bit about the results of the workshop?

    The workshop at Media for All was the culmination of our dissemination efforts and the first time the SUMAT demo was shown to professionals (other than staff of the subtitling companies that are partners in this project). These professionals had the chance to upload their own subtitle files and download machine-translated versions thereof. There were approximately 30 participants at the workshop, who were first briefed on the background of the project, the way the MT systems were built and automatically evaluated, as well as on the progress of our current evaluation with professional translators.

    In general, participants seemed impressed with the demo and the quality of the MT output. Representatives of European universities teaching subtitling to their students acknowledged that post-editing will have an important role to play in the future of the industry and were very interested in hearing our thoughts on it. We were also invited to give presentations on post-editing to their students, some of which have already been scheduled.

    Q13: Where can readers go to find out more about this project?

    The best source of information on the project is the project website: http://www.sumat-project.eu. We have recently re-designed it, making it easier to navigate. One can also access our live demo through it and will eventually be able to access the online service itself.

    Q14: Is there anything readers can do if they wish to get involved in the project?

    Although the project is almost complete, with less than half a year to go, contributions are more than welcome both until project end and beyond.

    Once people have started using the live demo (or, later on, the service itself), any type of feedback would be beneficial to us, especially if specific examples of files, translations, etc. are mentioned. We plan to continue improving our systems’ output after the end of the project, as well as add more language pairs, depending on the data and resources we will have available. As we all know, professional human evaluation is time-consuming and costly, so we would love to hear from all translators that end up using the service – both about the good and the bad, but especially about the bad, so we can act on it!

    Q15: If you could translate any subtitling of your choice using SUMAT what would it be?

    Obviously MT output is most useful to the translator when its accuracy is at its highest. From our evaluation of the SUMAT systems so far, we have noticed trends that indicate that scripted material is translated with higher accuracy than unscripted material. This is something that we are looking at in detail during the second round of evaluations that are now underway, but it is not surprising. MT fares better with shorter textual units that have a fairly straightforward syntax. If there are a great many disfluencies, as one typically finds in free speech, the machine may struggle with these, so I’m expecting our experiments to confirm this. I suppose we will need to wait until March 2014, when our SUMAT evaluation will be completed, before I can give you a definite answer to this question.

    Thanks again to Yota for agreeing to the Q&A and for providing such informative answers.

    • Patricia Falls 2:00 pm on May 13, 2014 Permalink | Reply

      I train people on a steno machine to do realtime translation. I would like to discuss our product with you and how we can become involved in training


    • iheartsubtitles 2:10 pm on May 13, 2014 Permalink | Reply

      Hi Patricia, the SUMAT project is about machine translation for post editing translation. The system does not work with live/real-time subtitling so I am not sure the two are compatible? I suggest contacting them via the website listed in the article for further information.


  • iheartsubtitles 10:43 am on August 12, 2013 Permalink | Reply
    Tags: Translation

    Subtitles to learn a second language – resources 

    A lot of visitors to this blog arrive via search terms that seem to relate to using subtitles to learn a second language, so depending on the feedback I get I may later turn this blog post into a reference page. My question is: how many good resources are there on the web that help with this? Of course anyone can watch a DVD and/or download a subtitle file in the language they are trying to learn, but what about other resources?

    Here are some I have found to date. I cannot vouch for their usefulness, since I am not using any of them to learn a language, but I have included them because they offer something more than just a subtitle file.

    Audio Verb

    An interesting website for learning Chinese.

    A screenshot of the Audio Verb website

    Clip Flair

    Clip Flair describes itself as Foreign Language Learning through Interactive Captioning and Revoicing of Clips. It is an online tool that allows users to create clips, revoice them, and subtitle them. The video below demonstrates how it works. (Note: there is no audio dialogue on this video)

    Anyone learning English as a second language who is also a music lover might want to check out the musicESL YouTube channel and the website MusicEnglish for collections of subtitled music videos. If music is not your thing, then Voice of America (VOA) has captioned YouTube videos for viewers to learn American English, and much more, with captioned news reports read at a slower speed.

    If anyone else knows of any good online resources please comment and share. Thanks!

  • iheartsubtitles 2:05 pm on February 21, 2013 Permalink | Reply
    Tags: Translation

    YouTube – paid translation captions service now available 

    This is interesting: YouTube is now offering a paid translation service for people wishing to provide translation subtitles/captions to viewers of their YouTube-hosted videos. Would you use it? Or would you go for a free option only (crowd-sourced Amara perhaps? Or Google Translate?) There are pros and cons to both.

  • iheartsubtitles 1:30 pm on February 14, 2013 Permalink | Reply
    Tags: Documentary, Translation

    Coming Soon, a film about subtitlers 

    I discovered this trailer a few weeks ago and thought I would share it here. A documentary film goes behind the scenes of the subtitling industry, and its trailer has been subtitled for the deaf and HoH:

    Interesting that the last 20 seconds illustrates a translation #subtitlefail! Is the issue of high quality subtitles and translation a problem that is getting worse?

    The film is due to be released next month. I hope I get to see it.

  • iheartsubtitles 2:14 pm on January 30, 2013 Permalink | Reply
    Tags: Translation

    Talking Animals – Subtitled 

    Who doesn’t like talking animals (if you don’t what’s wrong with you?!) Just going to leave these here for your enjoyment:

    Introducing Ruby the talking parrot:

    And cat talk translated, sort of:

  • iheartsubtitles 12:29 pm on January 23, 2013 Permalink | Reply
    Tags: Translation

    Film & Television Awards Season and subtitles 

    Awards season for the film and television industries has started already. Last week was the Golden Globes, during which a rather rambling speech from Jodie Foster, in which she came out, received a fair amount of media coverage. Here’s the real transcript behind Funny Or Die’s truncated version of Foster’s speech:

    Robert [Downey Jr], I want to thank you for everything: for your bat-crazed, rapid-fire brain, the sweet intro. I love you and Susan and tonight I feel like the prom queen. Thank you. Looking at all those clips, you know, the hairdos and the freaky platform shoes, it’s like a home-movie nightmare that just won’t end, and I guess I have a sudden urge to say something that I’ve never really been able to air in public. So, a declaration that I’m a little nervous about but maybe not quite as nervous as my publicist right now, huh Jennifer? But I’m just going to put it out there, right? Loud and proud, right? So I’m going to need your support on this. I am single. Yes I am, I am single. No, I’m kidding — but I mean I’m not really kidding, but I’m kind of kidding. I mean, thank you for the enthusiasm. Can I get a wolf whistle or something? Jesus. Seriously, I hope you’re not disappointed that there won’t be a big coming-out speech tonight because I already did my coming out about a thousand years ago …. if you’d had to fight for a life that felt real and honest and normal against all odds, then maybe you too might value privacy above all else. Privacy. Some day, in the future, people will look back and remember how beautiful it once was. I have given everything up there from the time that I was three years old…There are a few secrets to keeping your psyche intact over such a long career. The first, love people and stay beside them. That table over there, 222, way out in Idaho, Paris, Stockholm and of course, Mel Gibson. You know you save me too. There is no way I could ever stand here without acknowledging one of the deepest loves of my life, my heroic co-parent, my ex-partner in love but righteous soul sister in life, my confessor, ski buddy, consigliere, most beloved BFF of 20 years, Cydney Bernard. …Well, I may never be up on this stage again, on any stage for that matter. Change, you gotta love it. 
…I want to be seen, to be understood deeply and to be not so very lonely. Thank you, all of you, for the company. Here’s to the next 50 years.

    Now watch the speech with Funny Or Die’s amusing ‘translation subtitles’ by clicking on the link below:

    For a more serious analysis of the speech, read this article.

    Sticking with the serious, but still on the topic of subtitles and the awards season, there was an interesting article published by The Observer claiming that the ‘Best Picture’ nomination for the Austrian film Amour indicates a growing mainstream acceptance of subtitled films:

    Academy voters appear to be hinting at a new openness to other cultures and the growing acceptability of subtitled entertainment. “It really is unusual for a foreign language film to do this well and to be nominated in two other main categories too, for best adapted screenplay and best director,” said Charles Gant, film editor of Heat magazine.

    Not since Clint Eastwood’s Letters from Iwo Jima, shot almost entirely in Japanese, was nominated in 2007 and Ang Lee’s action-packed Crouching Tiger, Hidden Dragon in 2001 has a work in another language stood as an equal next to the best of English language cinematic storytelling.

    Audiences in Britain in particular are responding to the growing accessibility of high-quality foreign films, which are easier to access at home now.

    Critically rated television shows such as the French series Spiral, Hatufim – the Israeli show Homeland is based on – the Sicilian Inspector Montalbano, and BBC Four’s smorgasbord of Scandinavian shows, The Killing, The Bridge and Borgen, have allowed British audiences to appreciate foreign entertainment. Home delivery services such as Love Film, Apple TV’s iTunes, Netflix and Curzon On Demand mean that viewers can download and stream new and classic foreign titles on a whim, rather than seeking out a DVD. “There was previously a real access problem for this kind of film,” recalls Gant. “There was a short run at your local cinema, and that was your only chance.”

    Changes in technology have helped at the cinema as well as at home, Gant said. “Digital projection now means cinema programmers have a lot more flexibility. They don’t just have to run a film for a week now. And with Curzon on Demand they are actually offering people the chance to see foreign films at home on the day of release.”…..Gant suspects the increasing use of computers and phones for social media means that resistance to reading type while relaxing has disappeared, making subtitles less frightening.

    SOURCE: The Observer

    On a related note, be sure to read Lipreading Mom’s blog post on her reports of watching Best Picture Academy Award nominee’s in US cinemas in captioned performances with various technology options on offer:

    Lipreading Mom's Nominees for Best Captioned Oscar Movie Are….

    • sgrovesuss 12:50 am on January 24, 2013 Permalink | Reply

      Thank you for your excellent articles about quality captioning. Stay tuned for the next and final installment in my Best Captioned Oscar Movie blog series at LipreadingMom.com. Blessings!

      Shanna / LipreadingMom.com


  • iheartsubtitles 3:21 pm on January 18, 2013 Permalink | Reply
    Tags: Translation

    This Is a Story About ‘The Fresh Prince’ and subtitles via Google Translate 

    Mashable pointed me to this comedy/creative use of Google Translate to see what happened to the lyrics of the TV theme tune for The Fresh Prince Of Bel Air. Subtitles are used to display the translation results in the video:

    Click here to watch with full English subtitles.

    (and just in case you’ve been living under a rock or can’t remember the 1990s click here for the lyrics.)

  • iheartsubtitles 5:52 pm on December 20, 2012 Permalink | Reply
    Tags: , , , Microsoft, , , Siri, , Translation, ,   

    CSI User Experience Conference 2012 Part 3 – Live subtitles & voice recognition technology 

    CSI User Experience Conference 2012: TV Accessibility


    For background info on this conference read: Part 1.

    It’s clear that much of the frustration among UK TV viewers surrounds live subtitles, so the voice recognition technology and the respeaking process used to produce them were one of the topics of debate in a panel on the user experience following Ofcom’s presentation.

    Deluxe Media’s Claude Le Guyader made some interesting points:

    In the case of live subtitling… it’s a lot of pressure on the person doing the work, the availability of the resource and the cost; it all means that the advent of voice recognition was embraced by all the service providers as a way to palliate that lack of resource (in this case – stenographers). As we know, voice recognition started, it’s not perfect, still not perfect. I don’t know if you have seen it on your iPhone, it’s quite funny, with a French accent it’s even worse! (This is a reference to Siri, which as far as I am aware is not used to create live subtitles, but it is built on the same voice recognition technology.) With voice recognition you need to train the system. Each person (in this case – a subtitler or respeaker) needs to train it. Now it’s moved on and there are people using voice recognition very successfully as well, so it’s evolving. But the patience, you know, it does run out when you are round the table years later discussing the same issue. It’s not a lack of will, I think it’s just a difficult thing to achieve, because it involves so many different people.

    Voice technology does seem to be constantly evolving, and the fact that it is being implemented in more and more products (the iPhone’s Siri is a great example) is, I think, a positive thing. It increases consumer awareness of what this technology can do, and consequently I think people will come to expect this technology to work. There are numerous ways voice technology is being used. To move away from live subtitling and the points made at the conference for a moment, but staying within a broadcast TV context, another use is illustrated by Google TV. In the video below you can see voice recognition technology allowing a viewer to navigate the TV:

    Voice recognition technology is also used to create the automatically generated captions on YouTube videos. At the moment this illustrates the technology’s limitations, as most readers here, I am sure, are aware – the captions created this way are completely inaccurate most of the time and therefore useless. I think we can all agree that respeaking to produce live subtitles creates errors, but it currently produces a much better result than a machine. Google recently added automatic captioning support for six new languages. Investment in this technology, even if it is currently imperfect, shouldn’t be discouraged, because surely this is the only way for the technology to improve:

    A new research paper out of Google describes in some detail the data science behind the company’s speech recognition applications, such as voice search and adding captions or tags to YouTube videos. And although the math might be beyond most people’s grasp, the concepts are not. The paper underscores why everyone is so excited about the prospect of “big data” and also how important it is to choose the right data set for the right job… No surprise, then, it turns out that more data is also better for training speech-recognition systems… The real key, however — as any data scientist will tell you — is knowing what type of data is best to train your models, whatever they are. For the voice search tests, the Google researchers used 230 billion words that came from “a random sample of anonymized queries from google.com that did not trigger spelling correction.” However, because people speak and write prose differently than they type searches, the YouTube models were fed data from transcriptions of news broadcasts and large web crawls… This research isn’t necessarily groundbreaking, but helps drive home the reasons that topics such as big data and data science get so much attention these days. As consumers demand ever smarter applications and more frictionless user experiences, every last piece of data and every decision about how to analyze it matters.

    SOURCE: GigaOM
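    The point the GigaOM piece makes about training data can be sketched with a toy example. The snippet below is my own deliberate simplification (the function names are mine, and real speech recognisers use far richer models than word bigrams): a model counts which word tends to follow which, and a larger, better-matched corpus changes what it predicts.

    ```python
    from collections import Counter, defaultdict

    def train_bigram_model(corpus_words):
        """Count, for each word, how often each following word occurs."""
        counts = defaultdict(Counter)
        for prev, nxt in zip(corpus_words, corpus_words[1:]):
            counts[prev][nxt] += 1
        return counts

    def predict_next(model, word):
        """Return the most frequently observed word after `word`, or None."""
        followers = model.get(word)
        if not followers:
            return None
        return followers.most_common(1)[0][0]

    # A tiny corpus vs. a slightly larger one: the extra data
    # shifts the model's guess for what follows "the".
    small = "turn the light on".split()
    large = ("turn the light on . turn the volume up . "
             "turn the channel over . turn the volume up").split()

    print(predict_next(train_bigram_model(small), "the"))  # light
    print(predict_next(train_bigram_model(large), "the"))  # volume
    ```

    The same logic scales up: feed the model web-search queries and it predicts query-like word sequences, feed it news transcriptions and it predicts broadcast-like ones – which is exactly why Google trained its YouTube captioning models on different data than its voice search models.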

    Following on from this example, the natural question to ask is: will Apple integrate its voice technology, Siri, into Apple TV? It has been rumoured but not yet confirmed. (Interestingly, it is already confirmed that Siri is being added to Chevrolet cars next year.) If there is competition between companies for innovation using this technology, all the better. I found an interesting blog post pondering the future of Siri for Apple here, although this blogger thinks that Google Voice is better. Voice technology is also being used in the world of translation. Last month Microsoft gave an impressive demo of voice recognition technology translating a speaker’s English speech into Chinese text, as well as speaking it back to him in Chinese in his own voice:

    Skip to 4:22 to be able to read captions from this presentation.

    All of these examples I hope will contribute in some small way to an improvement in live subtitling. Mark Nelson would disagree with me. He wrote an article explaining how he believes that a peak has been reached and that greater reliance on voice technology could lead to the deaf and hard of hearing being left behind.

    What do you think? Do you think live subtitling will improve as a result of voice recognition technology or do you have another view? Please leave your comments below.

  • iheartsubtitles 9:32 pm on November 15, 2012 Permalink | Reply
    Tags: , , , Translation   

    Misheard lyrics, Gangnam Style, and Coldplay subtitled 

    Sharing this amusing video found on another WordPress blog. It is a collection of music videos edited together and subtitled with their misheard lyrics. Who hasn’t misheard a lyric or two? (There is also a website dedicated to this very subject.) A warning: it might be difficult to “un-hear” these lyrics once you’ve watched the video!

    Music Monday #73

    Misheard song lyrics. Some are better than others and there are a few where I thought the subtitles were the real lyrics anyway (I’m not the best when it comes to stuff like that) Read More

    via Bite me Charlie

    Sticking with ‘fun’ and moving on to K-Pop. Is there anyone who hasn’t seen or heard Gangnam Style yet? It’s interesting that a music video has become the most-liked YouTube video according to the Guinness Book Of Records, and has gone on to win Best Video at the 2012 MTV Europe Music Awards. Perhaps because music is universal? Or in this particular case, maybe it’s just the dance move! Either way, if like me your Korean isn’t up to scratch and you are curious about the lyrics, how about watching a subtitled translation? Viki has 23 language translations available here.

    Whilst on the subject of music, I recently added a link to my blogroll on the right-hand side —> to a blog dedicated to subtitled music videos for use as an educational resource. It’s called Music English and has a decent collection of accurately subtitled music videos to choose from, which is growing all the time. I have contributed by subtitling some music videos myself using Amara (previously called Universal Subtitles). If this is something that appeals to any readers, why not join the Music Captioning team? At some point I may collate the videos I have subtitled and share them here, but you can find most of them on the aforementioned blog. And as a final note to this blog entry, I am really pleased to see what looks like a screenshot of an upcoming Coldplay DVD menu, and it has subtitles! (Incidentally, one of the music videos I have subtitled is a Coldplay song.)
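    For anyone tempted to try subtitling, the files involved are refreshingly simple: the widely supported SubRip (.srt) format is plain text, with each cue numbered and timed. Here is a minimal sketch of my own (the helper name and the lyric used as the caption are just illustrative) that formats one such cue:

    ```python
    def srt_cue(index, start, end, text):
        """Format one SubRip (.srt) cue: a cue number, a
        'HH:MM:SS,mmm --> HH:MM:SS,mmm' timing line, then the caption text."""
        def ts(seconds):
            ms = int(round(seconds * 1000))
            h, ms = divmod(ms, 3_600_000)
            m, ms = divmod(ms, 60_000)
            s, ms = divmod(ms, 1000)
            return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"
        return f"{index}\n{ts(start)} --> {ts(end)}\n{text}\n"

    # A cue that displays from 12.5s to 15s into the video.
    print(srt_cue(1, 12.5, 15.0, "Lights will guide you home"))
    ```

    Tools like Amara handle the timing in the browser, but it is the same cue structure underneath, which is partly why community subtitling projects like Music Captioning are so easy to join.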

    Coldplay Live 2012

    Yes! Music with subtitles, more like this please 🙂

    SOURCE: Coldplay – Instagram.

    If this is the case, I will be making a purchase 🙂
