Tagged: Technology

  • iheartsubtitles 11:57 pm on January 16, 2016 Permalink | Reply
    Tags: Technology

    A look back at 2015 

    2015 saw a lot of accessibility advocacy around subtitles/captioning and audio description with some great victories. This is by no means a complete list but it does summarise some of the highlights for me.

    Amazon Instant Video UK finally started adding subtitles to their VOD service after some great advocacy which had its origins in the Love Subtitles campaign. In 2014, there were no subtitles at all, and by the end of 2015, approximately 50% of its content was subtitled (see the ATVOD report detailed later in this article). Let’s hope that percentage continues to rise in 2016.

    Animated gif of Matt Murdock aka Daredevil

    Marvel’s Daredevil on Netflix

    Netflix US/UK found itself the target of an advocacy campaign to add audio description to its content after it released Daredevil, an original series about a blind superhero based on the Marvel comic book character of the same name, without making it accessible to blind and visually impaired viewers. Thanks to the efforts of The Accessible Netflix Project, there was a pretty fast response from Netflix in releasing audio description tracks for this series and more on its platform.

    The Action On Hearing Loss #SubtitleIt! campaign successfully obtained a public commitment from Sky that it would increase the amount of subtitled VOD content on its services by summer 2016, following a petition from Jamie Danjoux. I think a lot of people will be watching this one closely and look forward to seeing the commitment being met in 2016.

    All of these campaigns are far from over and many are continuing their advocacy into 2016. The #SubtitleIt campaign has just published a useful summary of ATVOD’s final report into the Provision of Access Services (published at the end of 2015, before ATVOD was taken over by Ofcom).

    There were many other interesting publications around accessibility in 2015.

    There were many other successful advocacy and awareness campaigns in 2015.

    I had great fun taking part in the UK’s first #CAPaware week, launched by Stagetext to celebrate its 15th birthday, which amongst many activities that week included tweeting along while watching a captioned play from Digital Theatre.

    Turn On The Captions Now was a campaign that successfully pushed for a local city law in Portland, Oregon, USA, which states that all public televisions in public areas such as bars and restaurants must have closed captioning switched on. The Portland: Turn On The Captions Now! group have since published a website that includes instructions for turning on closed captions and advice for Portland residents on how to request captions if they spot non-compliance.

    American Airlines showed everyone how not to respond on social media to a request for closed captioning to be made available on their in-flight entertainment. It led to a Twitter campaign with the hashtag #DeafInTheAir.

    But it wasn’t just airlines: Braam Jordaan was successful in getting the White House to make its video content accessible with a campaign predominantly on Twitter using the hashtag #WHccNow.

    In fact, when it comes to social media (whisper it carefully, I don’t want to jinx it), it seems that the knowledge that subtitling and captioning your video media brings benefits beyond accessibility is starting to become mainstream. Video marketing websites have been quick to report research showing that adding subtitles can increase the video completion rate and the video share rate. I for one have noticed more and more videos on social media with auto-play *and* open subtitles. I hope that this trend continues, as it can only lead to more accessible content online for everyone. Roll on 2016…

     

     
  • iheartsubtitles 10:08 pm on December 3, 2014 Permalink | Reply
    Tags: Technology

    Access 2020 – Languages & The Media 2014 

    Access 2020 was an interesting panel hosted by Alex Varley at the 10th Languages & The Media conference. The theme was for the panel to discuss what they thought media access might look like in 2020.

    Although it is difficult to summarise all of the discussions, Media Access Australia have written a summary of 20 highlights. Below are my two cents.

    • Broadcasters have to start thinking about what their role is. The industry still needs content producers, and broadcasters are likely to continue to play a big role in producing content. There is likely to be a merging of broadcast and IPTV.
    • In Europe, there is a keen focus to develop in the areas of: Machine Translation (MT), User Experience (UX), and Big Data.
    • Subtitling is becoming a language technology business rather than editorial. Greater levels of interest and innovation in technology will lead to greater quality and lower cost.
    • The industry is aiming for interoperability by 2020 (if not before) to ensure no technological barriers to access exist.
    • Two interesting ideas/questions were raised: Will access services start to become part of the production process for audio-visual content? Will we start to see closed signing?

    How to achieve all of this:

    1. Talk to end users more.
    2. Deal with the complexity. (interoperability)
    3. Different jobs will be created by new technology, but we still need humans to provide access.
    4. Regulators are not always the answer and can get it wrong. Target the businesses to provide access.
    Animated gif of the hoverboard from the film Back To The Future

    Personally I’m still waiting for the hoverboard.

     
  • iheartsubtitles 1:54 pm on November 23, 2014 Permalink | Reply
    Tags: Technology

    Smart Technologies, Smart Translations – Languages & The Media 2014 

    The theme of this year’s 10th Languages & The Media conference was Smart Technologies, Smart Translations, with a panel and audience discussion on:

    1) Machine Translation (MT)

    Sketch of a robot reading a book and writing a translation

    Machine Translation (MT)

    I thought it was interesting to find out that two Spanish universities have rolled out auto machine-translated subtitled recordings of lectures (I assume with post-editing also). There are numerous companies and academic institutions working and researching in this area, including SUMAT, EU Bridge, and transLectures.

    2) Speech Recognition

    Sketch of someone using speech recognition with really bad recognition output

    Speech Recognition (SOURCE: toothpastefordinner.com)

    Speech-to-text technology is already playing an important role in producing real-time or live subtitling and captioning. However, there was also some interesting discussion about text-to-speech technologies and access to audio-visual media via audio description, particularly with regard to speech synthesis. Many expressed a concern that the quality of the machine voice is not, or would not be, a pleasant experience for the end user. However, it was also pointed out that this would not replace current human-based audio description but would allow for bigger volumes of written content, such as news, to be made available to audiences that currently have no access, such as illiterate audiences.

    3) Cloud Technologies

    A graphic illustrating benefits of cloud technology: infrastructure, platform as a service, software as a service

    The benefits of cloud technology.

    There is a lot of talk about this everywhere at the moment. Within the context of subtitling and translation workflows I can see a lot of benefits. It is already being done, and whereas previously (or currently) much of the business process management has been based on trust, a cloud-based system can allow for greater transparency and even enforce checks. For example, a translation manager or coordinator might currently communicate with a freelance translator via email, but the actual work is done by someone else on their computer in a different location. The status of a project or job would be unknown, and the coordinator has to trust that it is being worked on and will meet the deadline set. With a cloud-based application, a translation coordinator (or even the client themselves) could potentially log into the application and see the progress or status of a job being completed by anyone anywhere in the world. It also makes access by multiple translators on a single project easy if required. And depending on how you build the application, it could also enforce a task or process (for example, only allowing a final subtitle file to be sent after a QC check has been made). A rough sketch of that last idea is below.
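
    To make the enforced-check idea concrete, below is a minimal, purely illustrative Python sketch. The class and method names are my own invention rather than any real subtitling platform’s API; it only shows how a cloud application could expose a job’s status history to anyone with access and refuse delivery until a QC check has been recorded.

        from dataclasses import dataclass, field

        @dataclass
        class SubtitleJob:
            job_id: str
            status: str = "assigned"           # assigned -> in_progress -> qc_passed -> delivered
            qc_passed: bool = False
            events: list = field(default_factory=list)

            def update(self, status):
                self.status = status
                self.events.append(status)     # history visible to the coordinator or client

            def record_qc(self, passed):
                self.qc_passed = passed
                self.update("qc_passed" if passed else "qc_failed")

            def deliver(self):
                # The enforced step: no final subtitle file goes out without a passed QC check.
                if not self.qc_passed:
                    raise RuntimeError("QC check not passed; delivery blocked")
                self.update("delivered")

        job = SubtitleJob("EP101-FR")          # hypothetical job identifier
        job.update("in_progress")
        job.record_qc(passed=True)
        job.deliver()
        print(job.events)                      # ['in_progress', 'qc_passed', 'delivered']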

     
  • iheartsubtitles 4:19 pm on November 8, 2013 Permalink | Reply
    Tags: Technology

    Machine Translation & Subtitles – Q&A with Yota Georgakopoulou 

    Something I have not blogged much about to date is the topic of machine translation and its use within a subtitling context. Having read about a project titled SUMAT, I was lucky enough to put some questions on this topic to Yota Georgakopoulou:

    Q1: What does SUMAT stand for? (Is it an acronym?)



    Yes, it stands for SUbtitling by MAchine Translation.

    Q2: How is SUMAT funded and what industries/companies are involved?



    SUMAT is funded by the European Commission through Grant Agreement nº 270919 of the funding scheme ICT CIP-PSP – Theme 6, Multilingual Online Services.

    There are a total of nine legal entities involved in the project. Four of them are subtitling companies, four are technical centres in charge of building the MT systems we are using in the project, and the ninth is responsible for integrating all systems in an online interface through which the service will be offered.

    Q3: Can you give us a little bit of information on your background and what your involvement in SUMAT has been to date?



    I have been working in translation and subtitling ever since I was a BA student in the early ’90s. I was working in the UK as a translator/subtitler, teaching and studying for a PhD in subtitling at the time of the DVD ‘revolution’, with all the changes it brought to the subtitling industry. This was when I was asked to join the European Captioning Institute (ECI), to set up the company’s translation department that would handle multi-language subtitling in approximately 40 languages for the DVD releases of major Hollywood studios. That’s how my career in the industry began. It was a very exciting time, as the industry was undergoing major changes, much like what is happening today.

    Due to my background in translation, I was always interested in machine translation and was closely following all attempts to bring it to the subtitling world. At the same time, I was looking for a cost-effective way to make use of ECI’s valuable archive of parallel subtitle files in 40+ languages, and the opportunity came up with the SUMAT consortium. ECI has since been acquired by Deluxe, who saw the value of the SUMAT project and brought further resources to it. Our involvement in the project has been that of data providers, evaluators and end users.

    Q4: Machine Translation (MT) already has some history of being used to translate traditional text. Why has machine translation not been put to use for translating subtitles?



    Actually, it has. There have been at least two other European projects which have attempted to use machine translation as part of a workflow that was meant to automate the subtitling process: MUSA (2002-2004) and eTITLE (2004-2006). Unfortunately, these projects were not commercialized in the end. Part of the reason for this is likely to be that the MT output was not of good enough quality for a commercial setting. As professional-quality parallel subtitle data are typically the property of subtitling companies and their clients, and so are hard to obtain for training, this is not surprising. The SUMAT consortium invested a large amount of effort at the beginning of the project in harvesting millions of professional parallel subtitles from the archives of partner subtitling companies, then cleaning and otherwise processing them for the training of the Statistical Machine Translation (SMT) systems our Research and Technical Development (RTD) partners have built as part of the project.
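
    (An aside from me, not part of Yota’s answer: the ‘cleaning and otherwise processing’ step is the kind of thing that can be sketched briefly. The fragment below is only a guess at what such a pass might involve, stripping formatting markup and dropping subtitle pairs whose lengths suggest a misalignment; the regular expression and the length-ratio threshold are illustrative assumptions, not SUMAT’s actual pipeline.)

        import re

        TAG = re.compile(r"</?[^>]+>|\{[^}]*\}")        # HTML-style and SSA-style markup

        def clean(text):
            """Strip markup and collapse whitespace in one subtitle."""
            return re.sub(r"\s+", " ", TAG.sub("", text)).strip()

        def filter_pairs(pairs, max_ratio=3.0):
            """Keep (source, target) subtitle pairs that look like genuine translations."""
            for src, tgt in pairs:
                src, tgt = clean(src), clean(tgt)
                if not src or not tgt:
                    continue                            # drop empty or markup-only lines
                ratio = max(len(src), len(tgt)) / min(len(src), len(tgt))
                if ratio <= max_ratio:                  # very different lengths -> likely misaligned
                    yield src, tgt

        sample = [("<i>Hello there.</i>", "Bonjour."), ("", "Texte orphelin.")]
        print(list(filter_pairs(sample)))               # [('Hello there.', 'Bonjour.')]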


    Q5: Some readers might be concerned that a machine could never replace the accuracy of a human subtitler translating material. What is your response to that concern?



    Well, actually, I also believe that a machine will never replace a subtitler – at least not in my lifetime. MT is not meant to replace humans; it is simply meant to be another tool at their disposal. Even if machines were so smart that they could translate between natural languages perfectly, the source text in the case of film is the video as a whole, not just the dialogue. The machine will only ‘see’ the dialogue as source file input, with no contextual information, and will translate just that. Would a human be able to produce great subtitles simply by translating from the script without ever watching the film? Of course not. Subtitling is a lot more complex than that. So why would anyone expect an MT system to be able to do this? I haven’t heard anyone claiming this, so I am continuously surprised to see it coming up as a topic for discussion. I think some translators are so afraid of technology, because they think it will take their jobs away or make their lives hard because they will have to learn how to use it, that they are missing the point altogether: MT is not there to do their job, it is there to help them do their job faster!

    Q6: Is the technology behind SUMAT similar to that used by YouTube for its ‘automated subtitles’?



    Yes, in a way. YouTube also uses SMT technology to translate subtitles. However, the data YouTube’s SMT engines have been trained with is different. It is not professional quality subtitle data, but vast amounts of amateur quality subtitle data found on the internet, coupled with even larger amounts of any type of parallel text data found on the web and utilized by Google Translate. Also, one should bear in mind that many ‘issues’ found in YouTube subtitles, such as poor subtitle segmentation, are a result of the input text, which in some cases is an automatic transcription of the source audio. Thus, errors in these transcriptions (including segmentation of text in subtitle format) are propagated in the ‘automatic subtitles’ provided by YouTube.

    SUMAT also uses SMT engines built with the Moses toolkit. This is an open source toolkit that has been developed as part of another EU-funded project. In SUMAT, the SMT engines have been trained with professional quality subtitle data in the 14 language pairs we deal with in the project, and supplemented with other freely available data. Various techniques have been used to improve the core SMT systems (e.g. refined data selection, translation model combination, etc.), with the aim of ironing out translation problems and improving the quality of the MT output. Furthermore, the MT output of SUMAT has been evaluated by professional subtitlers. Human evaluation is the most costly and time-consuming part of any MT project, and this is why SUMAT is so special: we are dedicating almost an entire year to such human evaluation. We have already completed the 1st round of this evaluation, where we focused on the quality output of the system, and we have now moved on to the 2nd round which focuses on measuring the productivity gain that the system helps subtitlers achieve.

    Q7: Why do you think machine translation is needed in the field of subtitling?



    I work in the entertainment market, and there alone the work volumes in recent years have skyrocketed, while at the same time clients require subtitle service providers to deliver continuous improvement on turnaround times and cost reduction. The only way I see to meet current client needs is by introducing automation to speed up the work of subtitlers.

    Aside from entertainment material, there is a huge amount of other audiovisual material that needs to be made accessible to speakers of other languages. We have witnessed the rise of crowdsourcing platforms for subtitling purposes in recent years specifically as a result of this. Alternative workflows involving MT could also be used in order to make such material accessible to all. In fact, there are other EU-funded projects, such as transLectures and EU-Bridge, which are trying to achieve this level of automation for material such as academic videolectures, meetings, telephone conversations, etc.


    Q8: How do you control quality of the output if it is translated by a machine?



    The answer is quite simple. The output is not meant to be published as is. It is meant to be post-edited by an experienced translator/subtitler (a post-editor) in order for it to reach publishable quality. So nothing changes here: it is still a human who quality-checks the output.

    However, we did go through an extensive evaluation round measuring MT quality in order to finalise the SMT systems to be used in the SUMAT online service, as explained below. The point of this evaluation was to measure MT quality, pinpoint recurrent and time-consuming errors and dedicate time and resources to improving the final system output quality-wise. Retraining cycles of MT systems and other measures to improve system accuracy should also be part of MT system maintenance after system deployment, so that new post-edited data can be used to benefit the system and to ensure that the quality of the system output continues to improve.

    Q9: How do you intend to measure the quality/accuracy of SUMAT?



    We have designed a lengthy evaluation process specifically to measure the quality and accuracy of SUMAT. The first round of this evaluation was focused on quality: we asked the professional translator/subtitlers who participated to rank MT output on a 1-5 scale (1 being incomprehensible MT output that cannot be used, and 5 being near perfect MT output that requires little to no post-editing effort), as well as annotate recurrent MT errors according to a typology we provided, and give us their opinion on the MT output and the post-editing experience itself. The results of this evaluation showed that over 50% of the MT subtitles were ranked as 4 or 5, meaning little post-editing effort is required for the translations to reach publishable quality.

    At the second and final stage of evaluation, which is currently underway, we are measuring the benefits of MT in a professional use case scenario, i.e. checking the quality of MT output indirectly, by assessing its usefulness. We will thus measure the productivity gain (or loss) achieved through post-editing MT output as opposed to translating subtitles from a template. We have also planned for a third scenario, whereby the MT output is filtered automatically to remove poor MT output, so that translators’ work is a combination of post-editing and translation from source. One of the recurrent comments translators made during the first round of evaluation was that it was frustrating to have to deal with poor MT output and that there was significant cognitive effort involved in deciding how to treat such output before actually proceeding with post-editing it. We concluded it was important to deal with such translator frustrations, as they may have a negative impact on productivity, and have designed our second round of experiments accordingly.
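
    (Another aside from me rather than part of the answer: the third scenario amounts to routing each subtitle one of two ways based on a confidence estimate. A minimal sketch might look like the following, where score() merely stands in for whatever confidence measure the real system would provide.)

        def score(mt_subtitle):
            """Stand-in for a real MT confidence estimate (fixed value here)."""
            return 0.8 if mt_subtitle else 0.0

        def split_work(template, mt_output, threshold=0.5):
            """Route each subtitle to post-editing or to translation from the template."""
            post_edit, from_scratch = [], []
            for src, mt in zip(template, mt_output):
                if score(mt) >= threshold:
                    post_edit.append((src, mt))         # translator post-edits the MT
                else:
                    from_scratch.append(src)            # translator works from the template
            return post_edit, from_scratch

        edits, fresh = split_work(["Hello.", "How are you?"], ["Bonjour.", ""])
        print(len(edits), "to post-edit;", len(fresh), "from scratch")   # 1 to post-edit; 1 from scratch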

    Q10: Are there any examples of translated subtitles created by SUMAT?



    Yes, the SUMAT demo is live and can be found on the project website (www.sumat-project.eu). Users can upload subtitle files in various subtitle formats and they will be able to download a machine translated version of their file in the language(s) they have selected. We have decided to limit the number of subtitles that can be translated through the demo, so that people do not abuse it and try to use it for commercial purposes.

    Q11: Does SUMAT have a role to play in Same Language Subtitles for Access? (Subtitles for the Deaf and HOH)



    No. SUMAT is a service that offers automation when one needs to translate existing subtitles from one language to another and presupposes the existence of a source subtitle file as input.

    Q12: You recently gave a workshop for SUMAT at the Media For All conference, can you tell us a little bit about the results of the workshop?



    The workshop at Media for All was the culmination of our dissemination efforts and the first time the SUMAT demo was shown to professionals (other than staff of the subtitling companies that are partners in this project). These professionals had the chance to upload their own subtitle files and download machine-translated versions thereof. There were approximately 30 participants at the workshop, who were first briefed on the background of the project, the way the MT systems were built and automatically evaluated, as well as on the progress of our current evaluation with professional translators.

    In general, participants seemed impressed with the demo and the quality of the MT output. Representatives of European universities teaching subtitling to their students acknowledged that post-editing will have an important role to play in the future of the industry and were very interested in hearing our thoughts on it. We were also invited to give presentations on post-editing to their students, some of which have already been scheduled.

    Q13: Where can readers go to find out more about this project?



    The best source of information on the project is the project website: http://www.sumat-project.eu. We have recently re-designed it, making it easier to navigate. One can also access our live demo through it and will eventually be able to access the online service itself.

    Q14: Is there anything readers can do if they wish to get involved in the project?



    Although the project is almost complete, with less than half a year to go, contributions are more than welcome both until project end and beyond.

    Once people have started using the live demo (or, later on, the service itself), any type of feedback would be beneficial to us, especially if specific examples of files, translations, etc. are mentioned. We plan to continue improving our systems’ output after the end of the project, as well as add more language pairs, depending on the data and resources we will have available. As we all know, professional human evaluation is time-consuming and costly, so we would love to hear from all translators that end up using the service – both about the good and the bad, but especially about the bad, so we can act on it!

    Q15: If you could translate any subtitling of your choice using SUMAT what would it be?

    Obviously MT output is most useful to the translator when its accuracy is at its highest. From our evaluation of the SUMAT systems so far, we have noticed trends that indicate that scripted material is translated with higher accuracy than unscripted material. This is something that we are looking at in detail during the second round of evaluations that are now underway, but it is not surprising. MT fares better with shorter textual units that have a fairly straightforward syntax. If there are a great many disfluencies, as one typically finds in free speech, the machine may struggle with them, so I’m expecting our experiments to confirm this. I suppose we will need to wait until March 2014, when our SUMAT evaluation will be completed, before I can give you a definite answer to this question.

    Thanks again to Yota for agreeing to the Q&A and for providing such informative answers.

     
    • Patricia Falls 2:00 pm on May 13, 2014 Permalink | Reply

      I train people on a steno machine to do realtime translation. I would like to discuss our product with you and how we can become involved in training.


    • iheartsubtitles 2:10 pm on May 13, 2014 Permalink | Reply

      Hi Patricia, the SUMAT project is about machine translation of existing subtitles, with post-editing. The system does not work with live/real-time subtitling, so I am not sure the two are compatible? I suggest contacting them via the website listed in the article for further information.


  • iheartsubtitles 10:09 am on September 12, 2013 Permalink | Reply
    Tags: Technology

    SMPTE Internet Captioning Webcast 

    This webcast posted by the Society of Motion Picture and Television Engineers (SMPTE) is a good introduction to current US captioning regulatory requirements and new requirements due to come into play in the USA. All US broadcasters must caption online content that has previously been broadcast on linear TV by the end of this month. This includes pre-recorded content that has been edited for broadcast online. By March 2014, this also applies to live and near-live content. Whilst the webcast is US-centric, the technical problems and solutions it discusses around captioning formats for online and multi-platform broadcast content are relevant to all global broadcasters. The webcast covers both pre-recorded/block-style captioning as well as live subtitling. It is captioned and you can view it below:

     
  • iheartsubtitles 7:34 pm on July 31, 2013 Permalink | Reply
    Tags: Technology

    Off-screen cinema subtitles 

    Readers who are keeping up to date with subtitling solutions and projects might be pleased to know that the former Indiegogo project for a cinema subtitling solution is now being developed by GeoJaX Ltd and Mystery Technology LLP.

    Entrepreneur George Georgiou and inventor Jack Ezra have teamed up to form “GioJaX Ltd” and “Mystery Technology LLP”, which will develop an “Off-Screen Cinema Subtitle System” for the deaf and hard of hearing. The development work will be carried out in Sri Lanka, China and the UK over the coming months, with a fully working system hopefully being tested in October/November 2013. The Off-Screen Cinema Subtitle System uses a special display under the movie screen which is invisible to the general audience; put on a pair of special lightweight glasses and the subtitles become viewable to anyone in the audience wishing to see them.

    I was lucky enough to be shown a prototype of the technology last week. In a demo already built on a laptop, I was shown what appeared to be a blank screen. However, as soon as I put on a standard pair of 3D glasses (the same kind worn for 3D movies at the cinema now) I could see letters and numbers displayed across the screen. It was great to see a real working example of the technology I had heard described as a potential way of displaying subtitles at the cinema that are only viewable to those that wish to see them. It is the closest experience I have had to open captions using a different technology; it still gives the same feel as using open captions or switching on the subtitles on the television or on a DVD. The text was easy to read and the glasses comfortable to wear. The next step will be for the company to build a fully working system and get feedback. I for one will be keeping an eye on the progress of this project. And I am not the only one – industry professionals such as Regal in the USA, and in the UK the Cinema Exhibitors Association and Cineworld, have offered their help to the new venture in the form of feedback, testing and promotion of the technology.

     
  • iheartsubtitles 9:15 pm on July 9, 2013 Permalink | Reply
    Tags: Technology

    CEA – UK published report on cinema subtitling technology and my experience in the USA 

    Back in March 2013, some people, including myself, were lucky enough to take part in a trial to test some personalised technology that provides subtitles in cinemas. The trial took place in London and was organised by the Cinema Exhibitors Association (CEA), who have now published the results to those that attended. I have summarised the main points below:

    The project was designed to gather:

    • Findings from a demonstration of four of the leading CC technologies for interested industry partners;
    • Initial and headline structured feedback from a small sample of people with varying degrees of hearing loss on their experience of using the systems;
    • And preliminary feedback from an operator perspective on the potential management, practical and technical considerations around each of the systems.

    The suppliers and products involved were:

    • Doremi – Captiview for CC, and Fidelio for audio description (AD) and hearing assist.
    • Sony – Entertainment Access Glasses (SEAG) for CC and connecting headphones for AD and hearing assist.
    • USL – Captionwear glasses and screens for CC and connecting headphones for AD and hearing assist.

    While the AD functionalities of the products were part of the industry showcase, the audience screenings concentrated solely on CC, that being the technology which offers something completely new for customers.

    For more details read the CEA’s published report. Now that this detail has finally been released I can talk more freely about the device I got to test. I was given the Captiview device to watch the movie Wreck-It Ralph. The good thing about it was that the subtitles worked and were pretty accurate, with the exception of a few letters dropping off the ends of words at the end of a line on the screen. They were easy to follow for someone used to reading subtitles, but trying to watch the action on screen at the same time is much harder, and so the movie experience itself was not as immersive as it would’ve been, through no fault of the movie itself. More recently, whilst on holiday in the United States, I got to use the device again in a real screening for Iron Man 3:

    I got a few strange looks from some people in the cinema who clearly hadn’t seen this device being used before, but that didn’t bother me. What did bother me was the fact that I couldn’t get the device positioned correctly. Why? Because the device is supposed to sit in the cup holder on your seat. Except in this cinema it didn’t fit correctly. This made it an even worse experience than during the trial, where the device was fitted for me, and correctly, before I sat in my seat. Again, whilst the subtitles were accurate, it’s the practicality of using the device that left me feeling a bit disheartened by it all. For a start, you collect a device at the point that you purchase the ticket, and then have to carry it around. It is not very heavy but it is bulky. Try juggling that whilst also purchasing popcorn, and then what if you want a toilet visit prior to being allowed into the cinema to take your seat? What do you do with the piece of kit you are carrying around? (I hope the cinemas that provide these devices consider hygiene and that they are wiped clean after each use.)

    Back in the UK, open-subtitled cinema screenings have been a bit of a mixed bag. I failed to get to see Star Trek Into Darkness with subtitles because the advertised subtitled screening I wanted to go to got cancelled. More recently, though, I did get to successfully attend a subtitled screening of Man Of Steel. As a lifelong fan of Superman, this was actually the first ever Superman-related subtitled cinema screening I have attended. To be able to hear all the dialogue at the cinema, rather than waiting for the movie’s DVD release and turning on the subtitles months after struggling to watch it without them, is a complete joy and something I suspect hearing people take for granted (I can’t tell you the number of movies I’ve re-watched on DVD with the subtitles on after their cinema release, only to find myself thinking ‘Oh, so that’s what they said, now I get it!’).

    Will the UK see personalised subtitling solutions in cinemas? The CEA don’t have an answer for that just yet. Since the feedback from the trials was mixed and sometimes conflicting, I hope that there are more trials to come before committing to the right technological solution. The CEA have said that if/when there is further progress they will make this known, so keep an eye on the CEA website.

     
    • Richard Turner 10:19 pm on July 9, 2013 Permalink | Reply

      Great blog. I do feel that in the future personalised captioning is the only way that cinema will become fully accessible. However, it is a work in progress.


  • iheartsubtitles 3:11 pm on April 29, 2013 Permalink | Reply
    Tags: Technology

    Cinema subtitling technology – could 3D be the better solution? 

    To quote from my previous blog post:

    The UK film industry is currently investigating recently-developed solutions that could improve the cinema experience further for people with hearing loss. For example, ‘personal’ inclusive caption/subtitle solutions are now available from Sony, Doremi and others that, instead of projecting captions on to the cinema screen, display them on wearable glasses or small, seat-mounted displays. So, any ‘regular’ cinema show could also be a captioned show. These solutions are already being rolled out in the US and Australia.

    It’s hoped that for audience members with hearing loss, as well as cinema exhibitors and film distributors, the convenience of a personal solution, and the vastly increased choice it can offer, will be more favourable than separate, inconvenient, costly on-screen captioned shows.

    SOURCE: i heart subtitles – History of Subtitling and Cinema in the UK

    Now, I was lucky enough to trial some of these ‘personal’ devices, which you can read about in New Subtitling Technology for TV broadcast and the cinema.

    I was hopeful but not massively convinced of the benefits of the personal devices trialled. (When are the CEA going to publish those results?) I was recently alerted to a crowdfunding campaign from a 3D technology specialist who thinks that a better solution can be found. Designed by Jack Ezra, here is his technological solution:

    Indiegogo – Subtitles off screen solution – Please visit this link for more information on the project. I would love to see this project get the funding it needs to move forward. There are several reasons why in principle I favour this idea over other subtitling/captioning ‘personal’ devices solutions:

    1) Unlike a second screen or other glasses devices where the subtitles appear on the lenses, this 3D solution appears to best replicate the look and feel and therefore hopefully the more pleasant and relaxed experience of watching open subtitles.
    2) The glasses are similar to 3D movie glasses. These are much less heavy, bulky and uncomfortable. Similarly, I am assuming you could dispose of them or get a new pair. With other glasses – these will have been used by others before you at other screenings – you just have to hope they are clean and no one sneezed over them! With these 3D glasses you can keep your own, or get a brand new pair on your visit.
    3) Stigma. No one likes to admit it, but some people will not order technology like a second screen or subtitle glasses because they are immediately ‘different’ to everyone else in the cinema and may feel embarrassed about their hearing loss. However, there is nothing embarrassing about asking for 3D glasses. Anyone might be asking for them; it is a ‘normal’ request. Wearing these, there is no stigma attached, as people are used to seeing people wearing them at the cinema anyway.

    It seems I am not alone in liking this idea. I received this message from Jack, which is a fitting last word for this blog post:

    A word from Inventor – Jack Ezra.

    Firstly, a huge “THANK YOU” to all of you who have come back to me with these kind words….
    “Jack, Congrats – what a terrific Idea this is” and “Jack, you’re so clever”, and
    “Jack, this could really change the face of cinema” & “I love this idea so much – can’t wait to see it”.

    While I really appreciate all these kind words, this technology will not succeed unless we raise the money. Below is a link to Indiegogo, the crowd-funding site of our choice – this is like KickStarter.
    It is here you can go on and contribute some money. Just a few pounds each, from a lot of people will build up the necessary funds for the prototype. Then we can start to put it into the cinemas worldwide.

    INDIEGOGO – Off-Screen Cinema Subtitle System

     
    • Me 3:51 pm on May 13, 2013 Permalink | Reply

      I am a Deaf person and I have tried those “glasses” at the movies. I do not like them AT ALL. They are uncomfortable all around. I find I have to keep my head straight and I cannot lean my head on the movie seat. My neck and shoulders become uncomfortable after the movie is over.
      Why can’t we have open captions in the movie theatre? All of us have gotten used to the disabled toilet stall in the public restroom – it seems to be the “norm”. All of us have gotten used to the wheelchair ramps in various places, such as the sidewalks. All of us have gotten used to the “awareness bumps” in front of stores that are set in place for the blind & visually impaired. So, why not subtitles in movie theatres?? Not only would it benefit Deaf people, it would also benefit people who are losing their hearing and would appreciate the opportunity to catch a word here and there, as well as people who are learning the language the movie is set in.


      • iheartsubtitles 4:23 pm on May 13, 2013 Permalink | Reply

        Hi, are you referring to the glasses in which the subtitles appear on the lenses? Those are the only ones that have been trialled in the UK and are available for use in some cinemas in the USA. This 3D glasses solution is different and appeals to me because it would use standard lightweight 3D glasses and the subtitles appear close to the bottom of the screen (and not on the lenses, making it difficult to focus on the movie).

        I too would prefer open captions at all screenings. I do agree cinema managers could do more here, but how do you persuade cinema managers when it digs into profit? It shouldn’t be about the bottom line, but cinemas have to make a profit and they will be reluctant to do anything that hurts this. The best thing you can do to support open captions is to attend as many open-captioned screenings as you can and make cinema managers aware that this is something you appreciate and is vital to you. I try to do this as often as my schedule allows (ironically this is difficult when subtitled screenings are during working hours). I know I am grateful that we even get this option in the UK. No other country has this and I do not want to see it go entirely; I would like the alternatives to be an additional option and not a replacement.


  • iheartsubtitles 10:33 am on April 28, 2013 Permalink | Reply
    Tags: Technology

    History of subtitling and cinema in the UK 

    

The film industry is forever devising new ways to capitalise on technological advancements to attract audiences.

    But back in the 1920s, and on the verge of going bust, Sam Warner, co-founder (with brothers Harry, Albert and Jack) of small studio Warner Bros., introduced some fancy tech that, with the help of jazz singer Al Jolson, unintentionally alienated many film fans for the next 75 years.

    
Before the Movietone sound-on-film system became the industry standard, the short-lived Vitaphone sound-on-disc system was the most hi-tech audio product available. Originally intended to cut costs of live musicians, the 1.0 non-surround system was responsible for the innovative synchronized mix of Al Jolson’s singing, dialogue and music for Warner Bros’ The Jazz Singer (1927).

    
Although it contained few spoken words, and played silently in many cinemas that had yet to be equipped for sound, The Jazz Singer launched the ‘talkies’ revolution, taking $3m box-office (spectacular in those days), putting the US touring stage production of ‘The Jazz Singer’ out of business, and confirming its studio as a major player in Hollywood.

    (Sadly, just before the premiere, Sam Warner died of complications brought on by a sinus infection. He was 40).

    Jolson’s next WB musical, 1928’s ‘The Singing Fool’, was an even bigger success (almost $6m) and held the box office attendance record for 10 years (eventually broken by Disney’s Snow White and the Seven Dwarfs). Jolson became America’s most famous and highest-paid entertainer of the time.

    So how exactly was the cinema experience ruined for many film fans?

    
The end of the ’20s signalled the end of the silent era as sound and dialogue in movies became standard practice. With ‘talkies’, the essential plot-following device – the caption card – was deemed no longer necessary.

    For people with hearing loss, a cinema visit was suddenly, if unintentionally, no longer enjoyable or accessible. By and large, they stopped going. For 75 years. A major step backwards for equality, inclusion and community integration.

    Which is all the more ironic as Thomas Edison, ‘man of a thousand patents’ and pioneer-creator of the first copyrighted film, was almost completely deaf from an early age. Without captions he wouldn’t have been able to follow many of the new ‘talkies’.

    I often wonder what Edison and Alexander Graham Bell, the two inventors responsible for introducing many of the film, sound and light technologies we take for granted today, would have thought of this ‘talkies’ development, as they chatted over their latest inventions with Étienne-Jules Marey, who was a major influence on all pioneers of cinema, at the Centennial Exhibition in Philadelphia.

    Of course they could never have had such a discussion – Marey died 25 years before ‘The Jazz Singer’, Bell died 5 years before, and Edison 5 years after. (And, er, the exhibition was held half a century before the film, in 1876…)

    But let’s imagine they were all having a chat over a cappuccino, at the same exhibition, held just AFTER the film’s release. I would expect that they would have been very disappointed at the demise of caption cards.

    A few decades before the release of ‘The Jazz Singer’, Alexander Graham Bell, inventor of the telephone, created the Photophone – a device that enabled sound to be transmitted on a beam of light (the principle upon which today’s laser and fiber optic communication systems are founded).

    Étienne-Jules Marey had combined a camera and a Gatling gun to create a mutant photographic machine-gun/steadicam device, capable of shooting 60fps (more than a century before James Cameron and Peter Jackson attempted HFR).

    Edison came up with the Kinetophone, the first attempt in history to record sound and moving image in synchronization.

    All three pioneers were well aware of the importance of captions – words on screen (or a piece of cardboard).

    Edison – almost completely deaf from an early age – most likely wouldn’t have liked the film. He hated Jazz, preferring simple melodies and basic harmonies, very possibly due to his high-frequency hearing loss.

    Bell had founded and helped run a school for deaf children with his wife, who was also deaf. Caption cards were used to teach the deaf children reading and literacy skills.

    And Marey was a foreigner! (It’s well known that captions/subtitles are beneficial to students studying English as a Second Language).

    Photo of people at the cinema

    Your Local Cinema – lists screenings of subtitled and audio described cinema across the UK

    Fast forward to the end of the century, and reality, when caption cards were re-introduced to UK cinemas in the form of on-screen subtitles. Steven Spielberg, an early investor in the sound company, Digital Theater Systems (DTS), championed its new cine audio format – a digital sound-on-disc system – and encouraged cinemas to install it ahead of his highly anticipated new release, Jurassic Park (1993). A decade later, DTS updated its (by now popular) system to include, alongside music and dialogue tracks, multi-language subtitles and a caption track, enabling cinemas to project synchronised captions directly on to cinema screens.

    
Dolby launched a similar system soon afterwards. Not long after that – probably feeling bad about the Al Jolson episode – cinemas across the UK collaborated with the UK Film Council to install this new ‘access’ technology.

    After 75 years, people with hearing loss could once again enjoy, rather than endure, the cinema experience. Hurrah!

    And, for the first time in the UK, people with sight loss could also enjoy it as an audio description (AD) track – a recorded narration – could also be delivered to wireless headphones. Double hurrah!

    (But sadly, for people with loss of smell, things were not so good. ‘Smell-O-Vision’, introduced in the 1960s, just never caught on).

    As before, Warner Bros. was at the forefront of this quiet revolution in cinema.

    
The first film to utilise the new digital caption/subtitle/AD system was Harry Potter and the Philosopher’s Stone (2001). (Steven Spielberg, having played his part in re-introducing captions to cinema audiences, had declined an offer to direct – he’d done enough).

    Today, another decade later, UK film distributors routinely ensure the provision of caption/subtitle/AD tracks for most popular titles. More than 1,000 have been produced to date.

    Almost every UK cinema is now accessible in that all d-cinema systems have built-in ‘access’ facilities and can broadcast caption/subtitle/AD tracks. Every week hundreds of cinemas present a total of around 1,000 shows with on-screen captions. Thousands more shows are screened with audio description, received via personal headphones.

    
But as the number of shows and the audience have grown – by around 20% year-on-year – the current UK caption format has inevitably become problematic. Since captions in UK cinemas are on-screen, inconvenient and costly separate shows are necessary, segregating people and restricting the choice of films and showtimes that a cinema can provide. A limited audience, combined with limited opportunities to attend, ultimately results in limited box-office returns.

    
For some time, the industry has wrestled with the conundrum of how to provide an economically viable service to people with hearing loss – how to strike a good balance between what the public wants and what it is reasonably possible to provide.

    
Digital cinema brings with it digital participation – inclusion – which is just as important as digital infrastructures and digital content.

    For the UK film industry, a commitment to diversity and inclusion is not just a social and legal responsibility. It aims to ensure that cinema is accessible to all, regardless of age or ability, by understanding and catering for audiences with physical or sensory impairments, and their diverse technological needs.

    The UK film industry is currently investigating recently-developed solutions that could improve the cinema experience further for people with hearing loss. For example, ‘personal’ inclusive caption/subtitle solutions are now available from Sony, Doremi and others that, instead of projecting captions on to the cinema screen, display them on wearable glasses or small, seat-mounted displays. So, any ‘regular’ cinema show could also be a captioned show. These solutions are already being rolled out in the US and Australia.

    It’s hoped that for audience members with hearing loss, as well as cinema exhibitors and film distributors, the convenience of a personal solution, and the vastly increased choice it can offer, will be more favourable than separate, inconvenient, costly on-screen captioned shows.

    It is hoped that within the next few years, audiences with hearing or sight loss will be able to enjoy the big-screen experience as never before.

    As Al Jolson (who really should be forgiven by now) famously said: “I tell yer, you ain’t heard nothin’ yet!”

    With thanks to Your Local Cinema for this article. Posted with permission.

    Stay tuned for another follow-up post very shortly to this on subtitling technology for the cinema.

     
    • Mikel Recondo 2:18 pm on April 29, 2013 Permalink | Reply

      In Spain, there’s a tradition of dubbing all foreign films into Spanish. It dates back to the dictatorship of Franco, which in 1940 established that all movies should be dubbed into Spanish.

      Then the dictatorship ended and some cinemas chose not to dub the movies, running them instead in their original languages with subtitles. Nowadays, these are the only cinemas that I know of that offer any kind of accessibility services.


    • markbutterworth 7:58 pm on June 30, 2014 Permalink | Reply

      Reblogged this on Mark Butterworth learning journey BSL level 3 and commented:
      History of Subtitles


  • iheartsubtitles 2:56 pm on February 4, 2013 Permalink | Reply
    Tags: Technology

    Closed Captioning to learn a language – old tech meets new tech 

    I have blogged before about whether using subtitles to learn a second language is a good or bad thing, but recently I came across a nifty project that aims to help people do just that. Easy Way Language Center has hooked up a computer to capture the closed captions of Brazilian TV stations. The computer then uses Google Translate to translate the captions into another language of your choice. Click on the image below to watch the video explaining how this works.

    Image - Easy Way Subtitles

    Easy Way Subtitles uses Closed Captioning [CC] and Google Translate

    Image - Easy Way Subtitles

    Easy Way Subtitles – A computer hooked up to the TV captures all the closed captioning to put into Google Translate

    Image - Easy Way Subtitles on iPhone

    Easy Way Subtitles – The translation is streamed to a second screen

    The Easy Way Subtitles website allows you to select the TV channel and the language you wish to translate the captions into, and you can watch the subtitles stream back to you on the web page, although without knowing what is airing on the channel at the time, it is difficult to apply any context to what you are reading. Still, I like the use of technology here. What do you think? A good way to learn, or not enough quality control to avoid translation mistakes? After all, the captions themselves in the original language might not be correct in the first place, though of course they should be.
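
    For readers curious how such a pipeline hangs together, here is a minimal sketch of the general idea: read closed-caption lines as they arrive, translate each one, and push the result to a second screen. The capture source, the translate() call and the publish() target are all placeholders of my own; the project’s actual implementation is not described in this much detail here.

        import time

        def capture_captions(source):
            """Yield caption lines as a CC decoder emits them (stubbed with a list here)."""
            for line in source:
                yield line

        def translate(text, target_lang):
            """Identity stub; a real system would call a machine translation service here."""
            return text

        def publish(text):
            """Placeholder: push the translated line to the viewer's second screen."""
            print(text)

        def run(source, target_lang="en"):
            for caption in capture_captions(source):
                publish(translate(caption, target_lang))
                time.sleep(0.1)                         # pace roughly with the broadcast

        run(["Boa noite.", "Bem-vindos ao programa."])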

     