Recent Updates

  • iheartsubtitles 10:04 pm on April 10, 2015 Permalink | Reply

    Live Music & Live Lyrics & Live Subtitles 

    Last month I did something I’ve never done before, and I don’t think many others will have done it either. What was it? I attended a live music gig with live subtitles! The gig was called Club Attitude. It was organised by Attitude is Everything and the live subtitling was provided by StageTEXT.

Having been to several StageTEXT captioned plays and live subtitled talks, I was pretty confident that the quality of the live subtitles would be excellent. But I also know that high-quality subtitling doesn’t just happen without a lot of prep, a lot of technical set-up and, of course, skilled subtitlers.

I am sure that this gig had its challenges, especially considering it hadn’t been done before, but I was really pleased to see that even for this first ever subtitled gig, the access worked well. I felt for the stenographer wearing their headphones, listening intently in order to deliver the lyrics in a time-accurate manner in what was already a musically noisy environment. Talk about powers of concentration!

The subtitles were displayed on both sides of the stage: at a high height on the right so that the screen could still be seen at the back of the venue (as per the Vine above), and on a screen at a low height on the left side of the stage in case wheelchair users also wanted to read the captions throughout the gig. I should also point out there was a signer on stage translating the lyrics into BSL for BSL users. None of this got in the way of the band members performing. It was lovely to see that full access had been thought of and was indeed being provided, including an accessible venue (if only this were the norm, I wouldn’t even point it out in a review like this, but sadly it is not always the case).

I’d love to have known what the artists performing at the gig thought of the live subtitles (although they cannot really see them from their position on the stage). But if they are reading this article, or any other bands out there are thinking about captioning or subtitling their gigs: an overlooked but massive benefit isn’t just the lyrics. I shall try to explain:

Because the subtitling provided at this gig was live, the dialogue and conversation that the bands had with the audience was also subtitled. I am talking about the intro and chat between songs: “Hello everyone, thanks for coming,” etc. That might not seem important, but what if you happen to be talking to the audience about where they can buy your music or your merchandise? Ordinarily this information is lost on me. The number of gigs I’ve been to where I can enjoy the music (because I’ve listened to the songs over and over and looked up the lyrics on the internet) but cannot understand any of the talking is, well, pretty much all of them, unless a hearing friend confirms what’s being said. Even if I am close to the stage, I can’t lip-read you – your microphone is in the way. And this means you’ve lost communication with me, and a connection. What I often hear is something like, “And so fdfgddfas this is our next song that dfawesfasdf and it’s called dfaefavdfa.” What this means is, I never catch the song title, so if I like the song, I can’t go home, search the title online, listen to it again, and, you know, maybe buy it!

So, we know live subtitling of music can be done – why isn’t it done more often? I do hope we have got rid of the misconception that deaf and hard of hearing people are not music lovers. I can relate to an awful lot written in this great article from @ItsThatDeafGuy, especially the bit about getting the lyrics from Smash Hits magazine and subtitled music on TV: Being Deaf Doesn’t Mean You Don’t Care About Music.

    I too have blogged several times already on this subject including my frustration that music DVDs seem to be exempt from requiring subtitles, and how having access to subtitled music via TV was hugely important to me as a teenager. And it still is. Search the music tag for more articles.

    And who doesn’t love knowing what the lyrics are? The way we consume music has changed drastically in the last 20 years, and technology is providing new ways to get the lyrics. Recently the music streaming service Spotify launched lyrics integration and the company has been retweeting the positive feedback it is getting about it.

    I also can’t help but notice that the trend of official lyric videos being released by music artists isn’t going away. And that’s just fine by me because a probably unintentional side effect is that it gives me access to the song and allows me to consume the music in my preferred way by reading the lyrics alongside listening to the song. Arena and stadium artists have started to incorporate this into some of their video screen stage graphics during concerts. And naturally I love this.

Given all of these trends, maybe this reviewer of Club Attitude is right: “Perhaps the most extraordinary thing is that this gig night does not feel extra-ordinary at all.” Now that would be something.

  • iheartsubtitles 3:13 pm on February 19, 2015 Permalink | Reply
Tags: ASR, captions

#withcaptions: Fixing YouTube’s auto-captions

Last month some high-profile vloggers, including Rikki Poynter and Tyler Oakley, on the popular video sharing site YouTube got the attention of some mainstream press with a campaign that started with the hashtag #withcaptions. It’s fantastic to see others campaigning and educating their audience as to the importance of not just captioning your online videos but captioning them accurately. I won’t repeat what mainstream media coverage reported, but if you missed it or have no idea what I am talking about, click on the links below:

Animated gif of 1980s Apple commercial of a kid at a computer looking impressed and giving a thumbs up to the camera

    To anyone who accurately captions their online videos. Good job. Thank you.

It is so refreshing to get some positive mainstream press coverage about the importance of subtitling, and it’s even more brilliant that the message is being spread by individuals outside of the subtitling, captioning or SEO industries. To all of you doing this, or who have perhaps acted on this information and are now accurately captioning your own YouTube videos – a massive thank you from me.

As most of you reading should already know, YouTube uses automatic speech recognition (ASR) technology to automatically create captions from the audio track of video content uploaded to its site, but these are very rarely, if ever, accurate. But what if you could fix these to make them accurate, rather than having to start from scratch to create accurate captions? That’s exactly the problem Michael Lockrey, who refers to these as ‘craptions’, aims to solve with nomoreCRAPTIONS. As Lockrey explains:

    nomoreCRAPTIONS is a free, open source solution that enables any YouTube video with an automatic captioning (‘craptioning’) track to be fixed within the browser.

    Craptions is the name coined by me for Google YouTube’s automatic craptioning – as they don’t provide any accessibility outcomes for people who rely on captioning unless they are reviewed and corrected. As this rarely happens and as Google rarely explains that they haven’t really “fixed” the captioning accessibility issue, we have a huge web accessibility problem where most online videos are uncaptioned (or only craptioned which is just as poor as no captioning at all).

    If you don’t believe me, then look at Google YouTube’s own actions in this space. The fact that they don’t even bother to index the automatic craptioning speaks volumes – as their robots hunt down pretty much everything that moves on the internet. So it’s obvious from these actions that they don’t place any value in them at all when they are left unmodified by content creators.

    There is also no way to watch the automatic craptioning on an iOS device (such as an iPhone or iPad) at present, unless you use the nomoreCRAPTIONS tool.

Lockrey, who is profoundly deaf, has taught himself web development skills to solve a problem that he feels Google (YouTube’s owner) has largely ignored. This hasn’t been easy: although there’s a huge amount of learning material on YouTube and other platforms, most of it is uncaptioned or craptioned. Lockrey explains:

Previously if I encountered yet another YouTube video that was uncaptioned or craptioned, I would often spend my own money and invest personal resources (my own personal time, effort, etc) in obtaining a transcript and / or a timed text caption file. This usually also involved taking a copy of the YouTube video and then re-uploading the video onto my own YouTube channel so I could add the accessibility layer (i.e. good quality captioning). Quite often I would end up being blacklisted by Google YouTube’s automated copyright systems, when I was only trying to access content that was freely and publicly made available by the content creators on YouTube and was not trying to earn revenue from the content (via ads) or any “funny” business, etc. I knew that there simply had to be a better way.

Screen grab of the nomoreCRAPTIONS homepage

nomoreCRAPTIONS lets you edit YouTube’s auto-captioning errors

    With nomoreCRAPTIONS you simply paste in a YouTube URL or video ID and it instantly provides you with an individual web page for that video where you can go through and fix up the automatic craptioning (where there is an automatic craptioning track available).

At the moment it’s a very simple interface, and it is ideal for shorter YouTube videos of 4 or 5 minutes in duration (or less). It works in all languages that Google supports on YouTube with automatic craptioning. Here’s an example of the Kim Kardashian Super Bowl commercial, which is short and sweet.

Screenshot showing edited auto-captions via the nomoreCRAPTIONS tool.

    You can modify the text of the auto-captions to correct any errors via the yellow box on the right.
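To make the idea concrete, here is a minimal sketch (in Python, and purely my own illustration – not nomoreCRAPTIONS’s actual code) of the two steps the tool wraps up in a web page: pulling a video ID out of a pasted URL, and correcting the caption text while leaving the machine’s timings untouched. The helper names and the SRT-style track format are assumptions for illustration; YouTube serves captions in several formats.

import re

def extract_video_id(url_or_id: str) -> str:
    """Accept a bare 11-character video ID or a typical YouTube URL."""
    match = re.search(r"(?:v=|youtu\.be/)([\w-]{11})", url_or_id)
    return match.group(1) if match else url_or_id

def fix_captions(srt_text: str, corrections: dict[str, str]) -> str:
    """Apply text corrections to an SRT-style track, keeping cue timings intact."""
    fixed = []
    for line in srt_text.splitlines():
        # Cue numbers and timing lines ("00:00:01,000 --> 00:00:03,000") pass through.
        if "-->" in line or line.strip().isdigit():
            fixed.append(line)
            continue
        for wrong, right in corrections.items():
            line = line.replace(wrong, right)
        fixed.append(line)
    return "\n".join(fixed)

print(extract_video_id("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))  # dQw4w9WgXcQ

The point is exactly this separation: the ASR track’s timings are usually fine, and it is only the words that need a human.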

    Lockrey explains:

There’s very little learning curve involved and this was intentional: whilst Amara and DotSub have great solutions in this space, they also have quite a substantial learning curve, and I wanted to make it as easy as possible for anyone to just hop on and do the right thing. One of the biggest advantages of the tool is that the corrected captions can be viewed immediately once you have saved them. This means it’s possible for a Deaf person to watch a hearing person fix up the craptions on a video over their shoulder and see the edits in real-time!

    We’ve even had a few universities using the tool as there’s so much learning content that is on YouTube, and this is simply the easiest way for them to ensure that there’s an accessible version made available to the students that need captioning – without wasting time on copyright shenanigans etc.  I’ve also been using it as a great advocacy tool – it’s so easy to share corrected captions with the content creators now and hopefully we can bridge that awareness gap that Google has allowed to fester since November 2009.

nomoreCRAPTIONS is still very much in the early development stage and there is more to come. The next step is a partnership with #FreeCodeCamp to help with rolling out improvements and new features in the very near future. This includes looking at other platforms such as Facebook and Vimeo videos as part of the next tranche of upgrades, as more and more platforms cross over to HTML5 video.

Lockrey is keen to get as much user feedback as possible, so what are you waiting for – try the tool for yourself. For more information, please contact @mlockrey.

    And when you’ve done that, you might also want to read: OMG! I just found out there’s only 5% captioning* on YouTube.

  • iheartsubtitles 4:54 pm on December 22, 2014 Permalink | Reply
Tags: production

Accessible filmmaking, or: what if subtitles were part of the programme?

I was prompted to write this blog post by a recent tweet from director Samuel Dore, who bemoaned the fact that film directors and distributors seem to ‘moan’ about the cost of subtitling content:

And I’ve seen tweets from others with comments of a similar nature. This is a tricky topic, because it would be wrong to label every individual or company out there as having this belief or attitude. However, it’s a theme I’ve seen repeatedly discussed at access and language conferences this year. That’s a good thing – it means it’s recognised as a potential issue for some companies or individuals, and others in the same industry are challenging this assumption and trying to change it. At the 2014 CSI Accessibility Conference, Screen Subtitling’s John Birch asked the question “What if subtitles were part of the programme?” He pointed out that in his opinion funding issues are still not addressed. Subtitling is still not a part of the production process and is not often budgeted for. Broadcasters are required to pay subtitling companies, and subtitling companies are under continued pressure (presumably to provide more for less money). It is a sad fact that subtitling is not ascribed the value it deserves.

I would also argue that there is some lost opportunity with the current Ofcom Code on Television Access Services, which gives new TV channels a one-year grace period: regardless of audience reach, a TV channel less than one year old is not required to subtitle/caption any volume of its output at all. Whilst I understand the cost of doing so might be considered a barrier to even launching the channel in the first place, the problem is that it once again promotes the thinking of not budgeting for subtitling/captioning from the start of the business process. So two or three years down the line, when the grace period is over, the risk is that it becomes an additional cost that the channel has not budgeted for, and could be perceived as a hindrance or ‘punishment’ rather than something positive that adds value for the channel and its viewers.

The same is also true for translation subtitling. At the 2014 Languages & The Media Conference, Pablo Romero-Fresco gave this statistic: subtitling and translation make up 57% of the revenue generated from English-speaking movies, but translation subtitling only gets 0.1% of the budget. He argued that there needs to be a shift in the production process of filmmaking. His suggestion is that film production should recognise and create the role of a Producer of Accessibility, who is involved before the final edit is locked.


    Sherlock – text message – on screen typography

He observed that the text and typography effects seen in recent years in the BBC’s Sherlock and Netflix’s House of Cards (and many, many more), which use text on screen as part of the storytelling and are created in post-production, should also be integrated into this role. I too have observed the increase in recent years in the use of on-screen typography as part of the storytelling process. It’s also being widely used in music videos. For lots of examples of kinetic typography, be sure to check out this Vimeo channel.

Romero-Fresco repeated this vision and idea at the Future of Subtitling Conference 2014. You can read more in-depth information in the Journal of Specialised Translation. I’ve also collated further tweets and information on this topic at Storify: Why subtitles should be part of the production process.

I think it’s a really interesting idea. I also think that it will require a monumental shift for this to happen in the industry, but never say never. What is good is that, certainly between broadcast TV production companies and subtitling companies, collaboration of a sort is happening. Information and scripts are shared well in advance so that subtitlers can prepare as much as possible ahead of broadcasts. Clearly, Romero-Fresco’s vision is much more integrated than that.

Currently, for broadcast TV that is licensed under Ofcom, the responsibility for access and the provision of subtitling lies with the broadcaster/TV channel. If the creation of subtitles and captions were implemented wholly into the production process, should the provision of subtitling then lie solely with the production company?

    At the moment it would appear that the responsibility shifts between the two depending on a number of factors:

1. Regulation, if there is any, and who is considered responsible for providing subtitles.
2. The production company and/or the distribution company making the content (some will provide subtitles, some will not, and a broadcaster may have bought programmes from either one of these, or they may be one and the same thing).
3. The country broadcasting the content (what language do you need subtitles in, and how many languages will a production company be prepared to produce?).
4. The method by which content is viewed (digital TV, satellite, cable, online, download, streaming subscription, pay-per-view).

It really shouldn’t be complicated, but there is no denying that with all these variables it is. A lot of the above is complicated further by distribution rights, which is another topic entirely. I do like the idea a lot, though, as it has the potential to simplify some of the above. I also think production companies would benefit greatly from the knowledge and expertise that translation and subtitling companies have gained over years of experience as to the best methods of achieving collaboration and integration. What do you think?

    • Claude Almansi 11:08 pm on December 22, 2014 Permalink | Reply

Thank you, Dawn: so many creative proposals in your post. It reminded me of a tutorial that Roberto Ellero made for the Italian public administration in 2009, entitled rather sternly – well, due to the target audience – “Accessibilità e qualità dei contenuti audiovisivi” (Accessibility and quality of audiovisual content). It’s at https://www.youtube.com/watch?v=wy34n09tvKo, with Italian captions and English subtitles (1). I think you might agree with the part from 1:47:

“Every audiovisual product begins with a text, a script, a storyboard, some writing geared towards visualization, which then gets enacted in a series of frames and sequences. Every video always starts from a text and returns to a text (a book being read generates images in our mind, and the reverse path leads to audiodescription, which, in turn, is also a text)…”

      (1) Apologies for the typos in the English subs: I translated them on a train journey with TextEdit and sent them from a station where I got a wireless connection: he needed them urgently for some talk he was to give the following day :)


  • iheartsubtitles 10:08 pm on December 3, 2014 Permalink | Reply

    Access 2020 – Languages & The Media 2014 

    Access 2020 was an interesting panel hosted by Alex Varley at the 10th Languages & The Media conference. The theme was for the panel to discuss what they thought media access might look like in 2020.

Although it is difficult to summarise all of the discussions, Media Access Australia have written a summary of 20 highlights. Below is my two cents.

• Broadcasters have to start to think about what their role is. The industry still needs content producers, and broadcasters are likely to continue to play a big role in producing content. There is likely to be a merging of broadcast and IPTV.
    • In Europe, there is a keen focus to develop in the areas of: Machine Translation (MT), User Experience (UX), and Big Data.
    • Subtitling is becoming a language technology business rather than editorial. Greater levels of interest and innovation in technology will lead to greater quality and lower cost.
    • The industry is aiming for interoperability by 2020 (if not before) to ensure no technological barriers to access exist.
• Two interesting ideas/questions were raised: Will access services start to become a part of the production process for audio-visual content? Will we start to see closed signing?

    How to achieve all of this:

    1. Talk to end users more.
2. Deal with the complexity (interoperability).
    3. Different jobs will be created by new technology, but we still need humans to provide access.
    4. Regulators are not always the answer and can get it wrong. Target the businesses to provide access.
    Animated gif of the hoverboard from the film Back To The Future

    Personally I’m still waiting for the hoverboard.

  • iheartsubtitles 1:54 pm on November 23, 2014 Permalink | Reply

    Smart Technologies, Smart Translations – Languages & The Media 2014 

The theme of this year’s 10th Languages & The Media conference was Smart Technologies, Smart Translations, with panel and audience discussion on:

    1) Machine Translation (MT)

    Sketch of a robot reading a book and writing a translation

    Machine Translation (MT)

I thought it was interesting to find out that two Spanish universities have rolled out automatically machine-translated subtitles for recordings of lectures (I assume with post-editing too). There are numerous companies and academic institutions working and researching in this area, including SUMAT, EU Bridge, and transLectures.

    2) Speech Recognition

    Sketch of someone using speech recognition with really bad recognition output

    Speech Recognition (SOURCE: toothpastefordinner.com)

Speech-to-text technology is already playing an important role in producing real-time or live subtitling and captioning. However, there was also some interesting discussion about text-to-speech technologies and access to audio-visual media via audio description, particularly with regards to speech synthesis. Many expressed a concern that the quality of the machine voice is not, or would not be, a pleasant experience for the end user. However, it was also pointed out that this would not replace current human-based audio description, but would allow bigger volumes of written content, such as news, to be made available to audiences that currently have no access, such as illiterate audiences.

    3) Cloud Technologies

    A graphic illustrating benefits of cloud technology: infrastructure, platform as a service, software as a service

    The benefits of cloud technology.

There is a lot of talk about this everywhere at the moment. Within the context of subtitling and translation workflows I can see a lot of benefits. It is already being done, and whereas previously (or currently) much of the business process management has been based on trust, a cloud-based system can allow for greater transparency and even enforce checks. For example, a translation manager or coordinator might currently communicate with a freelance translator via email, but the actual work is done by someone else on their computer in a different location. The status of a project or job would be unknown, and the coordinator has to trust that it is being worked on and will meet the deadline set. With a cloud-based application, a translation coordinator (or even the client themselves) could potentially log into the application and see the progress or status of a job being completed by anyone, anywhere in the world. It also makes multi-user access easy for multiple translators on a single project if required. And depending on how you build the application, it could also enforce a task or process (for example, only allowing a final subtitle file to be sent after a QC check has been made).
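As a sketch of what such an enforced check might look like in code (the statuses and names here are my own illustrative assumptions, not any particular vendor’s workflow), the application can simply refuse to release a file until the QC step has been recorded:

from enum import Enum, auto

class Status(Enum):
    IN_TRANSLATION = auto()
    AWAITING_QC = auto()
    QC_PASSED = auto()
    DELIVERED = auto()

class SubtitleJob:
    """A job whose status is visible to the coordinator (and client) at any time."""

    def __init__(self, title: str):
        self.title = title
        self.status = Status.IN_TRANSLATION

    def submit_for_qc(self) -> None:
        self.status = Status.AWAITING_QC

    def pass_qc(self) -> None:
        if self.status is not Status.AWAITING_QC:
            raise RuntimeError("QC can only be recorded on a job submitted for QC")
        self.status = Status.QC_PASSED

    def deliver(self) -> None:
        # The enforced check: delivery is impossible until QC has passed,
        # replacing trust with a step the whole team can see.
        if self.status is not Status.QC_PASSED:
            raise RuntimeError(f"Cannot deliver '{self.title}': QC has not passed")
        self.status = Status.DELIVERED

job = SubtitleJob("Episode 12 EN subtitles")
job.submit_for_qc()
job.pass_qc()
job.deliver()          # would raise if QC had been skipped
print(job.status)      # Status.DELIVERED

The design choice is the point: the workflow state lives in the shared application rather than in someone’s inbox, so transparency comes for free.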

  • iheartsubtitles 4:53 pm on November 14, 2014 Permalink | Reply

    The Power of Accessibility: A Personal View from a Life-long Subtitling User 

    Below is a transcript of a presentation I gave to attendees at the 10th Languages & The Media Conference 2014 in Berlin. Thank you to the organisers for asking me to present.

My name is Dawn and I run a blog and Twitter account on subtitling called i heart subtitles. I currently work in broadcast TV, so I think I understand both sides of the story: I both require access subtitling and I understand the challenges that have been overcome in the industry, and the challenges still to face.

I was born in the early 80s in Oxford, England. I must have been in a hurry to get into this world, because I was born three months premature, and my hearing loss can be attributed to my early arrival. However, it was not diagnosed until I was four years old. This meant that I was registered into a mainstream school, fitted with hearing aids, and never encouraged to learn sign language. I instinctively learnt to lip-read, as my parents discovered when I told them what a newsreader on TV was saying when the sound was turned down.

    A screenshot of a BBC Ceefax page (the BBC's version of Teletext)

    BBC Ceefax (the BBC’s version of Teletext)

    Image of a BBC1 Station Ident with 888 indicating subtitles are available.

    BBC1 Station Ident with 888 indicating subtitles are available.

I first discovered subtitles for TV in the mid-90s when my parents acquired a new television with teletext facilities. Programmes that had subtitling were indicated by an 888 caption in the top right-hand corner of the screen; 888 was the Teletext page number which provided the subtitles.

    Still of Neighbours opening credits from the 1990s

    Neighbours opening credits from the 1990s

My earliest memories of subtitles are from an Australian soap opera called Neighbours, and once I discovered them I never switched them off. However – and this may surprise you – at first I found subtitles a bit depressing, because they made me very aware of how much I was not hearing without them, even with my hearing aids in, and I took pride in coping in the hearing world. That quickly faded to loving them, because they made TV viewing so much more relaxing. Hearing aids help a lot, but wearing them is not the same experience as wearing glasses to help poor eyesight. I still unconsciously strain to hear everything, at work and at home, all day. So to be able to rely on text and use my eyes to hear via subtitles is nothing short of amazing.

    A screenshot of Take That on Top of The Pops with subtitles.

    Take That on Top of The Pops with subtitles.

In my teens subtitles gave me a peculiar benefit in that I could recite the words to the latest pop tunes which I had seen on a TV pop music chart show called Top of the Pops. If you’re a 90s music fan, some of you may recognise the above screenshot of a subtitled Gary Barlow from the boyband Take That. I used to be, and still am, a big fan.

In the mid-1990s I don’t remember seeing much, if any, live subtitling for live television programmes. I always had the Teletext 888 subtitles page turned on, and so when programmes without subtitles aired, this would be indicated by a blue 888 icon showing in the top right of the screen. I used to refer to this as the “blue screen of death”, and I hated seeing it because I knew it meant that I was going to struggle to follow what was going on throughout the rest of the programme.

Seeing the “blue icon of death” instead of subtitles left me feeling left out and frustrated, so as a teenager I took my first steps in advocacy by supporting an RNID campaign to increase the amount of subtitled content on major UK channels. This small involvement in an effort from numerous parties led to the UK telecommunications regulator, Ofcom, implementing the Code on Television Access Services.

    Seeing this change come about relatively quickly was the start of me maintaining an awareness of the issues surrounding SDH subtitling.

    By the end of the 90s/early noughties I had left home and gone to university. At that time real-time subtitling for university lectures was not available. However I had access to a note taker which certainly eased the task of picking up what was being said during lectures.

    During this time I also witnessed the move away from analogue to digital broadcast which I am happy to say had no negative impact on subtitling provision so far as I can remember as a subtitling user. It was also a time when VHS was being replaced by the DVD and blu-ray. As I am sure you are all aware, this had a hugely positive impact when it came to being able to access far more content that came with When I left university I got my first job in broadcasting. Both as a consumer and as a broadcast TV employee I have witnessed huge changes in the way we consume content, in particular the rise of video on demand (VOD) services. Some VOD providers are meeting the challenge of providing access to content via SDH subtitles despite no regulatory requirement to do so. For that I am grateful. However the majority do not and I find myself facing a blue icon of death in very different circumstances. I am not going to go too much into why I think this here today. But I have discussed this in various publications and on my blog.

    Screenshot of Twitter Image search results of #subtitlefail

    Twitter Image search results of #subtitlefail

Instead I want to focus on another observation of change. Not only has the way we choose to watch content changed, but the type of content has too. In particular, traditional scheduled linear TV has seen a huge rise in the number of hours of live content. What was once limited to news and sport is now hundreds of hours of live entertainment programming on prime-time TV. Format shows such as Big Brother, The X Factor, The Voice, Strictly Come Dancing and Dancing On Ice, to name a few, are shown as live broadcasts for months at a time.

These shows are subtitled live and, with the rise of social media, attract a huge number of public comments from viewers tweeting online as the show airs. The popularity of this sort of event TV viewing means subtitles, and in particular live subtitling errors, have attracted mainstream attention via social media. If you search the Twitter hashtag #subtitlefail you will see that much of the attention focuses on the hilarity of the errors, and even I do often find them funny. What sometimes gets lost in translation is how lucky we are in the UK to have the volume of live subtitled TV content that we have. I won’t go into too much detail about how it’s done, except to say there seems to be little mainstream awareness as to how live subtitling is produced and why errors occur. Again, you can read more about this on my blog. I am hopeful that the technology behind it will continue to improve, and improve faster in the next 10 years than it has in the last 10.

    Screenshot of BBC Genome project website

    BBC Genome project website

    Taking a pause for a minute, I’d like to reflect on how far we’ve come in the UK. Recently the BBC made available a TV and radio channel listings search facility called Genome.

The first entry for subtitling came in March 1975 on BBC2. This was a subtitled opera programme and so doesn’t really count as access subtitling. The second entry was June 1975: on BBC1, a series called I See What You Mean. The synopsis states: “A series for hearing impaired people and in this episode a studio audience discusses a preview of Ceefax, a new BBC technical development which provides some exciting possibilities for the subtitling of programmes for the deaf and hard of hearing.” It seems so understated!

    The next four entries were again subtitled opera and we have to wait until February 1977 for BBC2’s News On 2 Headlines with subtitles for the hard of hearing.

In March 1980 the momentum begins, when Life on Earth, a David Attenborough programme, is subtitled on BBC1. By the end of 1981, subtitling of BBC programmes for the hard of hearing was slowly starting to expand.

Coming back to the present day: since working in the broadcast industry, seeing the operational workflow involved in getting a TV channel to air, and attending industry events, I’m excited by the business benefits SDH subtitles can bring, particularly with regards to metadata and search engine optimisation. I hope online video on demand providers are listening to this. Again, you can read more about this subject on my blog.

So SDH subtitles – why do they matter? The obvious answer is that it’s the right thing to do, to include all members of society. It should be the only answer required for businesses to act, but oftentimes it is not, and that’s why I’m grateful for regulation in the UK. But to talk about regulation somehow dehumanises what is really the crux of it all.

    You might think giving access to a trashy entertainment programme is trivial but it’s really not. It’s about the positive social impact that this access goes on to have.

You remember earlier on I spoke about Neighbours and Top of the Pops as early subtitle memories. The reason is that these TV shows were a talking point in the school playground, and because they were subtitled I could join in with conversations about Neighbours plot points, or I would know the lyrics of a song that had aired on Top of the Pops and could subsequently bond with a schoolmate over the latest chart music. My life could have been much lonelier.

To give another example, in my first year at university I stayed in halls of residence, but the TV reception in my room was so poor I struggled to get reliable subtitles. Not wanting to miss my favourite soap, I plucked up the courage to ask my neighbour if I could watch EastEnders with her on her TV, which had better reception. She happily obliged and we got to know each other quite well. This friendship, which started over sharing a TV and watching a subtitled programme, has stood the test of time and we remain close friends.

    Such social impacts are hard to measure but are not to be underestimated.

Today I live and work in London and advocate for subtitling in all forms of media. The UK is one of the few countries that has open SDH subtitles in cinemas, and whilst there are limitations with screening times, this is a fantastic result and resource to have. I have also spent the last year volunteering for a charity called STAGETEXT, which provides open captions for live theatre as well as talks and lectures in art galleries and museums across the UK.

I’m excited about where else we can see SDH subtitles being provided. There are various companies offering live subtitles for lectures, talks at conferences, and meetings in the workplace. And you can access this live real-time subtitling through PCs, laptops, tablets, smartphones and even Google Glass.

    There are always improvements to be made and battles to be won with access to audiovisual media via SDH subtitles. Some of the biggest challenges are the direct result of new technologies but I remain hopeful that new technology will also provide some of the solutions.

To anyone who has ever typed, spoken, edited, or QC’d subtitles, or who has built and contributed to technologies that allow me to switch the subtitles on: I thank you for doing so. Thank you for your patience and indulgence. I hope you enjoyed my story.

    Sound label subtitles image saying CHEERS AND APPLAUSE.

    • Michelle 6:13 pm on November 14, 2014 Permalink | Reply

Wow! Turns out we have similar parallels in our lives! Although I was born in the 70s, like you I was probably born deaf due to complications but not diagnosed until 3 years old, and was in mainstream school for most of my education! We got our first teletext TV in 1982 and I can remember all the first milestones of the various programmes – Coronation Street, Grange Hill, Neighbours, Top of the Pops, Auf Wiedersehen Pet, the Olympics, Doctor Who, Wimbledon, and the list goes on!! I remember the first subtitled episode of Neighbours; we were on holiday at the time and we had to get back to the caravan especially to watch it!! Subtitles enhanced my social life at school too, especially in the playground as we talked about last night’s episode!

I do feel that a lot of people who have had subtitles all their lives (and the sheer volume) take it for granted and don’t appreciate how slow progress can be. It took a long time to get to the stage where the terrestrial channels would subtitle a huge amount of their content, and that is exactly how it is for satellite and VOD services. We can’t have it all at once (as much as I would love that!), it has to be gradual. I accept that. What isn’t acceptable is when they seem to make no effort at all and don’t bother with any percentage at all.

      Anyhow, I loved your blog and glad you shared it :)


    • Sabrina 8:35 pm on November 14, 2014 Permalink | Reply

      Great post, Dawn! Congratulations, and thank you for this informative and candid story. :)


    • Claude Almansi 11:35 pm on November 14, 2014 Permalink | Reply

Thank you, Dawn. Your post is very instructive for me as a non-deaf person.
      Best,
      Claude


  • iheartsubtitles 11:57 am on September 19, 2014 Permalink | Reply

MOOCs, Learning and Education

Please don’t think a lack of blog posts over the summer means a lack of interest in the subject of all things captioning and subtitling – far from it. In fact, in an attempt to improve my skills and knowledge, one of the things I’ve been busy with is learning: I took my first steps into the world of MOOCs. In case you are unfamiliar with the term, it stands for Massive Open Online Courses. They are courses that exist online, and the majority consist of a combination of reading material and video lectures.

So you can probably guess what I am going to comment on next: as a hard of hearing person, just how accessible was the video content? Well, it goes without saying that a key factor in choosing a MOOC was not just the subject matter but whether the video and audio content was subtitled or captioned in English. The two MOOCs I took were from FutureLearn and Coursera.*

A screenshot of Coursera's Course At A Glance details: hours of study, length of the course, language of the course, and language of subtitles that are available.

    Coursera – At A Glance section of the page detailing subtitle availability

    A screenshot of FutureLearn's FAQ webpage noting that subtitles are available

    FutureLearn’s FAQ includes information on the availability of subtitles


I am happy to say that it was relatively easy for me to find out whether content on their courses was subtitled. I particularly like Coursera’s clear layout and course summary on a course’s main page, which tells you if subtitles are available. You have to dig a little deeper to find the answer on FutureLearn’s website, but it is there in a detailed FAQ – Technology and Accessibility page. All of FutureLearn’s courses are subtitled in English; I am unsure if that is the case for Coursera.

But… having established that the video content of the course itself is subtitled, why oh why, on both websites, is the introductory video not also subtitled? I have to rely only on the text description of the course to decide if it is the right one for me. This is the only opportunity you have to make me a ‘customer’ and commit to joining your course, so why are you leaving this video out? It’s clear time and effort have been put into recording and editing these videos – so for goodness’ sake make them accessible and add subtitles!

So what was the quality of the subtitling of the course content like, I hear you ask? Well, varied, to be honest. Starting with the good: the errors that did occur in the subtitles for both MOOC courses were not frequent enough to stop me from understanding and completing assignments. The gravest example – where a word error actually changed the meaning of the sentence – came from Coursera. The phrase “Dublin Core” was subtitled as “Double Encore”, and it was a horrible distraction when trying to understand a new topic that I had not studied before. When I pointed this out in the course forums, the staff explained it was likely due to an auto-captioning error and apologised for the mistake. They also fixed the error relatively quickly, allowing me to watch the video again two days later with much less confusion. Whilst it would have been better if the error was not there at all, the speed of the response to fix it meant I didn’t get left behind in my studies. On the FutureLearn course one video used an incorrect word. I have to admit that if it wasn’t for my own lip-reading skills I may not have realised this. When I posted a comment about it, it wasn’t the staff that responded but a very helpful fellow learner, who clarified the correct word for me.

Now for the not so good. Anyone who is a professional subtitler or captioner will know the importance of chunking, character limits per line, and reading speeds. Assuming the same guidelines used for subtitling/captioning pre-recorded content on broadcast TV also apply to pre-recorded educational MOOC videos (I don’t see why not, but please comment if you disagree), these rules were not adhered to. The question is: did it stop me learning? Honestly, no it didn’t (online, I can at least pause and rewind), but it did make retention and understanding harder. The user experience was not as good as it could have been. It is not what I am used to. I would prefer that the level of quality I am used to seeing on broadcast TV and DVD were replicated for MOOC videos.
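For anyone curious what those rules look like in practice, here is a rough sketch of the kind of automatic check a subtitling tool can run against each cue. The thresholds are illustrative assumptions only: broadcast-style guidelines typically allow around two lines of roughly 37 characters and cap the reading speed, but the exact numbers vary from one style guide to another.

MAX_LINES = 2
MAX_CHARS_PER_LINE = 37
MAX_CHARS_PER_SECOND = 15   # a rough proxy for reading speed; guides differ

def check_cue(lines: list[str], start_s: float, end_s: float) -> list[str]:
    """Return the guideline problems found in one subtitle cue."""
    problems = []
    if len(lines) > MAX_LINES:
        problems.append(f"too many lines ({len(lines)} > {MAX_LINES})")
    for line in lines:
        if len(line) > MAX_CHARS_PER_LINE:
            problems.append(f"line too long ({len(line)} chars): {line!r}")
    duration = end_s - start_s
    chars = sum(len(line) for line in lines)
    if duration > 0 and chars / duration > MAX_CHARS_PER_SECOND:
        problems.append(f"reading speed too high ({chars / duration:.1f} chars/sec)")
    return problems

# A two-line cue flashed up for only 1.5 seconds fails the reading-speed check:
print(check_cue(["Dublin Core is a metadata standard", "used to describe resources."], 0.0, 1.5))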

Another issue, on both courses, is that the teacher would sometimes direct you to an external resource such as another website or video not hosted by the MOOC platform itself. And here’s where the access falls down. On both FutureLearn and Coursera the external content contained videos that were not subtitled or captioned, so I was unable to benefit from it. Now, it would be nice if the platforms only allowed external links where the content has been made accessible. However, the decision to include such content is probably at the discretion of the teacher, not the MOOC platform. It’s exactly the same issue we currently see with VOD (video on demand) platforms: they might host the video, but they are not the content providers, with whom it is generally accepted the responsibility for providing captioning or subtitling lies. Did this prevent me from learning and passing tests and assignments? Thankfully no, because for both courses the external content was an optional extra, but it still stands that this current situation does not equate to equal access to content. And that is most certainly a bad thing.

Both MOOC courses that I took allowed students on the course to download a transcript of all videos (Coursera also allows you to download the subtitle file itself). This is a nice tool that all pupils on the course can benefit from. And this brings me to one of the reasons I set up this blog: the belief that subtitles and closed captioning are not just a resource for deaf and hard of hearing communities; they are for everyone. There have been numerous studies over the last 20-30 years suggesting that subtitles and closed captioning can help improve reading skills, literacy and the retention of information. There are a few websites that highlight this; the most comprehensive are Captions For Literacy and Zane Education.

    A photo of a captioned TV, the front cover of the National Captioning Institute - Guide for Using Captioned Television in the Teaching of Reading

    SOURCE: National Captioning Institute – Guide for Using Captioned Television in the Teaching of Reading (1987)

Some of this research has been recognised, and there are resources for teachers in Australia via Cap That!, and in the USA via Read Captions Across America and Reading Rockets. In fact, as far back as 1987 the USA realised the benefits, and the National Captioning Institute published a guide for teachers.

Does anyone know if there are, or have been, similar publications or resources for teachers in the UK? I have been unable to find anything, and given the level of subtitled coverage on TV we now have, it seems a missed opportunity for teachers not to use it as a learning tool and encourage its use.

Going back to MOOCs, the global nature of the internet means it’s recognised that subtitles are needed, given that a course can be taken anywhere in the world and a pupil might need to read subtitles in their own language, or use same-language subtitles to aid their understanding. And everyone stands to benefit from this. I really enjoyed the experience overall and will absolutely consider taking more subtitled MOOC courses in the future.

I haven’t even mentioned the services of CART (Communication Access Real-time Translation) or STT/STTR (Speech To Text Reporting) as an educational tool yet. These services were not available to me as a student, but where they have been made available at talks, meetings, or events, I have absolutely benefited by being better at retaining the information being spoken, simply because I can read every word. I look forward to more research and evidence in the area of real-time live subtitling/captioning access, because again I think all learners could benefit from this, not just those who struggle to hear what is being said.

What has your experience of using captioning or subtitling as an educational tool been?

    *other accessible MOOCs are available.

    • Claude Almansi 12:17 am on September 23, 2014 Permalink | Reply

      Great post, Dawn: thank you.

About the “Double Encore” for “Dublin Core” error in a Coursera lecture that you mention: I think the instructor was mistaken in saying it was likely due to an auto-captioning error. Coursera used to visit appalling automatically generated voice-recognition subs (1) on volunteer translators when it was using an Amara.org team, but at least volunteers were able to fix them – in the course videos as well – before translating them.

      But with their new crowdtranslating initiative called the Global Translator Community (GTC), they said, in a hangout for GTC volunteers:

“…When they [Coursera’s university partners] request captioning, that goes to a company that we work with, that does human language-captioning of videos. So then people listen to the videos and actually, humans write out the words that are being spoken on the screens.
Now, the people who are doing these captions, they are not subject-matter experts, so, for instance in the course on Machine Learning, you know, they’re probably going to get some words wrong, there are going to be grammatical mistakes and, you know, one of the challenges that I realize, that we certainly realize is a challenge, is that English transcripts are not perfect. We think that they’ve improved a lot, we’ve worked with this provider that we use to improve that. I don’t know if any, if actually some of you had been on the platform for a couple of years and saw the transcripts back in 2012, and maybe you can tell that they have gone better — I hope so.” (2)

Actually they haven’t, by a long shot: there might be fewer transcription errors than with the former auto-captions, though that’s arguable, but now, as the GTC uses Transifex – which is NOT a subtitling app – for translating the original subtitles, volunteers have no way to fix them any more: hence the staple absurd splitting, frequent bad syncing, sometimes long unsubtitled parts, not to mention inane mentions of non-verbal audio, like just [music] without describing it. So on June 6, Coursera staff started a Google spreadsheet, http://goo.gl/ilB1uK , where volunteers are meant to report these original-subtitle issues via a form, so staff can respond to them. Problem: staff hasn’t responded to a single entry since June 16.

      About captioning for literacy: not UK but Indian: http://www.planetread.org/ . Pity the video on the home page is uncaptioned, but the site offers many resources, theoretical and practical.

      As to my use of captioning in education: in a couple of really open online courses for Italian teachers organized by Andreas Formiconi (3), I deviously started captioning some videos then asked if other participants would like to join. Only a few did, but they got really interested, and some posted about it in their blogs.

      (1) See https://github.com/acli/Coursera-subtitles#things-to-watch-out-for-if-you-want-to-work-on-courseras-subtitles

(2) From the transcript generated by the captions in http://www.amara.org/en/videos/4H50v2EYDXP7/info/global-translator-community-hangout-with-daphne-koller/

      (3) See his http://iamarf.org/ blog


      • iheartsubtitles 10:22 am on September 23, 2014 Permalink | Reply

        Hi Claude, thanks for commenting. Some very interesting background and links with regards to Coursera’s subtitling and captioning methods.


    • Arlene Mayerson 7:57 pm on September 29, 2014 Permalink | Reply

I am a lawyer with the Disability Rights Education and Defense Fund who litigated the Netflix case. If anyone has trouble accessing MOOCs because of a lack of captions, please contact me at amayerson@dredf.org. Thanks.


  • iheartsubtitles 12:25 pm on July 24, 2014 Permalink | Reply

    Invisible Subtitled Live Theatre – Trial in the UK 

Giojax, the company using 3D technology to create invisible subtitles for use in cinemas, has just announced that the same technology is to be trialled in the theatre.

    Originally set up as a crowd-funded business, the now private company with private investors is running a trial of the invisible subtitles technology to subtitle a musical in October this year.

The principle is the same as for the cinema. Audience members who wish to see the captions running during the live performance can wear 3D glasses and view the subtitles via a box situated on the theatre stage. The subtitles will be in English and are aimed at providing access for the deaf and hard of hearing, so they should not be confused with the translation subtitles or surtitles that you may have seen at opera performances.

If you are interested in trying this technology out, the trial will take place on Saturday 4th October at the matinée performance of the Barry Manilow musical Copacabana at Harlow Playhouse:

Her name was Lola, she was a showgirl… So begins this tale of romance and stardom that has captivated audiences in the West End, Atlantic City and on-screen across the US. With sensational original songs by Barry Manilow, dazzling costumes and fabulous choreography, this is a show that will leave you breathless. Featuring hits such as Dancin’ Fool, Who Needs To Dream, Aye Caramba, and of course the Grammy award-winning Copacabana, this is a show sure to have you humming the tunes all the way home. Harlow Playhouse is proud to present the premiere of Barry Manilow’s revised version of the original show for 2014.

    For more information on the musical and to purchase tickets visit the Harlow Playhouse website.

    For more information on 3D subtitles technology please visit the Giojax web page.

And if anyone is wondering, the 3D invisible subtitles for cinemas project is still under way; testing took place earlier this year in Milton Keynes, and the next stage is to finalise the software for the cinemas.

    • Mamtha 11:04 am on December 6, 2014 Permalink | Reply

We are experienced in video/audio transcription and subtitling; kindly give us the opportunity to work as a vendor for your company.


  • iheartsubtitles 12:19 pm on June 27, 2014 Permalink | Reply

    CSI TV Accessibility Conference 2014 – Live subtitling, VOD key themes 

    Photo of CSI TV Accessibility Conference 2014 brochure

    CSI TV Accessibility Conference 2014

Earlier this month the CSI TV Accessibility Conference 2014 took place in London. I had hoped to be able to give a more detailed write-up with a bit of help from the transcript of the live captioning that covered the event, but I’m afraid my own notes are all I have, so I will summarise some of the points that I think will be of interest to readers here. It will not cover all of the presentations, but it does cover the majority.

i2 Media Research gave some statistics on UK TV viewing and the opportunities that exist in TV accessibility. Firstly, TV viewing is higher among the older and disabled population, and with an ageing UK population, the audience requiring accessibility features for TV is only going to increase.

Andrew Lambourne, Business Director for Screen Subtitling Systems, had an interesting title for his presentation: “What if subtitles were part of the programme?” In his years of working in the subtitling industry, he has questioned why we are still asking the same questions year after year. The questions surround the measurement of subtitling quality, and whether there is incentive to provide great subtitling coverage for children. He pointed out that in his opinion funding issues are still not addressed. Subtitling is still not a part of the production process and is not often budgeted for. Broadcasters are required to pay subtitling companies, and subtitling costs are under continued pressure (presumably to provide more for less money). It is a sad fact that subtitling is not ascribed the value it deserves. With regards to live subtitling, there is a need to educate the public as to why errors occur. This was a repeated theme in a later presentation from Deluxe Media, and it is one of the reasons I wrote the #subtitlefail! TV page on this blog.

Peter Bourton, Head of TV Content Policy at Ofcom, gave an update on and summary of the subtitling quality report which was published at the end of April. This is a continuing process, and I’m looking forward to comparing the next report to this first one to see what changes and comparisons can be made. The presentation slides are available online.

Senior BBC R&D Engineer Mike Armstrong gave a presentation on his research into measuring live subtitling quality. (This is different to the quantitative approach used by Pablo Romero-Fresco and adopted by Ofcom to publish its reports.) What I found most interesting about this research is that the perception of quality by a user of subtitles is quite different depending on whether the audio is switched on whilst watching the subtitled content. Ultimately nearly everyone watches TV with the audio switched on, and this research found that delay has a bigger impact on the perception of quality than errors do. The BBC R&D white paper is available online.

Live subtitling continued to be a talking point at the conference, with a panel discussion titled Improving Subtitling. On the panel were Gareth Ford-Williams (BBC Future Media), Vanessa Furey (Action On Hearing Loss), Andrew Lambourne (Screen Subtitling Systems), and David Padmore (Red Bee Media). All panellists were encouraged that all parties – regulators, broadcasters, technology researchers – are working together to continually address subtitling issues. Developments in the speech recognition technology used to produce live subtitles have moved towards language modelling to understand context better. The next generation of speech recognition tools such as Dragon has moved to phrase-by-phrase rather than word-by-word recognition (the hope being that this should reduce error rates). There was also positivity that there is now greater interest in speech technology, which should lead to greater advancements over the coming years compared to the speed of technology improvements in the past.

With regards to accessibility and video on demand (VOD) services, it was the turn of the UK's Authority for Television On Demand (ATVOD) regulatory body to present. For those that are unaware, ATVOD regulates all VOD services operating in the UK except for BBC iPlayer, which is regulated by Ofcom. In addition, because iTunes and Netflix operate from Luxembourg, although their services are available in the UK they are outside the jurisdiction of ATVOD. There are no UK regulatory rules that say VOD providers must provide access services, but ATVOD has an access services working party group that encourages providers to do so, as well as drafting best practice guidelines.

I cannot find anywhere on their website the results of a December 2013 survey, mentioned in the presentation, looking at the statistics of how much VOD content is subtitled, signed, or audio described. If anyone else finds it, please comment below. In the meantime, some of the statistics of this report can be found in Pete Johnson's presentation slides online. What has changed since 2012 is that this survey is now compulsory for providers to complete, to ensure the statistics accurately reflect the provision.

Another repeated theme, first mentioned in this presentation, is the complexity of the VOD distribution chain. It is very different for different companies, and the increasing number of devices on which we can choose to access our content also adds to the complexity. One of the key differences between VOD providers is end-to-end control. Few companies control the entire process, from purchasing and/or creating content for consumers right through to the content being watched on a device. So who is responsible for a change or adaptation to a workflow to support accessible features, and who is going to pay for it?

I should also mention that the success of a recent campaign by hard of hearing subtitling advocates, in getting Amazon to finally commit to a response and say that it will start subtitling content, was mentioned positively during this presentation. You may have read my previous blog post discussing my disappointment at the lack of response. Since then, with the help of comedian Mark Thomas, who set up a stunt that involved putting posters up on the windows of Amazon UK's headquarters to drive the message home, Amazon has committed to adding subtitles to its VOD service later this year. See the video below for the stunt. It is not subtitled, but there is no dialogue, just a music track.

    You can read more about this successful advocacy work on Limping Chicken’s blog.

Susie Buckridge, Director of Product for YouView, gave a presentation on the accessibility features of the product, which are pretty impressive. Much of the focus was on access features for the visually impaired. She reminded the audience that creating an accessible platform actually creates a better user experience for everyone. You can view the presentation slides online.

Deluxe Media Europe gave a presentation that I think would be really useful for audiences outside of those working in the industry. Stuart Campbell, Senior Live Operations Manager, and Margaret Lazenby, Head of Media Access Services, presented clear examples and explanations of the workflow involved in creating live subtitles via the process of respeaking for live television. Given the lack of understanding or coverage in mainstream media, this kind of information is greatly needed – a point also highlighted by the presenters. The presentation is not currently available online, but you can find information about live subtitling processes on this blog's #SubtitleFail TV page.

A later panel discussed VOD accessibility. The panellists acknowledged that the expectations of consumers are increasing, as are the volume and scale of complexity. It is hoped that the agreed common standard subtitle file format, EBU-TT, will resolve a lot of these issues. This was a format still being worked on when it was discussed at the 2012 conference, which you can read about on this blog. Earlier this year the UK DPP also published updated common standard subtitle guidelines.
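For readers who haven't come across it, EBU-TT is a constrained profile of the W3C's XML-based TTML timed-text format. The snippet below is a much-simplified sketch of the general shape of such a document (illustrative only, not a conformant EBU-TT file – real ones carry additional EBU metadata and styling), wrapped in a little Python to show that standard XML tooling can parse it, which is precisely the interoperability appeal of agreeing on one format:

import xml.etree.ElementTree as ET

# Illustrative assumption: a heavily simplified TTML-style document.
EXAMPLE_DOC = """\
<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
  <body>
    <div>
      <p begin="00:00:01.000" end="00:00:03.500">Hello everyone,</p>
      <p begin="00:00:03.600" end="00:00:06.000">thanks for coming.</p>
    </div>
  </body>
</tt>
"""

root = ET.fromstring(EXAMPLE_DOC)
print(root.tag)  # {http://www.w3.org/ns/ttml}tt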

    Were any of my readers at the conference? What did you think? And please do comment if you think I have missed anything important to highlight.

    • peterprovins 4:48 pm on July 21, 2014 Permalink | Reply

Interesting blog. No excuse for TV, film, websites or even theatre not to be captioned… we do it all. Currently captioning university lectures and looking at doctors' surgeries, which are currently limited to BSL only. Keep up the good work.

