Tagged: You Tube

  • iheartsubtitles 3:13 pm on February 19, 2015 Permalink | Reply
    Tags: ASR, captions, You Tube

    #withcaptions Fixing You Tube’s auto-captions 

    Last month some high-profile vloggers on the popular video-sharing site YouTube, including Rikki Poynter and Tyler Oakley, got the attention of the mainstream press with a campaign that started with the hashtag #withcaptions. It’s fantastic to see others campaigning and educating their audiences about the importance of not just captioning your online videos but captioning them accurately. I won’t repeat what the mainstream media coverage reported, but if you missed it or have no idea what I am talking about, click on the links below:

    Animated gif of a 1980s Apple commercial of a kid at a computer looking impressed and giving a thumbs up to the camera

    To anyone who accurately captions their online videos. Good job. Thank you.

    It is so refreshing to get some positive mainstream press coverage about the importance of subtitling, and it’s even more brilliant that the message is being spread by individuals outside of the subtitling, captioning or SEO industry. To all of you doing this, or who have perhaps acted on this information and are now accurately captioning your own You Tube videos – a massive thank you from me.

    As most of you reading should already know, You Tube uses automatic speech recognition (ASR) technology to automatically create captions from the audio track of video content uploaded to its site, but these are very rarely, if ever, accurate. But what if you could fix these to make them accurate, rather than having to start from scratch to create accurate captions? That’s exactly what Michael Lockrey, who refers to these as ‘Craptions’, aims to solve with nomoreCRAPTIONS. As Lockrey explains:

    nomoreCRAPTIONS is a free, open source solution that enables any YouTube video with an automatic captioning (‘craptioning’) track to be fixed within the browser.

    Craptions is the name coined by me for Google YouTube’s automatic craptioning – as they don’t provide any accessibility outcomes for people who rely on captioning unless they are reviewed and corrected. As this rarely happens and as Google rarely explains that they haven’t really “fixed” the captioning accessibility issue, we have a huge web accessibility problem where most online videos are uncaptioned (or only craptioned which is just as poor as no captioning at all).

    If you don’t believe me, then look at Google YouTube’s own actions in this space. The fact that they don’t even bother to index the automatic craptioning speaks volumes – as their robots hunt down pretty much everything that moves on the internet. So it’s obvious from these actions that they don’t place any value in them at all when they are left unmodified by content creators.

    There is also no way to watch the automatic craptioning on an iOS device (such as an iPhone or iPad) at present, unless you use the nomoreCRAPTIONS tool.

    Lockrey, who is profoundly deaf, has taught himself web development skills to solve a problem that he feels Google (You Tube’s owners) have largely ignored. This hasn’t been easy, as although there’s a huge amount of learning material on YouTube and other platforms, most of it is uncaptioned or craptioned. Lockrey explains:

    Previously if I encountered yet another YouTube video that was uncaptioned or craptioned, I would often spend my own money and invest personal resources (my own personal time, effort, etc) in obtaining a transcript and/or a timed text caption file.  This usually also involved taking a copy of the YouTube video and then re-uploading the video onto my own YouTube channel so I could add the accessibility layer (i.e. good quality captioning).  Quite often I would end up being blacklisted by Google YouTube’s automated copyright systems, when I was only trying to access content that was freely and publicly made available by the content creators on YouTube and was not trying to earn revenue from the content (via ads) or any “funny” business, etc. I knew that there simply had to be a better way.

    Screen grab of the nomoreCRAPTIONS homepage

    No More Craptions lets you edit You Tube’s auto-captioning errors

    With nomoreCRAPTIONS you simply paste in a YouTube URL or video ID and it instantly provides you with an individual web page for that video where you can go through and fix up the automatic craptioning (where there is an automatic craptioning track available).
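    Accepting either a full URL or a bare video ID, as nomoreCRAPTIONS does, is a common input-handling pattern. The sketch below is not the tool's actual source – just a minimal, hypothetical Python illustration of how a pasted watch URL, a short youtu.be link, or a bare ID can all be normalised to the video ID:

```python
from urllib.parse import urlparse, parse_qs

def extract_video_id(url_or_id: str) -> str:
    """Return the YouTube video ID from a URL or a bare ID string."""
    candidate = url_or_id.strip()
    parsed = urlparse(candidate)
    if parsed.netloc.endswith("youtube.com"):
        # Standard watch URL: https://www.youtube.com/watch?v=VIDEO_ID
        return parse_qs(parsed.query)["v"][0]
    if parsed.netloc == "youtu.be":
        # Short URL: https://youtu.be/VIDEO_ID
        return parsed.path.lstrip("/")
    # Otherwise assume the caller pasted a bare video ID
    return candidate

print(extract_video_id("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))  # dQw4w9WgXcQ
```

    A real implementation would also validate the ID format and handle playlist or timestamp parameters, but the normalising step is the core idea.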

    At the moment it’s a very simple interface, ideal for shorter YouTube videos of 4 or 5 minutes in duration (or less). It works in all languages that Google supports on YouTube with automatic craptioning. Here’s an example with the Kim Kardashian Super Bowl commercial, which is very short and sweet.

    Screen shot showing edited auto captions via the No More Craptions tool.

    You can modify the text of the auto-captions to correct any errors via the yellow box on the right.

    Lockrey explains:

    There’s very little learning curve involved and this was intentional as whilst Amara and DotSub have great solutions in this space, they also have quite a substantial learning curve and I wanted to make it as easy as possible for anyone to just hop on and do the right thing. One of the biggest advantages of the tool is that the corrected captions can be viewed immediately once you have saved them. This means it’s possible for a Deaf person to watch a hearing person fix up the craptions on a video over their shoulder and see the edits in real-time!

    We’ve even had a few universities using the tool as there’s so much learning content that is on YouTube, and this is simply the easiest way for them to ensure that there’s an accessible version made available to the students that need captioning – without wasting time on copyright shenanigans etc.  I’ve also been using it as a great advocacy tool – it’s so easy to share corrected captions with the content creators now and hopefully we can bridge that awareness gap that Google has allowed to fester since November 2009.

    nomoreCRAPTIONS is still very much in the early development stage and there is more to come. The next step is a partnership with #FreeCodeCamp to help roll out improvements and new features in the very near future. This includes looking at other platforms such as Facebook and Vimeo as part of the next tranche of upgrades as more and more platforms cross over to HTML5 video.

    Lockrey is keen to get as much user feedback as possible, so what are you waiting for – try the tool for yourself. For more information please contact @mlockrey.

    And when you’ve done that, you might also want to read: OMG! I just found out there’s only 5% captioning* on YouTube.

     
  • iheartsubtitles 10:18 am on August 15, 2013 Permalink | Reply
    Tags: You Tube

    Captioned Music – automated vs human skill 

    Here are two fun videos that illustrate two very different results when captioning music.

    The first is a lyric video of One Direction lyrics as captioned by You Tube’s auto-captioning system. (You can also view the results for Taylor Swift’s lyrics.)

    Machine translation does have a role to play in providing access and, despite these funny videos, continues to improve – but that is for another blog post.

    Continuing on, compare the above with the fantastic skill of this stenographer and watch them subtitle Eminem’s Lose Yourself in real-time (music starts at 1:35 in).

    Stenography is also used to caption/subtitle live television – see #subtitlefail! TV

     
  • iheartsubtitles 8:47 am on June 4, 2013 Permalink | Reply
    Tags: You Tube

    Taylor Swift – Auto-captioning fails 

    Ever wonder what You Tube’s auto-captioning would make of Taylor Swift’s song lyrics? Wonder no more, thanks to this funny video from Rhett and Link.

    Enjoy!

     
    • jennpower 11:46 am on June 4, 2013 Permalink | Reply

      This is why it’s impossible to rely on the captioning on youtube. You have to read their lips too (if you know how) and decipher it while reading. Basically you have to read the words, look at the person singing, and then figure out what the person really means. Youtube really does need better captioning. And it isn’t an option on all videos. If T.V is required to have captioning, then Youtube should as well. The only time you know you really have it right on youtube is if someone put the video on with the subtitles on it already.

      Youtube really needs to step it up on this. It’s almost inaccessible to me as a result.

    • mikelrecondo 11:03 am on June 5, 2013 Permalink | Reply

      The main issue here is that automatic captioning isn’t suitable for complex audio (and that includes songs), so it would be better if YouTube just disabled auto captioning for that kind of content. As I see it, it is better not to do something than to do it so poorly.

      If YouTube would selectively turn it on only in the most simple videos, we would know that the subtitles would reach a certain quality threshold. I do not think that their technology is bad but that they should not use it for inappropriate content.

    • Claire Brown 7:41 pm on June 12, 2013 Permalink | Reply

      This sort of example makes professional subtitlers feel just that little bit more safe. And glad that their hard work properly helps.

    • Claire Brown 7:43 pm on June 12, 2013 Permalink | Reply

      Reblogged this on Making * Living * Doing and commented:
      We are never, ever, ever hitting ant kidders.

    • itsmesammies 3:36 am on June 13, 2013 Permalink | Reply

      I’m glad Youtube was there to mess it so this awesome video could be produced! I hate this song more than I hate jelly on my floor at three am and I happen to be the one that steps in it… Now I’ll have something to think of every time my eight year old turns it up! New fave video!

  • iheartsubtitles 2:05 pm on February 21, 2013 Permalink | Reply
    Tags: You Tube

    You Tube – paid-for translation captions service now available 

    This is interesting: You Tube is now offering a paid translation service for people wishing to provide translation subtitles/captions to viewers of their You Tube-hosted videos. Would you use it? Or would you go for a free option only? (Crowd-sourced Amara, perhaps? Or Google Translate?) There are pros and cons to both.

     
  • iheartsubtitles 2:42 pm on February 10, 2013 Permalink | Reply
    Tags: Adverts, Commercials, You Tube

    Nike #makeitcount advert with graphics subtitled 

    On UK TV the majority of adverts, but by no means all, are subtitled (so long as you have subtitles turned on in the first place). Online, I see very few subtitled. However, I came across a campaign from Nike promoting the use of the twitter hashtag #MAKEITCOUNT that has subtitled its advert online for the deaf and hard of hearing. The difference is that they have subtitled the graphics and not the audio (which is music lyrics), which is what subtitles are traditionally used for:

    Screen grab of same language subtitling

    Graphics on screen are subtitled in original language (same language subtitling)

    The subtitles have also been used in the more conventional way to translate into other languages, but again it is the graphics that are translated and not the music audio:

    screen grab of graphics on screen with Spanish subtitles

    Nike subtitles translate the graphics into another language – in this example – into Spanish.

    What I found interesting is that Nike have also chosen to match the style of the text graphics on screen and replicate it as best they can in the subtitles too. Here is an illustration:

    Screen grab of Nike subtitles matching design of graphics on screen

    Nike subtitles – the subtitles are designed to try to match the look of the graphics on screen.

    This includes adding the “@twitterhandle” names of the athletes appearing in the advert, styled as twitter handles. This appears both in the same-language subtitles and in the translation subtitles:

    screen grab of Nike subtitles with twitter handle name

    Nike subtitles – same language subtitles includes twitter handle names in “@twitter style”

    screen grab of Nike subtitles with twitter handle name in Spanish

    Nike subtitles – translation subtitles includes twitter handle names in “@twitter style”

    See for yourself by watching the subtitled Nike #MAKEITCOUNT advert here:

    Do you like this? Or would you rather see the audio subtitled? I suppose it illustrates that, for Nike, the key messages they want the viewer to understand are in the graphics and not the audio in the first place. And if that is not the case, they really should be subtitling the music audio!
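    For anyone wanting to experiment with this kind of positioned, styled captioning on the web, WebVTT cue settings offer a rough equivalent: each cue can carry line, position and alignment settings so the text can sit near the graphic it transcribes. The sketch below is illustrative only – the cue text, timings and twitter handle are invented, not taken from Nike's actual caption file:

```python
# Each cue: (start, end, WebVTT cue settings, text).
# "line:10%" places the cue near the top of the video, "line:85%" near the bottom,
# loosely mimicking captions that follow on-screen graphics.
CUES = [
    ("00:00:01.000", "00:00:03.000", "line:10% align:center", "MAKE IT COUNT"),
    ("00:00:03.500", "00:00:05.000", "line:85% align:start", "@hypothetical_athlete"),
]

def build_webvtt(cues):
    """Serialise (start, end, settings, text) tuples as a WebVTT file body."""
    lines = ["WEBVTT", ""]
    for start, end, settings, text in cues:
        lines.append(f"{start} --> {end} {settings}")
        lines.append(text)
        lines.append("")  # blank line terminates each cue
    return "\n".join(lines)

print(build_webvtt(CUES))
```

    Styling beyond position (fonts, colours) is handled separately via CSS `::cue` rules in the hosting page, which is why a player, not the caption file alone, determines the final look.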

     
    • elenagmaroto 11:13 pm on February 19, 2013 Permalink | Reply

      Great post!

      I think they made a big mistake by trying to match the subtitles to the graphics so much, specially by using upper case they are making it so much more difficult to read and enjoy. It’s a very good campaign but dynamic becomes fuzzy at some point. Too little on screen time and way too many one-liners.

      I don’t know the term in English but in Spanish this type of graphic subtitling is called “inserto” (inserts?) as opposed to conventional subtitles. Audio subs usually take precedence over inserts (like when you are watching some sign but there’s dialogue too) but, like you said, I agree that Nike probably is more interested in the graphics here.

      I would love you to visit my blog but I’m still working on it. Hopefully soon you could 🙂

      • iheartsubtitles 10:05 am on February 20, 2013 Permalink | Reply

        Thanks for your comments. I agree the timings are way too fast, but it is clear the speed has been matched to the graphics on the screen. You are correct that these are technically inserts – it is the first example I have seen of inserts being added via closed captioning on a You Tube video, so I found it fascinating. Let me know when you have updated your blog and I’ll be sure to visit.

  • iheartsubtitles 2:14 pm on January 30, 2013 Permalink | Reply
    Tags: You Tube

    Talking Animals – Subtitled 

    Who doesn’t like talking animals (if you don’t, what’s wrong with you?!) Just going to leave these here for your enjoyment:

    Introducing Ruby the talking parrot:

    And cat talk translated, sort of:

     
  • iheartsubtitles 3:23 pm on January 28, 2013 Permalink | Reply
    Tags: You Tube

    Web Series – increasing in popularity? Where are the captions? 

    So far on this blog, when discussing access to video content on the web, I have focused on catch-up services provided by traditional linear TV broadcasters. But increasingly there is content that is available on the web only, usually referred to as a web series.

    A web series is a series of videos, generally in episodic form, released on the Internet or also by mobile or cellular phone, and part of the newly emerging medium called web television. A single instance of a web series program is called an episode or webisode.

    SOURCE: Wikipedia

    Web series shouldn’t be mistaken for small fry; it is an industry big enough to have its own awards, The Streamys. The most-subscribed web series channel on You Tube is currently Smosh, with over 7,000,000 subscribers! This kind of content is not subject to the same regulatory rules as web catch-up services in any country so far as I am aware (readers, please correct me by commenting on this post if I am wrong). Unfortunately much of this content is without captions or subtitles, but there are some fantastic individuals working hard to advocate and educate producers of web series to encourage them to include them. Captioned Web TV is a fantastic blog that lists all the web series it finds that include captions. It also contains useful information to help web producers take steps towards captioning their videos. If you know of any web series with captions that is not listed, you can submit that information to the site.

    In addition to web series created by individual producers, OTT platforms such as Amazon and Netflix are starting to produce their own exclusive shows. Netflix’s first produced show is a remake of the TV series House Of Cards. To my pleasant surprise, the trailer, which is already online, has been captioned, and so I hope the same will be true of the series itself:

    In a similar vein, Amazon Studios has greenlit several productions but has not yet completed them. And in the US, Hulu has several exclusive series, the captioning of which seems to be a mixed bag:

    It is not just the OTT companies; traditional film & TV production companies also produce series exclusively for the web. One of the series I would very much like to watch but cannot, because it is not captioned, is from Crackle (run by Sony): Comedians In Cars Getting Coffee. Its success in bringing viewers to the site has meant that a second series is being produced, and according to paidContent, “2013 is the year of the web series second season”. What I’d like to see is “2013 – the year of the captioned web series”. I’ll settle for 2014 if I have to; I’m not convinced changes will happen this quickly. For a start, because of its very nature – anyone can upload a web series anywhere at any time once they have made it – how do you keep up with it all? Here’s a fairly current list of the many ways to watch web series. I don’t doubt this list could be out of date fairly quickly. But what if The Streamys introduced an awards category for the most accessible content? I’d like to see producers – whether individuals, OTT platforms, or traditional production companies making web content – competing for that as much as they compete for subscribers/hits/views, at the very least. Right now, a lot of us are missing out.

     
  • iheartsubtitles 3:51 pm on January 25, 2013 Permalink | Reply
    Tags: You Tube

    President Obama ‘bad lip reading’ video becomes hit 

    Happy Friday! Here’s a captioned video that had me laughing. What if you badly lip-read the 2013 US Presidential Inauguration? This video has apparently gone viral (so you’ve probably watched it already, right?) President Obama 'bad lip reading' video becomes hit – watch – Odd News – Digital Spy.

     
    • happyzinny 3:10 am on January 29, 2013 Permalink | Reply

      I can’t remember the last time I actually had teary laughter but this video did it. I had to back up so as not to short circuit the keyboard. Thank you for sharing!

    • iheartsubtitles 9:49 am on January 29, 2013 Permalink | Reply

      There are some other great ones on that You Tube channel, but this one is my current favourite. Glad you liked it.

  • iheartsubtitles 2:35 pm on January 9, 2013 Permalink | Reply
    Tags: You Tube

    CSI User Experience Conference 2012 Part 5 – Broadcast subtitles and captions formats 

    CSI User Experience Conference 2012: TV Accessibility

    For background info on this conference read: Part 1.

    Frans de Jong, a senior engineer at the European Broadcasting Union (EBU), gave a presentation on past and current work to standardise subtitle formats as broadcast technology evolves, whilst ensuring that legacy formats are still supported and compatible. STL, the subtitle format that evolved from teletext technology, has evolved into a format called EBU-TT Part I. De Jong explained:

    We have published this year (2012) EBU-TT part one. This is the follow-up specification for that old format (STL). It takes into account that nowadays we like to define things in XML and not in a binary format, because it’s human readable, and because there are many people who read XML… and of course nowadays [broadcast] is all file-based, networked facilities. If you look at the way that subtitles are produced – this is a very generic sketch – typically it comes from somewhere, an external company or internal department, can be based on existing formats, then it goes into some central content management system. Afterwards it’s archived and of course it’s broadcast at a certain moment, then provided to several of the platforms on the right. This list of platforms is growing. Analogue TV, digital TV, now there’s HDTV, iPlayer, we have IPTV streaming platforms – all these platforms have their own specific way of doing subtitling. But on the production side we have for a long time been using STL and also proprietary formats based on it or newly developed. There are several places where this format is useful, but we felt we had to update it to make sure we can fulfil the requirements of today. That is HDTV and the different web platforms mainly. So the new format published was focusing on that, very aware of web formats, but focused in our case on production. Our goal is to really optimise production, to help the broadcasters get their infrastructure up to date.

    The EBU-TT format is not a stand-alone invention: it is based on W3C Timed Text (TTML) but restricts the feature set, makes default values explicit, and adds (legacy STL) metadata. Similar work has been done in the US by SMPTE with the captioning format SMPTE-TT. This captioning standard received an honor from the Federal Communications Commission (FCC) last month: a Chairman’s Award for Advancement in Accessibility:

    The FCC declared the SMPTE Timed Text standard a safe harbor interchange and delivery format in February. As a result, captioned video content distributed via the Internet that uses the standard will comply with the 21st Century Communications and Video Accessibility Act, a recently enacted law designed to ensure the accessibility, usability, and affordability of broadband, wireless, and Internet technologies for people with disabilities.

    SOURCE: TV Technology
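    Since both EBU-TT and SMPTE-TT are profiles of W3C TTML, it may help to see what a timed-text document of this family looks like. The snippet below embeds a minimal, hypothetical TTML fragment (not a conformant EBU-TT file, which would carry extra styling and STL legacy metadata) and pulls the cue timings and text out with Python's standard library:

```python
import xml.etree.ElementTree as ET

# A minimal TTML document: timed <p> cues inside a namespaced <tt> root.
# The cue text and timings here are invented for illustration.
TTML = """<?xml version="1.0" encoding="UTF-8"?>
<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
  <body>
    <div>
      <p begin="00:00:01.000" end="00:00:03.500">Hello, world.</p>
      <p begin="00:00:04.000" end="00:00:06.000">A second subtitle.</p>
    </div>
  </body>
</tt>"""

NS = {"tt": "http://www.w3.org/ns/ttml"}

def extract_cues(ttml_text: str):
    """Return (begin, end, text) tuples for each subtitle paragraph."""
    root = ET.fromstring(ttml_text)
    return [
        (p.get("begin"), p.get("end"), "".join(p.itertext()).strip())
        for p in root.findall(".//tt:p", NS)
    ]

for begin, end, text in extract_cues(TTML):
    print(f"{begin} --> {end}: {text}")
```

    Because the format is plain XML rather than binary STL, exactly this kind of generic tooling can read, convert and archive subtitle files, which is much of the point de Jong makes above.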

    The EBU are currently working on EBU-TT Part II, which will include a guide to ‘upgrading’ legacy STL subtitle files and converting them to EBU-TT files. This is due to be published early this year. Looking further ahead, de Jong said:

    There is also a third part coming up, now in the requirements phase, on live subtitling. Several countries, and the UK is certainly leading, are working with live subtitling. The infrastructure for this and the standards used are not very mature, which means there is room to use this format to come to a live subtitle specification. We will provide a user guide with examples… One word maybe again about live subtitling that’s coming up. What we did here is we had a workshop in the summer in Geneva at the EBU. We discussed the requirements with many broadcasters – what would you need from this type of format? There are about 30 requirements. One of the things that came up, for example, is that it would be really good if there were a technical solution for routing: if I am subtitling for one channel, maybe 10 minutes later I could be subtitling for another channel – to make sure that the system knows what channel I am working for and that it’s not the wrong channel. And you need some data on the format that was used. Again the issue of enriching the work you are working on with additional information, description and speaker ID.

    To conclude the presentation, de Jong discussed his views on future technology and the next steps for subtitling, including automated subtitles and quality control:

    There is an idea that we could be much more abstract in how we author subtitles in the future. We understand that the thought alone can be quite disruptive for a lot of people in current practice because it’s far from current practice. Just to say we’re thinking about the future after this revision. I think later we’ll see more advanced methods for subtitling; there is a lot of talk about automation and semi-automation. I think it was a week ago that You Tube released their automated subtitling with speech recognition, at least in the Dutch language. I am from Holland originally, and I was pretty impressed by the amount of errors! … It’s a big paradox. You could argue that Google (owners of You Tube) has the biggest corpus of words and information probably of all of us… if they make so many (automated subtitle/caption) mistakes, how can we ever do better in our world? For the minority languages there is no good automated speech recognition software. If you ask TVP, for example, the Polish broadcaster, how they do live subtitling, they say: we would love to use speech recognition but we can’t find good enough software. In the UK it’s a lot better. It’s a real issue when you are talking about very well orchestrated conditions, and even there it doesn’t exist. I am really curious how this will develop.

     
  • iheartsubtitles 5:52 pm on December 20, 2012 Permalink | Reply
    Tags: Microsoft, Siri, You Tube

    CSI User Experience Conference 2012 Part 3 – Live subtitles & voice recognition technology 

    CSI User Experience Conference 2012: TV Accessibility

    For background info on this conference read: Part 1.

    It’s clear that much of the frustration from many UK TV viewers surrounds live subtitles, and so the technology of voice recognition software, and the process of respeaking used with it, was one of the topics of debate in a panel on the User Experience following Ofcom’s presentation.

    Deluxe Media’s Claude Le Guyader made some interesting points:

    In the case of live subtitling… it’s a lot of pressure on the person doing the work, the availability of the resource and the cost; it all means that the advent of voice recognition was embraced by all the service providers as a way to palliate that lack of resource (in this case, stenographers). As we know, voice recognition started out imperfect and is still not perfect – I don’t know if you have seen it on your iPhone, it’s quite funny; with a French accent it’s even worse! (This is in reference to Siri, which so far as I am aware is not used to create live subtitles, but it is part of the same technology – voice recognition.) With voice recognition you need to train the software. Each person (in this case, a subtitler or respeaker) needs to train it. Now it’s moved on and there are people using voice recognition very successfully as well, so it’s evolving, but the patience, you know, it does run out when you are round the table again years later discussing the same issue. But it’s not a lack of will; I think it’s just a difficult thing to achieve, because it involves so many different people.

    Voice technology does seem to be constantly evolving, and the fact that it is being implemented in more and more products (the iPhone and Siri are a great example) is, I think, a positive thing. It increases consumer awareness of what this technology can do, and consequently I think people will come to expect this technology to work. There are numerous ways voice technology is being used. To move away from live subtitling and the points made at the conference for a moment, but staying within a broadcast TV context, another use is illustrated by Google TV. In the video below you can see voice recognition technology allowing a viewer to navigate the TV:

    Voice recognition technology is also used to create the automatically generated captions on You Tube videos. At the moment this illustrates the technology’s limitations: as most readers here I am sure are aware, the captions created this way are completely inaccurate most of the time and therefore useless. I think we can all agree that respeaking to produce live subtitles creates errors, but it currently produces a much better result than a machine. Google recently added automatic captioning support for six new languages. Investment in this technology, even if it is currently imperfect, shouldn’t be discouraged, because surely this is the only way for the technology to improve:

    A new research paper out of Google describes in some detail the data science behind the company’s speech recognition applications, such as voice search and adding captions or tags to YouTube videos. And although the math might be beyond most people’s grasp, the concepts are not. The paper underscores why everyone is so excited about the prospect of “big data” and also how important it is to choose the right data set for the right job… No surprise, then, it turns out that more data is also better for training speech-recognition systems… The real key, however — as any data scientist will tell you — is knowing what type of data is best to train your models, whatever they are. For the voice search tests, the Google researchers used 230 billion words that came from “a random sample of anonymized queries from google.com that did not trigger spelling correction.” However, because people speak and write prose differently than they type searches, the YouTube models were fed data from transcriptions of news broadcasts and large web crawls… This research isn’t necessarily groundbreaking, but helps drive home the reasons that topics such as big data and data science get so much attention these days. As consumers demand ever smarter applications and more frictionless user experiences, every last piece of data and every decision about how to analyze it matters.

    SOURCE: GigaOM

    Following on from this example, the natural question to ask is: will Apple integrate its voice technology Siri into Apple TV? It has been rumoured but not yet confirmed. (Interestingly, it is already confirmed that Siri is being added to Chevrolet cars next year.) If there is competition between companies for innovation using this technology, all the better. I found an interesting blog post pondering the future of Siri for Apple here, although this blogger thinks that Google Voice is better. Voice technology is also being used in the world of translation. Last month Microsoft gave an impressive demo of voice recognition technology translating a speaker’s English speech into Chinese text, as well as speaking it back to him in Chinese in his own voice:

    Skip to 4:22 to be able to read captions from this presentation.

    All of these examples, I hope, will contribute in some small way to an improvement in live subtitling. Mark Nelson would disagree with me: he wrote an article explaining how he believes that a peak has been reached and that greater reliance on voice technology could lead to the deaf and hard of hearing being left behind.

    What do you think? Do you think live subtitling will improve as a result of voice recognition technology or do you have another view? Please leave your comments below.

     