There’s a growing trend on social media and sites like Reddit and Quora of showcasing captioning errors from television and online platforms. As accessibility laws tighten and quality standards for broadcast captioning grow more rigorous, how do these bloggers still have so much fuel for their posts? It is a simple question with many complicated answers.
Live television programming is captioned in real time either by machines or by humans working on a stenotype machine (like those used in courtrooms), so the captions tend to lag slightly behind the audio and inevitably include some paraphrasing and errors. While the Federal Communications Commission requires American television stations' post-production captions to meet certain standards, the Internet remains largely unregulated. Video-sharing websites like YouTube have struggled to provide accessible captions. Despite YouTube's recent efforts to improve accessibility, its captions continue to disappoint viewers, especially members of the deaf and hard-of-hearing community.
In a 2014 article in The Atlantic titled "The Sorry State of Closed Captioning," Tammy H. Nam explains why machines cannot create the same experience humans can. She posits, "Machine translation is responsible for much of today’s closed-captioning and subtitling of broadcast and online streaming video. It can’t register sarcasm, context, or word emphasis." By relying on machines instead of human writers and editors, sites like YouTube are not providing deaf and hard-of-hearing viewers the same experience they provide everyone else. Humans can tell which homophone to use from context: there is an enormous difference between the words soar and sore, air and heir, suite and sweet. Humans can also judge when a sound matters to the plot of a story and include it in the captions so that a non-hearing viewer won't miss critical details. In the same article, deaf actress Marlee Matlin says, "I rely on closed captioning to tell me the entire story…I constantly spot mistakes in the closed captions. Words are missing or something just doesn’t make sense." Accessible closed captions should follow the spoken dialogue and important sounds exactly so that viewers stay immersed in the story. Having to decipher poor captions pulls the viewer out of the flow of the story and creates a frustrating experience.
In 2010, YouTube introduced its own automatic captioning software for creators to use. The software is known for its incomprehensible captions. Deaf YouTuber and activist Rikki Poynter made a video in 2015 highlighting the various ways in which YouTube's automatic captions are inaccessible. In a 2018 blog post, she described her experience with the software: "Most of the words were incorrect. There was no grammar. (For the record, I’m no expert when it comes to grammar, but the lack of punctuation and capitalization sure was something.) Everything was essentially one long run-on sentence. Captions would stack up on each other and move at a slow pace." For years, Rikki and other deaf and hard-of-hearing YouTube users had to watch videos with barely any of the audio accurately conveyed. Although her blog post highlights the ways in which YouTube's automatic captions have improved since 2015, she writes, "With all of that said, do I think that we should choose to use only automatic captions? No, I don’t suggest that. I will always suggest manually written or edited captions because they will be the most accurate. Automatic captions are not 100% accessible and that is what captions should be." The key word is accessible. When captions do not accurately reflect the spoken words in videos, television shows, and movies, the stories and information are inaccessible to the deaf and hard of hearing. Missing words, incorrect words, poor timing, and captions covering subtitles or other important graphics all take the viewer out of the experience or omit information critical to fully understanding and engaging with the content. Until web resources like YouTube take their deaf and hard-of-hearing viewers' complaints seriously, they will continue to alienate them.
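To make concrete what "manually written or edited captions" look like, here is a small sketch of a caption file in WebVTT, one of the formats YouTube accepts for uploaded captions. The cue text and timings are invented for illustration, but they show the things Poynter points to: punctuation, capitalization, cues timed to the dialogue, and bracketed sound descriptions for audio that matters to the story:

```
WEBVTT

00:00:01.000 --> 00:00:03.800
Hi, everyone! Welcome back to my channel.

00:00:04.000 --> 00:00:06.500
[door slams offscreen]

00:00:06.700 --> 00:00:09.200
Sorry about that. Let's get started.
```

Each cue is a short, readable unit tied to a specific time range, so the captions keep pace with the video instead of stacking into one long run-on sentence. Automatic captioning tends to get the timestamps roughly right while losing exactly the punctuation, casing, and sound cues shown here.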
So, what can we do about poor closed captioning on the web? Fortunately, the Internet is also an amazing tool that gives consumers a voice in how they experience web content. Deaf and hard-of-hearing activists like Marlee Matlin, Rikki Poynter, and Sam Wildman have used their online platforms to push for better closed captions on the web. Follow in their footsteps and use the voice the web gives you: make a YouTube video like Rikki Poynter's or write a blog post like Sam Wildman's "An Open Letter to Netflix Re: Subtitles."
The Internet is a powerful platform through which large companies like Google can hear directly from their consumers. If you would like to see the quality of closed captions on the web improve, use your voice. Otherwise, you'll continue to see memes like this one...
If providing access to the deaf and hard of hearing isn't incentive enough, will more YouTube views persuade you? YouTube creators today are forced to look for new and exciting ways to attract viewers, and it's only getting harder to stand out in the ever-growing, overcompetitive, viral-hungry, trend-hopping, video-sharing ecosystem that is YouTube (see the video host's latest press release for an idea; the numbers are staggering, unsurprising, and deservedly proud, boasting that over 6 billion hours of YouTube video are watched every month).
Although intended for the deaf and hard-of-hearing community, captions provide a lesser-known secondary benefit to an unlikely recipient: YouTubers. Content creators are using captions, and they're doing it for more hits. With rewarding incentives from YouTube, creators are reaping the benefits of "popularizing" their content by tapping into larger audiences, albeit in a few unlikely places.