In the history of our planet, littering is a relatively new problem. It was around the 1950s when manufacturers began producing a higher volume of litter-creating material, such as disposable products and packaging made with plastic. Much like the boom of manufacturers creating more disposable packaging, new video content is being pushed out to streaming platforms in incredible volumes every day.
Along with all this new video content, there are noticeable similarities between littering and a prevalent problem in our industry: inaccessible media – specifically poor captioning quality. Instead of it being food wrappers, water bottles, plastic bags, or cigarette butts, it’s misspellings, lack of punctuation, missing words, or the wrong reading rate (words-per-minute on the screen) that affects readability.
The motives behind littering and choosing poor-quality captioning are similar, generally boiling down to one of the following: laziness or carelessness, lenient enforcement, or the presence of litter already in the area. Both are selfish acts: one person takes the easy route by discarding their trash wherever they please, or, in the case of captioning, by choosing the quickest and cheapest option available without any regard for quality. When the organizations enforcing the guidelines and standards relax their efforts, many people stop following them. And the sight of other content creators getting away with inaccessible media will, no doubt, encourage others to take the same route.
In The Big Hack’s survey of over 3,000 disabled viewers, four in five disabled people reported experiencing accessibility issues with video-on-demand services. “66% of users feel either frustrated, let down, excluded or upset by inaccessible entertainment.” In fact, “20% of disabled people have canceled a streaming service subscription because of accessibility issues.” It’s clear: inaccessible media is polluting video content libraries.
Viewers who do not use closed captions may not always think about how poor-quality captions affect those who do, just as the consequences of littering for the community and animals that share the Earth’s ecosystem are often overlooked. Education and awareness are important tools in reducing the problem. If we allow it to become commonplace, bad captioning, much like litter, will wash into the “ocean” of online video content and become permanent pollution in our video “ecosystem.”
So, what can we do about it before it’s too late? Much like with littering, we can start with community cleanups. Let content creators know that you value captioning and would enjoy their content more if captions were present and accurately represented the program for all viewers. Find their websites and social media pages and contact them – make them aware. And if it’s on broadcast television, let the FCC know.
Clean communities have a better chance of attracting new business, residents, and tourists – the same will go for the online video community. Quality captioning is your choice and, for the sake of the video community, please evaluate the quality of work done by the captioning vendors that you’re considering and don’t always just go for the cheapest and quickest option. Help keep the video community clean.
Automated transcription and captioning technologies continue to improve – there’s no denying that. It was only a few years ago that the industry was applauding accuracy levels above 80%; nowadays, we’re witnessing results above 90%. That’s an impressive improvement, but there’s still that 5-10% error margin, which will never be acceptable and can easily cause the viewer to miss the message being conveyed.
Consider this: the average English speaker holds a conversation at around 120-150 words per minute (WPM). Trained, professional speakers, however, will speak between 150-170 WPM, and the more charismatic and confident presenters go even faster. During his TED Talk Why We Do What We Do, motivational speaker Tony Robbins clocked in at an average of 201 WPM. Applying that 5-10% error rate to Tony’s 21-minute talk (roughly 4,200 words) would yield anywhere from about 200 to over 400 errors – hundreds of chances for a wrong word to completely transform the meaning of a sentence.
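As a rough illustration, here is a minimal sketch of that arithmetic, assuming the WPM and error-rate figures quoted above (actual totals depend on the real transcript length):

```python
# Back-of-the-envelope estimate of captioning errors in a fast-paced talk.
# Figures are the ones quoted above; real transcripts will vary.
wpm = 201        # average words per minute in the talk
minutes = 21     # approximate talk length
total_words = wpm * minutes  # roughly 4,221 words

for error_rate in (0.05, 0.10):  # the 5-10% error margin of automated captions
    errors = round(total_words * error_rate)
    print(f"{error_rate:.0%} error rate -> ~{errors} caption errors")

# Prints: 5% error rate -> ~211 caption errors
#         10% error rate -> ~422 caption errors
```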
This is where human live captioners will continue to win over the automated competition. Given proper materials from the presenter, a live captioner can prepare for the session. Supporting documents provided ahead of an event help the live captioner prepare for any uncommon terminology that may be used, learn important acronyms, and know the proper spelling of people’s names. Live captioners can get a sense of the presenter’s speaking style and WPM beforehand, identify any accents, and may even work with the speaker on how best to keep pace with them. Preparation like this is why our live captioners can write at an accuracy rate of 98% or higher.
Below is the most recent kudos we received from a client whose event checked all of the boxes described above. Our live captioner’s preparation maintained the highest quality standards and drew attention to the fact that captions often help ALL viewers follow along.
Michele was our captioner for a FranklinCovey team event today where someone from India gave an interactive virtual tour of street art. Michele did an AMAZING job. It was a challenging assignment. The speaker was extremely fast-paced, was discussing names and extremely specific locations that wouldn’t be familiar to most native English speakers, and of course he had a strong accent to American ears. Michele joined early, communicated directly with the speaker about how they could coordinate if needed, and then her captions were absolutely phenomenal. I had the streamtext pulled up because sometimes it was hard for me, with full hearing, to pick up on what was said. Her ability to keep pace and accuracy was truly top-notch.
There is no way our team member who is Deaf would have been able to follow and participate in the event without Michele’s captions. By making the event accessible, Michele also did more than just help our team member feel included – she made it possible for me to bring a diverse experience to the whole team. Without captions, I wouldn’t have scheduled that event, knowing it would be hard to follow, and the entire team would have lost out on something that is a big priority for FranklinCovey.
I imagine captioners often go unthanked by clients and are the unsung heroes of the meetings they are in. It is easy for people who don’t use captions to not even realize what is happening behind the scenes. But I want you all to know, and Michele specifically to know – I am so grateful. Michele’s remarkable talents make an important difference in our world.
We genuinely appreciate it when the preparation and dedication of our live captioners are noticed and have a profound impact, as they did here. Results like this keep our team truly passionate about producing the best possible accuracy in the work we do. And it’s the reason why we continue to use human live captioners on all of our events. They’ll win every time.
This article was our contribution to the Fall 2020 edition of ChurchLeaders MinistryTech magazine. https://churchleaders.com/outreach-missions/outreach-missions-articles/382555-captioning.html
Technological advancements have made preaching the Gospel through new mediums easier than ever – and the limitations in place due to the COVID-19 pandemic have made embracing these new technologies a necessity. A majority of the fastest-growing churches in the U.S. had already begun live-streaming their services as a way to grow and connect with audiences that may not be able to physically attend due to distance, age, or a disability. Now, it’s a scramble for everyone else to get on board with a solution.
But this new burden to adapt is not all bad. So far, we are hearing a positive response from ministries: the newly implemented video streams of their services have not only provided an adequate solution for their congregations but have also gained them exposure to more members of their communities. This echoes a common trend among the churches that make Outreach’s 100 Fastest-Growing Churches in America list every year: online services.
Like nearly every institution in American life, places of worship have been hit hard by the novel coronavirus and subsequent social distancing measures – no longer able to physically gather as one, to collectively nod their heads when a verse speaks to them, or to sway together during songs of worship.
The rules vary from state to state, but here in California places of worship have been asked to “discontinue indoor singing and chanting activities and limit indoor attendance to 25% of building capacity or a maximum of 100 attendees, whichever is lower.” They are also encouraged to “consider practicing these activities through alternative methods (such as internet streaming).”
So amid the uncertainty of how and when the regulations will change, religious leaders have turned to online platforms to practice their faith with community members. Since March of this year, BoxCast, the complete live video streaming solution popular among churches, has experienced an 85% increase in active accounts and a 500% increase in viewing minutes compared to the same period last year. Even the modestly sized church streaming platform streamingchurch.net saw an immediate 20% increase in its subscriber base and its total viewership triple to 60,000 weekly viewers.
Rick Warren of Saddleback Church reports that in the last 23 weeks – since the church moved to online-only services – they have more than doubled their weekly attendance of 45,000. This is their greatest growth in the shortest amount of time in their 40-year history.
The silver lining here is that being forced to find an online solution has made the message more accessible than ever. And once the setup is in place to live-stream your services, keeping it as an option for those unable to attend in person, even after all restrictions are lifted, will be an invaluable resource for continued growth.
As audiences grow, it is important to point out that approximately 20% of American adults aged 18 and over (48 million!) report some trouble hearing. Some of the audience may be sitting in silence – literally.
Captions are words displayed on a television, computer, mobile device, etc., providing the speech or sound portion of a program or video via text. Captions allow viewers to follow the dialogue and the action of a program simultaneously. Captions can also provide information about who is speaking or about sound effects that might be important to understanding the message.
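As a concrete illustration, here is a minimal sketch that writes out a hypothetical caption cue list in the WebVTT format (the timings, speaker label, and sound-effect description are invented for illustration); it shows how dialogue, speaker identification, and a non-speech sound can appear as text:

```python
# Minimal, hypothetical WebVTT caption file showing dialogue text,
# a speaker identifier, and a bracketed sound-effect description.
webvtt = """WEBVTT

00:00:01.000 --> 00:00:04.000
>> PASTOR: Welcome, everyone, to this morning's service.

00:00:04.500 --> 00:00:06.500
[congregation applauds]
"""

# Write the cues to a .vtt file that most web video players can load.
with open("sermon_captions.vtt", "w") as f:
    f.write(webvtt)
```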
Captions aid comprehension and clarification of the dialogue – and not just for those with hearing loss. Reading along with captions can also help other members of the congregation with concentration and engagement.
After surveying a small sample of churches using captioning, we’ve seen a similar pattern: they start by adding captioning to one service a week to gauge the response. Most see encouraging engagement numbers on that service, then add captions to the remaining services and even begin captioning their archived videos of past sermons.
So as your audience grows, consider becoming even more accessible with captioning and ensure you’re reaching that additional 20%.
When we launched our new website last year, we also made the move to host it on the .IO domain. Why? Because all of the advancements we're executing behind the scenes – the ones that help ensure we're captioning and delivering media in the most advanced and efficient way possible – are connected by our application programming interfaces ("APIs") built on this domain. Having our website on the same domain is a reminder of our continued dedication to our clients to improve, and it highlights one of our core values: to be solution driven.
Our most considerable investment was our move to the AWS platform – the gold standard in the industry, on which more than 40% of cloud computing runs. This move has not only permitted us to work uninterrupted through the current global pandemic, but it also allows us to expedite those quick-turn broadcast deliveries for our clients without losing the personal touch or eyes-on review by our AberFast engineers that makes our video delivery service unique.
Here’s our President, Matt Cook, explaining how a client came to us with a new challenge of captioning and delivering time-sensitive content – where every minute counts – and how we found the perfect solution for them:
While the delivery time frame is shrinking, viewers still expect quality — as do the broadcast outlets and online platforms that make content available. We are well aware of these pressures and that’s why we encourage you to bring us your challenges so our team can find the most efficient workflow for you.
Engineering solutions is one of our commitments to our clients and one of the values to which our team is dedicated.
**Update: July 29, 2020** – Changes have been made to the integration methods on Webex, YouTube, and The Church Online platforms since we hosted this webinar. Please download the materials to get the most current charts.
From April 9th, 2020
Virtual gatherings have become the new normal under the stay-at-home orders put in place as a result of the COVID-19 outbreak. It's as important as ever for organizations to keep their audiences engaged, informed, and connected. With that in mind, it’s also essential that these events be accessible to the deaf and hard of hearing – approximately 20% of American adults.
We've been receiving countless phone calls from organizations looking for answers on how captioning works, what's possible, and how quickly they can get set up and ready to go.
In this 30-minute webinar, Matt Cook (President) and Becky Isaacs (Executive VP & Live Captioning Manager) from Aberdeen Broadcast Services discuss the options available for implementing accessible meetings with captioning – from the most simplified approach, to seamlessly integrating with your video player or conferencing platform.
Whether you use Zoom, Adobe Connect, or Microsoft Teams for your virtual meetings, or YouTube, Facebook Live, or even The Church Online platform to stream your remote events, Aberdeen can find a closed captioning solution that keeps your audience engaged and your content accessible.
We are constantly reminded of the difference that live captioning services can make for a ministry.
Take a quick listen (under a minute!) as our Matt Cook, Aberdeen President, talks about Woodland’s success with live captioning.
After working with Woodlands Church for many years on post-produced closed captioning and AberFast Station Delivery of the Kerry Shook program, it was just a few years ago that they decided to give our live captioning services a try. Before long, they discovered all of the positive ways live captioning supports their events and services – and, most importantly, makes them more accessible to church members.
“Woodlands Church has used Aberdeen live captioning services since 2018. The audience engagement and measurable growth have been so positive that we've added captioning to an additional service.”
Vince W. - Online Campus Pastor – Woodlands Church
It is not every day you get to observe acts of pure kindness and dedication to service in today’s world. Last week, Becky Isaacs, founder of Aberdeen, and I had the privilege of observing the No Limits after-school program for deaf children.
During our visit, we were given the grand tour and introduced to sweet Jacob, who explained to a very ignorant person (I’m speaking about myself here) how cochlear implants work. We also met adorable Ian, who spoke his first word about two months ago and just turned 2 years old.
The statistics we learned from their volunteers were staggering: over 75% of deaf children in California graduate from high school functionally illiterate, unable to read or write above the fourth-grade level, and over 90% of children with hearing loss are born to hearing parents. The program helps battle these statistics through advocacy, awareness, and education, while implementing the core values listed in its mission statement.
Honestly, there aren’t enough words to do justice to this wonderful program. It was truly inspiring and touching to see. The educators, volunteers, and staff who work with the program truly have a servant’s heart. They help these children no matter their circumstances, and students who have attended No Limits for three years or more have gone on to enroll in and graduate from college.
When our tour came to an end, we were in awe. And, I kept on asking myself, what more can I do to help? I was even more moved when they thanked Aberdeen for the closed captioning services our company provides for deaf and hard-of-hearing individuals. Seriously, what a great bunch of people!
To learn more about No Limits, or to see how you can volunteer or donate to this wonderful program, please visit their website at http://www.nolimitsfordeafchildren.org/.
There’s a growing trend on social media and sites like Reddit and Quora to showcase captioning errors from television and numerous online platforms. As accessibility laws tighten and the quality standards for captioning on broadcasts become more rigorous, how do these bloggers have so much fuel for their posts on captioning errors? It is a simple question with many complicated answers.
Live television programming is captioned in real-time, either by machines or by humans working with a stenotype machine (like those used in courtrooms), and thus tends to lag slightly behind and, inevitably, includes some paraphrasing and errors. While the Federal Communications Commission requires American television stations' post-production captions to meet certain standards, the Internet remains vastly unregulated. Video-sharing websites like YouTube have struggled to provide accessible captions. Despite YouTube's recent efforts to improve accessibility, their captions continue to disappoint viewers, especially those in the deaf and hard-of-hearing community.
In a 2014 article in The Atlantic called "The Sorry State of Closed Captioning," Tammy H. Nam explains why machines cannot create the same experience humans can. She posits, "Machine translation is responsible for much of today’s closed-captioning and subtitling of broadcast and online streaming video. It can’t register sarcasm, context, or word emphasis." By using machines instead of human writers and editors, sites like YouTube are not providing the deaf and hard of hearing with the same viewing experience their other patrons enjoy. Humans can tell which homophone to use based on context; there is an enormous difference between soar and sore, air and heir, suite and sweet. Humans can also determine when a noise is important to the plot of a story and include it in the captions so that a non-hearing viewer won't miss critical details. In the same Atlantic article, deaf actress Marlee Matlin says, "I rely on closed captioning to tell me the entire story…I constantly spot mistakes in the closed captions. Words are missing or something just doesn’t make sense." Accessible closed captions should follow the spoken dialogue and important sounds exactly, so that viewers stay immersed in the story. Having to decipher poor captions takes the viewer out of the flow of the story and creates a frustrating experience.
YouTube created its own auto-caption software for its creators to use in 2010. The software is known for its incomprehensible captions. Deaf YouTuber and activist Rikki Poynter made a video in 2015 highlighting the various ways in which YouTube's automatic captions are inaccessible. She wrote a 2018 blog post explaining her experience with the software: "Most of the words were incorrect. There was no grammar. (For the record, I’m no expert when it comes to grammar, but the lack of punctuation and capitalization sure was something.) Everything was essentially one long run-on sentence. Captions would stack up on each other and move at a slow pace." For years, Rikki and other deaf and hard-of-hearing YouTube users had to watch videos with barely any of the audio accurately conveyed. Although her blog post highlights the ways in which YouTube's automatic captions have improved since 2015, she writes, "With all of that said, do I think that we should choose to use only automatic captions? No, I don’t suggest that. I will always suggest manually written or edited captions because they will be the most accurate. Automatic captions are not 100% accessible and that is what captions should be." The keyword is accessible. When captions do not accurately reflect the spoken words in videos, television shows, and movies, the stories and information are inaccessible to the deaf and hard of hearing. Missing words, incorrect words, poor timing, and captions covering subtitles or other important graphics all take the viewer out of the experience or leave out information critical to fully understanding and engaging with the content. Until web resources like YouTube take their deaf and hard-of-hearing viewers' complaints seriously, they will continue to alienate them.
So, what can we do about poor closed captioning on the web? Fortunately, the Internet is also an amazing tool that allows consumers and users to have a voice in the way they experience web content. Deaf and hard-of-hearing activists like Marlee Matlin, Rikki Poynter, and Sam Wildman have been using their online platforms to push for better web captions. Follow in their footsteps and use the voice that the web gives you. Make a YouTube video like Rikki Poynter or write a blog post like Sam Wildman's "An Open Letter to Netflix Re: Subtitles."
The Internet is a powerful platform through which large companies like Google can hear directly from their consumers. If you would like to see the quality of closed captions on the web improve, use your voice. Otherwise, you'll continue to see memes like this one...
“The ability to speak does not make you intelligent.”
- Qui-Gon Jinn
One could imagine the kinds of responses people might give when asked to list humanity's greatest achievements. A quick internet search reveals such monumental accomplishments as the building of the pyramids, the advent of flight, or the recording of Pink Floyd's "Dark Side of the Moon." Not surprisingly, few of these lists include the creation of language — more specifically, grammar — as a great accomplishment, but how many of humanity's greatest achievements would have been possible without the ability of human beings to communicate with one another in precise terms? The phrases failure to communicate and communication breakdown are commonly heard, and indeed, poor communication seems to be a handy scapegoat for a myriad of failures, including marriages, wars, and road construction projects. It seems natural for humans to go straight to the source and blame language when things go wrong, so what exactly is the point at which language loses all meaning?
Last month, the FCC amended a few sections of Title 47 CFR 79.1: the rule pertaining to closed captioning of televised video programming. The amendments, specifically to 79.1(g)(1) through (9) and (i)(1) through (2), along with the removal of (j)(4), are a follow-up to the proposed reallocation of responsibilities of the Video Programmers and Video Program Distributors first established back in early 2016. The updates to the rule reflect the final decisions on how a compliance ladder will operate when handling consumer complaints related to closed captioning quality concerns.
The ruling focuses on two different scenarios based on how the consumer makes a complaint. The FCC recommends filing all complaints within 60 days of the problem, either directly with the FCC or with the Video Program Distributor (VPD) responsible for delivering the program to the consumer. Depending on how the complaint is filed, the review and the steps taken to correct the issue should follow the process outlined below.