On October 11, 2022, the Federal Communications Commission (FCC) released the latest CVAA biennial report to Congress, evaluating the current industry compliance as it pertains to Sections 255, 716, and 718 of the Communications Act of 1934. The biennial report is required by the 21st Century Communications and Video Accessibility Act (CVAA), which amended the Communications Act of 1934 to include updated requirements for ensuring the accessibility of "modern" telecommunications to people with disabilities.
FCC rules under Section 255 of the Communications Act require telecommunications equipment manufacturers and service providers to make their products and services accessible to people with disabilities. If such access is not readily achievable, manufacturers and service providers must make their devices and services compatible with third-party applications, peripheral devices, software, hardware, or consumer premises equipment commonly used by people with disabilities.
Despite major design improvements over the past two years, the report reveals that accessibility gaps still persist and that industry commenters are most concerned about equal access on video conferencing platforms. The COVID-19 pandemic has highlighted the importance of accessible video conferencing services for people with disabilities.
Zoom, BlueJeans, FaceTime, and Microsoft Teams have introduced a variety of accessibility feature enhancements, including screenreader support, customizable chat features, multi-pinning features, and “spotlighting” so that all participants know who is speaking. However, commenters have expressed concern over screen share and chat feature compatibility with screenreaders, along with the platforms’ synchronous automatic captioning features.
Although many video conferencing platforms now offer meeting organizers synchronous automatic captioning to accommodate deaf and hard-of-hearing participants, the Deaf and Hard of Hearing Consumer Advocacy Organization (DHH CAO) pointed out that automated captioning sometimes produces incomplete or delayed transcriptions. Even if slight delays in live captions cannot be avoided, these delays may cause “cognitive overload.” Comprehension can be further hindered if a person who is deaf or hard of hearing cannot see the faces of speaking participants, for “people with hearing loss rely more on nonverbal information than their peers, and if a person misses a visual cue, they may fall behind in the conversation.”
At present, the automated captioning features on these conferencing platforms have an error rate of 5-10% – that’s 5 to 10 errors per 100 words spoken. With the average English speaker talking at around 150 words per minute, you’re looking at the possibility of over a dozen errors every minute.
Earlier this year, our team put Adobe’s artificial intelligence (AI) powered speech-to-text engine to the test. We tasked our most experienced Caption Editor with using Adobe’s auto-generated transcript to create and edit the captions to meet the quality standards of the FCC and the deaf and hard of hearing community on two types of video clips: a single-speaker program and one with multiple speakers.
How did it go? Take a look: Human-generated Captions vs. Adobe Speech-to-text
Open captions and closed captions are both used to provide text-based representations of spoken dialogue or audio content in videos, but they differ in their visibility and accessibility options.
Here's the difference between closed and open captions:
| Feature | Open Captions | Closed Captions |
|---|---|---|
| Visibility | Permanently embedded in the video | Separate text track that can be turned on or off |
| Accessibility | Cannot be turned off | Can be turned on or off by the viewer |
| Applications | Wide audiences, noisy environments | Diverse audiences, compliance with accessibility regulations |
| Creation | Added during video production | Generated in real time, embedded manually during post-production, or uploaded as a sidecar file |
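To make the “sidecar file” option concrete, here is a minimal example in SubRip (SRT), one of the most common sidecar caption formats. Each cue is just a sequence number, a timecode range, and the caption text (the dialogue below is invented for illustration):

```
1
00:00:01,000 --> 00:00:04,000
Welcome to the webinar.

2
00:00:04,500 --> 00:00:07,200
Let's get started with today's agenda.
```

Because the file lives alongside the video rather than being burned into the picture, the player can offer it as an on/off caption track.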
Both open and closed captions serve the purpose of making videos accessible to individuals who are deaf or hard of hearing, those who are learning a new language, or those who prefer to read the text alongside the audio.
The choice between open or closed captions depends on the specific requirements and preferences of the content creators and the target audience.
This article is current as of February 4th, 2022.
A few months ago, Zoom announced that auto-generated captions (also known as live transcription) were now available for all Zoom meeting accounts. The development has been a long-awaited feature for the deaf and hard-of-hearing community.
As popular and ubiquitous as Zoom has become, it can be overwhelming to understand its multiple meeting features and options – especially with regard to closed captioning. Here at Aberdeen Broadcast Services, we offer live captioning services with a team of highly trained, experienced captioners who maintain the largest known dictionaries in the industry. CART (Communication Access Realtime Translation) captioning is still considered the gold standard of captioning (see related post: FranklinCovey Recognizes the Human Advantage in Captioning), and our team at Aberdeen strives to go above and beyond expectations with exceptional captioning and customer service.
Whether you choose to enable Zoom’s artificial intelligence (AI) transcription feature or integrate a 3rd-party service, like Aberdeen Broadcast Services, the following steps will help ensure you’re properly setting up your event for success.
To get started, you'll need to enable closed captioning in your Zoom account settings.
Scroll down to the “Closed captioning” options.
In the top right, enable closed captions by toggling the button from grey to blue to “Allow host to type closed captions or assign a participant/3rd-party service to add closed captions.”
Below is a detailed description of the three additional closed captioning options in the settings...
This feature enables a 3rd-party closed captioning service, such as Aberdeen Broadcast Services, to caption your Zoom meeting or webinar using captioning software. The captions from a 3rd-party service are integrated into the Zoom meeting via a caption URL or API token that sends its captions to Zoom. For a 3rd-party service such as Aberdeen to provide captions within Zoom, this feature must be enabled.
As mentioned earlier in this post, auto-generated captions or AI captions became available to all Zoom users in October 2021. Zoom refers to auto-generated captions as its live transcription feature, which is powered by automatic speech recognition (ASR) and artificial intelligence (AI) technology. While not as accurate, ASR is an acceptable way to provide live captions for your Zoom event if you are not able to secure a live captioner. If you will be having a live captioner through a 3rd-party service in your meeting, do NOT check “Allow live transcription service to transcribe meeting automatically.”
Unless you expect to use Zoom’s AI live transcription for most of your meetings, it is best to uncheck or disable live transcription as Zoom’s AI auto-generated captions will override 3rd-party captions in a meeting if live transcription is enabled.
This setting gives the audience an additional option to view what is being transcribed during your Zoom meeting or webinar. In addition to viewing captions as subtitles at the bottom of the screen, users will be able to view the full transcript on the right side of the meeting.
The meeting organizer or host controls who can save a full transcript of the closed captions during a meeting. Enabling the Save Captions feature grants that access to the entire participant list in a meeting.
Transcript options from 3rd-party services may vary. At Aberdeen Broadcast Services, we provide full transcripts in a variety of formats to fit your live event or post-production needs. For more information, please see our list of captioning exports or contact us.
Once the webinar or meeting is live, the individual assigned as the meeting host can acquire the caption URL or API token.
As the host, copy the API token by clicking on the Closed Caption or Live Transcript button on the bottom of the screen and selecting Copy the API token, which will save the token to your clipboard.
By copying the API token, you will not need to assign yourself or a participant to type. Send the API token to your closed captioning provider to integrate captions from a 3rd-party service into your Zoom meeting. We ask that clients provide the API token at least 20 minutes before an event (and no earlier than 24 hours) to avoid any captioning issues.
Once the API token has been activated within your captioning service, the captioner will be able to test captions from their captioning software.
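Under the hood, Zoom’s 3rd-party caption integration is a simple HTTP interface: the “API token” the host copies is a URL, and the captioning service delivers each caption line by POSTing plain text to it with an incrementing sequence number. Below is a minimal sketch in Python of how a captioning tool might do this; the exact query parameters belong to Zoom’s interface, and the URL shown in any real session comes from the meeting host (the one below is a placeholder):

```python
import urllib.request
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def build_caption_url(api_token_url: str, seq: int, lang: str = "en-US") -> str:
    """Append the caption sequence number and language to the caption URL
    (the 'API token') that the Zoom meeting host copies and shares."""
    parts = urlparse(api_token_url)
    query = dict(parse_qsl(parts.query))
    query.update({"seq": str(seq), "lang": lang})
    return urlunparse(parts._replace(query=urlencode(query)))

def post_caption(api_token_url: str, seq: int, text: str) -> int:
    """POST one caption line to Zoom; returns the HTTP status code."""
    req = urllib.request.Request(
        build_caption_url(api_token_url, seq),
        data=text.encode("utf-8"),
        headers={"Content-Type": "text/plain; charset=utf-8"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

In practice the captioning software handles this delivery automatically once the token is pasted in, which is why the captioner can immediately send a short test caption to confirm the connection.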
A notification should pop up at the top of the Zoom meeting saying “Live Transcription (Closed Captioning) has been enabled,” and the Live Transcript or Closed Caption button will appear at the bottom of the screen for the audience. Viewers can now choose Show Subtitle to view the captions.
Viewers will be able to adjust the size of captions by clicking on Subtitle Settings...
Yes! Captioning multiple breakout rooms occurring at the same time is possible using the caption API token to integrate with a 3rd-party, such as Aberdeen Broadcast Services. Zoom's AI live transcription option is currently not supported in multiple Zoom breakout rooms, which is why it is important to consult with live captioning experts to make that happen. Contact us to learn more about how it works.
Enjoy this post? Email sales@abercap.com for more information or feedback. We look forward to hearing your thoughts!
Closed captioning is an essential aspect of modern media consumption, bridging the gap of accessibility and inclusivity for diverse audiences. Yet, despite its widespread use, misconceptions about closed captioning persist. In this article, we delve into the most prevalent myths surrounding this invaluable feature, shedding light on the truth behind closed captioning's capabilities, impact, and indispensable role in enhancing the way we interact with video content.
Let’s debunk these common misunderstandings and gain a fresh perspective on the far-reaching importance of closed captioning in today's digital landscape.
While closed captions are crucial for people with hearing impairments, they benefit a much broader audience. They are also helpful for people learning a new language, those in noisy environments, individuals with attention or cognitive challenges, and viewers who prefer to watch videos silently.
While there are automatic captioning tools, they are not always accurate, especially with complex content, background noise, or accents. Human involvement is often necessary to ensure high-quality and accurate captions.
Some formats, like SCC, support positioning and allow captions to appear in different locations. However, most platforms use standard positioning at the bottom of the screen.
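For a text-based illustration of caption positioning (SCC itself is a binary-coded format, so WebVTT is easier to show), a WebVTT sidecar file lets each cue carry settings such as `line`, `position`, and `align`:

```
WEBVTT

1
00:00:01.000 --> 00:00:04.000 line:0 position:50% align:center
This caption is placed at the top of the frame.

2
00:00:04.500 --> 00:00:07.000
This one uses the default placement at the bottom.
```

Moving captions like this is commonly used to avoid covering on-screen text such as speaker names or scoreboards.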
While it's possible to add closed captions after video production, it's more efficient and cost-effective to incorporate captioning during the production process. Integrating captions during editing ensures a seamless viewing experience.
Closed captioning is essential for television and films, but it's also used in various other video content, including online videos, educational videos, social media clips, webinars, and live streams.
Different platforms and devices may have varying requirements for closed caption formats and display styles. To ensure accessibility and optimal viewing experience, captions may need adjustments based on the target platform.
Captioning standards and regulations vary between countries, and it's essential to comply with the specific accessibility laws and guidelines of the target audience's location.
While manual captioning can be time-consuming, there are cost-effective solutions available, including automatic captioning and professional captioning services. Moreover, the benefits of accessibility and broader audience reach often outweigh the investment.
In summary, closed captioning is a vital tool for enhancing accessibility and user experience in videos. Understanding the realities of closed captioning helps ensure that content creators and distributors make informed decisions to improve inclusivity and reach a broader audience.
In this video, Becky Isaacs, Executive Vice President of Aberdeen Broadcast Services, interviews Rob and David from Arlington County, Virginia. The two have been partners in live captioning for the past decade. Rob serves as the executive producer for Arlington County's government cable channel, which encompasses both live streaming and traditional cable TV programming. Meanwhile, David manages engineering, provides production assistance, and oversees live captioning.
Arlington TV is the visual communications channel for Arlington County Government and serves a county of about 230,000 people. Described as a hyper-local version of C-SPAN, Arlington TV broadcasts county board meetings, commission meetings, talk shows, and community town hall meetings. With a mixture of cord-cutters and cable television subscribers in the community, it’s important that their programming is available across multiple outlets – Comcast Xfinity, Verizon FiOS, and online platforms like Facebook, YouTube, and Granicus.
Rob and David describe their technical setup, which involves connecting Aberdeen's live captioners to their audio and video systems via analog phone lines and the EGHT 1430 Ross OpenGear card, with Granicus handling the video feed and captions. This setup ensures that live captions are integrated into their broadcast and streaming content. They share advice for others in a similar position, emphasizing the importance of having a reliable setup and the benefits of human captioners for context and troubleshooting. They appreciate Aberdeen's flexibility and availability during unpredictable meetings, demonstrating a strong working relationship.
Arlington County began captioning due to a Justice Department requirement for ADA compliance. They had issues with smaller captioning companies in the beginning but eventually found Aberdeen back in 2009. Rob and David expressed their deep appreciation for Aberdeen's captioning services. They emphasized the absence of problems and the seamless captioning experience – Rob likened this achievement to an Emmy or Academy Award, highlighting that they haven't received complaints in many years, a testament to Aberdeen's reliability. David is particularly impressed by Aberdeen's accuracy and speed in handling challenging speakers, including those with accents or speech impediments. Aberdeen's team, including captioners and account managers, ensures that captioning support is available regardless of the time, accommodating early morning and late-night meetings.
Rob and David acknowledge that captioning often goes unnoticed when it works seamlessly, and they appreciate how Aberdeen's human captioners excel in understanding context, resulting in more accurate and contextually relevant captions. This focus on enhancing the viewer experience underscores the value of their partnership with Aberdeen.
In the history of our planet, littering is a relatively new problem. It was around the 1950s when manufacturers began producing a higher volume of litter-creating material, such as disposable products and packaging made with plastic. Much like the boom of manufacturers creating more disposable packaging, new video content is being pushed out to streaming platforms in incredible volumes every day.
Along with all this new video content, there are noticeable similarities between littering and a prevalent problem in our industry: inaccessible media – specifically poor captioning quality. Instead of it being food wrappers, water bottles, plastic bags, or cigarette butts, it’s misspellings, lack of punctuation, missing words, or the wrong reading rate (words-per-minute on the screen) that affects readability.
The motives behind littering and choosing poor-quality captioning are similar, generally boiling down to one of the following: laziness or carelessness, lenient enforcement, and/or the presence of litter already in the area. Both are selfish acts – one person takes the easy route by discarding their trash wherever they please, or, in the case of captioning, by choosing the quickest and cheapest option available without any regard for quality. When the organizations enforcing guidelines and standards relax their efforts, many people are encouraged not to follow them. And the presence of other content creators getting away with inaccessible media will, no doubt, encourage others to take the same route.
In The Big Hack’s survey of over 3,000 disabled viewers, four in five reported experiencing accessibility issues with video-on-demand services. “66% of users feel either frustrated, let down, excluded or upset by inaccessible entertainment.” In fact, “20% of disabled people have canceled a streaming service subscription because of accessibility issues.” It’s clear: inaccessible media is polluting video content libraries.
Viewers who do not use closed captions may not always think about how poor-quality captions affect the users who do, just as the consequences of littering on the community and the animals that share the Earth’s ecosystem are often overlooked. Education and awareness are important tools in reducing the problem. If we allow it to become commonplace, much like litter, bad captioning will wash into the “ocean” of online video content and become permanent pollution in our video “ecosystem.”
So, what can we do about it before it’s too late? Much like with littering, we can start with community cleanups. Let content creators know that you value captioning and would enjoy their content more if captions were present and accurately represented the program to all viewers. Find their websites and social media pages and contact them – make them aware. And if it’s on broadcast television, let the FCC know.
Clean communities have a better chance of attracting new business, residents, and tourists – the same will go for the online video community. Quality captioning is your choice and, for the sake of the video community, please evaluate the quality of work done by the captioning vendors that you’re considering and don’t always just go for the cheapest and quickest option. Help keep the video community clean.
Automated transcription and captioning technologies continue to improve – there’s no denying that. It was only a few years ago that the industry was applauding accuracy levels above 80%; nowadays, we’re witnessing results above 90%. That’s an impressive improvement, but there’s still that error margin of 5-10%, which will never be acceptable and can easily cause the viewer to miss the message being conveyed.
Consider this: the average English speaker has a conversational rate of around 120-150 words per minute (WPM). Trained, professional speakers come in between 150-170 WPM, and the more charismatic and confident presenters go even higher. During his TED Talk Why We Do What We Do, motivational speaker Tony Robbins clocked in at an average of 201 WPM. Applying an error rate of 5-10% to Tony’s 21-minute talk yields anywhere from about 200 to more than 400 errors. That’s hundreds of chances for a wrong word to completely transform the meaning of a sentence.
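The arithmetic behind that estimate is straightforward – total words spoken times the fraction transcribed incorrectly:

```python
def expected_errors(wpm: float, minutes: float, error_rate: float) -> float:
    """Estimated caption errors: words spoken times the ASR error rate."""
    return wpm * minutes * error_rate

# Tony Robbins' talk: ~201 WPM sustained for 21 minutes (~4,221 words)
low = expected_errors(201, 21, 0.05)    # ~211 errors at a 5% error rate
high = expected_errors(201, 21, 0.10)   # ~422 errors at a 10% error rate
```

Even at the optimistic 5% end, that is roughly ten errors every minute of the talk.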
This is where human live captioners will continue to win over the automated competition. Unlike automated systems, live captioners prepare for a session. Supporting documents provided by the presenter ahead of an event help the live captioner prepare for any uncommon terminology, learn important acronyms, and know the proper spelling of people’s names. Live captioners can get a sense of the presenter’s speaking style and WPM beforehand, identify any accents, and may even work with the speaker on how best to keep pace with them. Preparation like this is why our live captioners can write at an accuracy rate of 98% or higher.
Below are the most recent kudos we received from a client whose event checked all the boxes in the examples above. Our live captioner’s preparation maintained the highest quality standards and drew attention to the fact that captions are often helpful for ALL viewers to follow along.
Michele was our captioner for a FranklinCovey team event today where someone from India gave an interactive virtual tour of street art. Michele did an AMAZING job. It was a challenging assignment. The speaker was extremely fast-paced, was discussing names and extremely specific locations that wouldn’t be familiar to most native English speakers, and of course he had a strong accent to American ears. Michele joined early, communicated directly with the speaker about how they could coordinate if needed, and then her captions were absolutely phenomenal. I had the streamtext pulled up because sometimes it was hard for me, with full hearing, to pick up on what was said. Her ability to keep pace and accuracy was truly top-notch.
There is no way our team member who is Deaf would have been able to follow and participate in the event without Michele’s captions. By making the event accessible, Michele also did more than just help our team member feel included – she made it possible for me to bring a diverse experience to the whole team. Without captions, I wouldn’t have scheduled that event, knowing it would be hard to follow, and the entire team would have lost out on something that is a big priority for FranklinCovey.
I imagine captioners often go unthanked by clients and are the unsung heroes of the meetings they are in. It is easy for people who don’t use captions to not even realize what is happening behind the scenes. But I want you all to know, and Michele specifically to know – I am so grateful. Michele’s remarkable talents make an important difference in our world.
We genuinely appreciate it when the preparation and dedication of our live captioners is noticed and has a profound impact as it did here. It’s results like this that keep our team truly passionate about producing the best possible accuracy in the work we do. And it’s the reason why we continue to use human live captioners on all of our events. They’ll win every time.
This article was our contribution to the Fall 2020 edition of ChurchLeaders MinistryTech magazine. https://churchleaders.com/outreach-missions/outreach-missions-articles/382555-captioning.html
Technological advancements have made preaching the Gospel through new mediums easier than ever – and the limitations in place due to the COVID-19 pandemic have made embracing these new technologies a necessity. A majority of the fastest-growing churches in the U.S. had already begun live-streaming their services as a way to grow and connect with audiences that may not be able to physically attend due to distance, age, or a disability. Now, it’s a scramble for everyone else to get onboard with a solution.
But this new burden to adapt is not all that bad. So far, we are hearing a positive response from ministries that the newly implemented video streams of their services have not only provided an adequate solution for their congregations but have also gained them exposure to more members of their communities. This points to a common trend among the churches that make Outreach’s 100 Fastest-Growing Churches in America list every year: online services.
Like nearly every institution in American life, places of worship have been hit hard by the novel coronavirus and subsequent social distancing measures – no longer able to physically gather as one; to collectively nod their heads when a verse speaks to them or sway together during songs of worship.
State-to-state the laws vary, but here in California places of worship have been asked to “discontinue indoor singing and chanting activities and limit indoor attendance to 25% of building capacity or a maximum of 100 attendees, whichever is lower.” And it’s also encouraged to “consider practicing these activities through alternative methods (such as internet streaming).”
So amidst the uncertainty of how and when the regulations will change, religious leaders have turned to online platforms to practice their faith with community members. Since March of this year, BoxCast, the complete live video streaming solution popular among churches, has experienced an 85% increase in active accounts and a 500% increase in viewing minutes compared to the same period last year. Even the modestly sized church streaming platform streamingchurch.net saw an immediate 20% increase in its subscriber base and its total viewership triple to 60,000 weekly viewers.
Rick Warren of Saddleback Church reports that in the last 23 weeks – since the church moved to online-only services – weekly attendance has more than doubled from 45,000. This is their greatest growth in the shortest amount of time in their 40-year history.
The silver lining here is that being forced to find an online solution has allowed the message to be more accessible than ever. And once the setup is in place to live-stream your services, keeping it as an option for your audience unable to attend in person even after all restrictions are lifted will be an invaluable resource for continued growth.
As audiences grow, it is important to point out that approximately 20% of American adults aged 18 and over – 48 million people! – report some trouble hearing. Some of the audience may be sitting in silence, literally.
Captions are words displayed on a television, computer, mobile device, etc., providing the speech or sound portion of a program or video via text. Captions allow viewers to follow the dialogue and the action of a program simultaneously. Captions can also provide information about who is speaking or about sound effects that might be important to understanding the message.
Captions aid comprehension and clarification of the dialogue – and not just for those with hearing loss. Reading along with captions can help other members of the congregation with concentration and engagement.
After surveying a small sample of churches using captioning, we’ve seen similar responses where they’ve started by adding captioning to one service a week to gauge the response. Most find encouraging numbers with engagement on that service and move to add captions to the remaining services and even start captioning their archived videos of past sermons.
So as your audience grows, consider being further accessible with captioning and ensure you’re reaching that additional 20%.
**Update: July 29, 2020** – Changes have been made to the integration methods on Webex, YouTube, and The Church Online platforms since we hosted this webinar. Please download the materials to get the most current charts.
From April 9th, 2020
Virtual gatherings have become the new normal during the stay-at-home orders in place as a result of the COVID-19 outbreak. It's as important as ever for organizations to keep their audience engaged, informed, and connected. With that in mind, it’s also essential that these events are accessible to the deaf and hard of hearing – which is approximately 20% of American adults.
We've been receiving countless phone calls looking for answers on how captioning works, what's possible, and how quickly businesses can get set up and ready to go.
In this 30-minute webinar, Matt Cook (President) and Becky Isaacs (Executive VP & Live Captioning Manager) from Aberdeen Broadcast Services discuss the options available for implementing accessible meetings with captioning – from the most simplified approach, to seamlessly integrating with your video player or conferencing platform.
Whether you use Zoom, Adobe Connect, or Microsoft Teams for your virtual meetings, or YouTube, Facebook Live, or even The Church Online platform to stream your remote events, Aberdeen can find a solution to keep your audience engaged and your content accessible with closed captioning.
We are constantly reminded of the difference that live captioning services can make for a ministry.
Take a quick listen (under a minute!) as our Matt Cook, Aberdeen President, talks about Woodland’s success with live captioning.
After working with Woodlands Church for many years on the post-produced closed captioning and AberFast Station Delivery of the Kerry Shook program, it was just a few years ago that they decided to give our live captioning services a try. Before long, they discovered all of the positive ways live captioning supports their events and services — and, most importantly, makes them more accessible to church members.
“Woodlands Church has used Aberdeen live captioning services since 2018. The audience engagement and measurable growth have been so positive that we've added captioning to an additional service.”
Vince W. - Online Campus Pastor – Woodlands Church