Photo of a hand on a remote scrolling through TV settings

On July 18, 2024, the FCC released a Report and Order (FCC 24-79) implementing a “readily accessible” requirement for closed captioning display settings on various video devices, allowing users to customize font size, type, color, position, opacity, and background to improve readability and suit their viewing preferences. The Order responds to widespread complaints about the accessibility challenges many users, particularly those who are deaf or hard of hearing, face due to complex navigation, inconsistent device interfaces, limited customization options, and inadequate support.

The Order sets out four elements for deciding whether display settings are “readily accessible,” which manufacturers of covered apparatus and multichannel video programming distributors (MVPDs) will need to satisfy: proximity, ensuring settings are easy to navigate to; discoverability, making them straightforward to find; previewability, allowing users to see changes in real time; and consistency/persistence, maintaining user settings across devices and sessions.

FCC Commissioner Anna Gomez stated, "Ensuring that those who are deaf and hard of hearing can locate and adjust closed caption settings is essential to their being able to meaningfully access and enjoy video programming. While this is a milestone to be proud of, as technology continues to advance, it is crucial that manufacturers prioritize the inclusion of accessibility features into product development from the beginning. Accessibility by design."

The discussions and rulings on these matters emphasize the FCC's commitment to improving accessibility in communications technologies, ensuring that closed captioning features are more user-friendly and customizable. Hopefully, these changes will be implemented sooner rather than later, so more people can enjoy the benefits of closed captions.

Photo of a vintage TV with an alert from the Emergency Alert System

The Federal Communications Commission’s (FCC) Public Safety and Homeland Security Bureau (PSHSB) issued a Public Notice to remind Emergency Alert System (EAS) participants of their obligation to ensure that EAS alerts are accessible to persons with disabilities.

The Federal Emergency Management Agency (FEMA), in coordination with the FCC, will conduct a nationwide Emergency Alert System (EAS) and Wireless Emergency Alert (WEA) test on October 4, 2023.

The Public Notice also reminded EAS Participants that they must file ETRS Form Two after the nationwide EAS test no later than October 5, 2023, and ETRS Form Three on or before November 20, 2023. For TV stations, to be visually accessible, EAS text must be displayed as follows (as it relates to closed captioning):

“At the top of the television screen or where it will not interfere with other visual messages (e.g., closed captioning),” and “without overlapping lines or extending beyond the viewable display (except for video crawls that intentionally scroll on and off the screen)…”

This is in addition to another FCC Public Notice which states:

“Individuals who are Deaf or Hard of Hearing. Emergency information provided in the audio portion of programming also must be accessible to persons who are deaf or hard of hearing through closed captioning or other methods of visual presentation, including open captioning, crawls or scrolls that appear on the screen. Visual presentation of emergency information may not block any closed captioning, and closed captioning may not block any emergency information provided by crawls, scrolls, or other visual means.”

With EAS alerts expected to become more common in the future, this is something we in the captioning industry must be prepared for, doing our part to make alerts better for viewers.

Photo of a video conferencing platform

On June 8, 2023, the Federal Communications Commission (FCC) released a Report and Order and Notice of Proposed Rulemaking aiming to further ensure accessibility for all individuals in video conferencing services. The action establishes that, under Section 716 of the Twenty-First Century Communications and Video Accessibility Act of 2010 (CVAA), video conferencing platforms commonly used for work, school, healthcare, and other purposes fall under the definition of "interoperable video conferencing service."

Under Section 716 of the CVAA, Advanced Communications Services (ACS) and equipment manufacturers are required to make their services and equipment accessible to individuals with disabilities, unless achieving accessibility is not feasible. ACS includes interoperable video conferencing services such as Zoom, Microsoft Teams, Google Meet, and BlueJeans. The FCC previously left the interpretation of "interoperable" open, but in this latest report, it adopted the statutory definition without modification, encompassing services that provide real-time video communication to enable users to share information.

In the Notice of Proposed Rulemaking, the FCC seeks public comments on performance objectives for interoperable video conferencing services, including requirements for accurate and synchronous captions, text-to-speech functionality, and effective video connections for sign language interpreters.

The FCC's actions on this item are an important step toward ensuring that people with disabilities have equal access to video conferencing services. The Report & Order will help to make video conferencing more accessible and promote greater inclusion and participation of people with disabilities.

Photo of a conference call on Zoom

On October 11, 2022, the Federal Communications Commission (FCC) released the latest CVAA biennial report to Congress, evaluating the current industry compliance as it pertains to Sections 255, 716, and 718 of the Communications Act of 1934. The biennial report is required by the 21st Century Communications and Video Accessibility Act (CVAA), which amended the Communications Act of 1934 to include updated requirements for ensuring the accessibility of "modern" telecommunications to people with disabilities.

FCC rules under Section 255 of the Communications Act require telecommunications equipment manufacturers and service providers to make their products and services accessible to people with disabilities. If such access is not readily achievable, manufacturers and service providers must make their devices and services compatible with third-party applications, peripheral devices, software, hardware, or consumer premises equipment commonly used by people with disabilities.

Accessibility Barriers

Despite major design improvements over the past two years, the report reveals that accessibility gaps persist and that industry commenters are most concerned about equal access on video conferencing platforms. The COVID-19 pandemic has highlighted the importance of accessible video conferencing services for people with disabilities.

Zoom, BlueJeans, FaceTime, and Microsoft Teams have introduced a variety of accessibility feature enhancements, including screen reader support, customizable chat features, multi-pinning features, and “spotlighting” so that all participants know who is speaking. However, commenters have expressed concern over the compatibility of screen share and chat features with screen readers, along with the platforms’ synchronous automatic captioning features.

Although many video conferencing platforms now offer meeting organizers synchronous automatic captioning to accommodate deaf and hard-of-hearing participants, the Deaf and Hard of Hearing Consumer Advocacy Organization (DHH CAO) pointed out that automated captioning sometimes produces incomplete or delayed transcriptions. Even if slight delays in live captions cannot be avoided, these delays may cause “cognitive overload.” Comprehension can be further hindered if a person who is deaf or hard of hearing cannot see the faces of speaking participants, for “people with hearing loss rely more on nonverbal information than their peers, and if a person misses a visual cue, they may fall behind in the conversation.”

Automated vs. Human-generated Captions

At present, the automated captioning features on these conferencing platforms have an error rate of 5-10%: that's 5-10 errors per 100 words spoken. With the average English speaker conversing at about 150 words per minute, that can mean over a dozen errors every minute.
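To make the arithmetic above concrete, here is a minimal sketch (the 5-10% error range and the 150 words-per-minute rate are the figures cited above; the function name is hypothetical):

```python
def expected_errors_per_minute(words_per_minute: float, error_rate: float) -> float:
    """Estimate caption errors per minute, given a speaking rate and a
    word error rate expressed as a fraction (e.g. 0.05 for 5%)."""
    return words_per_minute * error_rate

# At 150 wpm, the 5-10% error range works out to:
low = expected_errors_per_minute(150, 0.05)   # 7.5 errors per minute
high = expected_errors_per_minute(150, 0.10)  # 15.0 errors per minute
```

Even at the low end, that is an error roughly every eight seconds of continuous speech.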

Earlier this year, our team put Adobe’s artificial intelligence (AI) powered speech-to-text engine to the test. We tasked our most experienced Caption Editor with using Adobe’s auto-generated transcript to create & edit the captions to meet the quality standards of the FCC and the deaf and hard of hearing community on two types of video clips: a single-speaker program and one with multiple speakers.

How did it go? Take a look: Human-generated Captions vs. Adobe Speech-to-text

In the history of our planet, littering is a relatively new problem. It was around the 1950s when manufacturers began producing a higher volume of litter-creating material, such as disposable products and packaging made with plastic. Much like the boom of manufacturers creating more disposable packaging, new video content is being pushed out to streaming platforms in incredible volumes every day.

Along with all this new video content, there are noticeable similarities between littering and a prevalent problem in our industry: inaccessible media – specifically poor captioning quality. Instead of it being food wrappers, water bottles, plastic bags, or cigarette butts, it’s misspellings, lack of punctuation, missing words, or the wrong reading rate (words-per-minute on the screen) that affects readability.
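Reading rate, mentioned above, is one of the few caption-quality problems that is easy to check mechanically. Below is a minimal, hypothetical sketch that flags caption cues whose words-per-minute rate exceeds a chosen threshold; the 180 wpm cap is an illustrative assumption, not a regulatory figure:

```python
def reading_rate_wpm(text: str, start_s: float, end_s: float) -> float:
    """Words per minute for a single caption cue displayed from start_s to end_s."""
    duration_min = (end_s - start_s) / 60.0
    return len(text.split()) / duration_min

def too_fast(text: str, start_s: float, end_s: float, max_wpm: float = 180.0) -> bool:
    """Flag a cue whose reading rate exceeds max_wpm (threshold is illustrative)."""
    return reading_rate_wpm(text, start_s, end_s) > max_wpm

# A 10-word cue displayed for only 2 seconds reads at 300 wpm -- far too fast.
cue = "this is a ten word caption shown for two seconds"
print(too_fast(cue, 0.0, 2.0))
```

A real quality check would also cover spelling, punctuation, timing drift, and on-screen placement, but a rate check like this catches one of the most common readability problems.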

The motives behind littering and choosing poor-quality captioning are similar, generally boiling down to one of the following: laziness or carelessness, lenient enforcement, and/or the presence of litter already in the area. Both are selfish acts: one person takes the easy route by discarding trash wherever they please or, in the case of captioning, by choosing the quickest and cheapest option available without any regard for quality. When the organizations enforcing guidelines and standards relax their efforts, many people will be encouraged not to follow them. And the sight of other content creators getting away with inaccessible media will, no doubt, encourage others to take the same route.

In The Big Hack’s survey of over 3,000 disabled viewers, four in five disabled people reported accessibility issues with video-on-demand services. “66% of users feel either frustrated, let down, excluded or upset by inaccessible entertainment.” In fact, “20% of disabled people have canceled a streaming service subscription because of accessibility issues.” It’s clear: inaccessible media is polluting video content libraries.

Viewers who do not use closed captions may not think about how poor-quality captions affect those who do, just as the consequences of littering for the community and animals that share the Earth’s ecosystem are often overlooked. Education and awareness are important tools in reducing the problem. If we allow it to become commonplace, bad captioning, much like litter, will wash into the “ocean” of online video content and become permanent pollution in our video “ecosystem.”

So, what can we do about it before it’s too late? Much like with littering, we can start with community cleanups. Let content creators know that you value captioning and would enjoy their content more if captions were present and accurately represented the program to all viewers. Find their websites and social media pages and contact them – make them aware. And if it’s on broadcast television, let the FCC know.

Clean communities have a better chance of attracting new business, residents, and tourists – the same will go for the online video community. Quality captioning is your choice and, for the sake of the video community, please evaluate the quality of work done by the captioning vendors that you’re considering and don’t always just go for the cheapest and quickest option. Help keep the video community clean.

Internet Closed Captioning Quality

There’s a growing trend on social media and sites like Reddit and Quora to showcase captioning errors from television and numerous online platforms. As accessibility laws tighten and the quality standards for captioning on broadcasts become more rigorous, how do these bloggers have so much fuel for their posts on captioning errors? It is a simple question with many complicated answers.

Live television programming is captioned in real-time either by machines or by humans working with a stenotype machine (like those used in courtrooms) and thus tends to lag slightly behind and, inevitably, will include some paraphrasing and errors. While the Federal Communications Commission requires American television stations' post-production captions to meet certain standards, the Internet is still vastly unregulated. Video-sharing websites like YouTube have struggled to provide accessible captions. Despite YouTube's recent efforts to improve accessibility, its captions continue to disappoint viewers, especially those of the deaf and hard-of-hearing community.

In a 2014 Atlantic article, "The Sorry State of Closed Captioning," Tammy H. Nam explains why machines cannot create the same experience humans can. She posits, "Machine translation is responsible for much of today’s closed-captioning and subtitling of broadcast and online streaming video. It can’t register sarcasm, context, or word emphasis." By using machines instead of human writers and editors, sites like YouTube are not providing the same viewing experience to the deaf and hard of hearing as they are to their other patrons. Humans can understand which homophone to use based on context. There is an enormous difference between the words soar and sore, air and heir, suite and sweet. Humans can also determine when a noise is important to the plot of a story and include it in the captions so that a non-hearing viewer won't miss critical details. In the same Atlantic article, deaf actress Marlee Matlin says, "I rely on closed captioning to tell me the entire story…I constantly spot mistakes in the closed captions. Words are missing or something just doesn’t make sense." Accessible closed captions should follow the spoken dialogue and important sounds exactly so that viewers are immersed in the story. Having to decipher poor captions takes viewers out of the flow of the story and creates a frustrating experience.

YouTube created its own auto caption software for its creators to use in 2010. The software is known for its incomprehensible captions. Deaf YouTuber and activist Rikki Poynter made a video in 2015 highlighting the various ways in which YouTube's automatic captions are inaccessible. She wrote a 2018 blog post explaining her experience with the software: "Most of the words were incorrect. There was no grammar. (For the record, I’m no expert when it comes to grammar, but the lack of punctuation and capitalization sure was something.) Everything was essentially one long run-on sentence. Captions would stack up on each other and move at a slow pace." For years, Rikki and other deaf and hard-of-hearing YouTube users had to watch videos with barely any of the audio accurately conveyed.

Although her blog post highlights the ways in which YouTube's automatic captions have improved since 2015, she writes, "With all of that said, do I think that we should choose to use only automatic captions? No, I don’t suggest that. I will always suggest manually written or edited captions because they will be the most accurate. Automatic captions are not 100% accessible and that is what captions should be."

The keyword is accessible. When captions do not accurately reflect the spoken words in videos, television shows, and movies, the stories and information are inaccessible to the deaf and hard of hearing. Missing words, incorrect words, poor timing, and captions covering subtitles or other important graphics all take viewers out of the experience or leave out information critical to fully understanding and engaging with the content. Until web resources like YouTube take their deaf and hard-of-hearing viewers' complaints seriously, they will continue to alienate them.

So, what can we do about poor closed captioning on the web? Fortunately, the Internet is also an amazing tool that allows consumers and users to have a voice in the way they experience web content. Deaf and hard-of-hearing activists like Marlee Matlin, Rikki Poynter, and Sam Wildman have been using their online platforms to improve closed captions on the web. Follow in their footsteps and use the voice that the web gives you. Make a YouTube video like Rikki Poynter or write a blog post like Sam Wildman's "An Open Letter to Netflix Re: Subtitles."

The Internet is a powerful platform through which large companies like Google can hear directly from their consumers. If you would like to see the quality of closed captions on the web improve, use your voice. Otherwise, captioning errors will keep circulating as memes.

Closed Captioning Ordinance in Rochester, New York

At a meeting on September 19, 2017, the Rochester City Council (New York) approved a new city ordinance requiring local businesses to enable the closed captioning feature on televisions displayed to the public. Rochester joins only a few other U.S. cities, such as Portland, Oregon, that require all city businesses to provide this service to their patrons.

Closed Captioning in Higher Education

Over the past couple of years – and after several lawsuits filed against a few Ivy League schools – a growing number of universities are working toward accessibility compliance with their online video courses. Initially, the process can be overwhelming and consume substantial resources. One university even resorted to removing its public library of 20,000 free educational videos because of a complaint filed by the Department of Justice. Unfortunately, because accessibility requirements are often addressed only after a lawsuit, the whole process gets a negative designation right from the get-go.

Where to Begin

It appears that the initial obstacle for universities is a lack of appropriate education on both the requirements and the benefits of closed captioning. Often, universities do not take the proper initiative in allocating suitable funding or appropriately equipping the classrooms until the school is approached by an organization advocating for accessibility rights – at which time the need to comply becomes a time-crunched burden with a possible lawsuit attached.

It’s important to know that public and private educational institutions must provide equal access for students with disabilities in accordance with the Americans with Disabilities Act (ADA), as detailed in Title II (publicly funded universities and vocational schools) and Title III (privately funded institutions). Sections 504 and 508 of the Rehabilitation Act of 1973 also ensure equal accessibility in any system of higher education that receives federal funding.

Beyond ensuring equal accessibility to all students, closed captioning is used as an effective learning tool by a significant percentage of students who do not self-identify as having difficulty with hearing. The availability of lecture and video transcripts helps students review and retain information outside of the classroom. It also aids students with learning disabilities, as well as ESL students.

Breaking Down the Roadblocks

Universities should not be overwhelmed by the thought of having to equip classrooms with an array of expensive technologies. The closed captioning process is simple. As long as clear audio can be transferred via the internet or a phone line, real-time captioners can write remotely to a variety of web platforms across multiple devices already available to students – devices such as mobile phones or laptops.

Furthermore, the educational value and benefits of closed captioning are often overlooked when it’s perceived as economically burdensome. Captioning is not only for the deaf and hard of hearing. Recognizing the value of captioning as a successful tool for all students should motivate all administrators to provide captioning on all their online courses.

Amidst the growing number of accessibility discrimination lawsuits being filed against US movie theater chains, the Department of Justice announced yesterday that a Final Rule of the Americans with Disabilities Act (ADA) Title III will soon be published to enforce a nationally consistent standard. This rulemaking will provide specific requirements that cinemas must now meet to satisfy their equal access obligations to patrons with hearing and vision disabilities.

For screenings of features produced with closed captioning, movie theaters will now be required to provide caption display devices, such as Sony’s Access Glasses or Dolby’s CaptiView, to patrons who request them. The law will also require these theaters to provide assistive listening devices for any film produced with an audio description track, which contains a personal narration throughout the film.

Minnesota Captioning Law

Effective August 1, 2016, a new law authored by Minnesota politician Brian Daniels requires all televisions in common areas at medical facilities in Minnesota to have the closed captioning feature turned on.

The bill, which was signed into law on May 22 of this year, was an effort by Mr. Daniels to address a problem brought to his attention by the Minnesota State Academy for the Deaf. According to Daniels, he can recall the exact story that triggered his motivation to move forward in pursuing this matter. He had heard of a family sitting in a hospital waiting room one morning struggling to make sense of the breaking news of a horrific tragedy unfolding in New York. That was the morning of September 11, 2001.

It was not a difficult task for the Minnesota State Academy for the Deaf to gain his attention on this matter. Back in 1996, Daniels and his family relocated to the city of Faribault so his son, Jeremiah, who is severely hard of hearing, could attend the MSAD. He highlighted his joy in serving the deaf community of Minnesota, saying, “I’m just so proud of it because it affects my family and so many people we know in the community.”

In addition to hospital waiting rooms, the covered medical facilities include surgical centers, birth centers, and certain group homes. The group homes affected are those that provide housing, meals, and services to five or more people who are developmentally or physically disabled, chemically dependent, or mentally ill. The law does not, however, apply to medical clinics, nursing homes, or assisted living facilities.

This was one of several new laws to go into effect on August 1st and another promising example of lawmakers' increasing efforts to expand accessibility for all.

The language of the new law can be found at: The Office of the Revisor of Statutes.