This article is current as of February 4th, 2022.

A few months ago, Zoom announced that auto-generated captions (also known as live transcription) are now available for all Zoom meeting accounts. This was a long-awaited development for the deaf and hard-of-hearing community.

As popular and ubiquitous as Zoom has become, it can be overwhelming to understand its many meeting features and options – especially with regard to closed captioning. Here at Aberdeen Broadcast Services, we offer live captioning services through our team of highly trained, experienced captioners, who maintain the largest known dictionaries in the industry. CART (Communication Access Realtime Translation) captioning is still considered the gold standard of captioning (see related post: FranklinCovey Recognizes the Human Advantage in Captioning). Our team at Aberdeen strives to go above and beyond expectations with exceptional captioning and customer service.

Whether you choose to enable Zoom’s artificial intelligence (AI) transcription feature or integrate a 3rd-party service, like Aberdeen Broadcast Services, the following steps will help ensure you’re properly setting up your event for success.

Step 1: Adjust Your Zoom Account Settings

To get started, you'll need to enable closed captioning in your Zoom account settings.

Scroll down to the “Closed captioning” options.

In the top right, enable closed captions by toggling the button from grey to blue next to “Allow host to type closed captions or assign a participant/3rd-party service to add closed captions.”

Below is a detailed description of the three additional closed captioning options in these settings...

Allow use of caption API Token to integrate with 3rd-party Closed Captioning services

This feature enables a 3rd-party closed captioning service, such as Aberdeen Broadcast Services, to caption your Zoom meeting or webinar using captioning software. The captions from a 3rd-party service are integrated into the Zoom meeting via a caption URL or API token that sends its captions to Zoom. For a 3rd-party service such as Aberdeen to provide captions within Zoom, this feature must be enabled.
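For the technically curious, the integration behind that caption URL is a simple REST pattern: the captioning software POSTs each caption line as plain text to the copied caption URL, with an incrementing `seq` query parameter so Zoom can order the lines. Below is a minimal Python sketch of that pattern; the token URL shown is a shortened placeholder (a real caption URL is much longer and carries its own signature parameters), and the exact parameters should be confirmed against Zoom's closed captioning API documentation:

```python
from urllib.parse import urlencode


def build_caption_url(api_token_url: str, seq: int, lang: str = "en-US") -> str:
    """Append the sequence number and language that Zoom expects on each
    caption POST. `api_token_url` is the caption URL copied from the Zoom
    meeting; it already carries its own authentication parameters."""
    sep = "&" if "?" in api_token_url else "?"
    return f"{api_token_url}{sep}{urlencode({'seq': seq, 'lang': lang})}"


def post_caption(api_token_url: str, text: str, seq: int) -> None:
    """Send one caption line to Zoom as a plain-text POST body.
    (Illustrative only -- performs a live network call when invoked.)"""
    import urllib.request

    req = urllib.request.Request(
        build_caption_url(api_token_url, seq),
        data=text.encode("utf-8"),
        headers={"Content-Type": "text/plain"},
        method="POST",
    )
    urllib.request.urlopen(req)


# Example with a hypothetical, truncated caption URL:
url = build_caption_url("https://wmcc.zoom.us/closedcaption?id=123&ns=abc", seq=0)
print(url)
```

In practice, a professional captioning service handles all of this for you once you send them the token, which is why the only step clients need to worry about is copying and delivering the URL.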

Allow live transcription service to transcribe meeting automatically

As mentioned earlier in this post, auto-generated captions or AI captions became available to all Zoom users in October 2021. Zoom refers to auto-generated captions as its live transcription feature, which is powered by automatic speech recognition (ASR) and artificial intelligence (AI) technology. While not as accurate, ASR is an acceptable way to provide live captions for your Zoom event if you are not able to secure a live captioner. If you will be having a live captioner through a 3rd-party service in your meeting, do NOT check “Allow live transcription service to transcribe meeting automatically.”

Unless you expect to use Zoom’s AI live transcription for most of your meetings, it is best to uncheck or disable live transcription as Zoom’s AI auto-generated captions will override 3rd-party captions in a meeting if live transcription is enabled.

Allow viewing of full transcript in the in-meeting side panel

This setting gives the audience an additional option to view what is being transcribed during your Zoom meeting or webinar. In addition to viewing captions as subtitles at the bottom of the screen, users will be able to view the full transcript on the right side of the meeting.

Check or enable this feature to provide additional options for accessibility.

Save Captions

The meeting organizer or host controls who can save a full transcript of the closed captions during a meeting. Enabling the Save Captions feature grants that ability to everyone on the participant list.

Transcript options from 3rd-party services may vary. At Aberdeen Broadcast Services, we provide full transcripts in a variety of formats to fit your live event or post-production needs. For more information, please see our list of captioning exports or contact us.

Step 2: Start your meeting and obtain the caption URL (API token)

Once the webinar or meeting is live, the individual assigned as the meeting host can acquire the caption URL or API token.

As the host, copy the API token by clicking on the Closed Caption or Live Transcript button on the bottom of the screen and selecting Copy the API token, which will save the token to your clipboard.

By copying the API token, you will not need to assign yourself or a participant to type. Send the API token to your closed captioning provider to integrate captions from a 3rd-party service into your Zoom meeting. We ask that clients provide the API token at least 20 minutes before an event (and no earlier than 24 hours) to avoid any captioning issues.

Step 3: Test Your Captions and Go!

Once the API token has been activated within your captioning service, the captioner will be able to test captions from their captioning software.

A notification should pop up at the top of the Zoom meeting saying “Live Transcription (Closed Captioning) has been enabled,” and the Live Transcript or Closed Caption button will appear at the bottom of the screen for the audience. Viewers can now choose Show Subtitle to view the captions.

Viewers will be able to adjust the size of captions by clicking on Subtitle Settings...

Can you also caption breakout rooms in Zoom?

Yes! Captioning multiple breakout rooms occurring at the same time is possible using the caption API token to integrate with a 3rd-party service, such as Aberdeen Broadcast Services. Zoom's AI live transcription option currently does not support multiple breakout rooms, which is why it is important to consult with live captioning experts to make it happen. Contact us to learn more about how it works.

Enjoy this post? Email sales@abercap.com for more information or feedback. We look forward to hearing your thoughts!

Closed captioning is an essential aspect of modern media consumption, bridging accessibility and inclusivity gaps for diverse audiences. Yet, despite its widespread use, misconceptions about closed captioning persist. In this article, we delve into the most prevalent myths surrounding this invaluable feature, shedding light on the truth behind closed captioning's capabilities, impact, and indispensable role in enhancing the way we interact with video content.

Let’s debunk these common misunderstandings about closed captioning and gain a fresh perspective on its far-reaching importance in today's digital landscape.

Closed captioning is only for the deaf and hard of hearing

While closed captions are crucial for people with hearing impairments, they benefit a much broader audience. They are also helpful for people learning a new language, those in noisy environments, individuals with attention or cognitive challenges, and viewers who prefer to watch videos silently.

Closed captioning is automatic and 100% accurate

While there are automatic captioning tools, they are not always accurate, especially with complex content, background noise, or accents. Human involvement is often necessary to ensure high-quality and accurate captions.

Captions are always displayed at the same place on the screen

Some formats, like SCC, support positioning and allow captions to appear in different locations. However, most platforms use standard positioning at the bottom of the screen.

Captions can be added to videos as a separate file later

While it's possible to add closed captions after video production, it's more efficient and cost-effective to incorporate captioning during the production process. Integrating captions during editing ensures a seamless viewing experience.

Captions are only available for movies and TV shows

Closed captioning is essential for television and films, but it's also used in various other video content, including online videos, educational videos, social media clips, webinars, and live streams.

Captioning is a one-size-fits-all solution

Different platforms and devices may have varying requirements for closed caption formats and display styles. To ensure accessibility and optimal viewing experience, captions may need adjustments based on the target platform.

All countries have the same closed captioning standards

Captioning standards and regulations vary between countries, and it's essential to comply with the specific accessibility laws and guidelines of the target audience's location.

Closed captioning is expensive and time-consuming

While manual captioning can be time-consuming, there are cost-effective solutions available, including automatic captioning and professional captioning services. Moreover, the benefits of accessibility and broader audience reach often outweigh the investment.

In summary, closed captioning is a vital tool for enhancing accessibility and user experience in videos. Understanding the realities of closed captioning helps ensure that content creators and distributors make informed decisions to improve inclusivity and reach a broader audience.

When we launched our new website last year, we also made the move to host it on the .IO domain. Why? Because all of the advancements we're executing behind the scenes, which help ensure we're captioning and delivering media in the most advanced and efficient way possible, are connected by our application programming interfaces ("APIs") built on this domain. Hosting our website on the same domain is a reminder of our continued dedication to improving for our clients, and it highlights one of our core values: to be solution-driven.

Our most considerable investment was our move to the AWS platform – the gold standard in the industry, on which more than 40% of cloud computing runs. This move not only permitted us to work uninterrupted through the current global pandemic, but it also allows us to expedite quick-turn broadcast deliveries for our clients without losing the personal touch and eyes-on review by our AberFast engineers that make our video delivery service unique.

Here’s our President, Matt Cook, explaining how a client came to us with a new challenge of captioning and delivering time-sensitive content – where every minute counts – and how we found the perfect solution for them:

While the delivery time frame is shrinking, viewers still expect quality — as do the broadcast outlets and online platforms that make content available. We are well aware of these pressures and that’s why we encourage you to bring us your challenges so our team can find the most efficient workflow for you.

Engineering solutions is one of our commitments to our clients and one of the values to which our team is dedicated.

This weekend I had the privilege of attending a wedding where the bride is hearing and the groom is deaf. It was fascinating to watch them communicate with each other; she even serenaded him by “signing” a love song. I was asked a lot about my job as a caption editor, including why captions aren't always accurate. The couple had been watching a sitcom together, and some of the jokes lost their humor because the captions were severely edited down and not verbatim. I discussed the importance of verbatim captioning and how, at Aberdeen, we make it our priority to provide accurate, verbatim captioning that does not detract from the original meaning. It was a personal moment for me to take pride in my professional career, but it was also sad to see that many programs still have poor captioning that barely meets the requirements. Hopefully this will change as awareness spreads about the importance of quality captioning.

Side note: I was embarrassed by my own lack of sign language skills and frustrated that I could not communicate as well as I would have liked. I think it is important that everyone knows at least a few fundamental signs. Here is a website that provides a list of common signs you should know: http://www.start-american-sign-language.com/basic-words-in-sign-language.html.

My grandfather is a soft-spoken former Marine and my father has carried on the tradition of both soft-spoken tendencies as well as military service. The one thing that isn’t soft, however, is the volume at which they watch television. Both will ratchet up the sound until it’s blaring through the entire house. This may seem like an annoying habit or one that’s inconsiderate, at best, but for them, it’s the only way to hear. Both suffer from hearing loss – hereditary as well as linked to their military service.

My father is 80% deaf and wears hearing aids in both ears to compensate. His audiologist has told him that by the time he reaches 50 years old, he will be nearly 100% deaf. My grandfather also wears hearing aids but often forgets to put them in, rendering them useless. The solution the family has found for my grandfather, who loves baseball and true crime movies, is closed captioning. He can now sit in the living room with the family bustling around him and follow a Law & Order storyline or a Tigers baseball game without missing a beat.

It’s an amazing thing to now work for a company that provides these services for people around the world. Having my family be affected by hearing loss makes me so much more appreciative of people like those on our team who strive to deliver 100% accurate captioning.

Written by Amber Kellogg

Many people today read text written in all capital letters as text that is meant to be SHOUTED AT YOU! Over the years, researchers have set out to determine which type of text is the most legible. The general consensus today is that lowercase words are easier to read. One reason is simply familiarity: the majority of print we read is set in upper and lowercase. Also, the human brain reads words, not letters. Words typed in upper and lowercase have more distinctive shapes and varied contours, which allows us to read the actual word rather than each letter. Words set in all capitals are read letter by letter, and hence reading speed is approximately 15% slower.

So the question that remains is: why are the majority of closed captions written in all capitals? This practice was a solution to an early decoding problem. The original decoders had trouble with letters that have descenders (g, j, p, q, y). These letters ended up getting squished to fit in the caption line and thus became illegible. The easy solution at the time was to convert text to all capitals, and this became the trend for many years to follow.

However, is there still a need to closed caption in all caps? The answer is no. Today’s decoders have no problem with the once-tricky letters with descenders. So, in an effort to move into the 21st century and enhance readability, many closed captioning companies are choosing to make the switch to upper and lowercase captions. What are your thoughts on this?

The statistics are astounding: 42 million American adults are illiterate and 50 million are unable to read higher than a 4th-grade reading level. Studies show that frequent reading improves reading proficiency, although sadly, according to the “Washington Post,” only one in four Americans actually read a book in the past year. Instead of reading books, the average child in America watches 20 hours of TV each week. How do we remedy this ongoing problem? What if watching television could actually improve literacy?

According to studies conducted in Finland, it can! Children in Finland watch as much TV as Americans and even attend school less. Yet, they rank higher in educational achievement and literacy. Jim Trelease states in “The Read-Aloud Handbook” that Finland’s high test scores are due to their use of closed captions.  Children in Finland want to watch American sitcoms but are only able to understand them by watching the Finnish words at the bottom of the screen. Therefore, they must read to watch!

Victoria Winterhalter Brame conducted her own study after reading Trelease’s book. In her article “TV that Teaches: Using Closed Captions,” she writes about introducing her daughter to closed captioning at a young age; once her daughter started kindergarten, she began to recognize her vocabulary words on the screen while watching television.

Perhaps there is a simple solution to this nationwide problem of illiteracy and reading deficiency: turn on the closed captions!

I find that most producers and television stations don't really love the idea of captions. In fact, they often find them a nuisance. There are many reasons to love captions; it is just a matter of being aware of the 94 million Americans who use, and most likely love, closed captions. Read the top five reasons why you should love closed captions too.

1. The Deaf and Hard-of-Hearing Love Captions

This may be a given, but most people do not know how many viewers are actually affected when a television program has no closed captions. The number of deaf and hard-of-hearing Americans is astounding--approximately 28 million people! Equal rights for all means providing access for all, and that is something we can all smile about.

2. Multi-lingual America Loves Captions

Being from the LA area, I hear how many different languages are spoken in the US. All I have to do is go to a local market to hear them--Japanese, Spanish, Korean, Arabic, Farsi, Vietnamese... you name it! In America, the number of non-native English speakers is growing, and many of them are actively learning English. In fact, over 30 million Americans are learning English as a second or other language. What easier way to learn than to watch television in English with English closed captions? You are not only listening to the language, but you are also reading it. Your neighbor is learning English with the help of captions, and that should make any American happy.

3. Grandma Loves Captions

As we all know, not all grandmas (or grandpas, for that matter) need closed captions, but many, even those without significant hearing loss, enjoy watching television with them. My 85-year-old grandmother lives alone and is pretty much housebound because she is on oxygen. Her main form of entertainment is television with closed captions, even though she has very little hearing loss. The day her cable box stopped working, she almost had a panic attack. Even if it is just to watch the latest Ellen DeGeneres show, closed captions make my grandma happy... and probably yours too.

4. Six-year-olds Love Captions

Anyone who has young children, has had young children, or knows young children understands that it can be a struggle teaching them to read. Ten million American school-age children are learning to read in school, but ultimately parents need to take the time to read with their children and have their children practice reading to them. Most children watch some amount of television daily; in fact, studies show that children watch an average of 1,680 minutes of television a week. Children who watch television with captions on improve their reading skills far faster than children who do not. That is pain-free teaching for parents... and that would make any parent happy.

5. Adults Who Cannot Read Love Captions

It sounds strange that someone who cannot read would love captions, but captions are a tool to help illiterate adults learn to read. It may be an astonishing fact, but about one in 20 adults in the U.S. is not literate in English, and 27 million American adults are working to improve their literacy skills. Using captions is an easy way to aid the adult learning process. You may not have thought about how many adults cannot read, but knowing your captions may help makes it worthwhile.

It is hard to appreciate the usefulness of captions unless captioning directly affects someone close to you. Maybe captions could help someone close to you who is not using them. Spread the word and turn on your captions--they may help more people than you think! Do you love captions yet?

Sony Electronics and Aberdeen Captioning along with software developer CPC have joined forces to develop the first file-based closed-captioning system that maximizes the benefits of Sony’s XDCAM HD422 tapeless technology. The new workflow uses Sony’s PDW-HD1500 optical deck to make the process more efficient, faster and more flexible.

“Because the XDCAM system is file-based, we’re able to do our work in a much more refined and streamlined way,” said Matt Cook, President of Aberdeen Captioning. “Now, once someone is done with their XDCAM edit, we take their file, caption directly onto that file, and then place it back onto the disc. We’ve eliminated the need to go through a closed-captioning encoder—which can cost up to $10,000—therefore eradicating the requirement to do real-time play-out.”

According to Cook, clients—which include a range of broadcast networks, groups and independent producers—benefit from faster turnaround times and a more cost- and time-efficient process than previous methods.

“The primary benefit for clients is that they can keep their file in its original form, and send it to us on a hard drive, via FTP site, or on a disc,” he said. “Once we put the captioning data back in the video file, we can then return it to the client in the format of their choice.”

The PDW-HD1500 deck is designed for file-based recording in studio operations. A Gigabit Ethernet data drive allows it to write any file format from any codec onto the optical disc media, and it also makes handling either SD or HD content much easier.

“This deck is perfect for applications like closed captioning, where turnaround time is often critical and multi-format flexibility is a key,” said Wayne Zuchowski, group marketing manager for XDCAM system at Sony Electronics.

Cook added, “We can handle any format without a problem. That type of capability and functionality is very important to us because as a captioning company we’re required to deliver a finished product in any format a client requires.”

When Aberdeen receives content from a client, the company first converts it to a smaller “working file” -- for example, a Windows Media or QuickTime file -- which is used to do the transcribing, captioning, and timing.

“Once the captioning work is done, we marry the original MXF XDCAM file and our captioned data file through our MacCaption software,” Cook said. “With the press of a button, both files are merged, and we can drag and drop it back onto the disc and send out, or FTP it to a client and they can drag and drop onto a disc.”

The Sony and Aberdeen joint captioning system will be on display at NAB in Sony’s exhibit, C11001, Central Hall, Las Vegas Convention Center.

Article Written by Tom Di Nome, from Sony Electronics

Copyright notice: 

© Sony Electronics & Aberdeen Captioning, Inc. 2009.

This article can be freely reproduced under the following conditions:

a) that no economic benefit be gained from the reproduction

b) that all citations and reproductions carry a reference to this original publication on [online] http://www.abercap.com/blog

If your school does not already caption its commencement ceremony, you may want to encourage them to do so. The process is simple, and it provides accessibility to one of the most important ceremonies of one's life: GRADUATION. Read below for answers to some common questions I get regarding captioning for these important life events.

What is the purpose of captioning commencements?

The purpose of closed captioning commencements is obvious: to provide access to viewers who are unable to hear the ceremony.

Who views the captioning and where do they view it?

Captions are usually present on screens at the commencement itself, on the web for live viewing, and potentially on a local broadcast station, where they are viewed by the family and friends of graduates, the graduates themselves, or faculty members of the school. Those not present at the ceremony can connect from a home computer or from portable devices like laptops, PDAs, smartphones, et cetera.

For the captioner, what preparation goes into captioning for a commencement ceremony?

A live captioner will usually want the names of the speakers, especially the keynote speaker and valedictorian, and anyone else who may speak during the ceremony. The captioner should also familiarize themselves with the city where the commencement is located and its adjoining cities, as these names may be mentioned during the ceremony. Depending on the keynote speaker, the captioner will do some online research about the speaker's history and life to learn where he or she lives and works and what their “claim to fame” is. This information will then be entered into their dictionary.

How long are commencement ceremonies?

They can vary from three to six hours, depending on the size of the school and graduating class.

Does one captioner do the entire commencement?  If not, how do they transition?

Many times, one captioner will caption the entire commencement.  If it is extremely long, then two captioners will work on the captioning.  Usually at a predetermined point, for example, at the top of the hour, the first captioner will sign off with a [pause in captions] on the screen, until the second captioner dials in and gets linked up and they take over.

Does the captioner get a list of graduates beforehand?  If not, how do they caption the names? 

They usually do NOT get a list of the graduates.  When the graduates are announced, they are usually instructed to not write during that time, although sometimes they may write something like [names being read].

What does a captioner do if they do not know how to spell somebody's name or they do not have it in their dictionary?

The captioner will usually phonetically spell the person's last name as opposed to the first name.

Which universities and colleges currently caption their commencement ceremonies?

Many schools caption their commencement ceremonies; some specific schools we have captioned for, or will caption for this year, include:

"A graduation ceremony is an event where the commencement speaker tells thousands of students dressed in identical caps and gowns that 'individuality' is the key to success."  ~Robert Orben

For more information on closed captioning commencement ceremonies, contact us.