In today's world defined by constant digital engagement, younger generations increasingly rely on captions and subtitles to enhance their viewing experience. This trend, largely popularized by Gen Z and Millennials, isn’t just limited to streaming shows or watching social media content; it’s spilling over into live events, with a strong case for captions as a way to boost engagement and attendance in venues that historically may have overlooked them, such as churches.
Here’s a look at the data supporting this movement and how churches can use captioning to foster a more engaging environment.
Younger Generations and the Subtitle Revolution
Preply, a language learning platform, conducted a survey titled "Why America Is Obsessed with Subtitles" to explore the growing trend of subtitle usage among Americans. The study surveyed over 1,200 participants to understand how and why individuals use subtitles in their media consumption. The findings revealed that 50% of Americans watch content with subtitles most of the time, with younger generations, particularly Gen Z, showing the strongest preference for subtitle use.
This data reveals a generation that sees captions not as an add-on but as an essential part of the viewing experience. For churches, this could signal an opportunity: integrating captions into services may not only help with accessibility but also align with the viewing habits of younger generations.
Captioning Live Events: A Path to Higher Engagement
The impact of captions on in-person attendance is significant. A study from Stagetext revealed that 31% of people would attend more live events if captions were readily available, with younger people leading this interest: 45% of 18-25-year-olds would be more likely to attend events if they were captioned, compared to 16% of those over 56.
This enthusiasm for live captions reflects a shift in how younger generations want to consume live content. Captions at events enhance accessibility for everyone, regardless of hearing ability, and address concerns with acoustics or unclear speech, which often deter audiences. In the church context, offering captions during sermons, worship songs, or events could break down barriers that may otherwise prevent younger individuals from fully engaging.
Engaging a New Generation: How Captioning Can Help Churches Reconnect with Young Adults
Christian churches across the United States are increasingly challenged to capture the interest and attendance of younger generations, who show declining levels of religious affiliation and engagement. The Pew Research Center's 2019 report, "In U.S., Decline of Christianity Continues at Rapid Pace," highlights a significant decline in Christian affiliation among younger Americans: only 49% of Millennials (born between 1981 and 1996) identify as Christians, compared to 84% of the Silent Generation (born between 1928 and 1945).
With reports indicating a decline in church attendance in the U.S., many churches are seeking strategies to re-engage their congregations, especially young adults. Captions could be a powerful, practical solution.
Offering live captions during services could address several of these issues at once: accessibility for attendees of all hearing abilities, clarity when acoustics are poor or speech is hard to follow, and alignment with the caption-first viewing habits younger generations already bring with them.
Moving Forward: A Call for Churches to Embrace Captioning
By understanding the viewing habits of younger generations, churches have the opportunity to create an environment that aligns with their engagement preferences. Embracing captioning technology for in-person services and online sermon streams not only makes services more accessible but can also foster deeper engagement, particularly among younger congregants who see captions as an essential part of their everyday experience.
As churches consider how best to adapt to changing times, incorporating live captions could be a powerful step toward renewing attendance and helping younger generations feel seen, heard, and included in the community. It’s a practical, meaningful solution that could not only enhance accessibility but help bridge generational gaps, allowing churches to resonate with the next generation and grow their community in an inclusive and modern way.
In this session, Tony from Aberdeen Broadcast Services, an accessibility specialist focusing on higher education, dives deep into the essentials of remote captioning. The talk, co-presented by Amiyah Lee, addresses some of the most frequently asked questions from colleges and universities regarding setting up remote captioning, where the live writer does not need to be physically present in the classroom. This approach offers greater flexibility and cost-effectiveness while maintaining the quality needed to meet accessibility standards.
Tony shares some statistics on the use of captioning in higher education, such as the fact that 19% of college students in the United States experience some degree of hearing loss. Additionally, 71% of students without hearing difficulties use captions at least some of the time, and 90% of students who use captions say that they help them learn more effectively. Furthermore, 65% of students use captions to help focus and retain information, while 62% use them to overcome poor audio quality.
Tony and Amiyah provide an overview of the practical steps institutions can take to implement remote captioning, including selecting the right technology and overcoming common challenges. They also discuss how Aberdeen Broadcast Services supports educational institutions by offering affordable captioning solutions, both human captioning and AI-powered ASR technology, that meet compliance requirements without compromising on quality.
Watch it on-demand here:
For more on our work with captioning in higher education, visit: Live Captioning for Universities.
Automatic Speech Recognition (ASR) is now making a substantial impact on how local churches connect with their communities, breaking down barriers once caused by financial limitations. With ASR, churches can now offer inclusive services to the Deaf and Hard of Hearing (D/HH) community. But it doesn’t stop there—when combined with Automatic Machine Translation (AMT), this powerful duo overcomes language hurdles, translating sermons into dozens of languages. Even the smallest congregations can now reach a global audience, sharing their message far and wide.
We previously explored the ethical and theological concerns with AI in the Church in our last blog post: The Role of AI in Enhancing Church Accessibility and Inclusion.
While human-generated captions and translations always offer the highest quality, ASR and AMT provide a cost-effective solution that can be utilized by churches and ministries of any size or budget. Imagine your sermon reaching the Deaf and Hard of Hearing (D/HH) community, allowing for full participation, or sharing your message in various languages to a worldwide audience that might otherwise have been unreachable. AI-powered closed captioning and translations help make this a reality. ASR captions and translations are not only a technological advancement; they are tools for inclusivity and global outreach.
Churches aiming to make a significant impact can turn to AI-powered accessibility tools, once considered out of reach, for preaching and teaching. Practical uses of ASR include live captions that let the D/HH community participate fully in services and, paired with AMT, translated subtitles that carry a sermon to audiences in dozens of languages.
Aberdeen’s new ASR solution, developed with ministry in mind, employs robust AI engines and large language models to provide a powerful advantage in delivering Christian content. Each solution is carefully crafted to fit your specific ministry needs, providing high-quality captions at a fraction of the cost.
Discover how Aberdeen’s ASR solution offers a cost-effective approach to closed captioning & translation. Learn more here: Aberdeen ASR Closed Captioning.
Does Artificial Intelligence (AI) have a place in the Church? Countless podcasts, articles, and sermons are popping up addressing the same question. Despite the widespread discussion, many overlook how deeply AI is already embedded in our daily lives. It influences everything from manufacturing processes and automotive technology to how our food is produced, impacting many products and services we use daily.
To tackle the question, “Does AI have a place in the Church?” let’s first understand what AI is. At its core, AI simulates human intelligence, performing tasks that typically require human intervention. In the article Artificial Intelligence, IBM explains that “On its own or combined with other technologies (e.g., sensors, geolocation, robotics) AI can perform tasks that would otherwise require human intelligence or intervention”.
Think of AI as a modern tool designed to handle tedious, repetitive, and data-intensive tasks efficiently. For Christians, AI should be considered like any other tool, such as the internet. It’s a resource that, when used wisely, can enhance our practices and outreach without compromising our core values or mission.
The fear of AI has led some people to strongly believe that the world will be overtaken by it. This reaction is understandable, as AI has quickly evolved from a science fiction concept to a societal staple. Major news outlets continuously report on various issues related to AI. Hollywood produces big-budget movies and TV shows about AI taking over the world, while authors write extensively about its potential aftermath and impact.
Those influences have shaped how Christians view AI. In the Barna Group's research How U.S. Christians Feel About AI & the Church, 52% of U.S. Christians polled said they would be disappointed if their church used AI. Moreover, fewer than 25% of those polled view AI as good for the Church.
Several reasons contribute to why many Christians are hesitant about incorporating AI into the church, including theological concerns, the loss of human connection, and the loss of critical thinking.
On the podcast episode Christians and AI, scholar Dr. Cory Marsh addressed the topic and argued that, of these concerns, the one that weighs most heavily on many pastors is the loss of critical thinking. Critical thinking is the core of crafting a sermon, study, or class; when pastors hand that work off, the other concerns, such as theological error and the loss of human connection, quickly follow. It is entirely rational for the church to be wary of the rise of AI and of AI-enabled products.
Moving past these concerns, it's also important to recognize how AI can positively impact church operations, especially by enhancing the inclusivity of services.
The pace of technological change is relentless, and the need for inclusive communication solutions in ministries is no different. To meet the ever-growing demand for accessibility, ministries require a diverse toolkit. One tool that has been transformed by the artificial intelligence boom is Automatic Speech Recognition (ASR).
Programs like Dragon NaturallySpeaking or your phone’s Speech-to-Text are examples of ASR technology. The IBM article What is Speech Recognition explains, in simple terms, that this tool converts spoken words into text. Although ASR systems initially have a limited vocabulary, they can be significantly enhanced by integrating resources like Large Language Models (LLMs) and Deep Learning. These advanced technologies improve the quality, accuracy, and efficiency of captions, transcriptions, and translations, making ASR tools more effective and reliable.
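To make "quality and accuracy" concrete: ASR output is conventionally scored by word error rate (WER), the word-level edit distance between the machine transcript and a human reference. The sketch below is a minimal, generic illustration of that metric, not part of any vendor's tooling:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + insertions + deletions) / reference word count."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Dynamic-programming (Levenshtein) edit distance over words.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            curr[j] = min(prev[j] + 1,             # deletion
                          curr[j - 1] + 1,         # insertion
                          prev[j - 1] + (r != h))  # substitution (0 if words match)
        prev = curr
    return prev[-1] / len(ref)

# Two word-level errors ("loved"->"love", "world"->"word") over six reference words.
print(wer("for god so loved the world", "for god so love the word"))  # 0.333...
```

A lower WER means the ASR transcript tracks the spoken words more closely; integrating LLM-based rescoring, as described above, is one way vendors push that number down.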
Unlike generative AI tools that draft sermon content, ASR does not add interpretation or creativity to your content; it simply converts your speech to text, word for word.
While AI poses certain ethical and practical challenges within the church context—ranging from concerns about authenticity in worship to the potential for diminished human connection—its benefits, particularly in enhancing accessibility and inclusivity, cannot be ignored. Tools like Automatic Speech Recognition (ASR) exemplify how AI can serve the church by broadening access to religious services for those with hearing impairments or language barriers. As technology continues to evolve, it’s important for church leaders to critically evaluate the opportunities available that do not compromise the spiritual integrity of their mission.
Discover how Aberdeen’s ASR solution offers a cost-effective approach to closed captioning. Learn more here: Aberdeen ASR Closed Captioning.
On July 18, 2024, the FCC released Report and Order (FCC 24-79) which implements a “readily accessible” requirement for closed captioning display settings on various video devices, allowing users to customize font size, type, color, position, opacity, and background to enhance readability and viewing preferences. This Order addresses the difficulties many users, particularly those who are deaf or hard of hearing, face due to complex navigation, inconsistent device interfaces, limited customization options, and inadequate support. This initiative responds to widespread complaints about the accessibility challenges of closed captioning.
The Order sets out four elements for deciding whether these display settings are "readily accessible," with which manufacturers of covered apparatus and multichannel video programming distributors (MVPDs) will need to comply: proximity, ensuring settings are easy to navigate to; discoverability, making them straightforward to find; previewability, allowing users to see changes in real time; and consistency/persistence, maintaining user settings across devices and sessions.
FCC Commissioner Anna Gomez stated, "Ensuring that those who are deaf and hard of hearing can locate and adjust closed caption settings is essential to their being able to meaningfully access and enjoy video programming. While this is a milestone to be proud of, as technology continues to advance, it is crucial that manufacturers prioritize the inclusion of accessibility features into product development from the beginning. Accessibility by design."
The discussions and rulings on these matters emphasize the FCC's commitment to improving accessibility in communications technologies, ensuring that closed captioning features are more user-friendly and customizable. Hopefully, these changes will be implemented sooner rather than later, so more people can enjoy the benefits of closed captions.
In the broadcasting industry, ensuring that your content meets quality standards and network requirements is paramount. The last thing any content creator wants is to have their files rejected, leading to extra time, effort, and resources. In this article, we'll discuss the intricacies of broadcast file rejection and explore strategies that we follow at Aberdeen to help our clients prevent file rejection.
How common is it for stations to reject files? In 2023, our AberFast team delivered approximately 88,000 broadcast files. Out of those final deliveries, 510 issues were flagged that could have led to station rejections, a rate of roughly 0.6%.
Amid technological advancements throughout the broadcasting industry, adapting and adhering to the changes in compliance, FCC regulatory guidelines, and delivery requirements pose a significant challenge for many broadcast teams. From quality issues to content concerns and file delivery requirements, numerous factors can contribute to the rejection of broadcast files.
Over the years, through our interactions with station partners and experts, and in assisting clients with program delivery across various networks in the broadcasting industry, we have identified the following list as the most common causes for rejections.
Quality issues often top the list of reasons for file rejection. Whether it's video or audio-related, production and post-production processes can introduce issues that can result in content not meeting the broadcast quality standards. Additional causes like content concerns, such as the use of inappropriate or controversial material, and creative issues like effects or artifacts failing to meet the station’s broadcast requirements can lead to rejection. File requirements, including format, resolution, and aspect ratio, must also be met to ensure acceptance.
At Aberdeen, we prioritize quality assurance to minimize the risk of file rejection.
Every workflow established in our system involves multiple layers of human and automated QC using Telestream VidChecker, which is configured to check and correct video and audio levels to meet broadcast standards. Some of the checks include chroma levels, RGB gamut, field orders, cadence, stripe errors, analog and digital dropouts, audio phase coherence, dual mono detection, and true audio peaks. (If any of these terms are new to you, here’s a resource to keep handy: Tektronix Glossary of Video Terms & Acronyms)
Now, why use both? Our goal has always been to exceed the quality expectations of our clients, who have entrusted us to deliver their files on their behalf. Combining the two lets us catch both the issues humans detect well and the issues that evade human detection. Our QC team flags problems such as captioning accuracy, audio loudness, and interlacing, while the automated QC identifies issues related to the container, color levels, field orders, and audio true peaks.
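As a simplified illustration of what one automated audio check does (a hypothetical sketch, not VidChecker's actual algorithm; true-peak measurement additionally oversamples the signal per ITU-R BS.1770), a sample-peak gate might look like:

```python
import math

def peak_dbfs(samples: list[float]) -> float:
    """Sample peak in dBFS for audio samples normalized to [-1.0, 1.0]."""
    peak = max(abs(s) for s in samples)
    return -math.inf if peak == 0 else 20 * math.log10(peak)

def passes_peak_check(samples: list[float], ceiling_dbfs: float = -2.0) -> bool:
    """Pass only if the sample peak stays at or below the ceiling (e.g., -2 dBFS)."""
    return peak_dbfs(samples) <= ceiling_dbfs

quiet = [0.1, -0.25, 0.5]   # peak 0.5, about -6.0 dBFS: passes
hot = [0.2, -0.95, 0.4]     # peak 0.95, about -0.4 dBFS: flagged
print(passes_peak_check(quiet), passes_peak_check(hot))  # True False
```

A real QC pipeline runs dozens of checks like this (levels, loudness, gamut, cadence) against the target station's delivery spec before a file ever leaves the facility.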
In a constantly evolving industry, staying abreast of regulatory broadcast standards and station requirements is essential. We regularly connect with our station partners to understand their updated delivery requirements and guidelines, ensuring that our workflows are aligned with their needs.
One example of a station-wide content requirement that we recently had to address involved coordinating with some of our clients regarding a network disclaimer policy. One of our network partners informed us about a change in the disclaimer policy for paid programming. According to the new policy, a program could be rejected if it did not include a 5-second disclaimer at the beginning. We gathered the details of the new disclaimer requirements and passed them along to our clients, assisting them in incorporating this change.
This isn't the first time we've encountered such a situation, and it certainly won't be the last. Therefore, we document these updates and ensure our clients are informed about any new requirements, and during our manual quality control (QC) checks, we use this documented information to flag any issues that do not meet the specified requirements.
Quality is non-negotiable for us. Over the years, we've learned to adapt and refine our processes to uphold the highest broadcast standards. Our automated QC software is regularly updated to align with broadcast standards and identify issues that may compromise quality. As mentioned earlier, we keep our station requirements up to date so we can serve our clients and deliver the correct file. In the event of a rejection, we follow a proven, engineering-driven approach: a thorough investigation to pinpoint the root cause, followed by preventive measures for the future.
While broadcast file rejection is a tough challenge, it is preventable. We have learned over the years that the right strategies, such as prioritizing quality assurance, staying informed about regulatory changes, and fostering collaborative relationships with network partners, help content creators minimize the risk of rejection and ensure seamless delivery of their content. We love our clients and want to help them navigate the complexities of broadcast file delivery with confidence and efficiency.
Learn more about the entire AberFast process in our webinar, An Overview of AberFast Broadcast Transcoding & Station Delivery, hosted by Matt Cook, President of Aberdeen Broadcast Services.
Closed captioning serves as a powerful tool that extends its impact far beyond aiding the deaf and hard-of-hearing community. Its significance transcends age, abilities, and background, making it an invaluable resource for both educators and learners. In the digital age, closed captioning has emerged as a transformative resource, with research revealing that students, English language learners, and children with learning disabilities who watch programs with closed captioning turned on improve their reading skills, increase their vocabulary, and enhance their focus and attention.
The scholarly article, Closed Captioning Matters: Examining the Value of Closed Captions for All Students (Smith 231) states that “Previous research shows that closed captioning can benefit many kinds of learners. In addition to students with hearing impairments, captions stand to benefit visual learners, non-native English learners, and students who happen to be in loud or otherwise distracting environments. In remedial reading classes, closed captioning improved students’ vocabulary, reading comprehension, word analysis skills, and motivation to learn (Goldman & Goldman, 1988). The performance of foreign language learners increased when captioning was provided (Winke, Gass, & Sydorenko, 2010). Following exams, these learners indicated that captions lead to increased attention, improved language processing, the reinforcement of previous knowledge, and deeper understanding of the language. For low-performing students in science classrooms, technology-enhanced videos with closed captioning contributed to post-treatment scores that were similar to higher-performing students (Marino, Coyne, & Dunn, 2010). The current findings support previous research and highlight the suitability of closed-captioned content for students with and without disabilities.”
Reading Rockets, a national public media literacy initiative, provides resources and information on how young children learn and how educators can improve their students’ reading abilities. In the article Captioning to Support Literacy, Alise Brann confirms that “Captions can provide struggling readers with additional print exposure, improving foundational reading skills.”
She states, “In a typical classroom, a teacher may find many students who are struggling readers, whether they are beginning readers, students with language-based learning disabilities, or English Language Learners (ELLs). One motivating, engaging, and inexpensive way to help build the foundational reading skills of students is through the use of closed-captioned and subtitled television shows and movies. These can help boost foundational reading skills, such as phonics, word recognition, and fluency, for a number of students.”
Research clearly demonstrates that “people learn better and comprehend more when words and pictures are presented together. The combination of aural and visual input gives viewers the opportunity to comprehend information through different channels and make connections between them” (The Effects of Captions on EFL Learners’ Comprehension of English-Language Television Programs).
From bolstering reading skills, to enhancing focus and language comprehension, the benefits of closed captioning are numerous. We at Aberdeen Broadcast Services are committed to providing quality closed captions for television (TV) and educational programming.
Here is the public service announcement (PSA) we released in 2016 on local broadcast stations, emphasizing how closed captioning can enhance children's literacy skills.
This past summer, Abby, a 15-year-old homeschooled student with a passion for learning new things, decided to take an accelerated 8-week American Sign Language (ASL) class at her local community college. Abby quickly immersed herself in the world of ASL, and she was amazed by the beauty and expressiveness of the language. She also learned a lot about the deaf community, their unique culture and traditions, and the challenges they face in a hearing world.
In her closing statement for the course's final assignment, Abby wrote:
"Overall, the deaf community makes their way through daily life interacting with the hearing. As a cultural minority group, they have traditions, beliefs, values, and a language they practice in their everyday life. As a member of the hearing community, having been made aware of the challenges the deaf experience in their everyday lives, I feel challenged to find practical ways to assist and acknowledge the needs of the deaf community. Making a commitment to learn and use ASL is a step in the right direction. Another step is encouraging other hearing individuals to do the same."
Abby's story gives us a glimpse into the silent world of the deaf. We encourage you to read her paper. It might be the moment of enlightenment that leads you to become a champion for this minority group, which can be found in every nation of the world.
The following is Abby's final assignment paper, a reflection on her experiences in the ASL class and her commitment to supporting the deaf community.
When you think of culture, you typically think of the way of life in places such as China or Brazil, because they have food and beliefs specific to them. According to the Merriam-Webster dictionary, culture is the set of shared attitudes, values, goals, and practices that characterize an institution or organization. The deaf community is considered a culture since it has its own values, characteristics, and traditions. It is considered a minority culture because people who share those characteristics are far fewer in number than those who do not. The deaf culture has unique aspects.
Five percent of the world's population, about 360 million people, are either deaf or hard of hearing, and 3.5 percent of the US population is deaf or hard of hearing. The deaf culture is a close-knit cultural group, as its members understand each other and the struggles they may go through. They look at their hearing loss as part of their identity and don't regard it as a disability or defect. Dr. Barbara Kannapel defined American Deaf culture as a set of learned behaviors of a group of people who are deaf and who have their own language (ASL), values, rules, and traditions. Sign language has some 300 different forms spread throughout the world; American Sign Language is used in the USA and Canada.
Many people think the deaf are less capable than they really are. The deaf still do everyday tasks such as driving, having children, going to school, and playing sports. Because driving is such a visual task, you don't really need to hear; you need to be hyper-aware of your surroundings visually. Some people even question whether the deaf can have children; the answer is yes. Children born to deaf parents typically learn sign language, then English, and are given a special term: CODA, which stands for Child of Deaf Adult. Children who are born deaf are considered special in the deaf community. Going to school is completely normal for the deaf, and there are deaf schools they can attend as well, some of them residential, meaning students can live there. The first school for the deaf was founded in France in the 1760s. The deaf can play nearly any sport; the schools for the deaf have sports teams for their students, and those teams compete against hearing teams. Gallaudet, a deaf university in Washington, D.C., is known for its football team, which in the 1890s began to huddle and sign their next play so the other team could not see. The huddle is used to this day by football teams all over to discuss their game plan in private. There are many successful deaf athletes in the world.
The hearing's spoken language and the deaf’s gesture- and facial-driven language make it hard for the two groups to communicate. You may be wondering how deaf individuals communicate or function with the hearing. There are several ways: hearing people who learn to sign can communicate with the deaf in sign language; the deaf can often read lips, so they can understand when a hearing person is talking and respond by speaking; and writing messages back and forth on paper, or now through text messaging, works as well.
The hearing community provides help for the deaf such as closed captions and subtitles, but the sad part of closed captions is that many of the deaf cannot read as fast as the hearing can talk. This means that in TV shows or movies with closed captions on, a deaf person trying to read can find it hard to keep up with the speed of the talking. Although closed captions and subtitles are not the easiest to keep up with, they still give the deaf community a way to watch media.
Associations such as the DCMP and the NAD have been helping the deaf minority group for years. The NAD is the National Association of the Deaf, an association of, by, and for the deaf community. The nonprofit organization fights for the civil rights of the deaf across the nation and has been around since 1880. Its site is filled with information about deaf culture and videos for seniors learning to sign. So far the NAD has made “advancements in education, employment, health care, technology, telecommunications, youth leadership, and more” for the deaf. (Celio 24) The DCMP is the Described and Captioned Media Program, which provides services that support deaf students in succeeding academically. Its website offers videos with subtitles covering various school topics, with the captions and subtitles edited to a lower words-per-minute rate so that those at different reading levels can learn from them. These types of associations and programs are some of the ways the hearing community is helping the deaf.
The deaf community has its own cultural norms like any other culture. The community tends to be more direct and, as some would say, without a filter. The hearing community tends to put “fluff” around sensitive questions, whereas deaf culture asks the question straight up. Some hearing individuals can find this offensive because it is so abrupt, but that is simply how the deaf communicate. For the deaf, body language and facial expressions are a conscious part of communication, whereas for the hearing they are largely unconscious. Waving a hand in front of a deaf person's face or switching lights on and off is like putting your hand over the mouth of a hearing person; it would be considered rude. Eye contact is needed for lip reading and seeing signs, so breaking eye contact degrades the quality of communication. The hearing, by contrast, can look away from a conversation and still hear the spoken word, and can still speak in the dark.
A core value of deaf culture is clear communication for all, meaning that both expression and comprehension are clear. Facial expressions when signing carry half the meaning, which is why they need to be clear to be understood. The natural social interaction that happens at Deaf residential schools and Deaf clubs is valued by the community, as are deaf literature, deaf art, and deaf heritage. The deaf community is brought together by its art: deaf art explains the physical or cultural deaf experience using typical art materials. Deaf literature, according to the ASL Content Standards, is “A collection of English and American Sign Language, such as printed writings, and video published text such as poetry, stories, essays, and plays that reflect a Deaf culture and Deaf experience.” Deaf heritage is the history and development of the deaf community. These values are part of what makes the deaf culture.
A culture consists of traditions for the next generations to carry on. Deaf traditions include social activities such as Deaf clubs, athletic events, organizational involvement, and school reunions. Music and poetry are also traditions in the culture: deaf poetry, like deaf literature and art, is poetry about the experience of deafness, and music is enjoyed through the vibrations that can be felt. In a 2016 article, a man named Albert Wong explains that when he is in the car he turns the music up to a high volume so he can feel the vibrations of the bass.
Overall, the Deaf community navigates daily life interacting with the hearing world, and as a cultural minority group it has traditions, beliefs, values, and a language practiced every day. As a member of the hearing community, now aware of the challenges deaf people face in their everyday lives, I feel challenged to find practical ways to assist and acknowledge the needs of the Deaf community. Committing to learn and use ASL is, I feel, a step in the right direction; encouraging other hearing individuals to do the same is another.
In today's consumer-driven world, companies are constantly seeking ways to increase their revenue. One approach that has gained traction is adding service fees to services that were once provided at no additional charge. While this may seem beneficial for businesses, it often raises concerns among users and can backfire: loss of trust, user abandonment, competitive disadvantage, and negative publicity.
One case that illustrates this trend is a recent development in the live captioning industry. EEG Enterprises, the leading manufacturer of closed captioning encoders and the creator of the iCap Cloud Network software, was acquired by AI Media Technologies in 2021. AI Media Technologies has since announced its intention to charge fees for accessing EEG encoders through the iCap Network, which many agencies, including Aberdeen, have relied on for more than two decades.
Beginning November 1, 2023, AI Media Technologies will institute a fee for accessing the iCap Network. The change requires all agencies, including Aberdeen, to agree to new terms and conditions, which state that "If you reject the new iCap Terms, your ability to access the iCap Network will cease from October 1st, 2023" (a deadline since pushed out to November 1, 2023). Many agencies have expressed concern that the new fee will inflate their operational costs and could hinder their ability to offer cost-effective captioning services to their clients. AI Media Technologies, for its part, justifies the fee by citing growing demands for data protection, privacy, and security enhancements, along with the need for a robust and stable network, and emphasizes its commitment to investing significant capital and resources in the iCap Network.
Read the full iCap Network Access Agreement here.
How the new fee will impact the captioning industry, and whether it will lead to higher prices for consumers, remains to be seen. What is clear is that clients, who in most cases already own the high-cost encoders, will bear the brunt of the additional charges. While introducing service fees for existing services can provide short-term financial gains, companies should carefully weigh the potential consequences before implementing such changes. Customer satisfaction and the perceived value of services should always be top priorities when adjusting pricing structures.
The Federal Communications Commission’s (FCC) Public Safety and Homeland Security Bureau (PSHSB) issued a Public Notice to remind Emergency Alert System (EAS) participants of their obligation to ensure that EAS alerts are accessible to persons with disabilities.
The Federal Emergency Management Agency (FEMA), in coordination with the FCC, will conduct a nationwide Emergency Alert System (EAS) and Wireless Emergency Alert (WEA) test on October 4, 2023.
The Public Notice also reminded EAS Participants that they must file ETRS Form Two after the nationwide EAS test no later than October 5, 2023, and ETRS Form Three on or before November 20, 2023. For TV stations, to be visually accessible (as it relates to closed captioning), EAS text must be displayed as follows:
“At the top of the television screen or where it will not interfere with other visual messages (e.g., closed captioning),” and “without overlapping lines or extending beyond the viewable display (except for video crawls that intentionally scroll on and off the screen)…”
This is in addition to another FCC Public Notice which states:
“Individuals who are Deaf or Hard of Hearing. Emergency information provided in the audio portion of programming also must be accessible to persons who are deaf or hard of hearing through closed captioning or other methods of visual presentation, including open captioning, crawls or scrolls that appear on the screen. Visual presentation of emergency information may not block any closed captioning, and closed captioning may not block any emergency information provided by crawls, scrolls, or other visual means.”
As EAS alerts are expected to become more common in the future, this is something we in the captioning industry must be prepared for, doing our part to make alerts clearer and more accessible for viewers.