In today's world defined by constant digital engagement, younger generations increasingly rely on captions and subtitles to enhance their viewing experience. This trend, largely popularized by Gen Z and Millennials, isn’t just limited to streaming shows or watching social media content; it’s spilling over into live events, with a strong case for captions as a way to boost engagement and attendance in venues that historically may have overlooked them, such as churches.
Here’s a look at the data supporting this movement and how churches can use captioning to foster a more engaging environment.
Younger Generations and the Subtitle Revolution
Preply, a language learning platform, conducted a survey titled “Why America is Obsessed with Subtitles” to explore the growing trend of subtitle usage among Americans. The study involved over 1,200 participants and aimed to understand how and why individuals use subtitles in their media consumption. The findings revealed that 50% of Americans watch content with subtitles most of the time, with younger generations, particularly Gen Z, showing a greater preference for subtitle use.
This data reveals a generation that sees captions not as an add-on but as an essential part of the viewing experience. For churches, this could signal an opportunity: integrating captions into services may not only help with accessibility but also align with the viewing habits of younger generations.
Captioning Live Events: A Path to Higher Engagement
The impact of captions on in-person attendance is significant. A study from Stagetext revealed that 31% of people would attend more live events if captions were readily available, with younger people leading this interest: 45% of 18-25-year-olds would be more likely to attend events if they were captioned, compared to 16% of those over 56.
This enthusiasm for live captions reflects a shift in how younger generations want to consume live content. Captions at events enhance accessibility for everyone, regardless of hearing ability, and address concerns with acoustics or unclear speech, which often deter audiences. In the church context, offering captions during sermons, worship songs, or events could break down barriers that may otherwise prevent younger individuals from fully engaging.
Engaging a New Generation: How Captioning Can Help Churches Reconnect with Young Adults
Christian churches across the United States are increasingly challenged to capture the interest and attendance of younger generations, who are showing declining levels of religious affiliation and engagement. The Pew Research Center's 2019 article, "In U.S., Decline of Christianity Continues at Rapid Pace," highlights a significant decline in Christian affiliation among younger Americans. The data indicates that only 49% of Millennials (born between 1981 and 1996) identify as Christians, compared to 84% of the Silent Generation (born between 1928 and 1945).
With reports indicating a decline in church attendance in the U.S., many churches are seeking strategies to re-engage their congregations, especially young adults. Captions could be a powerful, practical solution.
Offering live captions during services could address several of these issues at once, from accessibility and unclear acoustics to the viewing preferences younger attendees already bring with them.
Moving Forward: A Call for Churches to Embrace Captioning
By understanding the viewing habits of younger generations, churches have the opportunity to create an environment that aligns with their engagement preferences. Embracing captioning technology for in-person services and online sermon streams not only makes services more accessible but can also foster deeper engagement, particularly among younger congregants who see captions as an essential part of their everyday experience.
As churches consider how best to adapt to changing times, incorporating live captions could be a powerful step toward renewing attendance and helping younger generations feel seen, heard, and included in the community. It’s a practical, meaningful solution that could not only enhance accessibility but help bridge generational gaps, allowing churches to resonate with the next generation and grow their community in an inclusive and modern way.
In this session, Tony from Aberdeen Broadcast Services, an accessibility specialist focusing on higher education, dives deep into the essentials of remote captioning. The talk, co-presented by Amiyah Lee, addresses some of the most frequently asked questions from colleges and universities regarding setting up remote captioning, where the live writer does not need to be physically present in the classroom. This approach offers greater flexibility and cost-effectiveness while maintaining the quality needed to meet accessibility standards.
Tony shares some statistics on the use of captioning in higher education, such as the fact that 19% of college students in the United States experience some degree of hearing loss. Additionally, 71% of students without hearing difficulties use captions at least some of the time, and 90% of students who use captions say that they help them learn more effectively. Furthermore, 65% of students use captions to help focus and retain information, while 62% use them to overcome poor audio quality.
Tony and Amiyah provide an overview of the practical steps institutions can take to implement remote captioning, including selecting the right technology and overcoming common challenges. They also discuss how Aberdeen Broadcast Services supports educational institutions by offering affordable captioning solutions, both human captioning and AI-powered ASR technology, that meet compliance requirements without compromising on quality.
Watch it on-demand here:
For more on our work with captioning in higher education, visit: Live Captioning for Universities.
Automatic Speech Recognition (ASR) is now making a substantial impact on how local churches connect with their communities, breaking down barriers once caused by financial limitations. With ASR, churches can now offer inclusive services to the Deaf and Hard of Hearing (D/HH) community. But it doesn’t stop there—when combined with Automatic Machine Translation (AMT), this powerful duo overcomes language hurdles, translating sermons into dozens of languages. Even the smallest congregations can now reach a global audience, sharing their message far and wide.
We previously explored the ethical and theological concerns with AI in the Church in our last blog post: The Role of AI in Enhancing Church Accessibility and Inclusion.
While human-generated captions and translations always offer the highest quality, ASR and AMT provide a cost-effective solution that can be utilized by churches and ministries of any size or budget. Imagine your sermon reaching the Deaf and Hard of Hearing (D/HH) community, allowing for full participation, or sharing your message in various languages to a worldwide audience that might otherwise have been unreachable. AI-powered closed captioning and translations help make this a reality. ASR captions and translations are not only a technological advancement; they are tools for inclusivity and global outreach.
Churches aiming to make a significant impact can turn to AI-powered accessibility tools, once considered out of reach, for preaching and teaching. Practical uses of ASR range from live captions for in-person services to captioned and translated online streams that reach multilingual audiences.
Aberdeen’s new ASR solution, developed with ministry in mind, employs robust AI engines and large language models to provide a powerful advantage in delivering Christian content. Each solution is carefully crafted to fit your specific ministry needs, providing high-quality captions at a fraction of the cost.
Discover how Aberdeen’s ASR solution offers a cost-effective approach to closed captioning & translation. Learn more here: Aberdeen ASR Closed Captioning.
Does Artificial Intelligence (AI) have a place in the Church? Countless podcasts, articles, and sermons are popping up addressing the same question. Despite the widespread discussion, many overlook how deeply AI is already embedded in our daily lives. It influences everything from manufacturing processes and automotive technology to how our food is produced, impacting many products and services we use daily.
To tackle the question, “Does AI have a place in the Church?” let’s first understand what AI is. At its core, AI simulates human intelligence, performing tasks that typically require human intervention. In the article Artificial Intelligence, IBM explains that “On its own or combined with other technologies (e.g., sensors, geolocation, robotics) AI can perform tasks that would otherwise require human intelligence or intervention”.
Think of AI as a modern tool designed to handle tedious, repetitive, and data-intensive tasks efficiently. For Christians, AI should be considered like any other tool, such as the internet. It’s a resource that, when used wisely, can enhance our practices and outreach without compromising our core values or mission.
The fear of AI has led some people to strongly believe that the world will be overtaken by it. This reaction is understandable, as AI has quickly evolved from a science fiction concept to a societal staple. Major news outlets continuously report on various issues related to AI. Hollywood produces big-budget movies and TV shows about AI taking over the world, while authors write extensively about its potential aftermath and impact.
Those influences have shaped how Christians view AI. In the Barna Group's research How U.S. Christians Feel About AI & the Church, 52% of U.S. Christians polled said they would be disappointed if their church used AI, and fewer than 25% of those polled view AI as good for the Church.
Several reasons contribute to why many Christians are hesitant about incorporating AI into the church, including theological concerns, the loss of human connection, and the loss of critical thinking.
One scholar, Dr. Cory Marsh, speaking on the episode Christians and AI, argued that the chief concern among many Christian pastors is the loss of critical thinking. Critical thinking is the core of crafting a sermon, study, or class; when pastors hand that away, the other concerns, such as theological drift and the loss of human connection, quickly follow. It is entirely rational for the church to be wary of the rise of AI and of AI-enabled products.
Moving past these concerns, it's also important to recognize how AI can positively impact church operations, especially by enhancing the inclusivity of services.
The pace of technological change is relentless, and the need for inclusive communication solutions in ministries is no different. To meet the ever-growing demand for accessibility, ministries require a diverse toolkit. One tool that has been transformed by the artificial intelligence boom is Automatic Speech Recognition (ASR).
Programs like Dragon NaturallySpeaking or your phone’s Speech-to-Text are examples of ASR technology. The IBM article What is Speech Recognition explains, in simple terms, that this tool converts spoken words into text. Although ASR systems initially have a limited vocabulary, they can be significantly enhanced by integrating resources like Large Language Models (LLMs) and Deep Learning. These advanced technologies improve the quality, accuracy, and efficiency of captions, transcriptions, and translations, making ASR tools more effective and reliable.
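To make the vocabulary point concrete, here is a minimal sketch of the kind of post-processing a custom phrase list enables. The misrecognitions and corrections below are invented for illustration; production ASR engines expose comparable "custom vocabulary" or "phrase list" features rather than this exact function.

```python
# Illustrative sketch: post-correcting raw ASR output with a custom
# domain vocabulary. All terms below are hypothetical examples.
import re

# Map common misrecognitions of ministry-specific terms to the intended words.
CUSTOM_VOCABULARY = {
    "fill a peons": "Philippians",
    "knee a my a": "Nehemiah",
    "use a fuss": "Ephesians",
}

def apply_vocabulary(raw_text: str, vocabulary: dict) -> str:
    """Replace known misrecognitions with the correct domain terms."""
    corrected = raw_text
    for wrong, right in vocabulary.items():
        corrected = re.sub(re.escape(wrong), right, corrected, flags=re.IGNORECASE)
    return corrected

raw = "Turn with me to fill a peons chapter two"
print(apply_vocabulary(raw, CUSTOM_VOCABULARY))
# Prints: Turn with me to Philippians chapter two
```

Real systems go further, using LLMs to correct from context rather than a fixed table, but the goal is the same: domain terms come out right.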
Unlike generative AI tools that draft sermon content, ASR adds no interpretation or creativity to your message; it simply converts your speech to text, word for word.
While AI poses certain ethical and practical challenges within the church context—ranging from concerns about authenticity in worship to the potential for diminished human connection—its benefits, particularly in enhancing accessibility and inclusivity, cannot be ignored. Tools like Automatic Speech Recognition (ASR) exemplify how AI can serve the church by broadening access to religious services for those with hearing impairments or language barriers. As technology continues to evolve, it’s important for church leaders to critically evaluate the opportunities available that do not compromise the spiritual integrity of their mission.
Discover how Aberdeen’s ASR solution offers a cost-effective approach to closed captioning. Learn more here: Aberdeen ASR Closed Captioning.
Closed captioning serves as a powerful tool that extends its impact far beyond aiding the deaf and hard-of-hearing community. Its significance transcends age, abilities, and background, making it an invaluable resource for both educators and learners. In the digital age, closed captioning has emerged as a transformative resource, with research revealing that students, English language learners, and children with learning disabilities who watch programs with closed captioning turned on improve their reading skills, increase their vocabulary, and enhance their focus and attention.
The scholarly article, Closed Captioning Matters: Examining the Value of Closed Captions for All Students (Smith 231) states that “Previous research shows that closed captioning can benefit many kinds of learners. In addition to students with hearing impairments, captions stand to benefit visual learners, non-native English learners, and students who happen to be in loud or otherwise distracting environments. In remedial reading classes, closed captioning improved students’ vocabulary, reading comprehension, word analysis skills, and motivation to learn (Goldman & Goldman, 1988). The performance of foreign language learners increased when captioning was provided (Winke, Gass, & Sydorenko, 2010). Following exams, these learners indicated that captions lead to increased attention, improved language processing, the reinforcement of previous knowledge, and deeper understanding of the language. For low-performing students in science classrooms, technology-enhanced videos with closed captioning contributed to post-treatment scores that were similar to higher-performing students (Marino, Coyne, & Dunn, 2010). The current findings support previous research and highlight the suitability of closed-captioned content for students with and without disabilities.”
Reading Rockets, a national public media literacy initiative, provides resources and information on how young children learn and how educators can improve their students’ reading abilities. In the article, Captioning to Support Literacy, Alise Brann confirms that “Captions can provide struggling readers with additional print exposure, improving foundational reading skills.”
She states, “In a typical classroom, a teacher may find many students who are struggling readers, whether they are beginning readers, students with language-based learning disabilities, or English Language Learners (ELLs). One motivating, engaging, and inexpensive way to help build the foundational reading skills of students is through the use of closed-captioned and subtitled television shows and movies. These can help boost foundational reading skills, such as phonics, word recognition, and fluency, for a number of students.”
Research clearly demonstrates that “people learn better and comprehend more when words and pictures are presented together. The combination of aural and visual input gives viewers the opportunity to comprehend information through different channels and make connections between them” (The Effects of Captions on EFL Learners’ Comprehension of English-Language Television Programs).
From bolstering reading skills, to enhancing focus and language comprehension, the benefits of closed captioning are numerous. We at Aberdeen Broadcast Services are committed to providing quality closed captions for television (TV) and educational programming.
Here is the public service announcement (PSA) we released in 2016 on local broadcast stations, emphasizing how closed captioning can enhance children's literacy skills.
In today's consumer-driven world, companies are constantly seeking ways to increase their revenue. One approach that has gained traction is adding service fees to services that were once offered at no additional charge. While this might seem like a beneficial move for businesses, it often raises concerns among users and can carry negative implications such as loss of trust, user abandonment, competitive disadvantage, and negative publicity.
One case that illustrates this trend is a recent development in the live captioning industry. EEG Enterprises, the leading manufacturer of closed captioning encoders and the creator of the iCap Cloud Network software, was acquired by AI Media Technologies in 2021. Subsequently, AI Media Technologies announced its intention to impose fees for accessing EEG encoders through the iCap Network, which many agencies, including Aberdeen, have relied on for more than two decades.
Beginning November 1, 2023, AI Media Technologies will institute a fee for accessing the iCap Network. This change requires all agencies, including Aberdeen, to agree to new terms and conditions, which state that "If you reject the new iCap Terms, your ability to access the iCap Network will cease from October 1st, 2023" (since pushed out to November 1st, 2023). Many agencies have expressed concern about the new fee, asserting that it will inflate their operational costs and potentially hinder their ability to offer cost-effective captioning services to their clientele. Conversely, AI Media Technologies justifies the fee by citing growing demands for data protection, privacy, security enhancements, and the need for a robust and stable network, emphasizing its commitment to investing significant capital and resources into the iCap Network.
Read the full iCap Network Access Agreement here.
How the new fee will impact the captioning industry and whether it could lead to higher prices for consumers remains to be seen. However, it is clear that clients, who in most cases own the already high-cost encoders, will bear the brunt of the additional costs. Thus, while introducing service fees for existing services can provide short-term financial gains, companies should carefully consider the potential consequences before implementing such changes. Prioritizing customer satisfaction and the perceived value of services should always be a top priority when adjusting pricing structures.
On June 8, 2023, the Federal Communications Commission (FCC) released a Report and Order and Notice of Proposed Rulemaking aiming to further ensure accessibility for all individuals in video conferencing services. The action establishes that, under Section 716 of the Twenty-First Century Communications and Video Accessibility Act of 2010 (CVAA), video conferencing platforms commonly used for work, school, healthcare, and other purposes fall under the definition of "interoperable video conferencing service."
Under Section 716 of the CVAA, Advanced Communications Services (ACS) and equipment manufacturers are required to make their services and equipment accessible to individuals with disabilities, unless achieving accessibility is not feasible. ACS includes interoperable video conferencing services such as Zoom, Microsoft Teams, Google Meet, and BlueJeans. The FCC previously left the interpretation of "interoperable" open, but in this latest report, it adopted the statutory definition without modification, encompassing services that provide real-time video communication to enable users to share information.
In the Notice of Proposed Rulemaking, the FCC seeks public comments on performance objectives for interoperable video conferencing services, including requirements for accurate and synchronous captions, text-to-speech functionality, and effective video connections for sign language interpreters.
The FCC's actions on this item are an important step toward ensuring that people with disabilities have equal access to video conferencing services. The Report and Order will help make video conferencing more accessible and promote greater inclusion and participation of people with disabilities.
This article was co-written with the help of both ChatGPT and Google Bard as a demonstration of the technology discussed in this article. You can also read along with Aberdeen's President, Matt Cook, in the recording below. Well, not really: it is Matt's voice cloned by AI from a short clip of his speech.
Artificial Intelligence (AI) has revolutionized numerous industries, and its influence on language-related technologies is particularly remarkable. In this blog post, we will explore how AI is transforming closed captioning, language translation, and even the creation of cloned voices. These advancements not only enhance accessibility and inclusion but also have far-reaching implications for communication in an increasingly globalized world.
Closed captioning is an essential feature for individuals who are deaf or hard of hearing, enabling them to access audiovisual content. Traditional closed captioning methods rely on human transcriptionists; however, AI-powered speech recognition algorithms have made significant strides in this field.
Using deep learning techniques, AI models can transcribe spoken words into text with increasing accuracy, providing real-time closed captioning. This does not yet meet FCC accuracy guidelines for broadcast, but it is often good enough in situations where the alternative is no closed captions at all. These models continuously improve by analyzing large amounts of data and learning from diverse sources. As a result, AI has made closed captioning more accessible, enabling individuals to enjoy online videos with greater ease.
Our team is working hard to develop and launch AberScribe, our new AI transcript application powered by OpenAI, sometime in mid-2024. From any audio/video source file, the AberScribe app will create an AI-generated transcript that can be edited in our online transcript editor and exported into various caption formats. AberScribe will also have added features for creating other AI-generated resources from that final transcript. Resources like summaries, glossaries of terms, discussion questions, interactive worksheets, and many more - the possibilities are endless.
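As a rough illustration of one step a tool like this performs, the sketch below converts timed transcript segments into the SRT caption format. The segment data and function names here are invented for the example; AberScribe's actual implementation is not public.

```python
# Sketch: exporting timed transcript segments as an SRT caption file.
# Segment data is hypothetical; a real ASR pass would supply it.

def format_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments: list) -> str:
    """Render (start, end, text) segments as numbered SRT blocks."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n"
            f"{format_timestamp(seg['start'])} --> {format_timestamp(seg['end'])}\n"
            f"{seg['text']}\n"
        )
    return "\n".join(blocks)

segments = [
    {"start": 0.0, "end": 2.5, "text": "Welcome, everyone."},
    {"start": 2.5, "end": 6.0, "text": "Please open your hymnals."},
]
print(segments_to_srt(segments))
```

The same segment data can be rendered into other caption formats (WebVTT, SCC, and so on) by swapping out the formatting step, which is why a clean edited transcript is the valuable artifact.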
Sign up to join the waitlist and be one of our first users: https://aberdeen.io/aberscribe-wait-list/
Language barriers have long hindered effective communication between people from different linguistic backgrounds. However, AI-powered language translation has emerged as a game-changer, enabling real-time multilingual conversations and seamless understanding across different languages.
Machine Translation (MT) models, powered by AI, have made significant strides in accurately translating text from one language to another. By training on vast amounts of multilingual data, these models can understand and generate human-like translations, accounting for context and idiomatic expressions. This has empowered businesses, travelers, and individuals to engage in cross-cultural communication effortlessly.
In addition to written translation, AI is making headway in spoken language translation as well. With technologies like neural machine translation (NMT), AI systems can listen to spoken language, translate it in real-time, and produce synthesized speech in the desired language. This breakthrough holds immense potential for international conferences, tourism, and fostering cultural exchange.
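Conceptually, that real-time pipeline is a chain of three stages. The sketch below shows the shape of it with stub functions standing in for real engines; the function names, the tiny translation table, and the audio handling are all hypothetical placeholders, not a real API.

```python
# Conceptual pipeline: speech recognition -> machine translation -> speech
# synthesis. Each stub stands in for a real model (ASR, NMT, TTS).

def recognize_speech(audio: bytes) -> str:
    # Stub: a real ASR engine would transcribe the audio.
    return "welcome to the service"

def translate(text: str, target_lang: str) -> str:
    # Stub: a real NMT model would translate; this table is demo-only.
    table = {("welcome to the service", "es"): "bienvenidos al servicio"}
    return table.get((text, target_lang), text)

def synthesize(text: str) -> bytes:
    # Stub: a real TTS engine would return audio samples.
    return text.encode("utf-8")

def speech_to_speech(audio: bytes, target_lang: str) -> bytes:
    """Chain the three stages end to end."""
    return synthesize(translate(recognize_speech(audio), target_lang))

print(speech_to_speech(b"...", "es").decode())
# Prints: bienvenidos al servicio
```

In production systems each stage streams partial results to the next so the translated audio lags the speaker by only a few seconds.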
The advent of AI has brought about significant advancements in speech synthesis, allowing for the creation of cloned voices that mimic the speech patterns and vocal identity of individuals. While cloned voices have sparked debates regarding ethical use, they also present exciting possibilities for personalization and accessibility.
AI-powered text-to-speech (TTS) models can analyze recorded speech data from an individual, capturing their vocal characteristics, intonations, and nuances. This data is then used to generate synthetic speech that sounds remarkably like the original speaker. This technology can be immensely beneficial for individuals with speech impairments, providing them with a voice that better aligns with their identity.
Moreover, cloned voices have applications in industries like entertainment and marketing, where celebrity voices can be replicated for endorsements or immersive experiences. However, it is crucial to navigate the ethical considerations surrounding consent and proper usage to ensure that this technology is used responsibly.
Artificial Intelligence continues to redefine the boundaries of accessibility, communication, and personalization in various domains. In the realms of closed captioning, language translation, and cloned voices, AI has made significant strides, bridging gaps, and enhancing user experiences. As these technologies continue to evolve, it is vital to strike a balance between innovation and ethical considerations, ensuring that AI is harnessed responsibly to benefit individuals and society as a whole.
Been tasked with figuring out how to implement closed captions in your video library? The process can be overwhelming at first. While evaluating closed captioning vendors, it’s good to understand the benefits of captioning, who your audience is, what to consider when it comes to quality, and what to expect from a vendor.
There are several factors an organization should consider and evaluate before choosing a closed captioning vendor.
Overall, closed captioning is a valuable tool that can benefit a wide range of audiences. It makes videos more accessible, engaging, and comprehensible for everyone.
By considering these factors, organizations can choose a closed captioning vendor that will meet their needs and provide a high-quality service.
Use these tips when evaluating closed captioning vendors and you’ll ensure that your videos are accessible to everyone and provide a positive viewing experience for all viewers.
In 2022, just days before winning the primary to become the Democratic candidate for the Senate in Pennsylvania, John Fetterman suffered a stroke. Like many stroke victims, he experienced a loss of function that persisted long after his recovery, including lingering auditory processing issues that made it challenging for him to understand spoken words. In interviews in the months that followed, John Fetterman relied on closed-captioning technology to help him comprehend reporters' questions and to assist him in his debates against his general-election opponent, Dr. Mehmet Oz.
After he was elected to the U.S. Senate, closed-captioning devices were installed both at his desk and at the front of the Senate chamber to facilitate his understanding of his colleagues as they spoke on the Senate floor. John Fetterman serves on several committees, including the Committee on Agriculture, Nutrition, and Forestry; the Committee on Banking, Housing, and Urban Affairs; the Committee on Environment and Public Works; the Joint Economic Committee; and the Special Committee on Aging. Closed-captioning has proven invaluable, benefiting both John Fetterman and his constituents in Pennsylvania, extending its utility well beyond enabling him to watch TV at night or understand reporters.
With the assistance of closed-captioning technology, John Fetterman has been able to serve the people of Pennsylvania at the highest levels of government. During a hearing with the Senate Special Committee on Aging, Fetterman himself expressed gratitude for the transcription technology on his phone, stating, "This is a transcription service that allows me to fully participate in this meeting and engage in conversations with my children and interact with my staff." He later added, "I can't imagine if I didn't have this kind of bridge to allow me to communicate effectively with other people."
Captioning and transcription efforts extend well beyond being a mere requirement for broadcasting a program. As captioning technology continues to advance, an increasing number of individuals, like John Fetterman, will have the opportunity to participate in public life, even at the highest levels of government. They will serve others, even as transcription and captioning technology serves them.
Take a look at his setup in action here. Dedicated monitors with real-time captions displayed are becoming an increasingly popular setup at live events. Alternatively, explore the convenience of live captioning on mobile phones, making captions accessible from any seat in the venue. Either option is easily achievable — contact one of our experts to find out more.