In today's world of constant digital engagement, younger generations increasingly rely on captions and subtitles to enhance their viewing experience. This trend, largely popularized by Gen Z and Millennials, isn't limited to streaming shows or social media content; it's spilling over into live events, making a strong case for captions as a way to boost engagement and attendance in venues that have historically overlooked them, such as churches.

Here’s a look at the data supporting this movement and how churches can use captioning to foster a more engaging environment.

Younger Generations and the Subtitle Revolution

Preply, a language learning platform, conducted a survey titled "Why America is Obsessed with Subtitles" to explore the growing trend of subtitle usage among Americans. The study surveyed over 1,200 participants to understand how and why individuals use subtitles in their media consumption. The findings revealed that 50% of Americans watch content with subtitles most of the time, with younger generations, particularly Gen Z, showing a greater preference for subtitle use.

This data reveals a generation that sees captions not as an add-on but as an essential part of the viewing experience. For churches, this could signal an opportunity: integrating captions into services may not only help with accessibility but also align with the viewing habits of younger generations.

Captioning Live Events: A Path to Higher Engagement

[Photo: a hand holding up a phone at church, displaying live captions during a sermon.]

The impact of captions on in-person attendance is significant. A study from Stagetext revealed that 31% of people would attend more live events if captions were readily available, with younger people leading this interest: 45% of 18-25-year-olds would be more likely to attend events if they were captioned, compared to 16% of those over 56.

This enthusiasm for live captions reflects a shift in how younger generations want to consume live content. Captions at events enhance accessibility for everyone, regardless of hearing ability, and address concerns with acoustics or unclear speech, which often deter audiences. In the church context, offering captions during sermons, worship songs, or events could break down barriers that may otherwise prevent younger individuals from fully engaging.

Engaging a New Generation: How Captioning Can Help Churches Reconnect with Young Adults

Christian churches across the United States are increasingly challenged to capture the interest and attendance of younger generations, who show declining levels of religious affiliation and engagement. The Pew Research Center's 2019 report, "In U.S., Decline of Christianity Continues at Rapid Pace," highlights a significant decline in Christian affiliation among younger Americans: only 49% of Millennials (born between 1981 and 1996) identify as Christians, compared to 84% of the Silent Generation (born between 1928 and 1945).

With reports indicating a decline in church attendance in the U.S., many churches are seeking strategies to re-engage their congregations, especially young adults. Captions could be a powerful, practical solution.

Offering live captions during services could address several issues at once: accessibility for Deaf and hard-of-hearing attendees, poor acoustics or unclear speech, and the viewing preferences of younger congregants who already treat captions as the norm.

Moving Forward: A Call for Churches to Embrace Captioning

By understanding the viewing habits of younger generations, churches have the opportunity to create an environment that aligns with their engagement preferences. Embracing captioning technology for in-person services and online sermon streams not only makes services more accessible but can also foster deeper engagement, particularly among younger congregants who see captions as an essential part of their everyday experience.

As churches consider how best to adapt to changing times, incorporating live captions could be a powerful step toward renewing attendance and helping younger generations feel seen, heard, and included in the community. It’s a practical, meaningful solution that could not only enhance accessibility but help bridge generational gaps, allowing churches to resonate with the next generation and grow their community in an inclusive and modern way.

ASR CAPTIONING & TRANSCRIPTION SERVICES

In-person Captions & Translation

Transform your in-person church services with instant, real-time captioning & translation.

With our real-time captioning solution, you can ensure that everyone, regardless of hearing ability or language preference, can follow along with ease. Designed for seamless integration into your existing setup, our captions are easy to use and perfect for enhancing the inclusivity of your in-person worship experiences.
[Photo: people at a worship service, with an audio-wave graphic symbolizing ASR as a tool.]

Automatic Speech Recognition (ASR) is now making a substantial impact on how local churches connect with their communities, breaking down barriers once caused by financial limitations. With ASR, churches can now offer inclusive services to the Deaf and Hard of Hearing (D/HH) community. But it doesn’t stop there—when combined with Automatic Machine Translation (AMT), this powerful duo overcomes language hurdles, translating sermons into dozens of languages. Even the smallest congregations can now reach a global audience, sharing their message far and wide.
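For the technically curious, here is a rough sketch of how such a two-step pipeline can fit together using freely available open-source tools: a speech-recognition model produces a transcript, and a machine-translation call turns each segment into another language. The model choice, the sermon.mp3 file name, and the Spanish target language are illustrative assumptions, not a description of any particular vendor's system.

```python
# A minimal ASR + machine-translation sketch, assuming the open-source
# "openai-whisper" and "deep-translator" packages are installed.
import whisper
from deep_translator import GoogleTranslator

asr_model = whisper.load_model("base")        # small general-purpose ASR model
result = asr_model.transcribe("sermon.mp3")   # ASR step: audio -> English text segments

translator = GoogleTranslator(source="en", target="es")
for segment in result["segments"]:            # AMT step: each segment -> Spanish caption
    print(f'{segment["start"]:.1f}s  {translator.translate(segment["text"])}')
```

In a live setting the same two stages run continuously on short chunks of audio; the batch version above simply makes the division of labor between ASR and AMT easier to see.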

We previously explored the ethical and theological concerns with AI in the Church in our last blog post: The Role of AI in Enhancing Church Accessibility and Inclusion.

While human-generated captions and translations always offer the highest quality, ASR and AMT provide a cost-effective solution for churches and ministries of any size or budget. Imagine your sermon reaching the D/HH community, allowing for full participation, or sharing your message in various languages with a worldwide audience that might otherwise have been unreachable. AI-powered closed captioning and translation help make this a reality. ASR captions and translations are not only a technological advancement; they are tools for inclusivity and global outreach.

Churches aiming to make a significant impact can turn to AI-powered accessibility tools, once considered out of reach, for preaching and teaching. Practical uses of ASR include live captions for in-person services and events, captions and transcripts for online sermon streams, and, when paired with AMT, translated captions for multilingual congregations.

Aberdeen’s new ASR solution, developed with ministry in mind, employs robust AI engines and large language models to provide a powerful advantage in delivering Christian content. Each solution is carefully crafted to fit your specific ministry needs, providing high-quality captions at a fraction of the cost.

Discover how Aberdeen’s ASR solution offers a cost-effective approach to closed captioning & translation. Learn more here: Aberdeen ASR Closed Captioning.

This article was co-written with the help of both ChatGPT and Google Bard as a demonstration of the technology discussed below. You can also follow along with the recording below, read by Aberdeen's President, Matt Cook. Well, not exactly: the narration is Matt's voice, cloned by AI from a short sample clip of his speech.

Artificial Intelligence (AI) has revolutionized numerous industries, and its influence on language-related technologies is particularly remarkable. In this blog post, we will explore how AI is transforming closed captioning, language translation, and even the creation of cloned voices. These advancements not only enhance accessibility and inclusion but also have far-reaching implications for communication in an increasingly globalized world.

AI in Closed Captioning

Closed captioning is an essential feature for individuals who are deaf or hard of hearing, enabling them to access audiovisual content. Traditional closed captioning methods rely on human transcriptionists; however, AI-powered speech recognition algorithms have made significant strides in this field.

Using deep learning techniques, AI models can transcribe spoken words into text with steadily improving accuracy, providing real-time closed captioning. The results do not yet meet FCC guidelines for broadcast, but they are often good enough for situations where the alternative is no closed captions at all. These models continue to improve by training on large amounts of data from diverse sources. As a result, AI has made closed captioning more widely available, enabling individuals to enjoy online videos with greater ease.

Our team is working hard to develop and launch AberScribe, our new AI transcript application powered by OpenAI, in mid-2024. From any audio or video source file, the AberScribe app will create an AI-generated transcript that can be edited in our online transcript editor and exported into various caption formats. AberScribe will also include features for creating other AI-generated resources from that final transcript, such as summaries, glossaries of terms, discussion questions, interactive worksheets, and more; the possibilities are endless.
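As a small illustration of what exporting a transcript into a caption format involves, the sketch below converts a list of timestamped transcript segments into SubRip (SRT) captions. The segment structure shown here is a hypothetical example for explanation, not AberScribe's actual data model or API.

```python
# A minimal sketch: timestamped transcript segments -> SubRip (SRT) captions.
# The segment structure is a hypothetical example, not AberScribe's API.

def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 00:01:23,500."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """Build an SRT document from segments with 'start', 'end', and 'text' keys."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n"
            f"{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}\n"
            f"{seg['text'].strip()}\n"
        )
    return "\n".join(blocks)

segments = [
    {"start": 0.0, "end": 3.2, "text": "Good morning, and welcome."},
    {"start": 3.2, "end": 7.5, "text": "Please open your Bibles to John chapter one."},
]
print(segments_to_srt(segments))
```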

Sign up to join the waitlist and be one of our first users: https://aberdeen.io/aberscribe-wait-list/

AI-Driven Language Translation

Language barriers have long hindered effective communication between people from different linguistic backgrounds. However, AI-powered language translation has emerged as a game-changer, enabling real-time multilingual conversations and seamless understanding across different languages.

Machine Translation (MT) models, powered by AI, have made significant strides in accurately translating text from one language to another. By training on vast amounts of multilingual data, these models can understand and generate human-like translations, accounting for context and idiomatic expressions. This has empowered businesses, travelers, and individuals to engage in cross-cultural communication effortlessly.
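For readers who want to see what calling one of these models looks like, here is a brief sketch using an open-source Marian translation model through the Hugging Face transformers library. The specific model name and sample sentence are assumptions chosen for illustration, not a recommendation of any particular engine.

```python
# A minimal neural machine translation sketch with an open-source Marian model,
# assuming the "transformers" and "sentencepiece" packages are installed.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-es"   # English -> Spanish (illustrative choice)
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

text = ["For God so loved the world that he gave his one and only Son."]
batch = tokenizer(text, return_tensors="pt", padding=True)
generated = model.generate(**batch)          # the model translates whole sentences in context
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```

Because the model was trained on whole sentences rather than isolated words, it handles context and common idioms far better than word-by-word substitution would.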

In addition to written translation, AI is making headway in spoken language translation as well. With technologies like neural machine translation (NMT), AI systems can listen to spoken language, translate it in real time, and produce synthesized speech in the desired language. This breakthrough holds immense potential for international conferences, tourism, and cultural exchange.

Cloned Voices and AI

The advent of AI has brought about significant advancements in speech synthesis, allowing for the creation of cloned voices that mimic the speech patterns and vocal identity of individuals. While cloned voices have sparked debates regarding ethical use, they also present exciting possibilities for personalization and accessibility.

AI-powered text-to-speech (TTS) models can analyze recorded speech data from an individual, capturing their vocal characteristics, intonations, and nuances. This data is then used to generate synthetic speech that sounds remarkably like the original speaker. This technology can be immensely beneficial for individuals with speech impairments, providing them with a voice that better aligns with their identity.
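As a rough sketch of how reference-based synthesis works in practice, the example below uses the open-source Coqui TTS library, whose XTTS model can condition its output on a short reference recording. The model name, sample clip, and text are illustrative assumptions, and any real use should start with the recorded speaker's consent.

```python
# A minimal sketch of reference-based speech synthesis ("voice cloning") with the
# open-source Coqui TTS library. Model name, reference clip, and text are
# illustrative assumptions; always obtain the speaker's consent first.
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tts.tts_to_file(
    text="Welcome, and thank you for joining us today.",
    speaker_wav="speaker_sample.wav",   # short reference clip of the target voice
    language="en",
    file_path="cloned_greeting.wav",
)
```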

Moreover, cloned voices have applications in industries like entertainment and marketing, where celebrity voices can be replicated for endorsements or immersive experiences. However, it is crucial to navigate the ethical considerations surrounding consent and proper usage to ensure that this technology is used responsibly.

Conclusion

Artificial Intelligence continues to redefine the boundaries of accessibility, communication, and personalization in various domains. In the realms of closed captioning, language translation, and cloned voices, AI has made significant strides, bridging gaps, and enhancing user experiences. As these technologies continue to evolve, it is vital to strike a balance between innovation and ethical considerations, ensuring that AI is harnessed responsibly to benefit individuals and society as a whole.

Think about what you know about the English language. Alphabet letters combine to form words. Words represent different parts of speech (such as nouns, verbs, and adjectives). To convey an idea or thought, we string words together to form sentences, paying attention to grammar, style, and punctuation. Because we understand pronunciation and phonetics, we can read other languages that use the Latin alphabet even if we do not understand the meaning of the words. However, the Chinese language functions in an entirely different way. Chinese is a conceptual language. It relies on written characters (not letters and words) to express ideas and general concepts.

Have you stumbled upon a goldmine of budget-friendly or complimentary voice-over services at the foreign TV station broadcasting your program? When you reach out to your local voice dubbing provider to inquire about the costs of voice dubbing services, do you find yourself taken aback by the quotes you receive?

Well, don’t be, because it does not reflect the same two services.

Numerous evangelical television ministries have a global reach, often translating their programs through voice dubbing into the local languages. Typically, they bundle airtime and voice-over dubbing services into a comprehensive package, where the fees for voice-over are encompassed within a single rate. This setup can make voice dubbing appear as a seamless and cost-effective endeavor.

As their international viewership expands, these ministries might approach our team at Aberdeen for voice dubbing rates in a different language, only to experience sticker shock. "But the network in Russia, Italy, or China handles this for us at a fraction of the cost!" they might say. "Why is your pricing considerably higher?"

Insight into Voice Dubbing Operations at TV Stations

In these cases, TV stations might resort to using their in-house presenters to cut down on expenses instead of carefully selecting suitable voice talent. The dubbing is frequently done without lip synchronization, in a rapid, assembly-line process with no script translation beforehand. The resulting production often resembles real-time interpretation at a meeting rather than a carefully crafted post-production performance.

However, it's crucial to note that this isn't a universal practice, but rather a common occurrence based on our observations. Often, the ministries remain oblivious to this distinction or fail to recognize the significance of quality until they receive unfavorable feedback from their audience (which might not even reach the US-based client unless actively sought). Some dubbing efforts are so subpar that they render the content amusing to viewers.

Functioning of a Professional Voice-Over Dubbing Service

When you approach Aberdeen or any reputable voice-over dubbing facility to have your program dubbed in another language, a plethora of elements come into play to ensure a professional outcome.

Our services encompass, among others, full script translation before recording, careful casting of native voice talent, lip synchronization, and thorough post-production work.

All the services integrated into our holistic post-production voice dubbing process guarantee an outcome characterized by integrity and excellence. There's simply no room for compromise when aiming for such quality.

Ministries: Your Next Steps

Should a voice dubbing price appear exceptionally enticing, it's likely too good to be true, potentially leaving you with results that fall short of your expectations and requirements.

If your TV program's voice dubbing is currently handled by a foreign TV station at a minimal cost, we strongly advise having the voice-over work evaluated by an impartial native third party.

There's a common question: "Why would I need to pay for professional translation services if I have access to Google Translate?" However, there are numerous compelling reasons why Google Translate cannot replace the expertise of a native human speaker.

To begin, it's essential to understand how Google Translate operates. It works mechanically, much like a search engine conducting a web search: rather than whole phrases, it primarily deals with individual words, with little grasp of syntax, style, or grammar. Google Translate essentially dissects your sentence into words and attempts to replace each one with a translated counterpart. The outcome is often peculiar, disjointed, and occasionally humorous. It also struggles with proper names, units of measurement, currencies, and other technical equivalents; one amusing example is translating "I love Sweden" as "I love Canada." There's even a website that takes an English phrase, translates it into several languages using Google Translate, and then back into English, with hilarious results reminiscent of the childhood game "telephone." For instance, the famous quote "To be or not to be, that is the question" becomes "Or rather, it is a problem."
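If you want to see the "telephone" effect for yourself, the sketch below round-trips a sentence through several languages using the free deep-translator package. The language chain and sentence are arbitrary choices for illustration, and the wording usually drifts noticeably by the time it comes back to English.

```python
# A minimal round-trip ("telephone game") translation sketch, assuming the
# open-source "deep-translator" package is installed. The language chain is arbitrary.
from deep_translator import GoogleTranslator

phrase = "To be or not to be, that is the question"
chain = ["es", "ja", "fi", "en"]   # English -> Spanish -> Japanese -> Finnish -> English

source = "en"
for target in chain:
    phrase = GoogleTranslator(source=source, target=target).translate(phrase)
    source = target

print(phrase)   # often drifts noticeably from the original wording
```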

That said, Google Translate can be a handy tool when you're traveling in a foreign country and need to find a restroom, hail a taxi, or locate a hotel. However, when it comes to professional translation services, nothing can replace the expertise of native-speaking professionals who understand the nuances and intricacies of language. They have the ability to preserve the true meaning of the original text, ensuring that nothing is "lost in translation."