In today's world defined by constant digital engagement, younger generations increasingly rely on captions and subtitles to enhance their viewing experience. This trend, largely popularized by Gen Z and Millennials, isn’t just limited to streaming shows or watching social media content; it’s spilling over into live events, with a strong case for captions as a way to boost engagement and attendance in venues that historically may have overlooked them, such as churches.

Here’s a look at the data supporting this movement and how churches can use captioning to foster a more engaging environment.

Younger Generations and the Subtitle Revolution

Preply, a language learning platform, conducted a survey titled “Why America Is Obsessed with Subtitles” to explore the growing trend of subtitle usage among Americans. The study surveyed over 1,200 participants to understand how and why individuals use subtitles in their media consumption. The findings revealed that 50% of Americans watch content with subtitles most of the time, with younger generations, particularly Gen Z, showing the strongest preference for subtitle use.

This data reveals a generation that sees captions not as an add-on but as an essential part of the viewing experience. For churches, this could signal an opportunity: integrating captions into services may not only help with accessibility but also align with the viewing habits of younger generations.

Captioning Live Events: A Path to Higher Engagement

Photo of a hand holding a phone up at church, reading live captions during a sermon.

The impact of captions on in-person attendance is significant. A study from Stagetext revealed that 31% of people would attend more live events if captions were readily available, with younger people leading this interest: 45% of 18-25-year-olds would be more likely to attend events if they were captioned, compared to 16% of those over 56.

This enthusiasm for live captions reflects a shift in how younger generations want to consume live content. Captions at events enhance accessibility for everyone, regardless of hearing ability, and address concerns with acoustics or unclear speech, which often deter audiences. In the church context, offering captions during sermons, worship songs, or events could break down barriers that may otherwise prevent younger individuals from fully engaging.

Engaging a New Generation: How Captioning Can Help Churches Reconnect with Young Adults

Christian churches across the United States are increasingly challenged to capture the interest and attendance of younger generations, who show declining levels of religious affiliation and engagement. The Pew Research Center's 2019 article, "In U.S., Decline of Christianity Continues at Rapid Pace," highlights a significant decline in Christian affiliation among younger Americans. The data indicates that only 49% of Millennials (born between 1981 and 1996) identify as Christians, compared to 84% of the Silent Generation (born between 1928 and 1945).

With reports indicating a decline in church attendance in the U.S., many churches are seeking strategies to re-engage their congregations, especially young adults. Captions could be a powerful, practical solution.

Offering live captions during services could address several of these issues at once: accessibility for those with hearing loss, poor acoustics or unclear speech, and the caption-first viewing habits younger attendees already bring with them.

Moving Forward: A Call for Churches to Embrace Captioning

By understanding the viewing habits of younger generations, churches have the opportunity to create an environment that aligns with their engagement preferences. Embracing captioning technology for in-person services and online sermon streams not only makes services more accessible but can also foster deeper engagement, particularly among younger congregants who see captions as an essential part of their everyday experience.

As churches consider how best to adapt to changing times, incorporating live captions could be a powerful step toward renewing attendance and helping younger generations feel seen, heard, and included in the community. It’s a practical, meaningful solution that could not only enhance accessibility but help bridge generational gaps, allowing churches to resonate with the next generation and grow their community in an inclusive and modern way.

ASR CAPTIONING & TRANSCRIPTION SERVICES

In-person Captions & Translation

Transform your in-person church services with instant, real-time captioning & translation.

With our real-time captioning solution, you can ensure that everyone, regardless of hearing ability or language preference, can follow along with ease. Designed for seamless integration into your existing setup, our captions are easy to use and perfect for enhancing the inclusivity of your in-person worship experiences.
Photo of people at a worship service with an audio wave graphic symbolizing the use of ASR as a tool

Automatic Speech Recognition (ASR) is now making a substantial impact on how local churches connect with their communities, breaking down barriers once caused by financial limitations. With ASR, churches can now offer inclusive services to the Deaf and Hard of Hearing (D/HH) community. But it doesn’t stop there—when combined with Automatic Machine Translation (AMT), this powerful duo overcomes language hurdles, translating sermons into dozens of languages. Even the smallest congregations can now reach a global audience, sharing their message far and wide.

We previously explored the ethical and theological concerns with AI in the Church in our last blog post: The Role of AI in Enhancing Church Accessibility and Inclusion.

While human-generated captions and translations always offer the highest quality, ASR and AMT provide a cost-effective solution that can be utilized by churches and ministries of any size or budget. Imagine your sermon reaching the Deaf and Hard of Hearing (D/HH) community, allowing for full participation, or sharing your message in various languages to a worldwide audience that might otherwise have been unreachable. AI-powered closed captioning and translations help make this a reality. ASR captions and translations are not only a technological advancement; they are tools for inclusivity and global outreach.

Churches aiming to make a significant impact can turn to AI-powered accessibility tools, once considered out of reach, for preaching and teaching. Practical uses of ASR include live captions for in-person services, captions for online sermon streams, and machine-translated subtitles for multilingual congregations.

Aberdeen’s new ASR solution, developed with ministry in mind, employs robust AI engines and large language models to provide a powerful advantage in delivering Christian content. Each solution is carefully crafted to fit your specific ministry needs, providing high-quality captions at a fraction of the cost.


Discover how Aberdeen’s ASR solution offers a cost-effective approach to closed captioning & translation. Learn more here: Aberdeen ASR Closed Captioning.

Closed captioning serves as a powerful tool that extends its impact far beyond aiding the deaf and hard-of-hearing community. Its significance transcends age, abilities, and background, making it an invaluable resource for both educators and learners. In the digital age, closed captioning has emerged as a transformative resource, with research revealing that students, English language learners, and children with learning disabilities who watch programs with closed captioning turned on improve their reading skills, increase their vocabulary, and enhance their focus and attention.

The scholarly article, Closed Captioning Matters: Examining the Value of Closed Captions for All Students (Smith 231) states that “Previous research shows that closed captioning can benefit many kinds of learners. In addition to students with hearing impairments, captions stand to benefit visual learners, non-native English learners, and students who happen to be in loud or otherwise distracting environments. In remedial reading classes, closed captioning improved students’ vocabulary, reading comprehension, word analysis skills, and motivation to learn (Goldman & Goldman, 1988). The performance of foreign language learners increased when captioning was provided (Winke, Gass, & Sydorenko, 2010). Following exams, these learners indicated that captions lead to increased attention, improved language processing, the reinforcement of previous knowledge, and deeper understanding of the language. For low-performing students in science classrooms, technology-enhanced videos with closed captioning contributed to post-treatment scores that were similar to higher-performing students (Marino, Coyne, & Dunn, 2010). The current findings support previous research and highlight the suitability of closed-captioned content for students with and without disabilities.”

Reading Rockets, a national public media literacy initiative, provides resources and information on how young children learn and how educators can improve their students’ reading abilities. In the article Captioning to Support Literacy, Alise Brann confirms that “Captions can provide struggling readers with additional print exposure, improving foundational reading skills.”

She states, “In a typical classroom, a teacher may find many students who are struggling readers, whether they are beginning readers, students with language-based learning disabilities, or English Language Learners (ELLs). One motivating, engaging, and inexpensive way to help build the foundational reading skills of students is through the use of closed-captioned and subtitled television shows and movies. These can help boost foundational reading skills, such as phonics, word recognition, and fluency, for a number of students.”

Research clearly demonstrates that “people learn better and comprehend more when words and pictures are presented together. The combination of aural and visual input gives viewers the opportunity to comprehend information through different channels and make connections between them” (The Effects of Captions on EFL Learners’ Comprehension of English-Language Television Programs).

From bolstering reading skills, to enhancing focus and language comprehension, the benefits of closed captioning are numerous. We at Aberdeen Broadcast Services are committed to providing quality closed captions for television (TV) and educational programming.

Here is the public service announcement (PSA) we released in 2016 on local broadcast stations, emphasizing how closed captioning can enhance children's literacy skills.

This article was co-written with the help of both ChatGPT and Google Bard as a demonstration of the technology discussed in this article. You can also read along with Aberdeen's President, Matt Cook, in the recording below. But not really: this is Matt's voice, cloned by AI from a short sample clip of his speech.

Artificial Intelligence (AI) has revolutionized numerous industries, and its influence on language-related technologies is particularly remarkable. In this blog post, we will explore how AI is transforming closed captioning, language translation, and even the creation of cloned voices. These advancements not only enhance accessibility and inclusion but also have far-reaching implications for communication in an increasingly globalized world.

AI in Closed Captioning

Closed captioning is an essential feature for individuals who are deaf or hard of hearing, enabling them to access audiovisual content. Traditional closed captioning methods rely on human transcriptionists; however, AI-powered speech recognition algorithms have made significant strides in this field.

Using deep learning techniques, AI models can more accurately transcribe spoken words into text, providing real-time closed captioning. This is not up to the FCC guidelines for broadcast but is oftentimes good enough for other situations where the alternative is to have no closed captions at all. These models continuously improve their accuracy by analyzing large amounts of data and learning from diverse sources. As a result, AI has made closed captioning more accessible, enabling individuals to enjoy online videos with greater ease.

Our team is working hard to develop and launch AberScribe, our new AI transcript application powered by OpenAI, sometime in mid-2024. From any audio/video source file, the AberScribe app will create an AI-generated transcript that can be edited in our online transcript editor and exported into various caption formats. AberScribe will also have added features for creating other AI-generated resources from that final transcript. Resources like summaries, glossaries of terms, discussion questions, interactive worksheets, and many more - the possibilities are endless.

Sign up to join the waitlist and be one of our first users: https://aberdeen.io/aberscribe-wait-list/

AI-Driven Language Translation

Language barriers have long hindered effective communication between people from different linguistic backgrounds. However, AI-powered language translation has emerged as a game-changer, enabling real-time multilingual conversations and seamless understanding across different languages.

Machine Translation (MT) models, powered by AI, have made significant strides in accurately translating text from one language to another. By training on vast amounts of multilingual data, these models can understand and generate human-like translations, accounting for context and idiomatic expressions. This has empowered businesses, travelers, and individuals to engage in cross-cultural communication effortlessly.

In addition to written translation, AI is making headway in spoken language translation as well. By chaining speech recognition, neural machine translation (NMT), and speech synthesis, AI systems can listen to spoken language, translate it in real time, and produce synthesized speech in the desired language. This breakthrough holds immense potential for international conferences, tourism, and fostering cultural exchange.

Cloned Voices and AI

The advent of AI has brought about significant advancements in speech synthesis, allowing for the creation of cloned voices that mimic the speech patterns and vocal identity of individuals. While cloned voices have sparked debates regarding ethical use, they also present exciting possibilities for personalization and accessibility.

AI-powered text-to-speech (TTS) models can analyze recorded speech data from an individual, capturing their vocal characteristics, intonations, and nuances. This data is then used to generate synthetic speech that sounds remarkably like the original speaker. This technology can be immensely beneficial for individuals with speech impairments, providing them with a voice that better aligns with their identity.

Moreover, cloned voices have applications in industries like entertainment and marketing, where celebrity voices can be replicated for endorsements or immersive experiences. However, it is crucial to navigate the ethical considerations surrounding consent and proper usage to ensure that this technology is used responsibly.

Conclusion

Artificial Intelligence continues to redefine the boundaries of accessibility, communication, and personalization in various domains. In the realms of closed captioning, language translation, and cloned voices, AI has made significant strides, bridging gaps, and enhancing user experiences. As these technologies continue to evolve, it is vital to strike a balance between innovation and ethical considerations, ensuring that AI is harnessed responsibly to benefit individuals and society as a whole.

Open captions and closed captions are both used to provide text-based representations of spoken dialogue or audio content in videos, but they differ in their visibility and accessibility options.

Here's the difference between closed and open captions:

Open Captions

Open captions are permanently burned into the video image, so every viewer sees them and they cannot be turned off.

Closed Captions

Closed captions are delivered as a separate text track that the viewer can toggle on or off.

| Feature | Open Captions | Closed Captions |
| --- | --- | --- |
| Visibility | Permanently embedded in the video | Separate text track that can be turned on or off |
| Accessibility | Cannot be turned off | Can be turned on or off by the viewer |
| Applications | Wide audiences, noisy environments | Diverse audiences, compliance with accessibility regulations |
| Creation | Added during video production | Generated in real-time, embedded during post-production, or uploaded as a sidecar file |

Both open and closed captions serve the purpose of making videos accessible to individuals who are deaf or hard of hearing, those who are learning a new language, or those who prefer to read the text alongside the audio.

The choice between open or closed captions depends on the specific requirements and preferences of the content creators and the target audience.
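To make the sidecar-file idea concrete, here is a minimal sketch (our illustration of a typical workflow, not a description of any specific product) that builds a SubRip (.srt) sidecar file from a list of timed caption cues:

```python
def srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as an SRT timestamp: HH:MM:SS,mmm."""
    total_ms = round(seconds * 1000)
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def build_srt(cues):
    """Turn a list of (start_sec, end_sec, text) cues into SRT file contents."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, start=1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}")
    return "\n\n".join(blocks) + "\n"

# Two short example cues; the result would be saved next to the video file.
cues = [
    (0.0, 2.5, "Good morning, everyone."),
    (2.5, 5.0, "Please open your bulletins."),
]
print(build_srt(cues))
```

Uploading the resulting .srt alongside the video is all most platforms need to offer a caption track the viewer can switch on or off.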

In the July 2021 release of Premiere Pro, Adobe introduced its artificial intelligence (AI) powered speech-to-text engine to help creators make their content more accessible to their audiences. The toolset lets users edit, stylize, and export captions in all supported formats straight from the sequence timeline of a Premiere Pro project. A three-step process of auto-transcribing, generating, and stylizing captions, all within a platform already familiar to its users, delivers a seamless experience from beginning to end. But how accurate is the final product?

Today, AI captions, at their best, have an error rate of 5-10% - a big improvement over the roughly 20% error rate (80% accuracy) we saw just a few years ago. High accuracy is crucial for the deaf and hard-of-hearing audience, as each error adds to the possibility of confusing the message. To protect all audiences that rely on captioning to understand programming on television, the Federal Communications Commission (FCC) set out a detailed list of quality standards in 2015 that all broadcast captions must meet. Preceding those standards, the Described and Captioned Media Program (DCMP) published its Captioning Key manual over 20 years ago, and it has since been a valuable reference for captioning both entertainment and educational media targeted to audiences of all age groups. Simply having captions present on your content isn’t enough; they need to be accurate and best replicate the experience for all audiences.
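The 5-10% figure refers to word error rate (WER), the standard accuracy metric for speech recognition. As a rough illustration of how it is computed (a minimal sketch, not a production scoring tool), WER is the word-level edit distance between a reference transcript and the ASR output, divided by the length of the reference:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference word count."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

reference = "it is something you have never seen in your life"
hypothesis = "it is something you are never seen in your life"
print(f"WER: {word_error_rate(reference, hypothesis):.1%}")  # 1 substitution in 10 words
```

A single wrong word in a ten-word sentence already puts a clip at 10% WER, which is why "good enough" figures can still mean a noticeable error every few caption cells.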

Adobe’s speech-to-text engine has been one of the most impressive that our team has seen to date, so we decided to take a deeper look at it and run some tests. We tasked our most experienced Caption Editor with using Adobe’s auto-generated transcript to create & edit the captions to meet the quality standards of the FCC and the deaf and hard of hearing community on two types of video clips: a single-speaker program and one with multiple speakers. Our editor used our Pop-on Plus+ caption product for these examples, which are our middle-tier quality captions that fulfill all quality standard requirements but are not always 100% free of errors.

Did using Adobe’s speech-to-text save time, or did it create more work in the editing process than needed? Here’s how it went…

In-depth comparison documents that evaluate the captions cell-by-cell are available for download here:

Single Speaker Clip

In this example, we used the perfect scenario for AI: clear audio, a single speaker at an optimal words-per-minute (WPM) speaking rate, and no sound effects or music.

The captions contained the following issues that would need to be corrected by Caption Editor:

Here’s the clip with Adobe’s speech-to-text captions overlaid on the top half of the video, and ours on the bottom half.

Multiple Speaker Clip

For the next clip, we went with a more realistic example of television programming with multiple speakers, an area where AI is known to struggle, particularly with identifying who is speaking. This clip also features someone with a pronounced accent, commentators speaking over one another, and proper names of athletes – all of which our editors take the time to research and understand.

The same errors detailed in the single-speaker example are present throughout, among the other difficulties we expected it to have. In fact, there were so many errors that our editor was unable to use the transcript from Adobe and started from the beginning using our own workflow.

Here’s a sample of the first 9 cells of captions with what Adobe transcribes in the first column, notes from our Caption Editor, and how it should look.

(Caption line breaks within a cell are shown as “/”.)

| Adobe’s Automated SRT Caption File | Issue | Formatted by Aberdeen |
| --- | --- | --- |
| something / you are never seen in your life, correct? | No speaker ID. | (Pedro Martinez) / It's something you have / never seen in your life, |
| | “Correct” is spoken by new speaker. | (Matt Vasgersian) / Correct! |
| So it's. | Missing text. | So it's--so it's MVP / of the year! |
| So we're all watching something / different. OK | | (Pedro) / We're all watching / something different. |
| He gets the MVP. | | Okay, he gets the MVP. |
| I'd be better off. | Completely misunderstood music lyrics. | ♪ Happy birthday to you ♪ |
| Oh, you, you guys. | | (Matt) / You guys. |
| Let me up here to dove into the opening / night against the Hall of Fame. | Merged multiple sentences together. | Just left me up here to die. / You left me up here to die / against the hall of famer. |

Take a look at the clip. Again, with Adobe's speech-to-text on the top and Aberdeen on the bottom.

In-depth comparison documents that evaluate the captions cell-by-cell are available for download here:

The Verdict

Overall, the quality of the auto-generated captions exceeded expectations, and we found them to be in the top tier of speech-recognition engines available. The timing and punctuation were particularly impressive. However, when doing a true comparison to the captioning work that we would consider acceptable, AI does not meet Aberdeen’s broadcast quality standard.

Aberdeen's post-production Caption Editors are detail-oriented and grammar-savvy, and always strive to portray every element of the program with 100% accuracy so that the viewer misses nothing. For our most experienced Caption Editor, editing the single-speaker clip took a 5:1 time ratio; that is, every minute of video took five minutes to clean up the transcript and captions. Assuming your team is educated in the proper timing of caption cells, line breaks, and grammar, a 30-minute program may take over 2.5 hours to bring up to standards with a usable transcript. In the second example, the transcript was unusable and would have taken more time to clean up than it would to transcribe from scratch. Double that timeline.

Consider all of the above when using this service. Do you have the time and resources to train your staff to know how to edit auto-generated captions and get them up to the appropriate standards? How challenging may your content be for the AI? Whenever and however you make the choice, make sure you deliver the best possible experience to your entire audience.

Closed captioning is an essential aspect of modern media consumption, bridging the gap of accessibility and inclusivity for diverse audiences. Yet, despite its widespread use, misconceptions about closed captioning persist. In this article, we delve into the most prevalent myths surrounding this invaluable feature, shedding light on the truth behind closed captioning's capabilities, impact, and indispensable role in enhancing the way we interact with video content.

Let’s debunk these common misunderstandings about closed captioning and gain a fresh perspective on the far-reaching importance of closed captioning in today's digital landscape.

Closed captioning is only for the deaf and hard of hearing

While closed captions are crucial for people with hearing impairments, they benefit a much broader audience. They are also helpful for people learning a new language, those in noisy environments, individuals with attention or cognitive challenges, and viewers who prefer to watch videos silently.

Closed captioning is automatic and 100% accurate

While there are automatic captioning tools, they are not always accurate, especially with complex content, background noise, or accents. Human involvement is often necessary to ensure high-quality and accurate captions.

Captions are always displayed at the same place on the screen

Some formats, like SCC, support positioning and allow captions to appear in different locations. However, most platforms use standard positioning at the bottom of the screen.
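For illustration, the web-native WebVTT format also exposes positioning through cue settings; the cue below (timing and values invented for the example) lifts a caption toward the top of the frame so it does not cover on-screen graphics:

```
WEBVTT

1
00:00:00.000 --> 00:00:02.500 line:10% align:center
Captions can be repositioned so they
do not cover on-screen graphics.
```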

Captions can be added to videos as a separate file later

While it's possible to add closed captions after video production, it's more efficient and cost-effective to incorporate captioning during the production process. Integrating captions during editing ensures a seamless viewing experience.

Captions are only available for movies and TV shows

Closed captioning is essential for television and films, but it's also used in various other video content, including online videos, educational videos, social media clips, webinars, and live streams.

Captioning is a one-size-fits-all solution

Different platforms and devices may have varying requirements for closed caption formats and display styles. To ensure accessibility and optimal viewing experience, captions may need adjustments based on the target platform.

All countries have the same closed captioning standards

Captioning standards and regulations vary between countries, and it's essential to comply with the specific accessibility laws and guidelines of the target audience's location.

Closed captioning is expensive and time-consuming

While manual captioning can be time-consuming, there are cost-effective solutions available, including automatic captioning and professional captioning services. Moreover, the benefits of accessibility and broader audience reach often outweigh the investment.

In summary, closed captioning is a vital tool for enhancing accessibility and user experience in videos. Understanding the realities of closed captioning helps ensure that content creators and distributors make informed decisions to improve inclusivity and reach a broader audience.

Earlier this month, a team of Aberdeen caption editors took an educational field trip to the cinema! We were excited to check out Sony’s closed captioning glasses offered at Regal Cinemas. We decided to watch Hillsong’s “Let Hope Rise” documentary.

Our captioning team awaiting their showtime. Left to right: Nathan, Austin, Christina, and Brittany

We were surprised at how easy the process was—you simply need to check that accessibility devices are available for the movie you want to see. After you purchase your tickets, head over to guest services and request the glasses. Unfortunately, there were only 4 pairs of glasses available so we had to share.

The team sporting the Sony's Access Glasses. Left to right: Christina, Brittany, Nathan, and Flora

The glasses fit like heavy 3-D glasses. They are attached to a small receiver box where you can choose mid, near, or far distance and even change the brightness of the text.  (You can also choose different languages if available and there is a headphone output if you desire described audio as well.)

Christina's POV through Sony's Access Glasses

The captions appear in bright green font and “move” on the screen depending on where you look or how you tilt your head (which took a little getting used to).  This technology is wonderful for the deaf and hard-of-hearing community and is likely going to encourage these viewers to choose Regal over other theaters. We had a blast evaluating the closed captions as we watched and were all pretty impressed with the movie theater glasses.

The field trip was even more rewarding for our team because of our longstanding relationship with the Hillsong team. The influx of content due to the June 1st Hillsong Channel launch has exposed more of our captioners to their productions and, along the way, created a few more super-fans of their work. They all jumped at the opportunity to check out their new theatrical release.

Here’s the trailer for the documentary...

Think about what you know about the English language. Alphabet letters combine to form words. Words represent different parts of speech (such as nouns, verbs, and adjectives). To convey an idea or thought, we string words together to form sentences, paying attention to grammar, style, and punctuation. Because we understand pronunciation and phonetics, we can read other languages that use the Latin alphabet even if we do not understand the meaning of the words. However, the Chinese language functions in an entirely different way. Chinese is a conceptual language. It relies on written characters (not letters and words) to express ideas and general concepts.

One of the main goals of every producer is to try to reach the maximum amount of viewers every time their program airs. Apart from engaging content, time slots, and targeting the right regions, there is one simple thing EVERY producer can do. In this article, we will discuss why including Spanish captions is so important, how they work, and who is doing it.

Know Your Audience

First things first: Who is your audience? Perhaps it is as broad as every American across the nation.  But do you know who they are? Can they understand your program?

Did you know that, according to the United States Census Bureau, the U.S. has the second-largest Hispanic population in the world, just behind Mexico [4]? That means there are more Spanish speakers in the U.S. than in Argentina or Spain!

If you live in the United States, you are among 54 million Hispanic people, of whom 38.3 million speak Spanish at home! That is 17% of the entire United States population [4].

And get this… the projected Hispanic population of the United States in 2060 is 128.8 million, which would be 31% of the nation’s population [4]!

Are you taking into consideration this huge audience with your programming? Have you thought about how many more people you could reach with your national TV broadcast, web videos, or DVD sales if you localized your programming with Spanish closed captioning, subtitles, or Spanish voice dubbing? Ministries in the know, like In Touch Ministries, have been doing this for years. Learn from the leaders.

The Secret: Experienced Broadcasters Use CC3

Spanish captions cc3

The simple truth is this: By offering captions in various languages, you automatically reach more viewers. Statistically, Spanish is the second most-used language in the United States [1] and there are more Spanish speakers in the U.S. than speakers of Chinese, French, German, Italian, Hawaiian, and the Native American languages combined. Spanish is the best place to start localizing your programming, and there is no faster, more cost-effective way than to utilize CC3.

If you are broadcasting in English, chances are you have already heard or thought about broadcasting Spanish captions. Some of you may already broadcast it via CC2, so why think about using CC3 [2]? Although broadcasting via CC1 and CC2 works well, both of these channels are embedded in field 1. By choosing CC3, which is embedded in field 2, you are able to provide the maximum bandwidth and allow for more accurately timed captions in both languages.

Also, in order to avoid bandwidth problems with early caption decoders [3], the U.S. Federal Communications Commission (FCC) recommends that bilingual programming be broadcast via CC3. Many Spanish television networks, such as Univision and Telemundo, provide English captions for many of their Spanish programs on CC3. The standard nowadays is to broadcast the original language's captions on CC1 and the alternative language's captions on CC3.
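The channel-to-field layout behind this advice comes from the CEA-608 standard: CC1 and CC2 share field 1, while CC3 and CC4 share field 2. A small sketch of that mapping:

```python
# CEA-608 carries four caption channels on line 21 of an NTSC signal,
# split across the two interlaced fields.
CEA_608_FIELDS = {
    "CC1": 1,  # primary-language captions (field 1)
    "CC2": 1,  # shares field 1 bandwidth with CC1
    "CC3": 2,  # alternate-language captions (field 2)
    "CC4": 2,  # shares field 2 bandwidth with CC3
}

def shares_bandwidth(channel_a: str, channel_b: str) -> bool:
    """Two channels contend for bandwidth when they ride in the same field."""
    return CEA_608_FIELDS[channel_a] == CEA_608_FIELDS[channel_b]

# English on CC1 + Spanish on CC2 squeeze into one field;
# English on CC1 + Spanish on CC3 each get a full field to themselves.
print(shares_bandwidth("CC1", "CC2"))  # True
print(shares_bandwidth("CC1", "CC3"))  # False
```

This is why pairing CC1 with CC3, rather than CC2, leaves each language the full bandwidth of its own field and allows more accurately timed captions.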

All Ministries Should Consider Spanish Captions

In the Christian broadcasting industry, many ministries see the value of including Spanish captions. Take In Touch Ministries which has implemented the use of CC3 to offer Spanish captions for their English program, In Touch with Dr. Charles Stanley. This has allowed them to provide high-quality Spanish captions to their viewers across the country, and gain viewership with their message.

Any ministry investing in a national broadcast should be captioning not only in English but in Spanish too. Spanish captions can increase viewership by approximately 20%, and they cost a fraction of what you pay to broadcast your programming. The additional cost is minimal and usually discounted when English and Spanish captioning are done in tandem with the same captioning company.

Observe the languages spoken in your community and you’ll find English is most definitely not the only language understood by your neighbors, and it also isn't always the primary language of your national viewers. Give Spanish captioning a try!

If you have any questions regarding Spanish captioning via CC3 or would like to see what it would cost to add Spanish captions to your video programming, contact us.

More astounding facts about the Hispanic population in the U.S. can be found here: United States Census Bureau.

Sources:

[1] http://en.wikipedia.org/wiki/Spanish_language_in_the_United_States

[2] http://www.captionsinc.com/what.asp

[3] http://en.wikipedia.org/wiki/Closed_captioning

[4] http://www.census.gov/newsroom/facts-for-features/2014/cb14-ff22.html