It’s been twelve years since the FCC ushered in a new era of closed captioning.

On July 2, 2002, the Federal Communications Commission mandated that all digital televisions include an EIA-708 caption decoder, giving viewers new ways to change the captions' font, color, and size to suit their preferences, an advance in the captioning world comparable to television's leap from monochrome to color tube sets.

In addition to these text controls, EIA-708 supports up to eight caption windows with fewer constraints than EIA-608, the original (and primitive) closed captioning standard of the analog era. These windows provide added freedom when positioning captions at a specific location, which helps when a viewer wants to move captions away from graphics on screen.

For more information about the differences between 608 and 708 captions, check out: The Basics of 608 vs. 708 Captions.

Does it seem like working in the television broadcast industry is a bit like learning a second language?  It certainly can feel that way for many content producers and video engineers dealing with closed captioning and file compression.  Exposure to foreign-sounding words and concepts like "metadata" usually occurs on the floors of industry trade shows, like NAB, and in many a master control room across the nation and globe.  Hmmm… "metadata"… that sounds an awful lot like Greek, doesn't it?  Actually, "meta" is a Greek prefix meaning "after" or "adjacent," among other things.  It makes sense, then, that the TV industry adopted the word to describe the area of a digital file containing data about the audio and video, but that is not the actual audio or video information in and of itself.  Rather, metadata is simply complementary information pertaining to the audio and video tracks of the file.  This additional data comes "after" the A/V tracks, or sits "adjacent" to them, in a sense.

For instance, since captioning can be "closed" (hidden rather than "open," or always visible), captioning is technically not part of the audio or video; it is additional metadata that complements the rest of the file.  With traditional analog video, this extra information was encoded onto Line 21 of the VBI (vertical blanking interval) as part of the EIA-608 captioning standard.  Hard to imagine, but the digital caption encoding protocol (EIA-708) is even more complex.  It is not enough to simply put captions into the somewhat equivalent Line 9 or VANC (vertical ancillary data) portion of a digital file, because the exact location of captions within the VANC area depends on which file format the producer is required to send to each individual television station or network.
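To make the analog side of this concrete: EIA-608 delivers two caption bytes per video frame on Line 21, and each byte carries seven data bits plus one odd-parity bit. A minimal Python sketch of that decoding step follows; this is a simplified illustration only, not a full decoder, since real decoders also handle control codes and EIA-608's remapped special characters.

```python
def strip_odd_parity(byte):
    """Return the 7-bit value if the byte has odd parity, else None.

    EIA-608 transmits each character as 7 data bits plus one
    odd-parity bit; a failed parity check signals a corrupted byte.
    """
    if bin(byte).count("1") % 2 == 1:  # odd number of set bits
        return byte & 0x7F
    return None


def decode_608_pair(b1, b2):
    """Decode one EIA-608 byte pair into printable text.

    Simplified: bytes in the printable ASCII range (0x20-0x7E) are
    treated as plain ASCII; control codes and failed-parity bytes
    are skipped rather than interpreted.
    """
    chars = []
    for b in (b1, b2):
        value = strip_odd_parity(b)
        if value is not None and 0x20 <= value <= 0x7E:
            chars.append(chr(value))
    return "".join(chars)
```

For example, the byte 0xC8 is "H" (0x48) with its parity bit set, and 0xE9 is "i" (0x69) with its parity bit set, so `decode_608_pair(0xC8, 0xE9)` yields `"Hi"`.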

To complicate matters even further, not all TV stations use the same file formats and on-air play servers to air video programming, unlike the analog days, when BetaCam and DigiBeta were the commonly accepted tape formats.  Part of the trouble with creating files for different station play servers is that each server uses a different proprietary file format.  Each of these formats, such as MXF, LXF, and GXF, among many others, looks for captioning data in a unique, format-specific location within the metadata area of the file.  So unless you are transcoding files with this in mind, closed captioning stands a good chance of getting lost in translation.
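The container problem can be pictured as a simple lookup: before a transcoder can carry captions through, it has to know where each format keeps its caption data. Here is a hypothetical Python sketch; the MXF entry reflects the SMPTE 436M ancillary-data convention, while the GXF and LXF descriptions are placeholders rather than spec references, so consult each format's documentation for the real locations.

```python
from pathlib import Path

# Illustrative mapping only. The MXF entry reflects SMPTE 436M;
# the GXF and LXF entries are placeholders, not spec citations.
CAPTION_LOCATION = {
    ".mxf": "SMPTE 436M ancillary-data track",
    ".gxf": "format-specific ancillary/user data area",
    ".lxf": "format-specific ancillary/user data area",
}


def caption_location(filename):
    """Report where a transcoder should look for caption data,
    based on the container implied by the file extension."""
    ext = Path(filename).suffix.lower()
    return CAPTION_LOCATION.get(
        ext, "unknown container; captions may be dropped"
    )
```

A transcode pipeline that skips this per-format step is exactly how captions get "lost in translation": the A/V tracks convert cleanly while the caption metadata is left behind in a location the target format never reads.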

More on file ingest and quality issues in future entries.  In the meantime, please see this link for more on 608 vs. 708 captioning standards.


This blog article was written by Steve Holmes, Sales Engineer for Aberdeen Broadcast Services