Does it seem like working in the television broadcast industry is a bit like learning a second language?  It certainly can feel that way for many content producers and video engineers dealing specifically with closed captioning and file compression.  Exposure to foreign words and concepts such as metadata usually occurs on the floors of industry trade shows, like NAB, and in many a master control room across the nation and globe.  Hmmm… “metadata”… that sounds an awful lot like Greek, doesn’t it?  Actually, “meta” is a Greek prefix meaning “after” or “adjacent,” among other things.  It makes sense, then, that the word “meta” was adopted by the TV industry to describe the area of a digital file containing data about the audio and video, but not the actual audio or video information itself.  Rather, metadata is simply complementary information pertaining to the audio and video tracks of the file.  This additional data comes “after” the A/V tracks, or sits “adjacent” to them, in a sense.
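If it helps to picture it, here is a toy sketch (in Python, with field names invented purely for illustration) of how the audio/video essence and the metadata that describes it can sit side by side inside one file:

```python
# Purely illustrative: a made-up picture of a broadcast media file.
# The "essence" entries stand in for the actual picture and sound, while
# the metadata holds complementary information such as closed captions.
media_file = {
    "video_track": "<compressed picture essence>",
    "audio_tracks": ["<channel 1 essence>", "<channel 2 essence>"],
    "metadata": {
        "timecode_start": "01:00:00:00",
        "closed_captions": "<EIA-608/708 caption data>",
    },
}
```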

For instance, since captioning can be “closed” (hidden until the viewer turns it on, rather than “open” and always visible), captioning is technically not part of the audio or video, but is additional metadata that complements the rest of the file.  With traditional analog video, this extra information would be encoded onto Line 21 of the VBI (vertical blanking interval) as part of the EIA-608 captioning standard.  Hard to imagine, but the digital caption encoding protocol (EIA-708) is even more complex.  It is not enough to simply put captions into the roughly equivalent Line 9 or VANC (vertical ancillary) portion of a digital file, because the location of captions within this VANC area depends on what file format the producer is required to send to each individual television station or network.
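To make the byte level a little less abstract, here is a minimal, purely illustrative Python sketch of one EIA-608 convention: every caption byte carries seven data bits plus an odd-parity bit.  The control-code pair shown (“Resume Caption Loading” on caption channel 1) is 0x14 0x20, which becomes 0x94 0x20 once parity is applied.  EIA-708 then wraps caption data of this kind into ancillary-data packets in the VANC rather than onto Line 21.

```python
# Illustrative only: EIA-608 sends caption data as byte pairs, where each
# byte is 7 data bits plus an odd-parity bit in the most significant bit.

def with_odd_parity(byte7: int) -> int:
    """Return the 7-bit caption byte with its odd-parity bit set in bit 7."""
    ones = bin(byte7 & 0x7F).count("1")
    parity_bit = 0x80 if ones % 2 == 0 else 0x00  # make the total 1-bit count odd
    return (byte7 & 0x7F) | parity_bit

# "Resume Caption Loading" control code for caption channel 1 (0x14 0x20).
rcl_pair = [with_odd_parity(b) for b in (0x14, 0x20)]
print([hex(b) for b in rcl_pair])  # ['0x94', '0x20']
```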

To complicate matters even further, not all TV stations use the same file formats and on-air play servers to air video programming, unlike in the analog days, when BetaCam and DigiBeta tapes were the commonly accepted formats.  Some of the trouble associated with creating files for different station on-air play servers is that each play server uses a different proprietary file format.  Each of these file formats, such as MXF, LXF, and GXF, among many others, looks for captioning data in a unique, format-specific location within the metadata area of the file.  So, unless you are transcoding files with this in mind, closed captioning stands a good chance of getting lost in translation.
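As a rough illustration of the kind of spot check that implies, the sketch below assumes the ffprobe tool (part of ffmpeg) is available and that the build in use reports a “closed_captions” flag on video streams carrying embedded 608/708 data; the filename is hypothetical, and this is no substitute for a proper QC pass on the actual target play server.

```python
import json
import subprocess

# Ask ffprobe (assumed to be installed) to describe the streams in a
# transcoded delivery file, then report whether the video stream still
# claims to carry embedded closed captions.
result = subprocess.run(
    ["ffprobe", "-v", "error", "-show_streams", "-of", "json", "delivery_file.mxf"],
    capture_output=True, text=True, check=True,
)
for stream in json.loads(result.stdout).get("streams", []):
    if stream.get("codec_type") == "video":
        has_cc = bool(stream.get("closed_captions", 0))
        print(f"Video stream {stream.get('index')}: embedded captions detected = {has_cc}")
```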

More on file ingest and quality issues in future entries.  In the meantime, please see this link for more on 608 vs. 708 captioning standards.


This blog article was written by Steve Holmes, Sales Engineer for Aberdeen Broadcast Services.

Sony Electronics and Aberdeen Captioning along with software developer CPC have joined forces to develop the first file-based closed-captioning system that maximizes the benefits of Sony’s XDCAM HD422 tapeless technology. The new workflow uses Sony’s PDW-HD1500 optical deck to make the process more efficient, faster and more flexible.

“Because the XDCAM system is file-based, we’re able to do our work in a much more refined and streamlined way,” said Matt Cook, President of Aberdeen Captioning. “Now, once someone is done with their XDCAM edit, we take their file, caption directly onto that file, and then place it back onto the disc. We’ve eliminated the need to go through a closed-captioning encoder—which can cost up to $10,000—therefore eradicating the requirement to do real-time play-out.”

According to Cook, clients—which include a range of broadcast networks, groups and independent producers—benefit from faster turnaround times and a more cost- and time-efficient process than previous methods.

“The primary benefit for clients is that they can keep their file in its original form, and send it to us on a hard drive, via FTP site, or on a disc,” he said. “Once we put the captioning data back in the video file, we can then return it to the client in the format of their choice.”

The PDW-HD1500 deck is designed for file-based recording in studio operations. A Gigabit Ethernet data drive allows it to write any file format from any codec onto the optical disc media, and it also makes handling either SD or HD content much easier.

“This deck is perfect for applications like closed captioning, where turnaround time is often critical and multi-format flexibility is a key,” said Wayne Zuchowski, group marketing manager for the XDCAM system at Sony Electronics.

Cook added, “We can handle any format without a problem. That type of capability and functionality is very important to us because as a captioning company we’re required to deliver a finished product in any format a client requires.”

When Aberdeen receives content from a client, the company first converts it to a smaller “working file,” for example a Windows Media or QuickTime file, which is used to do the transcribing, captioning and timing.
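As a loose sketch of that first step, the example below assumes ffmpeg is installed and shows how a small working file might be generated from a full-resolution MXF master; the filenames and settings are hypothetical, not a description of Aberdeen’s actual tooling.

```python
import subprocess

# Create a small, easily scrubbed "working file" (proxy) from a
# full-resolution master, assuming ffmpeg is on the PATH.
SOURCE = "program_master.mxf"       # hypothetical client-supplied master
WORKING_COPY = "program_proxy.mp4"  # small file used for transcription and timing

subprocess.run(
    [
        "ffmpeg",
        "-i", SOURCE,
        "-vf", "scale=640:-2",   # shrink the picture for quick playback
        "-c:v", "libx264",       # widely playable H.264 proxy
        "-crf", "28",            # modest quality is fine for a working copy
        "-c:a", "aac",
        "-b:a", "96k",
        WORKING_COPY,
    ],
    check=True,
)
```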

“Once the captioning work is done, we marry the original MXF XDCAM file and our captioned data file through our MacCaption software,” Cook said. “With the press of a button, both files are merged, and we can drag and drop it back onto the disc and send out, or FTP it to a client and they can drag and drop onto a disc.”

The Sony and Aberdeen joint captioning system will be on display at NAB in Sony’s exhibit, C11001, Central Hall, Las Vegas Convention Center.

Article written by Tom Di Nome of Sony Electronics

Copyright notice: 

© Sony Electronics & Aberdeen Captioning, Inc. 2009.

This article can be freely reproduced under the following conditions:

a) that no economic benefit be gained from the reproduction

b) that all citations and reproductions carry a reference to this original publication online at http://www.abercap.com/blog