Many viewers in the United States who enjoy foreign cinema rely on subtitles to follow what is happening on screen. However, many films and other programs use dubbing instead: dialogue recorded in a different language that is either layered over or used in place of the original actor's speech. Dubbing lets viewers follow the story without having to read subtitles, and in many countries, and for many viewers, it is the preferred option. Our clients often choose to dub their original English programming into other languages to reach a wider global audience.
I am frequently asked who manages the whole voice dubbing process, and many of those questions concern the role of the director. A voice-over dubbing team typically includes a director, voice actors, an engineer, a producer, and often the client as well. Before the recording session, the native-language director reviews the material thoroughly, identifies sections of the script that might pose challenges, and raises them before recording begins. The producer is briefed on the project's nuances and expectations and then facilitates the session alongside the director.
The voice actors deliver the performances. The director has authority over the script and makes sure the guidelines are clear to the voice actors and the engineer. The director also acts as an interpreter between the producer, the client, and the voice actors, particularly when the voice actors are not proficient in English. The director is responsible for ensuring that the actors deliver their lines accurately, with proper intonation, pronunciation, articulation of specific words, and correct rendering of proper nouns, all while preserving the style of the original language. The director may propose script changes, correct mistakes made by the voice actors, and suggest re-reads when the producer or client wants a different interpretation.
In short, the director is pivotal to the quality of the voice-over because of their familiarity with the original language. This team works as a cohesive unit to produce top-tier dubbing, whether for full-length features, corporate training videos, promotional content, educational series, or anything else.
Looking for voice dubbing, captioning, or file delivery services? Click here to send us a note about how AberLingo, Aberdeen Broadcast Services' Languages department, might be able to help you.
On Wednesday, March 13, Senator Tom Harkin introduced two bills that would expand access to closed captioning and video description in both movie theaters and airplanes. Harkin is the Senate sponsor of the Americans with Disabilities Act (ADA). According to Harkin, “More than two decades have passed since the enactment of ADA, and in that time we have seen a transformation of our physical landscape…however we still have more to do. These bills will allow Americans with visual or hearing impairments to enjoy going to the movies and watching in-flight entertainment, through captioning and video description, just as they can at home.”
The Captioning and Image Narration to Enhance Movie Accessibility (CINEMA) Act would require movie theaters to make captioning and video description available for all films at all showings. The Air Carrier Access Amendments Act would require that airlines make captioning and video description available for programming that is available in-flight for passengers.
For more information, visit this page.
Aberdeen creates a successful workflow for the transfer of long-form HD programs over the internet even at connection speeds under 1 Mb/s.
Transferring files over the internet is hardly a new endeavor; however, many parts of the country still have limited access to high-speed bandwidth. One client, Power Walk Ministries in Houston, TX, faced exactly this dilemma. A weekly HD program needed to go through post-production, be captioned, and then be delivered to a regional Fox affiliate within a three-day window, which left little hope for shipping and even less room in the budget for HD tape costs.
Quite a few hurdles needed to be overcome to ensure that Power Walk's long-form (28:30) program could be transferred to Aberdeen for captioning and still make it to the station on time. A large part of the burden would be placed on Aberdeen's AberFast file delivery service.
AberFast operates on the User Datagram Protocol (UDP), which maintains consistent transfer speeds well above what FTP or HTTP offers. This automated, managed file transfer solution provides bit-for-bit verification (MD5) and e-mail notification, ensuring that late-night deliveries arrive successfully well past the hours when shipping is even an option. Compounding the delivery difficulties was the outdated infrastructure of the neighborhood where the client's studio was located: upload speeds averaged less than 1 Mb/s. Even an SD-DV file at that speed could take over 15 hours to transfer, so significant compression was needed to move a file within the necessary amount of time. Aberdeen's broadcast solutions team designed a custom compression template for the client to ensure that every export would be as compressed as possible without sacrificing the visual or audio quality of the program.
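To put numbers on that constraint, here is a minimal sketch of the transfer-time math in Python. The file sizes are illustrative approximations; only the sub-1 Mb/s upload speed and the roughly 2 GB compressed target come from this project:

def transfer_hours(size_gb, upload_mbps):
    """Hours needed to move size_gb gigabytes over an upload_mbps megabit-per-second link."""
    megabits = size_gb * 8 * 1024   # 1 GB = 8,192 megabits
    return megabits / upload_mbps / 3600.0

# A roughly 6 GB SD-DV master vs. the ~2 GB custom-compressed export, at sub-1 Mb/s uploads:
for size_gb, label in [(6.0, "SD-DV master"), (2.0, "compressed export")]:
    for mbps in (1.0, 0.8):
        print(f"{label:18s} {size_gb:4.1f} GB @ {mbps} Mb/s -> {transfer_hours(size_gb, mbps):5.1f} h")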
The final compressed file is under 2 GB and transfers from the client's studio to Aberdeen's data center in under five hours. This allows Aberdeen's AberCap division to complete the closed captioning in time for AberFast to transcode and deliver a custom file back to the FOX station in Texas in less time than a tape could have made half the journey. AberFast's ability to leverage the most advanced technological applications and workflows in the industry enables Aberdeen to deliver high-quality content at unparalleled speeds, meeting airdates around the globe.
Ask this question: how do television stations receive programming from producers? Most people outside the industry would assume that everything happens digitally, file-based. It is 2012, so that must be how things are done, right? Not exactly. Moving to a completely file-based operation has proved challenging for most TV stations, where receiving tape remains commonplace. This article explains, step by step, exactly how Aberdeen Broadcast Services has made going 100% file-based not only possible but easier than would be expected for both producers and TV stations. You will learn what is currently happening at stations, the solution Aberdeen developed, and the steps in the process, along with insider details about the software and infrastructure used to make complete digital file delivery to TV stations a success.
Necessity is the mother of invention. This phrase holds true for Aberdeen Broadcast Services, a company based in Southern California, USA. Originally incorporated in early 2001 as a closed captioning company, Aberdeen built its business around providing closed captioning services to producers and broadcasters who were required to comply with new FCC regulations mandating closed captioning on (almost) all US television broadcasts. During the first ten years in business, the program transfer medium of choice was tape, mostly analog Beta SP. Year after year, thousands of tapes went in and out of the Rancho Santa Margarita, California offices via FedEx. As the business grew, owners Matthew Cook and Becky Isaacs knew there had to be a more efficient, cost-effective way to move these programs around the nation and the world. With broadband Internet saturation reaching record levels every year, the Internet looked increasingly like the avenue to accomplish such a task. However, initial experiences with popular file transfer services and other FTP-based options proved unreliable, and in the television business, unreliable was not going to be acceptable.
The initial hurdle was moving large, high-quality broadcast master programs to Aberdeen’s offices for captioning and then getting those masters to the destination television stations for broadcast.
These large master files could be upwards of 30 GB for a half-hour High Definition (HD) program. At the time, the most widely used method of transfer over the public Internet was FTP (File Transfer Protocol), a standard network protocol used to move files from one host to another over a TCP-based network. After robust testing, TCP was found to be far from ideal for sustaining a large data stream over long distances. The biggest drawback of TCP-based file transfers is latency. The time it takes for packets to travel from A to B, plus the time for the acknowledgement to travel back from B to A confirming receipt, is called the Round-trip Time (RTT). TCP needs this acknowledgement before the sender will transmit the next window of data, and the higher the latency, the slower the transfer. The farther apart the two endpoints are, the longer the return message takes, which can cause even the fastest high-speed connection's transfers to slow to a crawl. And the longer a transfer takes, the more vulnerable it is to timeouts and service disconnects.
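To see why latency matters so much, consider the classic ceiling on TCP throughput: roughly the window size divided by the round-trip time, no matter how fast the physical link is. A minimal sketch of that relationship, assuming the traditional 64 KB window with no window scaling (the RTT values are illustrative):

WINDOW_BYTES = 64 * 1024  # classic 64 KB TCP receive window, no window scaling

def max_tcp_mbps(rtt_ms, window_bytes=WINDOW_BYTES):
    """Upper bound on single-connection throughput, in megabits per second, for a given RTT."""
    return (window_bytes * 8) / (rtt_ms / 1000.0) / 1_000_000

for rtt in (5, 30, 80, 200):  # same city, regional, coast-to-coast, intercontinental
    print(f"RTT {rtt:3d} ms -> at most {max_tcp_mbps(rtt):6.1f} Mb/s per connection")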
After looking into the broadcast transfer solutions available at the time, the software chosen was built on the User Datagram Protocol (UDP) because of its advantages over the more commonplace FTP approach. UDP does not wait for per-packet confirmation and delivers as many packets as fast as the connection will allow. An MD5 checksum at the end of the transfer scans the file on the receiving end and makes sure it is complete, and any packets lost during transfer are resent to complete the file. This allowed transfers to maintain much greater data rates even between well-connected locations, but it offered the largest improvement for long-distance transfers and slow connections. The chosen software also offered other sensible features that assist with the logistics of file transfers, such as automatic e-mail notifications and scheduled deliveries.
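The bit-for-bit check at the end of a transfer boils down to comparing checksums on both ends. A minimal sketch of the receiving side in Python (the file path and expected digest are placeholders):

import hashlib

def md5_of(path, chunk_size=8 * 1024 * 1024):
    """Stream the file through MD5 in chunks so large masters never need to fit in RAM."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "d41d8cd98f00b204e9800998ecf8427e"  # checksum reported by the sending side (placeholder)
received_file = "incoming/program_master.mxf"  # placeholder path
if md5_of(received_file) == expected:
    print("Transfer verified bit-for-bit.")
else:
    print("Checksum mismatch -- request a resend of the lost or corrupt packets.")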
The decision to house the networking and computer systems off-site was made to add redundancy and protection from power outages and other service disruptions. An SSAE 16, SOC-1 Type II data center in Dallas, Texas was chosen for its central location in the United States and its 24/7 support.
Building out the appropriate transcoding farm was the next challenge. The initial architecture was built on a Dell Blade chassis with a Gigabit backplane connecting a 60 TB Dell MD3000 and MD1000 SAN in a RAID 5 configuration. As the digital file delivery business started to grow, it became immediately apparent that the networking limitations of a chassis system connected via Windows Share (NTFS) to the SAN would not support the high resource demands of a multi-blade transcoding farm. Bottlenecks originally limited the six-blade system to a shared 1 Gbps iSCSI connection, which was not enough throughput considering the mezzanine file format for the system was uncompressed AVI with data rates up to 600 Mbps per file. The multiple transcoding blades were quickly slowed by these large files traversing the network.
The decision was made to find a network consultant seasoned in video transcoding to redesign the system for greater throughput and scalability. Jeff Brue at Open Drives Inc. came highly recommended by a number of West Coast post houses and major motion picture companies. The redesign of the transcoding farm by Open Drives Inc. included a new custom 180 TB Solaris Share storage system with full Active Directory support and Windows-compatible ACLs. New SuperMicro 16-core 2.4 GHz servers with 32 GB of RAM and GPU video processing cards were implemented to maximize the transcoding horsepower of each blade, and the network switches were upgraded to support a central 10 Gigabit Ethernet infrastructure. By efficiently utilizing memory caching and 40 Gb/s of bonded 10 Gb/s connectivity, a balanced mix of processing power and storage speed has been achieved. Aberdeen operators currently remote into the ten servers at the Dallas data center using Remote Desktop and web GUIs, and the team manages, receives, transcodes, and delivers hundreds of full-length TV programs and movies a week from off-site locations around the globe.
A television station's method of broadcasting can easily be compared to an Apple iTunes playlist. The machine and interface that handles this is appropriately called the Play Server. Each track in the “playlist” is a promo, commercial, or program, and as each clip ends, the next file is played and passed through the airchain to the end viewer. Although these Play Servers can handle a multitude of different codecs, wrappers, and metadata tracks, stations usually have a “house codec” that all files are encoded or ingested to for consistency and uniformity. In an effort to minimize the number of times a client's program is encoded, Aberdeen transcodes the asset only once to the house codec for each station deliverable, allowing it to enter the station's workflow seamlessly. Important encoding variables for individual station house codecs include codec, wrapper, bit rate, GOP structure, aspect ratio, resolution, color sampling, field dominance, frame rate, audio format, sample rate, bit depth, and caption data.
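As an illustration only, a house-codec transcode touching those variables might look like the following sketch, which wraps the FFmpeg command-line tool to produce an MPEG-2 50 Mb/s 4:2:2 interlaced MXF with 24-bit 48 kHz PCM audio. The exact codec, wrapper, GOP structure, and caption handling vary per station, and Aberdeen's real deliverables are built with dedicated broadcast encoders; every parameter and file name below is assumed for the example.

import subprocess

def transcode_to_house_codec(src, dst):
    """Hypothetical house codec: MPEG-2 4:2:2 @ 50 Mb/s, 1080i29.97, PCM audio, MXF wrapper."""
    cmd = [
        "ffmpeg", "-y", "-i", src,
        "-c:v", "mpeg2video", "-b:v", "50M",      # video codec and bit rate
        "-pix_fmt", "yuv422p",                    # 4:2:2 color sampling
        "-flags", "+ildct+ilme", "-top", "1",     # interlaced, top field first
        "-s", "1920x1080", "-r", "30000/1001",    # resolution and frame rate
        "-g", "15", "-bf", "2",                   # GOP structure
        "-c:a", "pcm_s24le", "-ar", "48000",      # 24-bit / 48 kHz audio
        "-f", "mxf", dst,
    ]
    subprocess.run(cmd, check=True)

transcode_to_house_codec("masters/show_101.mov", "deliverables/show_101_station.mxf")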
Brent Chance, the Broadcast Engineer at NBC Encompass had this to say about file delivery systems: “It is vital for digital delivery services to have the resources and means to send content in a format that is ideal for the station’s workflow, their systems demand it.”
As a captioning company, Aberdeen knew that the successful transcoding of digital files would hinge on its ability to get caption data correctly into the house codecs of its clients' stations. This proved to be no easy task, as the caption information can live in various positions inside different digital files. The following are some of the specifications that define where caption data can be written for different codec/wrapper combinations: SMPTE 436M, ATSC, DTV, SCTE 20, SMPTE RDD-11, SMPTE 328, SMPTE 314, SMPTE 374, SMPTE 360, CEA-608, and CEA-708. Believe it or not, getting correct caption insertion into the deliverable file is still an ongoing effort that involves working with the software developers that encode the files and the hardware developers that make the station Play Servers.
Automation was found to be the most advantageous way to move files through the various stages of the transcoding process. Using the latest professional software encoding engines, Aberdeen's engineers utilize the built-in watch-folder functionality of these programs to automatically push assets through station-specific workflows for encoding and quality control. Developing a folder structure that made logical sense to an operator and contained the correct layers necessary to utilize the watch-folder automation was vital.
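Commercial encoding engines provide watch folders out of the box, but the underlying idea is simple: watch a folder, wait for an arriving file to stop growing, then hand it to the next stage. A minimal polling sketch (the folder names and the 30-second settle time are illustrative assumptions):

import shutil
import time
from pathlib import Path

WATCH = Path("/aberfast/watch/KDTN_incoming")      # illustrative folder names
READY = Path("/aberfast/workflows/KDTN_encode")
READY.mkdir(parents=True, exist_ok=True)

def is_stable(path, wait_s=30):
    """A file still being uploaded keeps growing; only act once its size stops changing."""
    size = path.stat().st_size
    time.sleep(wait_s)
    return path.stat().st_size == size

while True:
    for f in WATCH.glob("*.mov"):
        if is_stable(f):
            shutil.move(str(f), str(READY / f.name))  # dropping it here kicks off the encode workflow
            print(f"Queued {f.name} for encoding")
    time.sleep(60)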
In addition to expensive broadcast encoding software, inexpensive or freeware programs have found a place in daily transcoding workflows. At Aberdeen, programs such as HandBrake, MediaInfo, VLC, QuickTime, Windows Media Player, and JES Deinterlacer are used daily to play, analyze, and transcode different file formats. Most of these freeware programs use the open-source, cross-platform FFmpeg project as the engine to record, play, encode, and stream content.
Planning for the diverse cross-section of Internet connection speeds at client locations was critical to set proper expectations and standards for client file uploads. Most high-speed Internet, cable modem, and DSL connections are designed to allow much faster download speeds than upload speeds, and producers must use their slower upload speeds to submit files. This, compounded with the large file sizes of broadcast-quality programs, meant that a number of variables needed to be considered. Aberdeen customized a popular data transfer chart to illustrate approximate transfer times relative to file size. Different file compression formats were then recommended based on the connection speed and the file size calculation from the chart, in an attempt to minimize transfer times without compromising the quality of the file content.
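A chart of that kind is easy to regenerate. This small sketch prints approximate upload times for a few common file sizes and upload speeds; the specific sizes and speeds are illustrative, not Aberdeen's published chart:

SIZES_GB = [1, 2, 5, 10, 30]     # typical export sizes
SPEEDS_MBPS = [1, 3, 10, 50]     # typical upload speeds

def hours(size_gb, mbps):
    return size_gb * 8 * 1024 / mbps / 3600

print("file size " + "".join(f"{s:>8} Mb/s" for s in SPEEDS_MBPS))
for gb in SIZES_GB:
    row = "".join(f"{hours(gb, s):11.1f} h" for s in SPEEDS_MBPS)
    print(f"{gb:>6} GB {row}")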
After researching codec white papers and conducting critical visual analysis of all of the commonly used compression codecs, Aberdeen approved six popular, accessible delivery formats to be recommended based on upload speed, NLE system, and source content: Apple ProRes (LT), Sony XDCAM, DVCPRO (HD, 50, 25), DV, H.264, and MPEG-2.
Receiving programs digitally forced Aberdeen to require its clients to refine not only their file specifications but their content specifications as well. Television broadcast standards put forth by the ATSC, DVB, ISDB, and DTMB require terrestrial, cable, satellite, and handheld content to adhere to strict signal specifications, the most notable of which cover chroma, luminance, gamma, audio loudness, and peak level. Since the station deliverable files created by Aberdeen go directly to the station's Play Server, rather than through the traditional tape ingest process where signals are conformed by a processing amplifier, a method was needed to ensure that each program was delivered with legal values for broadcast. Among the many viable software options, VidChecker was chosen for its ability not only to check for values outside of legal specifications but, in most cases, to correct them as well. It is this combination of client education and software check-and-correction that allows Aberdeen to process and deliver hundreds of files a week with 99.999% station acceptance.
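VidChecker is a commercial tool, and real QC covers far more than loudness, but the flavor of an automated legalization check can be sketched with FFmpeg's EBU R128 analyzer. This is a rough sketch only; the -24 LUFS target and 2 LU tolerance are assumptions, and the file name is a placeholder:

import re
import subprocess

TARGET_LUFS = -24.0   # assumed loudness target (e.g., ATSC A/85 style)
TOLERANCE_LU = 2.0    # illustrative pass/fail tolerance

def integrated_loudness(path):
    """Run FFmpeg's ebur128 analyzer and take the last 'I: ... LUFS' value, i.e., the final summary."""
    result = subprocess.run(
        ["ffmpeg", "-hide_banner", "-nostats", "-i", path, "-filter_complex", "ebur128", "-f", "null", "-"],
        stderr=subprocess.PIPE, text=True,
    )
    matches = re.findall(r"I:\s+(-?\d+(?:\.\d+)?) LUFS", result.stderr)
    if not matches:
        raise RuntimeError("Could not parse loudness report")
    return float(matches[-1])

loudness = integrated_loudness("deliverables/show_101_station.mxf")
status = "PASS" if abs(loudness - TARGET_LUFS) <= TOLERANCE_LU else "FAIL - needs correction"
print(f"Integrated loudness: {loudness:.1f} LUFS -> {status}")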
TV stations are well aware of the need and desire of programmers to deliver files digitally. However, this can be much easier for the producer than it can be for the station. Tapes have been the medium of choice for decades and the station engineers and technicians know how to handle them. To many engineers, files are new and complicated, especially if stations accept a number of different formats. There are hundreds of ways files can be encoded incorrectly and TV stations need to figure out new systems and train technicians and engineers on other ways of processing new programming. Many stations are trying to tackle this process on their own and are realizing the complexities involved and the need for additional personnel.
Aberdeen helps TV stations simplify the file delivery process. By creating a specific file that matches each TV station's Play Server, Aberdeen limits the number of tasks the operator or technician has to perform because the file arrives in the Play Server's native format. The ability to create files in station-specific formats has been well received by all stations because it reduces the time it takes to process Aberdeen files for air, ultimately freeing up manpower and streamlining internal processes. An additional benefit is that the UDP delivery system is completely automated: when a file is placed in the appropriate station watch folder, a notification e-mail is automatically sent to the appropriate station employees stating that a new file is ready and will be queued for download at the top of the next hour. Files are usually configured to download to a machine in master control or to a shared network-attached storage (NAS) system for easy transfer into the station's automation system or Play Server. At the station's request, the leader and tail elements (bars, tone, slate, and black) can be trimmed off the program to make the file prep process even more seamless.
Files are delivered to stations with a specific naming convention that allows broadcasters to tell exactly what the program, episode, and airdate are without having to embed or attach additional metadata. The primary naming convention currently being used is ProgramIdentifier_EpisodeNumber_Airdate. By providing an optimal file for the station’s workflow, the process of receiving outside content is made more efficient.
Some stations that started with one or two programs have been pushing all their outside producers to use file-based delivery services. KDTN’s engineer Stephen Darsey had this to say: “I dare you to find the difference between a tape ingest and delivered file, you won’t and I don’t care how long you've stared at a scope and a monitor, you won’t see it!” David Tait, Broadcast Engineer from Zoomer Media says, “(The) technology used (UDP) beats FTP and is more robust and reliable. We also like the fact that the files are automatically delivered on a schedule.”
Automate everything! This has become the internal operational mantra of the AberFast business unit. By taking as many manual processes as possible out of the workflows, files run through the system and are delivered and billed quickly, efficiently, and accurately. The manual processes that do remain are treated as opportunities for quality control and gate-keeping. Standardizing file naming conventions for all clients has allowed Aberdeen to employ filters that automatically sort files into the appropriate workflows based on the prefix characters of the file name. Requiring clients to submit file names in the correct format eliminates manual tasks when files arrive and allows for faster, more accurate processing. For example, a file filter dialog lets operators enter inclusive or exclusive file name filters: a file named “JPM_KDTN_W308_092012” will go through a workflow built to include all file names with the prefix JPM_KDTN*. This simple change proved to be a very powerful tool for routing different program versions through specific workflows automatically, as the sketch below illustrates.
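A rough sketch of that routing logic in Python (the workflow paths and the second filter entry are invented for the example; the field layout follows the sample file name above):

from fnmatch import fnmatch

WORKFLOW_FILTERS = {              # file-name filter -> workflow (paths are illustrative)
    "JPM_KDTN*": "/workflows/KDTN_HD",
    "JPM_WXYZ*": "/workflows/WXYZ_SD",
}

def parse_name(filename):
    """Split a submitted file name like JPM_KDTN_W308_092012 into its parts."""
    program, station, episode, airdate = filename.rsplit(".", 1)[0].split("_")
    return {"program": program, "station": station, "episode": episode, "airdate": airdate}

def route(filename):
    """Return the first workflow whose filter matches, or None for manual gate-keeping."""
    for pattern, workflow in WORKFLOW_FILTERS.items():
        if fnmatch(filename, pattern):
            return workflow
    return None

name = "JPM_KDTN_W308_092012.mov"
print(parse_name(name))   # {'program': 'JPM', 'station': 'KDTN', 'episode': 'W308', 'airdate': '092012'}
print(route(name))        # /workflows/KDTN_HD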
Proxies are used at multiple places in the automation chain. For example, the first task of the transcoding farm is to make a proxy in the same aspect ratio and frame rate as the asset file. This proxy is what Aberdeen's transcribers and caption editors use to create caption files that are later embedded into the final deliverable files. These proxies are usually 1/10th to 1/50th of the original asset size (typically in WMV or MOV format), making them easy and quick to move within the network. Proxy files are also used in the QC process: from the final station-specific file, a proxy with open (burned-in) captions is created and then manually reviewed to ensure that the closed caption data, video and audio sync, and start/end times are all accurate, much like checking a tape. These proxy files are made available to the TV stations or producers to provide confidence and proof of captioning for the final deliverables.
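A sketch of how such a caption-burn QC proxy could be produced with FFmpeg (the real workflow uses professional encoders and WMV/MOV proxies; H.264/MP4 and the subtitles filter are used here purely for illustration and require an FFmpeg build with libass, and the file names are placeholders):

import subprocess

def make_qc_proxy(master, captions_srt, proxy_out):
    """Small, low-bit-rate proxy with captions burned into the picture for visual QC."""
    cmd = [
        "ffmpeg", "-y", "-i", master,
        "-vf", f"subtitles={captions_srt},scale=640:-2",   # burn in captions, shrink the frame
        "-c:v", "libx264", "-b:v", "800k",                 # roughly 1/10th-1/50th of the master size
        "-c:a", "aac", "-b:a", "96k",
        proxy_out,
    ]
    subprocess.run(cmd, check=True)

make_qc_proxy("masters/show_101.mov", "captions/show_101.srt", "proxies/show_101_qc.mp4")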
Another layer to Aberdeen’s asset management workflow was standardizing the Windows folder structures of the shared storage (SAN). This was an important practice that aided in maintaining organization throughout the shared storage array. This consistency allows operators to easily understand where files need to be placed to start the automation and where finished files arrive after going through the job cycle.
In this article you have read exactly how 100% digital file delivery has been achieved, though not without two years of late nights, countless meetings with software vendors, plenty of trial and error, and plain old hard work to arrive at a working solution. The process still takes extensive testing with each station, but once that initial testing is over, it is smooth sailing. Sounds like a dream come true! For the producers who use it, it is. For TV stations, it is more than a dream: it changes everything. Get moving in the right direction and go completely file-based.
Matt Donovan is the Director of Digital Delivery for Aberdeen. Matt is a broadcast engineer and a graduate of Pepperdine University. He has worked in many live and studio environments over his young career for the likes of ESPN, FOX, CBS, and Current TV. Matt is passionate about creating beautiful images and uses his broadcast technology experience to build and maintain asset management systems that enhance the quality of Aberdeen's clients' programming.
Aberdeen Broadcast Services was recently contacted to provide a solution for a unique challenge: providing English-to-Spanish translation at a live event in which a pro football player would be interviewed in front of both an English- and a Spanish-speaking audience. Our solution? Translation through captions.
At first glance, this might not seem like a big deal, but other, more conventional translation solutions wouldn't cut it this time. A standing translator was not an option since more than half of the audience would still be English speakers, and it would have thrown off the flow of the informal interview style the producers desired. On the other hand, the logistical challenge of getting radios to the several hundred Spanish speakers in the audience meant radio translation wasn't feasible either.
To make this a success, we had to clear several hurdles. First: human resources and talent. We used an in-house translator for the English-to-Spanish oral interpreting and secured one of our most skilled Spanish live writers.
The second set of challenges was technical: the event would be held at a sports arena in San Diego, the translator would work out of Aberdeen's offices in Orange County, and the writer worked out of Colombia. To solve this, the translator connected to the event through a phone line and audio coupler, the translator connected with the writer through Skype, and the writer dialed into the encoder at the sports arena.
The third set of challenges belonged to the linguistic realm. The guest speaker was a pro football quarterback talking about his experiences as a Christian in college football and the NFL, which meant his speech would be filled with a mix of Christian and football terminology. Also, the translator was Puerto Rican, the writer Colombian, and most of the audience Mexican. This required the translator to use words and terminology that kept the accuracy, integrity, and feeling of what was being said in English, including the football terminology, but that would at the same time be understood by both the writer and the audience.
The last challenge was practical and related to the audience's experience: keeping the delay to a minimum. The goal was to keep the final Spanish captions within a 4- to 5-second delay from the time the words were originally spoken. This gave only 2-3 seconds for both translation and writing, on top of the 2- to 3-second delay that is unavoidable when doing live captions.
In the end, the event was a success. All these challenges were met with unprecedented coordination and communication. And most importantly, the Spanish speaking audience’s need was met with a timely and accurate translation.
What do you think of Aberdeen’s solution to this event? Would you have done things differently? We would love to hear your feedback.
This blog was written by Rolando Betancourt
Since the implementation of the FCC mandate for internet closed captioning, producers, TV stations, and video websites alike have been looking for the best options. HTML has since evolved to include standardized guidelines for video rendering and captioning. In the past, there was no standard for playing a video on a web page; doing so almost always required plugins such as QuickTime, Silverlight, RealPlayer, or Flash Player. HTML5 has now standardized accessible video and allows captions to be displayed alongside it. Companies like Aberdeen have incorporated this advancement into their day-to-day operations and provide the most up-to-date technical services to their clients.
As mentioned above, HTML5 is a major leap for standardizing video across web browsers and devices, and consequently simplifying closed captioning. The idea is that web video should be based on an open, universal standard that works everywhere. HTML5 natively supports video without the need for third party plugins.
There are two groups collaborating on HTML5 closed captioning standards: the Web Hypertext Application Technology Working Group (WHATWG) and the World Wide Web Consortium (W3C). Each group has its own CC standard: WHATWG has developed WebVTT (Web Video Text Tracks) and the W3C has developed TTML (Timed Text Markup Language). The two standards have different origins: WebVTT is basically a modified SRT, and TTML is a modified XML. In practice, browsers' native track support expects WebVTT, but because the formats are so close, old-fashioned .srt and .xml files can be converted for subtitling and/or closed captioning with very little effort, and many HTML5 player frameworks accept them directly.
Believe it or not, the code needed to tie a video and a caption file together is very simple. Below is a sample of HTML5 markup showing how to include a video and its closed caption file:
<video width="320" height="240" controls>
  <source type="video/mp4" src="my_video.mp4">
  <track src="cc_file.vtt" label="English captions" kind="captions" srclang="en-us" default>
</video>
Here are the attributes of the track element:
src: specifies the name and location of the cc or subtitle file
label: specifies the title of the track
kind: specifies the type of time-aligned text. The options are: captions, subtitles, chapters, descriptions, or metadata
srclang: specifies the language
default: specifies that this track is enabled by default. Note that multiple track elements can be used simultaneously.
The next question that comes to mind is whether HTML5 can handle multiple languages. The short answer is yes, definitely. We just have to modify the code to include a track element for each subtitle language file. If you need to subtitle multiple languages, just contact a company like Aberdeen and they will surely help you get your subtitle project going with files that work seamlessly with HTML5!
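If your existing caption files are in SRT, converting them to WebVTT for native browser track support is trivial; a minimal sketch in Python (WebVTT is essentially SRT with a header and periods instead of commas in the timestamps):

import re

def srt_to_vtt(srt_path, vtt_path):
    """Convert an .srt caption file to WebVTT: add the header and fix the timestamp separators."""
    with open(srt_path, encoding="utf-8-sig") as f:
        text = f.read()
    # 00:01:02,500 --> 00:01:05,000  becomes  00:01:02.500 --> 00:01:05.000
    text = re.sub(r"(\d{2}:\d{2}:\d{2}),(\d{3})", r"\1.\2", text)
    with open(vtt_path, "w", encoding="utf-8") as out:
        out.write("WEBVTT\n\n" + text)

srt_to_vtt("cc_file.srt", "cc_file.vtt")   # produces the file referenced in the markup above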
Arif Kusuma is the Chief Technical Officer at Aberdeen Broadcast Services. He has Bachelor's degrees in Chemical Engineering and Computer Information Systems, as well as a Master of Business Administration. He has a passionate drive for technology, both hardware and software, and, like many people, loves to be the first to get the latest gadgets.
Does it seem like working in the television broadcast industry is a bit like learning a second language? It certainly can feel that way for many content producers and video engineers dealing with closed captioning and file compression. Exposure to foreign words and concepts such as “metadata” usually occurs on the floors of industry trade shows like NAB and in many a master control room across the nation and the globe. Hmmm… “metadata”… that sounds an awful lot like Greek, doesn't it? Actually, “meta” is a Greek prefix meaning “after” or “adjacent,” among other things. It makes sense, then, that the word was adopted by the TV industry to describe the area of a digital file containing data about the audio and video that is not the actual audio or video information itself. Rather, metadata is complementary information pertaining to the audio and video tracks of the file; this additional data comes “after” the A/V tracks, or sits “adjacent” to them, in a sense.
For instance, since captioning can be “closed” (hidden until the viewer turns it on, rather than “open” and always visible), captioning is technically not part of the audio or video but is additional metadata that complements the rest of the file. With traditional analog video, this extra information is encoded onto Line 21 of the VBI (vertical blanking interval) as part of the EIA-608 captioning standard. Hard to imagine, but the digital caption encoding protocol (EIA-708) is even more complex. It is not enough, then, to simply put captions into the roughly equivalent Line 9 or VANC (vertical ancillary) portion of a digital file, because the location of captions within this VANC area depends on which file format the producer is required to send to each individual television station or network.
To complicate matters even further, not all TV stations use the same file formats and on-air play servers to air video programming, unlike the analog days, when BetaCam and DigiBeta tapes were the commonly accepted formats. Part of the trouble with creating files for different station on-air play servers is that each play server uses a different proprietary file format. Each of these file formats, such as MXF, LXF, and GXF, among many others, looks for captioning data in a unique, format-specific location within the metadata area of the file. So, unless you are transcoding files with this in mind, closed captioning stands a good chance of getting lost in translation.
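A quick way to spot whether captions survived a transcode is to inspect the file's stream report; FFmpeg's ffprobe marks video streams carrying embedded caption data with a “Closed Captions” note. A rough sketch (it only catches captions ffprobe can see, such as CEA-608/708 carried with the video, not every wrapper-specific location; the file names are placeholders):

import subprocess

def has_embedded_captions(path):
    """Check ffprobe's stream report for the 'Closed Captions' marker on the video stream."""
    result = subprocess.run(
        ["ffprobe", "-hide_banner", "-i", path],
        stderr=subprocess.PIPE, text=True,
    )
    return "Closed Captions" in result.stderr

for f in ("show_101_station.mxf", "show_101_qc.mp4"):   # placeholder file names
    print(f, "->", "captions detected" if has_embedded_captions(f) else "no captions found")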
More on file ingest and quality issues in future entries. In the meantime, please see this link for more on 608 vs. 708 captioning standards.
This blog article was written by Steve Holmes, Sales Engineer for Aberdeen Broadcast Services
Have you ever heard people use foreign words, such as “feng shui” or “faux pas,” but completely change the pronunciation, to the point where you're left wondering if they just made up a word? As a bilingual caption editor at Aberdeen, I come across this situation pretty often while working on multi-language projects.
Just picture a car infomercial in Spanish: Spanish-speaking people trying to sell you cars that have English, German, and sometimes even French names. I mean, “craisler taun an contri” might not really mean anything to you, but when you look at the screen you can clearly see they are selling a Chrysler Town & Country minivan. More recently, I started captioning a sports TV show in Spanish that follows American football. At first I thought the challenge would be that I'm the farthest thing from a sports fan there is, but I found out that my challenge, once more, is the fact that the teams and players have English pronunciations for their names while the show is in Spanish. Perhaps it's because I'm not a sports fan, but when the commentators throw in names like “Chan Li” or “Imarco Mur” at 500 words per minute, the first thing that comes to mind is definitely not “Sean Lee” or “DeMarco Murray.”
Despite these challenges, I enjoy working with these types of shows because I know that after thorough research, I’m able to present captions that even the biggest football fan will be able to enjoy. So the next time you stumble upon a Spanish-speaking show on TV, you might want to turn on that CC button and find out if the captions are selling you a “bosbaguen yi-ti-ai” or a “Volkswagen GTI.”
My job as Technical Support Engineer in the AberFast division of Aberdeen Broadcast Services is to secure station set-ups for digital media delivery; more specifically, to deliver half-hour programs complete with the FCC-required closed captions. One would think that a procedure that reduces the demands on TV station personnel and technical resources would be a welcome technological advancement. For the most part, it is, but often not without some prodding and cajoling.
Yet in the year since starting at AberFast, there have been many instances where I have had to plead my case to station engineers about the benefits of AberFast digital file delivery. The reasons given are quite varied – virus intrusions, internet bandwidth congestion, “tape works well,” “access into station networks is prohibited,” all the way to “We just don't accept digital media files, period!” At AberFast we work diligently to partner with stations to make their jobs easier, not harder, and yet opposition is faced at every turn.
To understand the perspective of my TV engineering counterparts, I did a bit of soul-searching, looking back on my years as a post-production editor who migrated from linear tape-based editing to the nonlinear digital world.
In video editing, the goal is to work efficiently by minimizing repetitive keystrokes and utilizing automation (macros) as much as possible. Once you get those button clicks in sync, you have rhythm, speed and an incredibly fast creative workflow. Once efficiency is achieved, you’re in a groove…that is until the technology changes or systems and software are no longer supported. Now you have to learn anew, back at square one.
I've realized it's not so much the fear of technology as the fear of change. It's the fear of struggling with new workflows and procedures in a hectic work environment, especially when the status quo works well. I remember being asked to deliver 30-second TV ads via FTP to our master-control facility across the state. I had no idea what FTP was. I had no idea about video codecs. I didn't know a "dot m-o-v" from a "dot a-v-i" or a "dot w-m-v". I knew I could lay 10 spots onto a Beta tape in 5 minutes, blindfolded, and hand it to a receptionist to mail. That procedure had worked fine for many years. Why did I have to change my routine to accommodate this FTP thingy? Compressing video files was time-consuming (CPUs weren't very fast 9 years ago). Furthermore, it tied up my AVID so I couldn't edit while I was exporting, and FTPing drained my computer's resources so editing was problematic. I had no engineering support to teach me about video codecs and compression schemes, and after a department downsizing, my workload was overwhelming at best.
So, as the old saying goes, "When in Rome…" I started to learn about codecs through Google searches. I ran side-by-side codec comparison tests and weighed file size against quality: the smallest file that provided the best video quality would be the file I could export and FTP the fastest. I learned to make the most of this new workflow. While my AVID was tied up exporting, I checked emails, wrote scripts, and contacted clients. I learned about watch folders and created them to automate file conversion procedures. Because of this change, I ended up broadening my knowledge of video files, sped up my ancillary work duties outside of editing, and kept up with my workload more easily than before. I was back in the groove, and a few years later I was in a new job at Aberdeen.
There’s another saying that goes, “The only constant in life is change” (and death and taxes, but that’s for another blog). Cloud-based file delivery is here to stay and at AberFast we are at the forefront of this rapidly changing technology. We want to partner with clients and stations—not to create more change—but to make change easier. We want to decrease distribution costs while increasing video programming quality. We want to connect program producers with TV stations so that content moves effortlessly. Just … embrace … the change. Otherwise, you’ll end up watching the clouds roll on by.
This blog was written by Vincent D’Amore, Technical Support Engineer for Aberdeen Broadcast Services.
My grandfather is a soft-spoken former Marine and my father has carried on the tradition of both soft-spoken tendencies as well as military service. The one thing that isn’t soft, however, is the volume at which they watch television. Both will ratchet up the sound until it’s blaring through the entire house. This may seem like an annoying habit or one that’s inconsiderate, at best, but for them, it’s the only way to hear. Both suffer from hearing loss – hereditary as well as linked to their military service.
My father is 80% deaf and wears hearing aids in both ears to compensate. His audiologist has told him that by the time he reaches 50 years old, he will be nearly 100% deaf. My grandfather also wears hearing aids but often forgets to put them in, rendering them useless. The solution the family has found for my grandfather, who loves baseball and true crime movies, is closed captioning. He can now sit in the living room with the family bustling around him and follow the storyline of Law and Order or a Tigers baseball game without missing a beat.
It’s an amazing thing to now work for a company that provides these services for people around the world. Having my family be affected by hearing loss makes me so much more appreciative of people like those on our team who strive to deliver 100% accurate captioning.
Written by Amber Kellogg