by James Delhauer
On a set, the job of the person tasked with acquiring the content that is shot throughout the day is incredibly stressful. Whether we’re discussing the tape operators of days gone by or the most modern media recordists, the challenges have stood the test of time. Somewhere between hundreds of thousands and hundreds of millions of dollars are spent assembling the production. Countless man-hours contribute to making it the very best that it can be. Literal blood, sweat, and tears are spilled to create what we all hope will be a veritable work of art. Then, after all of that, it falls on the shoulders of the one person tasked with handling the media, who is handed the very delicate assets created throughout the day, assets that represent the sum total of the production as a whole. Just about anything can go wrong. Data can be corrupted. Hard drives can be damaged. Videotape can tear. Fortunately, these risks are being minimized by the advent of a new method of media acquisition: server-based recording.
Though different productions utilize a vast array of workflows, every single one since Roundhay Garden Scene was first filmed in 1888 has come down to the media. And every single production needs someone to manage it. In today’s digital era, the most common workflow goes a little something like this. Cameras or external recorders capture video and audio data to an internal storage device of some sort. When that unit is full, it is ejected and turned over to a media manager. The production continues with another memory card while the media manager takes the first one and offloads, backs up, and verifies the files on it. This is usually done with an intermediate program such as Pomfort’s Silverstack or Imagine Products’ ShotPut Pro, programs that perform file comparisons to ensure that what was on the source media is identical to what ends up on the target media. When all of that content is secured on multiple external hard drives, the original memory card is returned to the production so that it can be wiped and reused. Rinse and repeat. At the end of each day, the media manager turns over at least one set of drives containing the day’s work to someone who will bring it to a post-production facility.
There, the content is moved from the temporary shuttle drives onto work servers, where assistant editors can begin their work.
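The verification step above is worth making concrete. Below is a minimal sketch, in Python, of the kind of copy-and-verify pass that programs like Silverstack and ShotPut Pro perform. It is illustrative only: the commercial tools typically use faster checksums such as xxHash and offer far more robust error handling, and the volume paths here are hypothetical.

```python
import hashlib
import shutil
from pathlib import Path

def file_checksum(path: Path, chunk_size: int = 8 * 1024 * 1024) -> str:
    """Hash a file in chunks so large camera files never sit in memory all at once."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def offload_and_verify(card: Path, *targets: Path) -> None:
    """Copy every file from the card to each target drive, then re-read each
    copy and compare checksums to prove the bytes arrived intact."""
    for src in card.rglob("*"):
        if not src.is_file():
            continue
        source_hash = file_checksum(src)
        for root in targets:
            dst = root / src.relative_to(card)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            if file_checksum(dst) != source_hash:
                raise IOError(f"verification failed for {dst}")

# Hypothetical mount points: one camera card, two backup drives.
offload_and_verify(Path("/Volumes/CAM_A_001"),
                   Path("/Volumes/SHUTTLE_1"),
                   Path("/Volumes/SHUTTLE_2"))
```

Only after every verified copy exists can the card safely be wiped and handed back to the camera department.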
While widespread, this workflow comes with a few inherent drawbacks. Most notably, the process is both fragile and time-consuming. Digital storage, no matter how sophisticated, is vulnerable to failure, damage, or theft. When the media manager receives a card with part of the day’s work on it, that card is often the only raw copy of the work in existence. Careers could end in a heartbeat if anything were to happen to it. So it becomes his or her job to create multiple copies. Unfortunately, the moment data is transferring from one storage system to another is the moment it is most vulnerable. An accidentally yanked cable or sudden power surge is all it takes to corrupt open files mid-transfer. This vulnerability is compounded by the fact that transferring files is time-consuming and becoming ever more so. As our industry continues to push the boundaries of resolution, color science, and bit depth, video files are getting bigger and bigger. As such, they require more time to offload, duplicate, and verify, meaning that the window of vulnerability is growing longer.
But emerging technologies are creating new workflows that circumvent these drawbacks. Among the most promising is server-based recording.
Rather than relying on disparate components that must be passed back and forth between different individuals on a set, server-based recording allows productions to streamline their workflows and unify everything through one interconnected system. All of the components can be plugged into a single network switch and communicate with one another directly. Cameras and audio devices send uncompressed media directly into the switch. The network feeds those signals into a digital recording server (such as Pronology’s mRes or Sony’s PWS-4500), which encodes the uncompressed data into ready-to-edit files. These files are then sent back into the network, which in turn delivers them to any desired network-attached storage devices (such as Small Tree’s TZ5 or Avid’s ISIS and NEXIS platforms). The moment the recordist hits the Stop button, he or she can open the files on a computer and bring the newly created clips into a nonlinear editing application to assess their viability. This method eliminates the intermediate steps of memory cards, transfer stations, and shuttle drives in favor of writing directly to external storage, removing both the time and the risk associated with manual offloading. It also offers instant peace of mind to both the person handling the media and the production as a whole that the work done throughout the day is, in fact, intact and ready for post-production.
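Some back-of-the-envelope math shows why the switch and cabling matter. The sketch below assumes a 10-bit 4:2:2 1080p59.94 feed, counting active picture only and ignoring SDI blanking; the figures are chosen purely for illustration.

```python
# Approximate payload of one uncompressed 10-bit 4:2:2 1080p59.94 camera feed.
width, height, fps = 1920, 1080, 59.94
bits_per_pixel = 20  # 4:2:2 averages two 10-bit samples per pixel

gbps = width * height * fps * bits_per_pixel / 1e9
print(f"Per camera:   {gbps:.2f} Gb/s")      # ~2.49 Gb/s
print(f"Four cameras: {4 * gbps:.2f} Gb/s")  # ~9.94 Gb/s
```

In other words, even a modest multi-camera setup pushes a 10 Gb Ethernet link toward saturation before compression, which is why these systems are built on high-bandwidth network fabric rather than ordinary office gear.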
And this is only the most basic of network-based workflows.
By utilizing advanced encoder systems, such as the aforementioned mRes platform, multiple tiers of files can be distributed across multiple pieces of network-attached storage. This gives the recordist the ability to simultaneously create both high-quality and proxy-grade video files and to make multiple copies of each in real time as a scene is being shot. This eliminates the potential need for time-consuming transcodes after the fact. More importantly, this instant redundancy removes the key period of danger in which only a single, fragile copy of the production’s work exists. As a result, recordists can unmount network drives mere minutes after a production wraps and turn them over for delivery to post with one hundred percent certainty that multiple functioning copies of the day’s work exist. There is no need to spend several hours after wrap each day offloading cards and making backups.
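As a rough illustration of what simultaneous tiers cost in storage, the sketch below uses Apple’s published target bitrates for 1080p29.97 ProRes 422 HQ (about 220 Mb/s) and ProRes 422 Proxy (about 45 Mb/s); the two-copies-of-each scenario is a hypothetical example, not a prescribed configuration.

```python
# Storage per camera-hour for a two-tier, two-copy ProRes recording.
def gb_per_hour(mbps: float) -> float:
    """Convert a bitrate in megabits per second to gigabytes per hour."""
    return mbps * 3600 / 8 / 1000

hq, proxy = 220, 45  # Mb/s: ProRes 422 HQ and 422 Proxy at 1080p29.97
print(f"HQ tier:    {gb_per_hour(hq):.0f} GB/hr")     # ~99 GB
print(f"Proxy tier: {gb_per_hour(proxy):.0f} GB/hr")  # ~20 GB
print(f"Two copies of each tier: "
      f"{2 * (gb_per_hour(hq) + gb_per_hour(proxy)):.0f} GB/hr")  # ~238 GB
```

Even doubled up like this, a multi-terabyte network-attached storage unit absorbs a full shoot day with room to spare.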
Or, to take things a step further, productions can take advantage of the inherent beauty that is the internet to skip the shuttle process altogether. Files can be created in a manner that sends them directly to a post-production edit bay. With low-bitrate files or a high-capacity upload pipeline, recordists can set up their workstations with transfer clients (such as Signiant Agent or FileCatalyst) that watch a particular folder on the network-attached storage and automatically upload new files to a cloud-based server, where post-production teams can download them for use. This process has the distinct advantage of sending editors new files throughout the day in order to accommodate a tight turnaround.
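The underlying pattern here is a watch folder. The sketch below is a deliberately simplified, hypothetical version that polls for finished files and stands in a local copy for the real upload call; Signiant and FileCatalyst use their own accelerated-transfer mechanisms rather than anything this naive.

```python
import shutil
import time
from pathlib import Path

WATCH_DIR = Path("/mnt/nas/outgoing")    # hypothetical folder the encoder writes into
CLOUD_DIR = Path("/mnt/cloud/incoming")  # hypothetical mounted cloud destination

def is_finished(clip: Path, settle_seconds: float = 5.0) -> bool:
    """Treat a clip as finished once its size stops growing,
    i.e., the encoder has closed the file."""
    size = clip.stat().st_size
    time.sleep(settle_seconds)
    return clip.stat().st_size == size

uploaded = set()  # clips already handed off
while True:
    for clip in WATCH_DIR.glob("*.mov"):
        if clip not in uploaded and is_finished(clip):
            shutil.copy2(clip, CLOUD_DIR / clip.name)  # stand-in for the real upload
            uploaded.add(clip)
    time.sleep(10)  # poll every ten seconds
```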
Conversely, for productions where the post-production team is located on site, a hard line can be run from the recording network directly to the edit bays. By assigning the post team’s ISIS server (or a comparable network-attached server) as a recording destination, editors gain access to files while they are still being recorded. In cases such as this, the production may opt to use “growing” Avid DNxHD files. This format takes advantage of Avid’s Advanced Authoring Format to routinely “close” and “reopen” files, allowing editors to work with them while they are still being written. For productions with incredibly tight turnarounds, this is the single fastest production-to-post-production workflow possible.
All of this makes server-based recording an incredibly versatile tool. However, it is not without its limitations. At this time, network-based encoders are limited to widely available intermediate or delivery codecs, such as Apple ProRes or Avid DNxHD. Without direct support from the companies that own proprietary formats, they cannot output camera-native formats such as REDCODE or ARRIRAW. Furthermore, setting up a network of this nature requires persistent power and space. It is also worth considering that, like most new technologies, server-based recording often comes with a hefty price tag. These limitations make the process unsuited for productions hoping to take advantage of the full capabilities of RED and ARRI cameras, productions in remote or isolated locations, and low-budget productions.
So when is it most appropriate or necessary to take advantage of this emerging technology? While it can be of use in a single-camera environment, this method of recording truly shines in live or (archaically termed) “live to tape” multi-cam environments, where anywhere from three to several dozen cameras are in use. After all, if a show records twelve cameras for one hour, the media manager suddenly has to juggle twelve hours’ worth of content. It is much easier to write all twelve streams to a network-attached storage unit than to offload twelve cards one by one. Also, because network-attached storage can be configured to hold hundreds of terabytes, the process is ideally suited for live events or sports broadcasts, where stopping and starting the record risks missing key one-time-only moments. But above all, it is best used when time is critical. The ability to bring files into a nonlinear editing system as they are being recorded and work in real time is a game changer for media managers, producers, and editors alike.
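To put that twelve-camera example in numbers, here is a hedged back-of-the-envelope comparison; the DNxHD 145 bitrate is a real published figure, but the single-reader offload speed is an illustrative assumption.

```python
# Twelve cameras recording one hour each as DNxHD 145 (~145 Mb/s per stream).
cameras, hours, mbps = 12, 1, 145
total_gb = cameras * hours * mbps * 3600 / 8 / 1000
print(f"Total content: {total_gb:.0f} GB")  # ~783 GB

reader_mb_per_s = 400  # assumed throughput of a single card reader
offload_min = total_gb * 1000 / reader_mb_per_s / 60
print(f"Sequential offload: ~{offload_min:.0f} min")  # ~33 min, before verification
```

Server-based recording makes that offload window disappear entirely, because every copy was written while the cameras were rolling.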
This technology is already revolutionizing the way television productions approach on-set media capture, and it is still in its infancy. It will continue to grow and evolve. Given time, it is my sincere hope that it will find its way into the feature film market and become practical for smaller productions to adopt. For the time being, Local 695 Video Engineers should take note of what is available and familiarize themselves with the technology so that they are prepared to take advantage of it in the future.