How NY Times Leverages In-House CMS, AWS, and Aspera to Deliver Content

The New York Times is leveraging AWS, its in-house CMS, and Aspera for its Media Factory video encoding pipeline. Content providers have seen a massive shift toward video in recent years, and the NYT is no exception. Its in-house hardware is no longer enough to handle the bandwidth-heavy video content it is publishing with increasing frequency, including 360-degree and virtual reality video. As such, the Grey Lady has opted to switch to a video publishing platform that provides capacity, flexibility, and scalability.

The solution was built to ingest, encode, publish, and syndicate the NYT’s video content library in a vendor-agnostic, cloud-based manner. A team, dubbed the Media Factory, was assembled with the goal of leveraging a microservices architecture and the Go programming language.

The team has released an initial version of the Media Factory encoding pipeline, which is being beta tested and integrated into the NYT’s publishing system.

The pipeline comprises three parts:

  1. Acquisition: The edited and finalized high-resolution videos (usually in ProRes 422 format) are uploaded to an AWS S3 bucket for transcoding. To do this, NYT leverages video-acquisition-api, an internal API that supports multipart uploads and is used by server-side clients. It also uses a JavaScript wrapper, built on EvaporateJS, that is integrated with NYT’s internal Scoop CMS. (A minimal upload sketch follows this list.)
  2. Transcoding: NYT then uses video-transcoding-api to create multiple outputs from the source file. Typically it creates an HTTP Live Streaming (HLS) output with six resolution/bitrate variants for adaptive streaming, four H.264/MP4 outputs, and a VP8/WebM output for Firefox users on Windows XP.
    To work with cloud-based transcoders, the NYT has designed a provider-specific wrapper that lets it flexibly trigger jobs based on a given set of parameters (e.g., speed, reliability, current availability, and cost). A job-submission sketch also follows this list.
  3. Distribution: The final renditions are moved to another AWS S3 bucket, from which they are transferred to the CDN for publishing and video delivery. That final step is accomplished via Aspera’s FASP protocol (see the transfer sketch after this list).
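
As an illustration of the acquisition step, here is a minimal sketch of a multipart upload to S3 using the AWS SDK for Go. It stands in for what video-acquisition-api wraps rather than its actual implementation; the region, bucket, and file names are hypothetical.

```go
package main

import (
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{
		Region: aws.String("us-east-1"), // hypothetical region
	}))

	// s3manager.Uploader transparently splits large files into concurrent
	// multipart uploads, which is what makes pushing multi-gigabyte ProRes
	// masters to S3 practical.
	uploader := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
		u.PartSize = 64 * 1024 * 1024 // 64 MB parts
		u.Concurrency = 5             // parallel part uploads
	})

	f, err := os.Open("finished-story.mov") // hypothetical source file
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	out, err := uploader.Upload(&s3manager.UploadInput{
		Bucket: aws.String("nyt-video-source"), // hypothetical ingest bucket
		Key:    aws.String("2017/finished-story.mov"),
		Body:   f,
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("uploaded to", out.Location)
}
```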
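Submitting a job to video-transcoding-api might then look like the sketch below. The endpoint, JSON field names, provider value, and preset names are illustrative assumptions rather than the project’s documented schema; the open-sourced repository is the authoritative reference.

```go
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
)

// output pairs a named encoding preset with a destination file name.
type output struct {
	Preset   string `json:"preset"`
	FileName string `json:"fileName"`
}

// jobRequest describes one transcoding job: a source file plus the set of
// renditions to produce. Field names here are assumptions for illustration.
type jobRequest struct {
	Provider string   `json:"provider"` // chosen by the wrapper based on speed, cost, availability
	Source   string   `json:"source"`
	Outputs  []output `json:"outputs"`
}

func main() {
	job := jobRequest{
		Provider: "encodingcom", // hypothetical provider choice
		Source:   "s3://nyt-video-source/2017/finished-story.mov",
		Outputs: []output{
			{Preset: "hls-1080p", FileName: "hls/1080p/video.m3u8"},
			{Preset: "mp4-720p", FileName: "mp4/video-720p.mp4"},
			{Preset: "webm-360p", FileName: "webm/video-360p.webm"},
		},
	}

	body, err := json.Marshal(job)
	if err != nil {
		log.Fatal(err)
	}
	// Hypothetical internal endpoint for the transcoding API.
	resp, err := http.Post("http://transcoding-api.internal/jobs",
		"application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("job accepted:", resp.Status)
}
```

Routing the provider choice through a wrapper keeps callers vendor agnostic: switching transcoding providers becomes a change to one field rather than to the pipeline.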
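For the final hop, a distribution step could shell out to Aspera’s ascp command-line client, which moves files over the UDP-based FASP protocol and so avoids the throughput limits of TCP copies over long, high-latency links. This is a hedged sketch: the key path, transfer rate, and CDN endpoint are hypothetical.

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// ascp is Aspera's FASP transfer client; -i points at the SSH identity
	// for the transfer user and -l caps the target transfer rate.
	cmd := exec.Command("ascp",
		"-i", "/etc/aspera/cdn-key",    // hypothetical identity file
		"-l", "500M",                   // hypothetical target rate
		"renditions/",                  // renditions pulled from the output S3 bucket
		"xfer@cdn.example.com:/video/", // hypothetical CDN ingest endpoint
	)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("ascp failed: %v\n%s", err, out)
	}
	log.Println("renditions delivered to CDN")
}
```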

NYT has open-sourced the video-transcoding-api, its video encoding presets, and the encoding-wrapper it uses. Going forward, it plans to run a fully open-sourced video encoding and distribution pipeline. To that end, it is developing an open source video encoder, dubbed Snickers, which will give NYT the leeway to deploy its own encoding service and tinker with it to better serve the newsroom, with features like automatic thumbnail generation and accurate audio transcripts. It also has plans for fragmented MP4 and an HLS-first approach to on-demand video, which ought to be helped by the fact that fMP4 has been incorporated into the HLS specification. Finally, NYT plans to adopt content-driven encoding in two phases: 1) it has created four broad classes of content, each with its own preset; 2) Media Factory has integrated a microservice that uses VMAF to check output quality and triggers new re-encoding jobs with optimized presets (a hedged sketch of such a check follows).
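
To make the second phase concrete, the quality check could be approximated as below, assuming an ffmpeg build with libvmaf enabled. The 85-point floor, file names, and score-parsing regex are assumptions about how such a microservice might behave, not Media Factory internals.

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
	"regexp"
	"strconv"
)

const vmafFloor = 85.0 // hypothetical acceptable-quality threshold

// vmafScore compares an encoded rendition against its mezzanine source using
// ffmpeg's libvmaf filter and parses the aggregate score from the log output.
func vmafScore(rendition, reference string) (float64, error) {
	out, err := exec.Command("ffmpeg",
		"-i", rendition, "-i", reference,
		"-lavfi", "libvmaf", "-f", "null", "-",
	).CombinedOutput()
	if err != nil {
		return 0, fmt.Errorf("ffmpeg: %v\n%s", err, out)
	}
	m := regexp.MustCompile(`VMAF score[:=]\s*([0-9.]+)`).FindSubmatch(out)
	if m == nil {
		return 0, fmt.Errorf("no VMAF score found in ffmpeg output")
	}
	return strconv.ParseFloat(string(m[1]), 64)
}

func main() {
	score, err := vmafScore("rendition-720p.mp4", "finished-story.mov")
	if err != nil {
		log.Fatal(err)
	}
	if score < vmafFloor {
		// In phase two, a score below the floor would trigger a new
		// transcoding job with an optimized preset for that content class.
		log.Printf("VMAF %.1f below %.1f: submit re-encode with optimized preset", score, vmafFloor)
		return
	}
	log.Printf("VMAF %.1f: rendition accepted", score)
}
```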
