
Comcast’s Internal CDN

Comcast started building an internal CDN in January 2012, and by November 2012 the CDN was fully deployed. It took five engineers eleven months to build Comcast's internal CDN from scratch, using open source software on commodity hardware. The Comcast team evaluated Apache Traffic Server (ATS), Nginx, Varnish and Squid, and after extensive testing decided on ATS.

During the testing phase, Squid had performance issues, Nginx had compliance issues, and Varnish had trouble with long-tail content. Apache Traffic Server met all the requirements for delivering VOD. Traffic Server is a caching proxy server originally developed by Inktomi, which was acquired by Yahoo; Yahoo later open sourced the project. Personally, I found the selection of ATS a little surprising; I thought Comcast would have gone in the same direction as Netflix and built its CDN on Nginx.
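The long-tail problem is worth spelling out: VOD catalogs are huge, most titles are requested rarely, and a fixed-size cache serving that traffic sees its hit ratio fall as popularity flattens out. The sketch below is not from the article and does not reflect any vendor's internals; it is a simple simulation, under assumed catalog and cache sizes, of an LRU cache fed Zipf-distributed requests to illustrate the effect.

```python
# Illustrative sketch only: hit ratio of a fixed-size LRU cache under
# Zipf-distributed (long-tail) request traffic. All sizes are assumptions.
import random
from collections import OrderedDict
from itertools import accumulate

def simulate_hit_ratio(catalog_size=50_000, cache_size=2_500,
                       requests=100_000, zipf_s=0.8, seed=42):
    """Simulate an LRU cache fed Zipf-distributed requests; return the hit ratio."""
    rng = random.Random(seed)
    # Popularity weights: object at rank r gets weight 1 / r**s (precomputed cumulatively).
    cum_weights = list(accumulate(1.0 / (rank ** zipf_s)
                                  for rank in range(1, catalog_size + 1)))
    cache = OrderedDict()  # keys = object ids, insertion order tracks recency
    hits = 0
    for _ in range(requests):
        obj = rng.choices(range(catalog_size), cum_weights=cum_weights, k=1)[0]
        if obj in cache:
            hits += 1
            cache.move_to_end(obj)          # mark as most recently used
        else:
            cache[obj] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)   # evict least recently used
    return hits / requests

if __name__ == "__main__":
    # A flatter popularity curve (smaller s) means more long-tail traffic and fewer hits.
    for s in (1.2, 0.8, 0.4):
        print(f"zipf s={s}: hit ratio ~ {simulate_hit_ratio(zipf_s=s):.2%}")
```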

The Comcast internal CDN system has five major components: a caching platform, a content router, a health protocol server, a configuration manager and a log management system. The CDN seems to be working extremely well. For any large media company that decides to build an internal CDN, there are two case studies to learn from: Comcast and Netflix. The question I have is why this development team is working for Comcast; they should be in Silicon Valley or Silicon Beach creating the next billion-dollar company.
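To make the component list concrete, here is a minimal sketch of 302-based request routing, one common way a content router can steer clients toward healthy caching servers. The host names, health table, and selection logic are illustrative assumptions for this sketch, not details of Comcast's actual content router or health protocol server.

```python
# Minimal sketch of a 302-redirect content router. Node names, health data,
# and the "least loaded healthy node" policy are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical caching servers and their current state, as a health
# protocol server might report it.
CACHE_NODES = {
    "cache-01.example.net": {"healthy": True,  "load": 0.42},
    "cache-02.example.net": {"healthy": True,  "load": 0.18},
    "cache-03.example.net": {"healthy": False, "load": 0.00},  # failed health check
}

def pick_cache():
    """Return the least-loaded healthy cache, or None if none are healthy."""
    healthy = [(meta["load"], name)
               for name, meta in CACHE_NODES.items() if meta["healthy"]]
    return min(healthy)[1] if healthy else None

class ContentRouter(BaseHTTPRequestHandler):
    def do_GET(self):
        node = pick_cache()
        if node is None:
            self.send_error(503, "No healthy caches available")
            return
        # Redirect the client to the chosen caching server for this asset.
        self.send_response(302)
        self.send_header("Location", f"http://{node}{self.path}")
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ContentRouter).serve_forever()
```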

Comcast CDN Design Goals
Caching Server Specs
Comcast CDN Subsystems