CDN Nugget #4

100% Bare Metal CDN

Google, Facebook, and Amazon are open source machines, using bare metal hardware throughout their infrastructure. Now they have company in the next generation CDNs. The infrastructure of the large CDNs and the start-up CDNs differs slightly: the large CDNs, with the exception of Amazon CloudFront, use some brand-name hardware, whether it's EMC Isilon storage, DDN storage, Cisco, Juniper, or F5. In the start-up CDN environment, most shy away from Cisco, Juniper, and other brand-name products. Fastly uses Arista switches, CloudFlare uses Pluribus Networks, and most use bare metal servers from the likes of Quanta and SuperMicro.

Is the day coming when the CDN infrastructure will be 100% bare metal? It looks that way, especially with the explosion of SDN/NFV. Today, bare metal switches and routers are just starting to appear in the CDN infrastructure, but in a couple of years they will likely dominate. And taking this out further: what Arista Networks is doing to Cisco, the Acctons of the world (Taiwan) might do to Arista.

Future of High Performance Bare Metal

On October 15, 2013, Fastly presented the hardware that goes into each of its racks. The compute, storage, and bandwidth capacity in each rack is unlike anything in CDN history. That is why we called it the SuperRack. The specs of the Fastly SuperRack are as follows:

  • 16 servers – each server has 2x Intel Xeon E5-2690 v2 (10 cores, 3GHz, 25MB L3 cache)
  • Four Arista switches, each with 64x10Gb ports
  • 16 ports per switch for servers and 16 ports per switch for upstream

When you add all this up, you get 12TB of RAM, 288TB of SSD, the ability to cache 16 billion objects, 400Gbps of bandwidth throughput capacity, and support for 600k requests/second (a quick back-of-the-envelope check appears after the list below). The rack is no longer the bottleneck in the CDN infrastructure; the WAN is. These specs were given a year ago, so imagine what the rack looks like today, or what it will look like in 3 years. The bigger question is what kind of hardware will be available in 5 years. One thing is certain: today's hardware in the CDN infrastructure will need to be completely replaced, and that is an expensive proposition. How many CDNs have thought about this? Is it likely that server hardware in five years will look something like this?

  • 3 petabytes of RAM, 100PB of SSD, the ability to cache 1 trillion objects, 5Pbps of bandwidth throughput capacity, and support for 250M requests/second
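To make the SuperRack aggregation concrete, here is a quick sanity check in Python. The per-server RAM and SSD figures are assumptions, inferred by dividing the quoted rack totals by 16 servers; the upstream math simply multiplies out the switch ports listed above.

```python
# Back-of-the-envelope math behind the SuperRack totals. The per-server
# figures below are assumptions inferred from the quoted rack totals;
# Fastly only quoted the rack-level numbers.

SERVERS_PER_RACK = 16
RAM_PER_SERVER_GB = 768          # assumed: 12TB / 16 servers
SSD_PER_SERVER_TB = 18           # assumed: 288TB / 16 servers
SWITCHES = 4
UPSTREAM_PORTS_PER_SWITCH = 16   # from the spec list above
PORT_SPEED_GBPS = 10

ram_tb = SERVERS_PER_RACK * RAM_PER_SERVER_GB / 1024
ssd_tb = SERVERS_PER_RACK * SSD_PER_SERVER_TB
raw_upstream_gbps = SWITCHES * UPSTREAM_PORTS_PER_SWITCH * PORT_SPEED_GBPS

print(f"RAM per rack:  {ram_tb:.0f} TB")           # 12 TB
print(f"SSD per rack:  {ssd_tb} TB")               # 288 TB
print(f"Raw upstream:  {raw_upstream_gbps} Gbps")  # 640 Gbps raw
# The quoted 400Gbps of usable throughput sits comfortably below the
# raw 640Gbps of upstream port capacity.
```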