Fastly’s CDN SuperRack vs. the CDN SuperPOP

Most CDNs brag about deploying SuperPOPs in their infrastructure. However, there seems to be little consensus about what a SuperPOP really is. Is a SuperPOP a POP with thousands of servers deployed and Tbps of bandwidth capacity? Or does it have more to do with the number of cores, SSDs, and so on? Since there is no agreement across the board about what a SuperPOP is, we’ll discuss the CDN SuperRack instead.

The CDN SuperRack is the data center cabinet that packs the most punch in terms of processing power, SSD storage, and bandwidth throughput. Thanks to our friends at Fastly, we know exactly what a CDN SuperRack looks like, and since Fastly is the only CDN that has published the specs of its SuperRack, by default Fastly is the global CDN SuperRack leader.

It didn’t start off that way. Fastly went through a long and painful process of trial and error, trying out different configurations of servers, storage, switching, and so on. After many iterations, Fastly reached the land of CDN SuperRack Valhalla. It makes one wonder whether the Fastly SuperRack might indeed be the most powerful rack on the planet, outperforming even the mighty Google rack.

Below are the five generations of the Fastly SuperRack, starting with the super-slow Generation 1 rack and ending with the Generation 5 SuperRack.

Fastly’s Evolution of the CDN SuperRack
Gen 1

-2 machines/rack
-Multiple direct ISP uplinks
-20Gbps / 50k req/sec capacity
-Didn’t scale well

Gen 2

-Arista switch
-Keepalived/VRRP (sketched below)
-2 machines
-20Gbps / 50k req/sec capacity
-Too fragile
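
For context on the Keepalived/VRRP bullet above: Keepalived implements VRRP so that the two machines share a floating virtual IP, with the backup taking over the moment the master stops advertising. Here is a minimal, hypothetical Python sketch of that election logic (the node names and priorities are made up, and real VRRP runs over multicast advertisements rather than function calls):

```python
from dataclasses import dataclass

# Hypothetical model of the VRRP idea Keepalived implements: the live
# machine with the highest priority holds the rack's virtual IP, and
# a backup takes over as soon as the master stops advertising.

@dataclass
class Node:
    name: str
    priority: int
    alive: bool = True

def holder_of_vip(nodes: list[Node]) -> Node:
    """The live node with the highest priority owns the virtual IP."""
    return max((n for n in nodes if n.alive), key=lambda n: n.priority)

rack = [Node("cache-1", priority=100), Node("cache-2", priority=90)]
print(holder_of_vip(rack).name)  # cache-1 owns the VIP

rack[0].alive = False            # master stops advertising
print(holder_of_vip(rack).name)  # cache-2 takes over the VIP
```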

Gen 3

-Arista switch
-Keepalived/VRRP
-3 machines
-40Gbps / 75k req/sec capacity
-Better

Gen 4

-3 racks in 3 different Equinix buildings
-Arista switches in each rack
-One ISP plugged into each rack
-Fiber loop between locations/racks
-No machine in any rack oversubscribed
-Didn’t work well because of spanning tree (see the sketch below)
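
Spanning tree was the killer: to prevent forwarding loops, STP blocks every redundant link, so one leg of the three-rack fiber triangle always sat idle. The union-find sketch below is an analogy, not STP itself (real STP elects a root bridge and blocks ports, and the rack names here are illustrative), but the outcome is the same: only two of the three paid-for links forward traffic.

```python
# A loop of three racks gives three inter-building links, but any
# loop-free topology can only keep two of them active. Walk the links
# and "block" any link that would close a loop, as STP effectively does.

links = [("rack-A", "rack-B"), ("rack-B", "rack-C"), ("rack-A", "rack-C")]

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        x = parent[x]
    return x

forwarding, blocked = [], []
for a, b in links:
    ra, rb = find(a), find(b)
    if ra == rb:
        blocked.append((a, b))    # would close a loop: STP blocks it
    else:
        parent[ra] = rb
        forwarding.append((a, b))

print("forwarding:", forwarding)  # two of the three links
print("blocked:   ", blocked)     # one inter-building link carries nothing
```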

Fastly Gen 5 New Rack Design (Per Server)
  • 2x E5-2690 v2s (10 cores × 3GHz, 25MB L3 cache)
  • 4×10Gbit SFP+ Ethernet
  • 768GB of RAM
  • 18TB of SSD per machine
  • Performance per server: cache size of 2+ billion objects, 25Gbps of continuous bandwidth, and 40k req/sec
Gen 5 New Rack Design (Per Rack)
  • 16 of the above servers
  • Four 64x10Gbit Arista switches
  • No oversubscription between servers or upstream (port math sketched after this list)
  • 16 ports per switch for machines
  • 16 ports per switch for upstream
  • Performance per rack: 12TB of RAM, 288TB of SSD, cache size of 16+ billion objects, 400Gbps of bandwidth, and 600k req/sec
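
The no-oversubscription claim can be checked from the port counts alone. This is our arithmetic rather than anything from the presentation, but the numbers line up exactly:

```python
# Port math behind the "no oversubscription" bullet: per rack, the
# switch capacity facing the servers exactly matches the servers'
# aggregate NIC capacity, and the same capacity is provisioned upstream.

SWITCHES = 4              # four Arista switches per rack
PORTS_PER_SIDE = 16       # 16 ports per switch for machines, 16 upstream
PORT_GBPS = 10
SERVERS = 16
NIC_PORTS_PER_SERVER = 4  # 4x10Gbit SFP+ per machine

switch_side = SWITCHES * PORTS_PER_SIDE * PORT_GBPS    # 640 Gbps to machines
nic_side = SERVERS * NIC_PORTS_PER_SERVER * PORT_GBPS  # 640 Gbps of server NICs
upstream = SWITCHES * PORTS_PER_SIDE * PORT_GBPS       # 640 Gbps upstream

print(switch_side, nic_side, upstream)  # 640 640 640 -> 1:1, no oversubscription
```
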
Performance of Two Racks
  • 24.5TB of RAM
  • 576TB of SSD
  • Cache size of 32+ billion objects
  • Continuously pushes 800Gbps of bandwidth (1.2Tbps provisioned)
  • 1.2 million req/sec
  • 128 10Gbps machine ports / 128 10Gbps ISP links
  • Arista switches have terabit backplane capacity (a quick roll-up of these totals follows below)
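
As a quick sanity check (our arithmetic, not Fastly’s), the per-server specs roll up almost exactly to the quoted rack and two-rack totals; the quoted req/sec figures come in a little under the straight multiplication, which reads like deliberate headroom:

```python
# Roll up the Gen 5 per-server specs to rack and two-rack totals.
# Constants are taken from the lists above.

SERVERS_PER_RACK = 16

server = {
    "ram_gb": 768,
    "ssd_tb": 18,
    "bandwidth_gbps": 25,
    "req_per_sec": 40_000,
}

rack = {k: v * SERVERS_PER_RACK for k, v in server.items()}
two_racks = {k: v * 2 for k, v in rack.items()}

print(rack)       # 12,288 GB RAM, 288 TB SSD, 400 Gbps, 640k req/sec
print(two_racks)  # 24,576 GB (~24.5 TB) RAM, 576 TB SSD, 800 Gbps, 1.28M req/sec
```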

For more info, check out Artur’s presentation.

Also, here are some key takeaways from Artur’s presentation on the evolution of their rack design:

  • S3 storage is slow as molasses, taking as much as 4 seconds to retrieve an object when the actual RTT is 50ms
  • OpenFlow doesn’t cut the mustard in the CDN environment
  • Cisco and Juniper chassis do more harm than good