CDN Architecture: High Performance POP

Although Akamai, Fastly, EdgeCast, and Limelight are bitter rivals that hate to lose to one another, they all share one common enemy: hardware obsolescence. Akamai might be the fastest content delivery network today, but that doesn’t mean it will be the fastest tomorrow. Performance optimization is a never-ending game. Of all the hardware components within the CDN POP, the router is the most stable piece, changing little year after year despite innovation, followed by switches and disk arrays. The servers, however, are another story. Server hardware architecture is in a continuous state of change: CPU caching technology changes, bus architectures improve, core technology improves, memory capacity increases, and so on.

Today, there are servers such as the Dell R920 that support up to 6TB of RAM. Over the next few quarters, server memory capacity will increase, as it always does. How does this impact Akamai, Fastly, CloudFlare, and other CDNs? There might be a day when all CDN content is delivered from RAM, including video. If we pack 30 servers in a rack, with each server supporting 6TB of RAM, that’s 180TB of RAM storage. Content delivery from RAM is going to be orders of magnitude faster than content delivery from SSDs. This presents CDNs with a big upgrade challenge: the bigger the CDN and the more servers it has, the more work it will take to replace them.
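The rack-capacity math above can be sketched as a quick back-of-the-envelope calculation. The figures are the illustrative ones from the text (30 servers per rack, 6TB of RAM each), not vendor specifications:

```python
# Back-of-the-envelope rack RAM capacity, using the figures from the text.
servers_per_rack = 30
ram_per_server_tb = 6  # e.g. a Dell R920-class box maxed out

rack_ram_tb = servers_per_rack * ram_per_server_tb
print(f"RAM per rack: {rack_ram_tb} TB")  # prints "RAM per rack: 180 TB"
```

Real racks would lose some of that budget to power and cooling constraints, but the order of magnitude holds.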

RAM-Based CDNs

Google and Facebook already deliver content from RAM. CDNs are next. Let’s call this new type of CDN the RAM-based CDN, one that delivers all content from RAM. The RAM-based CDN may well make disk spindles obsolete in the CDN environment. How much faster is RAM than SSD? The numbers below speak for themselves.

  • SSD drives – read/write rates of 250MB/sec
  • DDR3-1600 RAM – read/write rates of 12,000MB+/sec
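Using the two rates above, a rough comparison can be sketched as follows. These are sequential transfer rates, so this is a simplification: real CDN workloads are dominated by small random reads, where RAM’s latency advantage over SSD is even larger than the bandwidth ratio suggests.

```python
# Rough read-throughput comparison using the rates listed above.
ssd_mb_s = 250       # SATA-era SSD read/write rate
ddr3_mb_s = 12_000   # DDR3-1600 read/write rate

speedup = ddr3_mb_s / ssd_mb_s
print(f"RAM is ~{speedup:.0f}x faster than SSD")  # prints "RAM is ~48x faster than SSD"

# Time to stream a 1GB object from each tier at these rates:
obj_mb = 1024
print(f"SSD: {obj_mb / ssd_mb_s:.2f}s, RAM: {obj_mb / ddr3_mb_s:.3f}s")
```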