Infrastructure Advances Will Impact CDN Architectures in 2 Years


Over the last sixteen months, advances in server hardware and network infrastructure have helped CDNs deliver better performance to their customers at lower price points. However, while these advances are a tremendous gift to providers and their customers, there is a cost to folding new technologies into existing infrastructure. That cost is what we refer to as the “ugly side of the business.” It isn’t only an economic cost; there is also an engineering resource allocation cost, because updating or replacing obsolete technology is a very expensive proposition.

100Gbps Internet Ports

Let’s start with transit. Currently, if a company requires 100Gbps of egress capacity per PoP, the way it’s accomplished is simple: buy ten 10Gbps Internet ports, billed on a burstable 95th-percentile basis, which gets us to 100Gbps of egress capacity. The rule of thumb is that CDNs must keep traffic at 25% to 30% of total capacity; if utilization climbs above that, more transit must be purchased.
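As a quick back-of-envelope, the port count falls straight out of that utilization rule. This is a minimal sketch, assuming the 10Gbps port size and 30% ceiling quoted above; the function name and defaults are just illustrative.

```python
import math

def ports_needed(peak_traffic_gbps, port_size_gbps=10, utilization_ceiling=0.30):
    """Transit to buy so peak traffic stays at or below the utilization ceiling,
    expressed as a count of ports. Figures come from the text; the function
    itself is just back-of-envelope math."""
    required_capacity = peak_traffic_gbps / utilization_ceiling
    return math.ceil(required_capacity / port_size_gbps)

# ~30Gbps of peak traffic at a 30% ceiling -> 100Gbps of capacity -> 10 x 10Gbps ports
print(ports_needed(30))  # 10
```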

Ten circuits require ten ports for termination. What happens when a single 100Gbps Internet port becomes the norm in a couple of years? One 100Gbps circuit will replace ten 10Gbps circuits. Talk about efficiency. However, deploying one circuit isn’t best practice; two are needed for redundancy. And what happens when a CDN uses four different carriers? Things get hairy fast. There is a cost to having so much bandwidth available, and current network designs will have to change for the CDN as a whole.
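Here is a hedged sketch of how the circuit count changes when the port size jumps to 100Gbps. The four-carrier split and the one-spare-per-carrier redundancy policy are assumptions for illustration, not any particular CDN’s design.

```python
import math

def circuits_per_pop(capacity_gbps, port_size_gbps, carriers=4, spares_per_carrier=1):
    """Split the required capacity evenly across carriers, then add a spare
    circuit per carrier for redundancy. The redundancy policy is an assumption."""
    per_carrier = capacity_gbps / carriers
    working = math.ceil(per_carrier / port_size_gbps)
    return carriers * (working + spares_per_carrier)

# 100Gbps of required capacity per PoP, spread across four carriers
print(circuits_per_pop(100, 10))    # 16 circuits -> 160Gbps purchased
print(circuits_per_pop(100, 100))   # 8 circuits  -> 800Gbps purchased
```

Fewer circuits to terminate, but the minimum buy per carrier jumps to 100Gbps, so carrier diversity plus redundancy means purchasing far more capacity than the traffic actually requires. That is part of the cost hidden inside all that available bandwidth.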

Next-Generation Hardware

In terms of hardware, individual servers today support 512GB to 768GB of RAM. By the end of next year, servers will start supporting 1TB of RAM, and soon after that, multiple terabytes. With multiple terabytes of RAM, a new set of problems starts cropping up, especially in the latency department. Will caching platforms need to be re-architected to take advantage of this newfound capacity? Maybe.
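One way to get a feel for the strain: the number of cached objects, and the metadata the cache has to track and walk, grows right along with the RAM. The average object size and per-object overhead below are purely illustrative assumptions.

```python
def cache_scale(ram_gb, avg_object_kb=500, metadata_bytes=200):
    """Rough feel for how cached-object counts and cache metadata grow with RAM.
    Average object size and per-object overhead are illustrative assumptions."""
    objects = (ram_gb * 1024**3) // (avg_object_kb * 1024)
    metadata_gb = objects * metadata_bytes / 1024**3
    return objects, metadata_gb

for ram_gb in (512, 1024, 4096):  # roughly: today, next year, multi-TB
    objects, metadata_gb = cache_scale(ram_gb)
    print(f"{ram_gb}GB RAM -> ~{objects/1e6:.1f}M objects, ~{metadata_gb:.2f}GB metadata")
```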

Next, CDNs are currently using servers with network interface cards (NICs) supporting 10Gbps or 40Gbps per port. In the next couple of years, NICs with 100Gbps ports will be the standard. That means a server with a dual-port NIC will be able to push 200Gbps, an order of magnitude more throughput than what’s being used today. A CDN engineer told me that at this level, the kernel will need to be re-architected in order to support that much throughput.
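To see why that worries kernel engineers, consider the packet rates a dual-port 100Gbps NIC implies at line rate. This is a rough sketch; it ignores Ethernet framing overhead, and the frame sizes are just common reference points.

```python
def packets_per_second(link_gbps, frame_bytes):
    """Approximate line-rate packet rate; ignores preamble and inter-frame gap,
    so real-world figures are somewhat lower."""
    return link_gbps * 1e9 / (frame_bytes * 8)

for frame_bytes in (1500, 64):  # full-size vs. minimum-size frames
    pps = packets_per_second(200, frame_bytes)  # dual-port 100Gbps NIC
    print(f"{frame_bytes}-byte frames: ~{pps/1e6:.0f}M packets per second")
```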

Now let’s combine 100Gbps Internet circuits, 1TB of RAM per server, and 200Gbps of NIC throughput per server, and we get millions of dollars in resource cost to upgrade the entire CDN infrastructure. If a CDN has 300 servers across 30 PoPs, an engineer needs to power down each rack, unscrew each server, pull the old RAM, pop in the new modules, power the server back up, and make sure it is running smoothly. Multiply that across 30 PoPs and we get a feel for the work required to implement advanced technology. That’s why we call it the “ugly side of the business”: even writing about it gives one a headache.
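To put a number on that headache, here is a hedged back-of-envelope. The per-server and per-PoP time figures are made-up assumptions, included only to show how quickly the hours add up at fleet scale.

```python
def fleet_upgrade_hours(servers, pops, minutes_per_server=45, pop_overhead_hours=4):
    """Back-of-envelope labor estimate for a fleet-wide hardware refresh.
    The per-server and per-PoP figures are illustrative assumptions only."""
    return servers * minutes_per_server / 60 + pops * pop_overhead_hours

# 300 servers spread across 30 PoPs
print(f"~{fleet_upgrade_hours(300, 30):.0f} engineer-hours, before travel, shipping, and re-testing")
```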
