Deep Dive Into The Limelight Networks Technology Stack

Over the last year, Limelight Networks has been busy increasing efficiency and growing its global network. It increased server capacity by 50% and network capacity by 39%, opened 12 new Points of Presence (PoPs), including three in India alone, strengthened its edge security suite (WAF + DDoS protection), improved its delivery efficiency (Mbps per watt) by at least 600%, and increased network capacity per server by several hundred percent, moving closer to Netflix’s ability to serve 100 Gbps from a single server.

And that doesn’t include the coolest feature under the Limelight umbrella: the recently deployed machine learning capacity management system, which can optimize delivery weeks in advance. This unique application of machine learning has unlocked a remarkable amount of optimization, improving both the quality and the efficiency of delivery across the entire platform.

However, from a competitive standpoint, that doesn’t mean that others such as Akamai can’t do the same. This raises the question: is there anything Limelight has or does that others, including Akamai, can’t match at this point? Before we answer, let’s state the obvious. Akamai is a great company, and its business model transformation over the last two years is nothing short of groundbreaking. Within two years, its security revenue is likely to surpass its media delivery revenue. Overall, Akamai has made the right acquisitions and developed the right products to stay ahead of the curve. However, one area in which Limelight Networks clearly sets itself apart is its private network. In fact, there is no other pure-play CDN on the planet with a private network. 2018 is going to be the year of the private network, which trumps the BGP overlay network, at least for the likes of Google, Facebook, and the other big players.

Currently, Akamai (like a few other CDNs) has developed a software technology called a BGP overlay network. Akamai’s overlay network is called SureRoute. SureRoute complements traditional BGP routing by giving Akamai more control over the traffic that travels from an edge server to an origin. BGP is the routing protocol that connects different networks together. The biggest problem with a BGP overlay network is that when content is not in cache at the edge, the request must still travel over the public network. Akamai’s highly distributed servers result in a reduced edge cache hit ratio, so the ability to optimize this traffic is very important for the company. In fact, as Akamai CEO Tom Leighton discussed at this year’s Edge conference, the bottleneck is “at the core” of the internet, not the last mile.
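
To make the stakes concrete, here is a back-of-the-envelope sketch (the numbers and the `origin_traffic_gbps` helper are purely hypothetical): the lower the edge cache hit ratio, the more traffic is exposed to the congested core.

```python
# Hypothetical numbers: how the edge cache hit ratio determines how much
# traffic must leave the edge and cross the public internet core.

def origin_traffic_gbps(edge_traffic_gbps: float, cache_hit_ratio: float) -> float:
    """Traffic that misses the edge cache and must travel back toward the origin."""
    return edge_traffic_gbps * (1.0 - cache_hit_ratio)

edge_traffic = 1000.0  # Gbps served at the edge (hypothetical)

# A highly distributed CDN spreads the same library across many small PoPs,
# so each PoP caches a smaller slice of it and the hit ratio suffers.
for label, hit_ratio in [("dense PoPs", 0.98), ("highly distributed PoPs", 0.90)]:
    miss = origin_traffic_gbps(edge_traffic, hit_ratio)
    print(f"{label}: {miss:.0f} Gbps of cache-fill traffic crosses the core")

# dense PoPs: 20 Gbps; highly distributed PoPs: 100 Gbps -- five times more
# traffic exposed to the public core, which is why overlay routing matters.
```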

With a private network, things work differently. In the same scenario, when the requested content is not in cache, the request travels over a private network rather than the public one, and the internet “core” that Akamai describes is bypassed altogether. With a private network, the service provider has complete control. Thus, private networks trump BGP overlay networks. In Limelight Networks’ case, its private network in many places even extends to leased dark fiber and company-operated DWDM hardware.

Thus, there are two schools of thought when it comes to private versus public networks, and each camp believes its network type is best. However, we believe that private networks, meaning the network between the origin and the CDN edge, will become more prevalent, especially since the internet giants have already made the move to private networking, as evidenced by Google Espresso, Facebook Open/R, Aryaka Networks, and Dropbox.

Limelight Architecture

We recently conducted an in-depth interview with Jason Hofmann, VP of Architecture at Limelight, and discussed many of the areas in which the CDN is working, including the merits of its private backbone for content delivery and its new approach to load balancing, which employs machine learning.

Jason noted that Limelight was founded in 2001 by early dial-up ISP pioneers with tremendous networking expertise, and he credits them with an immense degree of foresight in deciding to build a global private backbone early on. While others were building CDNs for images on websites, the founders built a CDN for use cases that didn’t yet exist: the immense consumption of live and on-demand video, and massive software downloads such as games and device updates, delivered to any device.

Limelight’s Private Backbone

From its founding, Limelight built dense PoPs with high compute and storage capacity, which it named Super PoPs, allowing it to “bring the carrier to the servers” rather than bring the servers to the carrier. It then interconnected those PoPs with its own private fiber backbone, something not generally possible when locating servers within a carrier’s network. This approach also gave the company more control over the hardware: it could deploy a denser architecture with significantly more cache, control its own IP space, and refresh the hardware more often, a major reason why, unlike Akamai, Limelight now runs a nearly 100% SSD-based CDN.

Over the past sixteen years, Limelight has built a large backbone that interconnects all of its PoPs and even wraps the globe bi-directionally, so it does not have to route traffic between Europe and Asia through the U.S. Some other CDNs are only beginning to build backbones; their “backbones” may not connect continents at all, or may situate North America at the center of the world, whereas Limelight’s circumnavigates the world east and west for redundancy.
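
As a rough illustration of why a bi-directional ring matters (a toy topology with hypothetical PoP names, not Limelight’s actual map), a single fiber cut on such a backbone still leaves a path in the opposite direction:

```python
# Toy sketch: on a backbone that rings the globe, every pair of PoPs has
# two disjoint paths -- eastbound and westbound -- so a single fiber cut
# never isolates a region.
from collections import deque

ring = ["LAX", "NYC", "LON", "FRA", "SIN", "HKG"]  # hypothetical PoPs

def build_links(ring, cut=None):
    """Bidirectional adjacent links on the ring, optionally with one cut."""
    links = {(ring[i], ring[(i + 1) % len(ring)]) for i in range(len(ring))}
    links |= {(b, a) for a, b in links}
    if cut:
        links -= {cut, cut[::-1]}
    return links

def path(src, dst, links):
    """Breadth-first search; returns the shortest hop list or None."""
    queue, seen = deque([[src]]), {src}
    while queue:
        p = queue.popleft()
        if p[-1] == dst:
            return p
        for a, b in links:
            if a == p[-1] and b not in seen:
                seen.add(b)
                queue.append(p + [b])
    return None

print(path("LON", "SIN", build_links(ring)))                  # eastbound: LON -> FRA -> SIN
print(path("LON", "SIN", build_links(ring, ("FRA", "SIN"))))  # cut forces the westbound path
```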

Limelight’s dense Super PoPs are interconnected by private fiber, allowing critical cache fill traffic to bypass the congested public internet. If you want to make cache miss/origin traffic reliable, the key is to have your own backbone, because problems with cache miss traffic are amplified: they affect everyone in a shared geographic area.

On the public internet, BGP effectively gives you a single route to deliver your “package.” Algorithms such as Akamai’s SureRoute periodically request test content via different Akamai servers, then let BGP take over from there to decide the rest of the route. In effect, the algorithm asks: if I take a detour to another location first, does the public internet route from that point onward get me the content faster? With a private network, however, you make your own routing decisions, and you can effectively build a new “road” if you don’t like the current route from point A to point B, or if the existing “road” is reaching capacity.
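
In code, the overlay idea described above might look something like the following sketch (a generic illustration, not Akamai’s actual SureRoute implementation; the relay URLs and `/fetch` endpoint are hypothetical):

```python
# Generic overlay-routing sketch: periodically race the direct BGP path
# against detours through intermediate relays, and remember the fastest
# route to the origin. Not Akamai's real implementation.
import time
import urllib.request

RELAYS = ["https://relay-1.example.net", "https://relay-2.example.net"]  # hypothetical

def measure(url: str, timeout: float = 2.0) -> float:
    """Fetch a small test object and return elapsed seconds (inf on failure)."""
    start = time.monotonic()
    try:
        urllib.request.urlopen(url, timeout=timeout).read()
        return time.monotonic() - start
    except OSError:
        return float("inf")

def best_route(origin_test_url: str) -> str:
    # Direct path: whatever single route BGP has already selected for us.
    candidates = {"direct": measure(origin_test_url)}
    # Detour paths: a relay forwards the request on, and BGP decides the rest.
    for relay in RELAYS:
        candidates[relay] = measure(f"{relay}/fetch?u={origin_test_url}")
    return min(candidates, key=candidates.get)
```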

According to Hofmann, “If you build and manage a global backbone well, you don’t need elaborate algorithms to get your content where you want it. Other CDNs have written algorithms to compensate for the unreliability of the public Internet, but they can’t allow you to achieve the best latency and availability simultaneously and at all times. There is an inherent trade-off. With a true private network, Limelight gives you the best availability and latency all the time and at the same time – it’s not an either/or situation.” Hofmann adds, “As content has become larger and as bitrates continue to increase, such as with 4K video, Virtual Reality, and 4K video games, our dense Super PoP architecture with a wrap-the-globe private backbone has proven itself clearly superior to a highly distributed low-density architecture that relies on the congested, unreliable, ‘SLA-free’ public internet.”

Limelight and Machine Learning in the Caching Space

The machine learning capacity management system was created by the architecture team at Limelight and grew out of an internal initiative to simultaneously improve quality of delivery (e.g., achieving industry-leading video rebuffer rates) and increase efficiency.

According to Hofmann, caching servers are not like other kinds of servers, where any server can take over another’s workload without significant penalty if one is overwhelmed. Why not? In a cluster of caching servers, the moment you shift a request from one server to another, you turn cache hit traffic into guaranteed cache miss traffic.

As soon as you drain traffic from server A to server B, server B, which has never seen that content before, needs to cache the content from scratch. If you take the traditional approach of most providers, according to Hofmann, and simply shift traffic from one server to the next, you cause an increase in traffic to the origin, which can cause performance problems. By constantly making new servers “temporarily” responsible for content, you hurt your cache hit ratio, and with it both quality of delivery and efficiency.
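
To make the penalty concrete, here is a toy simulation (hypothetical server names and a deliberately naive modulo placement; this is not Limelight’s load balancer): once a server is drained, every object that gets reassigned is a guaranteed miss until the receiving servers warm up.

```python
# Toy simulation of the problem Hofmann describes: caching servers are
# not interchangeable, because each object has a "home" server.
import hashlib

SERVERS = ["cache-a", "cache-b", "cache-c"]
contents = {s: set() for s in SERVERS}

def pick_server(url: str, available) -> str:
    # Naive modulo placement; real CDNs use consistent hashing, which
    # limits (but cannot eliminate) reshuffling when a server is drained.
    digest = int(hashlib.sha1(url.encode()).hexdigest(), 16)
    return available[digest % len(available)]

def serve(url: str, drained=()) -> bool:
    """Return True on a cache hit; on a miss, fill the cache from origin."""
    available = [s for s in SERVERS if s not in drained]
    server = pick_server(url, available)
    hit = url in contents[server]
    contents[server].add(url)  # a miss pulls the object from the origin
    return hit

urls = [f"/video/{i}.ts" for i in range(1000)]
for u in urls:
    serve(u)                                   # warm each object onto its home server
steady = sum(serve(u) for u in urls)           # stable assignment: every request hits
shifted = sum(serve(u, drained=("cache-a",)) for u in urls)
print(f"hits with stable assignment: {steady}/1000")
print(f"hits right after draining cache-a: {shifted}/1000")  # reassigned objects all miss
```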

Limelight uses machine learning to “see into the future.” Its machine learning capacity management system is a forward-looking algorithm for segmenting a customer’s library.
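
Limelight has not published the details, but the “forward-looking” idea can be sketched in miniature: fit a trend to each library segment’s historical demand and provision capacity ahead of the predicted peak (all numbers and segment names below are hypothetical).

```python
# Toy sketch only -- Limelight's actual system is proprietary. The idea of
# a forward-looking planner: fit a trend to each library segment's past
# demand and provision capacity ahead of the predicted peak, instead of
# reacting (and reshuffling caches) after a hot spot appears.

def forecast(history, weeks_ahead: int) -> float:
    """Least-squares linear trend over weekly demand samples (Gbps)."""
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    slope = (
        sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
        / sum((x - x_mean) ** 2 for x in xs)
    )
    return y_mean + slope * (n - 1 + weeks_ahead - x_mean)

# Hypothetical weekly peak demand per library segment, in Gbps.
segments = {
    "new-releases": [40, 55, 72, 90],   # growing fast: pre-provision now
    "back-catalog": [20, 21, 19, 20],   # flat: steady-state capacity is fine
}
for name, history in segments.items():
    print(f"{name}: ~{forecast(history, weeks_ahead=2):.0f} Gbps expected in 2 weeks")
```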

No other CDNs are currently operating in this way.

This system has enabled Limelight to unlock capacity, and service problems caused by “hot spots” have been virtually eliminated. As a result, its most performance-sensitive customers have begun to give Limelight more business.

Conclusion

In summary, a dedicated private network, and a CDN architecture designed to leverage it for cache fill traffic, is a game changer for the CDN business model and the key differentiator between Limelight and the competition. Rather than developing complex software to attempt to route around the pervasive problems “at the core” of the internet, Limelight avoids those problems altogether with its mature global private backbone, and instead has been able to focus its R&D efforts on groundbreaking innovations like its machine learning capacity management system.

Additionally, a dedicated private backbone is not just better than a BGP overlay network; it also acts as a springboard for creating new products down the line, such as clientless VPN, network security, SD-WAN, and much more, should Limelight Networks wish to move in those directions.
