Interview #4 with Per Buer, Founder of Varnish Software

In addition to cultivating his sophisticated look, Per Buer and the Varnish Team continue to grow all facets of the business including its headcount, offices, annual revenue, customer base, and feature set. It is hard to believe that it’s been almost 2 years since our first session with Per Buer. Much has changed during that time, and Varnish has graduated into a top-tier commercial caching platform with an extensive feature set that powers some of the largest websites in the world, including CDN infrastructures. A big thanks to the Varnish Team for their insight.

Can the Varnish of today displace Apache Traffic Server at Comcast and Apple from a features and functionality perspective?

The answer is “it depends”. The open source Varnish Cache isn’t as focused on bigger datasets as ATS, so it requires a bit of tuning to achieve comparable performance. That being said, there are quite a few Varnish-powered CDNs out there – some of them consisting of thousands of servers.

Our commercial offering on the other hand has a version of Varnish which is written specifically for CDN workloads. It is already deployed in a number of regional and global CDNs and has proven to be a great piece of software, giving the networks the flexibility and performance needed. The only piece of functionality that has been missing is the ability to persist content across restarts. We’ve been working on persistence for about a year now and the upcoming release of Varnish Plus (coming in April) has persistence built into it.

Our strength vis-a-vis ATS has of course always been the Varnish Configuration Language (VCL), the policy language Varnish uses. Having VCL in your CDN makes it possible to execute business logic on the edge of your network.
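For a flavour of what that looks like, here is a minimal VCL sketch of edge business logic – routing requests to a regional origin and normalizing URLs before they hit the cache. The backend hostnames and the X-Geo-Region header are hypothetical and assume an upstream geo-lookup layer; they are not part of anything described in the interview.

    vcl 4.0;

    backend origin_eu {
        .host = "eu.origin.example.com";   # hypothetical regional origin
        .port = "80";
    }

    backend origin_us {
        .host = "us.origin.example.com";   # hypothetical regional origin
        .port = "80";
    }

    sub vcl_recv {
        # Route to a regional origin based on a header assumed to be set
        # by an upstream geo-lookup layer.
        if (req.http.X-Geo-Region == "EU") {
            set req.backend_hint = origin_eu;
        } else {
            set req.backend_hint = origin_us;
        }

        # Strip utm_* marketing parameters so they don't fragment the cache.
        set req.url = regsuball(req.url, "(\?|&)utm_[^&]*", "");
        set req.url = regsub(req.url, "^([^?]+)&", "\1?");
    }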

Can Varnish be used as an additional security layer to protect against DDoS attacks, while monitoring client traffic and imposing limits on requests, bandwidth, and connections to prevent abuse?

DDoS happens on many different layers. Varnish is concerned with the application layer, dealing with HTTP requests and responses, and it can be used to detect abusive patterns at that layer. Detection can either be local, using modules like “vsthrottle”, or centralized, by networking the Varnish instances and detecting abuse across them.
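As a rough illustration of the local approach, the sketch below uses the vsthrottle vmod from the varnish-modules collection to rate-limit clients at the HTTP layer. The threshold and window are illustrative values, not recommendations, and the backend is a placeholder.

    vcl 4.0;

    import vsthrottle;

    backend default {
        .host = "127.0.0.1";   # hypothetical origin
        .port = "8080";
    }

    sub vcl_recv {
        # Deny clients that exceed 200 requests per 10 seconds, keyed on
        # client.identity (which defaults to the client IP address).
        if (vsthrottle.is_denied(client.identity, 200, 10s)) {
            return (synth(429, "Too Many Requests"));
        }
    }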

So yes, Varnish can absolutely help shield your origins against HTTP-based attacks. The biggest publicly known attack against a site served by Varnish is probably the one against Wikileaks, whose admins used Varnish to spot patterns in the attacks and deployed VCL to mitigate them.

However, if the attack is directed at the network or the host Varnish is running on, Varnish can’t do much about it, so it can never be your only line of defense.

Is Varnish as strong as or better than Apache Traffic Server when it comes to caching and delivering VOD files?

The most important bit is that you need to have software that you are comfortable with. If you know and like ATS and it does what you need it to do, then I think ATS is what you should use.

We’re more familiar with deploying Varnish Plus in CDNs, but we’re certain that it is at least as strong as ATS for VOD workloads. For VOD workloads you might need things like token access, geo-blocking and other authentication and authorization schemes – something VCL lends itself very well to implementing.
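To make that concrete, here is a minimal, hedged VCL sketch of geo-blocking and token gating for VOD paths. The /vod/ prefix, the X-Country-Code header (assumed to be set by an upstream geo layer or vmod), and the token parameter are all hypothetical; real token validation (HMAC check, expiry) would typically be handled with a digest vmod rather than in plain VCL.

    vcl 4.0;

    backend default {
        .host = "127.0.0.1";   # hypothetical origin
        .port = "8080";
    }

    sub vcl_recv {
        # Geo-blocking: only serve /vod/ content to selected countries.
        if (req.url ~ "^/vod/" && req.http.X-Country-Code !~ "^(NO|SE|DK)$") {
            return (synth(403, "Content not available in your region"));
        }

        # Token gating: require a token query parameter on VOD requests.
        # Actual signature and expiry checks are left to a digest vmod.
        if (req.url ~ "^/vod/" && req.url !~ "[?&]token=") {
            return (synth(401, "Missing access token"));
        }
    }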

How does Varnish API Engine compare against Nginx and Apache Traffic Server?  

The API Engine is built for APIs. Basically, we found that the API Gateway space is crowded with slow products. Some of them are crawling along at a couple of hundred requests per second. A customer of ours was stuck in a tough spot, having to serve 15,000 API calls per second with software that only did 200 requests per second per server. So, they asked if we could implement those exact features – in what later became the API Engine.

So, Varnish API Engine is a high performance API Gateway and it would be natural to compare it to the likes of Tibco Mashery, Apigee, Kong and Tyk. Compared to these, we look very good with regards to performance. 🙂 Some of these products include a feature set that is dramatically richer than ours. But hey, what is the point of having features when you lack the baseline performance to deliver?

Do you support instant purge out of the box?

Varnish still doesn’t do purges out of the box, primarily for security reasons: you need to add access control lists so that random strangers on the internet can’t purge content from your cache.

But yes, if you are willing to spend five minutes reading up on purging in Varnish, you can have it up and running within a minute or so – and craft it to work just the way you want.
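For reference, the standard setup looks roughly like this: an ACL of trusted addresses plus a PURGE handler in vcl_recv. The addresses and backend below are placeholders for your own management network and origin.

    vcl 4.0;

    backend default {
        .host = "127.0.0.1";   # hypothetical origin
        .port = "8080";
    }

    acl purgers {
        "localhost";
        "192.168.0.0"/24;      # placeholder trusted management network
    }

    sub vcl_recv {
        if (req.method == "PURGE") {
            if (!(client.ip ~ purgers)) {
                return (synth(405, "Purging not allowed"));
            }
            return (purge);
        }
    }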

What trends are you seeing in the CDN market?

The DIY trend is very strong. There is a ton of interest in setting up your own CDN. We see big companies that push terabits of content doing it for cost reasons. Especially if you have your own network – then having a CDN inside your network is a no-brainer.

The other thing we’re seeing is dissemination of content that is proprietary in nature. We’ve seen big global companies doing video distribution. These videos are highly proprietary and these companies don’t want this content anywhere near the public internet. So setting up 4-8 servers across the globe isn’t a big deal and solves the video distribution challenge very efficiently.

Google QUIC, the UDP-based transport protocol, is a game changer. Is Varnish QUIC ready?

Not yet. But we’ve been keeping an eye on QUIC for some time. I think it is obvious that HTTP 3.0 will probably use a UDP-based transport, so we’re trying to factor that into design decisions we’re making now. We probably won’t support QUIC unless the IETF makes it a standard – but I think that might happen within 3-4 years or so.

If a CDN decides to support QUIC, do they have to re-architect their entire caching infrastructure?

Probably a lot of it will have to change. TCP is very forgiving of “weirdness” in the network – UDP isn’t. A lot of the debugging tools we have are TCP-oriented. The network equipment itself is probably a lot better tested on TCP workloads than UDP – since about 95% of the traffic on the internet has been TCP rather than UDP.
