Varnish Total Encryption Helps Prevent Cache Leaks and Encrypts HTTP Traffic


Varnish Software has just launched a new cache encryption service called Varnish Total Encryption, which aims to prevent cache leaks.

At its simplest, a cache is a small amount of fast memory that holds commonly accessed data and instructions so they can be loaded more quickly. Caching has recently been implicated in several high-profile security incidents, including Spectre, Meltdown and Cloudbleed. These incidents demonstrated how vulnerable caches are by their very nature: because they maximise efficiency and speed by storing large amounts of data in a tightly packed, rapidly accessible space, they are generally designed to favour speed and easy access over security.

In Spectre and Meltdown, the cache isolation between processes was violated, raising the frightening possibility of full system compromise; the vulnerabilities were so severe that security researchers at first suspected they were false. In the case of Cloudbleed, a bug in a caching feature at the CDN provider Cloudflare caused a read overflow that leaked user passwords and other potentially sensitive data belonging to many of its customers over a six-month period.

Varnish Total Encryption is designed to prevent this type of vulnerability through the use of per-object keys. Each cache object is assigned its own unique AES-256 encryption key; each request is assigned exactly one key, derived from the unique fingerprint of that request. This means a request cannot return anything except its intended cache object. If, as in the case of Cloudbleed, a request read overflows into another object’s cache, those bytes are immediately seen as garbage, since the request does not carry the unique key needed to decrypt them. If, as in Meltdown and Spectre, any part of the cache becomes readable to an attacker, nothing useful can be seen, because each cache object is uniquely encrypted. In essence, leaking the cache would require breaking the AES-256 encryption of every single object in it.

Varnish Total Encryption also relies upon kernel memory security for the storage of cryptographic keys. No object key is stored for longer than the lifetime of its request, and all object keys are generated uniquely from the request fingerprint and a master key. The trade-off is CPU cost: Varnish Total Encryption requires roughly double the CPU that Varnish normally needs when processing traffic.

The service also works in streaming mode, meaning it can be used to encrypt or decrypt any HTTP traffic. It produces 100% compliant HTTP responses with the payload and headers encrypted or decrypted in place. It does not replace SSL/TLS transport encryption, but works in complement with it.

Varnish Total Encryption works across all Varnish Cache storage types. All that is needed to run it is adding this one line to your VCL: include "total-encryption/random_key.vcl";

After doing so, your cache is securely encrypted.
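For reference, a minimal complete VCL file with that line added might look like the sketch below; the VCL version line and the backend definition are illustrative assumptions for context, not part of Varnish Software's instructions.

    vcl 4.0;

    # Enable Varnish Total Encryption with a per-instance random master key
    include "total-encryption/random_key.vcl";

    # Ordinary backend definition; host and port are placeholders
    backend default {
        .host = "origin.example.com";
        .port = "80";
    }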

Varnish Total Encryption is currently in a limited alpha release through Varnish Plus, and the company is aiming for a beta release in Spring 2018, followed by a release to all its Varnish Plus customers and through its cloud marketplaces.

Varnish intends its encryption service to be used across large secured architectures such as a CDN, acting as secured cloud transport. In a recent post, Reza Naghibi, VP of Technology at Varnish Software, outlined how this works. He starts by breaking down how Varnish defines its tiers: “we call the front tier an edge tier and the back tier the fetch tier”. Naghibi adds, “The fetch tier can sometimes be referred to as a shield tier since it shields the backend from traffic.”

In a two-tier caching cluster, it is not necessary to decrypt and then re-encrypt data between the edge and the fetch tier: Varnish only needs to encrypt data when it enters the cluster and decrypt it when it leaves. This can be expanded with a storage tier made up of dedicated storage nodes that act as “dumb cache storage”. When a storage node has a cache hit, it immediately returns the object; on a miss, it sends the request to a different storage node or to the shield tier. All of the VCL logic sits on Varnish’s shield and edge tiers.
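One common way to express that kind of routing in VCL is the shard director from Varnish's bundled directors vmod, which maps each URL to a stable storage node. The sketch below is an assumption about how a shield or edge tier might be wired to hypothetical storage nodes, not Varnish Software's reference configuration; the backend names, addresses and ports are placeholders.

    vcl 4.0;

    import directors;

    # Hypothetical dedicated storage nodes; hosts and ports are placeholders.
    backend storage1 { .host = "192.0.2.11"; .port = "6081"; }
    backend storage2 { .host = "192.0.2.12"; .port = "6081"; }

    sub vcl_init {
        # Consistent-hashing (shard) director: each URL maps to one storage node.
        new storage = directors.shard();
        storage.add_backend(storage1);
        storage.add_backend(storage2);
        storage.reconfigure();
    }

    sub vcl_backend_fetch {
        # Route each fetch to the storage node responsible for this URL.
        set bereq.backend = storage.backend(by=URL);
    }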

As there are dedicated storage nodes, it is unnecessary for the shield tier to cache; it remains streaming-only. The edge tier can cache if desired. Both tiers still perform encryption and decryption on incoming and outgoing traffic, meaning that all objects are stored encrypted on the storage tier and the keys never need to be present on the storage servers. When objects leave the storage tier through the edge tier, they are decrypted prior to delivery to the client.
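To make the key placement concrete, the sketch below shows how the three tiers might be chained together as separate Varnish instances. The host names, ports and file layout are illustrative assumptions; a shared master key, rather than the per-instance random key shown earlier, would be needed on the edge and shield tiers, and the exact include for that is not covered here.

    # edge.vcl: client-facing tier, decrypts objects before delivery, may cache
    vcl 4.0;
    # total-encryption include with a shared master key would go here
    backend default { .host = "storage-1.internal"; .port = "6081"; }

    # storage.vcl: "dumb cache storage", holds only encrypted objects and no keys
    vcl 4.0;
    backend default { .host = "shield-1.internal"; .port = "6081"; }

    # shield.vcl: streaming-only fetch tier, encrypts objects as they enter the cluster
    vcl 4.0;
    # total-encryption include with a shared master key would go here
    backend default { .host = "origin.example.com"; .port = "80"; }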

If the dedicated storage nodes are replaced with a CDN, Varnish Total Encryption performs the same functions despite the larger scale and potentially global footprint.

Naghibi writes, “In such a configuration, our shield tier sits behind the CDN and is responsible for encrypting traffic before it enters the CDN. The CDN is responsible for object storage and for routing traffic to our edge tier. When traffic exits the CDN and hits our edge, it will be decrypted and delivered to the client. What this means is that our data is safely encrypted while it’s being stored and transported through the CDN without the CDN having any knowledge of our keys or the fact our data is encrypted. This mitigates any risk that our data could be exposed to 3rd parties.”
