Deep Dive with Leon Kuperman: Zenedge Feature Set


On Monday, we published part one of our interview with Zenedge’s CTO and co-founder Leon Kuperman. The following is part two, a deep dive into the specifics of the company’s feature set. The Q&A session was conducted in person, and our team wrote down the answers almost exactly as they were given. The first thing our team noticed is that Leon is a very well-spoken CTO who talks like a university professor. He takes extremely complex security concepts and breaks them down into easy-to-understand chunks.

Q: It looks like you have a lot of features, many of which CDNs don’t have. Would you like to emphasize any particularly unique ones?

A: Let’s talk about our security feature set first and foremost. The part of our product that I’m most excited about is this: when you look at the way web app firewalls work, they’re really rule based, which means that if an attack signature is known, web app firewalls tend to handle it very well. Most people think that if they roll out the core rule set available through OWASP they’re covered, but that’s not the case, because zero-day vulnerabilities are just that – vulnerabilities whose signature and pattern were unknown prior to the day the world found out about them.

So, there is a time window, before the developers of that particular software issue a fix, in which attackers can exploit the vulnerability. It could take one, two, or three days before a fix is issued, so our strategy is not to rely solely on signature-based techniques to detect web app intrusions and malicious requests. Instead, we try to separate normal behavior from malicious behavior in a general sense, through several techniques.

Our first technique is to understand app behavior as it runs through Zenedge, and to look for anomalies both in the application’s traffic and in individual users’ activity. If someone has attempted three known vulnerabilities against a website, then we have a pretty good feeling that they’re attempting to find others, so that type of communication is considered high risk. For unknown patterns that come in, we have a pretty good idea of whether a particular communication, from either a human or a robot, is going to be malicious, so it goes back to behavioral-based analysis of the application and its users.
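The escalation described above – treating a client as high risk once it has probed several known vulnerabilities – can be sketched as a simple per-user counter. This is a minimal illustration, not Zenedge’s implementation; the threshold, signature list, and class names are assumptions.

```python
from collections import defaultdict

# Assumed values for illustration: three probes against known
# vulnerability patterns mark the source as high risk.
HIGH_RISK_THRESHOLD = 3
KNOWN_VULN_SIGNATURES = ("/etc/passwd", "<script>", "' OR 1=1")

class RiskTracker:
    """Tracks how many known-vulnerability probes each user has sent."""

    def __init__(self):
        self.probe_counts = defaultdict(int)

    def observe(self, user_id, request_path):
        # Count only requests matching a known vulnerability signature.
        if any(sig in request_path for sig in KNOWN_VULN_SIGNATURES):
            self.probe_counts[user_id] += 1
        return self.risk(user_id)

    def risk(self, user_id):
        if self.probe_counts[user_id] >= HIGH_RISK_THRESHOLD:
            return "high"
        return "normal"
```

A user who trips three different known-vulnerability signatures would be classified as high risk, while ordinary traffic from the same or other users stays normal.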

We combine the two using a big data strategy, on which our platform is built. This manifests itself in the system through two specific features. We use our device fingerprinting core feature heavily. It’s important because most security providers stop at the IP level, with the idea that an IP is simply good or bad. This is not enough: many end consumers share the same IP address with a whole neighborhood, and if one of those computers is compromised and participates in a DDoS attack, that IP address is put on a blacklist for the entire neighborhood, which means traffic would be denied from a whole block of computers.

That’s a false positive, and that’s intolerable, so we have to look much more deeply at what is behind a malicious request, which is why we use device fingerprinting. We discern the details behind the IP address, like which device is addressing the network. An IP can give a good idea of where malicious behavior is coming from, but one must dig deeper to remove false positives. Behavioral understanding of individual devices is one way we can understand user behavior at a much deeper level.
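The idea of blocking a device rather than an IP can be sketched as deriving a stable fingerprint from per-device request attributes. This is a hedged toy example, assuming a fingerprint built from a few HTTP headers; real fingerprinting systems use many more signals, and none of these names reflect Zenedge’s actual API.

```python
import hashlib

def device_fingerprint(headers):
    # Combine attributes that tend to differ per device even when many
    # devices share one IP. The chosen headers are illustrative only.
    parts = [headers.get(k, "") for k in
             ("User-Agent", "Accept-Language", "Accept-Encoding")]
    return hashlib.sha256("|".join(parts).encode("utf-8")).hexdigest()[:16]

class DeviceBlocklist:
    """Block individual devices instead of whole IP addresses."""

    def __init__(self):
        self.blocked = set()

    def block(self, headers):
        self.blocked.add(device_fingerprint(headers))

    def is_blocked(self, headers):
        return device_fingerprint(headers) in self.blocked
```

With this approach, blocking a compromised machine behind a shared IP leaves its neighbors’ traffic unaffected, which is exactly the false positive the IP-level blacklist creates.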

The next capability I’d like to highlight is DDoS mitigation. When you look at the market and the way DDoS is handled at the device level, many strategies fail when encrypted HTTP traffic is going through those devices, because they can’t unwrap the packets to see what the malicious DDoS activity is. We decrypt the traffic, analyze it in a secure way, and make a determination as to whether it is coming from a human or a bot. Layer 7 is very different from other DDoS attacks because it needs a fairly low amount of traffic to have a severe impact on clients. In other words, 100 megabytes of traffic can be amplified in a way that chews up resources.

Here are some examples. If an attacker hits your search engine, that’s an expensive operation that chews up search capacity quickly. Or say an attacker crafts a malicious attack against your shopping cart system: adding to the cart is a great way of messing with an inventory system, because items come out of inventory and get allocated to carts, which is expensive from a web application standpoint. Those requests chew up database access and application cycles, so layer 7 is a very different animal from layers 3 and 4, and you need specialized tactics.
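One way to reason about the asymmetry described above – a few requests against expensive endpoints doing outsized damage – is a cost-weighted request budget, where a search or cart-add spends far more of a client’s budget than a cheap page view. This is a speculative sketch of that idea, not a described Zenedge feature; the costs and budget are invented.

```python
# Assumed per-endpoint costs, reflecting how much backend work each
# request triggers (values are made up for illustration).
ENDPOINT_COST = {"/search": 10, "/cart/add": 8, "/product": 1}

class CostBudget:
    """Throttle clients by backend cost consumed, not raw request count."""

    def __init__(self, budget):
        self.budget = budget
        self.spent = {}

    def allow(self, client, path):
        cost = ENDPOINT_COST.get(path, 1)  # unknown endpoints count as cheap
        spent = self.spent.get(client, 0)
        if spent + cost > self.budget:
            return False  # this request would exceed the client's budget
        self.spent[client] = spent + cost
        return True
```

Under this scheme a client can browse many cheap product pages, but only a handful of expensive search or cart operations, which blunts low-volume layer 7 attacks.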

The way we do this is with a toolbox of escalating capabilities in our platform to mitigate layer 7 attacks. Obviously, we want the least intrusive capability possible that will mitigate an attack. For us, a captcha (a reverse Turing test) is the last line of defense, because it is highly intrusive from an end user perspective. We have other layers of challenges that give us information about customers, end consumers, and computers, and that allow us to discern between robots and humans. From that, we automatically create blacklists for malicious behavior and whitelists for normal customer behavior, which we use at layer 7 to turn away bad traffic and let good traffic through.
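The escalation ladder described above – least intrusive response first, captcha last – can be sketched as a mapping from a risk score to an action. The bands, scores, and challenge names here are assumptions for illustration, not Zenedge’s actual tiers.

```python
# Mitigation ladder, ordered from least to most intrusive.
# (risk ceiling, action) pairs; values are illustrative assumptions.
CHALLENGES = [
    (0.3, "allow"),             # low risk: pass traffic through
    (0.6, "javascript_check"),  # silent challenge real browsers pass
    (0.9, "captcha"),           # reverse Turing test, last line of defense
]

def choose_action(risk_score):
    """Return the least intrusive action whose risk band covers the score."""
    for ceiling, action in CHALLENGES:
        if risk_score <= ceiling:
            return action
    return "block"  # beyond the last band: refuse the traffic outright
```

The point of ordering the ladder this way is that most legitimate users never see a challenge at all, and only the highest-risk traffic pays the usability cost of a captcha.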

All requests in a layer 7 attack look completely legitimate and can be construed as routine HTTP requests; the only way to discern good from bad traffic is to understand human behavior, and Zenedge is effective at that. The last feature I’d like to discuss is our Zen Tags™. A lot of the concern we hear from security officers running infrastructure is about performance. They’re hesitant to turn on features, worried that the extra scrubbing and analysis will affect their website. So we pleasantly surprise customers by increasing performance by 15 to 40 percent, depending on architecture, through advanced app delivery: our network nodes are globally distributed, so we sit as close as possible to the end consumer.

We have advanced app-layer optimization and delivery mechanisms that we use to reduce latency and increase the perceived performance of websites – how quickly a page appears and starts interacting with the end consumer. Zen Tags™ is one feature we use to do that: it allows us to take dynamic content and separate it from the static.

Zenedge can cache pages even when parts of them change a lot, like a shopping cart, for example. Most of the time, a website is showing the same product to everyone, but a cart is different and can’t be cached, because people would see someone else’s personalized content. We deal with this by separating the dynamic content from the less frequently changing areas of the website, caching the latter, and merging the results in real time. We’re very excited about this practical application as we roll it out to customers.
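The separate-and-merge pattern described here resembles edge-side fragment assembly: cache the static shell of a page once, then splice fresh dynamic fragments (like the cart) into it per request. The sketch below is a guess at the mechanics, assuming an invented placeholder tag syntax; the real Zen Tags™ markup and API are not public in this interview.

```python
import re

# url -> cached static page shell containing placeholder tags
STATIC_CACHE = {}

def render_page(url, fetch_shell, render_fragment):
    """Serve a page by merging a cached static shell with fresh
    dynamic fragments. `fetch_shell` and `render_fragment` are
    caller-supplied stand-ins for the origin fetch and the
    per-request fragment renderer."""
    shell = STATIC_CACHE.get(url)
    if shell is None:
        shell = fetch_shell(url)   # expensive origin fetch, done once
        STATIC_CACHE[url] = shell
    # Replace each <zen:tag name="..."/> placeholder (invented syntax)
    # with freshly rendered, per-user dynamic HTML.
    return re.sub(
        r'<zen:tag name="(\w+)"/>',
        lambda m: render_fragment(m.group(1)),
        shell,
    )
```

After the first request warms the cache, every later request reuses the static shell and only the tagged fragments are computed, which is where the latency win would come from.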
