Recently, the Bizety team spoke at length with Jerry Hall, CEO of Rev Software. The conversation touched on a number of topics, including a deep dive into Rev's feature set and its unique additions to the current marketplace. A word of caution before you read the Q&A below: put on your engineering hat, because Jerry sounds like a mad scientist, and that's exactly what it takes to take on the big boys like Akamai and EdgeCast. Thanks to Jerry and team for the awesome insight.
Q: Please give me a little bit of background on your management team, vision, and state of your company. What makes you different?
A: I was at FastSoft, a network technology software company, where I joined as VP of Marketing in January 2011. I helped reposition the company, which was built on Caltech-developed technology focused on making TCP/IP fast. TCP was designed years ago when the Internet was slow, so the protocol itself wasn't the bottleneck. Today, networks are fast, but protocols are slow and have become the bottleneck. I positioned FastSoft into web performance optimization, recognizing that CDNs are all about moving as much content as possible closer to users around the world. A growing amount of content is non-cacheable and has to be delivered from the origin, including dynamic content that has to reach the edge quickly so it doesn't bog down static content.
A number of companies do front-end optimization, which means fewer round trips, while CDNs do the edge caching. The last-mile latency goal of CDNs is 10 milliseconds, although Akamai maintains a 5-millisecond goal. When I was at FastSoft, we were selling into almost every CDN. At that time, co-founder Jonathan Bosanac and I would often discuss how mobile was going to disrupt the CDN model, because the last mile in mobile is high latency and variable. We saw Cotendo come up, and everyone was writing about how it was the first mobile-oriented technology. Well, what Cotendo offered was image optimization, and it was one of the first to test SPDY, which was helpful but didn't fix the problem.
We sold our company to Akamai a little over two years ago and decided to go solve the mobile problem. John and I have been in the networking industry for more than two decades, building network systems all our careers. I did my first startup in 1997.
The way to address the mobile last-mile problem is to fix it at the network level. Our t-shirts say, "It's the network, stupid." We decided to solve the problem and started from two basic premises. First, if we had real-time visibility and knew at any given millisecond what the latency in the mobile last mile was going to be, could we do something about it? We said yes. And if we knew, could we act fast enough? This turned into real-time visibility meets real-time action. Another way to put it is that we don't make any optimization decisions in advance, because conditions are dynamic. So we set out to build that, and we did.
We raised seed money in June of last year and had it all architected. We started to recruit a team, largely out of companies like Juniper, Cisco, Intel, and Adobe. We got networking guys and app-level guys to help architect the system. We said let's build a real-time feedback loop, and let's figure out a way to configure optimization resources on the fly, dynamically, based on conditions, so every request that comes in is optimized uniquely rather than decided in advance. When it came time to answer the question of how to deploy, we realized how complex the system was, so how could we abstract that complexity away from customers? Web and content publishers are used to CDNs, where you make a few changes, point traffic to the CDN, and they take it from there.
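Rev hasn't published the internals of that loop, but the core idea — choose settings per request from live measurements rather than in advance — can be sketched roughly as follows. All names, knobs, and thresholds below are invented for illustration; they are not Rev's implementation.

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class OptimizationPlan:
    """Per-request delivery settings (hypothetical knobs)."""
    initial_window_segments: int   # how aggressively the sender opens up
    compress_images: bool          # trade quality for bytes on slow links

def plan_for_request(recent_rtts_ms: list[float]) -> OptimizationPlan:
    """Decide at request time from the latest last-mile RTT samples
    reported by the feedback loop. Thresholds are illustrative only."""
    rtt = median(recent_rtts_ms)
    if rtt < 50:     # fast, stable connection: open up aggressively
        return OptimizationPlan(initial_window_segments=10, compress_images=False)
    if rtt < 200:    # typical mobile latency: moderate settings
        return OptimizationPlan(initial_window_segments=6, compress_images=True)
    # high-latency or congested last mile: stay conservative
    return OptimizationPlan(initial_window_segments=2, compress_images=True)
```

The point of the sketch is that nothing is configured ahead of time: two back-to-back requests over different last-mile conditions get different plans.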
So we said let's do that, meaning we had to build a CDN. Each of our PoPs has the system we built, an acceleration system with real-time visibility and action. We recruited a veteran who had built both Cotendo's CDN and Instart Logic's CDN. He built our network, which currently has 22 PoPs. So we're doing end-to-end content delivery, but where the traditional CDN stops at the edge, we're just getting started.
Q: Can you walk me through your feature set and prices?
A: We have two layers of service. We have a static CDN offering where we charge standard $/GB for static delivery. For customers that want to take advantage of the higher-level, more dynamic features, we call that MdotCDN, and MdotCDN is where all the magic happens. From a static CDN standpoint, we offer standard features: customer portal, self-service configuration, instant purge, support for custom rule sets, WAF, DDoS protection, all of the checkboxes you would expect from a CDN.
However, we lead with our MdotCDN because that is our strength. Another way we're different is that if you go to an Akamai, for example, and you want DSA (dynamic site acceleration), you have to buy the static CDN first and then buy the value-adds. We will sell MdotCDN on its own, allowing customers to keep their incumbent static CDN for desktop delivery while we act as an overlay network on top of it, effectively administering end-to-end management of mobile sessions while providing last-mile acceleration. Our static offering is in line with what everyone charges, and our MdotCDN is in line with DSA pricing.
Q: What if someone just wants your mobile solution? Would that be on a per-gigabyte basis or on a platform/bandwidth fee?
A: We have a couple of models. We recognize that customers are used to paying on a $/GB basis, so we offer $/GB pricing. Or a customer can pay on a page-load basis. Pricing would be comparable to what you would pay for DSA, and we would be delivering the entire page. When I say we're mobile, we wouldn't just do dynamic content; we would do all of the mobile content. Instead of providing caching, we would pull static assets from the CDN edge and deliver the entire page over the last mile.
Q: Overlay networks appear to be the next trend, but the problem is customers are in contract with Akamai, Limelight, etc. and it’s impossible to yank that contract away. How do you resolve that?
A: It's especially hard if you're a brand-new service they've barely heard of. Our strategy is to go in and say, let us deliver your mobile traffic. Then they have the option: we can be purely additive, or we can do the caching as well. It's a lot more straightforward if they have a mobile site or a touch site, but, as you know, more and more companies are going responsive, which is where we have to be willing to act as an overlay. You can keep your static assets where they are, but we'll do last-mile delivery of them.
Q: In terms of latency, the differences between major CDNs are minimal. Have you done any kind of benchmarking against other competitors?
A: Our stated goal starts at 2x, but once we build out our complete roadmap we expect to be even faster. We don't have our full feature set deployed yet, but to answer your question: yes, absolutely. We did a benchmark with a large CDN, and it was all mobile. It was a 1MB page, and our average content load over 361 runs was 3.9 seconds, compared to the other CDN's average of 6.3 seconds. So it wasn't quite 2x, but at roughly 60% faster it was a significant gain. This test was actually done in cooperation with an incumbent CDN, and they accepted the results as well. We've done a lot of testing and are continuing to benchmark against other CDNs. For the mobile last mile, we are significantly faster.
Q: Are you at liberty to say how many customers you have?
A: We have a couple dozen, but it’s less than 30. We’re just getting started; we have a lot of trials in the queue and large deals in the works.
Q: What separates you from other players within the last mile?
A: We're focused on the last mile, but from an end-to-end perspective. Our ability to maintain end-to-end visibility and end-to-end control of the session is what allows us to do magic in the last mile. I used to get asked a lot about what separates us from all the optimizers doing work in the last mile. My answer is that they're serving a different agenda, a service-provider agenda, which is all about managing subscriber flows and prioritizing some subscribers' traffic over others'. A content owner, by contrast, cares about all users, regardless of device and network, and we're catering to that agenda. We're session-based, whereas last-mile optimizers are flow-based. They don't have end-to-end visibility or true application-level awareness.
Q: So you’re not going to focus on video/entertainment at all?
A: No, we're currently focused on small-object delivery and mobile. Part of our solution from the get-go has been recognizing that more than half of mobile traffic is apps. We set out, from day one, to accelerate delivery to both, which are very different types of traffic: REST calls vs. HTTP page requests. We can accelerate app traffic asymmetrically, meaning no footprint on the device, but we also have an SDK that enables symmetric last-mile acceleration for native apps, accelerating in both directions and getting better access to device-level analytics (signal strength, cell ID, accurate location info including distance from cell towers, etc.). Long term, we'll be doing more for apps than we do for the mobile web browser.
Q: Accelerate is such a commonly used term and its meaning sometimes gets lost. How do you accelerate?
A: Real-time visibility with real-time action. We have a real-time feedback loop that runs continuously and stores RUM data and key session parameters. We look at historical information to get a general idea of what needs to be done, we look at the real-time feedback loop for conditions at that moment, and then we decide how to optimize. By contrast, when an incumbent CDN says they do acceleration, they set a more aggressive initial window size on last-mile transport, at 10 segments for example, whereas standard TCP starts at 2.
We don't set the window size at all until we get a request, and then we set it; for the next session we set it differently. We're doing pacing and everything we can to maximize available bandwidth, then dynamically adjusting to what's available. The other guys assume bandwidth is low and act conservatively. That's essentially a guess, and we don't guess. We take real-time visibility and make our settings on the fly, so when we say acceleration we mean it in a dynamic and adaptive way, and you can't adapt to conditions unless you can see the conditions.
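Pacing here means spacing packet transmissions to match measured capacity instead of bursting a full window at once. As a minimal sketch of the arithmetic involved (not Rev's code), the inter-packet interval falls straight out of a bandwidth estimate:

```python
def pacing_interval_s(estimated_bandwidth_bps: float, mss_bytes: int = 1460) -> float:
    """Seconds to wait between consecutive full-size segments so the send
    rate matches the estimated last-mile bandwidth (illustrative formula)."""
    return (mss_bytes * 8) / estimated_bandwidth_bps

# On an 8 Mbps last mile, one 1460-byte segment roughly every 1.46 ms
# sustains the link without bursting into a full queue.
```

With a live bandwidth estimate from the feedback loop, this interval can be recomputed continuously, which is what distinguishes adaptive pacing from a fixed, conservative guess.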
Q: Could you dive a little deeper into your RUM data?
A: We have our own RUM implementation, and it can calculate navigation timing for mobile browsers that don't natively support it. One challenge with mobile RUM is that the browser has to support the W3C Navigation Timing API; otherwise you lose the first page-load measurement and only capture subsequent ones. And if you lose that first one, you've lost a lot. That's one of the challenges of using RUM on mobile, so we developed our own implementation that can capture that first navigation timing even on browsers without support, because we rely on that timing to inform our real-time loop.
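Server-side, that fallback logic might look something like the sketch below. The beacon field names are hypothetical, not Rev's format; the pattern is simply to prefer the browser's Navigation Timing values when present and otherwise use timestamps the injected RUM script recorded itself.

```python
def first_load_time_ms(beacon: dict) -> float:
    """Page-load time from a RUM beacon, in milliseconds.

    Uses W3C Navigation Timing when the browser reported it; otherwise
    falls back to the RUM script's own markers, so the first (and most
    valuable) load measurement is never lost.
    """
    if "navigationStart" in beacon and "loadEventEnd" in beacon:
        return beacon["loadEventEnd"] - beacon["navigationStart"]
    # Browser lacks Navigation Timing: use the script's own timestamps.
    return beacon["rum_load_end"] - beacon["rum_script_start"]
```

One caveat a script-side fallback can't avoid: it only starts measuring once the script executes, so the earliest phases (DNS, connect) are approximated rather than observed.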
Q: Anything else you’d like to add?
A: One other key differentiator I've been saving for last has to do with third-party links, especially in the e-commerce world. We see sites where more than half of the content comes from third parties rather than the origin; that content is loaded from the third parties' own hosts, independently of any CDN. If you're using a CDN to deliver your site and half the content is coming from third parties, you're not controlling that content, and overall performance can get dragged down significantly.
To deal with this, we developed a way of routing that third-party content through our edge servers so that we can deliver it as well. That's another way we're getting a speed advantage: we're accelerating both third-party and origin content.
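Rev doesn't describe the mechanism, but the general technique — rewriting third-party URLs so the browser fetches them from your own edge, which then proxies the original host — can be sketched as follows. The edge hostname and URL scheme here are invented for illustration:

```python
from urllib.parse import urlsplit

EDGE_HOST = "edge.example-cdn.net"  # hypothetical edge hostname

def reroute_third_party(url: str, first_party_host: str) -> str:
    """Rewrite a third-party asset URL to route through the edge.

    First-party URLs are left alone (the CDN already delivers them); any
    other host is folded into the path so the edge knows where to proxy from.
    """
    parts = urlsplit(url)
    if parts.hostname == first_party_host or parts.hostname is None:
        return url
    return f"https://{EDGE_HOST}/{parts.hostname}{parts.path}"

# e.g. a tag like <script src="https://ads.tracker.example/pixel.js">
# would be rewritten to point at the edge, which fetches from the tracker.
```

The payoff is that third-party assets now ride the same accelerated last-mile connection as everything else, instead of each opening its own slow connection from the handset.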