Interview with Chetan Ahuja, CTO and Founder of PacketZoom


As part of our ongoing interview series with leaders in the tech industry, we’ve previously interviewed PacketZoom’s CEO Shlomi Gian about his move to the mobile app acceleration company from his previous work at Akamai. This week, we had the opportunity to interview PacketZoom’s founder, Chetan Ahuja, about his background working in network optimization and mobile performance at the start of the smartphone revolution and how it led to the development of his own mobile performance company–as well as his thoughts on the future of online networks. A big thanks to Chetan for his insight.

Tell us about your background and how it led to the founding of PacketZoom.

Prior to joining Google, I worked for Riverbed, an enterprise WAN optimization company. That’s where my fascination with improving speed on high-latency networks developed. Around this time, the iPhone had just come out, and a lot of complaints about network speed were being directed at AT&T, but it was pretty clear to anyone with my background that the problem was not just AT&T. The same problem was occurring on WiFi and on all mobile networks. For me personally, this problem caused the most frustration when I was commuting to Google’s Mountain View office by train. Any apps I used were constantly interrupted as the phone switched between networks, and I was convinced there must be a solution to this problem. I assumed it would fall to one of the big players–Google, Facebook, etc.–to jump in and build something special to speed up mobile app delivery, but that never happened. So in 2012, I left Google to get on with my own solution.

Did you come to this decision suddenly or had this been building for some time?

It was a progression. When I left Riverbed in 2008, the iPhone had taken over the world. I knew there was a solution out there to improve mobile app delivery, but it seemed like an enterprise network problem–something I didn’t intend to start a company to fix at the time. I went to work for AdMob and then Google on improving speed and performance for mobile ad delivery, but despite our improvements, we were still bound by a forty-year-old protocol that was built for delivery of web pages–not for mobile apps. It took a while, but ultimately I determined I needed to build my own solution.

What were the next steps after you left Google? Did you start approaching VCs?

What I did initially was build a prototype to demo, which took me a few months to create. After that I would do live demos everywhere using my own phone. Ninety-eight percent of the time, the demos would show huge improvements in speed, which got me some attention from VCs. I also talked to some advisors at the time to help me chalk out a business plan and determine how best to take the idea to market.

How is PacketZoom doing now and what are your goals for the coming year?

We started our sales and marketing efforts late last summer, once Shlomi saw our product story and joined us from Akamai. Currently, we have 300+ apps using us, including Glu Mobile (a mobile gaming company with dozens of popular gaming apps), Upwork (freelance job search), Sephora (beauty), and Goat, a marketplace app for high-end sneakers–it’s a somewhat niche market, but among those who are passionate about that particular product, it’s very popular. So we have hundreds of apps and dozens of companies, with user numbers approximately doubling every quarter. The last time we checked, we were handling half a billion user requests per day, so hopefully we’ll be at 4x that by the end of the year.

How would you position PacketZoom in terms of your business model or the service you provide?

At the core of our business is last-mile optimization, which is the root of everything we do. It comes from a superior networking stack–I use that term rather than transport protocol because we do many different things here. We’ve rethought the entire stack, from server discovery via DNS all the way up to session management for the mobile app. What’s missing in the market is not only speed but real-time control and visibility, which is what we offer as part of our business model.

Many companies spend countless hours developing an app but don’t know what’s going on with it after they launch it. They may get analytics from companies with solutions geared toward HTTP and the web, but those are yesterday’s solutions. So the first thing we do is give app developers strong visibility into what’s going on with their mobile app on different networks around the world.

Next, if you’re an app developer and you can see what’s going on, the first thing that strikes you is the diversity of conditions around the world. Even if your app is used only in the U.S. and Europe, there may be 40-50 different networks in Europe alone and 20 different network types in the U.S. Each network has so many different conditions that it’s difficult for app developers to write rules to accommodate them all. What they need is our SDK to handle each network’s conditions in a more intelligent fashion.

Are you using SDN technology for infrastructure to scale on demand for burst traffic?

Not in the sense that we have data centers–we do all our tech in the last mile, rather than on the back end. But by handling the last mile, we account for 70% of all latency. Once you overcome that, you don’t have to do much else, which lowers management expenses overall.

What kind of visibility do you have into network conditions?

We collect between one and two billion data points a day, from both HTTP and mobile traffic. We watch every request to the network, and that data feeds into our backend system to make it better. That data gives us a dynamic picture of how the network is evolving.

How does that data help to improve your services?

If you talk to any networking experts who have worked in the field, they will all have various pet peeves about TCP Slow Start and other legacy aspects that arose as networking protocols evolved. The result of that evolution is that the protocol you use (TCP) has been designed to be extremely conservative as the default position in all situations. TCP is a 40-year-old legacy protocol that was by necessity designed to be conservative, and you can understand why if you think about it. The same software stack your server is using is also running on your iOS or Android phone. The same protocol is supposed to handle traffic to Antarctica via satellite. A trip from there to NY might take 300ms or more, but the exact same software is supposed to work for that trip as for two boxes connected to each other inside a datacenter. A universal engineering solution must by necessity be a compromise.

However, those design constraints mean that certain resources–fast database access, large amounts of memory available on the servers, SSD storage, etc.–could not be assumed in the design of earlier networking protocols. With TCP, there was no mandate to optimize for specific scenarios; all it needed to do was work in most situations at a reasonable level of performance.
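To illustrate the conservatism described above, here is a minimal sketch (not PacketZoom’s code, and a deliberately simplified model) of why Slow Start is painful on high-latency links: the congestion window starts small and grows once per round trip, so on a 300 ms satellite path every extra round trip adds 300 ms of pure waiting.

```python
# Simplified model of TCP Slow Start / congestion avoidance:
# the window doubles each round trip until ssthresh, then grows linearly.

def rtts_to_send(total_segments, init_cwnd=10, ssthresh=64):
    """Count round trips needed to deliver `total_segments` under a
    simplified Slow Start model (no losses, no delayed ACKs)."""
    cwnd, sent, rtts = init_cwnd, 0, 0
    while sent < total_segments:
        sent += cwnd
        rtts += 1
        # exponential growth below ssthresh, linear growth above it
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
    return rtts

# A ~1 MB response (~700 segments of 1460 bytes) on a 300 ms RTT link:
rtts = rtts_to_send(700)
print(rtts, "round trips =", rtts * 300, "ms of latency floor")
```

Even in this loss-free best case, most of those round trips are spent ramping up rather than transmitting at full speed, which is exactly the cost a one-size-fits-all default imposes on every connection.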

Our mandate, however, is to customize delivery for app users only, specifically dealing with WiFi, 2G, 3G, 4G, and now 5G networks, and we have complete freedom to use the rich computing resources available to us in 2017. So when a request comes from, say, a device running on Verizon LTE in mid-afternoon from San Francisco, we can quickly look up the historical performance data for that particular situation and optimize our content delivery algorithm for it in real time. By narrowing down the types of scenarios we are addressing and gathering as much information as possible on those precise circumstances, we’ve been able to gain a lot more freedom to improve service and tailor delivery to specific situations.
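The lookup described above can be sketched roughly as follows. This is a hypothetical illustration, not PacketZoom’s actual system: the class name, the (carrier, region, hour) key, and the scaling factor are all illustrative assumptions.

```python
# Hypothetical store of historical network performance, keyed by
# (carrier, region, hour-of-day), used to pick per-request parameters
# instead of a one-size-fits-all conservative default.
from collections import defaultdict

class NetworkProfileStore:
    def __init__(self):
        # (carrier, region, hour) -> list of observed throughputs (KB/s)
        self.samples = defaultdict(list)

    def record(self, carrier, region, hour, throughput_kbps):
        self.samples[(carrier, region, hour)].append(throughput_kbps)

    def initial_window(self, carrier, region, hour, default_kb=14):
        """Choose an initial send window from history; fall back to a
        conservative default when this network has never been seen."""
        obs = self.samples.get((carrier, region, hour))
        if not obs:
            return default_kb
        avg = sum(obs) / len(obs)
        # start near what this network has sustained before
        return max(default_kb, int(avg * 0.5))

store = NetworkProfileStore()
store.record("Verizon LTE", "San Francisco", 15, 4000)
store.record("Verizon LTE", "San Francisco", 15, 6000)
print(store.initial_window("Verizon LTE", "San Francisco", 15))  # 2500
```

The point of the sketch is the shape of the decision: with enough history for a narrow scenario, the sender can start aggressively where TCP’s universal defaults would start timidly.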

Given the visibility you have into mobile networks, have you ever considered developing a product for mobile, such as an analytics dashboard that could be sold at a separate price?

It sounds like something people would be willing to pay for, but what we’re doing for them is one better. This has to do with the control part of our product, and how you handle the data we’re providing. For example, you could use it to determine for yourself how many HTTP connections can happen simultaneously, how long timeouts should be, etc.–letting the app developer decide how long to block on WiFi before determining the connection is gone and moving on to the next one. That decision alone has a huge impact on service, and there’s no single answer for it, since there are so many different possible reasons for a timeout. In our upcoming releases, we’re offering developers the ability to write an app once and let us handle the intelligent control of timeouts and other issues automatically. We believe there’s more value in that than hardcoding those decisions into the app before shipping it.
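One hedged way to picture the timeout decision discussed here: instead of one timeout baked into the app at build time, derive it at runtime from recent latency observations on the current network. The percentile, margin, and bounds below are illustrative assumptions, not PacketZoom’s actual algorithm.

```python
# Adaptive timeout sketch: set the timeout near the 95th percentile of
# recently observed latencies, with a safety margin, clamped to bounds.

def adaptive_timeout(recent_latencies_ms, floor_ms=500, ceiling_ms=15000):
    """Pick an HTTP timeout from recent latency samples on this network."""
    if not recent_latencies_ms:
        return ceiling_ms  # no data yet: be patient, like stock defaults
    ordered = sorted(recent_latencies_ms)
    p95 = ordered[min(len(ordered) - 1, int(0.95 * len(ordered)))]
    timeout = int(p95 * 1.5)  # margin for variance
    return max(floor_ms, min(ceiling_ms, timeout))

# The same code gives up quickly on a fast WiFi network (and can move
# on to the next connection) but waits longer on a congested 2G link.
print(adaptive_timeout([40, 55, 60, 70, 90]))       # fast WiFi
print(adaptive_timeout([1200, 1500, 2100, 3000]))   # congested 2G
```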

We’ve talked a lot today about network conditions that were completely unforeseen when TCP was developed 40 years ago. What kind of changes do you think will take place in networks generally over the next few years?

It’s hard to say, because much of it depends on changes to other technologies. Like you said, in 1985, if you were trying to design a network protocol and were told that years later it would need to adapt to people working online while moving around in their cars all day, you’d be laughed out of the room. What seemed impossible back then is now the reality for the majority of use cases. And even as that changed, the software stack did not. So I believe we need to consider change in two parts: one being the physical hardware and the other being how the software deals with it.

In terms of the hardware, I believe the physical devices we hold in our hands will go away. It’ll be some different form that is more physically connected to your senses than simply holding a device in your hand. Google Glass was a step in that direction, although it didn’t take off. Maybe this won’t entail staring at a screen so much as experiencing the network in a different way. And to keep up with that, certain things will need to change. Electromagnetic radiation will be a problem to solve, as will transitioning away from a battery-based system, where you constantly have to worry about charging the battery or it running out. What won’t change is how we move through our physical environment with these devices, because at a certain point those capacities are bound by the laws of physics, and there are certain limits we can’t overcome.

On the software side, however, we are nowhere near our limits. The biggest change there will be how software deals with an ever-present, ever-moving network reality with even more continuity to overcome the problem of transitioning between, for example, WiFi and LTE.

That’s true, which is why we think machine learning will be so important. An intelligent system that can learn and predict customer behavior would be capable of bypassing problems before they occur, or delivering content before it is requested–which is ultimately one way to get past a limit like the speed of light.

Yes, in that sense we do predictive caching and server-side caching. The hard problem is getting real-time data across. For that, you have to use the existing network in the smartest way possible.
