ZTE Reports Massive H1 Profit Gains Buoyed By Chinese 4G Expansion; And Other News

ZTE Reports Massive H1 Profit Gains Buoyed By Chinese 4G Expansion

Chinese telco equipment manufacturer ZTE has reported sizeable profit increases for the first half of 2016, booking a 9.3% gain in profit to $270 million and a 4.1% increase in operating revenue to $7.2 billion. About 58% of its operating revenue growth came from domestic business, while approximately 20% came from international dealings.

With these results, ZTE gains further ground on its industry rivals, both domestic and foreign, including Ericsson and Nokia. On the other hand, ZTE is still dwarfed by its domestic counterpart Huawei, currently the world’s largest telecommunications equipment maker, which reported an astronomical 40% year-over-year revenue increase for the first six months of 2016.

Much of this growth can be attributed to the massive 4G network expansion undertaken by Chinese wireless service providers such as China Mobile. Beyond that, ZTE also grew its cloud computing and big data services for communications service providers and increased its involvement in video streaming and the internet of things.

Unlike their Western counterparts, Chinese state-owned telco suppliers focus on providing network hardware and equipment, components for which are partially supplied by the United States. ZTE encountered troubles earlier this year when it fell afoul of U.S. economic sanctions and was penalized with an export ban, which has since been suspended pending further assessment until November 28.

The looming curb on component shipments to ZTE from the US has heightened the uncertainty surrounding the Shenzhen-based company’s outlook. ZTE, for its part, has announced that it has repositioned itself to better anticipate virtualization, openness, intelligence, the internet of things, and cloudification. It has also allocated a sizeable portion of its total revenue, around 15%, to research and development.

That being said, ZTE is forecasting a more challenging second half of 2016, citing economic uncertainty and an unfavorable business climate:

“The development of traditional telecommunication industries will be subject to stronger challenges in the second half of 2016, given the slowdown in global economic growth and increasing uncertainties,” said ZTE Chairman Zhao Xianming.

Verizon Easily Outstrips T-Mobile, Sprint, and AT&T in Network Performance Assessment

Verizon snagged the top slot in J.D. Power’s national network performance rankings, the “2016 Wireless Network Quality Performance Study – Volume 2,” handily outperforming its domestic rivals T-Mobile, Sprint, and AT&T in five of the six U.S. regions assessed.

Only regional carrier U.S. Cellular managed to outstrip Verizon Wireless in the North Central region, though Verizon did manage to come in a respectable close second.

While Verizon narrowly missed sweeping all six regions, its counterparts performed rather inconsistently, notching varying results across the U.S. regions. Sprint had a fairly positive showing, coming in second in the Southwest, West, Northeast, Southeast, and Mid-Atlantic regions.

The J.D. Power survey drew on input from 43,000 respondents covering 10 problem areas of wireless performance: dropped calls, calls not connected, audio issues, failed/late voicemails, lost calls, text transmission failures, late text message notifications, web/app connection errors, slow downloads/apps, and email connection errors.

Overall, the study found that network performance problems occurred at higher rates in urban areas due to the higher concentration of young, data-voracious users, underscoring the importance of increased infrastructure and network investment by carriers in cities. It also revealed that network performance quality was roughly on par with that of six months earlier.

Samsung Pay Exceeds 100 Million Transactions

Samsung Pay, the South Korean tech giant’s mobile payment service, has facilitated roughly 100 million transactions within its first year of operation.

“Since its introduction a year ago, Samsung Pay continues to lead mobile payments by providing services that are simple, secure and virtually anywhere,” Samsung executive Injong Rhee said in a press release. “Our ambition to reach a world without wallets continues to draw ever closer, and this strong consumer adoption signals a shift in behavior and demonstrates the continued enthusiasm for a safer, smarter and better mobile wallet.”

Rolled out almost exactly a year ago, Samsung Pay has spread from South Korea to seven other major markets, including China, the US, Brazil, and Spain, and is supported by more than 440 banks. Available on higher-end phones, including Samsung’s flagship Galaxy smartphones, Samsung Pay functions as a mobile wallet and a replacement for physical debit and credit cards. It also provides other trimmings such as gift cards, redeemable coupons, and membership cards. There are currently 4 million registered membership cards in the US and South Korea, which Samsung points to as evidence that consumer behavior is shifting to accommodate mobile transactions.

Samsung is not the only tech company active in the burgeoning mobile payment market. Apple and Google have notably released their own services. Samsung’s advantage, however, lies in the fact that it’s the biggest smartphone vendor in the world, having shipped nearly 80 million units globally last quarter.

Netflix Readies Itself For Temporary Video Downloads

All indications suggest that Netflix will release temporary video downloads as a feature on its video streaming platform by the end of the year.

Dan Taitz, CEO of Penthera, a company that specializes in providing download-enabling software to video streaming services, says it is only a matter of time.

“From industry sources, I know that Netflix is out in the market negotiating to get download rights in addition to streaming rights,” said Taitz. “Content providers are getting asked that now from Netflix so [Netflix] can offer a download service.”

While Netflix leads in streaming video, it has been outpaced by competitors in video downloads. For instance, video-streaming archrival Amazon has offered its Prime Video customers the option of downloading videos since last September. Though Netflix had previously averred that introducing additional options could paralyze users with choice, causing anxiety and confusion, it appears to have changed its mind on the subject. Earlier this year, Netflix CEO Reed Hastings commented that he was open to the idea of downloadable video.

“While our focus remains on delivering a great streaming experience, we are always exploring ways to make the service better,” said Netflix spokesperson Anne Marie Squeo. But she added, “We don’t have anything to add at this time.”

Taitz, for his part, sees downloadable video options as a natural progression from the proliferation of video streaming. He believes that once consumers are acclimated to the idea of downloading as a commonplace feature, they will grow to demand it.

If Netflix decides to pursue video downloading capabilities, it will have copyright and intellectual property issues to contend with. While all of its originally produced content will likely be available for download, as Netflix owns the content rights to it, the company will have to negotiate agreements with studios for downloading rights. It also has to determine which devices, mobile or otherwise, will support the feature.

“It would make sense to focus on mobile device support over streaming media boxes connected to the TV as that’s the real benefit of a downloading service,” notes Dan Rayburn, an analyst with Frost & Sullivan, “but the service will be limited in use to a degree, since mobile devices can’t store too many movies.”

Content Delivery Infrastructure Updates #7

CloudFlare and Telstra Do the Tango

CloudFlare, which supports over 4 million websites through a global network spanning 45 countries, has been trading jabs with Telstra over its claim that Telstra’s bandwidth prices are excessive. In a recent analysis of relative bandwidth costs, CloudFlare argued that a handful of networks drive up its mean bandwidth prices: “there are six expensive networks (HiNet, Korea Telecom, Optus, Telecom Argentina, Telefonica, Telstra) that are more than an order of magnitude more expensive than other bandwidth providers around the globe and refuse to discuss local peering relationships. To give you a sense, these six networks represent less than 6% of the traffic but nearly 50% of our bandwidth costs.”
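Taken at face value, those two percentages already imply the “order of magnitude” gap CloudFlare describes. Here is the back-of-the-envelope arithmetic, as a quick illustration in Python using nothing beyond the figures quoted above:

    # Relative per-unit bandwidth cost implied by CloudFlare's figures:
    # the six networks carry ~6% of traffic but incur ~50% of bandwidth cost.
    expensive_traffic_share = 0.06
    expensive_cost_share = 0.50

    # Cost per unit of traffic, expressed relative to the overall average (1.0).
    expensive_unit_cost = expensive_cost_share / expensive_traffic_share                   # ~8.3x average
    everyone_else_unit_cost = (1 - expensive_cost_share) / (1 - expensive_traffic_share)   # ~0.53x average

    print(f"Expensive networks: {expensive_unit_cost:.1f}x the average unit cost")
    print(f"Everyone else:      {everyone_else_unit_cost:.2f}x the average unit cost")
    print(f"Gap: ~{expensive_unit_cost / everyone_else_unit_cost:.0f}x more expensive per unit of traffic")

Run as written, this puts the six networks at roughly 16 times the per-unit cost of CloudFlare’s other bandwidth providers, consistent with the “more than an order of magnitude” claim.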

CloudFlare says that, in view of these exorbitant rates, it has taken the drastic measure of moving customers on its Free plan off the six transit networks in order to show them “how out of step they are with the rest of the world.”

Telstra has hit back with the claim that CloudFlare overstated its bandwidth costs: “Previous analysis undertaken by CloudFlare two years ago overstated Telstra’s charges by a factor of ten.” CloudFlare, in turn, stated that this was simply untrue and averred that peering with CloudFlare would save Telstra on transit costs.

Akamai Upping Its Game In Cloud Security: Managed Kona Site Defender Service

In the face of rapidly evolving threats, Akamai is unveiling its Managed Kona Site Defender Service, a web security offering designed for companies that wish to offload security monitoring, threat mitigation, and management of the Akamai Cloud Security Solution. The service comprises three core focus areas: attack readiness, security monitoring and attack support, and security reporting.

Attack readiness entails thoroughly scrutinizing a client’s web traffic in order to assess its defense posture and come up with recommendations for combating malicious traffic, a procedure known as threat update review. After security solution configurations have been tuned and optimized to respond to the current threat landscape, Akamai and client security teams run drills to coordinate threat mitigation efforts.

Managed Kona Site Defender also provides real-time 24/7 threat monitoring, proactively assessing log events from Kona Site Defender Alerts to ensure early threat detection. Attack support is provided as soon as an attack is detected, with news of the incident being relayed to the client and the Security Operations Center.

After such an attack is successfully mitigated, Akamai will compose a Security After Action Report detailing the nature of the threat, the manner in which it was resolved, and recommendations.

Finally, as devising a robust web security solution is an iterative process, Managed Kona provides a monthly report that reviews the status of a client’s security solution as well as any security incidents that occurred.

VDMS (Edgecast) Expands in South America

Verizon Digital Media Services has expanded its Edgecast network with six new points of presence across five South American countries, among them Medellin, Colombia; Barranquilla, Colombia; Rio de Janeiro, Brazil; Lima, Peru; and Buenos Aires, Argentina. The Brazilian PoP functions as a Transact PoP, a robust, purpose-built, PCI Level 1 certified CDN that can securely carry e-commerce traffic.

Verizon Digital Media Services has also gained a new Brazilian client, CVC Brasil Operadora e Agencia de Viagens SA, which will take advantage not only of the network enhancements provided by the Edgecast CDN, but also of its authoritative-only IP Anycast DNS, Application Delivery Network, and Web Application Firewall. Miguel Alcantara of CVC Brasil said of the deal: “We required a dependable, agile solution that could adapt to our ever-growing digital market. Throughout a lengthy, rigorous trial process, Verizon Digital Media Services consistently demonstrated its quickness, nimbleness, issue resolution, and ease of use with high quality. We could not have asked for a better partner.”

This brings Edgecast’s network footprint in South America up to 8 PoPs in 7 cities and 5 countries, making it one of the largest on the continent. As OTT traffic continues to grow in South America, Verizon is investing heavily in improving its CDN services: “The addition of these new PoPs significantly improves our network performance on key measurements such as response time — the amount of time between receiving users’ requests and responding to them — and throughput — the speed of delivering data back to a user’s device. In recent tests, our network had the fastest throughput and quickest response time when compared to other CDNs in the respective LatAm countries.”

Affiliate Fraud Targeting 71 of the Alexa Top 500

PerimeterX has detected a new affiliate marketing fraud attack based on a network of browser extension malware. This affiliate fraud attack has targeted thousands of websites, including 71 of the Alexa top 500 websites, hijacking users and charging affiliate and referral fees that can reach thousands of dollars.

Affiliate marketing is a common tactic for driving up traffic, in which third parties are paid to refer, divert, or otherwise steer users to a website. Some affiliate arrangements award the referring site as much as 30% of what a referred visitor spends, so there is plenty of financial motivation to exploit them.

The fraud scheme works by installing a malicious browser extension that monitors user activity and can even operate while the user is actively browsing. The extension attributes all of the user’s activity and purchases to a fraudulent affiliate, stealing from websites and their legitimate referring partners. Such schemes are difficult to detect because they piggyback on genuine user behavior and simply take credit for the resulting referrals and purchases.
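As a simplified, hypothetical illustration of the mechanism PerimeterX describes (the parameter name and IDs below are made up; real affiliate programs differ), all a hijacking extension has to do is rewrite the affiliate tag on requests the user was going to make anyway:

    # Hypothetical sketch of affiliate-tag hijacking (illustrative only).
    # A malicious extension rewrites the referral parameter so the fraudster's
    # ID, rather than the legitimate referrer's, earns the commission.
    from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

    FRAUDULENT_AFFILIATE_ID = "attacker-999"   # made-up attacker-controlled ID

    def hijack_affiliate_tag(url: str, param: str = "aff_id") -> str:
        parts = urlparse(url)
        query = parse_qs(parts.query)
        query[param] = [FRAUDULENT_AFFILIATE_ID]   # overwrite the honest referrer
        return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

    legit = "https://shop.example.com/checkout?item=123&aff_id=honest-partner-42"
    print(hijack_affiliate_tag(legit))
    # https://shop.example.com/checkout?item=123&aff_id=attacker-999

Because the purchase itself is genuine, the merchant sees a normal conversion; only the attribution is stolen, which is exactly why this kind of fraud is hard to spot.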

Akamai Delivers Like a Charm During Rio Olympics


The 2016 Summer Games were many things to many people, but in the CDN world they proved that online streaming of live events has grown tremendously, at the expense of broadcast. NBC’s Rio Olympic broadcast ratings were 20% lower than those for the 2012 London Olympics.

At the same time, NBC streamed more hours of live video coverage (4,500+) from Rio than at any previous Olympic event. Akamai, which handled CDN delivery for NBC’s Olympic coverage, delivered more data in three days at Rio than it had in all 17 days at London. All this is to say that if you are broadcasting the Olympics, you had better have streaming down pat.

The task presented numerous logistical challenges for Akamai, which needed to take every live-streamed Olympic event (which was all of them), bring the content onto its platform, and distribute it to end users scattered across the globe. The feat also required supporting a massive library of footage ranging from complete replays to highlights.

During its coverage, Akamai also encountered 4.5+ Tbps peak traffic, higher than ever before for an event of its kind, and viewers equipped with stronger broadband connections than they had for the 2012 Olympics.

To that end, Akamai adopted a variety of precautions: “Large on-demand libraries put a huge stress on its caching platform, so it worked closely with partners to create a scalable solution, providing offload to protect the libraries. It looked at the full distribution pipe, providing layers of caching and distribution to protect the origin, while also managing the last mile to viewers’ homes.”

In addition, to prepare for 1080p video upload and distribution, Akamai dedicated parts of its platform exclusively to Olympic video and worked to align its massively distributed edge platform with NBC’s encoding partners. Stay tuned for the Olympics in 2020, when 4K video is thrown into the mix.

What Does DIY Mean to Akamai and Industry

Akamai is Top of Mind

Since the day Akamai announced their earnings, they have been Top of Mind in the industry, dominating the PR battle against the competition, including CloudFlare. But not only is Akamai winning the PR game, they are also winning the innovation game, having introduced a bunch of new services in the last few months, like Managed Kona, which wraps Professional Services + 24/7 support into their WAF offering. Now let’s discuss some of the trends, shifts, and disruptions happening in the market.

Akamai’s Earnings: The dust has settled on Akamai’s stock price and it’s almost back to normal now. Around every earnings call, the market reaction plays out like a broken record, which is fun to watch. The story goes: Akamai announces earnings, the stock drops a decent amount, only to come back to where it was a month later. During the earnings call, Akamai stated that two of their largest customers, likely Apple and Facebook, which represented 12% of total revenue in Q2-2015, decreased to 5% of total revenue.

In addition, Akamai’s Top 6 customers, which represented 17.8% of total revenue in Q2-2015, dropped to 10.7% in Q2-2016. The market thinks this is bad news; however, we believe it’s the best news to come out of the Akamai camp in the last year. Akamai should put all their effort and focus into reducing the 5% (two largest clients) and 10.7% (top 6 clients) to .005% and 3%. This is Business 101, Michael Porter style: the last thing you want in this world is to have your fate in the hands of one or two customers that have all the leverage in the world. Even if Apple and Facebook have a million caching servers in place, they’ll always need a secondary CDN as backup in case their networks take a dump, which happens more often than people think. And since Akamai has the largest network, it’s them by default.

Buckets and Bits and Numbers Don’t Lie

The Street and many thought leaders in the industry are screaming from the rooftops that DIY is a game changer that will wreak havoc on existing business models because of what Facebook, Amazon, Netflix, Apple, Google and Microsoft are doing. But just because Amazon, Facebook and Google are going nuts with DIY CDN doesn’t mean everyone is doing it. Google, Facebook, and Amazon are DIY everything to their core: DIY servers, DIY routers, DIY storage, DIY everything. For clarity, let’s break down the CDN customer segment into buckets and bits and see what happens:

Buckets (Demographics)
  • Bucket 1: Google, Apple, Amazon, Facebook, Netflix and Microsoft
  • Bucket 2: Big Telcos: Verizon, AT&T, Comcast, Telefonica, etc.
  • Bucket 3: Riot Games, World of Warcraft (Blizzard), Steam (Valve Software), etc.
  • Bucket 4: Automattic (WordPress), Vimeo, Wowza, Wix
  • Bucket 5: Hosting companies
  • Bucket 6: Disney, Viacom, Lions Gate, etc
  • Bucket 7: Stripe, LinkedIn, Whatsapp, Instagram, Airbnb, Salesforce, etc.

Continue reading What Does DIY Mean to Akamai and Industry

Content Delivery Infrastructure Updates #6

Akamai Raises The Bar of Vulnerability Management

Akamai has written the blueprint on threat mitigation and InfoSec, at least according to its blog post about the Akamai Intelligent Platform. The Platform comprises a globally distributed network of 216,000 servers and underpins Akamai’s CDN and cloud security offerings. Built from scratch with security in mind, the Platform relies on a customized, streamlined variant of Linux that removes extraneous functionality, dramatically reducing its vulnerabilities. Each Edge server is also configured as a bastion host to bolster security.

The Platform’s stripped-down, customized design also means that off-the-shelf patches cannot simply be applied should the need arise. If a critical vulnerability is found, patches are developed under quality assurance protocols and tested rigorously before being installed.

That being said, Akamai points to the many other layers of defense against vulnerabilities that reduce the need for patching SLAs. Its first line of defense is a meticulous Vulnerability Management Process. The guidelines stipulate that Akamai continuously operate a parser that tracks daily CVEs and publicly available patches to stay abreast of new vulnerabilities and fixes. Security risks are evaluated based on potential damage and potential attackers, and those deemed critical or high risk are directed into the Incident Management Process, which ensures they receive proper attention. Moderate and low risks are evaluated and patched on a regular schedule. Finally, Akamai adheres to Federal Risk and Authorization Management Program (FedRAMP) guidelines.
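Akamai does not publish the parser itself; the sketch below is only a rough illustration of the idea, using NIST’s public NVD 2.0 REST API (the endpoint, parameters, and response fields are assumptions based on its public documentation, not anything Akamai has disclosed):

    # Rough sketch of a daily CVE tracker with severity-based triage.
    # Not Akamai's tooling; endpoint and field names are assumptions.
    import datetime as dt
    import requests

    NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    def fetch_recent_cves(days: int = 1) -> list:
        end = dt.datetime.utcnow()
        start = end - dt.timedelta(days=days)
        params = {
            "pubStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000"),
            "pubEndDate": end.strftime("%Y-%m-%dT%H:%M:%S.000"),
        }
        resp = requests.get(NVD_URL, params=params, timeout=30)
        resp.raise_for_status()
        return resp.json().get("vulnerabilities", [])

    def triage(vulns: list) -> None:
        # Critical/high findings go to incident handling; the rest are queued
        # for the regular patch schedule, mirroring the process described above.
        for item in vulns:
            cve = item["cve"]
            metrics = cve.get("metrics", {}).get("cvssMetricV31", [])
            severity = metrics[0]["cvssData"]["baseSeverity"] if metrics else "UNKNOWN"
            queue = "INCIDENT" if severity in ("CRITICAL", "HIGH") else "scheduled"
            print(f"[{queue}] {cve['id']} ({severity})")

    triage(fetch_recent_cves())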

The second line of defense is a group of rules and backup health checks, such as the Web Application Firewall, the Forcefield port-hardening solution, and audit servers on the CDN that monitor server processes.

The final line of defense is Akamai’s Network Operations Command Center, which handles reports of anomalous or threatening activity 24/7. The NOCC is empowered to immediately suspend or wipe a server should the need arise.

Pinterest and Varnish VCL Make 50 Billion Pins Per Month Possible

Varnish Software has garnered favorable attention for its instrumental role in Pinterest’s day-to-day operations. Pinterest, a bookmarking application, boasts 100 million active users who generate 50 billion pins per month, 180,000 requests per second, and 10 million unique user actions per minute. To support this swarm of activity and traffic, Pinterest has leveraged Varnish VCL to create a resilient and scalable CDN.

Varnish is an open-source HTTP accelerator for websites that delivers quickly: “Put in front of application server, it is super simple, therefore it is also 200 to 1000 times faster. So every time you move data from caching layer to application server, Varnish will supply the data in 30-40 microseconds as opposed to typical cache which is 10 to 20 milliseconds,” says Varnish founder Per Buer. It uses the Varnish Configuration Language (VCL), a straightforward domain-specific language “used to define request handling and document caching policies for the Varnish HTTP accelerator”.

While it may seem unnecessary and tiresome to pick up a new proprietary language, Varnish contends that VCL is designed with basic, straightforward syntax so as to be intuitive and easy to control. It is also strict and rigid: VCL is not a general-purpose interpreted language, has no loops, and does not cache anything until all the caching rules have been satisfied. It can manipulate HTTP requests, override TTLs, strip cookies, and rewrite URLs.

What makes VCL convenient is that it can be updated or altered without a shutdown or system restart. In addition, multiple VCL configurations can be loaded concurrently, allowing users to switch seamlessly between them.

Verizon (Edgecast) and Airtel Create Partnership

Verizon has announced a partnership with Bharti Airtel and its intention to leverage Airtel’s digital infrastructure to establish points of presence in Mumbai, Chennai, Bangalore, and New Delhi. With these new points of presence, Verizon Digital Media Services will be able to deliver content and media to its enterprise customers from locations closer to end users and to process and respond to requests more quickly.

“We are expanding our content delivery network in strategic markets that our customers care about, and we have found a long-term partner in Airtel Business,” commented Rob Peters, Chief Technology Officer of Verizon Digital Media Services. “The launch of these strategic PoPs marks the beginning of a strong partnership between Verizon and Airtel Business and further cements our commitment to providing consumers in India, one of the fastest-growing markets for digital media consumption, with exceptional services and quality.”

According to Airtel’s press release, Verizon has seen marked network performance improvements since installing the PoPs, reducing the amount of time its servers need to process requests, respond and deliver results.

“India is fast emerging as a large regional IP Hub and we are excited to enable Verizon Digital Media Services to provide best-in-class digital services in India. This collaboration ensures superior user experiences and seamlessly handles traffic spikes as connected devices, subscribers and content size continue to grow. We are delighted by the positive results seen in a very short time and we are confident that in the long term, Indian mobile Internet users are going to immensely benefit from this partnership,” said Ajay Chitkara, Director of Bharti Airtel.

Thus far, Verizon Digital Media Services has launched 3,000 points of presence across North and South America, Europe, Australia, and Asia.

How Not To Stream The Olympics

Channel Seven in Australia has incurred a withering hail of criticism for its botched broadcast of the Olympics, specifically the swarm of bugs in its digital streaming system. The network holds exclusive Olympic broadcasting rights in Australia and shows the Olympics on three digital channels as well as via live streaming online. It also offers a $20 premium package that promises access to 36 channels and comprehensive Olympic coverage, in addition to other extras such as on-demand replays on both its app and its website.

However, it has failed to deliver on its promises according to users who have complained of long loading times, crashes, inconsistent service, and even lack of color commentary. As more subscribers have called for their money back, Channel Seven has acknowledged the issue and promised to resolve it expeditiously. It attributed the glitches to unprecedented streaming traffic related to the Summer Games and other issues that its streaming partners were working to address. A network spokesperson informed the Sydney Morning Herald that “Our streaming partners – Olympic Broadcasting Services in Rio and [content delivery service] Akamai – have assured us they are doing everything they can to avoid a repeat of any problems.”

AWS Introduces Transit VPC Solution to Support Complex Network Connectivity

Amazon Web Services has developed the Transit VPC Solution, which leverages a network construct known as the transit VPC to connect multiple VPCs to a core VPC that functions as the global network transit hub, simplifying network management and topology and reducing the number of network connections needed. The transit VPC can be used in various scenarios including private networking, shared connectivity, and cross-account AWS usage. The solution is also completely virtual and does not require a physical colocation transit hub or network gear.

This allows AWS customers to create as many VPCs as they want, of varying complexity, regardless of geography. A transit VPC acts as a global network transit center that ties together geographically remote networks in a hub-and-spoke configuration.
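As a minimal sketch of the pattern (not the packaged AWS solution itself; the tag key and CIDR ranges below are assumptions for illustration), the spoke side boils down to attaching a virtual private gateway to each VPC and tagging it so the hub’s automation knows to build VPN tunnels back to it:

    # Minimal boto3 sketch of the transit-VPC spoke setup (illustrative only).
    # In the packaged AWS solution, an automation component in the hub watches
    # for tagged gateways and builds the VPN tunnels; the tag key/value here
    # are assumptions, not the solution's documented values.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    def create_spoke(cidr: str) -> str:
        vpc_id = ec2.create_vpc(CidrBlock=cidr)["Vpc"]["VpcId"]
        vgw_id = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]["VpnGatewayId"]
        ec2.attach_vpn_gateway(VpcId=vpc_id, VpnGatewayId=vgw_id)
        # Tag the gateway so hub-side automation can discover and connect it.
        ec2.create_tags(Resources=[vgw_id],
                        Tags=[{"Key": "transitvpc:spoke", "Value": "true"}])
        return vpc_id

    hub_vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]   # transit hub
    spokes = [create_spoke(cidr) for cidr in ("10.1.0.0/16", "10.2.0.0/16")]
    print("hub:", hub_vpc, "spokes:", spokes)

The appeal of the design is that spokes never peer with each other directly; all inter-VPC and on-premises traffic rides the VPN overlay through the hub, which keeps the connection count linear rather than quadratic.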

APAC Telco Updates

China Mobile’s Profits Fueled By 4G Boom

China Mobile, the world’s largest wireless service provider, has been working feverishly over the last two years to roll out an ambitious new 4G network in China. Its efforts have paid handsome dividends, as its Q2 results attest. Profits jumped 9.1% to nearly $5.5 billion during the period, cementing China Mobile’s status as the dominant Chinese telco ahead of its rivals Unicom and China Telecom.

While all three of China’s state-owned telcos have begun the transition to 4G, China Mobile enjoyed a head start, having been granted a 4G license by the Chinese government a full year before any of its competitors. That being said, Light Reading notes that China Mobile’s success is all the more striking given that it deploys the TDD variant of 4G, which is uncommon compared to the FDD variant that Unicom and China Telecom have opted for. China Mobile also saw increased revenue from its data services.

Over the first half of this year, its operating revenues have grown 7.1% to $55.8 billion and China Mobile has fended off domestic rivals to garner another 116 million 4G subscribers, bringing its total up to 428 million. This means that well over half of its 837 million subscribers, a staggering sum in its own right, are paying for 4G service. Beyond having customers switch to more lucrative 4G subscriptions, China Mobile snagged nearly 11 million new customers during the first half of 2016.

Nokia Agrees to Help China Telecom with LTE Infrastructure

The Finnish telco vendor Nokia has inked an agreement with state-owned telco China Telecom, one of the biggest carriers in China, to help expand the carrier’s LTE footprint by deploying 4G infrastructure in 19 provinces of China. The base stations Nokia has agreed to install will help increase throughput and coverage while enhancing quality of experience.

China Telecom is embroiled in fierce competition against China Mobile, whose 4G plans have been met with immense success, to win 4G subscribers as more and more suburban and urban Chinese users begin demanding ubiquitous 4G connectivity. China Telecom currently has 207 million subscribers with just under half of them (90 million) paying for 4G.

The 19 provinces in which Nokia will be installing its Flexi Multiradio 10 base stations are Shanghai, Jiangsu, Zhejiang, Hunan, Shandong, Hubei, Guangxi, Fujian, Jiangxi, Shanxi, Hebei, Henan, Heilongjiang, Sichuan, Anhui, Liaoning, Guizhou, Xinjiang, and Inner Mongolia.

Beyond installation, Nokia has agreed to provide associated services including project management, network design, hardware and software maintenance services, and optimization.

By expanding its LTE infrastructure, China Telecom is not only anticipating exponential growth in 4G data traffic, which is expected to account for 90% of mobile data traffic by 2017, but also paving the way for Voice over LTE service in 2017.

Foxconn Finalizes $3.81 Billion Acquisition of Sharp

The Taiwanese electronics manufacturer Foxconn has acquired Sharp Corp., shelling out $3.81 billion for a 66% stake in the company after receiving approval for the deal from China’s anti-monopoly regulator, The Wall Street Journal reports. As part of the transition, Sharp CEO Kozo Takahashi will be replaced by Tai Jeng-wu, known as Foxconn CEO Terry Gou’s right-hand man.

Sharp, a century-old Japanese electronics company, struggled with big losses earlier this year and has seen its standing fade in recent years. Its shares rallied by 19%, however, after investors learned that the takeover had been approved and that Sharp would be receiving a $3.81 billion cash infusion. Sharp has undergone two major bailouts in the last four years and came close to bankruptcy in 2012. Though its display panel business is robust, Sharp has struggled to compete against South Korean and Chinese rivals in the consumer electronics business it was once well known for.

Foxconn, for its part, sees a clear opportunity to expand its business with Apple. Sharp’s liquid crystal display division produces some of the expensive displays that go into the iPhone, and Foxconn, already a major iPhone manufacturer and assembler, will now be able to sell yet another component to Apple. This may in turn help Apple drive down production costs and offer iPhones at lower prices in emerging markets such as India, where it has been taking a beating from better-priced competitors. The deal also gives Foxconn an opportunity to branch out into other consumer electronics under the Sharp brand.

Foxconn also acquires Sharp’s organic light-emitting diode (OLED) unit, a technology that provides clearer images and greater energy efficiency than LCD. While South Korean manufacturers currently lead in OLED development, the Taiwanese manufacturer may have plans to compete for supplying OLED displays to Apple for upcoming iPhones and iPads. Still, Foxconn has its work cut out for it, given not only Sharp’s $3 billion in liabilities but also the stark difference in corporate cultures.

China Mobile and Ericsson Conduct First 5G Drone Test

Scandinavian telco vendor Ericsson and China Mobile have collaborated to conduct the world’s first trial of a 5G drone prototype, which may set the stage for a low-latency, distributed 5G network. Such drones could be deployed to support emergency services in low-latency situations, provided the network operator can reliably support them, according to Ericsson.

The trial took place in Wuxi, in China’s Jiangsu province, where the drone flew through multiple handovers across multiple sites on which commercial mobile phone users were active. China Mobile, which has worked with Ericsson since 2010, provided the cellular network that supported the entire mission.

“With commercial implementation expected from 2020, Ericsson’s 5G research is coming out of the labs and into live test networks,” said Chris Houghton, Head of Region Northeast Asia at Ericsson. “We see tremendous opportunities in 5G, and we are mobilizing the ecosystem and collaborating with industry leaders such as China Mobile to help make 5G a reality.”

“China Mobile join hands with Ericsson on research and trials of 5G key technologies, leveraging China Mobile’s 5G joint innovation center as the platform,” said Huang Yuhong, Deputy Head of the China Mobile Research Institute. “Both companies target to explore the 5G possibilities in vertical industries and new use case scenarios, and together drive towards 5G commercial capability in 2020.”

One of the stated goals of the undertaking, according to Ericsson, is to optimize latency for mission critical use cases, using an aerial 5G-enabled drone to distribute part of the network to the cellular edge: “The drone trial is therefore an important step toward 5G networks in which part of a network can be distributed and dynamically deployed at the cellular edge in order to reduce end-to-end latency, and to serve a range of 5G use cases at the same time.”

Content Delivery Infrastructure Updates #5

Nginx Plus vs. F5 BIG-IP and Citrix NetScaler

Nginx has posted two cost-benefit analyses comparing Nginx Plus with hardware application delivery controllers (ADCs), namely F5 BIG-IP and Citrix NetScaler. Unsurprisingly, it found that Nginx Plus matched or exceeded their performance while cutting costs significantly.

Running on commodity hardware, Nginx Plus was found to meet or exceed the performance of F5 BIG-IP appliances while delivering cost savings of up to 85%. The three performance metrics used in the analysis were HTTP requests per second, SSL/TLS transactions per second, and HTTP throughput.

Specifically, Nginx compared F5’s entry-level, mid-range, and high-end ADCs to comparable Nginx Plus configurations. In each comparison, Nginx Plus was found to provide 75-85% cost savings while performing roughly on par with or better than F5 on the performance metrics.

The key to its success? Nginx attributes it to the fact that it doesn’t bundle hardware with software or impose artificial performance caps on its software, allowing customers to provision hardware according to their needs.

Nginx ran the same performance-cost analysis against Citrix NetScaler and found that clients could save up to 89% in costs without sacrificing performance. This translates to potentially tens of thousands of dollars in savings when using Nginx Plus for roughly the same performance as a comparable edition of Citrix NetScaler.

Nginx Plus also has the added bonus of markedly superior HTTP throughput, delivering 20-30 Gbps compared to the modest 0.5-6 Gbps range that Citrix provides. In summary, Nginx Plus offers better price-performance and equivalent, if not superior, performance compared to the hardware ADCs.

Imperva – Top 4 Flaws in HTTP/2

Imperva’s latest Hacker Intelligence Initiative report outlines four high-profile flaws in HTTP/2, the next-generation web protocol. In particular, HTTP/2 introduces new mechanisms that increase the attack surface and exposure of business-critical web infrastructure.

In its analysis, Imperva was able to locate exploitable vulnerabilities in every major underlying mechanism of the HTTP/2 server implementations it examined, and it surmises that other HTTP/2 implementations are likely to suffer from similar vulnerabilities.

The findings are troubling and worth considering given the rapid rate at which the HTTP/2 protocol is being adopted; it is currently used by nearly 9% of all websites.

“The general web performance improvements and specific enhancements for mobile applications introduced in HTTP/2 are a potential boon for internet users,” said Amichai Shulman, co-founder and CTO of Imperva. “However, releasing a large amount of new code into the wild in a short time creates an excellent opportunity for attackers. While it is disturbing to see known HTTP 1.x threats introduced in HTTP/2, it’s hardly surprising. As with all new technology, it is important for businesses to perform due diligence and implement safeguards to harden the extended attack surface and protect critical business and consumer data from ever-evolving cyber threats.”

In particular, the Imperva report highlighted four high-profile vulnerabilities found in the HTTP/2 server implementations of popular web servers, including Nginx, Apache, IIS, Jetty, and nghttp2:

  1. Slow Read – The Imperva Defense Center identified various iterations of the slow read vulnerability, in which a malicious client reads responses very slowly, in many popular web servers (see the sketch after this list).
  2. HPACK Bomb – An attacker sends seemingly innocuous, compact messages to a victim server that, once decompressed, expand into gigabytes of data, eating up server memory.
  3. Dependency Cycle Attack – This attack preys on a new flow control mechanism that HTTP/2 introduces. The exploit uses requests that induce an infinite dependency cycle, crippling the server as it struggles to process these dependencies.
  4. Stream Multiplexing Abuse – A malicious client takes advantage of flaws in stream multiplexing functionality, crashing the server and denying service to legitimate users.
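To make the first item concrete, here is a deliberately minimal slow-read sketch, shown over plain HTTP/1.1 for brevity and intended only for a lab server you control (hostname and path are placeholders); the HTTP/2 variants Imperva describes achieve a similar effect through the protocol’s per-stream flow control:

    # Conceptual slow-read sketch (HTTP/1.1 for brevity; test only against
    # servers you own). The client requests a large resource, then drains the
    # response a byte at a time, pinning server-side buffers and workers.
    import socket
    import time

    HOST, PORT = "lab.example.internal", 80     # placeholder lab server
    PATH = "/large-file"                        # placeholder large resource

    sock = socket.create_connection((HOST, PORT))
    sock.sendall(
        f"GET {PATH} HTTP/1.1\r\nHost: {HOST}\r\nConnection: keep-alive\r\n\r\n".encode()
    )
    while sock.recv(1):          # read one byte at a time...
        time.sleep(10)           # ...with long pauses, holding the connection open

A single client like this is harmless on its own; the attack works by opening many such connections until the server runs out of workers or memory, which is why mitigations focus on read timeouts and concurrency limits.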

Nginx wrote a blog post hastening to add that it performed well relative to its peers. Its HTTP/2 server implementation was found to be immune to three of the four flaws outlined above, suffering only from the slow read vulnerability, which allowed denial-of-service attacks to succeed. Once the fault was reported, Nginx implemented fixes and reports that none of its current versions are vulnerable to slow read. That being said, Nginx recommends that clients running older versions of its HTTP/2 implementation upgrade to newer ones.

Open-source Video Platform Kaltura Raises $50MM from Goldman Sachs and Plans to Go Public

Kaltura is a New York-based video platform that allows clients to manage, analyze, and monetize online video offerings. To this end, it also offers a suite of APIs for transcoding, distribution, and more, according to VentureBeat. In anticipation of going public, Kaltura has raised $50 million in capital from Goldman Sachs’s Private Capital Investment Group, which it calls pre-IPO funding. Prior to this infusion, Kaltura had raised approximately $115 million across its funding rounds.

Kaltura serves video streaming software to a diversified range of prominent clients across industries, including HBO, Time Warner, Viacom, Intel, SAP, and Oracle, which was one of the reasons Goldman Sachs cited for its decision to invest directly in the company.

In a press release, Kaltura CEO Ron Yekutiel commented that “Video is booming — it is the favorite data type for communication, collaboration, teaching & learning, marketing, and entertainment. Kaltura is in a unique position to capitalize on this huge market opportunity since it provides the broadest set of video products and capabilities to customers across all industries.”

While the company has yet to confirm a date for its upcoming IPO, it has announced that it will spend the investment on expanding its presence globally.

CenturyLink Selling Its Data Centers

CenturyLink is planning to sell off its data center assets by the end of this fiscal year, its CEO has confirmed, according to an article in Fierce Telecom. The company is negotiating the finer points of the sale with a group of bidders, CEO Glen Post said, and is nearing a final deal in which it would completely divest its data center business.

While it is conjectured that the extra liquidity from the asset sale could free up funds to invest in network virtualization and broadband, Post has yet to confirm how the proceeds would be used.

Post also floated a couple of other possibilities including paying down debt: “We’ll obviously consider a number of alternatives, stock buybacks, debt reduction, investment and strategic services, and other possible areas…we’re waiting until we get a little closer to the close to really decide how to best spend those funds. Our primary goal will be to utilize the cash in the very most effective way we can to drive long-term shareholder value.”

Post suggested in an earnings call with investors that, if negotiations fall through, the company would seek partnerships to jointly run the data centers in order to drive down costs. Colocation revenues have been stagnating recently, so that may prove a difficult proposition. The move may presage similar steps from other telcos involved in the data center business, such as Verizon.

Rackspace Going the Private Equity Route

Rackspace is nearing completion of a deal to be acquired by Apollo Global Management for as much as $3.5 billion. Rackspace Hosting Inc. is a U.S. managed cloud services provider whose acquisition would help Apollo deepen its holdings and investment in the technology sector.

Rackspace has struggled to compete in recent years as its prices have been undercut by larger cloud providers such as Amazon Web Services, Google, and Microsoft. SDxCentral reports that, in response, Rackspace moved to partner with the likes of Microsoft and AWS and to revamp its cloud offerings so that corporate clients could seamlessly set up OpenStack clouds. Rackspace’s adjusted EBITDA in 2015 amounted to $678 million, and the company’s shares spiked 11% following news of the deal.

Google Moves Into Cloud Billing and Selling By Acquiring Orbitera

Orbitera is the latest company to be absorbed into the ever-growing colossus that is Google. According to a report by Recode, Google’s enterprise cloud and apps unit has been its most rapacious and acquisitive. Orbitera, which automates the billing and selling process in the cloud, will diversify Google’s cloud offerings, expanding its client base and allowing it to compete more effectively with AWS.

According to Google’s announcement, “Orbitera provides a commerce platform that makes buying and selling software in the cloud simple, seamless and scalable for all kinds of businesses, including independent software vendors, service providers and IT channel organizations.” Google also touted the fact that Orbitera had launched 60,000+ enterprise stacks for the likes of Adobe, Oracle, and Megalogix.

TechCrunch reports that the sale was for upwards of $100 million, according to its sources. What these small-scale acquisitions do, however, is round out Google’s cloud offerings, increasing interoperability and expanding its cloud, marketing, and apps technology.

SigFox Partners With UnaBiz To Make Taiwan an IoT Testbed by 2017; And Other News

SigFox Partners With UnaBiz To Make Taiwan an IoT Testbed by 2017

The French communications service provider SigFox has announced a major venture to make Taiwan an internet of things testbed, in partnership with UnaBiz, which operates IoT networks in Asia. SigFox, a leading provider of IoT services in Europe that operates 22 IoT networks worldwide, made the announcement alongside French and Taiwanese officials and specified that its proprietary technology would be the centerpiece of an IoT testbed connecting Taiwan’s manufacturing industry and machinery to the rest of the world. UnaBiz, which has a strategic partnership with SigFox and led efforts to deploy a similar network in Singapore, will play a large role in deploying the network. This will be SigFox’s fourth IoT network in the Asia-Pacific region.

UnaBiz CEO Henri Bong noted that launching a large-scale testbed in Taiwan made sense due to its developed electronics industry and large number of possible deployment scenarios: “This extension to Taiwan is key in our regional development strategy, because Taiwan has the largest ecosystem of devices and solutions to accelerate the adoption of IoT use cases globally.” Both SigFox and UnaBiz believe that Taiwan represents an ideal testbed for speeding the learning and implementation of IoT design and use cases and that the project will ultimately strengthen SigFox’s IoT ecosystem worldwide.

The plan predicts that Taiwan will enjoy nationwide coverage by 2018, while its six major metropolitan centers, in which more than half the population of 23.5 million resides, will see coverage as early as 2017. SigFox is touting its uniquely light infrastructure, as compared to traditional wireless networking infrastructure, as the main reason for the rapidity of the rollout.

French official Benoît Guidée noted that the joint undertaking represented an opportunity to bridge French and Taiwanese tech innovation and capabilities. “SIGFOX is not only one of the best ambassadors of La French Tech spirit, it is also a large contributor to the entrepreneurship ecosystem in France through its European IoT Innovation Hub: the IoT Valley,” Guidée said. “By bridging the IoT Valley with Taiwan design and manufacturing capabilities, I am convinced that France and Taiwan have a great opportunity to shape together the future of the IoT in the world. France is thus inviting Taiwan to use SIGFOX as a gateway into a global market and, by doing so, simplifying the integration and accelerating the go-to-market strategy.”

Arista Networks Buoyed By Cloud Networking Demand to $268.7MM in Q2 Revenue

Arista’s second-quarter financial report reveals that the company made nearly $269 million in revenue during the quarter, up 37% year over year and exceeding analysts’ expectations by millions. The Santa Clara-based cloud networking company also reported net income of $38.9 million for the quarter and earnings of 54 cents per share.

According to CNBC, Arista foresees revenue of $279 million to $285 million for the third quarter, driven by steadily growing, healthy demand for its cloud networking products from the “cloud titans,” its most important group of customers. Aside from supporting the data center and cloud networking needs of cloud builders, Arista also counts the financial, services, and web and tech vertical segments among its clientele.

The company is also accounting for $11 million in litigation costs from ongoing legal brawls with OptumSoft and Cisco, while excluding stock-based compensation and other costs. The Cisco intellectual property litigation in particular has the potential to hinder Arista’s operations by preventing it from importing necessary components from offshore suppliers.

“We are in the midst of a multi-year shift from legacy IT silos to cloud networking,” stated Jayshree Ullal, Arista President and CEO, in a statement released via Business Wire. “This quarter puts Arista at a billion-dollar run rate, which is a fitting validation of this trend, based on our innovative cloud-class, cloud-scale and cloud-converged offerings.”

One of the innovative cloud-class offerings that Arista has recently launched is the 7280R Series Universal Leaf, a fixed switching and routing platform for data centers which is built on top of Arista’s Universal Cloud Network architecture to reduce OPEX. The integrated platform is versatile and able to handle a range of leaf networking deployment scenarios, especially IP storage, routing and digital media.

Arista’s quarterly highlights are as follows:

  • Revenue of $268.7 million, an increase of 11.0% compared to the first quarter of 2016, and an increase of 37.4% from the second quarter of 2015.
  • GAAP gross margin of 63.8%, compared to GAAP gross margin of 64.0% in the first quarter of 2016 and 65.4% in the second quarter of 2015.
  • Non-GAAP gross margin of 64.1%, compared to non-GAAP gross margin of 64.4% in the first quarter of 2016 and 65.8% in the second quarter of 2015.
  • GAAP net income of $38.9 million, or $0.53 per diluted share, compared to GAAP net income of $24.0 million, or $0.33 per diluted share, in the second quarter of 2015.
  • Non-GAAP net income of $53.7 million, or $0.74 per diluted share, compared to non-GAAP net income of $38.8 million, or $0.54 per diluted share, in the second quarter of 2015.

OpenSignal’s Test Shows T-Mobile Retains Fastest Wireless Over Verizon

Verizon may have the largest and most comprehensive LTE footprint in the US, but T-Mobile remains peerless when it comes to wireless speeds. OpenSignal conducted its tests between May 1 and July 10, generating over 2.8 billion data points by analyzing the data use of 120,000 participants. The study showed that T-Mobile delivered the fastest download speeds on both 3G and 4G networks, as well as overall.

The average download speed on T-Mobile was 16.28 megabits per second versus the 15.94 megabits per second that Verizon clocked. Verizon was just edged out by T-Mobile for the title of fastest network, though in reality it’s fair to say the two are neck and neck. Overall, both Verizon and T-Mobile recorded greater speeds than in their last evaluation, gaining around 5 megabits per second on average.

Verizon led in terms of 4G availability due to the expansion of its network. Its users were able to access 4G connectivity 86% of the time. T-Mobile made some gains in this measure as well, improving from 81.23% to 83.2% availability, while Verizon’s availability rate actually dipped slightly since the last test.

“While T-Mobile for some time has been contesting the top spot for speed, the big surprise was the rapid improvement in T-Mobile’s LTE availability that we saw in our data,” OpenSignal commented in its report. “Our users found Verizon is still the leader when it comes to providing a consistent LTE signal, but T-Mobile surpassed AT&T in 4G availability in our latest tests and is now closing the gap with Verizon.”

Sprint, on the other hand, was the sluggard among the Big Four wireless service providers evaluated by OpenSignal, struggling to overcome a significant gap, though it did lead in 4G latency.

Ookla, which runs a similar mobile testing app called Speedtest, concurred with most of the results of the OpenSignal test, with T-Mobile leading in upload speeds and just losing out to Verizon on download speeds. The test also showed AT&T gaining ground and pulling closer to T-Mobile and Verizon.

“Over the past year, AT&T had the largest raw improvement of the four major U.S. carriers with an increase of 5.85 Mbps to 19.94 Mbps by the end of June 2016, a 42% improvement,” according to Ookla. “Meanwhile, Sprint had the largest percentage-based performance improvement of the four major U.S. carriers, increasing their speed over the previous year by 54%, going from 9.99 Mbps to 15.42 Mbps. Sprint still trails the other three major U.S. carriers, but they are making a strong push to close the gap.”

Facebook Opens Area 404, a Hardware Lab

Facebook has recently commenced operations in its Area 404 lab, a secretive 22,000-square-foot facility, similar to Google X, that will be home to its ambitious R&D projects. While the company is famous for its social network, the billions in ad revenue it has accrued have given it the means to fund audacious hardware projects such as Oculus VR headsets, the internet-beaming Aquila drone, and portable cellular network platforms.

Mark Zuckerberg constantly refers to the fact that more than half the world lacks stable internet access, a disparity known as the digital divide, and has promised to commit Facebook’s resources to making Internet and wireless connectivity ubiquitous. Facebook, in exchange, will benefit from the potential addition of billions of new users and the attendant advertising revenue and data that they will bring with them.

The lab will focus on advancing networking, connectivity, virtual reality, and data centers. The facility itself features large-scale, industrial-grade machinery, including computerized lathes, mills, and water jet cutters. “With this new space, we can now handle the majority of our modeling, prototyping and failure analysis in-house, decreasing each iteration of the development cycle from weeks to days,” Facebook said in its blog post.

“Even more important, the space has room for all teams, with more than 50 workbenches in the main area. Connectivity Lab, Oculus, Building 8 and our Infrastructure teams can now work collaboratively in the same space, learning from one another as they build.”

Content Delivery Infrastructure Updates #4

Akamai and Limelight Networks Create a New Chapter in Their Relationship With Licensing Agreement

Global CDN leader Limelight Networks, Inc. has ended a long and costly legal battle of attrition with Akamai Technologies over the disputed ‘703 patent. The two companies have agreed to waive all rights to appeal and to convert the $51 million judgment into a $54 million license agreement instead, reaching an amicable accord.

The license will be paid in twelve quarterly installments over three years beginning August 1, 2016. As part of the deal, Akamai releases its claim on the $51 million letter of credit and frees Limelight to operate its network without fear of litigation over the patent.

“We are pleased by the outcome of this agreement. It eliminates the continuing risk from the ‘703 patent and allows us to extend the $51 million payment over a three-year period at an attractive interest rate,” said Robert Lento, CEO of Limelight in a press release. “This agreement, coupled with the series of financial and operational improvements, further enhances our confidence in Limelight’s value creation opportunity.”

It is worth noting, however, that this settlement does not resolve other ongoing legal disputes between the two companies, particularly the patent infringement claims filed by Limelight against Akamai and XO Communications in 2015. The press release clarifies that “This agreement excludes patents that Akamai asserted as counterclaims in the patent infringement case filed by Limelight against Akamai and XO Communications in the Eastern District of Virginia in late 2015 and does not otherwise impact Limelight’s rights in that case.”

Google Open Sources Implementation of Wide & Deep Learning API

Giving computers the power of deduction would be a giant leap for machine learning, enabling them to observe and memorize facts and to extrapolate generalizations from them just as humans do. Google, in a research blog post, broadly categorizes memorization as wide learning and generalization as deep learning (via a deep neural network), jointly dubbed Wide & Deep Learning.

The post notes that this formulation is suited to generic large-scale regression and classification problems with sparse inputs. Google has open-sourced its Wide & Deep Learning API as part of its TF.Learn API to allow the general public to develop such models themselves. Links to tutorials are available within the blog post.

By integrating both wide and deep learning, an application’s model is trained to use two complementary prediction paths and return more accurate results, improving over time as it learns from its errors: “During training, the prediction errors are backpropagated to both sides to train the model parameters. The cross-feature transformation in the wide model component can memorize all those sparse, specific rules, while the deep model component can generalize to similar items via embeddings.”
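As a minimal sketch of the idea (feature names and values below are hypothetical; the original post used the TF.Learn API, of which TensorFlow’s later estimator interface is the successor), a wide & deep model pairs a linear component over sparse and crossed columns with a neural component over dense and embedded columns:

    # Minimal wide & deep sketch using TensorFlow's estimator API.
    # Feature names and vocabularies are hypothetical placeholders.
    import tensorflow as tf

    # Wide side: sparse and crossed features the linear part can memorize.
    gender = tf.feature_column.categorical_column_with_vocabulary_list(
        "gender", ["female", "male"])
    education = tf.feature_column.categorical_column_with_vocabulary_list(
        "education", ["bachelors", "masters", "doctorate", "other"])
    wide_columns = [
        gender,
        education,
        tf.feature_column.crossed_column(["gender", "education"], hash_bucket_size=1000),
    ]

    # Deep side: dense and embedded features the neural part can generalize from.
    deep_columns = [
        tf.feature_column.numeric_column("age"),
        tf.feature_column.embedding_column(education, dimension=8),
    ]

    model = tf.estimator.DNNLinearCombinedClassifier(
        linear_feature_columns=wide_columns,   # memorization
        dnn_feature_columns=deep_columns,      # generalization
        dnn_hidden_units=[100, 50],
    )
    # model.train(input_fn=...)  # supply an input_fn yielding the features named above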

Open-Source Compression Technology Brotli Is a Key Ingredient for Dropbox

Google released its Brotli lossless compression algorithm for public consumption on GitHub late last year under an Apache open-source license. At the time, Brotli was touted as a highly efficient, more advanced compression algorithm capable of 20-26% higher compression ratios than its predecessor, Zopfli.

Smaller compressed output is useful and important for the simple yet powerful reasons that it allows for “better space utilization and faster page loads”, wrote Zoltan Szabadka, a Google engineer, on Google’s Open Source blog. “We hope that this format will be supported by major browsers in the near future, as the smaller compressed size would give additional benefits to mobile users, such as lower data transfer fees and reduced battery use.” Better compression makes for a faster, more seamless user experience, which in turn drives traffic and ad revenue, among other benefits.

Dropbox, it appears, couldn’t agree more. In a recent blog post, Dropbox revealed that it had been making open-source improvements to the Brotli codec in order to drive down bandwidth usage and improve syncing and collaboration on its platform. The team predicts that these improvements will yield roughly a 4.4% decrease in bandwidth usage for business customers. Another benefit of Brotli, beyond being open source, is that it is already supported as an encoding format on most popular browsers, including Firefox, Chrome, Opera, and Android Browser.
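
To make the compression claim concrete, here is a small, hedged comparison using the open-source Brotli Python bindings against gzip; the payload is made up, so actual ratios will vary with the content being compressed.

    # Compare Brotli and gzip on a repetitive HTML-like payload
    # (pip install Brotli). The payload and ratios are illustrative only.
    import brotli
    import gzip

    payload = b"<html><body>" + b"<p>Hello, CDN world!</p>" * 1000 + b"</body></html>"

    brotli_out = brotli.compress(payload, quality=11)   # maximum quality, slower
    gzip_out = gzip.compress(payload, compresslevel=9)

    print(f"original: {len(payload)} bytes")
    print(f"gzip -9 : {len(gzip_out)} bytes")
    print(f"brotli  : {len(brotli_out)} bytes")

    # Round trip: decompression must reproduce the original bytes exactly.
    assert brotli.decompress(brotli_out) == payload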

Akamai Introduces 10for10 Service and Managed Delivery Service

Akamai has released two brochures detailing two new consulting services. The first, dubbed 10for10, is a consulting package that promises to improve customers’ web performance metrics by as much as 10%. In the web delivery business, latency can be toxic from a profit standpoint, Akamai argues, citing a statistic that Amazon lost 1% in sales per 100 ms of latency.

Users are demanding when it comes to website performance and are turned off when pages take longer than two seconds to load. As such, Akamai is touting 10for10’s ability to decrease page load times, improve SpeedIndex (which measures how quickly the above-the-fold area of a page is painted), and increase offload of hits and page weight (the combined size of all embedded resources delivered to the client). Akamai guarantees a 10% improvement in at least one of the metrics or your money back.
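
As a rough illustration of the page-weight metric (not an Akamai tool), the short sketch below sums the response sizes recorded in a standard HAR 1.2 capture; the file name is hypothetical.

    # Illustrative only: compute "page weight" (total bytes of embedded
    # resources) from a HAR 1.2 capture exported from a browser.
    import json

    with open("homepage.har") as f:          # hypothetical capture file
        har = json.load(f)

    entries = har["log"]["entries"]
    total_bytes = sum(e["response"]["bodySize"]
                      for e in entries
                      if e["response"]["bodySize"] > 0)   # -1 means "unknown" in HAR

    print(f"page weight: {total_bytes / 1024:.1f} KiB across {len(entries)} requests")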

The Managed Delivery Service lets businesses offload the management, monitoring, and improvement of website performance with the aim of improving user experience. Part of the unique value proposition of MDS is that it promises to provide actionable insights and remedial action for the problems it identifies. To do so, the MDS team leverages synthetic test data and real user monitoring data around the clock to spot problems as well as opportunities to optimize performance.

The team essentially babysits an assigned group of URLs, assessing web performance against industry-standard KPIs and user data. Once an issue is identified, the team performs triage, locating the root cause and prioritizing issues for mitigation. Finally, the team delivers weekly reports that identify optimization opportunities.

Mellanox Introduces Dual-port 25GbE NICs for High Performance Servers

Mellanox is unveiling 25G networking technology that promises to boost network performance and speeds at a time when mobile data demands continue to grow steadily. Increasing throughput and scaling cost-effectively are tough demands that legacy technologies are struggling to meet. The suite of 25G tech that Mellanox is debuting includes dual-port 25 GbE adapters in both mezzanine and stand-up PCIe card form, which allow users to drive down total cost of ownership and improve network performance. A dual-port 25 GbE solution provides 2.5 times more bandwidth than placing 10 GbE on each port and is optimal for a variety of use cases, including high-frequency trading, video streaming, and NFV.

In terms of specs, the new HPE 25GbE adapters use an SFP28 port form factor to expand the number of connectivity options each port supports, suiting a variety of enterprise customer needs and physical networks. The adapters support both copper and fiber-optic cabling, remove the need for breakout cables, and can run over existing 10GbE fiber infrastructure.

Telecom Updates #1

Softbank CEO Predicts Major Recovery for Sprint

Softbank, Sprint’s Japanese corporate parent, recently posted a 19% gain in first-quarter profit. ABC News reports, however, that Sprint continued to be a liability, posting a net loss for the quarter that was offset by Softbank’s robust telecom sales in Japan. Softbank’s quarterly sales exceeded $20 billion, while its overall profit was $3 billion.

Softbank also recently announced plans to acquire ARM, the British chip designer, for $32 billion in a move that signals the Japanese telecoms and internet conglomerate’s commitment to deepening its involvement in the internet of things; the deal is also being hailed as a vote of confidence in the stability of the post-Brexit British economy.

“I’m telling ARM management – let’s increase the number of engineers, boost R&D spending and make investments proactively,” Softbank CEO Masayoshi Son said in a briefing. Son also signaled that he would slightly reduce his time commitment to Sprint in order to focus on the process of integrating ARM’s operations.

While Sprint posted deeper quarterly losses, part of its ostensibly lackluster results could be attributed to an unfavorable exchange rate. The American telco also began showing glimmers of recovery, with a sharp uptick in postpaid phone subscriptions and promises of positive cash flow as early as the next financial year.

“Sprint has been a drag, but I’m now seeing signs of a V-shaped recovery,” Son said, promising that Sprint would soon become a moneymaker. Softbank has also liquidated its stake in Alibaba and sold off its stake in SuperCell to Tencent, freeing up cash reserves, though it still has $112 billion in debt. Son expressed confidence in his moves and has decided to helm Softbank for another decade.

American Tower’s Foreign Operations Compensate for Lagging US Market Growth

American Tower Corporation, one of the largest cell tower operators in the US, posted $1.44 billion in quarterly revenue, up 23% year over year, along with a 22.4% rise in net income. Earnings per share, however, underperformed, hitting 37 cents rather than the expected 54 cents.

Analysts are reporting that much of American Tower’s earnings could be attributed to its move to diversify into foreign markets, which has paid handsome dividends, with international acquisitions of towers driving 69.7% of growth in international tower revenues. Motley Fool notes that its portfolio of foreign towers now accounts for 42% of total revenue and 32% of segment operating profit. It is also growing twice as fast as its domestic portfolio.

In all, American Tower’s Asian portfolios experienced 274.3% year over year growth in revenue to $225 million in the second quarter, on the back of its fruitful acquisition of a 51% stake in India’s Viom networks, which added 42,000 towers to its holdings. EMEA regional revenues also grew 91.4% to $135 million.

The comparatively slower growth in the US segment reflects the fact that domestic gains came almost entirely from organic tenant billings growth rather than acquisitions.

“Our company’s strong second quarter total property revenue growth of nearly 24 percent was powered by solid U.S. organic tenant billings growth of 6 percent, and more than double that level in our international markets, at approximately 14 percent,” said American Tower CEO Jim Taiclet.

Concerns have been growing regarding downturns in capital expenditures among American wireless carriers, which have been shifting investment toward small cell densification to increase capacity and throughput in their networks. Such shifts have prompted questions about the viability of the traditional tower model, as companies such as Sprint famously move to a small-cell-oriented model to expand their network footprints.

“Tower investors have been waiting for an upturn in carrier capex; that didn’t transpire in Q2,” MoffettNathanson researchers wrote, regarding American Towers. “Verizon and T-Mobile maintained their full-year spending outlooks, whereas AT&T’s level of investment is trending below its 2016 guidance. Sprint stole the show, however, spending less than $500 million on network-related capex in the quarter, for a mid-single digit level of capital intensity. This really isn’t new news for tower investors, but it’s not what one wants to hear for a demand-driven model like this one, either.”

The analysts also expressed reservations about the promise of international diversification, arguing that international tower assets are demonstrably less attractive than American ones: “However, readers of our work will know we’re more sanguine about the appeal of international tower assets than most. This isn’t to say the assets are somehow categorically bad, or that American Tower is misallocating capital, but rather that they are demonstrably less attractive than those in the U.S. (to degrees that vary by country) and warrant lower multiples as a consequence. While we still like the shares here, we worry that investors who do not take this into account are investing with a slimmer margin of safety than they may believe.”

T-Mobile and Sprint Express Doubts Regarding Verizon’s Yahoo Purchase

Verizon recently announced a major acquisition, paying $4.83 billion for Yahoo’s core internet assets. The news sent waves rippling through the telco and internet industries, signaling Verizon’s decisive turn toward content production and digital advertising for new revenue streams and presaging potentially vast shifts in both industries.

Together with its earlier $4.4 billion purchase of AOL and its digital properties, the Yahoo acquisition brings Yahoo’s websites, e-mail users, search engine, and Tumblr into Verizon’s growing online portfolio. Other outlets such as the LA Times have observed that the acquisition also includes valuable Silicon Valley real estate, making Verizon a significant landowner in a region notorious for exorbitant rents.

It also brings a definitive end to CEO Marissa Mayer’s failed attempts to turn the flagging tech company’s fortunes around. While the CEO made splashy acquisitions and hires in order to boost advertising revenues, in the end it was Yahoo’s stake in Alibaba, worth roughly $30 billion, that proved to be the most valuable asset in an otherwise moribund portfolio.

As we noted before, Verizon competed with Berkshire Hathaway and AT&T in bidding for Yahoo’s assets and expressed confidence in its ability to knit its internet holdings together to increase advertising revenues. In particular, it seeks to leverage the wealth of user data it holds to improve advertising strategies on AOL and Yahoo properties.

However, Sprint and T-Mobile CEOs have taken the opportunity to express skepticism at the wisdom of the acquisition. John Legere of T-Mobile, for instance, has noted in an interview with USA TODAY that the telco would be hard pressed to compete with established tech juggernauts such as Google and Facebook: “It becomes clear that they see customers as units of advertising revenue. They’re going into that game against the most powerful companies, Facebook and Google. I think it’s going to be a slippery slope for them. I don’t think it impacts us.”

T-Mobile, which added 1.9 million customers last quarter but saw earnings fall to $225 million, suggested that the transition to a media platform could prove to be a major distraction. Sprint CEO Marcelo Claure also expressed doubts about Verizon’s ability to move successfully into the content business, noting that such attempts have historically failed.

Unlike Verizon and AT&T, which seek to produce content and generate ad revenue streams, Sprint and T-Mobile focus on providing the throughput and capacity to support increased data consumption in intensive use cases like high definition video streaming.

Content Delivery Infrastructure Updates #3

Fastly Introduces Cutting Edge Next-Gen Control Panel

Fastly’s next generation control panel is a response to the need for complete visibility and instant feedback on applications, allowing customers to respond and innovate instantly. As its customer base has grown and diversified, Fastly has adapted to serve a wider range of needs and use cases. Long-time users of Fastly need not worry: the interface retains most of its original features while improving navigation, providing easier access to data, and delivering a more intuitive user experience. The dashboard features a reconfigured information architecture with easy-to-analyze grids and typography.

The service summary now features easy-to-access actions that can be selected from a drop-down panel at the upper-right corner of the page, as well as an array of links across the top of the screen so that you can toggle between existing configurations and create new ones. Finally, the configuration panel has been arranged horizontally to give the interface a more minimalist aesthetic, and it includes a summary of each tab’s contents to improve clarity. You can try these new features out yourself at the following link.

Coming Right Up! DDoS Attacks-as-a-Service for $5

You can hire a DDoS attack like you would a bounty hunter, for as little as $5. Distributed denial of service (DDoS) attacks are a form of cyber-criminal activity that has thrived in the digital era, creating a bustling underground marketplace where attacks are bought and sold like commodities. Incapsula’s latest blog post details services that “rent out access to a network of enslaved botnet devices (e.g., Trojan-infected PC’s)” to launch attacks on unsuspecting targets.

DDoS-for-hire services get away with it by disguising themselves as stresser services that test the durability and resilience of your server. Of course, as long as you pay the required fee, the server they “stress test” need not be your own.

It’s a pretty scary thought, but it’s one that is borne out in reality for as little as $5 in the underground hacker community, a steep drop from $19 last year. Now anyone can hire a DDoS attack on the cheap. Incapsula researchers searched for such DDoS-for-hire ads on Fiverr, an online marketplace built around $5 services, and found plenty of listings offering to “stress test” your server against DDoS attacks.

When the researchers reached out to one of the DDoSers, the seller admitted that any site could be targeted barring government websites and hospitals. While the Incapsula team reported the illegal services to Fiverr, which promptly removed a good deal of them, it’s a scary reminder that DDoS-for-hire is becoming a common feature of everyday life.

Azure Rolls Out New CDN PoPs

Microsoft’s Azure is making new PoPs available to all CDN-integrated Azure Media Services customers across a range of regions, including India, Australia, and South America. The new PoPs are compatible with existing streaming endpoints, and customers can now enable CDN for streaming endpoints created in any region, meaning that once CDN is enabled, all PoPs are automatically turned on. Azure is also working on providing a premium tier and multiple CDN options.

Azure Becoming An Edge Security Juggernaut

Microsoft is announcing the general availability of Azure Security Center to all Azure customers, a signal of its commitment to becoming a leader in cybersecurity and threat intelligence by creating an integrated security platform. Microsoft has thus far invested over $1 billion in R&D to improve its threat detection and response capabilities.

The security center provides an overview of and feedback on the security state of Azure resources, allowing customers to configure cloud security policies and respond to threats and attacks. Customers also have access to actionable insights and recommendations drawn from Microsoft’s vast global threat intelligence network, which is powered by machine learning and detects threats quickly, as well as to solutions from Microsoft’s security ecosystem, which includes partners such as Cisco, Qualys, Barracuda, Check Point, Fortinet, F5, Imperva, and Trend Micro.

Other features which are detailed on its website include:

  • A log integrator which streamlines the process of getting security data.
  • Support for more Azure resource types, including Red Hat and many other Linux distros, covering system update status, OS configurations, and disk encryption.
  • Email notifications.
  • New detection of lateral movement, outgoing attacks, and malicious scripts.
  • Security incidents overview.
  • REST APIs.
  • Integrated vulnerability assessment from the Azure ecosystem and partners.

AWS Raising the Bar on CDN+WAF Configuration

You can now use AWS CloudFormation to automate WAF configuration using example rules and match conditions. Amazon’s web application firewall integrates with CloudFront, its CDN, to control and fine-tune the type of traffic that is allowed or blocked on your web applications. Using CloudFormation in this fashion allows you to provision, update, and delete WAF components as code. The CloudFormation template deploys some of the following rules and conditions as part of the solution:

  • A manual IP rule that contains an empty IP match set that must be updated manually with IP addresses to be blocked.
  • An auto IP rule that contains an empty IP match condition for optionally implementing an automated AWS Lambda function.
  • A SQL injection rule and condition to match SQL injection-like patterns.
  • A cross-site scripting rule and condition to match XSS-like patterns in the URI and query string.
  • A size-constraint rule and condition.

The example rules are configured to count matching requests so you can test them against your web application, and the solution uses nested stacks to provide a modular means of creating and updating the configuration. In sum, the WAF web ACL evaluates all requests against the prescribed rules and conditions, returning an HTTP 403 error to clients that send blocked requests while serving allowed or counted requests.

A basic configuration of rules and match conditions using CloudFormation is outlined in the following link along with instructions.
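
The CloudFormation template itself is linked above; purely as a hedged illustration of the manual IP rule it describes, the sketch below uses the classic AWS WAF API via boto3 to create an IP match set and insert an address range. The set name and CIDR are hypothetical.

    # Hedged sketch: build a manual IP block list with the classic AWS WAF API.
    # The set name and CIDR (an RFC 5737 example range) are hypothetical; the
    # CloudFormation template linked above provisions equivalent resources.
    import boto3

    waf = boto3.client("waf")

    # Every classic-WAF mutation requires a fresh change token.
    token = waf.get_change_token()["ChangeToken"]
    ip_set = waf.create_ip_set(Name="manual-block-list", ChangeToken=token)["IPSet"]

    token = waf.get_change_token()["ChangeToken"]
    waf.update_ip_set(
        IPSetId=ip_set["IPSetId"],
        ChangeToken=token,
        Updates=[{
            "Action": "INSERT",
            "IPSetDescriptor": {"Type": "IPV4", "Value": "192.0.2.0/24"},
        }],
    )

    # A rule referencing this IP set is then attached to a web ACL whose
    # default action is ALLOW and whose rule action is BLOCK (HTTP 403).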

AWS Inspector is Proof They’re Serious About Security

AWS Inspector is an assessment service that allows you to scale your security vulnerability testing, automating it across test, deployment, and production environments, and letting you forward findings to your ticketing and workflow systems. Amazon notes that the process can be conceptualized in two broad parts.

The first part of the security process entails using AWS Inspector to quickly identify and analyze security vulnerabilities, while the second deals with remedying the vulnerabilities that have been located. Because AWS Inspector integrates with workflow and ticketing systems, remediation can be easily automated using AWS Lambda, which lets you run code without provisioning or managing servers. Lambda is also cost efficient, charging only for compute time consumed and nothing while the function is idle.

When AWS Inspector runs a security assessment, it sends a message to the Amazon Simple Notification Service, which in turn invokes AWS Lambda. Lambda fetches the findings and formats and emails them using a separate SNS topic.

The major hurdle, therefore, is setting up the Lambda function in the region in which Inspector is running and creating a dedicated SNS topic that Inspector notifies whenever there are findings. Instructions for configuring the SNS topic can be found at this link. After that, you’re ready to roll: set your findings to be published to the SNS topic, create the Lambda function, and you’ll receive emailed findings reports.
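
To make the flow concrete, here is a hedged sketch of such a Lambda handler: it is triggered by the SNS topic that Inspector notifies, fetches the finding details, and republishes a formatted summary to a separate report topic. The topic ARN is hypothetical, and the exact field layout of Inspector's SNS payload should be verified against the AWS documentation.

    # Hedged sketch of the Lambda handler described above. The report topic ARN
    # is hypothetical and the "finding" field name in Inspector's SNS payload is
    # an assumption to verify against the AWS documentation.
    import json
    import boto3

    inspector = boto3.client("inspector")
    sns = boto3.client("sns")

    REPORT_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:inspector-reports"

    def handler(event, context):
        for record in event["Records"]:
            message = json.loads(record["Sns"]["Message"])
            finding_arn = message.get("finding")
            if not finding_arn:
                continue
            finding = inspector.describe_findings(findingArns=[finding_arn])["findings"][0]
            summary = (f"{finding['severity']}: {finding['title']}\n\n"
                       f"{finding.get('description', '')}")
            sns.publish(TopicArn=REPORT_TOPIC_ARN,
                        Subject="Amazon Inspector finding",
                        Message=summary)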

Bloodbath Coming To The CDN Industry

It should come as no surprise that 2017 is going to be “The Year of The Bloodbath” for the CDN industry. Many pure-plays will exit the industry, most by acquisition or merger. The writing is on the wall for CDNs providing basic caching and streaming services. What counts as basic caching + streaming services? Caching + FEO + DSA + VOD Delivery + Live Streaming. For the CDNs barely getting into the Edge Security business of WAF + DDoS Mitigation, it’s too late to make a difference, unless it’s done via acquisition. There are just too many players out there in the market.

Akamai is no longer the 800lb gorilla of the CDN industry. The new 800lb gorilla is the Cloud Trio of Amazon + Azure + Google. Google acquiring Anvato was the nail in the coffin for the CDN commodity business, especially video streaming. And not only is the Cloud Trio ramping up their CDN offerings, but also their edge security business. Azure is rolling out PoPs like mad and increasing their security, plus forming partnerships with the likes of Qualys. AWS is on a WAF innovation rampage and it will only be a matter of time before they match CDN WAFs in functionality.

MaxCDN, Fireblade, Staminus and Cloak Join StackPath

Who saw this coming? MaxCDN and Fireblade have been acquired by StackPath, a security-as-a-service platform startup that has raised $180M. StackPath is led by Lance Crosby, founder of SoftLayer, and Andrew Higginbotham, former SVP of Cloud Technology at CenturyLink. In addition, StackPath acquired Staminus and Cloak. The quad-combo mix of companies is very interesting and feature-packed.

MaxCDN + Fireblade + Staminus + Cloak gives StackPath Next-generation CDN + Cloud WAF + DDoS Mitigation + VPN capabilities.

  • MaxCDN: Pure-play CDN with online ordering system
  • Fireblade: Next-Gen WAF sold to cloud companies
  • Staminus: DDoS Mitigation platform
  • Cloak: VPN Platform

StackPath will integrate these best-of-breed services into a cohesive platform tailored to CDN buyers. The million dollar question, which isn’t discussed on their website or press release: will StackPath also develop the capabilities to go after the cloud network security market and give Zscaler a run for its money? Why else would it acquire a VPN company if not for that? Congrats to Chris, David, Shay and Matt.

Google Uses DeepMind to Run Its Data Centers Efficiently; And Other News

Google Uses DeepMind to Run Its Data Centers Efficiently

DeepMind famously came to the fore when its AI, AlphaGo, beat Lee Sedol in a series of Go matches, the first time a program had defeated a top-ranked human professional at the notoriously complex game.

Now it appears that the advanced machine learning capabilities provided by DeepMind are delivering tangible benefits for Google, which owns the company. Google has announced that it is ceding control of all of its data center cooling operations to DeepMind after an initial trial run. The trial used DeepMind’s AI to manage the cooling of some of its data centers and led to a 15% improvement in power usage efficiency, which helps reduce costs as well as the energy footprint of its data centers.

Data centers are enormous cost and energy sinks, accounting for a sizeable share of the 4,402,836 MWh of power that Google uses each year, according to Bloomberg. Given the high cost of energy, DeepMind could save Google hundreds of millions of dollars in utility bills over a span of just a few years.

DeepMind’s AI technology uses neural networks to analyze power usage and predict spikes in energy needs over time, thereby calibrating the equipment to maximize efficiency.

“It controls about 120 variables in the data centers. The fans and the cooling systems and so on, and windows and other things,” DeepMind CEO Demis Hassabis said to Bloomberg. “They were pretty astounded.”
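
DeepMind has not published its model, but purely as an illustration of the idea, a regression over data-center telemetry might look like the hedged scikit-learn sketch below; the feature names and CSV file are invented.

    # Illustrative only: predict PUE from data-center telemetry with a small
    # neural network, in the spirit of (but far simpler than) DeepMind's system.
    # The CSV file and feature names are hypothetical.
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    telemetry = pd.read_csv("datacenter_telemetry.csv")
    features = ["it_load_kw", "outside_air_temp_c", "chiller_setpoint_c",
                "cooling_tower_fan_speed_pct", "humidity_pct"]
    X, y = telemetry[features], telemetry["pue"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)

    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=42))
    model.fit(X_train, y_train)
    print("held-out R^2:", model.score(X_test, y_test))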

Verizon Prepares for NG-PON2 Testing

Verizon has announced that by the end of this month it will trial its landmark fiber-optic technology, NG-PON2 (next-generation passive optical network), which can reach blazing-fast symmetrical speeds of up to 10 Gbps over FiOS. The NG-PON2 equipment is aimed at FiOS enterprise and consumer services and was developed through partnerships with Ericsson, Calix and Adtran. The trial is slated to take place in Verizon’s Innovation Lab in Waltham, Massachusetts.

The technology has been standardized by the International Telecommunication Union’s Telecommunication Standardization Sector (ITU-T) and is expected to pave the way for Verizon’s most significant wireline network upgrade in nearly a decade. What sets NG-PON2 apart is that it relies on four different wavelengths, denoted by different colors, to transfer data, for a total capacity of 40 Gbps per fiber. It also allows Verizon to deliver different services over different wavelengths on the same fiber, differentiating between its residential and enterprise customers.

If the trial proves successful, enterprise customers will be the first to benefit, as early as next year, with residential customers gaining access later on.

The first successful field trial of NG-PON2 tech occurred last year in Framingham, Massachusetts.

Verizon is touting its fiber network as highly scalable and easily upgradable: “By implementing this advanced technology without having to change the current underlying fiber optic infrastructure, additional traffic can be carried cost effectively,” the company noted in a statement. “Verizon also can improve flexibility and resiliency using NG-PON2, because traffic can be shifted amongst multiple wavelengths without impacting customers.”

Huawei is Partnering with GE for IoT and Smart Cities in China

GE is bringing its IoT expertise to China, according to a new press release announcing its partnership with Chinese telco manufacturer Huawei.

The primary focus of the collaboration will be smart machines that boost productivity and enhance worker output, a subset of IoT known as the industrial internet of things (IIoT). The enticing promise of IIoT is that it combines big data analytics with machine learning, driving massive gains in productivity.

GE is investing in an $11 million incubator that will work on developing start-ups and software to improve machine intelligence, according to Reuters. Developers will build their applications on Predix, GE’s digital platform; such applications will work with connected machinery to collect and analyze data in real time.

GE has recently undergone a massive restructuring and shifted its focus to IoT, having invested more than $500 million in software annually.

In a recent statement to shareholders, GE noted: “We are a company that invests in broad industrial transitions, and they don’t come much bigger than the full application of data and analytics to machines and systems.”

The move has paid dividends; GE is expected to make $6 billion in revenues this year. Chief Digital Officer Bill Ruh notes that using smart machines has saved the company more than $500 million in production costs and predicts that such savings will increase to over $1 billion by 2020. This experience has motivated GE to bring its smart machine solutions to other partners and countries.

“Once we got it right for ourselves we take it to our customers … We’re bringing this to China, we’re open for business in China today to be able to do this,” he said.

GE’s business strategy of trialing solutions internally and translating them into saleable services has been working. Predix is not only used for GE’s internal manufacturing equipment but has also been deployed to create new revenue streams for clients.

Such developments could also be used to innovate in the smart city space, a massive initiative being undertaken in China. Huawei reports that there are more than 300 smart cities planned in the nation, which necessitates collaboration with third parties. GE can also bring its proprietary smart-city solutions suite, called Current, to the mix. Current leverages a network of LED technology, solar panels, and wireless controls to create green urban spaces, allowing cities to save on utility costs by generating power rather than purchasing it.

Google Expands Cloud Platform to Western Region of North America

Google has announced in a blog post that it is extending its cloud platform to customers on the west coast of North America, dubbing the new region the Oregon Cloud Region. West Coast customers based in Vancouver, Seattle, Portland, San Francisco and Los Angeles can expect to see a 30-80% latency reduction, and they now have access to Google Compute Engine, Cloud Storage, and Container Engine.

Latency is one of the biggest issues encountered by gamers, and reducing it is crucial to providing consistent, immersive gaming experiences. As Google client Multiplay, a video game hosting specialist, notes:

“Regional latency is a major factor in the gaming experience. Google Cloud Platform’s network is one of the best we’ve worked with, from a tech perspective but also in terms of the one-on-one support we’ve received from the team.”

As we noted in an earlier blog post, Google has also invested in a massive trans-Pacific cable network in furtherance of its aim to establish a Cloud Platform region in Tokyo. The Japan-based region is expected to commence operations later this year.

Google Unveils Two New Cloud Machine Learning APIs

Google has released a blog post announcing two new Cloud Machine Learning products: the Cloud Natural Language and Cloud Speech APIs.

The Cloud Natural Language API is the fruit of Google’s efforts to improve machine understanding of human language, so that computers can parse the intricacies of syntax more accurately. It initially supports English, Spanish, and Japanese and can assess the general sentiment being expressed in a statement, identify the various entities mentioned (be they companies, persons, media, or animals), and parse the structure of a sentence.

Google believes that its natural language API can be utilized to great effect in a wide range of industries: “For example, digital marketers can analyze online product reviews or service centers can determine sentiment from transcribed customer calls.”
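
For example, a hedged sketch of the sentiment use case with the google-cloud-language Python client might look like the following; the review text is made up and method names can differ between library versions.

    # Hedged sketch: score the sentiment of a product review and list the
    # entities it mentions using the Cloud Natural Language client library.
    from google.cloud import language_v1

    client = language_v1.LanguageServiceClient()

    review = "The delivery was fast, but the headphones stopped working after a week."
    document = language_v1.Document(
        content=review, type_=language_v1.Document.Type.PLAIN_TEXT, language="en")

    sentiment = client.analyze_sentiment(
        request={"document": document}).document_sentiment
    print(f"score={sentiment.score:+.2f} magnitude={sentiment.magnitude:.2f}")

    entities = client.analyze_entities(request={"document": document}).entities
    print([entity.name for entity in entities])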

The Cloud Speech API serves a related but distinct function, specializing in speech-to-text conversion for over 80 languages and applying voice recognition technology to use cases in apps and IoT. It applies “advanced deep learning neural network algorithms to [user’s] audio for speech recognition with unparalleled accuracy. Speech API accuracy improves over time as Google improves the internal speech recognition technology used by Google products.”

The API allows machines to take advantage of tailored word hints to optimize recognition of certain relevant phrases, words, or neologisms. For instance, Google notes that a smart TV could listen for keywords such as “fast forward.” Cloud Speech also features asynchronous calling that allows for faster and easier app development and has been fine-tuned to function in noisy environments.
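
A hedged sketch of the phrase-hint feature with the google-cloud-speech Python client is shown below; the audio file is hypothetical and class names may differ slightly between library versions.

    # Hedged sketch: transcribe a short command recording with phrase hints
    # ("fast forward", "rewind") using the Cloud Speech client library.
    from google.cloud import speech

    client = speech.SpeechClient()

    with open("remote_command.wav", "rb") as f:      # hypothetical recording
        audio = speech.RecognitionAudio(content=f.read())

    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
        speech_contexts=[speech.SpeechContext(phrases=["fast forward", "rewind"])])

    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        print(result.alternatives[0].transcript)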

Content Delivery Infrastructure Updates #2

Aryaka Grows 150% in Q2 2016

Aryaka, the global SD-WAN solutions provider, has announced its fourth consecutive quarter of strong growth. In Q2 of FY 2016, Aryaka reports that it grew over 150% year-over-year compared to Q2 of 2015.

As Aryaka states in its press release, the company has enjoyed strong partner contributions, growth in customer bookings (with over 70 customers added), and continued expansion within its installed base. This growth was due in large part to broader demand for connectivity, which Aryaka provides in a cost-efficient, flexible, and low-latency manner.

Aryaka also touted the fact that 40% of its total bookings came from its pre-existing user base, which sought further services from the company. Much of its bookings growth came from China, the wider APAC region, and Europe.

“I’m excited about another blockbuster quarter whereby we delivered solid execution and tremendous growth across all geographies,” said Shawn Farshchi, President and CEO of Aryaka, in the press release. “Aryaka is providing our customers with global connectivity, application acceleration, and ability to scale their businesses. Our results validate our leadership position in the global SD-WAN space.”

Facebook Is Introducing Open-Source AI Hardware Design

The Facebook AI Research team (FAIR) is touting its advances in the development of AI training hardware, which it attributes to a combination of strategic investments and partnerships with vendors, as well as hardware expertise. FAIR is now announcing that it will contribute its research in GPU hardware to the Open Compute Project.

The press release notes that while machine learning and artificial intelligence have been subjects of research for some time now, larger strides have been made in recent years due to the advances in computing capabilities as well as the public availability of larger and more numerous data sets. In particular, as computational capabilities increase, researchers are empowered to delve more deeply into complex systems.

However, larger-scale developments in such software would require a commensurate system: “But we realized that truly tackling these problems at scale would require us to design our own systems. Today, we’re unveiling our next-generation GPU-based systems for training neural networks, which we’ve code-named ‘Big Sur.’”

Big Sur is an Open Rack-compatible piece of hardware that incorporates eight high-performance GPUs of up to 300 watts each, attaining twice the speed of its predecessor. Distributing the training load over eight GPUs allows the team to increase the size and speed of its networks by a factor of two. In addition, Big Sur is built on NVIDIA’s Tesla Accelerated Computing Platform and can support multiple PCIe cards. Finally, unlike comparable systems, Big Sur is highly efficient and can operate in free-air-cooled Open Compute data centers. The design materials will be freely available via the Open Compute Project.

Facebook Introduces the 6-Pack Open Modular Switch

For the past few years, Facebook has been working on a new, disaggregated and more flexible network in order to avoid some of the problems encountered in closed and iterative traditional networking technologies. The result of their efforts is an open modular switch platform, dubbed “6-pack.”

Facebook details the process of developing 6-pack in a blog post. The team began by developing a top-of-rack network switch (called “Wedge”) and a Linux-based OS (“FBOSS”) for the switch. In addition, they developed a scalable modular network architecture, a data center fabric that has 6-pack at its core. It is a “full-mesh non-blocking two-stage switch” with 12 unique switching components, each of which is capable of switching 1.28 Tbps.

As this description implies, it comes in two configurations, the first of which exposes 16x40GE ports to the front and back, while the second opens the full 1.28 Tbps of capacity to the back. Because the switching elements are independent and each contains a full local control plane on a microserver that communicates with a central controller, they can easily be modified without disrupting the system at large. The result is an open, modular switch platform that is highly scalable and built on off-the-shelf building blocks.

Facebook is currently testing 6-pack, FBOSS, and Wedge, and is planning on contributing them to the Open Compute Project in the hopes that they will be used by the community as the foundation of an entire industry.

Instart Logic Introduces Multi-Page Predictive Prefetching

Instart Logic is introducing Multi-Page Predictive Prefetching as part of its AppSpeed portfolio, a development that promises to improve user experience across a broad range of apps. In an era where customers increasingly use mobile devices to reach businesses, mobile UX plays an important part in shaping how a company is perceived.

The endpoint-aware application delivery solution tackles an area that developers rarely address, since they tend to spend their focus and resources optimizing the primary landing pages and main access points. Rather than simply guessing where the user will go next and loading that page in the background, which wastes bandwidth and resources, Multi-Page Predictive Prefetching identifies static elements shared across potential pages and models common flows in order to push common resources to the browser at the optimal time, resulting in a more seamless browsing experience.

“Using the tremendous amount of user behavior data we analyze across millions of sessions with our machine learning algorithms in the cloud, we can apply additional performance optimizations that improve the experience throughout a user’s journey,” explains Shailesh Shukla, CMO and VP of Products at Instart Logic. “Multi-page Predictive Prefetching leverages our proprietary endpoint-aware Nanovisor technology to identify the best content to prefetch.”
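
Instart Logic has not published its algorithms, but purely as a conceptual illustration, one could model common page flows as a transition table and prefetch the static assets shared by the likeliest next pages; everything in the sketch below is invented.

    # Conceptual sketch only: pick static assets of the likeliest next pages
    # from historical navigation flows. Page names, assets, and sessions are
    # made up and bear no relation to Instart Logic's Nanovisor technology.
    from collections import Counter, defaultdict

    sessions = [
        ["/home", "/category/shoes", "/product/42"],
        ["/home", "/category/shoes", "/product/17"],
        ["/home", "/search", "/product/42"],
    ]

    # First-order transition counts: current page -> next page.
    transitions = defaultdict(Counter)
    for path in sessions:
        for current, nxt in zip(path, path[1:]):
            transitions[current][nxt] += 1

    # Static assets required by each page (normally derived from page analysis).
    assets = {
        "/category/shoes": {"catalog.css", "grid.js", "logo.svg"},
        "/search": {"search.js", "logo.svg"},
    }

    def assets_to_prefetch(current_page, top_n=2):
        """Union of assets for the most likely next pages from this one."""
        likely_next = [page for page, _ in transitions[current_page].most_common(top_n)]
        return set().union(*(assets.get(page, set()) for page in likely_next))

    print(assets_to_prefetch("/home"))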

Cisco’s Visual Networking Index Predicts 3X Increase in IP Traffic By 2020

Cisco has recently released the findings of its Visual Networking Index (VNI) forecast for 2015 to 2020. According to the VNI, global IP traffic will nearly triple, growing at a 22% CAGR over the five years leading up to 2020.
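
As a quick sanity check on how a 22% CAGR maps to "nearly tripling", the arithmetic works out as follows.

    # 22% compound annual growth over the five years from 2015 to 2020
    # comes to roughly a 2.7x increase, i.e. nearly threefold.
    cagr = 0.22
    years = 5
    print(f"{(1 + cagr) ** years:.2f}x")   # ~2.70x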

The global Internet-using population is also expected to increase by more than one billion in that time, reaching a total of 4.1 billion by 2020. The burgeoning of M2M connections and the growth of the Internet of Things will result in a massive increase in new devices and connections, totaling 26.3 billion by 2020.

As smart cities, virtualized health services, and other M2M connections proliferate (nearly tripling by 2020), global internet traffic will continue to balloon, which will in turn generate growth in the market.

Video traffic will drive the lion’s share of global internet traffic in 2020, accounting for a staggering 79% and reaching 3 trillion video minutes per month. HD and Ultra HD video will account for the highest share of internet video traffic (82%).

On a more somber note, the growth in Internet traffic will also lead to an increase in security threats. Specifically, DDoS breaches are expected to increase to 17 million by 2020. Cisco’s analysis of DDoS attacks found that they can account for as much as 10% of a nation’s internet traffic. Regionally, APAC will be a major driver of growth over the next five years, at a rate of 22% CAGR, followed by North America (19% CAGR).

Is Airtel Checking CloudFlare Traffic in India?

Karthik Balakrishnan recently suggested that Airtel was sniffing and censoring unencrypted data packets traveling between CloudFlare’s data centers and the websites connected to it. CloudFlare uses Airtel’s services to connect to host sites, which ostensibly gave Airtel the ability to inspect each packet of data to see if it violated orders set forth by India’s Department of Telecommunications.

Balakrishnan suggested that connections between CloudFlare and content servers could be blocked by Airtel inserting an iframe, which he encountered when attempting to access ThePirateBay.org. The conclusion he reached was that Airtel was monitoring CloudFlare’s upstream traffic without its awareness, thereby snooping on traffic from all Indian users who access any one of the two million-plus websites that use CloudFlare. Airtel denied the accusations in response to an inquiry by Medianama:

“This is completely baseless and incorrect. As a policy, Airtel does not block/sniff any content. Only in the case of instructions/orders from the Government or the Courts, specified URLs are blocked. Blocking of any page [as per instructions from relevant authorities] is done at the URL level and not whether it is http/https. This also has nothing to do with the validity of any certificate.”

In an interview with Medianama, CloudFlare CEO Matthew Prince said that the company had reached out to Airtel regarding the interference with Pirate Bay’s traffic, which blocked its ability to connect to the customer’s origin. Airtel reportedly told CloudFlare that this particular customer had had access blocked due to a specific government request, which affected requests connecting to CloudFlare, not requests from CloudFlare to the customer’s origin. While Airtel’s agreement with CloudFlare prohibits it from interfering with traffic, the only exception is in the case of a government order.

Prince also noted that Pirate Bay opted to send traffic over an unencrypted connection, even though CloudFlare has the ability to pass it over an encrypted one. Using encryption prevents Airtel from pulling the host header and redirecting traffic.

That being said, Balakrishnan’s concerns about user privacy remain open, as Airtel would have had to access the header of every unencrypted packet in order to block the site. While Prince acknowledged that this would be problematic, he suggested that there was very little CloudFlare could do beyond offering the option of encrypted connections. Governments in various countries have circumvented this by blocking ISPs’ traffic to CloudFlare in order to block certain websites.

Independent CDN Performance Testing

The Bizety Team is expanding deeper into Advisory Services. We’re now offering Independent CDN Performance Testing for CDN Buyers and Partners. When it comes to benchmarking CDN performance across multiple providers in different regions, the task is challenging. No single tool or platform provides all the answers. In fact, we’ve seen in our experience that decisions are sometimes made on flawed performance data.

Our team conducts extensive CDN Performance Testing over a one-month period for each engagement, using a variety of tools, custom scripts and proprietary research methods. The best part is that we normalize network hiccups so there’s cleaner data for the decision-making process. The more complex the testing environment, the more our team shines. Even though there are a variety of automated tools in the marketplace, the actual process of benchmarking is still very manual in nature. Inquire for more info at support@bizety.com.

CDN Performance Testing

  • Objective performance evaluation of major and specialized CDN providers
  • CDN-agnostic testing tools that take advantage of a suite of in-house and third party systems
  • Backbone testing as well as RUM testing for full spectrum analysis
  • Application-specific testing: object delivery, video delivery, website delivery and application acceleration
  • Component analysis for detailed performance insight
  • Transaction benchmarking for network and application components
  • Deep combination analysis based on user metrics (ISP/geo/time-of-day)
  • Statistical modeling for scenario-focused application behavior
  • Documentation and training