Interview with Eyal Hayardeny, CEO of Reblaze

Around the time that Akamai acquired Prolexic (circa 2013) and kick-started the Edge Security CDN segment, Reblaze came to market with an idea to revolutionize this new space. Co-founders Eyal Hayardeny and Tzury Bar Yochay developed Reblaze to offer a holistic approach to edge security and website management. With 20 employees and growing, Reblaze is positioning itself to dramatically increase its market share over the next twelve months.

Distil Networks’ Key Bot Traffic Findings for 2016

Distil Networks has just released its third annual Bad Bot Landscape Report. Distil specializes in bot mitigation security and has the largest library of bots in the industry. Bots are a diverse group of website and API predators: they can perform high-speed attacks and a variety of nefarious activities, including “web scraping, competitive data mining, personal and financial data harvesting, brute force login and man-in-the-middle attacks, digital ad fraud, spam, transaction fraud, and more.”

Not only are these attacks a nuisance to websites and users, but they are extremely costly to both productivity and the bottom line. A single attack last year involving the Dridex banking malware cost its victims a total of approximately $40.5 million. Analyzing the bot traffic of 2015 through data collected from its 17 data centers and several hundred customers, 74 billion bot requests in all stored in its Hadoop cluster, Distil hopes to shed some light on the trends in bot traffic and on how to mitigate bot attacks.

[Table: Online Users vs. Bad Bots (report, p. 13)]

Distil Networks’ Key Bot Traffic Findings for 2015

  1. Human Internet traffic increased (now 54.4%), while good and bad bot traffic decreased (from 36.32% to 27.04% and from 22.78% to 18.61%, respectively)
    1. Humans now make up the majority of Internet traffic for the first time since 2013, likely due to new Internet users in Indonesia, China, and India
    2. Bad bots mimic our browser choices; Chrome was the most impersonated browser in 2015, followed by Firefox
  2. Though fewer in number, bad bots became more advanced (advanced persistent bots, or APBs)
    1. 88% of bad bot traffic (up from 77% in 2014) came from evasive bots (an 11% increase, to 42%) or APBs (46%)
    2. Simple bots decreased from 23% to 12%
    3. To evade security measures, APBs can mimic human behavior, load JavaScript and external resources, support cookies, perform browser automation, and spoof IP addresses and user agents
    4. 73% of APBs rotate through IP addresses and change their identities (a minimal detection sketch follows this list)
      1. 36% of those APBs change IPs more than twice, and 20% change them more than 100 times
  3. Bad bot attacks were most prevalent on medium-sized sites, but humans remained the dominant traffic source across all site types and sizes
    1. Bad bots comprised 26% of all traffic on medium-sized sites (up from 17% in 2014)
    2. Small digital publishers (31%) and large real estate websites (31%, up 300% from 2014) were most prone to attack
      1. Likely due to bot attacks by small startups aimed at scraping aggregate data
  4. Six of the top 20 ISPs originating malicious traffic were from China
    1. 90% of China Unicom IP Network’s traffic was malicious, and the six averaged 72%
  5. Amazon remained on the top 20 malicious ISP originator list for the third year in a row, likely due to its simple scalability and ease of use
    1. Verizon Business, Comcast, and Time Warner are not on the 2015 list
  6. The Netherlands, Korea, and the U.S. topped the mobile bad bot traffic list
    1. The Netherlands and the U.S. were the only countries with multiple carriers on the list
    2. Mobile-only traffic surpassed desktop-only traffic in 2015
  7. The U.S. was the country in which the most bad bots originated
    1. Likely due to cheap cloud computing (Amazon, Google Cloud, Azure)
    2. Spain, Brazil, Ukraine, Switzerland, and Australia were added to the list of significant origins
    3. Many industrialized countries made the top blocked list (top 5: China, Norway, Germany, the Netherlands, France)
    4. The Maldives remains a bad bot hotspot with 536 bad bots per person
    5. The worldwide average of bad bots per person rose from 26.1 in 2014 to 99.2 in 2015
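
The rotation numbers in finding 2 suggest why simple IP blacklists fail against APBs. As a rough illustration of the detection problem (not Distil’s methodology), the sketch below flags sessions that rotate through an unusual number of IP addresses or user agents; the log format, session key, and thresholds are all hypothetical.

```typescript
// Minimal sketch: flag clients whose requests rotate through many IPs or
// user agents, the APB behavior described in the report. The notion of a
// stable "session key" (e.g., a cookie or fingerprint) and the thresholds
// are assumptions for illustration, not Distil's method.

interface LogEntry {
  sessionKey: string; // identifier assumed to survive IP changes
  ip: string;
  userAgent: string;
}

function flagRotators(log: LogEntry[], maxIps = 2, maxAgents = 2): string[] {
  const seen = new Map<string, { ips: Set<string>; agents: Set<string> }>();

  for (const { sessionKey, ip, userAgent } of log) {
    let s = seen.get(sessionKey);
    if (!s) {
      s = { ips: new Set(), agents: new Set() };
      seen.set(sessionKey, s);
    }
    s.ips.add(ip);
    s.agents.add(userAgent);
  }

  // A session exceeding either threshold is a rotation suspect.
  const flagged: string[] = [];
  for (const [key, s] of seen) {
    if (s.ips.size > maxIps || s.agents.size > maxAgents) flagged.push(key);
  }
  return flagged;
}
```

The catch, of course, is the session key itself: an APB that clears cookies and spoofs its fingerprint presents no stable identifier, which is presumably why dedicated bot-mitigation products rely on many more signals than any single heuristic like this one.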

ChinaCache 2015 Earnings Down Due to Platform Issues and Other News

ChinaCache 2015 Earnings Down Due to Platform Issues

ChinaCache has finally released its earnings for Q4 2015, and they show a decrease in both quarterly and annual revenue. Q4 revenue was down 8.1% year-over-year to $48.1 million, and full-year 2015 revenue was down 2.1% year-over-year to $209 million. The company’s net loss was $13.7 million in 2015, a more than tenfold increase over 2014’s loss. ChinaCache founder and CEO Song Wang attributes these losses to issues with the company’s platform optimization, a project launched in July 2015.

Cost of revenue was also significantly impacted by reduced bandwidth efficiency, a result of decreased net revenues and depreciation. ChinaCache’s one area of growth, it seemed, was its non-GAAP adjusted EBITDA, which increased 16.7% to $18.6 million in 2015. ChinaCache reports that it expects to complete its platform optimization in the first half of 2016, and it did successfully launch its Cloud Data Center and Internet Exchange operations in 2015. This year it will reportedly focus on its “dedicated carrier- and Internet-neutral data center network (DCN) ecosystem” to generate revenue once the platform is fully optimized.

First EDM Artist to Live Stream in VR and 360 Video

Hardwell, an EDM artist on the award-winning Dutch record label Revealed Recordings, streamed his live festival performance in Miami Beach in VR and 360 video on March 16. Hardwell is the first EDM artist to stream a concert live in these formats, and it was the first such performance available on all platforms: VR headsets, iOS, Android, Apple TV, and the web. The stream was a joint effort between Littlstar, the New York-based network focused specifically on immersive experiences such as virtual reality and 360 video, and VBR, an Amsterdam-based virtual reality broadcasting company that is part of Scopic, which specializes in 360 video camera equipment built specifically for live UHD streams.

Will Teridion Revolutionize the CDN Model?

In 2013, a startup named Teridion began operations in stealth mode with $5 million in VC funding and the vague goal of making the Internet “lightning fast.” In September 2015, it garnered an additional $15 million in VC funding and eventually emerged from stealth mode with a plan to improve any Internet connection, anywhere, at any time, without any hardware or software.

Teridion manages the cloud as an abstraction layer and API that drastically simplifies the process of routing data. This layer, called the Global Cloud Network, runs on x86 servers within the cloud. Teridion’s devices work at the IP level and use the Teridion Management System (TMS) rather than the BGP routing protocol, as BGP is problematic both in its lack of end-to-end network visibility and in its “routing prioritization based on commercial relationships between ISPs and on economic considerations.”

The TMS spans multiple cloud providers to determine the best possible route for traffic; some have called Teridion’s strategy “reverse load balancing.” Teridion routes in real time, with real-time network-wide traffic visibility. While most routing occurs one hop at a time, Teridion’s API has a network-wide view that works to accurately determine the quickest overall route, constantly re-analyzing the path throughout the journey in case another, quicker route becomes available.

Service Overview

  • Teridion Measurement Agents (TMAs), which run on virtual machines within cloud operators and are used to determine latency across an operator’s backbone or to other cloud operators
  • Teridion Cloud Virtual Routers (TCRs), which create overlay paths across the Internet based on throughput, latency, or geography to optimize a given customer’s dynamic content
  • Teridion Management System (TMS), which acts like a control plane: it gathers data from TMAs to paint a live view of Internet performance and maintains APIs to cloud operators to create or destroy TCRs on demand, based on customer requirements
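
Teridion has not published its routing algorithm, but the division of labor above (TMAs measure, the TMS decides, TCRs forward) maps naturally onto shortest-path selection over a measured latency graph. The sketch below is a minimal illustration of that idea under those assumptions; the type and function names are hypothetical, and a real selection would also weigh throughput and geography, as the TCR description notes.

```typescript
// Hypothetical sketch of TMS-style path selection: given pairwise latencies
// measured between cloud regions, pick the lowest-latency overlay path from
// a client's ingress region to the customer's origin. Illustrative only.

type Region = string;

// graph.get(a)!.get(b) = measured latency in ms from region a to region b
type LatencyGraph = Map<Region, Map<Region, number>>;

function bestOverlayPath(graph: LatencyGraph, src: Region, dst: Region): Region[] {
  const dist = new Map<Region, number>([[src, 0]]);
  const prev = new Map<Region, Region>();
  const unvisited = new Set<Region>(graph.keys());

  while (unvisited.size > 0) {
    // Pick the unvisited region with the smallest tentative latency.
    let current: Region | undefined;
    for (const r of unvisited) {
      if (dist.has(r) && (current === undefined || dist.get(r)! < dist.get(current)!)) {
        current = r;
      }
    }
    if (current === undefined || current === dst) break;
    unvisited.delete(current);

    // Relax edges out of the chosen region.
    for (const [next, ms] of graph.get(current) ?? []) {
      const candidate = dist.get(current)! + ms;
      if (candidate < (dist.get(next) ?? Infinity)) {
        dist.set(next, candidate);
        prev.set(next, current);
      }
    }
  }

  // Reconstruct the region sequence along which TCRs would forward traffic.
  const path: Region[] = [];
  for (let r: Region | undefined = dst; r !== undefined; r = prev.get(r)) path.unshift(r);
  return path[0] === src ? path : []; // empty if dst is unreachable
}
```

Re-running this selection as fresh TMA measurements arrive would produce the behavior described above: the path is re-evaluated mid-journey whenever a quicker route appears.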

Technical Newsletter: Ambari, Amazon EMR, and Other News

Ambari Views Now Available on HDInsight

Ambari is Apache software that enables the provisioning, managing, and monitoring of Hadoop clusters through a GUI and an API. Previously, views were only available as plugins through the Ambari Views Framework. Now Ambari is available on HDInsight, allowing for the deployment and management of Linux clusters. Two of the predefined views in Ambari are the Pig and Hive views. Both can be launched through the Ambari portal.

The Hive view allows one to browse databases, write and execute Hive queries, view job history, set Hive query execution parameters, and debug Hive queries. An Ambari Views link and tab have been added to the portal to make this option easier to find. In addition, the portal permits both Hive and Pig queries, allows settings to be changed, provides a visual explanation of queries, allows the addition of UDFs, and supports monitoring and debugging of Tez jobs.
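
Since Ambari exposes an API alongside its GUI, a short example may help. The sketch below queries an Ambari server’s REST API for the view definitions it exposes (such as HIVE and PIG). The host name and credentials are placeholders, it assumes a Node.js 18+ runtime for the global fetch, and the response fields follow Ambari’s usual shape but should be verified against your cluster’s Ambari version.

```typescript
// Hedged sketch: list the views an Ambari server exposes via its REST API.
// On HDInsight the Ambari endpoint is typically the cluster's public host.

const AMBARI_HOST = "https://mycluster.azurehdinsight.net"; // placeholder
const AUTH =
  "Basic " + Buffer.from("admin:password").toString("base64"); // placeholder credentials

async function listAmbariViews(): Promise<void> {
  const res = await fetch(`${AMBARI_HOST}/api/v1/views`, {
    headers: { Authorization: AUTH, "X-Requested-By": "ambari" },
  });
  if (!res.ok) throw new Error(`Ambari returned HTTP ${res.status}`);

  const body = await res.json();
  for (const view of body.items ?? []) {
    console.log(view.ViewInfo?.view_name); // e.g., "HIVE", "PIG"
  }
}

listAmbariViews().catch(console.error);
```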

Issues with Extensible Web Resource Loading

Ilya Grigorik of igvita.com has published an essay on some of the issues he has found with extensible web resource loading. Loading is typically decided based on how an asset is requested: the parser detects a tag with a resource URL, JavaScript initiates a dynamic request, or the request is triggered through CSS, and each type has its own loading behavior. Browser vendors largely determine the order in which resources are loaded onto a page. For example, “HTML, CSS, and Javascript are considered critical; images and other asset types are less so; some browsers limit number of concurrent image downloads; and CSS assets are lazyloaded by default.”
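
Grigorik’s three request-initiation paths can be made concrete. The browser-side sketch below is illustrative only, with placeholder URLs: a JavaScript-initiated request, a CSS-triggered one, and a preload hint, the kind of primitive that lets an author override the browser’s default prioritization rather than leave it entirely to the vendor.

```typescript
// Illustrative only: the three ways a resource request can be initiated,
// plus a preload hint to influence prioritization. Runs in a browser.

// 1. Parser-initiated: the HTML parser finds a tag with a resource URL,
//    e.g. <img src="/hero.jpg">, and schedules the fetch itself.

// 2. JavaScript-initiated: a dynamic request the parser never sees.
const img = new Image();
img.src = "/hero.jpg"; // the fetch starts when the property is assigned

// 3. CSS-initiated: background images are fetched lazily, typically only
//    once an element matching the rule is actually rendered.
const style = document.createElement("style");
style.textContent = `.banner { background-image: url("/banner.png"); }`;
document.head.appendChild(style);

// Overriding the defaults: a preload hint asks the browser to fetch the
// asset early, regardless of where it would otherwise be discovered.
const hint = document.createElement("link");
hint.rel = "preload";
hint.as = "image";
hint.href = "/hero.jpg";
document.head.appendChild(hint);
```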

The Progress Toward 5G in 2017

For several years, experts predicted that 5G would be commercially available in 2020. In light of recent technological advancements in orthogonal frequency-division multiplexing, 5G is now expected much sooner, likely with limited commercial availability in 2017. At the Mobile World Congress in Barcelona this month, 5G was the primary topic of interest, and the race is on to see which company, and which country, will be the first to offer it to the public.

Trials and Milestones

This month, Verizon and Nokia started field tests of their 5G networks in the Dallas-Fort Worth area, examining the latency of the network as well as its overall speed. They are utilizing the 73 GHz and 28 GHz bands, which are expected to allow multiple-Gbps speeds with approximately 1 ms latency. Some version of Verizon’s 5G network is expected to be commercially available by 2017 and is likely to be faster than Google Fiber.

Korea’s SK Telecom, also in partnership with Nokia, claims that it can attain speeds of 20 Gbps and will be demonstrating speeds up to 25 Gbps in Sweden later this month. The University of Surrey’s 5G Innovation Centre (5GIC) holds the record for the fastest download speed achieved: 1 Tbps.

T-Mobile is expected to start field trials of its 5G network later this year, and AT&T may attempt to use its DirecTV acquisition as leverage over customers threatening to switch to Verizon and its rapidly developing 5G network. When approving the DirecTV acquisition, regulators required AT&T to improve high-speed Internet penetration and focus heavily on expansion in rural areas. While working to expand its network, AT&T is attempting to bundle its broadband Internet, TV, and fixed-line phone services to offset these very high network expansion costs and retain customers.

Facebook’s Analysis of Flash-Based SSD in the Field

Facebook has released an analysis detailing its use of flash-based solid-state drives (SSDs) in the field over the last four years, covering many millions of SSD-days of usage. Due to recent advances in flash capacity, SSDs are increasingly used in data storage, and Facebook’s study aims to detail some of its findings on the efficiency and capabilities of SSDs.

To determine the usefulness of SSDs and their frequency of failure, the researchers analyzed the amount of data written to and read from the chips, how data was mapped, the amount of data copied, erased, and discarded by the controller, and the flash board temperature and bus power. This is one of the first field studies of SSDs to attempt to capture and analyze this information. The measurement system was designed to give a “snapshot” of a drive’s performance and could not store the data as a time series; as a result, the analysis was performed on snapshots of each SSD’s behavior and lifetime metrics.

Analysts concluded that SSD failure rates go through distinct periods relating to how failures emerge and how they are detected: early detection, early failure, usable life, and wearout. They also found that read disturbance errors are not common in field use; that sparse logical data layout plays a strong role in determining SSD failure rates; that higher temperatures lead to higher failure rates, especially for SSDs that do not use throttling; and that the amount of data written by the operating system does not always accurately indicate the amount of wear on the flash cells. Finally, they concluded that there is a relationship between the number of discarded blocks and SSD failure rates: the data indicated that more discarded blocks meant higher failure rates.
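
The last pair of findings, OS-level writes understating cell wear, comes down to write amplification: the controller’s own garbage collection and wear leveling add writes the operating system never sees. The sketch below is illustrative arithmetic only; the counter names are hypothetical stand-ins for whatever telemetry a given SSD exposes, not fields from Facebook’s study.

```typescript
// Illustrative arithmetic for why OS-visible write volume understates wear:
// the controller's internal writes inflate the bytes actually programmed.

interface SsdCounters {
  hostBytesWritten: number;  // bytes the OS asked the drive to write
  flashBytesWritten: number; // bytes actually programmed to the cells
  ratedPeCycles: number;     // rated program/erase cycles per cell
  capacityBytes: number;
}

// Write amplification: flash writes per host write.
function writeAmplification(c: SsdCounters): number {
  return c.flashBytesWritten / c.hostBytesWritten;
}

// Rough fraction of rated endurance consumed, using flash writes rather
// than host writes as the wear signal, per the study's observation.
function wearFractionUsed(c: SsdCounters): number {
  return c.flashBytesWritten / (c.capacityBytes * c.ratedPeCycles);
}

const drive: SsdCounters = {
  hostBytesWritten: 40e12,   // 40 TB written by the OS
  flashBytesWritten: 100e12, // 100 TB actually programmed
  ratedPeCycles: 3000,
  capacityBytes: 1e12,
};

console.log(writeAmplification(drive)); // 2.5: wear the OS view misses
console.log(wearFractionUsed(drive));   // ~0.033 of rated endurance used
```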

The Tradeoffs of Next-Gen IDS/IPS

The popularity of next-generation IDS/IPS products has greatly increased in recent years; according to Bromium, all of the major security vendors now offer one. With these products, some of the network traffic is routed to virtual machines (VMs) for threat detection. However, because there are only a few VMs, not all of the traffic can be routed through them. When operated passively (IDS), the product sends threat detection notices to the security team; when operated in-line (IPS), it blocks any traffic on which malware is detected. Though the IDS/IPS method of threat detection can be effective, several aspects can make it ineffective.

First, if the user is mobile or off-net, the IDS/IPS will not see their activity. Second, though this method is excellent at detecting malware from known sources, attack traffic is often encrypted, and malware is sometimes programmed to be sleepy: it does not activate within the VM but waits until the traffic is passed to the endpoint. Conversely, any attack that does execute in a honeypot VM will be detected, even though many such attacks would not have been an issue for the endpoint to which they were destined, as they would not have been able to execute properly there. Finally, using a Windows VM for detection may conflict with Microsoft’s license terms.
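
The passive versus in-line distinction above reduces to where the verdict is applied. The sketch below is schematic only, with a placeholder detection function rather than any vendor’s engine: the same check deployed as a mirror-port tap that can only alert, and on the forwarding path where it can drop traffic.

```typescript
// Schematic contrast of IDS (passive) and IPS (in-line) deployment.
// The detection step is a dummy stand-in for VM-based inspection.

interface Packet {
  srcIp: string;
  payload: Uint8Array;
}

// Placeholder verdict; real products detonate traffic in instrumented VMs.
function looksMalicious(p: Packet): boolean {
  return p.payload.length > 0 && p.payload[0] === 0xff; // dummy signature
}

// Passive IDS: traffic is mirrored to the sensor; a verdict only alerts,
// so the original packet has already reached the endpoint.
function idsTap(p: Packet, alert: (msg: string) => void): void {
  if (looksMalicious(p)) alert(`suspicious traffic from ${p.srcIp}`);
}

// In-line IPS: the sensor sits on the forwarding path and can refuse to
// forward, blocking the traffic before it reaches the endpoint.
function ipsInline(p: Packet, forward: (p: Packet) => void): void {
  if (!looksMalicious(p)) forward(p);
}
```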