AWS Is Ramping Up Its DDoS Capabilities
AWS is ramping up its DDoS mitigation capabilities, detailing "best practices for DDoS resilience" in a white paper aimed at enterprises with public-facing endpoints. A crucial factor in DDoS mitigation is a network architecture capable of detecting and filtering excess traffic. AWS also provides Elastic Load Balancing and Elastic Compute Cloud (EC2), which allow clients to bolster DDoS endurance and scale up to absorb unforeseen spikes in traffic volume. Other edge services include Amazon CloudFront, AWS WAF, Amazon Route 53, and Amazon API Gateway, which let enterprises leverage a global edge network for greater scale and greater fault tolerance against infrastructure- and application-layer attacks. Read More
FCC Broadcast Spectrum Auction Falters and Fails to Meet Target
The Federal Communications Commission's incentive auction has come to a screeching halt, with bidding topping out at $23.1 billion, well below the $88.3 billion target. Forward bidding activity was sluggish, plodding along at roughly $1 billion in winning bids per round over the course of 27 rounds. That was not enough to cover the $86 billion needed to pay television broadcasters for their spectrum assets, plus the fees for clearing and repackaging license holders. Read More
ZTE Reports Massive H1 Profit Gains Buoyed By Chinese 4G Expansion
Chinese telco equipment manufacturer ZTE has reported sizeable profit increases for the first half of 2016, booking a 9.3% gain in profit to $270 million and a 4.1% increase in operating revenue to $7.2 billion. About 58% of its operating revenue growth came from domestic business, while approximately 20% came from international dealings. Read More
CloudFlare and Telstra Do the Tango
CloudFlare, which supports over 4 million websites through a global network spanning 45 countries, has been trading jabs with Telstra over its claim that Telstra's bandwidth costs are overpriced. In its recent analysis of relative bandwidth costs, CloudFlare argued that a handful of networks drive up mean bandwidth prices: "there are six expensive networks (HiNet, Korea Telecom, Optus, Telecom Argentina, Telefonica, Telstra) that are more than an order of magnitude more expensive than other bandwidth providers around the globe and refuse to discuss local peering relationships. To give you a sense, these six networks represent less than 6% of the traffic but nearly 50% of our bandwidth costs." Read More
Akamai Raises The Bar of Vulnerability Management
Akamai has written the blueprint on threat mitigation and InfoSec, at least according to its blog post about the Akamai Intelligent Platform. The Platform comprises a globally distributed network of 216,000 servers and supports Akamai's CDN and cloud security offerings. Built from scratch with security in mind, it relies on a customized, streamlined variant of Linux stripped of extraneous functionality, which dramatically reduces its vulnerabilities. Each edge server is also configured as a bastion host to further bolster security. Read More
China Mobile's Profits Fueled By 4G Boom
China Mobile, the world's largest wireless service provider, has been working feverishly over the last two years to roll out an ambitious new 4G network in China. Its efforts have paid handsome dividends, as its Q2 results attest: profits jumped 9.1% to nearly $5.5 billion during the period, cementing China Mobile's status as the dominant Chinese telco over its rivals China Unicom and China Telecom. Read More
Nginx Plus vs. F5 BIG-IP and Citrix NetScaler
Nginx has posted two cost-benefit analyses of Nginx Plus versus hardware application delivery controllers (ADCs), namely F5 BIG-IP and Citrix NetScaler. Unsurprisingly, it found that Nginx Plus delivered significant performance gains while cutting costs. Read More
SigFox Partners With UnaBiz To Make Taiwan an IoT Testbed by 2017
French telco and global communications service provider SigFox has announced a major venture to make Taiwan an Internet of Things testbed, in partnership with UnaBiz, which operates IoT networks in Asia. SigFox, a leading provider of IoT-related services in Europe that operates 22 IoT networks worldwide, made the announcement alongside French and Taiwanese officials and specified that its proprietary technology would be the centerpiece of an IoT testbed connecting Taiwan's manufacturing industry and machinery to the rest of the world. Read More
Akamai and Limelight Networks Create a New Chapter in Their Relationship With Licensing Agreement
Global CDN provider Limelight Networks, Inc. has ended a long and costly legal battle of attrition with Akamai Technologies over the disputed '703 patent. The two companies have reached an amicable accord, agreeing to waive all rights to appeal and to convert the $51 million judgment into a $54 million license agreement instead. Read More
Softbank CEO Predicts Major Recovery for Sprint
Softbank, Sprint's Japanese corporate parent, recently posted a 19% gain in first-quarter profits. ABC News reports, however, that Sprint continued to be a liability, posting net losses for the quarter that were offset by Softbank's robust telecom sales in Japan. Softbank's quarterly sales exceeded $20 billion, while its overall profits came to $3 billion. Read More
Fastly Introduces Next-Gen Control Panel
Fastly's next-generation control panel is a response to customers' need for complete visibility into and instant feedback on their applications, allowing them to respond and innovate instantaneously. As its customer base has grown and diversified, Fastly has adapted to serve a wider range of needs and use cases. Long-time users of Fastly need not worry: the interface retains most of its original features while improving navigation, offering easier access to data, and providing a more intuitive user experience. The dashboard features a reconfigured information architecture with easy-to-analyze grids and typography.
The service summary now features easy-to-access actions that can be selected from a drop-down panel at the upper-right corner of the page, as well as an array of links across the top of the screen so that you can toggle between existing configurations and create new ones. Finally, the configuration panel has been arranged horizontally to give the interface a more minimalistic aesthetic, and it includes a summary of each tab's contents to improve clarity. You can try these new features out yourself at the following link.
Coming Right Up! DDoS Attacks-as-a-Service for $5
You can hire a DDoS attack like you would a bounty hunter, for as little as $5. Distributed denial of service (DDoS) attacks are a form of cyber-criminal activity that has thrived in the digital era, creating a bustling underground marketplace where attacks are bought and sold like commodities. Incapsula's latest blog post details services that "rent out access to a network of enslaved botnet devices (e.g., Trojan-infected PC's)" to launch attacks on unsuspecting users.
DDoS-for-hire services get away with it by disguising themselves as stresser services that test the durability and resilience of your server. Of course, as long as you pay the required fee, the server they "stress test" need not be your own.
It's a scary thought, but it's one that is borne out in reality for as little as $5 in the underground hacker community. That's a steep drop from $19 a year ago; now anyone can hire a DDoS attack on the cheap. Incapsula researchers searched for DDoS-for-hire ads on Fiverr, an online marketplace for $5 services, and found plenty of them offering to "stress test" your server against DDoS attacks.
When the researchers reached out to one of the DDoSers, the seller admitted that any site could be targeted barring government websites and hospitals. While the Incapsula team reported the illegal services to Fiverr, which promptly removed a good number of them, it's a scary reminder that DDoS-for-hire is becoming a common feature of everyday life.
Azure Rolls Out New CDN PoPs
Microsoft Azure is making new PoPs available to all CDN-integrated Azure Media Services customers, across a range of regions including India, Australia, and South America. The new PoPs are compatible with existing streaming endpoints, and customers can now enable CDN for streaming endpoints created in any region; once enabled, all CDN PoPs are automatically turned on. Azure is also working on providing a premium tier and multiple CDN options.
Azure Becoming An Edge Security Juggernaut
Microsoft is announcing the debut of its Azure Security Center, which is now available to all Azure customers. It signals the company's commitment to becoming a leader in cybersecurity and threat intelligence by creating an integrated security platform. Microsoft has thus far invested over $1 billion in R&D to improve its threat solution capabilities.
The Security Center provides an overview of, and feedback on, the security state of Azure resources, allowing customers to configure cloud security policies and respond to threats and attacks. Customers also gain actionable insights and recommendations from Microsoft's vast global threat intelligence network, which is powered by machine learning and quickly detects threats, as well as access to solutions from Microsoft's security ecosystem, which includes partners such as Cisco, Qualys, Barracuda, Check Point, Fortinet, F5, Imperva, and Trend Micro.
Other features detailed on its website include:
- A log integrator which streamlines the process of getting security data.
- Support for more Azure resource types, including Red Hat and many other Linux distros, with visibility into system update status, OS configurations, and disk encryption.
- Email notifications.
- New detection of lateral movement, outgoing attacks, and malicious scripts.
- Security incidents overview.
- REST APIs.
- Integrated vulnerability assessment from the Azure ecosystem and partners.
AWS Raising the Bar on CDN+WAF Configuration
You can now use AWS CloudFormation to automate WAF configuration using example rules and match conditions. Amazon's web application firewall integrates well with CloudFront, its CDN, to control and fine-tune the type of traffic that is allowed or blocked on your web applications. Integrating the CDN with WAF in this fashion allows you to provision, update, and delete the components of WAF. The CloudFormation template deploys the following rules and conditions, among others, as part of the solution:
- A manual IP rule that contains an empty IP match set that must be updated manually with IP addresses to be blocked.
- An auto IP rule that contains an empty IP match condition for optionally implementing an automated AWS Lambda function.
- A SQL injection rule and condition to match SQL injection-like patterns.
- A cross-site scripting rule and condition to match XSS-like patterns in the URI and query string.
- A size-constraint rule and condition.
The example rules initially count the requests that match them, so you can test them against your web application, and the solution uses nested stacks to provide a modular means of creating and updating each component. In sum, the WAF web ACL evaluates all requests against the prescribed rules and conditions, returning an HTTP 403 error to clients that send blocked requests and serving allowed or counted requests.
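For instance, once the stack is deployed you can populate the manual IP rule's match set from code. The following is a hedged boto3 sketch, not part of the AWS solution itself; the IP set ID is a hypothetical placeholder that would in practice come from the stack's outputs.

```python
import boto3

# Hypothetical placeholder; in practice, read the ID from the stack's outputs.
MANUAL_IP_SET_ID = "example-manual-ip-set-id"

waf = boto3.client("waf")

# Every mutation in the classic WAF API requires a fresh change token.
token = waf.get_change_token()["ChangeToken"]

# Insert an address range into the manual IP match set; requests from it will
# then be blocked (or counted, depending on the rule's action in the web ACL).
waf.update_ip_set(
    IPSetId=MANUAL_IP_SET_ID,
    ChangeToken=token,
    Updates=[
        {
            "Action": "INSERT",
            "IPSetDescriptor": {"Type": "IPV4", "Value": "203.0.113.0/24"},
        }
    ],
)
```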
A basic configuration of rules and match conditions using CloudFormation is outlined, along with instructions, at the following link.
AWS Inspector is Proof They’re Serious About Security
AWS Inspector is an assessment service that allows you to scale your security vulnerability testing, automating it across test, deployment, and production environments, and to forward your findings to your ticketing and workflow systems. Amazon notes that the process can be conceptualized in two broad parts.
The first part of the security process entails using AWS Inspector to quickly identify and analyze security vulnerabilities, while the second part deals with remedying the vulnerabilities that have been located. Because AWS Inspector is an integrated solution that works with workflow and ticketing systems, remediation can easily be automated using AWS Lambda, which lets you run code without provisioning or managing servers. Lambda is also cost efficient, charging customers only for compute time and nothing while a function is idle.
When AWS Inspector runs a security assessment, it sends a message to the Amazon Simple Notification Service, which in turn invokes AWS Lambda. Lambda fetches the findings and formats and emails them using a separate SNS topic.
The major hurdle, therefore, is setting up the Lambda function in the region in which Inspector is running and creating a dedicated SNS topic that Inspector notifies whenever there are findings. Instructions for configuring the SNS topic can be found at this link. After that, you're ready to roll: set your findings to be published to the SNS topic, create the Lambda function, and you'll receive emailed findings reports.
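A minimal sketch of such a Lambda function is shown below, assuming Inspector Classic's SNS message format; the report topic ARN is a hypothetical placeholder, and the real AWS-provided solution does considerably more formatting.

```python
import json

import boto3

inspector = boto3.client("inspector")
sns = boto3.client("sns")

# Hypothetical placeholder for the email-subscribed topic (distinct from the
# topic that Inspector itself notifies when an assessment run reports findings).
REPORT_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:inspector-reports"


def handler(event, context):
    """Triggered by SNS; looks up each reported finding and emails a summary."""
    for record in event["Records"]:
        message = json.loads(record["Sns"]["Message"])
        if message.get("event") != "FINDING_REPORTED":
            continue  # ignore run-started / run-completed notifications
        finding = inspector.describe_findings(
            findingArns=[message["finding"]]
        )["findings"][0]
        report = "{}: {}\n\n{}".format(
            finding["severity"], finding["title"], finding.get("description", "")
        )
        sns.publish(
            TopicArn=REPORT_TOPIC_ARN,
            Subject="Amazon Inspector finding",
            Message=report,
        )
```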
Google Uses DeepMind to Run Its Data Centers Efficiently
DeepMind famously came to the fore when its AI, AlphaGo, beat Lee Sedol in a series of Go matches, the first time a program had beaten one of the game's top professionals at the notoriously complex game.
Now the advanced machine learning capabilities provided by DeepMind are delivering tangible benefits for Google, which owns the company. Google has announced that it is ceding control of all of its data center cooling operations to DeepMind after an initial trial run. The trial used DeepMind's AI to manage cooling in some of its data centers and produced a 15% improvement in power usage efficiency, reducing costs as well as the energy footprint of the data centers.
Data centers are huge cost and energy sinks, accounting for a sizeable percentage of the 4,402,836 MWh of power that Google uses each year, according to Bloomberg. Given the high cost of energy, DeepMind could save Google hundreds of millions of dollars in utility bills over a span of just a few years.
DeepMind's AI technology uses neural networks to analyze power usage and predict spikes in energy needs over time, calibrating the equipment to maximize efficiency.
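To make the idea concrete, here is a toy supervised-learning sketch in the same spirit: a small neural network trained on synthetic sensor snapshots to predict power usage effectiveness (PUE). It is purely illustrative and bears no relation to DeepMind's actual models or data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-ins for a handful of control variables
# (outside temperature, server load, pump speeds, and so on).
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 8))
# Synthetic PUE: a baseline of 1.1 plus small contributions from two variables.
y = 1.1 + 0.05 * X[:, 0] - 0.03 * X[:, 1] + rng.normal(scale=0.01, size=5000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Candidate control settings can then be ranked by their predicted PUE.
print("held-out R^2:", model.score(X_test, y_test))
```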
“It controls about 120 variables in the data centers. The fans and the cooling systems and so on, and windows and other things,” DeepMind CEO Demis Hassabis said to Bloomberg. “They were pretty astounded.”
Verizon Prepares for NG-PON2 Testing
Verizon has announced that by the end of this month it will trial its landmark fiber-optic technology, NG-PON2 (next-generation passive optical network), which can reach blazing-fast symmetrical speeds of up to 10 Gbps over FiOS. The NG-PON2 equipment is aimed at FiOS enterprise and consumer services and was developed in partnership with Ericsson, Calix, and Adtran. The trial is slated to occur in Verizon's Innovation Lab in Waltham, Massachusetts.
The technology has been approved by the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) and is expected to pave the way for Verizon's most significant wireline network upgrade in nearly a decade. What sets NG-PON2 apart is that it relies on four different wavelengths, denoted by different colors, to transfer data with a total capacity of 40 Gbps per fiber. It also allows Verizon to deliver different services over different wavelengths on the same fiber, differentiating between its residential and enterprise customers.
If the testing proves successful, enterprise customers will be the first to benefit, as early as next year, with consumers and residential clients gaining access later on.
The first successful field trial of NG-PON2 tech occurred last year in Framingham, Massachusetts.
Verizon is touting its fiber network as highly scalable and easily upgraded: "By implementing this advanced technology without having to change the current underlying fiber optic infrastructure, additional traffic can be carried cost effectively," the company noted in a statement. "Verizon also can improve flexibility and resiliency using NG-PON2, because traffic can be shifted amongst multiple wavelengths without impacting customers."
Huawei is Partnering with GE for IoT and Smart Cities in China
GE is bringing its IoT expertise to China, according to a new press release announcing its partnership with Chinese telco manufacturer Huawei.
The primary focus of the collaboration will be smart machines that boost productivity and worker output, a subset of IoT known as the industrial internet of things (IIoT). The enticing promise of IIoT is that it combines big data analytics with machine learning, driving massive gains in productivity.
GE is investing heavily in an $11 million incubator, which will work on developing start-ups and software to improve machine intelligence, according to Reuters. Developers will build their applications on Predix, GE's digital platform; such applications work with connected machinery to collect and analyze data in real time.
GE has recently undergone a massive restructuring and shifted its focus to IoT, investing more than $500 million annually in software.
In a recent statement to shareholders, GE noted: “We are a company that invests in broad industrial transitions, and they don’t come much bigger than the full application of data and analytics to machines and systems.”
The move has paid dividends: GE is expected to make $6 billion in revenue this year. Chief Digital Officer Bill Ruh notes that using smart machines has saved the company more than $500 million in production costs, and he predicts that such savings will grow to over $1 billion by 2020. This experience has motivated GE to bring its smart machine solutions to other partners and countries.
“Once we got it right for ourselves we take it to our customers … We’re bringing this to China, we’re open for business in China today to be able to do this,” he said.
GE's business strategy of trialing solutions internally and translating them into saleable services has been working. Predix is not only used for internal manufacturing equipment but has also been deployed to create new revenue streams for clients.
Such developments could also be used to innovate in the smart-city space, a massive initiative being undertaken in China. Huawei reports that more than 300 smart cities are planned in the nation, necessitating collaboration with third parties. GE can also bring its proprietary smart-city solutions suite, Current, to the mix. Current leverages a network of LED technology, solar panels, and wireless controls to create green urban spaces, allowing cities to save on utility costs by generating power rather than purchasing it.
Google Expands Cloud Platform to Western Region of North America
Google has announced in a blog post that it is extending its cloud platform to customers on the west coast of North America, dubbing the new region the Oregon Cloud Region. West Coast customers based in Vancouver, Seattle, Portland, San Francisco, and Los Angeles can expect a 30-80% latency reduction. They also now have access to Google Compute Engine, Cloud Storage, and Container Engine.
Latency is one of the biggest issues encountered by gamers, and reducing it is crucial to providing consistent and immersive gaming experiences. As Google client Multiplay, a video game hosting specialist, notes:
“Regional latency is a major factor in the gaming experience. Google Cloud Platform’s network is one of the best we’ve worked with, from a tech perspective but also in terms of the one-on-one support we’ve received from the team.”
As we noted in an earlier blog post, Google has also invested in a massive trans-Pacific cable network in furtherance of its aim to establish a Cloud Platform region in Tokyo. The Japan-based region is expected to commence operations later this year.
Google Unveils Two New Cloud Machine Learning APIs
Google has released a blog post announcing two new Cloud Machine Learning products: the Cloud Natural Language and Cloud Speech APIs.
The Cloud Natural Language API is the fruit of Google's efforts to improve machine understanding of human language, so that computers can parse the intricacies of syntax more accurately. It initially supports English, Spanish, and Japanese, and it can assess the general sentiment expressed in a statement, identify the various entities mentioned (be they companies, persons, media, or animals), and parse the structure of a sentence.
Google believes that its natural language API can be utilized to great effect in a wide range of industries: "For example, digital marketers can analyze online product reviews or service centers can determine sentiment from transcribed customer calls."
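As a rough illustration of the product-review use case, here is a hedged sketch using the current google-cloud-language Python client (which postdates this announcement); the review text is made up.

```python
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

review = "The delivery was late, but the product itself is fantastic."
document = language_v1.Document(
    content=review, type_=language_v1.Document.Type.PLAIN_TEXT
)

# Overall sentiment: score runs from -1.0 (negative) to +1.0 (positive).
sentiment = client.analyze_sentiment(
    request={"document": document}
).document_sentiment
print(f"score={sentiment.score:.2f} magnitude={sentiment.magnitude:.2f}")

# Entities mentioned in the text (companies, people, products, and so on).
for entity in client.analyze_entities(request={"document": document}).entities:
    print(entity.name, language_v1.Entity.Type(entity.type_).name)
```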
The Cloud Speech API serves a related but different function, specializing in speech-to-text conversion for over 80 languages and applying voice recognition technology to use cases in apps and IoT. It applies "advanced deep learning neural network algorithms to [user's] audio for speech recognition with unparalleled accuracy. Speech API accuracy improves over time as Google improves the internal speech recognition technology used by Google products."
The API allows machines to take advantage of tailored word hints to optimize recognition of certain relevant phrases, words, or neologisms; for instance, Google notes that a smart TV would listen for keywords such as "fast forward." Cloud Speech also features asynchronous calling that allows for faster and easier app development, and it has been fine-tuned to function in noisy environments.
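The smart-TV example might look something like the following hedged sketch, using the current google-cloud-speech Python client; the Cloud Storage URI is a made-up placeholder.

```python
from google.cloud import speech

client = speech.SpeechClient()

# "Word hints": bias recognition toward phrases the device expects to hear.
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
    speech_contexts=[speech.SpeechContext(phrases=["fast forward", "rewind"])],
)
audio = speech.RecognitionAudio(uri="gs://example-bucket/remote-command.wav")

# Synchronous recognition; long_running_recognize() is the asynchronous
# variant mentioned above for longer audio.
response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)
```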
Aryaka Grows 150% in Q2 2016
Aryaka, the global SD-WAN solutions provider, has announced a stunning fourth consecutive quarter of growth. The company reports that in Q2 of FY 2016 it grew over 150% year-over-year compared to Q2 of 2015.
As Aryaka states in its press release, the company has enjoyed strong partner contributions, growth in customer bookings (with over 70 customers added), and continued expansion within its installed base. The growth was due in large part to broader demand for connectivity, which Aryaka provides in a cost-efficient, flexible, and low-latency manner.
Aryaka also touted the fact that 40% of its total bookings came from its pre-existing user base, which sought further services from the company. Much of its bookings growth came from China, the broader APAC region, and Europe.
“I’m excited about another blockbuster quarter whereby we delivered solid execution and tremendous growth across all geographies,” said Shawn Farshchi, President and CEO of Aryaka, in the press release. “Aryaka is providing our customers with global connectivity, application acceleration, and ability to scale their businesses. Our results validate our leadership position in the global SD-WAN space.”
Facebook Is Introducing Open-Source AI Hardware Design
The Facebook AI Research (FAIR) team is touting its advances in the development of AI training hardware, which it attributes to a combination of strategic investments, partnerships with vendors, and in-house hardware expertise. FAIR is now announcing that it will contribute its GPU hardware research to the Open Compute Project.
The press release notes that while machine learning and artificial intelligence have been subjects of research for some time now, larger strides have been made in recent years due to the advances in computing capabilities as well as the public availability of larger and more numerous data sets. In particular, as computational capabilities increase, researchers are empowered to delve more deeply into complex systems.
However, larger-scale developments in such software require a commensurate system: "But we realized that truly tackling these problems at scale would require us to design our own systems. Today, we're unveiling our next-generation GPU-based systems for training neural networks, which we've code-named 'Big Sur.'"
Big Sur is an Open Rack-compatible piece of hardware that incorporates eight high-performance GPUs of up to 300 watts each, attaining speeds twice those of its predecessor. Distributing the training load across eight GPUs allows the team to double both the size of the networks and the speed of training. In addition, Big Sur is built on NVIDIA's Tesla Accelerated Computing Platform and can support multiple PCIe cards. Finally, unlike comparable systems, Big Sur is highly efficient and can operate in free-air-cooled Open Compute data centers. The design materials will be freely available via the Open Compute Project.
Facebook Introduces the 6-Pack Open Modular Switch
For the past few years, Facebook has been working on a new, disaggregated, and more flexible network in order to avoid some of the problems encountered in closed, iterative traditional networking technologies. The result of its efforts is an open modular switch platform, dubbed "6-pack."
Facebook details the process of developing 6-pack in a blog post. The team began by developing a top-of-rack network switch (called "Wedge") and a Linux-based OS ("FBOSS") to run it. They then developed a scalable modular network architecture, a data center fabric with 6-pack at its core. 6-pack is a "full-mesh non-blocking two-stage switch" with 12 independent switching components, each capable of switching 1.28 Tbps.
As this description implies, it comes in two configurations: the first exposes 16x40GE ports to the front and back, while the second opens the full 1.28 Tbps capacity to the back. Because the switching elements are independent and each contains a full local control plane on a microserver that communicates with a central controller, they can easily be modified without disrupting the system at large. The result is an open, modular switch platform that is highly scalable and built from off-the-shelf building blocks.
Facebook is currently testing 6-pack, FBOSS, and Wedge, and is planning on contributing them to the Open Compute Project in the hopes that they will be used by the community as the foundation of an entire industry.
Instart Logic Introduces Multi-Page Predictive Prefetching
Instart Logic is introducing Multi-Page Predictive Prefetching as part of its AppSpeed portfolio. The development promises to improve UX across a broad range of apps; in an era where customers increasingly reach businesses through mobile devices, mobile UX plays an important part in shaping a company's perception.
The endpoint-aware application delivery solution addresses an area rarely tackled by developers, who tend to spend their focus and resources optimizing primary landing pages and main access points. Rather than simply guessing where the user will go next and loading that page in the background, which wastes bandwidth and resources, Multi-Page Predictive Prefetching identifies static elements shared across potential next pages and models common flows in order to push common resources to the browser at the optimal time, resulting in a more seamless browsing experience.
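The core idea can be sketched in a few lines; the flow and resource data below are invented, and this toy obviously stands in for Instart Logic's proprietary machine-learned models.

```python
from collections import Counter

# Invented examples: observed navigation flows and each page's static resources.
flows = [("home", "search"), ("home", "product"), ("search", "product")]
resources = {
    "search": {"app.js", "grid.css", "logo.svg"},
    "product": {"app.js", "gallery.js", "logo.svg"},
}


def prefetch_candidates(current_page, top_k=2):
    """Pick resources shared by the most likely next pages from here."""
    next_counts = Counter(dst for src, dst in flows if src == current_page)
    likely = [page for page, _ in next_counts.most_common(top_k)]
    if not likely:
        return set()
    # Push only the static elements the candidate pages have in common,
    # rather than speculatively loading any single whole page.
    return set.intersection(*(resources[p] for p in likely))


print(prefetch_candidates("home"))  # {'app.js', 'logo.svg'}
```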
“Using the tremendous amount of user behavior data we analyze across millions of sessions with our machine learning algorithms in the cloud, we can apply additional performance optimizations that improve the experience throughout a user’s journey,” explains Shailesh Shukla, CMO and VP of Products at Instart Logic. “Multi-page Predictive Prefetching leverages our proprietary endpoint-aware Nanovisor technology to identify the best content to prefetch.”
Cisco’s Visual Networking Index Predicts 3X Increase in IP Traffic By 2020
Cisco has recently released the findings of its Visual Networking Index (VNI) forecast for 2015 to 2020. According to the VNI, global IP traffic will nearly triple, growing at a 22% CAGR over the five years leading up to 2020.
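As a quick back-of-the-envelope check on how those two figures relate (our arithmetic, not Cisco's methodology):

```python
# A 22% compound annual growth rate sustained for five years multiplies
# traffic by 1.22 ** 5, roughly 2.7x, i.e. a near-threefold increase.
print(f"{1.22 ** 5:.2f}x")  # 2.70x
```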
The global Internet-using population is also expected to increase by more than one billion in that time, reaching a total of 4.1 billion by 2020. The burgeoning of M2M connections and the growth of the Internet of Things will result in a massive increase in new devices and connections, totaling 26.3 billion by 2020.
As smart cities, virtualized health services, and other M2M connections proliferate (nearly tripling by 2020), global internet traffic will continue to balloon, which will in turn generate growth in the market.
Video will drive the lion's share of global internet traffic in 2020, accounting for a staggering 79% and reaching 3 trillion video minutes per month. HD and Ultra HD video will account for the highest share of internet video traffic (82%).
On a more somber note, the growth in internet traffic will also bring an increase in security threats; DDoS attacks specifically are expected to increase to 17 million by 2020. Cisco's analysis found that DDoS attacks can account for as much as 10% of a nation's internet traffic. Regionally, APAC will be a major driver of growth over the next five years, at a 22% CAGR, followed by North America (19% CAGR).
Is Airtel Checking CloudFlare Traffic in India?
Karthik Balakrishnan recently suggested that Airtel was sniffing and censoring unencrypted data packets traveling between CloudFlare's data centers and the websites connected to it. CloudFlare uses Airtel's services to connect to host sites, which ostensibly gave Airtel the access needed to check each packet of data for violations of orders set forth by India's Department of Telecommunications.
Balakrishnan suggested that any connection between CloudFlare and content servers could be blocked by Airtel inserting an iframe, which he encountered when attempting to access ThePirateBay.org. The conclusion he reached was that Airtel was monitoring CloudFlare's upstream traffic without its awareness, thereby snooping on traffic from all Indian users who access any one of the 2 million-plus websites that use CloudFlare. Airtel denied the accusations in response to an inquiry by Medianama:
“This is completely baseless and incorrect. As a policy, Airtel does not block/sniff any content. Only in the case of instructions/orders from the Government or the Courts, specified URLs are blocked. Blocking of any page [as per instructions from relevant authorities] is done at the URL level and not whether it is http/https. This also has nothing to do with the validity of any certificate.”
In an interview with Medianama, CloudFlare CEO Matthew Prince said that the company had reached out to Airtel regarding the interference with Pirate Bay's traffic, which blocked CloudFlare's ability to connect to the customer's origin. Airtel reportedly told CloudFlare that this particular customer's access had been blocked due to a specific government request, and that the block affected requests connecting to CloudFlare, not requests from CloudFlare to the customer's origin. While Airtel's agreement with CloudFlare prohibits it from interfering with traffic, the one exception is a government order.
Prince also noted that Pirate Bay opted to send traffic over an unencrypted connection, even though CloudFlare has the ability to pass it over an encrypted one. Using encryption would prevent Airtel from pulling the host header and redirecting traffic.
That being said, Balakrishnan's concerns about user privacy remain open, as Airtel would have had to inspect the header of every unencrypted packet in order to block the site. While Prince acknowledged that this would be problematic, he suggested there was very little CloudFlare could do beyond offering the option of encrypted connections. Governments in various countries have circumvented this by blocking ISPs' traffic to CloudFlare in order to block certain websites.
AT&T Unveils New Hadoop-Based Security Platform
AT&T’s latest security platform uses big data analytics on top of Hadoop to assess billions of security threats in mere seconds. It collects information from every corner of its network, leveraging the data sent by the routers, switches, and servers to identify and analyze threats in order to respond quickly and avoid them.
Jason Porter, VP of security at AT&T, says that the AT&T Threat Intellect system allows the company to protect consumers and react quickly to limit the impact of threat events: "This is reducing the time it takes us to deploy security capabilities by greater than 95%. It allows us to be much more accurate, because we can correlate many different signatures. And more than it ever has before, the machine learning can identify that there is an abnormal traffic pattern and, with high confidence, detect a correlated threat event."
Rather than characterizing it as a collection of security services, it would be more accurate to describe the Threat Intellect system as a machine-learning-powered brain that analyzes data patterns and threat activity with unprecedented speed and efficiency. Given the vast reach of AT&T's network, which ranges from data centers to millions of user-end mobile devices, this brain constantly improves, learning to identify threats at an increasing rate before automatically deploying security solutions.
"AT&T secures more connections than any communications company in North America," said Steve McGaw, chief marketing officer at AT&T. "No carrier experiences the depth and scale of security threats we see on a daily basis – more than 30 billion vulnerability scans and 400 million spam messages are detected on our IP network. The power of Threat Intellect gives us the ability to process 5 billion security events – a full day's worth of activity for all of our security customers combined – in only 10 minutes."
This Hadoop-based architecture can analyze the 117 petabytes of traffic flowing across the network, which in turn frees AT&T's scientists to focus on developing and improving the security architecture and the machine learning itself, feeding relevant data back into the security platform.
Porter goes on to say that "our PhDs and data scientists can focus on new things — they are always going to be reshaping and changing their attack rules; what this allows us to do is identify those more rapidly. We made the commitment to make security foundational at the time we did Domain 2.0. So simultaneous to doing software-defined networking, we agreed that everything had to feed [data] into this platform."
EU Levels New Antitrust Charges Against Google
The European Union's antitrust authorities have been struggling to rein in Google's activities in the region since 2010, arguing that the search giant's actions harm consumers. The latest round of charges leveled against Google deals with its comparison shopping and advertising practices. The EU has argued in the past that the company unlawfully used its dominant market position to force companies to sign restrictive agreements to host Google's search engine and ads on their websites; some of these agreements prevent third-party websites from hosting rival advertisements.
The Commission argues this time around that it has uncovered more evidence supporting its view that Google is using its weight to stifle competition. This dominance, it argues, allows Google to privilege certain advertisements to the detriment of rivals and to reduce choices for consumers.
Google, on the other hand, argues that its products and innovations have had the net effect of empowering consumers and increasing the range of choices available to European users. Google currently controls 80% of the market for on-site search hosting and roughly 90% of the European search market overall. If found to have violated EU competition regulations, Google could face staggering fines of up to 10% of its global annual revenues.
Ericsson Applies to FCC for 5G Demo Trials
Ericsson has recently filed an application with the FCC for permission to conduct temporary 5G demo trials in partnership with T-Mobile, which it plans to carry out later this summer. The telco manufacturer disclosed that it would use one base station and one user device to carry out the demonstration indoors at T-Mobile's Washington headquarters, and that it would conduct the trial on government spectrum bands only: "We are seeking to demo 5G to T-Mobile. Ericsson is requesting to operate on the government spectrum bands only because this phase of our 5G research was designed in Sweden to operate on these bands. We have no plans to request that this spectrum be repurposed for commercial use."
The Swedish telco added that "the demo will last only two days, but we are seeking authorization for 60 days because the dates may be shifted at the last moment due to customer business needs." Ericsson plans to run the demo on the 14.7 to 15.5 GHz bands in order to analyze the nature of millimeter wave transmissions indoors. In addition to Ericsson, T-Mobile is also working with Nokia on developing 5G infrastructure.
All four major US telcos have plans to roll out 5G in the near future, given that connections are projected to double by 2020 and to increase to an astonishing 500 billion by 2030. The FCC plans to release four large swathes of spectrum for commercial use by 2020.
NSF Invests $400 MM in Advanced Wireless Research Initiative Along with US Telcos
The National Science Foundation has announced today that it will invest $400 million over the next seven years to develop wireless platforms and promote advanced wireless research, in conjunction with the Advanced Wireless Research Initiative begun by the Obama administration. With experts anticipating rapid growth in wireless interconnections by 2020, the need for high-bandwidth, low-latency, high-speed connections will grow exponentially.
As such, the NSF noted in its press release that its investments aim to support "the research community in experimenting with and testing novel technologies, applications and services capable of making wireless communication faster, smarter, more responsive and more robust."
Among the factors driving the need for innovation in the wireless space are the predicted growth of the Internet of Things and the ballooning volume of data trafficked over an ever-increasing number of smartphones. Wireless technologies will also need to become more reliable in light of the prospect of self-driving vehicles and remote surgeries, which will require sturdy connections.
“NSF is a leader in catalyzing and supporting our nation’s academic research community to advance scientific discovery and innovation,” said NSF Director France Córdova. “In the area of computer networking and communication, NSF has nurtured communities of researchers, experimenters and developers from the very beginning. The efforts of the Advanced Wireless Research Initiative will continue this progress and have profound implications for science and society in the years to come.”
In addition, the NSF has announced that it will partner with US Ignite and other related public agencies to form the Advanced Wireless Research Consortium, whose mission is to cultivate cutting-edge wireless research by fostering productive relationships between academics and industry leaders. Intel, Nokia, and Qualcomm, in addition to the Big Four US wireless carriers, have all expressed support for the consortium.
By far the consortium's most exciting plan is the Advanced Wireless Research Platform, a city-scale wireless research platform that will become a testbed for trials in 5G architecture, millimeter wave, network security and privacy, and low-latency gigabit services. Industry partners have committed $35 million to support the development of such city-sized platforms.
“Working with other agencies and industry partners, NSF’s support of fundamental research on advanced wireless will be transformative and take us beyond the current and next generation of wireless — beyond what has been envisioned thus far,” said Jim Kurose of the NSF. “Many promising areas of research, including millimeter-wave networks, dynamic spectrum sharing and network virtualization, are maturing after years of federal investment but require additional experimentation and testing at scale.”