Open Source CloudRouter Goes to Production

Over the past several years, the networking industry has been trending away from physical infrastructure and toward virtual and cloud environments. As a result, open source router software has begun to turn the tide, challenging the traditional networking equipment deployed in most current data centers. The most notable development in this space has been the release of the open source CloudRouter Project, formed as a collaborative effort between CloudBees, Cloudius Systems, IIX, Nginx and the OpenDaylight Project.

This open source router project grew out of cloud service providers’ need for a high-quality SDN and routing software distribution. It aims to reduce the cost of networking by replacing proprietary routers and switches built on custom ASICs with software that runs on standard x86 servers. And while the project is still relatively young, it is already having a clear impact on the market.

SDN, CloudRouter and Cisco ACI

The last several years have seen rapid advancement in software support for open networking, and the CloudRouter Project now allows network equipment to be managed by external devices such as commodity servers. The benefit is that large-scale networks become cheaper and easier to build and manage.

While the CloudRouter Project only recently emerged on the scene, it marks a clear shift in the network infrastructure industry. This shift is most evident in Cisco’s unveiling of its own approach to programming the network, known as Application Centric Infrastructure (ACI). Cisco has been developing ACI since acquiring Insieme Networks, an SDN-focused startup founded by former Cisco executives.

The ACI program aims to reduce the time it takes to provision, change or remove applications, using the new Nexus 9000 switch. This switch is the key building block for Cisco’s ACI solution, enabling the transition to the ACI mode of NX-OS with a software upgrade and the addition of a controller. Cisco claims that “unlike traditional SDN controllers, it operates independently of switch data and control planes, allowing the network to respond to endpoint changes even when the APIC is offline.” Cisco hopes that this flexibility in how application networks are defined will help it rival the current advancements in SDN.

HAproxy Load Balancing Primer

Since its inception in 2001, HAproxy has grown to become one of the most widely used open source load balancers on the market. It was designed specifically as a high-availability load balancer and proxy server for TCP and HTTP-based applications, operating at both layer 4 and layer 7. Due to its speed and scalability, it has become widely used by very high-traffic websites, including GitHub, Reddit, Tumblr, Twitter and parts of Google.

Over the course of this guide, we’ll break down its basic architecture and configuration, and draw a comparative analysis against other industry load-balancing software to help you choose what’s best for your platform.

Understanding Load Balancing

Load balancing, sometimes described as fault-tolerant proxying, distributes requests coming into one domain across multiple web servers, which hold the data requested by clients. The main objective of load balancing is to avoid a single point of failure, so that no individual component is so critical that its failure brings down the whole system.

HAproxy was built to address these concerns as a fast, reliable and free load-balancing proxy for TCP and HTTP-based applications. It is written in C with a single-process, event-driven model designed to reduce the cost of context switches and memory usage. Pre-forked or threaded servers consume more memory, whereas HAproxy can process several hundred tasks in as little as a millisecond.

[Diagram: HAproxy load balancing architecture with multiple backend servers and peered load balancers]

The basic architecture we’ll be discussing is shown above. While the diagram shows five backend servers, HAproxy could be connected to hundreds of them, with multiple load balancers handling the requests. What’s important to note is that the load balancers have a connection to each other, which is called peering. This is integral to running your system smoothly, and we’ll discuss that specific configuration in more depth later on.
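As a rough sketch of that layout, the haproxy.cfg below wires a frontend to a pool of backend servers and peers two load balancers; the names and addresses are hypothetical, and a real deployment would tune its own timeouts and health checks.

    global
        maxconn 50000

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    # Peering: each load balancer lists itself and its partners so that
    # stick-table data (such as client-to-server affinity) is shared.
    peers mypeers
        peer lb1 192.0.2.10:1024
        peer lb2 192.0.2.11:1024

    frontend www
        bind *:80
        default_backend webservers

    backend webservers
        balance roundrobin
        stick-table type ip size 200k expire 30m peers mypeers
        stick on src
        server web1 192.0.2.101:80 check
        server web2 192.0.2.102:80 check

With a peers section like this, the affinity state tracked in the stick table is replicated to the partner load balancer, which is exactly the peering role described above.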

Cisco Invests in VeloCloud’s SD-WAN Solution

Given the burgeoning market for SD-WAN solutions, Cisco and the way of the router have been looking like yesterday’s news, but the company announced this week that it has invested in VeloCloud’s SD-WAN solution. According to VeloCloud, Cisco is integrating its Intelligent WAN (IWAN) product with VeloCloud’s technology to help it adapt to this new router-less market.

Cisco and VeloCloud take two different approaches to SD-WAN—Cisco with their all-in-one appliances at the branch offices, as opposed to VeloCloud and their controller in the cloud. This integration could position both companies at the head of the SD-WAN market, as competition for the best SD-WAN solution continues to heat up amongst the major players. We’ll have to wait and see what products they develop given this new technological partnership.

Dell Announces New Linux-based Switch OS

Dell recently announced a new switch OS that has the potential to become the software foundation across the company’s portfolio of data center infrastructure. Dell is adding the OS to the network operating system choices it offers buyers of its switches, which can already be purchased with an NOS from Big Switch Networks, Cumulus Networks, IP Infusion or Pluribus Networks. The news follows the partnership between Dell and VMware, which is expected to use the new OS on its hyper-converged systems. Cumulus, whose OS on Dell switches shares the most overlapping features with the new software, currently looks set to take the hardest hit from its introduction.

Qwilt and three6five Help Operators in Africa with Streaming Video Traffic

Qwilt, a provider of online video delivery and video caching solutions, has announced a technology partnership with three6five, an IP networking systems integrator in South Africa, with the goal of improving OTT streaming video strategies for service providers in Africa. The partnership will help three6five offer service providers in the region open caching solutions that address the growing demand for streaming video.

The recent launch of new video streaming services throughout Africa has led to increased traffic across networks, creating massive bandwidth demands for service providers, who must find a cost-effective way to scale their networks. Officials from Qwilt comment that “it won’t take long before streaming traffic in Africa accounts for 60 percent or more of a service provider’s downstream network traffic at peak—as we now see in Europe and the USA.”

Ericsson, TeliaSonera Go 5G

Ericsson and TeliaSonera have agreed to jointly develop and test selected 5G use cases and services, to be made available in 2018, with Stockholm the first city to receive the service. Officials from the two companies state that they are hopeful there will be approximately 150 million subscriptions to this new 5G network by the end of 2021. The partnership taps into both companies’ strength in innovation and technology, as they plan to develop 5G for both communications and Internet of Things (IoT) services to support new business growth and opportunities. Sweden has long been a pioneering ICT nation, with Ericsson and TeliaSonera having launched the first 4G network in Sweden in 2009.

Deep Look Into Apache Traffic Server

Apache Traffic Server is a high-performance caching proxy server best known for its use by Yahoo!, processing over 30,000 requests per second and serving more than 30 billion web objects a day across the Yahoo! network. Since being open sourced back in 2009, Apache TS has become one of the leading proxy servers, distributing content to millions of users on a daily basis. In this guide, we’ll delve deeper into its configuration and features, helping you decide whether Apache TS best suits your caching needs.

Unlike Varnish and Nginx, which function more explicitly as HTTP accelerators, Apache TS was designed with a broader range of capabilities. It can best be deployed in three different ways:

  • Web Proxy: Receives user requests for web content and handles the traffic itself, either serving content directly from its cache or forwarding the request to the origin server on the user’s behalf and then storing the response.
  • Reverse Proxy: Places Apache TS in front of the origin server to accept incoming client requests as if it were the origin server itself, accelerating the process of receiving requests and disseminating content.
  • Cache Hierarchy: Participates in cache hierarchies in which requests unfulfilled by one cache are routed to other regional caches, increasing speed and lowering bandwidth usage by searching nearby caches before going straight to the origin server for content (see the sketch after this list).
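For the cache hierarchy case, parent caches are typically declared in Traffic Server’s parent.config file. Below is a minimal sketch; the parent cache hostnames and port are assumptions.

    # parent.config: send requests this cache cannot satisfy to one of two
    # hypothetical regional parent caches, alternating between them.
    dest_domain=. parent="cache1.example.net:8080; cache2.example.net:8080" round_robin=true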

Installation

For installation, you have two basic options: build it from the source code or install it from Apache distribution packages. To ensure you have the latest features, Apache recommends that you download Traffic Server straight from the source code, as the distribution packages have been known to lag behind the current stable release by a significant amount.

To build it from source, your server will need the following tools and libraries (a minimal build sketch follows the list), with further guidelines outlined here.

  • pkgconfig
  • libtool
  • gcc (>= 4.3 or clang > 3.0)
  • GNU make
  • openssl
  • tcl
  • expat
  • pcre
  • libcap
  • flex (for TPROXY)
  • hwloc
  • lua
  • curses (for traffic_top)
  • curl (for traffic_top)
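Assuming those dependencies are in place, a source build generally follows the standard autotools flow; the release version below is a placeholder, and the /opt/ts prefix is chosen to match the configuration paths used later in this guide.

    # Hypothetical source build; substitute the actual release version
    tar xjf trafficserver-X.Y.Z.tar.bz2
    cd trafficserver-X.Y.Z
    ./configure --prefix=/opt/ts
    make
    sudo make install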

Configuration

Once you have Apache TS installed, there are two types of configuration—you can set it up as a reverse proxy or a forward proxy.

  • Reverse Proxy: The most common configuration is to set up ATS as a transparent and caching reverse proxy that forwards all requests to a single origin address and caches the responses based on their headers.
  • Forward Proxy: Unlike Varnish or Nginx, Apache TS has the ability to be configured as a transparent forward proxy. This is typically used when you need to improve the performance of a local network’s use of external resources or you want to have the ability to monitor or filter your traffic.

Below, we’ll be exclusively discussing the configuration and features of Apache TS’s reverse proxy, installed from the source code. For more information on its forward proxying capabilities, check here.

Reverse Proxy Configuration

To set up your reverse proxy, a few changes need to be made to the configuration files located in the /opt/ts/etc/trafficserver directory. In the records.config file, make sure that the following settings have been configured (a sketch of the resulting files follows the list):

  • proxy.config.http.cache.http: enables caching of proxied HTTP requests.
  • proxy.config.reverse_proxy.enabled: enables reverse proxying support.
  • proxy.config.url_remap.remap_required: requires a remap rule that ensures your proxy can’t be corrupted by users trying to mask their identities to access your site.
  • proxy.config.url_remap.pristine_host_hdr: keeps the client request headers unchanged, which is useful when the origin server performs domain-based virtual hosting or other actions dependent on the Host header.
  • proxy.config.http.server_ports: makes sure port 8080 is used for HTTP traffic.
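As a concrete illustration, the corresponding records.config lines might look like the sketch below, paired with a single remap.config rule; the hostnames are hypothetical, and the values simply enable the settings described above.

    # records.config: enable caching and reverse proxying, listen on port 8080
    CONFIG proxy.config.http.cache.http INT 1
    CONFIG proxy.config.reverse_proxy.enabled INT 1
    CONFIG proxy.config.url_remap.remap_required INT 1
    CONFIG proxy.config.url_remap.pristine_host_hdr INT 1
    CONFIG proxy.config.http.server_ports STRING 8080

    # remap.config: map public requests to a hypothetical origin server
    map http://www.example.com/ http://origin.example.com/

After editing these files, restart Traffic Server so the new settings take effect.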

Nginx Technical Highlights

When it comes to web server software, Nginx has quickly been closing the gap with rival Apache, with an estimated 21% of the top one million busiest websites on the Internet today running Nginx. Nginx performs all the functions of a web server, proxy server, web accelerator and load balancer, giving developers the flexibility to build and deploy their sites with ease. To better explain these features, I will break down the details necessary to implement Nginx on your system and delve deeper into their functionality.

The Installation

To begin, Nginx recommends installing its software on Linux-based systems; while it is possible to run it on Windows, this is not recommended if you want the best performance. When beginning installation, it is important to note that, as an open source project, Nginx maintains multiple versions in parallel. The two versions available for download are the stable version and the mainline version. Nginx recommends the stable version for most users: it receives critical bug fixes ported back from mainline but lags behind on new development features. The mainline version, on the other hand, is where active development happens, so it receives new features and bug fixes sooner, but consequently it may be less stable.

There are two different ways to install Nginx. The first option is to download the source code directly from Nginx’s online repository and build Nginx on your own (a minimal build sketch follows the list of dependencies below). If you decide to do this, note that Nginx requires three library dependencies to properly build the default configuration, as seen below:

  • PCRE: for the HTTP rewrite module
  • Zlib: for the HTTP Gzip module
  • OpenSSL: for HTTPS protocol support
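With those libraries installed, the build itself follows the familiar configure-and-make pattern; the version below is a placeholder, and the SSL flag is only needed if you want HTTPS support.

    # Hypothetical source build of Nginx; substitute the actual release version
    tar xzf nginx-X.Y.Z.tar.gz
    cd nginx-X.Y.Z
    ./configure --with-http_ssl_module
    make
    sudo make install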

Nginx also publishes its own packages for installation on a wide variety of operating systems. Installing straight from a package is easier and faster, but these packages are often outdated and you won’t be able to add any third-party modules. They are, however, built with the same default modules as the source code.

Modules

Although Nginx’s module system differs greatly from Apache’s, it does rely on its own modules built into the software, which can be enabled or disabled at build time. When building Nginx, you can specify which modules you want, but the downside is that if you leave a module out and later need it, you must rebuild Nginx from source to add it to your server. By default, though, many of the commonly needed modules from the full list are included. Nginx also supports many third-party modules that aren’t packaged with the main distribution and can be found here.

Basic Configuration

How Nginx behaves is specified in a configuration file called nginx.conf. The most basic Nginx configuration file, which handles most tasks with its default values, is shown below.

[Listing: a basic nginx.conf with default values]
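A minimal sketch of such a configuration, with assumed default-style values (the user, worker count, paths and server name are placeholders), looks like this:

    user  nginx nginx;
    worker_processes  4;

    events {
        worker_connections  1024;
    }

    http {
        server {
            listen       80;
            server_name  example.com;

            location / {
                root   /var/www/html;   # assumed document root
                index  index.html;
            }
        }
    }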

To get a feel for the configuration language, note that there are two kinds of directives. The first is the simple directive, a single statement made up of a name, its parameters and a closing semicolon. The other is the block directive, which wraps additional directives inside curly braces instead of ending with a semicolon; a block that contains other directives is called a context.

Breaking down the configuration above, we’ll start with the first two lines. Since they aren’t wrapped in a block, they are part of the main context. The top line defines the user and group that the worker processes will run as, and the line below it defines the number of worker processes, usually set to match the number of CPU cores on your system. Within Nginx there are two kinds of processes: the master and the workers. The master process reads and executes the configuration, binds the necessary ports and spawns the worker processes, while the worker processes handle the actual client HTTP requests. This is the core of how Nginx functions as an HTTP router.

Nginx vs Varnish vs Apache Traffic Server – High Level Comparison

When building a CDN, companies have several options when choosing an open source caching platform to help deliver their content to users with speed, security and reliability. While none of these proxy servers can be declared an outright winner over the rest, each of them has its pros and cons for building a CDN, which we’ll discuss here in a comparative view of their features and functionality.

What makes it so difficult to compare these three proxy servers directly is that the differences in their architectures shape the features each excels at. Apache TS became widely known for its use by Yahoo!, processing over 30,000 requests per second and serving more than 30 billion web objects a day across the Yahoo! network, and since being open sourced in 2009 it has grown into one of the most widely deployed proxy servers, distributing content to millions of users daily. Apache TS uses a hybrid event-driven engine with a multi-threaded processing model to handle incoming requests, which means it scales very well on modern multi-core servers even though it was designed for an older generation of hardware.

Nginx, whose development began in 2002, took more direct aim at concurrency in its architecture, using an asynchronous, event-driven approach to connection handling. Its fast looping mechanism continuously checks for and processes events, so each worker only concerns itself with a connection when a new event is triggered. This means that Nginx does not create a new thread for each request, handling many connections and requests within a single worker process.

Lastly, Varnish, the youngest of the three, was designed in 2006 with an architecture similar to Apache TS, managing a thread pool that uses one thread per connection. But unlike Apache TS or Nginx, it was written from the ground up to function solely as a high-performance HTTP accelerator, leaning on the host operating system’s memory management and threading to cache content at higher capacities. The result is much greater flexibility in handling cached material and choosing what to store.

For a side-by-side view of the features offered by each, the table below summarizes their respective capabilities.

 Feature              Apache TS   Nginx   Varnish
 Reverse Proxy        Yes         Yes     Yes
 Forward Proxy        Yes         No      No
 Transparent Proxy    Yes         No      No
 Cache                Yes         Yes     Yes
 ESI                  Yes         No      Yes
 ICP                  Yes         No      No
 SSL                  Yes         Yes     No
 SPDY                 Yes         Yes     No
 Load Balancer        Yes         Yes     Yes
 DDoS Protection      Yes         Yes     Yes
 Streaming Server     Yes         Yes     Yes
 Support              No          Yes     Yes

It’s clear from the table that Apache TS is the most inclusive in its features, but just because Apache TS offers more capabilities in one package doesn’t make it superior in functionality.

One key consideration administrators should keep in mind is the flexibility of the configuration. Apache TS spreads its behavior across a set of plain-text configuration files (such as records.config and remap.config) and plugins, Varnish expresses its caching and routing logic in its own configuration language, VCL, and Nginx keeps everything in a central declarative configuration. These differences affect how easily you can implement URL rewrites, access restrictions, authorization and authentication, and caching policies.
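To illustrate the VCL side of that comparison, here is a minimal sketch; the backend address and URL pattern are hypothetical.

    vcl 4.0;

    backend default {
        .host = "127.0.0.1";    # assumed origin server
        .port = "8080";
    }

    sub vcl_recv {
        # Hypothetical policy: strip cookies from static asset requests
        # so Varnish can cache them.
        if (req.url ~ "^/static/") {
            unset req.http.Cookie;
        }
    }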

EdgeConneX Opens New Data Center with Comcast as Anchor Tenant

EdgeConneX, a data center operator that specializes in edge data centers, has landed Comcast as the anchor tenant for a new facility opening in Jacksonville, Florida. With its new streaming services, on-demand programming and extensive nationwide network, Comcast is a natural customer for EdgeConneX’s edge services, which allow it to keep content close to consumers without having to build its own dedicated data centers. The deal should also help EdgeConneX bring new clients to the Jacksonville area, which has seen significant growth in the number of tech companies setting up in the region.

Proofpoint Launches Instagram Security Solution

With 85% of top brands on the platform and over 400 million monthly active users, Instagram is one of the most prolific social media platforms available, and Proofpoint has now launched the industry’s first Instagram security solution. Most security solutions are blind to images and generate too many false positives when filtering Instagram posts, but Proofpoint’s SocialPatrol can simultaneously analyze text and images, comparing them against more than 100 algorithmic classifiers to notify the user of discernible threats.

Proofpoint also offers advanced solutions for protecting branded social media accounts across Facebook, Twitter, LinkedIn, Google+ and YouTube, helping to report fraudulent accounts, stop hackers, remove malicious content and prevent unauthorized publishing, which is of the utmost importance given the impact social media has on branded content.

Azure Powering Nissan’s In-Vehicle Interactive System

Today Nissan announced that it has chosen Microsoft’s Azure cloud platform to power its Connect Telematics Systems (CTS) in all Nissan LEAF and Infiniti models across Europe. Through Azure, these Nissan vehicles will offer remote connection to the car from the owner’s mobile phone, with interactive features to enhance safety, adjust climate controls and set charging functions on electric vehicles remotely, even when the car is powered down. As automakers like Nissan continue to implement connected features in their vehicles, companies like Microsoft and Ericsson, which bring cloud-based platforms and Wi-Fi to cars, are opening themselves up to a whole new market of opportunity.