Envoy Proxy Features and Basics

Introduction to Envoy

Envoy is an open source L7 edge and service proxy. It was developed as a high-performance C++ distributed proxy intended for individual services and applications, as well as for large microservice service mesh architectures. The Envoy codebase was built to be modular and easy to test, while aiming for “the greatest possible absolute performance”. Envoy was open sourced in September 2016.

It was built by software “plumber” Matt Klein and the engineering team at Lyft, designed for use as a “universal data plane” for service mesh architectures or as a standalone proxy. The ride-sharing service has migrated its monolithic applications to a service-oriented architecture (SOA) via Envoy.

“When I joined Lyft, people were actively fearful of making service calls”, Klein said when introducing the proxy server and communications bus at the Microservices Practitioner Summit in San Francisco in January 2017. “They feared that service calls would fail or incur high latency, both of which could slow an application’s performance. Thus was born Envoy.

The proxy architecture provides two key pieces missing in most stacks moving from monolith legacy systems to SOA — robust observability and easy debugging. Having these tools in place allows developers to focus on business logic.”

As we discussed in our recent post on service meshes, AWS App Mesh and Microsoft Azure Stack, Envoy is the most popular data plane out there. It’s used by AWS App Mesh, as well as many others, including Airbnb, Booking.com, IBM, Medium, Netflix and Uber.

The Envoy project was specifically built out of the belief that “The network should be transparent to applications. When network and application problems do occur, it should be easy to determine the source of the problem”.

This is not a straightforward or easy goal, but Envoy aims to help solve it by providing many high level features, including the following:

Threading Model

Envoy uses three different kinds of threads:

Main: The main thread coordinates all the most critical process functionality, which doesn’t involve a significant amount of CPU to accomplish. The majority of management code can then be written as if it were single threaded. The main thread owns server startup and shutdown, xDS API handling (as well as DNS, health checking, and general cluster management), runtime, stat flushing, admin, and general process management. Everything that exists on this thread is asynchronous and “non-blocking.”

Worker: By default, Envoy spawns one worker thread for every hardware thread in the system. Each of these runs a “non-blocking” event loop, which listens on every listener (there is no listener sharding right now), accepts new connections, instantiates a filter stack for the connection, and processes all IO for the connection’s lifetime. This lets the majority of connection handling code be written as if it were single threaded.

File flusher: Each file that Envoy writes has an independent blocking flush thread. When worker threads need to write to a file, the data is moved into an in-memory buffer, which is eventually flushed via the file flush thread. This is one area of the code in which all workers can block on the same lock while attempting to fill the memory buffer.

Network Filters

L3/4 Filter Architecture

Envoy is an L3/4 network proxy. It has a pluggable filter chain mechanism, which lets filters be written to perform various TCP proxy tasks and be inserted into the main server. Filters already exist that support a range of tasks, including a raw TCP proxy, HTTP proxy, TLS client certificate authentication, etc.
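
As a rough illustration, a listener can use the built-in TCP proxy network filter instead of the HTTP connection manager. The sketch below is illustrative only and assumes an upstream cluster named “service_a”, like the one in the configuration later in this post:

listeners:
  - name: "tcp_listener"
    address:
      socket_address:
        address: "0.0.0.0"
        port_value: 9000
    filter_chains:
      - filters:
          # L3/4 filter: forward raw TCP to an upstream cluster
          - name: "envoy.tcp_proxy"
            config:
              stat_prefix: "tcp_ingress"
              cluster: "service_a"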

HTTP L7 Filter Architecture

As HTTP is such an important part of modern application architectures, Envoy supports an additional HTTP L7 filter layer. HTTP filters can be plugged into the HTTP connection management subsystem to perform various tasks, including buffering, rate limiting, routing/forwarding, sniffing Amazon’s DynamoDB, etc.
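
These L7 filters are chained in the http_filters list of the HTTP connection manager. As a sketch, the snippet below places the DynamoDB sniffing filter ahead of the router filter, which always comes last:

http_filters:
  - name: "envoy.http_dynamo_filter"   # observe DynamoDB traffic
  - name: "envoy.router"               # final filter: route/forward the request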

Circuit Breaking

Circuit breaking is a key component of distributed systems. Envoy enforces circuit breaking limits at the network level rather than having to configure and code each application separately. Envoy supports several kinds of fully distributed circuit breaking, including: (i) cluster maximum connections (ii) cluster maximum pending requests (iii) cluster maximum requests (iv) cluster maximum active retries. Each circuit breaking limit can be configured and tracked on a per priority and per upstream cluster basis. This lets different components of the distributed system be tuned separately and have different set limits. The circuit breakers’ live state can be observed via statistics.
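
As a rough sketch (the threshold values are arbitrary), these limits are configured per upstream cluster, and optionally per priority, under circuit_breakers:

clusters:
  - name: "service_a"
    connect_timeout: "0.25s"
    type: "strict_dns"
    lb_policy: "ROUND_ROBIN"
    circuit_breakers:
      thresholds:
        - priority: DEFAULT
          max_connections: 1024        # cluster maximum connections
          max_pending_requests: 1024   # cluster maximum pending requests
          max_requests: 1024           # cluster maximum requests
          max_retries: 3               # cluster maximum active retries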

Load Balancer

A load balancer sits between the client and the backend and performs various critical tasks at a high level, including:

Service Discovery

The purpose of service discovery is to know, at all times, which hosts a service can talk to. Envoy approaches service discovery in an eventually consistent way: even if the discovery system is temporarily down or returning stale data, the mesh should keep operating and scaling as normal. Envoy does this by creating an overlay routing mesh that uses a mixture of active health checking and service discovery data, in which health checking data has primacy. The initial mesh is formed by having each host check into the discovery service once every minute. A host is deleted from the mesh only if its health checks fail and it is no longer present in the service discovery database.
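
Discovery data can also be pushed to Envoy dynamically through its endpoint discovery service (EDS) API. The following is a hedged sketch, assuming a management server reachable through a separately defined cluster named “xds_cluster” (all names are illustrative):

clusters:
  - name: "service_a"
    connect_timeout: "0.25s"
    type: "EDS"                      # endpoints come from the discovery service
    lb_policy: "ROUND_ROBIN"
    eds_cluster_config:
      eds_config:
        api_config_source:
          api_type: GRPC
          grpc_services:
            - envoy_grpc:
                cluster_name: "xds_cluster"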

Health Checking

This involves working out what backends are currently healthy and available to accept requests and serve traffic. Health checking tends to fall into two categories:

  • Active – The load balancer issues a ping (for example, a request to a health check endpoint) at regular intervals to the backend in order to gauge its health.
  • Passive – The load balancer infers health status from the primary data flow; for instance, an L4 load balancer could mark a backend as unhealthy after more than three connection errors in a row, while an L7 load balancer could do so after more than three consecutive HTTP 503 response codes. A configuration sketch covering both styles follows this list.
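
Both styles map onto the cluster configuration: active checks are defined under health_checks, while passive checks are handled by outlier detection. A minimal sketch, with illustrative values and a hypothetical /healthz endpoint:

clusters:
  - name: "service_a"
    connect_timeout: "0.25s"
    type: "strict_dns"
    lb_policy: "ROUND_ROBIN"
    health_checks:                   # active health checking
      - timeout: "1s"
        interval: "5s"
        unhealthy_threshold: 3
        healthy_threshold: 2
        http_health_check:
          path: "/healthz"           # hypothetical health endpoint
    outlier_detection:               # passive health checking
      consecutive_5xx: 3             # eject after 3 consecutive 5xx responses
      base_ejection_time: "30s"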

Load Balancing

Load balancing is usually performed using a per-service load balancer. Because Envoy aims to run as a service mesh as well as functioning as an edge proxy, it implements service-to-service load balancing, along with graceful degradation when upstream or downstream failures occur, rather than delegating this work to applications. Load balancing algorithms continue to be an active area of research; they vary from simplistic ones like round robin and random selection to more complicated algorithms that consider variable latency and backend load.
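
In Envoy, the algorithm is chosen per cluster via lb_policy, so swapping round robin for a load-aware policy is a one-line change. A sketch, with other fields omitted:

clusters:
  - name: "service_a"
    # alternatives include ROUND_ROBIN, RANDOM and RING_HASH
    lb_policy: "LEAST_REQUEST"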

Advanced Load Balancing

Because Envoy is a self-contained proxy rather than a library, it can implement advanced load balancing techniques in a single place and make them accessible to any application. Envoy currently includes support for automatic retries, global rate limiting through an external rate limiting service, request shadowing, circuit breaking and outlier detection. Future support is in the works for request racing.

HTTP Routing

Envoy supports a routing subsystem (when operating in HTTP mode) that can route and redirect requests based on path, authority, content type, runtime values, etc. This functionality is especially useful when Envoy is being used as a front/edge proxy; however, it can also be leveraged when building a service-to-service mesh.
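
For example, a virtual host can match on path prefixes and send requests to different upstream clusters. The sketch below assumes clusters named “service_a” and “service_b”:

virtual_hosts:
  - name: "edge"
    domains:
      - "example.com"
    routes:
      - match:
          prefix: "/api/"      # API traffic goes to service_b
        route:
          cluster: "service_b"
      - match:
          prefix: "/"          # everything else goes to service_a
        route:
          cluster: "service_a"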

Observability

Envoy includes statistics support for all subsystems. statsd (plus compatible providers) is the supported statistics sink, although it should be possible to plug in different ones. In addition, statistics are viewable via the administration port. Envoy supports distributed tracing via third party providers.
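
A minimal sketch of wiring up the statsd sink in the bootstrap configuration (the address and port are illustrative):

stats_sinks:
  - name: "envoy.statsd"
    config:
      address:
        socket_address:
          address: "127.0.0.1"
          port_value: 8125       # local statsd agent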

The Front Envoy Configuration

Here is a Front Envoy configuration (thanks to Hacker Noon):

admin:
  access_log_path: "/tmp/admin_access.log"
  address:
    socket_address:
      address: "127.0.0.1"
      port_value: 9901
static_resources:
  listeners:
    - name: "http_listener"
      address:
        socket_address:
          address: "0.0.0.0"
          port_value: 80
      filter_chains:
        - filters:
            - name: "envoy.http_connection_manager"
              config:
                stat_prefix: "ingress"
                route_config:
                  name: "local_route"
                  virtual_hosts:
                    - name: "http-route"
                      domains:
                        - "*"
                      routes:
                        - match:
                            prefix: "/"
                          route:
                            cluster: "service_a"
                http_filters:
                  - name: "envoy.router"
  clusters:
    - name: "service_a"
      connect_timeout: "0.25s"
      type: "strict_dns"
      lb_policy: "ROUND_ROBIN"
      hosts:
        - socket_address:
            address: "service_a_envoy"
            port_value: 8786

A minimal iteration of the Envoy configuration would largely consist of:

  • Listeners – One or more listeners run on a single Envoy instance. The listeners block above describes the address and port of the current listener. Each listener can also have one or more network filters, which is what enables most tasks such as routing, traffic migration, TLS termination, etc.
  • Routes – The route_config block determines the route specification for each filter: which domains should accept requests, and a route matcher that matches each request and sends it to the correct cluster.
  • Clusters – Envoy routes traffic to clusters, which are essentially the specifications for upstream services; the clusters block above defines “Service A” (the upstream to which “Front Envoy” will talk). Although the example above uses simple round robin, Envoy supports multiple load balancing algorithms to route traffic.
  • Endpoints – “hosts” are used to detail the instances of Service A to which we want to route traffic. Envoy caches all the hosts of “Service A” and continues to refresh the host list every 5 seconds. Envoy supports passive and active health checking.

Other features include:

  • Connection pooling
  • Outlier detection
  • Rate limiting
  • Tracing
  • TCP proxying and logging

Conclusion

Envoy is primarily a communication bus for large service-oriented architectures. By handling concerns such as rate limiting, load balancing, service discovery, circuit breaking and active/passive health checking, it also generates a large volume of observability data in the form of statistics, logging and tracing. The mesh works wherever the code is running, whether on a developer’s laptop, in staging or in production.

Envoy was initially built as a single piece of software that could provide fully distributed tracing, logging and statistics. Apparently, when Lyft developers asked Klein why they needed Envoy to do something relatively straightforward like retry, Klein highlighted that retries are one of the easiest ways to take down your system through exponential overload and pointed out that “Nothing is easy in a complex system”.

Other similar systems include nginx, haproxy, AWS ELB, SmartStack, Finagle, linkerd, gRPC, proxygen, wangle and nghttp2. Envoy provides a useful comparison guide between itself and these alternative architectures, and points out its belief that “Though in any particular area (edge proxy, software load balancer, service message passing layer), Envoy may not be as feature rich as some of the solutions below, in aggregate no other solution supplies the same set of overall features into a single self contained and high performance package.”
