Semiweekly Tech Updates

Facebook Open Sources their Big Sur AI hardware

Deep learning, once a small academic niche within computational science, is garnering huge interest from companies such as Google, Facebook, and IBM. It trains machines to recognize patterns in data, then classify and categorize them largely on their own, without requiring much engineering labor. Facebook has recently announced Big Sur, open-source hardware built to run the latest AI algorithms.

Big Sur is built around the Nvidia Tesla M40 GPU, which packs 3,072 cores and 12 GB of memory; with up to eight Tesla M40s per chassis, it provides massive deep-learning processing capability. Open-sourcing its AI hardware signals that deep learning has transitioned from the Facebook Artificial Intelligence Research (FAIR) lab into Facebook’s mainstream production systems, where it runs apps created by its product development teams. This is most likely a competitive response to Google’s open-sourcing of TensorFlow back in November, which likewise offers an open-source AI engine.

BBC Digital Media Distribution Improves Caching Throughput by 4x

BBC Digital Media Distribution has improved the throughput of its caching infrastructure in order to deliver more content while reducing downtime. The poor performance had three causes: Varnish’s use of memory-mapped files, which limited a caching server to 4 Gbps; cached content exceeding available RAM; and lock contention within the Linux kernel when evicting large numbers of pages, a problem exacerbated by the memory-mapped files. As a result, the team replaced Varnish with nginx, a web server focused on high concurrency, performance, and low memory usage, which raised throughput from 4 Gbps to 20 Gbps, a 4x increase in performance.
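A caching reverse-proxy setup of the kind described above can be sketched in nginx configuration. This is a minimal illustration of nginx’s proxy cache; the paths, sizes, durations, and upstream host are invented for the example and are not the BBC’s actual configuration:

```nginx
# Hypothetical cache location and sizing -- not the BBC's real settings.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=media:100m
                 max_size=500g inactive=12h use_temp_path=off;

server {
    listen 80;

    location / {
        proxy_pass http://origin.example.com;   # hypothetical origin
        proxy_cache media;
        proxy_cache_valid 200 206 1h;           # cache full and partial responses
        proxy_cache_use_stale error timeout updating;
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```

The `X-Cache-Status` header makes it easy to verify from the client side whether a response was a cache HIT or MISS.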

Netflix’s Per-Title Encoding Ensures Adaptability for All

As one of the largest platforms for on-demand streaming video, Netflix must find unique strategies for offering its service to everyone. When streaming video, quality has to be adjusted to the user’s available bandwidth. Traditional broadcasting is limited in how much bandwidth is available and how best to allocate it. In two-pass encoding, the goal is to achieve the best possible quality at a predetermined resolution and average bitrate: a first pass analyzes the content, resulting in a “budget” of bits per frame, which the second pass then follows to actually encode the content.
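The per-frame “budget” above is simple arithmetic: the target average bitrate divided by the frame rate. A quick sketch, where the 5 Mbps target and 24 fps are illustrative figures, not Netflix’s actual numbers:

```python
def bits_per_frame(avg_bitrate_bps: float, fps: float) -> float:
    """Average bit budget available to encode a single frame."""
    return avg_bitrate_bps / fps

# e.g. a 5 Mbps target at 24 frames per second:
budget = bits_per_frame(5_000_000, 24)
print(f"{budget / 1000:.0f} kbit per frame")  # roughly 208 kbit per frame
```

Individual frames can of course go over or under this average (an I-frame costs far more than a P-frame); the second pass only has to respect the budget on average.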

Netflix takes this one step further by recognizing that quality at a given resolution varies with content, and thus a higher bitrate at a lower resolution that is scaled up can look better than a lower bitrate at native playback resolution. Using complex algorithms, Netflix now customizes the target average bitrate for each title individually, building a unique bitrate ladder per title.

This applies to all kinds of content, from simple animations that require little bandwidth to HD action movies that require a lot. Since Netflix is an internet streaming service, it does not have to dictate encoding quality for everything upfront or work within pre-allocated channel constraints; it can decide on a per-title basis depending on the source video. Thus, when a Netflix user watches a video, they get the best quality level for the bandwidth they have available.
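A per-title bitrate ladder then amounts to a small table of (bitrate, resolution) pairs chosen for that title, from which the player picks the highest rung its measured bandwidth can sustain. A minimal sketch, where the ladder values and title names are invented for illustration and are not Netflix’s actual encodes:

```python
# Hypothetical ladders: (bitrate in kbps, vertical resolution in pixels).
LADDERS = {
    "simple_animation": [(235, 320), (640, 720), (1750, 1080)],
    "action_movie":     [(560, 320), (2350, 720), (5800, 1080)],
}

def pick_rung(title_kind: str, bandwidth_kbps: int) -> tuple[int, int]:
    """Highest ladder rung whose bitrate fits the available bandwidth."""
    ladder = LADDERS[title_kind]
    affordable = [rung for rung in ladder if rung[0] <= bandwidth_kbps]
    return affordable[-1] if affordable else ladder[0]  # fall back to lowest

# At 3 Mbps, the animation already streams at 1080p,
# while the action movie tops out at 720p.
print(pick_rung("simple_animation", 3000))  # (1750, 1080)
print(pick_rung("action_movie", 3000))      # (2350, 720)
```

This is the intuition behind per-title encoding: easy content reaches its best resolution at a much lower bitrate than complex content, so a single fixed ladder wastes bits on one and starves the other.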

Google Cloud Shell Free Throughout 2016

Toward the end of 2015, Google made a large number of Cloud Platform announcements. It is now offering Google Cloud Shell, a Google Cloud Platform feature that lets users manage their infrastructure and applications from the command line in any browser, for free throughout the new year.

This is great news for developers who build projects on the Google Cloud Platform. Google already offers a set of command-line interface (CLI) tools, but Google Cloud Shell is a pre-configured virtual server: a user launches an instance and accesses it remotely via Secure Shell (SSH), without having to install the CLI tools on their own machine.
