NGINX Prepares for the IoT with UDP Load Balancing; and Other News

NGINX Prepares for the IoT with UDP Load Balancing

The User Datagram Protocol (UDP), which many IoT devices use, is now supported by NGINX’s load-balancing engine. NGINX works as an application delivery platform, and as devices associated with the Internet of Things (IoT) proliferate, companies are being forced to adapt to the protocols these new technologies speak. With the onslaught of data the IoT is expected to produce, NGINX believes load balancing will be mission-critical both for analyzing that data and for transmitting it seamlessly.

The NGINX load balancer is aware of every server it fronts and keeps applications running smoothly, easing the burden of continuously monitoring the status of each individual server, a chore that is especially cumbersome during peak-use intervals and in high-capacity environments. Load balancing also improves scalability, letting companies continue to grow and adapt to a changing technological world. NGINX will continue to support load balancing for TCP and HTTP in addition to UDP.
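
To make the idea concrete, here is a minimal sketch of what a round-robin UDP load balancer does at the packet level. It is written in Python purely for illustration, is not NGINX’s implementation, and uses made-up backend addresses; it also assumes each backend sends exactly one reply per request, as a DNS server would.

```python
import itertools
import socket

# Hypothetical backend servers sharing the UDP traffic.
BACKENDS = [("10.0.0.11", 5300), ("10.0.0.12", 5300)]
LISTEN_ADDR = ("0.0.0.0", 5300)

def serve():
    """Forward each incoming datagram to the next backend in round-robin
    order and relay the single reply back to the client."""
    frontend = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    frontend.bind(LISTEN_ADDR)
    upstream = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    next_backend = itertools.cycle(BACKENDS)

    while True:
        datagram, client = frontend.recvfrom(65535)
        backend = next(next_backend)
        upstream.sendto(datagram, backend)   # hand the request to a backend
        reply, _ = upstream.recvfrom(65535)  # wait for its one response
        frontend.sendto(reply, client)       # relay it back to the device

if __name__ == "__main__":
    serve()
```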

Independently Troubleshooting the Facebook Network with NetNORAD

To support its global user base, Facebook has developed a massive network infrastructure. Detecting and troubleshooting problems within this behemoth of a network, however, is more than mere humans can manage alone. To ease that burden on its team, Facebook developed NetNORAD, which performs end-to-end probing to troubleshoot network problems. A pinger sends a UDP probe packet to a responder at another known point within the network; when the probe reaches the target, the packet is stamped with the time and sent back to the pinger.
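
NetNORAD’s production pinger and responder are C++ services; the sketch below only illustrates that probe-and-echo exchange, written here in Python with a made-up port number.

```python
import socket
import struct
import time

PROBE_PORT = 31338  # hypothetical port for the responder

def responder():
    """Run at a known point in the network: echo each probe back to its
    sender with a receive timestamp appended."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", PROBE_PORT))
    while True:
        probe, pinger_addr = sock.recvfrom(1024)
        stamped = probe + struct.pack("!d", time.time())
        sock.sendto(stamped, pinger_addr)

def ping(target_host):
    """Send one UDP probe and return the round-trip time in milliseconds,
    or None if the probe (or its echo) was lost."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    sent_at = time.time()
    sock.sendto(struct.pack("!d", sent_at), (target_host, PROBE_PORT))
    try:
        sock.recvfrom(1024)
    except socket.timeout:
        return None
    return (time.time() - sent_at) * 1000.0
```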

Using the round-trip time (RTT) of the packet and the location of the machine that received the ping, Facebook can detect problems in the network before they escalate and cause outages. While this proactive detection is helpful to the Facebook team, NetNORAD’s real value lies in its ability to pinpoint the location of a problem, and potentially mitigate it, within seconds and without ever consulting a human.

Once probe-packet analysis determines that there is a problem within the network, the pinger summarizes data from many clusters of machines, both around the detected problem and across the entire network. These data include average packet loss, loss variance, and RTT percentiles over many pings. The results are timestamped and logged in Scribe, then analyzed in Scuba, a data analysis and visualization tool. Using ping results from different datacenters, regions, and global areas, NetNORAD raises a set of alarms, each with its own threshold per data type, evaluated over a 10-minute interval.
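
As a rough illustration of that aggregation step (not Facebook’s code), the following sketch reduces a batch of ping results, grouped by pinger, to the statistics mentioned above; the field names are invented.

```python
import statistics

def percentile(sorted_values, p):
    """Nearest-rank percentile of an already-sorted list."""
    if not sorted_values:
        return None
    index = min(len(sorted_values) - 1, round(p / 100.0 * (len(sorted_values) - 1)))
    return sorted_values[index]

def summarize(per_pinger_rtts):
    """per_pinger_rtts maps a pinger name to its list of RTTs in milliseconds,
    with None marking a lost probe. Returns aggregate loss and RTT statistics."""
    loss_rates = []
    all_rtts = []
    for rtts in per_pinger_rtts.values():
        lost = sum(1 for r in rtts if r is None)
        loss_rates.append(lost / len(rtts))
        all_rtts.extend(r for r in rtts if r is not None)
    all_rtts.sort()
    return {
        "avg_loss": statistics.mean(loss_rates),
        "loss_variance": statistics.pvariance(loss_rates),
        "rtt_p50_ms": percentile(all_rtts, 50),
        "rtt_p90_ms": percentile(all_rtts, 90),
        "rtt_p99_ms": percentile(all_rtts, 99),
    }
```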

The data are compared across each hierarchical level, and root-cause isolation and downstream suppression rules narrow down the precise location of the network error. The fbtracert tool (which uses an ad-hoc protocol) can then analyze packet loss at each hop between datacenters, across regions, and on a global scale, letting NetNORAD troubleshoot network problems fairly reliably. Occasionally, such as when stable statistics for a path are unavailable, fbtracert will be unable to locate the problem and human involvement will be necessary. Facebook is making the pinger and responder components (written in C++) and the fbtracert tool (written in Go) open source.
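
The downstream-suppression idea can be sketched with a hypothetical rule (not NetNORAD’s actual logic): prefer the most local scope that reports loss toward a target, and suppress the wider alarms it already explains.

```python
def isolate(alarming_scopes):
    """alarming_scopes maps 'dc', 'region', and 'global' to True when that
    scope's pingers see loss toward the target. Return the most local scope
    that explains the loss; wider alarms are treated as downstream effects."""
    for scope in ("dc", "region", "global"):
        if alarming_scopes.get(scope):
            return scope
    return None  # no alarm at any level

# Example: loss visible from every scope points at the target's own datacenter.
print(isolate({"dc": True, "region": True, "global": True}))  # -> "dc"
```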

Allo: Machine Learning for Chatting from Google

Google continues to push machine learning into every aspect of our lives. Allo, Google’s technology designed to simplify texting and chatting, learns a user’s conversational style in order to offer useful predictive responses during a chat or text conversation. To do this, Allo uses a recurrent neural network, specifically a long short-term memory (LSTM) network, to decipher the context of the conversation. Within the LSTM, the incoming text is encoded as a vector, which places the message into semantic classes.

The semantic classification then generates a list of possible responses to the message, and “beam search” selects the most probable ones. For example, if a user receives the message “Where are you?”, it is categorized into a semantic class associated with location, and responses such as “I’m at work” or “I’m at home” are suggested to the user.
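
Beam search itself is a standard decoding technique, so a generic sketch can make the idea concrete; the toy probability table below is invented and merely stands in for the LSTM.

```python
import heapq
import math

def beam_search(next_token_probs, beam_width=3, max_len=5, end_token="<eos>"):
    """Keep only the beam_width most probable partial responses at each step;
    return completed responses, best first."""
    beam = [(0.0, [])]  # (cumulative negative log-probability, tokens so far)
    completed = []
    for _ in range(max_len):
        candidates = []
        for score, prefix in beam:
            for token, prob in next_token_probs(tuple(prefix)).items():
                candidates.append((score - math.log(prob), prefix + [token]))
        beam = heapq.nsmallest(beam_width, candidates)
        completed += [(s, seq) for s, seq in beam if seq[-1] == end_token]
        beam = [(s, seq) for s, seq in beam if seq[-1] != end_token]
        if not beam:
            break
    return [" ".join(seq[:-1]) for _, seq in sorted(completed)]

def toy_model(prefix):
    """Stand-in for the LSTM: given the tokens generated so far, return
    candidate next tokens with (purely illustrative) probabilities."""
    table = {
        (): {"I'm": 0.9, "Not": 0.1},
        ("I'm",): {"at": 1.0},
        ("I'm", "at"): {"work": 0.6, "home": 0.4},
        ("I'm", "at", "work"): {"<eos>": 1.0},
        ("I'm", "at", "home"): {"<eos>": 1.0},
        ("Not",): {"sure": 1.0},
        ("Not", "sure"): {"<eos>": 1.0},
    }
    return table[prefix]

print(beam_search(toy_model))  # -> ["I'm at work", "I'm at home", "Not sure"]
```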

The team of developers at Google replaced the typical softmax layer with a hierarchical one to speed up the computation the LSTM performs when generating responses. The hierarchical softmax layer uses decision trees rather than isolated words to interpret messages and create responses, which drastically accelerated response prediction and brought prediction latency down to approximately 200ms.
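
Allo’s layer is described as tree-structured; the sketch below shows only the simplest two-level version of the idea, with invented inputs, to make the speedup intuition concrete: instead of one softmax over the whole vocabulary, the model scores a small set of word classes and then only the words inside the chosen class.

```python
import math

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    peak = max(scores)
    exps = [math.exp(s - peak) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def two_level_word_prob(class_scores, word_scores_by_class, word_to_class, word_id):
    """Class-based (two-level) softmax: P(word) = P(class) * P(word | class).
    Only one small class distribution and one small within-class distribution
    are evaluated, instead of a softmax over the entire vocabulary."""
    class_probs = softmax(class_scores)
    class_id, index_in_class = word_to_class[word_id]
    within_class = softmax(word_scores_by_class[class_id])
    return class_probs[class_id] * within_class[index_in_class]
```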

Biased beams and user choice were also used to control message length, and a limited-memory BFGS (L-BFGS)-based technique helps Allo learn a user’s choices and “style” very quickly. Allo is available not only in English but in many other languages as well; Google’s machine translation software and its semi-supervised learning have made this language diversity possible. Allo is powered by “Google Assistant.”
