Efficiency of HTTPS Long Polling and MQTT with SSL on Android

According to Stephen Nicholas, a self-described "Super Geek" and IBMer, MQTT, a messaging protocol designed to give mobile users low-latency, low-power messaging over fragile networks, was recently tested on an Android device against HTTPS, a secure request-response protocol, to compare their efficiency. The MQTT connection was secured with SSL, and HTTPS was exercised using Comet-style long polling. On a device with a 1,400 mAh, 3.7 V lithium-ion battery, HTTPS was cheaper for establishing a connection to the server, using 0.02972% of the battery over 3G and 0.00228% over WiFi, compared with MQTT's 0.04563% and 0.00276%.

To maintain that connection over both WiFi and 3G, however, MQTT outperformed HTTPS, sometimes by an order of magnitude. For example, with a keep-alive of 60 seconds, HTTPS used 1.11553% of the battery per hour over 3G versus MQTT's 0.72465%, and 0.15839% per hour over WiFi versus MQTT's 0.01055%. After roughly 5.5 minutes of use, MQTT had paid back the extra energy it spent establishing the connection.
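
As a rough illustration of the kind of client involved in such a test, the sketch below opens an SSL-secured MQTT connection with a 60-second keep-alive using the paho-mqtt Python library (1.x-style API); the broker hostname is a placeholder, not the server used in the original experiment.

    import ssl
    import paho.mqtt.client as mqtt

    # Hypothetical broker address; not the one used in the original test.
    BROKER = "broker.example.com"

    client = mqtt.Client(client_id="battery-test")
    client.tls_set(cert_reqs=ssl.CERT_REQUIRED)   # enable SSL/TLS on the MQTT connection

    # keepalive=60 matches the 60-second keep-alive interval discussed above:
    # the client sends a small PINGREQ packet only if the connection is otherwise idle.
    client.connect(BROKER, port=8883, keepalive=60)
    client.loop_start()                           # background thread handles pings and I/O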

MQTT also used less battery energy to receive a message (0.27239% on 3G and 0.00411% on WiFi) than HTTPS (0.34645% and 0.04817%), reflecting the difference in keep-alive costs between the two approaches. When the tester sent as many messages as quickly as possible, MQTT was able to accept 160,278 and 263,314 messages per hour over 3G and WiFi respectively, while HTTPS could accept only 1,708 and 3,628.

Likewise, MQTT received all 1,024 messages sent, while HTTPS received only 240 over 3G and 524 over WiFi. Battery usage per message was also drastically lower with MQTT. Sending messages produced similar results, both in battery efficiency per message and in the number of messages delivered. Overall, the MQTT connection consumed less power and received messages more reliably.
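
The receive-side difference comes from MQTT's push model: a subscriber registers a callback and lets the broker deliver messages, whereas HTTPS long polling must keep reissuing requests. A minimal subscriber sketch, again with placeholder broker and topic names, might look like this:

    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        # Called once per pushed message; no extra request/response round trip is needed.
        print(msg.topic, msg.payload)

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("broker.example.com", 1883, keepalive=60)
    client.subscribe("test/messages", qos=1)   # QoS 1: broker redelivers until acknowledged
    client.loop_forever()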

Google’s Quantum Annealing Research

Google, which has had a team studying quantum annealing for several years, has programmed its D-Wave 2X quantum annealer (operated jointly with NASA) to run proof-of-principle optimization problems against simulated annealing and the Quantum Monte Carlo algorithm. For problems involving about 1,000 binary variables, the D-Wave machine was up to 10^8 times faster than simulated annealing running on a single core, and at times as much as 10^8 times faster than Quantum Monte Carlo running on a conventional processor. Google cautions, however, that quantum annealing is not yet a practical technology.
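
For context, simulated annealing, the classical baseline in Google's comparison, flips individual binary variables and accepts worse solutions with a probability that shrinks as a "temperature" parameter is lowered. The toy sketch below applies it to a small random QUBO-style objective over binary variables; it only shows the shape of the algorithm, not the benchmark Google actually ran.

    import math
    import random

    def simulated_annealing(Q, steps=20000, t_start=5.0, t_end=0.01):
        """Minimize x^T Q x over binary vectors x using simulated annealing."""
        n = len(Q)
        x = [random.randint(0, 1) for _ in range(n)]

        def energy(v):
            return sum(Q[i][j] * v[i] * v[j] for i in range(n) for j in range(n))

        e = energy(x)
        for step in range(steps):
            # Geometric cooling schedule from t_start down to t_end.
            t = t_start * (t_end / t_start) ** (step / steps)
            i = random.randrange(n)
            x[i] ^= 1                      # propose flipping one binary variable
            e_new = energy(x)
            if e_new <= e or random.random() < math.exp((e - e_new) / t):
                e = e_new                  # accept the move
            else:
                x[i] ^= 1                  # reject: undo the flip
        return x, e

    # Tiny random problem; the Google benchmark used roughly 1,000 binary variables.
    n = 20
    Q = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    print(simulated_annealing(Q))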

Improvements such as increasing the coherence, density, and control precision of the connections between qubits, and supporting higher-order optimization (making the rugged energy landscapes on which quantum annealing works best more typical of the problems it is given), would be necessary before the technology is practical. Google also notes that certain classical algorithms can still solve these proof-of-principle problems faster than the current generation of D-Wave processors, though they are expected to become ineffective as the hardware improves. Finally, lean stochastic local search techniques such as simulated annealing remain superior for hard problems with little structure to exploit and are the primary competition for quantum annealing hardware.

Multiple HTTP Endpoints as a Health Proxy

When TCP, HTTP, and HTTPS health checks cannot directly verify the health of an endpoint, an application can determine its own health and expose an "HTTP endpoint as a health proxy to Route 53 Health Checks to perform DNS failover." This is especially useful when there are many primary/standby database pairs. The failure threshold can be set to 1 with a 10-second check interval for fast detection, or adjusted to fit other criteria. The proxy website returns an HTTP 200 status if the database is healthy and a 500 status if it is unhealthy. However, if the proxy itself fails, all of its health checks will time out and fail; to avoid this, multiple proxy endpoints should be used for the same databases.
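
A minimal version of such a health proxy, sketched below with Python's standard library, answers Route 53's check with HTTP 200 when its database is reachable and 500 when it is not; the check_database function and port are placeholders for whatever health logic the application actually uses.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    def check_database():
        # Placeholder: replace with a real connectivity or replication-lag check
        # against the primary or standby database this proxy fronts.
        return True

    class HealthHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            healthy = check_database()
            self.send_response(200 if healthy else 500)  # 200 = healthy, 500 = unhealthy
            self.end_headers()
            self.wfile.write(b"OK" if healthy else b"FAIL")

    # Route 53 health checks would be pointed at this host and port.
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()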

To ensure that the primary database is preferred, a weighted round robin (WRR) resource record set can be used, giving primary database records a weight of 1 and standbys a weight of 0. If all of the weighted primary records are unhealthy, a random standby is selected; if every record is unhealthy, Route 53 answers with a primary record anyway. Although global connectivity and the maintenance of additional proxy endpoints are considerations, when there are more database pairs than endpoints this can be an effective way to check the health of many resources, such as file servers and mail servers.
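
One way to create such a weighted primary/standby pair is sketched below with boto3; the hosted zone ID, record name, IP addresses, and health check IDs are all hypothetical.

    import boto3

    route53 = boto3.client("route53")

    def weighted_record(identifier, ip, weight, health_check_id):
        # One member of a weighted round robin (WRR) record set.
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "db.example.com",
                "Type": "A",
                "SetIdentifier": identifier,
                "Weight": weight,                # primary = 1, standby = 0
                "TTL": 60,
                "ResourceRecords": [{"Value": ip}],
                "HealthCheckId": health_check_id,
            },
        }

    route53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",
        ChangeBatch={"Changes": [
            weighted_record("primary", "198.51.100.10", 1, "hc-primary-id"),
            weighted_record("standby", "198.51.100.20", 0, "hc-standby-id"),
        ]},
    )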

Google’s Cloud SQL Second Generation Available in Beta

Google Cloud SQL is a fully managed MySQL database service that applies patches and updates, manages backups, provides automatic failover for high availability, and configures replication, while exposing vanilla MySQL so it can be accessed from just about any application. Google has recently released the second generation of Cloud SQL. The new Cloud SQL is up to 7x faster than the previous version and scales to 10 TB of data, 15,000 IOPS, and 104 GB of RAM per instance.

The new Cloud SQL is also more flexible, allowing connections from Compute Engine, Managed VMs, Container Engine, and any workstation or client with Internet access. It also offers simple database scaling with minimal downtime, and the ability to stop and start instances through the command-line interface (CLI) and APIs to keep costs under control. Google offers a range of partners to assist with Cloud SQL setup and is currently offering a no-strings-attached trial with beta pricing.
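
Because Cloud SQL speaks ordinary MySQL, connecting from a workstation is no different from connecting to any other MySQL server. The sketch below uses the mysql-connector-python driver with a placeholder instance IP and credentials; in practice an SSL-enabled connection or the Cloud SQL Proxy would be preferable.

    import mysql.connector

    # Placeholder instance address and credentials.
    conn = mysql.connector.connect(
        host="203.0.113.5",      # public IP of the Cloud SQL instance
        user="app_user",
        password="app_password",
        database="app_db",
    )

    cur = conn.cursor()
    cur.execute("SELECT VERSION()")   # vanilla MySQL: standard queries work unchanged
    print(cur.fetchone())
    conn.close()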

New PowerShell Data Attacks Discovered

FireEye recently announced that it has observed several attacks in the wild that use Microsoft's PowerShell for nearly every step of the attack. The first instance was delivered through a legitimate-looking Russian domain; when the downloaded EXE file was executed, it launched a PowerShell command that fetched a further PS1 (PowerShell) script. That script collected files containing victims' user credentials, compressed them into a ZIP archive, and sent them to the malware author over SMTP, using an email ID and password belonging to the author. The malware used an RC4 implementation in PowerShell script to encrypt the stolen data.
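
RC4 itself is a simple, well-documented stream cipher; the generic Python sketch below (unrelated to the attacker's actual script) shows the key-scheduling and keystream steps such a PowerShell implementation would reproduce.

    def rc4(key: bytes, data: bytes) -> bytes:
        # Key-scheduling algorithm (KSA): permute S using the key.
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]

        # Pseudo-random generation algorithm (PRGA): XOR data with the keystream.
        out = bytearray()
        i = j = 0
        for byte in data:
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(byte ^ S[(S[i] + S[j]) % 256])
        return bytes(out)

    # RC4 is symmetric: the same call encrypts and decrypts.
    ciphertext = rc4(b"example key", b"example data")
    print(rc4(b"example key", ciphertext))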

In another instance, a Russian-language RTF file was detected in Russia and Austria. Delivered as a drive-by download, the file executes a payload that kicks off a series of PowerShell commands. The malware runs on a virtual machine, steals a wide range of data while prioritizing anything containing credentials, closes the data-stealing cycle with Dofoil, and prepares a new payload with further malicious commands. Encoding (using PowerShell's -enc switch to hide commands that first call a system sleep) and quote obfuscation are then used while the data is exfiltrated. FireEye says its appliances prevent these methods of attack.

Netflix Now Using Dynomite with Redis

Netflix introduced the Dynomite proxy layer into its systems about a year ago and has recently added Redis support. Dynomite provides sharding and replication, scales data to meet traffic demands, supports active-active regional resiliency, provides high availability, and turns a non-distributed datastore into a fully distributed system with multi-region replication. Dynomite also offers tunable consistency, where "read and write consistency can be configured to manage availability versus data accuracy," using DC_ONE (based on Availability Zones) or DC_QUORUM (based on a quorum of nodes) configurations.
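
Because Dynomite is wire-compatible with the Redis protocol, an application can talk to a local Dynomite node with an ordinary Redis client. The sketch below uses redis-py and assumes Dynomite's commonly used client port (8102); both the client library and the port are assumptions rather than details from the Netflix post.

    import redis

    # Connect to the co-located Dynomite proxy instead of Redis directly;
    # Dynomite handles sharding and multi-region replication behind this socket.
    client = redis.Redis(host="127.0.0.1", port=8102)

    client.set("user:42:session", "abc123")
    print(client.get("user:42:session"))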

To test the system with Redis, a Dynomite cluster of three nodes, one in each Availability Zone, was benchmarked three times for three hours each under both DC_ONE and DC_QUORUM. The DC_ONE results showed that Dynomite can scale throughput horizontally by increasing the number of nodes per region. Dynomite added little latency as it scaled, contributing only about 20% of the average latency (the remainder came from the network and client processing). The 95th percentile latency remained at 0.4 ms, with the Dynomite node itself responsible for less than 10% of it.

DC_QUORUM also scaled well as cluster nodes were added, though each node achieved 18K OPS compared with 40K OPS per node in DC_ONE. Average and median latency stayed below 2.5 ms, slightly higher than in the DC_ONE experiment, and the 95th percentile latency remained below 2 ms at the Dynomite level and below 3 ms on the client side.

When Redis pipelining was used (client-side batching that increases overall throughput at the cost of latency for some operations), throughput increased by up to 50% for a small Dynomite cluster; as the cluster scaled horizontally, however, the benefit largely disappeared. Overall cluster performance can be improved by running Dynomite nodes with better network capacity and by using fewer Availability Zones. Redis pipelining can also be used to batch requests for better performance, and DC_QUORUM yields stronger read and write guarantees at the cost of higher latencies and lower throughput.
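
Client-side batching with a Redis pipeline looks roughly like this in redis-py: commands are buffered locally and flushed in a single round trip, which raises throughput but delays individual responses until execute() returns. The host and port are the same placeholder values as above.

    import redis

    client = redis.Redis(host="127.0.0.1", port=8102)

    # transaction=False sends a plain pipeline (no MULTI/EXEC wrapper),
    # which is what pure client-side batching needs.
    pipe = client.pipeline(transaction=False)
    for i in range(100):
        pipe.set(f"key:{i}", i)   # buffered locally, not yet sent
    results = pipe.execute()      # one network round trip for all 100 commands
    print(len(results))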

The 30 Second Latency Spike

A customer reported in November 2015 that it was seeing huge latency spikes when CloudFlare's CDN responded to HTTP requests. Though the network was the initial suspect, ICMP pings to two of the affected IPs showed a stable, low-latency network. CloudFlare then tested a Linux server behind the router and reproduced the large latency spikes, showing that the problem lay between the router and the server inside their datacenter. Running tcpdump on the affected server, they isolated a specific ICMP packet whose handling was delayed, pointing to trouble inside the kernel.

Using SystemTap and a flame graph, they located the offending function (net_rx_action) and confirmed it by measuring that function's latency. Further flame graphs pointed to one of its descendants, tcp_collapse, as the cause of the spike: tcp_collapse was performing garbage collection, merging adjacent TCP packets into one larger sk_buff, and thereby increasing latency. The fix was to shrink the TCP socket receive buffer by tuning the rmem sysctl. A 2 MiB rmem limit eliminated the latency problem but was impractical for high-throughput, high-latency connections, so a 4 MiB maximum rmem was chosen to resolve the latency spike issue.
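
On a Linux host, the receive-buffer limit in question is exposed as the net.ipv4.tcp_rmem sysctl (min, default, max). A small sketch for inspecting it and applying a 4 MiB maximum, run as root, might look like the following; this is a generic tuning illustration, not CloudFlare's exact change.

    TCP_RMEM = "/proc/sys/net/ipv4/tcp_rmem"

    def read_tcp_rmem():
        # Returns (min, default, max) receive buffer sizes in bytes.
        with open(TCP_RMEM) as f:
            return tuple(int(v) for v in f.read().split())

    def set_tcp_rmem(minimum, default, maximum):
        # Requires root; equivalent to: sysctl -w net.ipv4.tcp_rmem="min def max"
        with open(TCP_RMEM, "w") as f:
            f.write(f"{minimum} {default} {maximum}\n")

    print("before:", read_tcp_rmem())
    mn, df, _ = read_tcp_rmem()
    set_tcp_rmem(mn, df, 4 * 1024 * 1024)   # cap the max receive buffer at 4 MiB
    print("after:", read_tcp_rmem())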
