MIT Develops a New Technique to Load Webpages Faster
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have taken a different approach to the persistent problem of webpage bloat, developing a new tool called Polaris that they say can reduce page load times by as much as 34% compared with conventional approaches. Polaris has been tested across a range of network conditions on 200 of the most globally popular websites (per the Alexa index), including NYT.com, Apple.com, ESPN.com, and Weather.com. For pages in the 95th percentile, and for more complex pages such as the New York Times, load times improved by 59%.
MIT’s technique focuses on mapping the connections (aka ‘dependencies’) between the different objects on a page in order to dynamically determine the most efficient order for a browser to load the various interdependent elements. Polaris’s gains depend on network conditions: they are largest when delays are high and pages contain many objects, especially dynamic objects whose evaluation can trigger subsequent fetches.
This dependency graph lets the browser plan an efficient fetch order and speed up the page loading process. While the researchers note there have been prior attempts at “dependency-tracking”, they claim theirs is a more “fine-grained” mapping of these relationships; other methods, they say, have focused on comparing lexical relationships via HTML tags and have thus failed to capture “more subtle dependencies”.
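The core idea of dependency-aware loading can be sketched with a toy scheduler: given a graph of which page objects depend on which others, a topological ordering tells the browser which fetches can proceed immediately and which must wait. The graph below is hypothetical and much simpler than anything Polaris actually produces; it only illustrates the scheduling principle.

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph for a page: each object maps to the
# set of objects it depends on (e.g. an image whose URL is computed
# by a script, or a font referenced from a stylesheet).
deps = {
    "index.html": set(),
    "app.js":     {"index.html"},
    "style.css":  {"index.html"},
    "hero.jpg":   {"app.js"},       # dynamic object: fetched by the script
    "font.woff":  {"style.css"},    # referenced from the stylesheet
}

ts = TopologicalSorter(deps)
ts.prepare()

# Each batch contains objects with no unresolved dependencies,
# so a browser could fetch everything in a batch concurrently.
schedule = []
while ts.is_active():
    batch = sorted(ts.get_ready())
    schedule.append(batch)
    for obj in batch:
        ts.done(obj)

print(schedule)
```

A lexical, tag-based loader would discover `hero.jpg` only after running `app.js`; with the dependency graph known up front, the scheduler can see that the third batch exists before any script executes.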
The MIT research team says that while Google and Amazon have focused on increasing loading speeds through data compression, Polaris’s approach allows it to work better with more complex websites, especially on mobile networks, which tend to suffer longer page-load delays than traditional wired networks. The researchers hope to see Polaris incorporated into popular Internet browsers like Firefox and Chrome.
Google Joins Open Compute Project To ‘Drive Standards In IT Infrastructure’ With New 48-Volt Rack Design
Facebook’s Open Compute Project (OCP) was the company’s initiative to allow visibility into the custom-built servers, racks, and other equipment used in Facebook’s data centers. Major tech companies including Microsoft, Intel, Rackspace, and Apple joined the effort. Two holdouts remained among the largest Internet companies, namely Google and Amazon, both of which had kept their data center designs to themselves. Now, however, Google has announced at the Open Compute Project Summit that it is joining the OCP.
In the past, Google kept its data center and server designs under wraps, preserving a competitive advantage over Facebook. Google will contribute a new rack specification that includes 48V power distribution and a new form factor, allowing OCP racks into its data centers to support higher-performance computing systems while being more energy-efficient and cost-effective. Google also intends to pursue other OCP projects in the future, including the standardization of software, alternatives to SNMP (which is used to share operational data inside data centers), and redesigned disk drives that work better together in cloud data centers.
With all of these companies pooling their data center expertise, the collaboration could accelerate advances in areas such as artificial intelligence. Making AI work at worldwide scale will require massive neural networks, which collaboration among the tech giants on scalable computing could help make possible.
Google AI Program Finally Loses to Human Go World Champion
After three consecutive losses, Lee Sedol, world champion of the Chinese board game Go, has beaten Google’s DeepMind artificial intelligence program, AlphaGo, in 176 moves during the fourth game of their five-game series. DeepMind founder Demis Hassabis said the AI lost because of its delayed reaction to a slight mistake on the 79th move: it did not realize the extent of its error until the 87th move, at which point it began making bizarre plays in the middle of the game. However, Hassabis believes that AlphaGo’s first and only loss against a professional human player will help Google eliminate weaknesses in the AI.
Go is considered extremely complex and remains an unsolved “grand challenge” for artificial intelligence. Despite decades of effort, the strongest Go computer programs before AlphaGo still played only at the level of human amateurs. AlphaGo is a program based on general-purpose AI methods, using machine learning and deep neural networks to evaluate board positions and select moves, mimic expert players, and improve by learning from games played against itself.
From what was inadvertently indicated at the press conference, AlphaGo considers only the future opponent moves it deems most profitable for the opponent. This leaves it with glaring blind spots when it faces lines it has not prepared for. Lee Sedol learned to exploit this weakness within four games, whereas AlphaGo’s neural network requires millions of training games to alter its playing style.
AlphaGo’s victories against the legendary Go player Lee Sedol over the last few days mark a major milestone in AI research. DeepMind researchers hope the techniques behind AlphaGo can be used to drastically improve real-world applications in Google’s core products, such as the Google search engine and self-driving cars.
Imperva’s CounterBreach Uses Machine Learning to Detect Insider Threats
With data breaches on the rise and insiders leaking sensitive company information to external attackers, a key problem is identifying what data was accessed, how it was accessed, and how to investigate the source of the breach. One approach is to use a User Behavior Analytics (UBA) tool to collect user behavior monitoring data. These metrics offer transparency into potential insider threats and can provide early warning signs of what data was accessed, how it is being used, and what data is missing.
Imperva’s CounterBreach platform uses machine learning to analyze how users access data in order to spotlight dangerous data access and use patterns. CounterBreach “learns” how users normally access data on a daily basis and determines whether record modifications made by users or applications are out of the ordinary.
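This kind of behavioral baselining can be sketched with a simple per-user statistical profile: build a baseline from a user’s historical daily access volume, then flag days that deviate by more than a few standard deviations. CounterBreach’s actual models are proprietary; the feature (daily record count) and the 3-sigma threshold below are assumptions for illustration only.

```python
import statistics

# Hypothetical baseline: records accessed per day by one user over 30 days.
history = [102, 98, 110, 95, 105, 99, 101, 97, 103, 100] * 3

mean = statistics.mean(history)      # 101.0
stdev = statistics.pstdev(history)   # ~4.1

def is_anomalous(todays_count, threshold=3.0):
    """Flag a day whose access volume deviates more than
    `threshold` standard deviations from the user's baseline."""
    z = abs(todays_count - mean) / stdev
    return z > threshold

print(is_anomalous(104))    # typical volume: not flagged
print(is_anomalous(5000))   # bulk export: flagged for review
```

A real system would profile many features per user (tables touched, time of day, client application), but the principle is the same: learn normal, then alert on deviation.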
CounterBreach also complements machine learning with non-invasive deception technology to identify compromised endpoint devices. By dynamically learning normal data access patterns through enhanced user behavior analytics, CounterBreach proactively alerts IT teams to potential anomalies. The CounterBreach solution is expected to become generally available this quarter.