
Google’s Latest Developments in Machine Learning, AI and Quantum Algorithms

Last month, we wrote a summary of the latest Google Cloud Next conference held in San Francisco. Not only did Google highlight its latest major investments in infrastructure, but the search giant went into detail about how it is pushing into big data and machine learning in a major way. In today’s post, we are going to look at several of its latest projects that utilize artificial intelligence (AI), machine learning (ML) and quantum algorithms.

On the main page of Google AI, the division at the company dedicated exclusively to AI (first announced at Google I/O 2017), the company outlines its overall mission for the technology: “Bringing the benefits of AI to everyone: At Google AI, we’re conducting research that advances the state-of-the-art in the field, applying AI to products and to new domains, and developing tools to ensure that everyone can access AI.”

The division’s three main areas of focus are research, tools and ‘applied AI’.

Google is making many of its AI and ML projects open source as a matter of course, aiming for its architecture to be extended to other projects and encouraging developers to share new use cases. At the 2017 I/O conference, CEO Sundar Pichai made this goal clear: “We want it to be possible for hundreds of thousands of developers to use machine learning.” He also articulated a longer-term ambition for what AI could unlock, saying, “One day AI will invent new molecules that behave in predefined ways.”

Last month, Google’s AI Quantum Team announced its latest work in quantum computing, the public alpha of Cirq, at the First International Workshop on Quantum Software and Quantum Machine Learning.

AI – How Android 9 Pie Uses AI and Machine Learning

Google launched Android 9 Pie on Thursday, the latest release of Android, which leans on machine learning (ML) to make Android phones easier to use. Google says Pie is “powered by AI”, and that intelligence surfaces across the release in various ways, including:

- Adaptive Battery, developed with DeepMind, which uses ML to predict which apps you will use in the coming hours and prioritizes battery power accordingly
- Adaptive Brightness, which learns your preferred screen-brightness settings in different environments and adjusts the screen automatically
- App Actions, which predicts the action you are likely to take next and surfaces it at the top of the app drawer

Machine Learning – Google BigQuery and BigQuery ML

Google BigQuery enables interactive analysis of big datasets, “making it easy for businesses to share meaningful insights and develop solutions based on customer analytics”. BigQuery ML was launched last month as a capability embedded within BigQuery that enables data scientists and analysts to construct and deploy ML models on massive structured or semi-structured datasets. The goal is to make it easier for data analysts without traditional data science backgrounds to apply ML techniques to their data.

BigQuery ML works in the following ways, as the sketch below illustrates:

- Models are created and trained with simple, familiar SQL statements (such as CREATE MODEL), so no separate ML framework is needed
- Training and prediction run inside BigQuery itself, so data never has to be exported or moved into another tool
- The initial release supports linear regression for forecasting and binary logistic regression for classification
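Here is a minimal sketch of what that workflow looks like from Python using the google-cloud-bigquery client; the project, dataset, table and column names are hypothetical placeholders:

```python
from google.cloud import bigquery

# Hypothetical project name; authentication is assumed to be configured.
client = bigquery.Client(project="my-project")

# BigQuery ML models are defined in plain SQL. This trains a binary
# logistic regression classifier directly where the data lives.
create_model_sql = """
CREATE OR REPLACE MODEL `my_dataset.purchase_model`
OPTIONS(model_type='logistic_reg') AS
SELECT
  country,
  pageviews,
  did_purchase AS label  -- BigQuery ML trains against a column named `label`
FROM `my_dataset.sessions`
"""
client.query(create_model_sql).result()  # block until training completes

# Predictions are also just SQL, via the ML.PREDICT table function.
predict_sql = """
SELECT predicted_label, country
FROM ML.PREDICT(MODEL `my_dataset.purchase_model`,
                (SELECT country, pageviews FROM `my_dataset.sessions`))
"""
for row in client.query(predict_sql).result():
    print(row.predicted_label, row.country)
```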

Further details can be found here.

Quantum Computing – Cirq: A Framework for NISQ Algorithms

Quantum hardware and quantum algorithms have undergone significant growth in the last several years, providing a major boost to quantum computing. This is partly due to Noisy Intermediate Scale Quantum (NISQ) computers becoming available for use. These are devices with 50–100 qubits and high-fidelity quantum gates that may be able to perform tasks surpassing the capabilities of today’s classical digital computers. NISQs are a step along the way to more accurate quantum gates and, in the long run, fully fault-tolerant quantum computing.

Google’s Cirq is an open source Python framework for creating, editing and invoking NISQ circuits. Cirq is focused on near-term questions, above all whether NISQ computers will in fact be able to help solve computational problems of practical importance.

Once installed, Cirq offers researchers numerous capabilities, including:

- Fine-tuned control over quantum circuits: specifying gate behavior using native gates, placing those gates appropriately on the device, and scheduling their timing within the constraints of the quantum hardware
- Data structures optimized for writing and compiling circuits for NISQ architectures
- Running algorithms locally on a simulator, with a design intended to integrate with future quantum hardware or larger cloud-based simulators

A short example of what building and simulating a circuit in Cirq looks like follows below.
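This minimal sketch uses Cirq’s public API (which has evolved somewhat since the alpha announced here); it builds a two-qubit Bell-state circuit and samples it on the built-in simulator:

```python
import cirq

# Two qubits laid out on a line.
q0, q1 = cirq.LineQubit.range(2)

# Build a Bell-state circuit: Hadamard, then CNOT, then measurement.
circuit = cirq.Circuit([
    cirq.H(q0),
    cirq.CNOT(q0, q1),
    cirq.measure(q0, q1, key="m"),
])
print(circuit)

# Sample the circuit on Cirq's local simulator.
result = cirq.Simulator().run(circuit, repetitions=100)
# Expect roughly half 00 and half 11 outcomes (histogram keys 0 and 3).
print(result.histogram(key="m"))
```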

In a July blog post, Google’s AI Quantum Team also announced the open source release of OpenFermion-Cirq, a library of quantum circuits for simulating quantum chemistry and materials, as an example of a Cirq-based application that is already enabling near-term algorithms. The team stressed, “Quantum computing will require strong cross-industry and academic collaborations if it is going to realize its full potential. In building Cirq, we worked with early testers to gain feedback and insight into algorithm design for NISQ computers.” The post also includes examples of other Cirq work from early adopters.

Google Research Initiative: How Connectomics is Using AI and New Kinds of Neural Networks

In July, several members of Google’s Connectomics team, in collaboration with members of the Max Planck Institute of Neurobiology, published the article ‘High-precision automated reconstruction of neurons with flood-filling networks’ in Nature Methods. It focuses on the role of flood-filling networks, “a method for automated segmentation that, similar to most previous efforts, uses convolutional neural networks, but contains in addition a recurrent pathway that allows the iterative optimization and extension of individual neuronal processes”. The article demonstrates how this new kind of recurrent neural network significantly improves the accuracy of automated interpretation of connectomics data over previous deep learning techniques.

The field of connectomics aims to map the structure of the neuronal networks found in the nervous system in order to gain a deeper understanding of how the brain functions. This involves imaging brain tissue in 3D at nanometer resolution, which means that even a single cubic millimeter of brain tissue can generate on the order of 1,000 terabytes of data. The main bottleneck in brain mapping to date has been automating the interpretation of this huge volume of data rather than gathering the data itself. The new study is aimed at closing that gap.
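As a rough intuition for how flood filling differs from one-shot segmentation, here is a toy 2D sketch in plain NumPy. It is emphatically not Google’s model: the trained 3D recurrent CNN is replaced by a hypothetical stand-in that dilates the current object mask and keeps only pixels the raw image supports, but the overall loop (seed a single object, then iteratively refine and extend its mask, feeding the mask back in as input) mirrors the flood-filling idea:

```python
import numpy as np

def toy_network(image, mask):
    """Stand-in for the trained recurrent CNN.

    A real flood-filling network receives both the raw image and its own
    previous mask prediction (the recurrent pathway) and outputs an updated
    object-probability map. Here we fake that with a one-pixel dilation of
    the mask, gated by image evidence.
    """
    padded = np.pad(mask, 1)
    dilated = np.max(
        [padded[1:-1, 1:-1],                     # center
         padded[:-2, 1:-1], padded[2:, 1:-1],    # up / down neighbors
         padded[1:-1, :-2], padded[1:-1, 2:]],   # left / right neighbors
        axis=0)
    return dilated * (image > 0.5)  # keep only image-supported pixels

# Synthetic "tissue": one bright square process on a noisy background.
rng = np.random.default_rng(0)
image = 0.1 * rng.random((20, 20))
image[5:15, 5:15] = 1.0

# Seed a single voxel inside the object, then iteratively flood-fill.
mask = np.zeros_like(image)
mask[10, 10] = 1.0
for _ in range(30):
    mask = toy_network(image, mask)

print(int((mask > 0.5).sum()), "pixels assigned to the seeded object")
```

Because each pass grows a single object outward from its seed rather than segmenting everything at once, merge errors between neighboring objects become much rarer, which is the property the paper exploits.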

TensorFlow Object Detection API

The TensorFlow Object Detection API was released last year, followed by a series of new features, including models trained on the Open Images dataset, models trained via Neural Architecture Search, and instance segmentation support. Use cases have varied from diagnosing diseases in Tanzanian cassava plants to finding scofflaws in New York City.

In July, Google announced various additions to the API, including:

- Support for accelerated training of detection models via Cloud TPUs
- Better mobile deployment, with quantized training and easy export of trained models to TensorFlow Lite
- Several new model architectures, including SSD with MobileNet v1 FPN and SSD with ResNet-50 FPN (RetinaNet)

Google also released pre-trained weights for each of the above models based on the COCO dataset.
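For a sense of how the API is consumed, here is a hedged inference sketch. The 2018 releases targeted TF1-style frozen graphs; this sketch instead uses the current TF2 SavedModel interface of the Object Detection API, and the model path and input image are placeholders:

```python
import numpy as np
import tensorflow as tf

# Placeholder path: any detection model exported by the Object Detection API.
detect_fn = tf.saved_model.load("exported_model/saved_model")

# Stand-in for a real photo: a batch of one 640x640 RGB image.
image = np.zeros((1, 640, 640, 3), dtype=np.uint8)
detections = detect_fn(tf.constant(image))

# Exported detection models return a dict of batched tensors.
boxes = detections["detection_boxes"][0].numpy()    # normalized [ymin, xmin, ymax, xmax]
scores = detections["detection_scores"][0].numpy()
classes = detections["detection_classes"][0].numpy().astype(int)

for box, cls, score in zip(boxes, classes, scores):
    if score > 0.5:  # simple confidence cutoff
        print(f"class {cls} at {box} (score {score:.2f})")
```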

Google Research – Computer Vision Used for Automating Drug Discoveries

“Every time you miss a protein crystal, because they are so rare, you risk missing on an important biomedical discovery.”
– Patrick Charbonneau, Duke University Dept. of Chemistry and Lead Researcher, MARCO initiative.

Also last month, in collaboration with researchers from the MARCO initiative, Google published “Classification of crystallization outcomes using deep convolutional neural networks” in the journal PLOS ONE. The paper discusses several recent deep convolutional network architectures and how the joint researchers customized them to achieve an accuracy of over 94% on the visual recognition task of identifying protein crystals, which are rare and hard to detect. Missing these structures “can result in lost opportunity for important biomedical discoveries for advancing the state of medicine”.
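To make the task concrete, below is a deliberately minimal Keras sketch of a crystallization-outcome classifier. The published work customized far larger Inception-style architectures and trained on roughly half a million labeled images; here the four class labels follow the MARCO outcome categories, and the input size is an arbitrary placeholder:

```python
import tensorflow as tf
from tensorflow.keras import layers

# MARCO outcome categories: crystals, precipitate, clear, other.
NUM_CLASSES = 4

# A small CNN, standing in for the much larger networks used in the paper.
model = tf.keras.Sequential([
    layers.Input(shape=(128, 128, 3)),        # placeholder image size
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```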

Robotic Manipulation – Scalable Deep Reinforcement Learning and Teaching Uncalibrated Robots How to Visually Self-Adapt

Two tools exist that can help robots acquire skills from experience: deep learning, an ML technique that is strong at handling unstructured real-world scenarios, and reinforcement learning, an area of ML inspired by behaviorist psychology that enables longer-term reasoning and helps robots exhibit “more complex and robust sequential decision making”. Google’s Brain Team is finding that by combining these two approaches, there is the potential “to enable robots to learn continuously from their experience, allowing them to master basic sensorimotor skills using data rather than manual engineering”.

The Brain Team has been working on an off-policy reinforcement learning algorithm that can benefit from large amounts of disparate experience from past interactions. They did so by combining large-scale distributed optimization with a new fitted deep Q-learning variant they call QT-Opt (the preprint is available on arXiv). Along the way, they found that sharing experience across robots was a useful way to accelerate learning.
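One notable design choice in QT-Opt is how it handles continuous robot actions: rather than having a network output an action directly, the paper evaluates the argmax in the Q-learning target with a stochastic search, specifically the cross-entropy method (CEM). Here is a toy sketch of that inner loop, with toy_q as a hypothetical stand-in for a trained Q-network:

```python
import numpy as np

def cem_maximize(q_func, state, action_dim=2, iters=3, pop=64, n_elite=6):
    """Cross-entropy method: search for the action that maximizes Q(s, a)."""
    mean, std = np.zeros(action_dim), np.ones(action_dim)
    for _ in range(iters):
        # Sample a population of candidate actions around the current mean.
        actions = np.clip(mean + std * np.random.randn(pop, action_dim), -1, 1)
        scores = np.array([q_func(state, a) for a in actions])
        # Refit the sampling distribution to the best-scoring candidates.
        elite = actions[np.argsort(scores)[-n_elite:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean

def toy_q(state, action):
    # Hypothetical stand-in for a trained Q-network: peak at [0.3, -0.5].
    return -np.sum((action - np.array([0.3, -0.5])) ** 2)

best_action = cem_maximize(toy_q, state=None)
print(best_action)  # converges toward [0.3, -0.5]
```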

Another recent area of focus for the Google Brain Team has been teaching uncalibrated robots to visually self-adapt, emulating the way humans can manipulate objects without needing a fixed or specific viewpoint. Visual motor integration develops during childhood as we repeatedly manipulate objects in a wide array of situations, learning through trial and error and calling on visual and sensory cues to adapt. Until recently, vision-based controllers in robotics have not been able to achieve a similar kind of flexibility. However, Google has been working on honing robots’ ability to rapidly acquire motor control skills under large viewpoint variation. Envisioned use cases include helping in rescue efforts in war, disaster or emergency zones.
