Google’s Latest Developments in Machine Learning, AI and Quantum Algorithms

Last month, we wrote a summary of the latest Google Cloud Next conference held in San Francisco. Not only did Google highlight its latest investments in infrastructure, but the search giant also went into detail about how it is pushing into big data and machine learning in a major way. In today’s post, we are going to look at several of its latest projects that utilize artificial intelligence (AI), machine learning (ML) and quantum algorithms.

On its main AI page, Google AI (the division dedicated exclusively to AI, first announced at Google I/O 2017) outlines the company’s overall mission for the technology: “Bringing the benefits of AI to everyone: At Google AI, we’re conducting research that advances the state-of-the-art in the field, applying AI to products and to new domains, and developing tools to ensure that everyone can access AI.”

The division’s three main areas of focus are research, tools and ‘applied AI’.

Google is making many of its AI and ML projects open source as a matter of course, aiming for its architecture to be extended to other projects and encouraging developers to share new use cases. At the 2017 I/O conference, CEO Sundar Pichai made this goal clear: “We want it to be possible for hundreds of thousands of developers to use machine learning.” He also pointed to a larger, longer-term ambition for what AI could unlock, saying, “One day AI will invent new molecules that behave in predefined ways.”

Last month, Google’s AI Quantum Team announced its latest work in quantum computing, the public alpha of Cirq, at the First International Workshop on Quantum Software and Quantum Machine Learning.

AI – How Android 9 Pie Uses AI and Machine Learning

Google launched Android 9 Pie on Thursday, the latest release of Android, which calls on machine learning to make phones easier to use. At the top of its notification to users, Google says Pie is “powered by AI”. AI is at work in various ways across Pie, including:

  • Adaptive Battery and Brightness
    • Pie enables this feature by default, tracking how you use apps, shifting unused apps into a low-power state and noting when you typically open favorite apps so that Android can preload them for quick access at that time of day
    • Google says Adaptive Battery will reduce wakelocks (when an app stops your phone from going into deep sleep) by 30%
    • The same learning lets Android adjust brightness levels over the course of the day based on how you set them yourself
  • Smart Replies
    • Inbox and Messenger already offered Smart Reply, suggesting quick responses such as “OK” or “Right with you”
    • Now any app, not just Google’s, can use these AI-assisted suggestions in Pie (however, developers need to add support for Smart Reply before it can be enabled in a specific non-Google app)
  • Smart Linkify
    • Smart Linkify is a new API that adds clickable links when certain kinds of entities are detected in text, e.g. if a friend sends you an address, Smart Linkify lets you tap straight through to a map to look it up
    • It is a new version of the pre-existing Android Linkify API
    • It uses a small feed-forward neural network per language (500KB), with low latency (under 20ms on Google Pixel phones) and minimal inference code (250KB)
    • Links are generated using the same ML technology that powers Smart Text Selection (released as part of Android Oreo)
  • Suggested Actions
    • Your app drawer gets smarter suggestions as ML tracks which apps and features you use most often
    • Example actions include suggesting a link to watch a specific TV show, read a conversation in Messenger or select a specific security camera feed in your Nest app
    • You are given the option to keep a suggested action or to respond “Do not show again”
  • Hidden Notifications
    • Android Pie is designed to help cut down on the noise of unnecessary notifications
    • If you tend to swipe notifications from a particular app away without looking deeper, Android will produce an alert in place of the notification to ask if you want to continue to see those notifications – if you choose not to, that app will be silenced.

Machine Learning – Google BigQuery and BigQuery ML

Google BigQuery enables interactive analysis of big datasets, “making it easy for businesses to share meaningful insights and develop solutions based on customer analytics”. BigQuery ML was launched last month as a capability embedded within BigQuery that enables data scientists and analysts to build and deploy ML models on massive structured or semi-structured datasets. The goal is to make it easier for data analysts without traditional data science backgrounds to apply ML techniques to their data.

BigQuery ML works in the following ways:

  • It utilizes a set of simple SQL language extensions that let users take advantage of popular ML capabilities, such as forecasting sales via predictive analysis or creating customer segmentations, at the source where the data is stored (see the sketch after this list)
  • It automatically sets smart defaults
  • It takes care of data transformation
  • BigQuery ML is based on the standard (batch) variant of gradient descent rather than the stochastic variant, which gives it several practical advantages:
    • Unlike stochastic ML algorithms, batch gradient descent is insensitive to the ordering and partitioning of data on disk, which reduces the computation required
    • Batch methods can be used in combination with line search techniques from the classic optimization literature – this leads to a learning algorithm that is highly stable and requires less fine-tuning than traditional stochastic ones
  • It includes support for regularization and preconditioning.
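
As a rough illustration of this SQL-first workflow, the sketch below uses the Python BigQuery client to train a simple linear regression model and then query it; the dataset, table and column names are hypothetical, and linear regression is just one of the model types BigQuery ML supports.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Train a model with plain SQL (hypothetical dataset/table/column names).
client.query("""
    CREATE OR REPLACE MODEL `my_dataset.sales_forecast`
    OPTIONS (model_type = 'linear_reg', input_label_cols = ['sales']) AS
    SELECT region, promo_spend, month, sales
    FROM `my_dataset.historical_sales`
""").result()

# Generate predictions with ML.PREDICT, again entirely in SQL.
rows = client.query("""
    SELECT *
    FROM ML.PREDICT(MODEL `my_dataset.sales_forecast`,
                    (SELECT region, promo_spend, month
                     FROM `my_dataset.next_quarter`))
""").result()

for row in rows:
    print(dict(row))
```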

Further details can be found here.

Quantum Computing – Cirq: A Framework for NISQ Algorithms

Quantum hardware and quantum algorithms have undergone significant growth in the last several years, providing a major boost to quantum computing. This is partly due to Noisy Intermediate Scale Quantum (NISQ) computers becoming available for use. These are devices with 50–100 qubits and high-fidelity quantum gates that can perform tasks surpassing the capabilities of today’s classical digital computers. NISQ devices are a step along the way to more accurate quantum gates and, in the long run, fully fault-tolerant quantum computing.

Google’s Cirq is an open-source Python framework for creating, editing and invoking NISQ circuits. Cirq is focused on near-term questions: finding out whether NISQ computers can indeed help solve computational problems of practical importance.

Once installed, Cirq offers researchers numerous capabilities, including:

  • Fine-tuned control over quantum circuits, allowing researchers to specify gate behavior using native gates (including placing them on the device and scheduling their timing)
  • Optimized data structures for writing and compiling quantum circuits to enable users to derive the maximum use from NISQ architectures
  • Support for running quantum algorithms on a built-in simulator, which is designed to integrate with future iterations of quantum hardware or larger cloud-based simulators (a short example follows this list)
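
To give a flavor of what creating and invoking a NISQ circuit looks like, here is a minimal sketch using Cirq’s Python API; the calls shown reflect later Cirq releases and may differ in detail from the 2018 public alpha.

```python
import cirq

# Pick two qubits on a 2D grid, mirroring the layout of Google's NISQ hardware.
q0, q1 = cirq.GridQubit(0, 0), cirq.GridQubit(0, 1)

# Build a small circuit: a Hadamard, a CNOT, then a joint measurement.
circuit = cirq.Circuit(
    cirq.H(q0),
    cirq.CNOT(q0, q1),
    cirq.measure(q0, q1, key="result"),
)
print(circuit)

# Run the circuit on Cirq's built-in simulator.
simulator = cirq.Simulator()
result = simulator.run(circuit, repetitions=100)
print(result.histogram(key="result"))
```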

In a July blog post, Google’s AI Quantum Team also announced the open-source release of OpenFermion-Cirq, which provides quantum circuits for simulations of quantum chemistry and materials, as an example of a Cirq-based application that is already enabling near-term algorithms. The team stressed, “Quantum computing will require strong cross-industry and academic collaborations if it is going to realize its full potential. In building Cirq, we worked with early testers to gain feedback and insight into algorithm design for NISQ computers.” The post also includes examples of other Cirq work from early adopters.

Google Research Initiative: How Connectomics is Using AI and New Kinds of Neural Networks

In July, several members of Google’s Connectomics team, in collaboration with researchers from the Max Planck Institute of Neurobiology, published the article ‘High-precision automated reconstruction of neurons with flood-filling networks’ in Nature Methods. It focuses on the role of flood-filling networks, “a method for automated segmentation that, similar to most previous efforts, uses convolutional neural networks, but contains in addition a recurrent pathway that allows the iterative optimization and extension of individual neuronal processes”. The article demonstrates how this new kind of recurrent neural network significantly improves the accuracy of automated interpretation of connectomics data over previous deep learning techniques.

The field of connectomics aims to map the structure of the neuronal networks found in the nervous system in order to gain a deeper understanding of how the brain functions. This involves imaging brain tissue in 3D at nanometer resolution, which means that even a single cubic millimeter of brain tissue can generate over 1,000 terabytes of data. To date, the main bottleneck in brain mapping has been automating the interpretation of this huge volume of data rather than gathering the data itself. The new study aims to help push that automation to the next level.

TensorFlow Object Detection API

The TensorFlow Object Detection API was released last year and has since been followed by a series of new features, including models trained on new datasets such as Open Images, models trained via Neural Architecture Search and instance segmentation support. Use cases have ranged from diagnosing diseases in Tanzanian cassava plants to finding scofflaws in New York City.

In July, Google announced various additions to the API, including:

  • Support for accelerated training of object detection models via Cloud TPUs
  • An improved mobile deployment process, with accelerated inference and more straightforward export of models to mobile devices in the TensorFlow Lite format (see the sketch below)
  • Multiple new model architecture definitions, which include:
    • RetinaNet (Lin et al., 2017)
    • MobileNet adaptation of RetinaNet
    • The Pooling Pyramid Network (PPN), a novel SSD-based architecture whose model size is three times smaller than that of SSD MobileNet v1 with very little accompanying loss in accuracy

Google also released pre-trained weights for each of the above models based on the COCO dataset.
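
As a rough sketch of the mobile export step, the snippet below converts an already-exported detection SavedModel to the TensorFlow Lite format using the TensorFlow 2.x converter; the path is hypothetical, and the Object Detection API also provides its own dedicated export scripts.

```python
import tensorflow as tf

# Hypothetical path to a detection model exported as a SavedModel
# (e.g. an SSD MobileNet checkpoint exported with the Object Detection API).
saved_model_dir = "exported_model/saved_model"

# Convert the SavedModel to the TensorFlow Lite flat-buffer format.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()

# Write the .tflite file, ready to bundle into a mobile app.
with open("detector.tflite", "wb") as f:
    f.write(tflite_model)
```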

Google Research – Computer Vision Used for Automating Drug Discoveries

“Every time you miss a protein crystal, because they are so rare, you risk missing on an important biomedical discovery.”
– Patrick Charbonneau, Duke University Dept. of Chemistry and Lead Researcher, MARCO initiative.

Also last month, in collaboration with researchers from the MARCO initiative, Google published “Classification of crystallization outcomes using deep convolutional neural networks” in the journal PLOS ONE. The paper discusses several recent architectures of deep convolutional networks and how the joint researchers customized them to obtain an accuracy of over 94% on the visual recognition task of identifying protein crystals, which are rare and hard to detect. Missing these structures “can result in lost opportunity for important biomedical discoveries for advancing the state of medicine”.
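
For illustration only, a small convolutional classifier of the same general kind might look like the sketch below in Keras; this is not the architecture used in the paper (which customizes much deeper networks), and the input size, layer widths and binary “crystal / no crystal” framing are all assumptions made for brevity.

```python
import tensorflow as tf

# Illustrative sketch: a tiny CNN that scores an image for the presence of a
# protein crystal. Input size and layer widths are arbitrary assumptions.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of a crystal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```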

Robotic Manipulation – Scalable Deep Reinforcement Learning and Teaching Uncalibrated Robots How to Visually Self-Adapt

Two tools can help robots acquire skills from experience: deep learning, an ML technique that is strong at handling unstructured real-world scenarios, and reinforcement learning, an area of ML inspired by behaviorist psychology that supports longer-term reasoning and helps robots exhibit “more complex and robust sequential decision making”. Google’s Brain Team is finding that, by combining these two approaches, there is the potential “to enable robots to learn continuously from their experience, allowing them to master basic sensorimotor skills using data rather than manual engineering”.

The Brain Team has been working on building an off-policy reinforcement learning algorithm that can benefit from large amounts of disparate experience from past interactions. They did so by combining large-scale distributed optimization with a new fitted deep Q-learning algorithm that they call QT-Opt (the preprint is available on arXiv). In doing so, they found that sharing experience across robots was a useful way to accelerate learning.
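
For context, fitted Q-learning repeatedly regresses a Q-function toward Bellman backup targets computed from stored transitions. The toy sketch below shows only that generic target computation; it is not Google’s QT-Opt, which adds large-scale distributed optimization and a cross-entropy-style search over continuous actions, and every function and value here is a hypothetical stand-in.

```python
import numpy as np

def q_value(state, action, weights):
    # Hypothetical stand-in for a learned Q-function; here just a linear model.
    return float(np.dot(weights, np.concatenate([state, action])))

def bellman_target(reward, next_state, candidate_actions, weights, gamma=0.99):
    # Target = r + gamma * max over sampled actions of Q(s', a').
    best_next_q = max(q_value(next_state, a, weights) for a in candidate_actions)
    return reward + gamma * best_next_q

# Toy usage with random data; a real learner would regress Q toward these targets.
rng = np.random.default_rng(0)
weights = rng.normal(size=6)                      # 4 state dims + 2 action dims
next_state = rng.normal(size=4)
candidates = [rng.normal(size=2) for _ in range(8)]
print(bellman_target(1.0, next_state, candidates, weights))
```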

Another recent area of focus for the Google Brain Team has been teaching uncalibrated robots to visually self-adapt, better emulating the way humans can manipulate objects without needing to adjust their viewpoint to a fixed or specific pose. Visual motor integration develops during childhood through repeatedly manipulating objects in a wide array of situations, learning through trial and error and drawing on visual and sensory cues to adapt. Until recently, vision-based controllers in robotics have not been able to achieve a similar kind of flexibility. However, Google has been working on honing robots’ capability to rapidly acquire motor control skills under large viewpoint variation. Envisioned use cases include helping in rescue efforts in war, disaster or emergency zones.
