Google is now adding "Try the API" boxes that let users test out its Cloud machine-learning API product line: the Cloud Vision API, Speech API, and Natural Language API. The effort is aimed at developers interested in machine learning and in building apps that understand images, text, or speech.
With the "Try the API" boxes now prominently displayed on each API product page, users can drag and drop a file into a user-friendly front end for the REST API and see, via label detection and other features, what the API recognizes in the file.
With the Cloud Vision API, developers can upload images and the API uses pattern matching to label their contents across thousands of categories. Image recognition can use the context of the image to detect and classify individual objects and faces, and even landmarks and logos. Image classification can also flag "objectionable content" as users build their metadata, and the API can report the emotions it detects on faces in the image (e.g. happy, sad, joyful) using sentiment analysis. The Optical Character Recognition (OCR) feature can recognize words in multiple languages and display them in a text box along with the position of each word the API recognizes.
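Under the hood, the "Try the API" box is issuing a single REST call to the Vision API's `images:annotate` endpoint, asking for several feature types at once. A minimal sketch of that request body, assuming a local image read as bytes (the image bytes below are a placeholder, not a real image):

```python
import base64
import json

# Documented Cloud Vision REST endpoint for image annotation.
VISION_ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"

def build_vision_request(image_bytes):
    """Build the JSON body for one annotate call that asks for labels,
    faces (with emotion likelihoods), and OCR text in a single pass."""
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [
                {"type": "LABEL_DETECTION", "maxResults": 10},
                {"type": "FACE_DETECTION"},  # returns joy/sorrow/anger likelihoods
                {"type": "TEXT_DETECTION"},  # OCR: words plus bounding boxes
            ],
        }]
    }

# Placeholder bytes stand in for a real image file's contents.
body = build_vision_request(b"\x89PNG placeholder")
print(json.dumps(body)[:60])
```

The body would then be POSTed to `VISION_ENDPOINT` with an API key; the response carries one annotation list per requested feature, which is how a single drag-and-drop can show labels, faces, and recognized text together.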
With the Cloud Natural Language API, the service extracts the meaning and structure of uploaded text and returns various attributes and metadata. The API also provides sentiment and syntactic analysis: it can identify the sentiment expressed in the text, whether a statement was intended as positive or negative, the relevant topics discussed, and the syntax of the text, so that software can be trained on the complexities of human language.
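The sentiment-analysis piece maps to the API's `documents:analyzeSentiment` method. A minimal sketch of that request, assuming plain UTF-8 text (the sample sentence is invented for illustration):

```python
# Documented Cloud Natural Language REST endpoint for sentiment analysis.
NL_ENDPOINT = "https://language.googleapis.com/v1/documents:analyzeSentiment"

def build_sentiment_request(text):
    """Build the JSON body asking the API to score a plain-text document.
    The response reports a score from -1.0 (negative) to +1.0 (positive)
    plus a magnitude indicating overall emotional strength."""
    return {
        "document": {"type": "PLAIN_TEXT", "content": text},
        "encodingType": "UTF8",
    }

body = build_sentiment_request("The keynote demo was genuinely impressive.")
```

Swapping the endpoint for `documents:analyzeSyntax` with the same document payload yields the token-by-token syntactic breakdown described above.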
And with the Cloud Speech API, spoken content is converted into text in over 80 supported languages. A leader in speech-to-text technology, Google uses deep neural networks to continuously train and improve the quality of its speech recognition, drawing training data from Android users who use Google's voice search in the Google app and voice typing on Google's keyboard. The API also gives app developers a way to accept commands by voice and to direct those commands at any network-accessible device.
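A voice-command flow like the one described would typically record a short clip on the device and send it to the Speech API's synchronous `speech:recognize` method. A minimal sketch of that request body, assuming 16 kHz LINEAR16 PCM audio (the audio bytes below are a placeholder):

```python
import base64

# Documented Cloud Speech REST endpoint for synchronous recognition.
SPEECH_ENDPOINT = "https://speech.googleapis.com/v1/speech:recognize"

def build_speech_request(audio_bytes, language="en-US"):
    """Build the JSON body for a one-shot transcription request:
    uncompressed 16-bit PCM sampled at 16 kHz, in the given language."""
    return {
        "config": {
            "encoding": "LINEAR16",
            "sampleRateHertz": 16000,
            "languageCode": language,  # a BCP-47 code from the 80+ supported languages
        },
        "audio": {"content": base64.b64encode(audio_bytes).decode("ascii")},
    }

body = build_speech_request(b"\x00\x01\x00\x02")  # placeholder PCM samples
```

The response returns transcript alternatives with confidence scores, which the app can then match against its command vocabulary before forwarding the command to the target device.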
Currently, the Cloud Vision API is available to all developers, while the Speech API and Natural Language API are still in beta.