Bizety: Research & Consulting

Deepen AI’s Image Labeling Tool for Autonomous Systems

Santa Clara startup Deepen AI has developed an annotation and labeling tool for autonomous systems using deep learning and advanced computer vision. The tool works with 2D, 3D, and 4D environments and can annotate and label each scene and frame from the massive amounts of visual data collected by robots, self-driving cars, drones, and other autonomous systems.
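To make the labeling task concrete, a per-frame annotation might look something like the sketch below. This is a hypothetical schema for illustration only; the article does not describe Deepen AI's actual data format, and the class and field names here are assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical annotation schema -- not Deepen AI's real format.
@dataclass
class BoxAnnotation3D:
    label: str      # object class, e.g. "car", "pedestrian", "sign"
    center: tuple   # (x, y, z) position in meters, sensor frame
    size: tuple     # (length, width, height) in meters
    yaw: float      # heading angle in radians

@dataclass
class Frame:
    index: int
    timestamp: float
    annotations: list = field(default_factory=list)

# Label one object in one frame of a driving sequence.
frame = Frame(index=0, timestamp=0.0)
frame.annotations.append(
    BoxAnnotation3D("car", (12.4, -1.8, 0.9), (4.5, 1.9, 1.6), 0.05)
)
print(len(frame.annotations), frame.annotations[0].label)  # 1 car
```

Every object in every frame needs a record like this, which is what makes manual annotation so labor-intensive.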

The startup has developed algorithms that provide image matching, image classification, video segmentation, and more. As the company points out, an autonomous system such as a drone, self-driving car, or robot must learn to perceive the physical world around it. To accomplish this feat, “computer vision, machine learning, and multiple sensors” play a key role. Sensors are devices like “cameras, radars, sonars, and LiDARs”.

LiDAR stands for light detection and ranging; it “is a remote sensing method used to examine the surface of the earth” using light “in the form of a pulsed laser to measure ranges (variable distances) to the earth”. In the case of a self-driving car, the system detects objects like cars, pets, people, signs, road conditions, and more.
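The ranging principle is simple time-of-flight: the sensor emits a laser pulse, times how long the reflection takes to return, and halves the round-trip distance. A minimal sketch of that calculation:

```python
# LiDAR time-of-flight ranging: distance = c * t / 2,
# where t is the pulse's round-trip travel time.
C = 299_792_458.0  # speed of light in m/s

def lidar_range(round_trip_seconds: float) -> float:
    """Distance to the reflecting object, in meters."""
    return C * round_trip_seconds / 2.0

# A pulse returning after ~667 nanoseconds hit an object roughly 100 m away.
print(round(lidar_range(667e-9), 1))  # 100.0
```

Repeating this measurement millions of times per second as the laser sweeps the scene yields the 3D point clouds that annotation tools like Deepen AI's must label.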

(Image source: NOS)

An autonomous car has numerous sensors that capture data. Google’s self-driving car has eight sensors, with a prominent LiDAR camera sitting on the roof capturing scenic data. One of the main challenges of collecting all this data is that each “image of every frame needs to be vetted by humans”; according to Deepen AI, one hour of drive time may take up to 800 man-hours to annotate because each object in a scene needs to be labeled. This is one of the problems the startup is focused on solving.
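A back-of-the-envelope calculation shows why the 800-to-1 ratio is plausible. The capture rate below is an illustrative assumption, not a figure from Deepen AI:

```python
# Rough check on "one hour of driving = up to 800 man-hours of annotation".
# The 10 Hz capture rate is an assumed, typical LiDAR/camera frame rate.
FPS = 10
DRIVE_SECONDS = 3600
frames = FPS * DRIVE_SECONDS          # frames captured in one hour of driving

MAN_HOURS = 800
seconds_per_frame = MAN_HOURS * 3600 / frames
print(frames, seconds_per_frame)      # 36000 frames, 80.0 s of labeling each
```

Even at a modest capture rate, 800 man-hours spread over an hour's worth of frames still leaves only a minute or so of human attention per frame, and a busy street scene can contain dozens of objects to label in each one.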

Background
