Product · 6 min read

Training Your AI in 3D



The world has three dimensions. Why shouldn't your AI?

Today, we're announcing the production availability of our new 3D annotation engine for the Sama platform. This new offering allows our expert annotation team to nimbly explore your lidar point clouds, searching for objects of interest and producing the highest-quality 3D labels.

[Image: Autonomous vehicle]

As much as we love a good "3D" joke, we can't leave out the important 4th dimension -- time. Building on the object tracking expertise from our video annotation toolkit, we support lengthy sequences of point cloud data, tracking moving objects through time and space.

WAIT, WHAT ARE POINT CLOUDS?

Before we get too deep into the weeds on 3D annotation, let's take a step back and talk about 3D point clouds and why they matter.

Throughout the history of computer vision, two-dimensional camera images have dominated. Techniques to detect edges and extract objects from the background, approaches that estimate relative distances to construct implied dimensions, and carefully calibrated dual-camera captures that triangulate depth all incrementally nudged forward a machine's ability to "see." Advances in deep neural nets dramatically improved computer vision's robustness -- and kicked off the current race to bring self-driving cars to market.

Cars are heavy and filled with combustible material. They travel at high velocities. They share the same space as other hurtling vehicles, ambling humans, and wandering animals. This makes the perception of distance absolutely critical. Recognizing an obstruction in the road is step 1; step 2 is estimating its distance and making the appropriate response (slam on the brakes? maneuver safely around?).

Hence, lidar technology has been rapidly adopted as the key enabler for self-driving cars. Kyle Vogt, CEO of GM Cruise, says, “sensors are a critical enabler for deploying self-driving cars at scale, and LIDARs are currently the bottleneck.” Rapid innovation to make lidar sensors faster, smaller, and cheaper, with higher-density and higher-accuracy data, makes adoption near inevitable.

What does the data look like? Rather than a 2D image displayed on a flat screen, a lidar sensor generates a 3D snapshot of the surrounding environment. It scans the area with pinpricks of light, measuring the time of flight of the reflections, and thus precisely capturing a collection of (X, Y, Z) points. We can reconstruct this to display on our screens, much like an Xbox game renders a 3D scene.
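To make that concrete, here is a minimal sketch (our own illustration under idealized assumptions, not any particular sensor's firmware) of how a single time-of-flight measurement and beam direction become one (X, Y, Z) point:

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def lidar_return_to_xyz(tof_s, azimuth_rad, elevation_rad):
    """Convert one idealized lidar return into a Cartesian (X, Y, Z) point.

    tof_s: round-trip time of flight of the laser pulse, in seconds.
    azimuth_rad, elevation_rad: direction the beam was pointing when fired.
    """
    # Divide by 2 because the pulse travels to the target and back.
    r = SPEED_OF_LIGHT * tof_s / 2.0
    x = r * np.cos(elevation_rad) * np.cos(azimuth_rad)
    y = r * np.cos(elevation_rad) * np.sin(azimuth_rad)
    z = r * np.sin(elevation_rad)
    return np.array([x, y, z])

# A reflection arriving 200 ns after firing corresponds to a target ~30 m away.
point = lidar_return_to_xyz(tof_s=200e-9, azimuth_rad=0.5, elevation_rad=0.02)
```

Sweep that calculation across millions of pulses per second and you get the dense point clouds shown below.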

[Image: Lidar sensor (X, Y, Z) points]

The richness and fidelity of this type of data feed a new generation of AI algorithms (e.g., see this paper from Apple on deep neural nets that take voxel inputs directly), making depth perception much easier.
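As a rough illustration of what "voxel inputs" means (a simplified sketch, not the paper's actual pipeline), a raw point cloud can be binned into a dense occupancy grid before being fed to a 3D network:

```python
import numpy as np

def voxelize(points, voxel_size=0.2, grid_min=(-40.0, -40.0, -3.0),
             grid_shape=(400, 400, 30)):
    """Bin an (N, 3) point cloud into a dense 0/1 occupancy grid.

    A cell is 1 if at least one point falls inside it, else 0.
    """
    grid = np.zeros(grid_shape, dtype=np.uint8)
    idx = np.floor((points - np.asarray(grid_min)) / voxel_size).astype(int)
    # Discard points that fall outside the grid bounds.
    in_bounds = np.all((idx >= 0) & (idx < np.asarray(grid_shape)), axis=1)
    idx = idx[in_bounds]
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# Toy stand-in for one lidar sweep: 10,000 random points around the sensor.
points = np.random.uniform(low=-40.0, high=40.0, size=(10_000, 3))
occupancy = voxelize(points)  # shape (400, 400, 30), ready for a 3D conv net
```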

MUCH MORE THAN SELF-DRIVING CARS

It's not just self-driving cars. 3D point clouds power a variety of machine vision use cases.

While an autonomous vehicle may use a technique like SLAM to dynamically build a 3D map of its environment, providers of high-fidelity mapping data collect their own point clouds so that they can supply the ground-truth reference map. For example, providers like HERE could build 3D into their HD Localization maps.

Delivery robots are another exciting use case for lidar-based perception. The navigation concepts are very similar to automotive, but many delivery robots can more easily traverse sidewalks and coexist with pedestrians.

In a warehouse context, robots with 3D lidar sensing can shift materials around, rapidly picking up and transporting pallets to reconfigure the warehouse or optimize manufacturing workflows. Collision avoidance in the presence of human operators, along with indoor navigation and mapping, rounds out the common ways that 3D point cloud data powers cognitive robots.

Finally, unmanned aerial vehicles -- otherwise known as drones -- can use lidar to collect high-resolution 3D data, such as when "corridor mapping" a pipeline, surveying a construction site, or performing visual structural inspections.

[Image: Corridor mapping]

Empowering the Human Element

Sama has extensive project experience annotating lidar sensor data from cutting-edge instruments. While we quickly ramped projects to produce those annotations in our clients' own in-house apps, we wanted to create a tool flexible enough for the broader market.

That's why we're launching our 3D annotation tool, which supports a wide range of 3D data types from essentially all lidar sensors. We have likely worked with sensor data from your lidar vendor of choice.

There's more to it than the raw data, though. The Sama mission is to use human ingenuity to accelerate AI development, and one of our favorite tricks is to use algorithms to assist and interact with our expert human annotators. Our new 3D tool has some powerful time-saving features, such as:

  • automatic cuboid estimation to quickly capture an entire object after clicking only a single point;
  • automatic ground-plane extraction (see the sketch after this list) to make objects and their surroundings easier to tell apart;
  • synchronized camera images for 2D confirmation of the 3D points; and
  • object tracking estimation to accelerate annotation and predict movement even when objects are occluded or only partially visible.
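As promised above, here is a minimal sketch of one common approach to ground-plane extraction, RANSAC plane fitting; it illustrates the general technique, not necessarily the exact algorithm inside our tool:

```python
import numpy as np

def ransac_ground_plane(points, iters=200, threshold=0.15, seed=0):
    """Fit a dominant plane to an (N, 3) cloud with RANSAC.

    Returns (normal, d) for the plane normal·p + d = 0 and a boolean
    mask of inlier ("ground") points within `threshold` meters of it.
    """
    rng = np.random.default_rng(seed)
    best_mask, best_model = None, None
    for _ in range(iters):
        # Sample 3 points and derive the plane passing through them.
        a, b, c = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(b - a, c - a)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:  # degenerate (nearly collinear) sample; try again
            continue
        normal /= norm
        d = -normal @ a
        mask = np.abs(points @ normal + d) < threshold
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (normal, d)
    return best_model, best_mask
```

Once the ground inliers are identified, they can be dimmed or hidden so that vehicles and pedestrians stand out from the road surface.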

This approach lets us combine the skills of expert annotators, who can cleverly correlate what a camera shows with even sparse point clouds, with algorithms to accelerate the manipulation of 3D annotations in space and time. Together, we rapidly produce ground-truth annotations of the highest quality to power 3D perception.
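That 2D confirmation step, for instance, boils down to a standard pinhole projection. A minimal sketch, assuming a known 3x3 camera intrinsic matrix K and a 4x4 lidar-to-camera extrinsic transform (both hypothetical placeholders; real values come from sensor calibration):

```python
import numpy as np

def project_to_image(points_lidar, K, T_cam_from_lidar):
    """Project (N, 3) lidar-frame points into (M, 2) pixel coordinates.

    K: 3x3 camera intrinsic matrix.
    T_cam_from_lidar: 4x4 rigid transform from lidar frame to camera frame.
    Points behind the camera are dropped.
    """
    homogeneous = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    cam = (T_cam_from_lidar @ homogeneous.T).T[:, :3]  # points in camera frame
    cam = cam[cam[:, 2] > 0]                           # keep points in front of the lens
    pix = (K @ cam.T).T
    return pix[:, :2] / pix[:, 2:3]                    # perspective divide
```

Overlaying the returned pixels on the camera frame lets an annotator confirm that a sparse cluster of points really is, say, a pedestrian.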

[Image: 3D annotation]

The bottom line: our clients get to market faster with their latest 3D perception algorithms. Quality and throughput make a difference.

Experience Matters

We have run many projects through our teams, and we consistently produce the quality results made possible only by long-term, dedicated annotation experts. The Sama model allows our agents to concentrate on a single client's data, learning the nuances and unique requirements of that client's machine learning team (e.g., how precisely to enclose an object, whether to include side-view mirrors or bicycle racks, how to treat objects that pass out of view, what constitutes sufficient visual contact, and so on).

One project involved side-mounted lidar sensors that produced data rotated by 90 degrees. Rather than expecting our team to annotate with their necks tilted at an angle, our engineering team whipped up a quick "rotate camera" tool to reorient the workspace. Between continuous improvements to our Hub annotation studio, iterative refinement of project objectives and annotation goals, and experience with point cloud data from nearly every lidar sensor out there, there's a bright future in 3D annotation here at Sama.

Partnering with Sama, you can get the most from your 3D lidar projects. We're incredibly excited about this production-ready 3D annotation tool and the future of high-performance lidar. If you have 3D annotation on your mind and would like to see a demo of our annotation platform in action, drop us a line!

Author
Matthew Landry
