
Fast Vector Annotation with Machine Learning Assisted Annotation



At Sama, Vector Annotation of objects using polygons is a task that our expert annotators spend a great deal of time on. This is especially true for projects involving autonomous vehicles, where it is typical to apply instance segmentation to label scenes comprising hundreds of frames, each with multiple objects (vehicles, pedestrians, traffic signs, etc.), as shown in Figure 1.

Figure 1. Example of Polygonal Annotation in Sama.

In this post, we summarize an approach that we have developed to speed up polygonal instance segmentation using machine learning. This approach was presented earlier this year at the CVPR Workshop on Scalability in Autonomous Driving and the ICML Workshop on Human-in-the-Loop Learning.

Few-Click Annotation

Building instance segmentation Deep Learning (DL) models for autonomous vehicles requires a significant amount of labeled data. The use of Machine Learning (ML) to produce pre-annotations for review by human annotators, whether in an interactive setting or as a pre-processing step, is a very popular approach for scaling up labeling while controlling costs.

Multiple approaches have been suggested for machine-assisted instance segmentation. These typically consist of a DL-based segmentation of the object(s) integrated into a human-in-the-loop system. The human can interact with the system by correcting the model output, initializing the model with one or several clicks, or a combination of those steps. Examples of such systems include Polygon-RNN++ [1], DELSE [7], DEXTR [3], and CurveGCN [2]. These systems all report good results, but some open questions remain:

  • Do these methods perform well when production-level accuracy is required, as in a customer project?
  • Does the choice of annotation tool influence the results? The gains to be made by using ML depend on how difficult it is for humans to draw polygons in the provided UI. Here we used our optimized drawing tool for polygons, which is part of our labeling platform.
  • ML integration is not usually approached from a human-centric perspective. Beyond the optimization of traditional metrics like IoU, what interactions are most desirable and how should we present the output of the model to annotators?

Our method combines the well-known DEXTR approach [3] with a raster-to-polygon algorithm that makes the result more easily editable. This is not unlike what other tools (such as CVAT) have implemented, though we have optimized the approach for our specific use cases using A/B testing.

The Model

Our instance segmentation model is based on the well-known Deep Extreme Cut (DEXTR) approach [3], along with a raster-to-polygon conversion algorithm that yields high-quality polygons whose vertices are sampled in a way that reproduces human drawing patterns. The model uses the few clicks provided by human annotators at inference time. The steps are described in Figure 2.
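
In the original DEXTR formulation, the clicks are fed to the network as an extra input channel: each click is rendered as a 2D Gaussian in a heatmap that is concatenated with the RGB image. Below is a minimal sketch of that encoding; the function name and sigma value are illustrative, not our production code.

```python
import numpy as np

def encode_extreme_clicks(image, clicks, sigma=10.0):
    """Render annotator clicks as a Gaussian heatmap and append it
    to the RGB image as a fourth input channel (DEXTR-style).

    image:  (H, W, 3) float or uint8 RGB array
    clicks: iterable of (x, y) pixel coordinates
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    heatmap = np.zeros((h, w), dtype=np.float32)
    for cx, cy in clicks:
        # One 2D Gaussian per click; keep the pointwise maximum so
        # nearby clicks do not sum into an oversized blob.
        g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
        heatmap = np.maximum(heatmap, g)
    heatmap *= 255.0  # match the dynamic range of the image channels
    return np.concatenate([image.astype(np.float32),
                           heatmap[..., None]], axis=-1)
```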

Figure 2. An overview of the approach.

Regarding the model itself, we adopted a custom version of the U-Net [5] along with an EfficientNet backbone [6] (instead of the ResNet backbone used in the original paper [3]).

In our experience, for human annotators to produce good instance segmentation masks efficiently, a polygon annotation tool should be used. As such, we needed to convert the raster masks produced by our model into high-quality polygons. To add to the challenge, humans tend to produce sparse polygons, adding vertices only when necessary. We therefore adopted a raster-to-polygon procedure that minimizes the number of output vertices.
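
Our exact vertex-sampling procedure is not spelled out here, but a common baseline for vertex-minimizing polygonization is contour tracing followed by Douglas-Peucker simplification. A minimal sketch using OpenCV (epsilon_frac is an illustrative tolerance, not a production setting):

```python
import cv2
import numpy as np

def mask_to_sparse_polygon(mask, epsilon_frac=0.01):
    """Convert a binary raster mask into a sparse polygon.

    mask: (H, W) uint8 array, nonzero inside the object.
    epsilon_frac: Douglas-Peucker tolerance as a fraction of the
                  contour perimeter; larger values mean fewer vertices.
    Returns an (N, 2) array of (x, y) vertices.
    """
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)  # largest blob only
    perimeter = cv2.arcLength(contour, True)      # closed contour
    approx = cv2.approxPolyDP(contour, epsilon_frac * perimeter, True)
    return approx.reshape(-1, 2)
```

Tuning the tolerance trades boundary fidelity against vertex count, which mirrors the sparsity/accuracy trade-off human annotators make when drawing.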

A/B Testing the Approach

At Sama, we use A/B testing as much as possible to systematically refine and improve our new features. To this end, we have developed a flexible testing infrastructure that can ingest and aggregate data from multiple internal processes and is made available to anyone within the organization.

This framework measures the statistical impact of proposed changes on our efficiency metrics (such as drawing time or shape adjustment time). The significance of observed differences on a given efficiency metric is evaluated using statistical tests.
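
The specific tests are not named in this post; as one plausible illustration, a rank-based test such as Mann-Whitney U is a common choice for right-skewed timing data. The helper and alpha threshold below are hypothetical:

```python
import numpy as np
from scipy import stats

def compare_drawing_times(control, treatment, alpha=0.05):
    """Test whether a tool change shifts per-shape drawing time.

    control, treatment: arrays of drawing times (seconds) per shape.
    Timing data is usually right-skewed, so a rank-based test is a
    safer default than a t-test on raw seconds.
    """
    _, p_value = stats.mannwhitneyu(control, treatment,
                                    alternative="two-sided")
    speedup = np.median(control) / np.median(treatment)
    return {"p_value": p_value,
            "median_speedup": speedup,  # e.g. 3.0 => ~3x faster
            "significant": p_value < alpha}
```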

Toy A/B Tests

We conducted an A/B test of the method using a synthetic automotive dataset called SYNTHIA-AL [8]. The dataset's images and corresponding annotations were generated from video streams at 25 frames per second (FPS). Figure 3 shows SYNTHIA image examples, along with their segmentation (done manually and with the Few-Click tool).

Figure 3. SYNTHIA example images, along with their manual and semi-automated annotations.

The test, applied only to motor vehicles, reproduced realistic annotation guidelines, namely:

  • The drawn polygon needs to be within 2 pixels of the edge of the vehicle (a sketch of an automated check for this tolerance appears after the list).
  • All vehicles down to 10 pixels (height or width) need to be annotated.
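
As an illustration only (this helper is hypothetical, not part of the published test), the 2-pixel tolerance could be checked automatically by measuring how far the drawn boundary strays from the true boundary:

```python
import numpy as np
from scipy import ndimage

def max_boundary_deviation(drawn_mask, true_mask):
    """Largest distance, in pixels, from the drawn shape's boundary to
    the true object boundary; the 2-pixel rule passes if this is <= 2."""
    def boundary(m):
        m = m.astype(bool)
        # Boundary pixels = mask minus its one-pixel erosion.
        return m & ~ndimage.binary_erosion(m)
    true_b, drawn_b = boundary(true_mask), boundary(drawn_mask)
    # Distance from every pixel to the nearest true-boundary pixel.
    dist_to_true = ndimage.distance_transform_edt(~true_b)
    return float(dist_to_true[drawn_b].max())
```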

Following this test, we found a nearly 3-fold reduction in annotation time. On the other hand, we also found that on some of the more complex shapes, annotators were spending quite some time manually adjusting the ML output. DEXTR's authors originally showed that the segmentation can be improved with additional clicks beyond the four initial ones [3]. We therefore extended our few-click tool to allow online refinement of the polygons by treating modifications to their vertices as extra clicks. At train time, we simulated the corrective clicks by taking the point of greatest deviation between the predicted mask and the ground truth, as illustrated in Figure 4 (a sketch of this simulation follows the summary below). In short:

  • Problem: DEXTR’s 4 extreme clicks are not always sufficient.
  • Observation: DEXTR trained on 4 clicks benefits from additional clicks.
  • Solution: Fine-tune the DEXTR model with additional clicks on hard samples, as identified by IoU at train time.
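
One plausible reading of "point of greatest deviation" is the most interior pixel of the region where prediction and ground truth disagree; a minimal sketch under that assumption (the helper name is illustrative):

```python
import numpy as np
from scipy import ndimage

def simulate_corrective_click(pred_mask, gt_mask):
    """Place the simulated corrective click at the most interior point
    of the disagreement region between prediction and ground truth."""
    error = pred_mask.astype(bool) ^ gt_mask.astype(bool)
    if not error.any():
        return None  # prediction already matches; no click needed
    # Distance of each error pixel to the nearest correct pixel; the
    # maximum lies at the "deepest" point of the largest error blob.
    depth = ndimage.distance_transform_edt(error)
    y, x = np.unravel_index(np.argmax(depth), depth.shape)
    return (x, y)
```

Picking the deepest point rather than, say, a random error pixel keeps the simulated click away from boundary noise, where a human would be unlikely to click.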

Figure 4. Integrating additional clicks in the training process.

Using this method, annotators are able to re-trigger the model inference with an additional click, instead of manually adjusting the output. We proceeded to a second toy A/B test, and results showed that we could obtain a theoretical efficiency gain of up to 3.5x on vehicles using the improved method.

References

  1. Acuna, D., Ling, H., Kar, A., and Fidler, S. Efficient annotation of segmentation datasets with polygon-rnn++. In CVPR, 2018.
  2. Ling, H., Gao, J., Kar, A., Chen, W., and Fidler, S. Fast interactive object annotation with curve-gcn. In CVPR, 2019.
  3. Maninis, K.-K., Caelles, S., Pont-Tuset, J., and Van Gool, L. Deep extreme cut: From extreme points to object segmentation. In CVPR, 2018.
  4. Papadopoulos, D., Uijlings, J., Keller, F., and Ferrari, V. Extreme clicking for efficient object annotation. In ICCV, 2017.
  5. Ronneberger, O., Fischer, P., and Brox, T. U-net: Convolutional networks for biomedical image segmentation. CoRR, abs/1505.04597, 2015. URL http://arxiv.org/abs/1505.04597.
  6. Tan, M. and Le, Q. V. Efficientnet: Rethinking model scaling for convolutional neural networks. CoRR, abs/1905.11946, 2019. URL http://arxiv.org/abs/1905.11946.
  7. Wang, Z., Acuna, D., Ling, H., Kar, A., and Fidler, S. Object instance annotation with deep extreme level set evolution. In CVPR, 2019.
  8. Zolfaghari Bengar, J., Gonzalez-Garcia, A., Villalonga, G., Raducanu, B., Aghdam, H. H., Mozerov, M., Lopez, A. M., and van de Weijer, J. Temporal coherence for active learning in videos. arXiv preprint arXiv:1908.11757, 2019.

This post was written by Frederic Ratle and Martine Bertrand.
