This week, Sama visited the Auto.AI event, which bills itself as the platform bringing together the stakeholders who play an active role in the deep driving, computer vision, sensor fusion, perception and cognitive vehicles scene.
Or to put it another way: anyone who is anyone in the greater ecosystem for self-driving cars.
In attendance were leading OEMs (Original Equipment Manufacturers) like GM, Toyota, Hyundai, Ford, and Mercedes-Benz. Also present were many hardware technology companies developing sensors and cameras, as well as myriad providers of technology that makes the vast data output from those sensors useful. (ICYMI: we are in the latter category.)
A Keen Interest in Data Annotation
We spoke to many attendees from both OEMs and sensor tech companies who were curious about our offerings in the annotation space. They asked how we annotate data sets, what kinds of data we can work with, and what makes Sama stand out from other options.
Our differentiators in brief: attendees valued our skilled agents, whom we can train directly during pilot projects for immediate and consistent quality output; those agents are on staff and work in Sama-operated offices with structured supervision and project management; and our ISO 9001 certification was also much appreciated.
And the fact that we have a measurable social impact mission adds value for our customers' CSR departments. (Stay tuned for upcoming blogs where we go into a little more detail on these aspects.)
Autonomous Driving (AD) Tech Trends
It was notable that there were several speakers talking about the need for car/passenger interaction design. Some examples:
- Acknowledgement of the importance of the vehicle's ability to communicate to the passenger which objects the car notices and how they influence its self-driving decisions.
- Exposing these decision hints visually will help people to gain trust in the AI and understand how it works. There is an intrinsic problem where it’s really hard to explain why a neural network makes a given decision — but this could help.
- The awareness that autonomous vehicles need to take suggestions from the passenger on Spotify stations or travel route preferences, keep track of who is in the car, and have an internal sense of the state of the car (spilled coffee, forgotten phones).
- Many examples of research on reading facial cues and on interpreting what human commands or questions really mean.
What this tells me is that the technology is reaching a more mature state, where we can put effort into these secondary considerations that are nonetheless necessary to making the technology viable and successful in the mainstream.
The conference ended with a demo from GM of a L2 autonomous system – currently available at a Caddy dealership near you. While not fully autonomous, the vehicle can take control on the highway and proactively vibrate seats and flash lights to keep drivers alert and engaged.
They also showed demo videos of their L4 autonomous Chevy Bolts fielded by the Cruise Automation subsidiary operating in San Francisco. Though there were passengers in the car who could take control if necessary, it was a compelling proof of concept in a denser, more dangerous environment with bicyclists and pedestrian traffic. GM/Cruise hasn't officially announced when their automated ride hailing service will launch to the public, but it's already operating in beta for San Francisco employees! In short, GM is demonstrating pretty rapid development and growth.
[Figure: An overview of some of the players, shared at the conference. GM leads the pack. Source: Navigant Research]
A company called Renovo Motors shared their vision of an OS for self-driving taxis, tying together a coalition of technologies around the projected $1 trillion "robotaxi" marketplace. They make bold, exciting predictions: by 2030, 95% of all urban miles will be driven by autonomous taxis.
A trend worth watching – “multitasking networks.” These autonomous driving algorithms are able to run different perception routines in parallel and switch between them as desired for changing environments or scenarios. For example, you might have one algorithm that’s controlling the car while you’re in clear daylight conditions. A little helper algorithm detects that you’ve driven into a fog bank (as I do every morning when crossing the Golden Gate Bridge!), and the car may switch to a different control algorithm optimized for fog navigation.
Combining this multitasking with interest in “sensor fusion” definitely shows that there’s a need for multimodal sensor data and more sophisticated algorithms to operate in transient situations.
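To make the switching idea concrete, here is a toy sketch of a condition-based perception dispatcher. Everything here is illustrative: the model functions, the visibility estimate, and the threshold are all hypothetical stand-ins, not any vendor's actual implementation.

```python
# Hypothetical stand-ins for trained perception models; a real system would
# run neural networks here, not hard-coded dictionaries.
def clear_weather_model(frame):
    return {"model": "clear", "objects": ["car", "pedestrian", "cyclist"]}

def fog_model(frame):
    # A fog-optimized routine might lean more on radar/lidar than cameras.
    return {"model": "fog", "objects": ["car"]}

def estimate_visibility(frame):
    # Placeholder: a real "helper" network would derive visibility from
    # camera contrast statistics or lidar return quality.
    return frame.get("visibility_m", 1000)

def select_perception(frame, fog_threshold_m=150):
    """Switch between perception routines based on estimated visibility."""
    if estimate_visibility(frame) < fog_threshold_m:
        return fog_model(frame)
    return clear_weather_model(frame)

print(select_perception({"visibility_m": 80})["model"])   # fog
print(select_perception({"visibility_m": 900})["model"])  # clear
```

The key design point is that the dispatcher itself stays simple; the sophistication lives in the specialized routines it switches between, which is what lets each one be trained and validated for a narrower operating domain.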
Sama Commitment to Best Annotation Technology
We continuously track the data types that matter to our customers. There is enormous demand for the 2D image and video annotation we already support, and we also heard a lot of interest in 3D Lidar technology. Look for future reports from the field on this blog.
As head of product for Sama, I can tell you we are always looking at how we can incorporate emerging data sources into our platform to provide rapid, high-quality annotation. If there's one thing the Sama journey has shown, it's that the benefit of a skilled workforce is its flexibility and adaptability to varied tasks with uncompromising quality.
I think that puts us in a very good position as new sensor data becomes more important for more sophisticated algorithms. I feel very confident we’re going to be able to build those workflows into our platform and train up our workforce to deliver.
If you'd like to learn more about our capabilities and approaches to supporting your data needs in this area, we urge you to download our solution brief about our Annotated Data Sets. We look forward to joining you in this adventure unfolding on the roads of the world.