The last few years have increased pressure on grocers, supermarket chains, and convenience stores to improve customer experiences like never before. Despite supply chain bottlenecks and rising supermarket prices, more than a third of respondents to a recent survey said they are buying more groceries than before the pandemic.
More shopping doesn’t need to lead to longer lines. Many retailers have implemented self-checkout lines to help shoppers get their errands done more quickly. And for retailers at the forefront of innovation, an even more friction-free option is delighting customers across the globe.
Grab-and-go smart checkout enables shoppers to completely skip the line. Rather than relying on traditional barcode scanners, retailers are leveraging smart cameras to capture and identify tens of thousands of grocery items in their inventory.
Here’s the rub: if retailers want smart checkout to see higher adoption rates than self-checkout, the process needs to happen seamlessly. And for it to happen seamlessly, data labeling efforts need to be on point.
Data labeling challenges of cashier-free checkout
In stores with smart checkout, it’s difficult to teach computer vision algorithms to identify the products shoppers briskly collect and stow in their bags before walking out the door.
For starters, algorithms need to be able to identify a high volume of products, ranging from apple sauce to zucchini. Then think of the broad spectrum of angles, lighting quality, and even the speed at which shoppers take items and drop them in their bag, basket, cart, or even their pockets.
Then, throw in unexpected impediments. For example, what if someone is wearing dark gloves, or is obscuring a label with a shopping list? What happens if a shopper walks behind a product being captured, or if poor lighting shrouds a product? What if product packaging changes, or if it’s flipped upside down or backwards?
In the world of AI-driven computer vision, these unexpected situations, which occur outside of normal parameters and operating conditions, are known as edge cases.
Need to lighten your edge caseload?
Computer vision algorithms work best with expertly labeled and annotated training data, including images of the products in your retail inventory at a variety of angles, lighting conditions, and simulated occlusions.
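Simulated occlusions and lighting variation can be generated from existing product images rather than captured in-store. The sketch below is a minimal, hypothetical illustration of that idea using only the standard library: it treats an image as a grid of grayscale pixel values, applies a random brightness shift (lighting variation), and blanks out a rectangular patch (an occluding hand or glove). Real pipelines would use a proper augmentation library instead.

```python
import random

def augment(image, seed=0):
    """Return a copy of a grayscale image (list of rows of 0-255 ints)
    with a random brightness shift and a rectangular occlusion patch,
    mimicking lighting changes and a hand or glove covering the label."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])

    # Simulated lighting change: shift every pixel, clamped to 0-255.
    shift = rng.randint(-40, 40)
    out = [[max(0, min(255, px + shift)) for px in row] for row in image]

    # Simulated occlusion: black out a quarter-sized rectangle.
    oh, ow = max(1, h // 4), max(1, w // 4)
    top, left = rng.randrange(h - oh + 1), rng.randrange(w - ow + 1)
    for r in range(top, top + oh):
        for c in range(left, left + ow):
            out[r][c] = 0
    return out

sample = [[128] * 8 for _ in range(8)]   # hypothetical 8x8 product crop
augmented = augment(sample)
```

Varying the seed yields many training variants from a single source image, which is one way teams widen coverage of angles, lighting, and occlusion scenarios before collecting more in-store footage.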
Data labeling and annotation can help convenience stores, supermarkets, grocery chains and other retailers to create a more seamless shop and go experience. To recognize products in different kinds of lighting, positioning, and clarity scenarios, computer vision needs:
- A large, diverse set of training data to draw from
- Skilled annotators and human validators to help identify and resolve edge cases
- An iterative labeling process — with tight feedback loops between annotators and machine learning engineers — to uncover and quickly address edge cases
- A sound labeling strategy that starts by labeling best-selling items first
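The last point can be expressed as a simple ordering rule. The snippet below is a hedged sketch with made-up SKU names and sales counts; in practice the counts would come from point-of-sale data, but the prioritization logic is just a sort.

```python
# Hypothetical sales counts per SKU; real numbers come from POS data.
sales = {"apple-sauce": 1200, "zucchini": 300, "oat-milk": 2400, "granola": 950}

# Label best sellers first: they appear most often in checkout footage,
# so annotating them yields the largest accuracy gain per labeled image.
label_queue = sorted(sales, key=sales.get, reverse=True)
```

Here `label_queue` comes out as `["oat-milk", "apple-sauce", "granola", "zucchini"]`, so annotators work through the items that shoppers actually pick up most.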
Data curation tools and techniques help you prioritize annotation work so you see promising results sooner.
To ensure your training data repository can meet your real-world requirements, it helps to brainstorm with your store merchandising and operations teams. Don’t stress over trying to identify every conceivable edge case — instead, put in place the processes and teams that can help you catch and resolve them quickly.
Overcome obstacles to smart checkout
Digital transformations like “grab and go” shopping arrangements are helping many retailers to minimize lines, and even get rid of them completely. Accurate data labeling ensures you are selling and being paid for every product that leaves your store.
If your company doesn’t have the technology, expertise, or person power to create, test, scale, and label your training data or product data repository, Sama can help.