Let’s start at the beginning. What are automated quality accelerators, and why should we care? Automated quality accelerators are technology innovations focused on reducing the amount of manual time spent in QA processes. They may be used to expedite annotator education, generate automated measures of annotation quality, and prevent logical inconsistencies. Our accelerators are integrated into our Sama training data platform and can be customized for unique use cases and needs by our dedicated Customer Success Engineering team.
While it’s key to have a human in the loop when creating and verifying training data, automating processes within the workflow improves efficiency while maintaining high quality, saving everyone time and money. Quality accelerators also focus the effort of Sama’s annotators on the most challenging aspects of the task, minimizing the volume of manual rework they need to do and catching mistakes early in the process, equipping them to do their job well. Ultimately, automated quality accelerators enable us to deliver very high-quality data for complex use cases in the most efficient manner.
Automated Quality Accelerators at Sama
Automated Logic Checks:
Automated logic checks are triggered before a task is submitted on Sama by our annotators. Each task is assessed against a fixed set of rules that check for invalid combinations of labels within a shape, invalid combinations across all shapes, and dependencies with the metadata. These rules are flexible and customized to each workflow, focusing on all error types that don’t need human judgement. If an error is found, the annotator needs to fix the task before it can be submitted. To help the annotator fix the task and learn from the mistake, a message is displayed containing shape-specific error tags.
Auto logic checks are optimized for different kinds of errors, including but not limited to:
- Invalid answer combinations: Combinations that the ontology prevents, for example, more than two wheels being tagged on an item labeled “bicycle.”
- Uniqueness or preventing repetitions: More than one object in an image being assigned the same unique identifier, or the same label when the ontology prohibits that. For example, two noses in a single-person keypoints workflow.
- Size requirements: Ensuring that size specifications are met, for example, guaranteeing a constant cuboid size in a 3D workflow or enforcing a minimum pixel rule.
- Relational checks: Ensuring that selected attribute and sub-attribute values aren’t contradictory, for example, that a bicyclist polygon isn’t grouped with or attached to a car bounding box.
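To make the four rule types concrete, here is a minimal, illustrative sketch of how such pre-submission checks might be expressed in Python. This is not Sama’s actual platform code; the `Shape` structure, rule parameters, and error tags are all hypothetical, chosen only to mirror the examples above:

```python
# Illustrative sketch of rule-based logic checks (hypothetical, not Sama's API).
from dataclasses import dataclass, field

@dataclass
class Shape:
    shape_id: int
    label: str
    bbox: tuple                      # (x, y, width, height) in pixels
    attributes: dict = field(default_factory=dict)

def check_task(shapes, min_pixels=10):
    """Run a fixed set of logic rules over all shapes in a task.

    Returns a list of (shape_id, error_tag) pairs; an empty list means
    the task passes and could be submitted.
    """
    errors = []
    by_id = {s.shape_id: s for s in shapes}

    for s in shapes:
        # Invalid answer combination: a "bicycle" cannot have >2 wheels.
        if s.label == "bicycle" and s.attributes.get("wheel_count", 0) > 2:
            errors.append((s.shape_id, "invalid_combination: bicycle with >2 wheels"))

        # Size requirement: enforce a minimum pixel dimension.
        _, _, w, h = s.bbox
        if w < min_pixels or h < min_pixels:
            errors.append((s.shape_id, f"size: below {min_pixels}px minimum"))

        # Relational check: a bicyclist must not be attached to a car box.
        parent = by_id.get(s.attributes.get("attached_to"))
        if s.label == "bicyclist" and parent is not None and parent.label == "car":
            errors.append((s.shape_id, "relational: bicyclist attached to car"))

    # Uniqueness: at most one "nose" in a single-person keypoints task.
    noses = [s for s in shapes if s.label == "nose"]
    if len(noses) > 1:
        for s in noses:
            errors.append((s.shape_id, "uniqueness: duplicate 'nose' label"))

    return errors
```

In this sketch, a non-empty result would block submission and the error tags would be surfaced to the annotator as the shape-specific feedback message described above.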
These checks are incredibly effective for the following reasons:
- They create an instant feedback loop that prevents logical inconsistencies, helping annotators improve and get it right the first time
- They prevent errors that may be impossible to detect in a manual QA review process
- They reduce time spent by manual QA reviewers, allowing them to focus on more critical errors or sample more tasks
- They help annotators adapt quickly to changing project instructions, ensuring that new instructions are understood and followed correctly
- Lastly, they improve overall TPT and reduce the time from task creation to delivery
Under a strictly manual process, highly skilled quality assurance annotators would need to review every task and provide feedback by hand. While manual review is still a vital part of our human-in-the-loop data annotation, auto logic checks free up that expertise to focus on more subjective errors and edge cases.
Our enterprise-level clients are successfully using Sama Quality Accelerators to realize extremely high data quality for their most complex use cases, improving overall model performance. Now is the time to supercharge your data quality.