Navigating AI Liability Compliance – how Sama helps

This article is the first in a series on how Sama uniquely equips its clients with a data labeling process that complies with the emerging body of AI regulation, helping them avoid liability risks and costly rework.

Much ink has been spilled over the EU AI Act, first proposed in 2021, and the new US Blueprint for an AI Bill of Rights is moving in the same direction. How Sama can help customers prepare for the conformity assessments it requires (known as impact assessments under some national laws) will be the subject of a separate article.

Less talked about is the recently proposed AI Liability Directive, which would create a higher standard for AI companies: those creating, importing, and deploying AI technology must do so responsibly and remain accountable for the systems they deploy. The proposed directive will have a profound impact on how companies carry out and source every phase of their AI development process.

How the directive could affect your organization

Most critically, the proposed directive shifts the burden of proof when someone suffers damage because of an AI system (e.g., if someone were hit by an autonomous drone). In essence, a presumption of causality releases claimants from the burden of showing how the damaging output was caused by a specific act or omission of an AI provider or user. It is then up to the AI provider to prove that it acted correctly in order to avoid liability.

The AI provider will have to show that each phase of the AI development process was carried out correctly. For data labeling, this means demonstrating that labelers were well trained and that proper quality assurance was performed on an ongoing basis. Crowd labelers are of variable quality: it often cannot be shown that they were properly trained, nor validated that instructions were followed and QA was properly carried out.

How Sama helps with compliance

Sama, on the other hand, can provide all of the material necessary to prove that your data labeling was carried out properly, including:

  • Named, quality, experienced, and highly-trained staff as well as training records
  • Benchmarking and advice on annotation instructions to make sure that your annotations are useful, not just accurate
  • Agents who can escalate spotted issues and edge cases in ground-level data or annotations
  • Industry-leading SLAs on annotation quality
  • Experienced, ongoing quality assurance and quality assurance reports
  • Through our network of partners, data visualization and selection support to make sure you’re annotating the right data

The proposed directive also creates a right for a party to request information on a high-risk AI system from its provider – information the provider is required to document and store. Sama is a trusted partner for this type of documentation and process, and the PR cost of such disclosures matters as well. For a company already embroiled in litigation, the choice is between being associated with crowd platforms that can pay below minimum wage, or with Sama, which has helped lift tens of thousands of people out of poverty.

Do we need to be concerned now?

Companies should also know that even though these directives won't come into force for a few years and will likely apply only to damages that occur after that point, development activities that occurred before the guidelines take effect will still be held to the standards they set. This means that companies that don't start living up to this standard now will either have to redo work or take on significant liability risks later.
