This article is the first in a series on how Sama uniquely equips its clients with a data labeling process that complies with the emerging body of AI regulation, ultimately helping them avoid liability risks and costly rework. Much ink has been spilled over the EU AI Regulation released in 2021, and the new US AI Bill of Rights is moving in the same direction. How Sama can help customers prepare for conformity assessments (also known as impact assessments under some national laws) will be the subject of a separate article.
Less talked about are the recently released Liability Guidelines, which will create a higher standard for AI companies, ensuring that companies creating, importing, and deploying AI technology do so responsibly and remain accountable for the systems they deploy. The proposed directive will have a profound impact on how companies carry out, or source, every phase of their AI development process.
Most critically, the proposed directive shifts the burden of proof when someone suffers damage because of an AI system (e.g., if someone were hit by an autonomous drone). In essence, this presumption of causality releases claimants from the burden of showing how the damaging output was caused by a specific act or omission of an AI provider or user. It is then up to the AI software provider to prove that it acted correctly in order to avoid liability. The AI provider will have to show that each phase of the AI development process was carried out correctly. For data labeling, this means demonstrating that labelers were well trained and that proper quality assurance was carried out on an ongoing basis. With crowd labelers, quality is variable: it cannot be shown that they were properly trained, nor validated that instructions and QA were properly followed.
Sama, on the other hand, can provide all of the material necessary to prove that your data labeling was carried out properly, including:
The proposed directive also creates a right for a party to request information on a high-risk AI system from the provider, which the provider is required to document and store. Not only is Sama a trusted partner for this type of documentation and process, but the PR cost of such disclosures matters as well: a company already embroiled in litigation will either be associated with crowd platforms that can pay below minimum wage, or with Sama, which has helped lift tens of thousands of people out of poverty.
Companies should also know that even though these directives won't come into force for a few years and will likely apply only to damages that occur after that point, development activities that occurred before the guidelines came into force will still be held to the standards they set. This means that companies that don't start meeting this standard now will either have to redo work or take on significant liability risks later.