The European Union (EU) is leading the way in terms of Artificial Intelligence (AI) legislation. We strongly support the EU approach and believe that it can have a positive impact on the responsible and ethical development of AI. We encourage the EU to pass legislation now so that, like the EU’s previous GDPR legislation, it moves other countries to act and sets the baseline for global compliance.
However, as the legislation moves towards adoption, we believe it should be broader and cover the entire AI supply chain, from data acquisition to workers’ rights. The legislation should also include vetting and inspection processes for procurement by all entities based in the EU.
Why does the EU AI Act matter?
Millions of workers are directly impacted by the production of AI models. The treatment of these workers is currently unregulated, allowing companies to take advantage of this legislative gap. Sama has pioneered and advocated for an ethical AI supply chain since 2008 and has partnered with research and non-profit organizations like the Everest Group and the Clinton Foundation to showcase the benefits of impact sourcing in the AI supply chain. Our particular focus has been on the under-recognized workers in data enrichment, who are a vital part of the AI supply chain but whose rights and wellbeing are often ignored.
The EU has a template for how to address these issues with the significant steps it has taken to combat other global concerns, such as forced labor and ensuring responsible business practices within global supply chains.
For example, the EU’s Regulation on the responsible sourcing of minerals originating from conflict-affected and high-risk areas, its proposed regulation on products made with forced labor, and Germany’s Supply Chain Act each create a strong precedent for protecting human rights within global supply chains. AI regulation can also draw on existing AI-specific work, such as that done by the Partnership on AI or the OECD principles, to address industry-specific issues.
Every time the EU regulates an industry or a process, it has a ripple effect on the countries supplying the raw materials used to build products produced or bought in the EU. Data labeling is one part of the AI supply chain that should be regulated to ensure that EU human rights and labor standards become common practice worldwide.
What is missing in the law?
The legislation is still being studied before its final passage, and there are several opportunities to make it better. Here are our key ideas:
Idea 1: Discourage self-regulation of the AI industry. It doesn’t work in other industries and has not worked in AI.
The compromise text encourages the EU AI Act to rely on industry standards. This amounts to self-regulation, in which a select group of companies writes the very standards the regulations will refer to. Taking this approach will lead to weak regulation and a negative impact on millions of workers. The law must instead enshrine the core human rights and labor principles that already exist in other legislation. Where standards are useful, the regulation should set minimum requirements for those standards.
Idea 2: Encourage open development that tracks the AI supply chain from the raw materials (data) to the finished product (AI model).
General Purpose models will impact every aspect of our lives. We need transparency into the data used to build them, how they are built, and how they are deployed (and retrained). Currently, the EU law requires disclosure of copyrighted training data only. To allow users and others affected by a General Purpose system to evaluate how they may be affected by it, the disclosure requirements should cover data use more broadly (for instance by providing datasheet-type information), as well as information on how that data was annotated and who did that work.
In the interest of quality, transparency and most importantly respect for Human Rights, the information requirements should include information on data enrichment, specifically how data was collected and/or labeled, a general description of labeling instructions and whether it was done using identifiable employees or contractors, and standards for such employment.
Idea 3: Formalize the audit process for AI to ensure human oversight creates a positive impact.
The inclusion of Human Oversight requirements is a positive development. A formal auditing process should be added to ensure that industry standards are created for the reporting and accountability of AI development throughout the AI supply chain. Every company in the world should be held to the same standard across its development process.
Final Takeaway: Pass the EU AI Act Now!
The EU AI Act includes principles-based prohibitions, such as the ban on real-time surveillance, that align with Sama’s own prohibitions. It also offers a level of detail about how the act will be applied and what the regulations will specify that was lacking in other similar bills, and it should be applauded for that. This robust approach comes very close to plugging the major gaps we see in emerging regulation, and other jurisdictions should follow suit.
As stated in our previous post on the Canadian Artificial Intelligence bill, we believe that AI can be a force for good in the world with appropriate safeguards guiding its responsible and ethical development. Like the regulatory inclusion of seatbelts and windshield wipers in cars, regulators need to embrace AI while ensuring individuals and communities are safe.
As with GDPR, we hope that the EU will set the standard for what global compliance looks like.