As international, federal, and state governments develop policies and grapple with balancing consumer protection and tech innovation, enterprise companies still need to conduct business. And whether you’re developing AI or considering how to leverage AI tools internally, you have to understand how to proceed. That’s why we hosted Andrew Freedman, Chief Strategy Officer at Fathom, with our CEO Wendy Gonzalez to explore the critical aspects companies need to consider around AI implementation. This timely discussion examined how forward-thinking businesses can navigate responsible AI adoption and maintain stakeholder trust, even while regulations are in flux.
Watch the full interview, and read our key takeaways below.
Current AI regulatory discussions tend to focus on either slowing down development with heavy regulation or taking a laissez-faire approach and waiting for problems to emerge. Andrew Freedman, Chief Strategy Officer at Fathom.org, proposes a "third way": multi-stakeholder regulatory organizations (MROs), which would provide a forum for developing adaptive standards that evolve with the technology.
Responsible AI is crucial for building consumer trust. Regardless of government mandates, businesses need trustworthy AI for adoption to succeed. As Wendy Gonzalez puts it, "At the end of the day, you're not going to use AI if it's not trustworthy."
There is also high public demand for responsible practices. Polling data from Fathom indicates 81% of Californians want human oversight in AI validation. Think of it like having ingredient labels on food: consumers want to know what they're using and how it’s made.
According to Andrew Freedman, standards should vary with the potential for harm (e.g., higher standards for self-driving cars than for airline ticket pricing algorithms). This risk-based approach also accommodates businesses of different sizes, with potentially different standards for startups versus large corporations, making compliance achievable for companies with limited resources.
What can your business do while regulations fluctuate and eventually settle?
One way is to actively engage in shaping AI governance rather than waiting for regulations to be imposed. As Freedman notes, policymakers are eager to hear from enterprise companies, yet these businesses are often underrepresented in policy discussions.
Companies can participate in multi-stakeholder forums, provide input on workable standards, and help develop frameworks that balance innovation with responsibility. This proactive approach helps ensure that eventual standards will be practical for implementation and aligned with both business needs and societal expectations.
But even if your primary concern is hitting your numbers, your first priority should still be a commitment to responsible AI development. No matter how regulations change or evolve, adhering to safe and ethical AI practices is the sustainable business choice, and one that will earn brand trust and loyalty.
Sama helps companies around the globe meet AI development goals. Talk to an expert today.