Ethics in AI is considered vital to the healthy development of all AI technologies, but this is easier said than done. In this episode of How AI Happens, we speak to Maria Luciana Axente to help us unpack this essential topic. Maria is a seasoned AI policy expert, public speaker, and executive and has a respected track record of working with companies whose foundation is in technology.
She combines her love for technology with her passion for creating positive change to help companies build and deploy responsible AI. Maria works at PwC, where her work focuses on the operationalization of AI and data across the firm. She also plays a vital role in advising governments, regulators, policymakers, civil society, and research institutions on ethically aligned AI public policy. In our conversation, we talk about the importance of building responsible and ethical AI while leveraging technology to build a better society. We learn why companies need to create a culture of ethics for building AI, what types of values encompass responsible technology, the role of diversity and inclusion, the challenges that companies face, and who is responsible for addressing them. We also learn about some basic steps your organization can take and hear about helpful resources available to guide companies and developers through the process.
Tweetables:

"How we have proceeded so far, via Silicon Valley, 'move fast and break things.' It has to stop because we are in a time when if we continue in the same way, we're going to generate more negative impacts than positive impacts." — @maria_axente

"You need to build a culture that goes above and beyond technology itself." — @maria_axente

"Values are contextual driven. So, each organization will have their own set of values. When I say organization, I mean both those who build AI and those who use AI." — @maria_axente

"You have to be able to create a culture of a dialogue where every opinion is being listened to, and not just being listened to, but is being considered." — @maria_axente

"AI doesn't have a technical problem. AI has a human problem." — @maria_axente

Links Mentioned in Today's Episode:

Maria Luciana Axente on LinkedIn
Maria Luciana Axente on Twitter
PwC UK
PwC responsible AI toolkit
Sama