She combines her love for technology with her passion for creating positive change to help companies build and deploy responsible AI. Maria works at PwC, where her work focuses on the operationalization of AI and data across the firm. She also plays a vital role in advising governments, regulators, policymakers, civil society, and research institutions on ethically aligned AI public policy. In our conversation, we talk about the importance of building responsible and ethical AI while leveraging technology to build a better society. We learn why companies need to create a culture of ethics for building AI, what values responsible technology encompasses, the role of diversity and inclusion, the challenges companies face, and whose responsibility it is. We also learn about some basic steps your organization can take and hear about helpful resources available to guide companies and developers through the process.
Key Points From This Episode:
- Maria’s professional career journey and her involvement in various AI organizations.
- The motivation that drives AI and machine learning professionals in their careers.
- How to create and foster a system that instills people with positivity.
- Examples of companies that have successfully fostered a positive and ethical culture.
- What constitutes good values for building responsible and ethical technology.
- The values that PwC's responsible AI toolkit prescribes.
- Some of the challenges faced when building responsible and ethical technology.
- An outline of the questions practitioners can ask to ensure they operate by universal ethics.
- Helpful resources concerning ethical guidelines for AI.
- Why diversity and inclusion are essential to building technology.
- Whose responsibility it should be to ensure the ethical and inclusive development of AI.
- Maria's takeaway message for listeners as we wrap up the episode.
“How we have proceeded so far, via Silicon Valley, ‘move fast and break things.’ It has to stop because we are in a time when if we continue in the same way, we’re going to generate more negative impacts than positive impacts.” — @maria_axente [0:10:19]
“You need to build a culture that goes above and beyond technology itself.” — @maria_axente [0:12:05]
“Values are contextual driven. So, each organization will have their own set of values. When I say organization, I mean both those who build AI and those who use AI.” — @maria_axente [0:16:39]
“You have to be able to create a culture of a dialogue where every opinion is being listened to, and not just being listened to, but is being considered.” — @maria_axente [0:29:34]
“AI doesn’t have a technical problem. AI has a human problem.” — @maria_axente [0:32:34]
Links Mentioned in Today’s Episode:
Maria Luciana Axente on LinkedIn
Maria Luciana Axente on Twitter
PwC responsible AI toolkit