Podcast
Arize Founding Engineer Tsion Behailu

EPISODE SUMMARY

Tsion discusses current challenges in ML observability, and explains how Arize’s Bias Tracing Tool was developed to help companies root out bias in their models.

EPISODE NOTES

Arize and its founding engineer, Tsion Behailu, are leaders in the machine learning observability space. After spending a few years as a computer scientist at Google, Tsion's curiosity drew her to the startup world, where she has been building cutting-edge technology since the beginning of the pandemic. Rather than requiring ML teams to troubleshoot manually (as many companies still do to this day), Arize AI's technology helps them detect issues, understand why they happen, and improve overall model performance. During this episode, Tsion explains why this approach is so advantageous, what she loves about working in the machine learning field, the issue of bias in ML models (and what Arize AI is doing to help mitigate it), and more!

Key Points From This Episode:

  • Tsion’s career transition from computer science (CS) into the machine learning (ML) space.
  • What motivated Tsion to move from Google to the startup world.
  • The mission of Arize AI.
  • Tsion explains what ML observability is.
  • Examples of the Arize AI tools and the problems that they solve for customers.
  • What the troubleshooting process looks like in the absence of Arize AI.
  • The problem with in-house solutions.
  • Exploring the issue of bias in ML models.
  • How Arize AI’s bias tracing tool works.
  • Tsion’s thoughts on what is most responsible for bias in ML models and how to combat these problems.

Tweetables:

“We focus on machine learning observability. We’re helping ML teams detect issues, troubleshoot why they happen, and just improve overall model performance.” — Tsion Behailu [0:06:26]

“Models can be biased, just because they’re built on biased data. Even data scientists, ML engineers who build these models have no standardized ways to know if they’re perpetuating bias. So more and more of our decisions get automated, and we let software make them. We really do allow software to perpetuate real world bias issues.” — Tsion Behailu [0:12:36]

“The bias tracing tool that we have is to help data scientists and machine learning teams just monitor and take action on model fairness metrics.” — Tsion Behailu [0:13:55]

Related Resources

Credo AI Founder & CEO Navrina Singh
