November 11, 2022

Arize Founding Engineer Tsion Behailu


EPISODE SUMMARY

Tsion discusses current challenges in ML observability, and explains how Arize's Bias Tracing Tool was developed to help companies root out bias in their models.

EPISODE NOTES

Arize and its founding engineer, Tsion Behailu, are leaders in the machine learning observability space. After spending a few years working as a computer scientist at Google, Tsion's curiosity drew her to the startup world, where she has been building cutting-edge technology since the beginning of the pandemic. Rather than leaving teams to do it all manually (as many companies still do to this day), Arize AI's technology helps machine learning teams detect issues, understand why they happen, and improve overall model performance. During this episode, Tsion explains why this approach is so advantageous, what she loves about working in the machine learning field, the issue of bias in machine learning models (and what Arize AI is doing to help mitigate it), and more!

Key Points From This Episode:

  • Tsion’s career transition from computer science (CS) into the machine learning (ML) space.
  • What motivated Tsion to move from Google to the startup world.
  • The mission of Arize AI.
  • Tsion explains what ML observability is.
  • Examples of the Arize AI tools and the problems that they solve for customers.
  • What the troubleshooting process looks like in the absence of Arize AI.
  • The problem with in-house solutions.
  • Exploring the issue of bias in ML models.
  • How Arize AI’s bias tracing tool works.
  • Tsion’s thoughts on what is most responsible for bias in ML models and how to combat these problems.

Tweetables:

  • “We focus on machine learning observability. We're helping ML teams detect issues, troubleshoot why they happen, and just improve overall model performance.” — Tsion Behailu
  • “Models can be biased, just because they're built on biased data. Even data scientists, ML engineers who build these models have no standardized ways to know if they're perpetuating bias. So more and more of our decisions get automated, and we let software make them. We really do allow software to perpetuate real world bias issues.” — Tsion Behailu
  • “The bias tracing tool that we have is to help data scientists and machine learning teams just monitor and take action on model fairness metrics.” — Tsion Behailu
