Tsion discusses current challenges in ML observability, and explains how Arize's Bias Tracing Tool was developed to help companies root out bias in their models.
Arize and its founding engineer, Tsion Behailu, are leaders in the machine learning observability space. After spending a few years as a computer scientist at Google, Tsion's curiosity drew her to the startup world, where she has been building cutting-edge technology since the beginning of the pandemic. Rather than leaving teams to monitor their models manually (as many companies still do to this day), Arize AI's technology helps machine learning teams detect issues, understand why they happen, and improve overall model performance. During this episode, Tsion explains why this automated approach is so advantageous, what she loves about working in the machine learning field, the issue of bias in machine learning models (and what Arize AI is doing to help mitigate it), and more!

Key Points From This Episode:
Tweetables:

“We focus on machine learning observability. We're helping ML teams detect issues, troubleshoot why they happen, and just improve overall model performance.” — Tsion Behailu

“Models can be biased, just because they're built on biased data. Even data scientists, ML engineers who build these models have no standardized ways to know if they're perpetuating bias. So more and more of our decisions get automated, and we let software make them. We really do allow software to perpetuate real world bias issues.” — Tsion Behailu

“The bias tracing tool that we have is to help data scientists and machine learning teams just monitor and take action on model fairness metrics.” — Tsion Behailu