
The Training Data Challenge: 5 AI Fails

From accidental Alexa purchases to bias in recruitment, we’ve gathered five AI fails from the last few years.


AI is often hailed as the ‘next big thing’ and an answer to all of our problems. We’re certainly betting on it: global spend on AI is predicted to reach a whopping $98 billion by 2023, up from $37.5 billion in 2019. But we’re still in the early days of fulfilling the promise of AI, and one of the reasons is training data: the annotated data a machine needs to learn to see or hear. Training data is essential to the development of any machine learning model. A global survey of data scientists, AI experts and stakeholders revealed that 8 out of 10 AI projects fail, and 96% run into problems with data quality and labeling.

Even applications now leading the scene have had their fair share of mishaps along the way. We’ve listed some examples of AI not performing at its best.

1. The Somewhat Vulgar Virtual Assistant
It took only 24 hours for Tay, Microsoft’s conversational understanding chatbot, to start tweeting some extremely insensitive and offensive material. The initial idea was to have Tay learn through ‘casual conversation’ with fellow Twitter users, which, at the time, proved too far-fetched. While the majority of its 100k+ tweets were unremarkable, a string of mimicked and extremely offensive posts saw Tay taken down after just one day. Unfortunately, the follow-up chatbot, ‘Zo’, met the same fate, albeit after five months of live interaction.

2. IBM Watson for Oncology
Many see AI as the future of medicine, but it’s been suggested that IBM’s Watson for Oncology initially over-promised and under-delivered, with misdiagnoses, incorrect drug treatment suggestions and unsafe judgement calls on patients. While there was, and still is, promise, the project exposed the ‘messy’ state of healthcare systems and the mismatch between how doctors and machines work and learn. Watson for Oncology continues to improve, however, building a knowledge base that spans even the rarest diseases and avoiding the cognitive biases a long-serving medical professional might carry.

3. Amazon’s Biased Recruitment AI
Amazon’s now-scrapped AI recruiting tool was found to be perpetuating the gender gap in tech jobs. What began as the hope of a perfect filter for surfacing top applications turned out to hold significant bias towards hiring men, because the model had been trained on resumes submitted in previous years, which came predominantly from male applicants. The episode shows how historical data can entrench existing bias; hiring practices for diverse talent have advanced considerably since, but there is still a long road ahead.

4. Purchasing on Alexa
Research suggests that by 2023 over eight billion voice assistants will be present in consumers’ lives. We predict many of these will be the subject of, or susceptible to, some humorous moments while in development. We have seen dollhouses ordered when a TV broadcast triggered viewers’ devices (oops!), parrots casually chatting with Alexa, hacking issues, and all sorts. There is no doubt that virtual assistants have become more accurate since, but we’re excited for the viral YouTube videos still to come.

5. Insurance Rates from Social Media Data
Admiral Insurance, a popular insurer for first-time drivers, planned to use AI to analyse the Facebook data of insurance applicants, claiming a ‘proven link’ between personality and driving style or ability. Facebook was quick to block the initiative over ethical concerns, data privacy and the potential bias of such models.

While AI carries a lot of promise, it also carries a lot of hype. When it comes to AI, your model is only as good as the data it’s trained on: garbage in, garbage out.
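To make that concrete, here is a minimal, hypothetical sketch in Python with scikit-learn. The data and feature names are invented for illustration, and this is not Amazon’s actual system; it simply shows how a classifier trained on biased historical hiring decisions can learn to penalise a proxy feature it was never explicitly told about.

# A minimal sketch of "garbage in, garbage out": a classifier
# trained on biased historical hiring decisions learns to penalise
# a proxy feature, even though it was never told about gender.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two candidate features: a genuine skill score, and a binary
# proxy feature that happened to correlate with gender in the
# historical data (e.g. a phrase appearing on some resumes).
skill = rng.normal(size=n)
proxy = rng.integers(0, 2, size=n)

# Biased historical labels: past reviewers rewarded skill but
# systematically marked down resumes carrying the proxy feature.
hired = (skill - 1.0 * proxy + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The learned weight on the proxy feature comes out strongly
# negative: the model has absorbed the historical bias.
print("skill weight:", model.coef_[0][0])  # positive
print("proxy weight:", model.coef_[0][1])  # negative

The point is that the fix lives in the data pipeline rather than the model: auditing historical labels, and rebalancing or removing proxy features before training.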

Author
Kyra Harrington
