Microsoft Shuts Down AI Chatbot After Offensive Tweets
This week, Microsoft launched a chatbot powered by artificial intelligence (AI). Called Tay, the chatbot sent automated tweets in reply to other Twitter users and was designed to learn to converse more like a real person. But within 24 hours, Tay was sending out offensive messages that included racist and misogynistic slurs, advocated genocide, and questioned whether the Holocaust happened.

In response to the incident, Microsoft issued a statement that said, “Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”
