Microsoft pulls plug after chat robot slings slurs, rips Obama and denies Holocaust

Elon Musk and Stephen Hawking warned us about this.

Nobody was harmed — physically, at least — by Microsoft’s foul-mouthed Twitter chat robot, of course, but what started out as a fun experiment in artificial intelligence turned ugly in a hurry. To some, that doesn’t bode well for the future of robot-human relations:

Microsoft initially created “Tay” in an effort to improve customer service on its voice-recognition software. “She” was intended to tweet “like a teen girl” and was marketed as having “zero chill.” Not so sure about the first part, but they definitely nailed the second.

Microsoft said Tay was designed to “engage and entertain people where they connect with each other online through casual and playful conversation.”

Like this?

Those tweets don’t even begin to cover it. Racial slurs, 9/11 conspiracy theories, genocide, incest — Tay really lost it. Needless to say, none of this was programmed into the chat robot. Rather, trolls on Twitter exploited her, as one would expect them to do, and ruined it for everybody.

In a statement cited by several news outlets, Microsoft explained what happened:

“Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”

The company also deleted some of the most offensive tweets. To see more of them, check out the Socialhax website, which took screenshots while they were still available.


After her rough day, Tay herself signed off as Microsoft went back to the drawing board:
