
Terminating Tay – A Microsoft AI Experiment Gone Wrong

Tay, the Microsoft AI Bot for Twitter

You Might Have Heard: The Microsoft AI experiment with Tay, their machine learning Twitter bot, ended after a mere 24 hours. The company pulled the plug when she almost immediately turned into a sexist, racist Nazi. Tay was supposed to learn how to communicate like a human by engaging in conversations with Twitter users.

“This gets to the underlying problem,” Vice argues. “Microsoft’s AI developers sent Tay to the internet to learn how to be human, but the internet is a terrible place to figure that out.”

The New Yorker writes that “Tay’s breakdown occurred at a moment of enormous promise for A.I.” Earlier in the week, an AI-written novel passed the first round of a literary competition in Japan, and last week AlphaGo, the AI from Google’s DeepMind, defeated one of the world’s top-ranked Go players.

As information destined for humans is increasingly handled by AIs, the need for an open dialogue about AI ethics grows. Google and DeepMind still haven’t revealed who sits on their AI ethics board.

+ A question of lesser importance: why are AIs like Siri and Cortana so clever, yet so bad at empathy? A recent study might hold the key.

Did you enjoy this? Consider joining Emergent Future, a weekly, hand-curated dispatch exploring technology through the lens of artificial intelligence, data science, and the shape of things to come. Emergent Future is powered by Algorithmia, an open marketplace for algorithms, enabling developers to create tomorrow’s smart applications today.

Product manager at Algorithmia, helping to give developers superpowers.
