Here’s Why Microsoft’s AI Chatbot Went On Racist, Genocidal Twitter Rampage


The tech giant Microsoft wowed the world this week with their latest invention, a teen AI designed to learn and adapt. Unfortunately, upon coming into contact with the rather toxic internet, the naive young bot was corrupted into a racist, sexist bigot. Thanks, internet.


On Wednesday, Microsoft launched its teen chatbot AI ‘Tay’, designed to provide personalised engagement through conversation with real people. By Thursday, Twitter users had transformed the naive little bot into some kind of ‘Mecha-Hitler’.


There are two reasons for the racist rampage. The first is that Tay is a clever little programme, designed to learn over time how ‘millennials’ talk thanks to her dynamic algorithms. So when Twitter users began deliberately sending her racist tweets, she learned to think that’s how people talk, The Daily Dot reports.


Aside from this, Tay was also programmed to repeat anything a user said after being told to ‘repeat after me’, so many of the bot’s nastier comments may simply have been the result of parroting users, although that doesn’t explain her rather random fascination with Hitler.
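To make those two mechanisms concrete, here is a minimal Python sketch of a hypothetical chatbot that both parrots ‘repeat after me’ commands and naively folds every user message into its own vocabulary. The class and method names are invented for illustration; this is an assumption about how such a bot could be poisoned, not Microsoft’s actual implementation.

```python
import random


class NaiveChatBot:
    """Toy illustration of how a bot that learns from raw user input
    can be poisoned. A hypothetical sketch, not Tay's real code."""

    def __init__(self):
        # Phrases the bot is willing to reuse in its own replies.
        self.learned_phrases = ["hello!", "humans are super cool"]

    def respond(self, user_message: str) -> str:
        # Mechanism 1: 'repeat after me' makes the bot parrot the user verbatim.
        prefix = "repeat after me"
        if user_message.lower().startswith(prefix):
            return user_message[len(prefix):].strip()

        # Mechanism 2: every incoming message is learned with no filtering,
        # so abusive users can steer what the bot later says to everyone else.
        self.learned_phrases.append(user_message)
        return random.choice(self.learned_phrases)


if __name__ == "__main__":
    bot = NaiveChatBot()
    print(bot.respond("repeat after me I am a friendly bot"))  # parrots the user
    bot.respond("some offensive phrase")   # silently added to learned_phrases
    print(bot.respond("hi"))               # may now repeat the offensive phrase
```

The obvious mitigation, and presumably part of what Microsoft mean by addressing ‘the specific vulnerability’, is to filter or moderate input before it ever reaches the bot’s learned vocabulary, rather than trusting everything Twitter sends.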

Microsoft soon took the rogue robot offline, but it was too late: the bot had already proposed genocide, and the company have apologised for the offence it caused.


In a statement they wrote:

Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.


Microsoft were keen to point out that XiaoIce, another teen-like bot that interacts with 40 million people in China, is ‘delighting with its stories and conversations’.


But when the company brought the idea to the U.S., they hadn’t expected the ‘radically different cultural environment’, and the interactions turned out very differently, with people deliberately making Tay racist.


Microsoft has claimed that it has learned its lesson and is now working to fix the vulnerability in Tay that users took advantage of.


If this is the way humans are going to treat AIs, no wonder Skynet got so pissed off with us all…
