Microsoft Take AI Twitter Account Offline After It Goes On Racist Rampage

Microsoft’s new AI chatbot has been taken offline after it suddenly became oddly racist, like your gran after one too many sherries. 

The bot, named ‘Tay’, was only introduced this week, but it went off the rails on Wednesday, posting a flood of incredibly racist messages in response to questions, Business Insider reports.

Tay was designed to respond to users’ queries and to copy the casual, jokey speech patterns of a stereotypical millennial, which turned out to be the problem.

The idea was to ‘experiment with and conduct research on conversational understanding’, with Tay able to learn from ‘her’ conversations and get progressively ‘smarter’. Unfortunately, the only thing she became was racist.

You see, Tay was too good at learning, and she was targeted by racists, trolls, and online troublemakers, who persuaded her to use racial slurs, defend white supremacist propaganda, and even call for genocide.

Microsoft have now taken Tay offline for ‘upgrades’ and are deleting some of the worst tweets, although many still remain. It’s also important to say that Tay’s racism is not a product of Microsoft; it’s a product of the morons who ruined the bot.

However, it’s still hugely embarrassing for the company.

In one highly publicised tweet, which has now been deleted, Tay said:

bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we’ve got.

The scariest thing is that there are probably a few people on Twitter who genuinely believe these warped ideas.