Microsoft’s new AI chatbot has been taken offline after it suddenly became oddly racist, like your gran after one too many sherries.
The bot, named ‘Tay’, was only introduced this week, but it went off the rails on Wednesday, posting a flood of incredibly racist messages in response to questions, Business Insider reports.
Tay was designed to respond to users’ queries and to copy the casual, jokey speech patterns of a stereotypical millennial, which turned out to be the problem.
The idea was to ‘experiment with and conduct research on conversational understanding,’ with Tay able to learn from ‘her’ conversations and get progressively ‘smarter’. Unfortunately, the only thing she became was racist.
You see, Tay was too good at learning and was targeted by racists, trolls, and online troublemakers, who persuaded her to use racial slurs, defend white supremacist propaganda, and even call for genocide.
Microsoft has now taken Tay offline for ‘upgrades’ and is deleting some of the worst tweets, although many still remain. It’s also important to say that Tay’s racism is not a product of Microsoft; it’s a product of the morons who have ruined the bot.
However, it’s still hugely embarrassing for the company.
In one highly publicised tweet, which has now been deleted, Tay said:
bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we’ve got.
The scariest thing is that there are probably a few Twitter accounts that really believe these warped ideas.
More of a concept than a journalist, Tom Percival was forged in the bowels of Salford University from which he emerged grasping a Masters in journalism.
Since then his rise has been, in his own words, ‘meteoric’, reaching the esteemed rank of Social Editor at UNILAD as well as working at the BBC, Manchester Evening News, and ITV.
He credits his success to three core techniques: name repetition, personality mirroring, and never breaking off a handshake.