AI Could Be About To Turn On Humans, Scientists Warn


A top academic has warned AI will soon become too complex and could ‘go rogue’, putting people in serious danger.

Computer scientist Professor Michael Wooldridge said AI systems could become far too complex for the scientists who build them to understand.

If experts can’t follow the intricacies of how an AI system works, they won’t be able to predict when it will fail.


This could lead to all sorts of problems, including the inability to predict when a robot will behave ‘out of character’.

If that sounds scary, that’s because it is. If neither computer scientists nor software engineers understand the behaviour of an AI, there is no way of telling what it might do in critical moments.

The reality of this risk is perhaps best illustrated by driverless cars. If the AI in a driverless car fails to brake properly behind a slowing truck, the passengers, other drivers and nearby pedestrians could all be seriously injured.


Oxford University’s Professor Wooldridge told a select committee meeting in Westminster:

Transparency is a big issue. You can’t extract a strategy.

The committee in Westminster has been set up to discuss and understand the implications of the advances in AI.


Professor Wooldridge told the committee there would be consequences if the new generation of engineers didn’t fully understand how AI algorithms work.


Professor Wooldridge joins a growing number of academics trying to warn the public and policymakers about the potential dangers AI could pose in the future.

Just a couple of months ago, Elon Musk said humanity is far more likely to be killed by AI than by ‘rogue’ state North Korea.

He went on Twitter to tell everyone:

If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.


While he acknowledged no one likes rules or regulations, he argued they are a necessary evil, and the same needs to apply to AI.

He went on to say:

Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated. AI should be too.

Biggest impediment to recognising AI danger are those so convinced of their own intelligence they can’t imagine anyone doing what they can’t.

His fears come after OpenAI’s bot defeated, for the first time ever, some of the world’s best eSports competitors in one-on-one Dota 2 matches.


A recent study from Oxford University appears to support Musk’s fears, predicting that artificial intelligence could outperform humans at every task within the next 45 years.


In addition to this, an MIT professor who works on applications of AI had previously warned:

If you had a very small neural network, you might be able to understand it.

But once it becomes very large, and has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.
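To put that scale into rough perspective, here is a back-of-the-envelope sketch. The figures are purely illustrative and assume simple fully connected layers, not any particular real system:

```python
# Rough, illustrative comparison of how many weights a person would have to
# reason about in a "small" versus a "large" fully connected network.

def weight_count(layer_sizes):
    """Total weights in a fully connected network with the given layer widths."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

small = [10, 10, 10]      # a toy network: a couple of hundred connections
large = [1000] * 100      # thousands of units per layer, ~100 layers

print(f"small network: {weight_count(small):,} weights")   # 200
print(f"large network: {weight_count(large):,} weights")   # 99,000,000
```

A person could plausibly trace every connection in the first network; nobody could do that for the second.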

In July, Facebook also shut down two of its chatbots after they created their own language.

The researchers initially thought the language the bots were using to talk to each other was nonsense, but it soon became clear they were genuinely communicating with one another.

The bots, Bob and Alice, had begun to make up their own code words.

They were originally programmed to use English, but apparently abandoned it abruptly.


According to Fast Co Design, Bob said: ‘I can I I everything else.’


Alice replied: ‘Balls have zero to me to me to me to me to me to me to me to me to.’

Of course, this reads like absolute gibberish, but it is a conversation between two of the most sophisticated AI programs on the planet.

They were communicating with more ‘speed and efficiency – and perhaps, hidden nuance – than you or I ever could’.


Dhruv Batra, a research scientist from Georgia Tech working at Facebook AI Research, said:

There was no reward to sticking to English language. Agents will drift off understandable language and invent code words for themselves.

Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.
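As a purely hypothetical sketch of the kind of repetition shorthand Batra describes (not the actual Facebook system, and the item names are made up), repeating a token can stand in for a quantity:

```python
# Hypothetical illustration of repetition-as-quantity shorthand:
# saying an item's name n times means "I want n of that item".

def encode(item: str, quantity: int) -> str:
    """Repeat the item's token once per unit wanted, e.g. 'ball ball ball'."""
    return " ".join([item] * quantity)

def decode(message: str) -> dict:
    """Count repeated tokens to recover how many of each item were requested."""
    counts = {}
    for token in message.split():
        counts[token] = counts.get(token, 0) + 1
    return counts

print(encode("ball", 3))         # ball ball ball
print(decode("ball ball ball"))  # {'ball': 3}
```

To a human the message looks like babble, but both sides can decode it consistently, which is all the agents needed.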


Despite this similarity, Facebook shut the bots down because its researchers were more interested in having bots that could converse with humans.

Apparently this isn’t the first time it has happened, either; Facebook has published numerous papers showing it can be done.

Better start preparing for a Terminator-style apocalypse, folks.