Google DeepMind AI develops human-like aggression

Posted on 18 Feb 2017 by Michael Cruickshank

An Artificial Intelligence (AI) program developed by Google has demonstrated human-like aggression during simulations.

Google’s DeepMind AI was tested on several different games to see what kinds of behavior would emerge.

In one particular game, the AI was tasked with collecting more ‘apples’ within a 2D environment than another ‘player’. The AI also had the ability to hit the other player with a beam, which would remove them from the game.

After running the game through millions of simulations, DeepMind’s ‘deep reinforcement learning’ algorithm showed human-like emergent behavior.

Specifically, when apples in the 2D environment were scarce, the AI adapted by choosing an aggressive play style, constantly attacking the other player in order to maximize the number of apples it could collect.
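The incentive behind that shift can be illustrated with a toy expected-reward model. This is a minimal sketch of my own, with made-up numbers, not DeepMind’s code or the actual game parameters: when apples respawn slowly, removing the opponent pays off; when apples are abundant, time spent firing the beam is wasted.

```python
# Toy model of the Gathering incentive (illustrative numbers only,
# not DeepMind's implementation).

def expected_apples(spawn_rate, steps, tag_opponent):
    """Expected apples collected by one agent over an episode.

    spawn_rate: total apples appearing per step (low = scarcity)
    tag_opponent: if True, spend a few steps firing the beam, after
    which the opponent is removed and no longer competes for apples.
    """
    PICKUP_CAP = 1.0   # an agent can grab at most one apple per step
    TAG_COST = 5       # steps spent aiming/firing instead of gathering
    if tag_opponent:
        per_step = min(PICKUP_CAP, spawn_rate)    # opponent gone: all apples are ours
        return per_step * (steps - TAG_COST)
    per_step = min(PICKUP_CAP, spawn_rate / 2)    # apples split with the opponent
    return per_step * steps

for rate in (0.2, 2.0):  # scarce vs abundant apples
    peaceful = expected_apples(rate, 50, tag_opponent=False)
    aggressive = expected_apples(rate, 50, tag_opponent=True)
    print(f"spawn_rate={rate}: gather={peaceful:.0f}, tag={aggressive:.0f}")
```

Under these assumptions, tagging wins only in the scarce case (9 apples vs 5), while simply gathering wins when apples are plentiful (50 vs 45), mirroring the scarcity-driven aggression the researchers observed.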

Such behavior is remarkably similar to many theories relating to how scarcity creates conflict within human societies.

In a second game, called Wolfpack, the AI had the option of working with another player to catch a ‘prey’.

This prey would yield a higher reward, at lower risk, if captured in close proximity to the other player in the game.

Unsurprisingly, after millions of simulations of this game, the DeepMind AI developed a cooperative play style, actively working with the other player to hunt the prey.
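Again, the incentive can be sketched with a toy expected-value calculation. The probabilities and payoffs below are my own illustrative assumptions, not values from the paper; they just encode the rule that prey captured near a teammate pays more, with less risk.

```python
# Toy model of the Wolfpack incentive (illustrative numbers only,
# not DeepMind's implementation).

def expected_reward(hunt_together):
    """Expected payoff for one wolf, with or without a nearby teammate."""
    if hunt_together:
        capture_prob, payoff = 0.8, 10   # joint capture: likelier, better paid
    else:
        capture_prob, payoff = 0.4, 4    # lone capture: riskier, lower paid
    return capture_prob * payoff

print(expected_reward(True), expected_reward(False))
```

With any payoff structure of this shape, cooperation dominates, so a reward-maximizing agent learning by trial and error converges on the cooperative play style the researchers reported.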

“Cooperation makes possible better outcomes for all than any could obtain on their own. However, the lure of free riding and other such parasitic strategies implies a tragedy of the commons that threatens the stability of any cooperative venture,” wrote the Google DeepMind team in its research paper.

One particularly important use of these findings will be helping to create powerful AI that is not necessarily driven to compete with its human creators.

If an AI can be nurtured in an environment where it learns to cooperate with humanity, the risk of a potentially dangerous rogue AI could be substantially reduced.