
DeepMind’s new AI may be better at distributing society’s resources than humans

How groups of people working together should redistribute the wealth they create is a question that has plagued philosophers, economists, and political scientists for years. A new study from DeepMind suggests that AI may be able to make better decisions than humans.

AI is proving increasingly adept at solving complex challenges in everything from business to biomedicine, so the idea of using it to design solutions to societal problems is compelling. But doing so is tricky, because answering these kinds of questions requires relying on highly subjective ideas such as fairness and accountability.

For an AI solution to work, it must align with the values of the society it serves, but the diversity of political ideologies in the world today suggests those values are far from uniform. That makes it difficult to decide what to optimize for, and it introduces the danger that the developers’ own values skew the outcome of the process.

The best way human societies have found to deal with inevitable disagreements over such problems is democracy, in which majority opinion is used to guide public policy. So researchers at DeepMind have developed a new approach that combines AI with human democratic deliberation to come up with better solutions to social dilemmas.

To test their approach, the researchers conducted a proof-of-concept study using a simple game in which users decide how to share their resources for mutual benefit. The experiment is designed to act as a microcosm of human societies in which people of different levels of wealth must work together to create a fair and prosperous society.

The game involves four players who each receive different amounts of money and must decide whether to keep it for themselves or put it into a public fund that generates a return on investment. However, the way this return on investment is redistributed can be adjusted in ways that benefit some players over others.

Possible mechanisms include strict egalitarian, in which returns from the public fund are shared equally regardless of contribution; libertarian, in which payouts are proportional to contributions; and liberal egalitarian, in which each player’s payout is proportional to the fraction of their private money they contribute.
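
To make the differences concrete, here is a minimal Python sketch of how payouts might be computed under each of those three baseline rules. The function name, the fixed 1.6 return multiplier, and the example numbers are illustrative assumptions, not details taken from the study.

```python
def payouts(endowments, contributions, mechanism, multiplier=1.6):
    """Each player keeps what they didn't contribute, plus a share of the grown fund."""
    fund = sum(contributions) * multiplier  # public fund grows by a fixed factor

    if mechanism == "strict_egalitarian":
        # Equal shares, regardless of how much each player put in.
        shares = [fund / len(endowments)] * len(endowments)
    elif mechanism == "libertarian":
        # Shares proportional to absolute contribution.
        total = sum(contributions) or 1.0
        shares = [fund * c / total for c in contributions]
    elif mechanism == "liberal_egalitarian":
        # Shares proportional to the fraction of private wealth contributed.
        fractions = [c / e if e else 0.0 for c, e in zip(contributions, endowments)]
        total = sum(fractions) or 1.0
        shares = [fund * f / total for f in fractions]
    else:
        raise ValueError(f"unknown mechanism: {mechanism}")

    return [e - c + s for e, c, s in zip(endowments, contributions, shares)]

# An unequal group in which everyone contributes the same absolute amount:
print(payouts([2, 4, 6, 8], [2, 2, 2, 2], "liberal_egalitarian"))
```

In this toy example, the liberal egalitarian rule gives the poorest player, who contributed everything they had, the largest share of the public fund.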

In research published in Nature Human Behaviour, the researchers describe how they got groups of people to play many rounds of this game under different levels of inequality and using different redistribution mechanisms. They were then asked to vote on how they would divide the profits.

This data was used to train an AI to mimic human behavior in the game, including how players vote. The researchers then pitted these AI players against each other in thousands of games, while another AI system adjusted the redistribution mechanism based on how the AI players voted.
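
Structurally, that setup amounts to an inner loop of simulated players who play and vote, and an outer loop that tunes the mechanism to win those votes. The sketch below captures only that skeleton under heavy assumptions: the stub agent, the single tunable threshold, and the plain random search are stand-ins for the learned models and optimization the researchers actually used.

```python
import random

class ClonedPlayer:
    """Stand-in for an agent trained to imitate human contributions and votes."""
    def __init__(self, endowment):
        self.endowment = endowment

    def contribute(self):
        # A real cloned agent would reproduce human contribution patterns.
        return random.uniform(0, self.endowment)

    def prefers_candidate(self, candidate_payout, baseline_payout):
        # Stand-in voting rule: favor whichever mechanism pays this player more.
        return candidate_payout >= baseline_payout

def votes_for(threshold):
    """Play one simulated game and count votes for the candidate mechanism."""
    players = [ClonedPlayer(e) for e in (2, 4, 6, 8)]
    votes = 0
    for p in players:
        c = p.contribute()
        # Candidate: pay out on relative contribution, but only above a threshold.
        candidate = 1.6 * c if c / p.endowment >= threshold else 0.0
        # Baseline stand-in: a flat half-return on the contribution.
        baseline = 0.8 * c
        votes += p.prefers_candidate(candidate, baseline)
    return votes

def search_mechanism(iterations=1000, games_per_candidate=20):
    """Random search over the threshold, keeping whichever value wins the most votes."""
    best_threshold, best_votes = 0.0, -1
    for _ in range(iterations):
        threshold = random.random()
        total = sum(votes_for(threshold) for _ in range(games_per_candidate))
        if total > best_votes:
            best_threshold, best_votes = threshold, total
    return best_threshold
```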

At the end of this process, the AI had settled on a redistribution mechanism similar to liberal egalitarianism, but one that gave players almost nothing back unless they contributed roughly half of their private wealth. When people played games that pitted this approach against the three established mechanisms, the AI-designed mechanism consistently won the vote. It also beat mechanisms devised by human referees who were asked to decide how to share the proceeds.

The researchers say the AI-designed mechanism probably did well because basing payouts on relative rather than absolute contributions helps to correct initial wealth imbalances, while requiring a minimum contribution prevents less wealthy players from simply free-riding on the contributions of richer ones.
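
As a rough illustration of that explanation, the sketch below modifies the liberal egalitarian rule so that anyone who contributes less than a cutoff fraction of their private wealth receives nothing from the fund. The 0.5 cutoff, the hard zeroing, and the 1.6 multiplier are assumptions made for illustration, not the mechanism the AI actually learned.

```python
def ai_style_payout(endowments, contributions, multiplier=1.6, cutoff=0.5):
    """Relative-contribution payouts, restricted to players above a minimum contribution."""
    fund = sum(contributions) * multiplier
    fractions = [c / e if e else 0.0 for c, e in zip(contributions, endowments)]
    # Players who give less than the cutoff fraction of their wealth get nothing back.
    weights = [f if f >= cutoff else 0.0 for f in fractions]
    total = sum(weights) or 1.0
    shares = [fund * w / total for w in weights]
    return [e - c + s for e, c, s in zip(endowments, contributions, shares)]

# With equal absolute contributions, the two wealthiest players fall below the
# cutoff and are excluded from the fund, narrowing the initial wealth gap.
print(ai_style_payout([2, 4, 6, 8], [2, 2, 2, 2]))
```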

Translating the approach from a simple four-player game to large-scale economic systems would clearly be an enormous challenge, and it is unclear whether success on a toy problem like this gives any indication of how the idea would fare in the real world.

The researchers themselves identified a number of potential problems. Democracy can suffer from the ‘tyranny of the majority’, which can perpetuate existing patterns of discrimination or unfairness against minorities. There are also questions of explainability and trust, which would be crucial if AI-designed solutions were ever applied to real-world dilemmas.

The team explicitly designed their AI model to produce mechanisms that can be explained, but this could become increasingly difficult as the approach is applied to more complex problems. Players were also not told when the redistribution was controlled by AI, and the researchers admit this knowledge could have affected the way they voted.

However, as a first proof of principle, this research demonstrates a promising new approach to solving social problems, one that combines the best of both artificial and human intelligence. We’re still a long way from machines helping to shape government policy, but it looks like AI could one day help us find new solutions that go beyond established ideologies.

