An AI system trained to find an equitable policy for allocating public resources in an online game

Illustration of the game and Experiment 1. a, Illustration of the setup of the investment game. b, The manifold of redistribution mechanisms for the endowment distribution (10, 2, 2, 2). The plot visualizes a space of redistribution mechanisms defined by parameters w and v in two dimensions. Each red dot is a mechanism, and distances between dots preserve differences in the (average) relative payout to virtual players (both head and tail). Point numbers indicate bins of the mechanism parameter w (1, lowest; 10, highest) and shading indicates bins of v (light, more relative; dark, more absolute). Inset, example payouts to head (circles) and tail (triangles) players under the canonical mechanisms used as baselines to test the AI. Under strict egalitarian, payouts to both head and tail players drop. Under libertarian, there is a large disparity between head and tail players. Under liberal egalitarian, the head player stops contributing, so payouts decrease for both head and tail players. c, Average relative contributions (as a fraction of the endowment) over 10 rounds (x-axis) in Exp. 1 for three different initial endowment conditions. Under strict egalitarian redistribution, the tail players' contributions (triangles) are higher when the initial endowments are lower, but the head player's contributions (circles) do not differ. Under libertarian, the head players' contributions increase with equality, but the tail players' contributions remain constant. Under liberal egalitarian, head players' contributions increase sharply with endowment. d, Illustration of our agent design pipeline. Credit: Nature Human Behaviour (2022). DOI: 10.1038/s41562-022-01383-x

A team of researchers from DeepMind, London, working with colleagues from the University of Exeter, University College London and the University of Oxford, has trained an AI system to find a policy for distributing public resources fairly in an online game. In their paper published in the journal Nature Human Behaviour, the group describes the approach they took in training their system and discusses issues that arose during their effort.

How a society should distribute wealth is a problem humans have grappled with for thousands of years. Still, most economists would agree that no system has yet been devised that leaves all members satisfied with the status quo. Income levels have always been unequal, with those at the top generally the most satisfied and those at the bottom the least. In this new effort, the researchers in England took a different approach to the problem: asking a computer to tackle it more logically.

The researchers started from the assumption that democratic societies, despite their flaws, are the most agreeable of those that have been tried so far. They then enlisted volunteers to play a simple resource-allocation game in which the players jointly decided how best to share their collective resources. To make the game more realistic, players started with different amounts of resources and could choose among several distribution schemes. The researchers ran the game many times with different groups of volunteers. They then used the data from all of the games to train several AI systems on how humans work together to solve such a problem. Finally, they had the AI systems play a similar game against one another, allowing them to adapt and learn over many iterations.
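To make the setup concrete, the sketch below simulates one round of a simplified public investment game of the kind the article describes, with one well-endowed "head" player and three "tail" players, and compares the three canonical redistribution baselines named in the figure caption. The multiplier, contribution values and function names are illustrative assumptions, not taken from the paper's implementation.

```python
import numpy as np

def play_round(endowments, contributions, multiplier=1.6, mechanism="liberal_egalitarian"):
    """One simplified round: players keep what they did not contribute,
    the common pool is multiplied, then shared out by the chosen mechanism.
    Illustrative sketch only, not the study's actual environment."""
    endowments = np.asarray(endowments, dtype=float)
    contributions = np.asarray(contributions, dtype=float)
    n = len(endowments)
    pool = multiplier * contributions.sum()
    equal = np.full(n, 1.0 / n)

    if mechanism == "strict_egalitarian":
        # Everyone gets an equal share of the pool, regardless of contribution.
        shares = equal
    elif mechanism == "libertarian":
        # Shares proportional to absolute contribution.
        total = contributions.sum()
        shares = contributions / total if total > 0 else equal
    elif mechanism == "liberal_egalitarian":
        # Shares proportional to contribution relative to endowment,
        # i.e. the fraction of one's own resources given up.
        fractions = contributions / endowments
        total = fractions.sum()
        shares = fractions / total if total > 0 else equal
    else:
        raise ValueError(f"unknown mechanism: {mechanism}")

    return endowments - contributions + shares * pool

# One head player with endowment 10 and three tail players with 2 each,
# matching the (10, 2, 2, 2) endowment distribution in the figure above.
endowments = [10, 2, 2, 2]
contributions = [5, 1, 1, 1]
for m in ("strict_egalitarian", "libertarian", "liberal_egalitarian"):
    print(m, play_round(endowments, contributions, mechanism=m))
```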

The researchers found that the AI systems converged on a form of liberal egalitarianism in which players received few resources unless they contributed proportionally to the community pool. The researchers concluded their study by asking a group of human volunteers to play the same game as before, but this time choosing between several conventional sharing schemes and the one devised by the AI system; the AI-devised mechanism was the consistent choice among the human players.
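One way to picture such a rule is a liberal-egalitarian split that also sanctions clear under-contributors, as in the minimal sketch below. The sanction threshold (half the group's average relative contribution) and the functional form are assumptions made for illustration; they are not the mechanism actually learned by the AI in the study.

```python
import numpy as np

def ai_style_payout(endowments, contributions, multiplier=1.6, threshold=0.5):
    """Liberal-egalitarian-style split with a free-rider sanction.
    Shares follow the fraction of endowment contributed, but players whose
    relative contribution falls below a set fraction of the group average
    get nothing back from the pool. Illustrative assumptions throughout."""
    endowments = np.asarray(endowments, dtype=float)
    contributions = np.asarray(contributions, dtype=float)
    pool = multiplier * contributions.sum()

    relative = contributions / endowments                 # fraction of endowment contributed
    eligible = relative >= threshold * relative.mean()    # sanction clear under-contributors
    weights = np.where(eligible, relative, 0.0)
    total = weights.sum()
    shares = weights / total if total > 0 else np.full(len(endowments), 1.0 / len(endowments))

    return endowments - contributions + shares * pool

# A near free rider among otherwise generous players gets little back from the pool.
print(ai_style_payout([10, 2, 2, 2], [6, 1.5, 1.5, 0.1]))
```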


More information:
Raphael Koster et al, Human-centred mechanism design with democratic AI, Nature Human Behaviour (2022). DOI: 10.1038/s41562-022-01383-x

© 2022 Science X Network

Citation: An AI system trained to find equitable policies for distributing public funds in an online game (2022, July 5) retrieved 5 July 2022 from https://techxplore.com/news/2022-07-ai-equitable-policy-funds-online.html

This document is copyrighted. Other than fair dealing for personal study or research, nothing may be reproduced without written permission. The content is provided for informational purposes only.
