AI software with social skills teaches humans to collaborate

Exploring human-computer co-operation.
May 1, 2022


— contents —

~ story
~ quote
~ featurette
~ webpages
~ reading


— story —

A team of computer researchers developed an AI software program with social skills — called S Sharp — that out-performed humans in its ability to co-operate. The researchers tested it through a series of games that paired people with S Sharp in a variety of social scenarios.

Building AI that co-operates with us at a human level.

One of the games humans played against the software is called the prisoner’s dilemma. This classic game shows how two rational people might not co-operate, even when it appears to be in both their best interests to work together. The other challenge was a sophisticated block-sharing game.
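
To make the dilemma concrete, here is a small Python sketch using conventional textbook pay-off values (the exact numbers are an assumption for illustration; the study’s parameters may differ). It shows why a purely self-interested player always defects, even though mutual co-operation pays both players more:

```python
# Prisoner's dilemma: an illustrative payoff matrix.
# (Values are conventional textbook numbers, not the study's parameters.)
# Each entry maps (my_move, their_move) -> my payoff.

PAYOFF = {
    ("cooperate", "cooperate"): 3,   # mutual co-operation: both do well
    ("cooperate", "defect"):    0,   # I'm exploited: worst outcome for me
    ("defect",    "cooperate"): 5,   # I exploit: best outcome for me
    ("defect",    "defect"):    1,   # mutual defection: both do poorly
}

def best_response(their_move: str) -> str:
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"), key=lambda my: PAYOFF[(my, their_move)])

# Whatever the other player does, defecting pays more for me ...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ... yet mutual defection (1 each) is worse than mutual co-operation (3 each).
print(PAYOFF[("defect", "defect")], "<", PAYOFF[("cooperate", "cooperate")])
```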

In most cases, the S Sharp software out-performed humans in finding compromises that benefit both parties. To see the experiment in action, watch the featurette below. This project was helmed by two well-known computer scientists:

  • Iyad Rahwan PhD  |  the Massachusetts Institute of Technology
  • Jacob Crandall PhD  |  Brigham Young Univ.

The researchers tested humans and the AI in 3 types of game interactions:

  • computer to computer
  • human to computer
  • human to human


— quote —

name: Jacob Crandall PhD
bio: computer scientist
bio: teacher | Brigham Young Univ.
web: profile

Computers can now beat the best human minds in the most intellectually challenging games — like chess. They can also perform tasks that are difficult for adult humans to learn — like driving cars. Yet autonomous machines have difficulty learning to co-operate. That’s something even young children do.

Human co-operation appears easy. But it’s very difficult to emulate, because it relies on cultural norms, deeply rooted instincts, and social mechanisms that express disapproval of non-collaborative behavior.

Such common-sense mechanisms aren’t easily built into machines. The same AI software programs that effectively play the board games of chess + checkers, Atari video games, and the card game of poker often fail to consistently co-operate when co-operation is necessary.

Other AI software programs often take hundreds of rounds of experience to learn to collaborate with each other, if they do at all. Can we build computers that co-operate with humans — the way humans do with each other? Building on decades of research in AI, we built a new software program that learns to collaborate with other machines, simply by trying to maximize its own pay-off.

We did experiments that paired the AI with people in various social scenarios, including a prisoner’s dilemma challenge and a sophisticated block-sharing game. The program consistently learns to co-operate with another computer — but at first it doesn’t with people. Then again, people didn’t co-operate much with each other either.

As we all know, humans can collaborate better if they can communicate their intentions through words + body language. So we gave our AI a way to listen to people and talk to them.

This lets the AI play in previously unanticipated scenarios. The resulting algorithm achieved our goal: it consistently learns to co-operate with people as well as people do. Our results show that 2 computers make a much better team — better than 2 humans, and better than a human + a computer.

But the program isn’t a blind collaborator: the AI can get pretty angry if people don’t behave well. The pioneering computer scientist Alan Turing PhD believed machines could potentially demonstrate human-like intelligence. Since then, AI has been regularly portrayed as a threat to humanity or to human jobs.

To protect people, programmers have tried to code AI to follow legal + ethical principles — like the 3 laws of robotics written by Isaac Asimov PhD. Our research shows that a new path is possible.

Machines designed to selfishly maximize their pay-offs can — and should — make an autonomous choice to co-operate with humans across a wide range of situations. 2 humans — if they’re honest with each other + loyal — would do as well as 2 machines. But about half of the humans lied at some point. So the AI is learning that moral characteristics are better, since it’s programmed not to lie. And it also learns to maintain co-operation once it emerges.

We need to understand the math behind collaborating with people. What attributes does AI need so it can develop social skills? AI must be able to respond to us — and articulate what it’s doing. It must be able to interact with other people.

This research could help humans with their relationships. In society, relationships break down all the time. AI is often better than humans at reaching compromise — so it could teach us how to get along.

— Jacob Crandall PhD
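
The paper’s S# algorithm itself isn’t reproduced here. As a loose illustration of the behavior Crandall describes above (a self-interested agent that signals its intent with cheap talk, punishes defection, then returns to co-operation), here is a minimal Python sketch. The class name, messages, and pay-off values are invented for illustration; this is a simple reciprocator, not the S# algorithm.

```python
import random

# A toy repeated prisoner's dilemma agent, loosely inspired by the
# behavior Crandall describes: it selfishly accumulates its own pay-off,
# announces its intent with "cheap talk," and punishes defection briefly
# before returning to co-operation. NOT the actual S# algorithm.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

class TalkingReciprocator:
    def __init__(self):
        self.grudge = 0        # rounds of punishment still owed
        self.total = 0         # my cumulative pay-off

    def message(self) -> str:
        # Cheap talk: announce intent before the round is played.
        # (Message wording is invented for this sketch.)
        return "you will pay for that" if self.grudge else "let's always co-operate"

    def move(self) -> str:
        return "D" if self.grudge else "C"

    def observe(self, my_move: str, their_move: str) -> None:
        self.total += PAYOFF[(my_move, their_move)]
        if their_move == "D" and my_move == "C":
            self.grudge = 2    # "get angry": punish for two rounds
        elif self.grudge:
            self.grudge -= 1   # then forgive and offer co-operation again

# Play 20 rounds against a mostly co-operative, occasionally greedy partner.
agent = TalkingReciprocator()
for round_no in range(20):
    partner_move = "D" if random.random() < 0.2 else "C"
    print(f"round {round_no:2d}: agent says {agent.message()!r}")
    my_move = agent.move()
    agent.observe(my_move, partner_move)
print("agent's cumulative pay-off:", agent.total)
```

The grudge counter is what keeps the agent from being a blind collaborator: it punishes defection just long enough to make exploitation unprofitable, then offers co-operation again, which is enough to sustain mutual co-operation with a partner who responds to incentives.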



— featurette —


institution: the Institute for Advanced Study in Toulouse
featurette title: Unlocking robot-human co-operation

watch | featurette


presented by

the Institute for Advanced Study in Toulouse | home ~ channel
tag line: Knowledge across frontiers.
banner: A unified scientific project studying human behavior.



— webpages —


1. |

name: Iyad Rahwan PhD
web: home

profile | the Massachusetts Institute of Technology
profile | the Max Planck Institute for Human Development


2. |

name: Jacob Crandall PhD
web: home

profile | Brigham Young Univ.



— reading —


1. |

publication: Nature
paper title: Co-operating with machines

read | paper


presented by

Nature | home ~ channel

Springer | home ~ channel



— notes —

AI = artificial intelligence
S# = S Sharp software

MIT = the Massachusetts Institute of Technology
BYU = Brigham Young Univ.

IAST = the Institute for Advanced Study in Toulouse | France

univ. = university


— file —

box 1: stories on progress
box 2:

post title: AI software with social skills teaches humans to collaborate
deck: Unlocking human-computer co-operation.

collection: the Kurzweil Library
tab: stories on progress