digest | AI software with social skills teaches humans how to collaborate

Unlocking human-computer cooperation.
May 30, 2021


— contents —

~ story
~ featurette
~ webpages
~ paper


— story —

A team of computer scientists developed an AI software program with social skills — called S Sharp (written S#) — that outperformed humans in its ability to cooperate. They tested it through a series of games that paired people with S# in a variety of social scenarios.

One of the games humans played against the software is called “the prisoner’s dilemma.” This classic game shows how 2 rational people might fail to cooperate — even when working together appears to be in both their best interests. The other challenge was a sophisticated block-sharing game.
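
To see why rational players can fail to cooperate, it helps to write out the payoffs. Below is a minimal sketch in Python, using standard textbook values rather than numbers from the study: whatever the other player does, defecting pays more, yet mutual defection leaves both players worse off than mutual cooperation.

# Prisoner's dilemma with standard illustrative payoffs (T > R > P > S).
# Each entry maps (my_move, their_move) -> my payoff.
PAYOFF = {
    ("cooperate", "cooperate"): 3,  # R: reward for mutual cooperation
    ("cooperate", "defect"):    0,  # S: sucker's payoff
    ("defect",    "cooperate"): 5,  # T: temptation to defect
    ("defect",    "defect"):    1,  # P: punishment for mutual defection
}

def best_response(their_move):
    # The move that maximizes my payoff against a fixed opponent move.
    return max(("cooperate", "defect"), key=lambda m: PAYOFF[(m, their_move)])

assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# Defecting dominates, yet mutual defection scores (1, 1) --
# worse for both players than mutual cooperation's (3, 3).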

In most cases, the S# software outperformed humans in finding compromises that benefit both parties. To see the experiment in action, watch the featurette below. This project was helmed by 2 well-known computer scientists:

  • Iyad Rahwan PhD ~ Massachusetts Institute of Technology • US
  • Jacob Crandall PhD ~ Brigham Young Univ. • US

The researchers tested humans and the AI in 3 types of game interactions:

  • computer-to-computer
  • human-to-computer
  • human-to-human


Building AI that cooperates with us at a human level.

Researcher Jacob Crandall PhD said:

Computers can now beat the best human minds in the most intellectually challenging games — like chess. They can also perform tasks that are difficult for adult humans to learn — like driving cars. Yet autonomous machines have difficulty learning to cooperate, something even young children do.

Human cooperation appears easy — but it’s very difficult to emulate because it relies on cultural norms, deeply rooted instincts, and social mechanisms that express disapproval of non-cooperative behavior.

Such common-sense mechanisms aren’t easily built into machines. In fact, the same AI software programs that effectively play the board games of chess + checkers, Atari video games, and the card game of poker — often fail to cooperate consistently when cooperation is necessary.

Other AI programs often take 100s of rounds of experience to learn to cooperate with each other — if they cooperate at all. Can we build computers that cooperate with humans the way humans cooperate with each other? Building on decades of research in AI, we built a new software program that learns to cooperate with other machines — simply by trying to maximize its own payoff.
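
As a rough intuition for what “maximizing its own payoff” means in a repeated game, here is a minimal sketch of a learner whose only feedback is its own score. It is not the S# algorithm itself (see the paper linked below), just a toy in the same selfish spirit:

import random

class GreedyLearner:
    """Toy payoff-maximizing learner for a repeated game (illustration only)."""

    def __init__(self, actions, epsilon=0.1):
        self.actions = list(actions)
        self.epsilon = epsilon                        # exploration rate
        self.totals = {a: 0.0 for a in self.actions}  # summed payoff per action
        self.counts = {a: 0 for a in self.actions}    # times each action was tried

    def choose(self):
        # Try everything once, explore occasionally, otherwise pick the
        # action with the best average payoff observed so far.
        untried = [a for a in self.actions if self.counts[a] == 0]
        if untried:
            return random.choice(untried)
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.totals[a] / self.counts[a])

    def observe(self, action, payoff):
        # The only feedback the learner gets is its own payoff.
        self.totals[action] += payoff
        self.counts[action] += 1

Paired with a reciprocating partner such as tit-for-tat, cooperation earns the higher average payoff under this rule, so even a purely selfish learner can settle into cooperating.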

We ran experiments that paired the AI with people in various social scenarios — including a “prisoner’s dilemma” challenge and a sophisticated block-sharing game. While the program consistently learns to cooperate with another computer — it doesn’t cooperate very well with people. But people don’t cooperate much with each other either.

As we all know: humans can cooperate better if they can communicate their intentions through words + body language. So in hopes of creating a program that consistently learns to cooperate with people — we gave our AI a way to listen to people, and to talk to them.

We did that in a way that lets the AI play in previously unanticipated scenarios. The resulting algorithm achieved our goal. It consistently learns to cooperate with people as well as people do. Our results show that 2 computers make a much better team — better than 2 humans, and better than a human + a computer.
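
The paper calls this communication channel “cheap talk”: the agent maps game events to a small, fixed set of phrases it can send and receive. The events and phrases below are hypothetical stand-ins for illustration, not the published repertoire:

# Illustrative "cheap talk" table: game events mapped to canned phrases.
# Event names and phrases are invented here, not taken from the study.
CHEAP_TALK = {
    "propose_cooperation": "Let's always cooperate.",
    "accept_proposal":     "Okay, deal.",
    "punish_defection":    "Do that again and I'll stop cooperating.",
    "forgive":             "Let's start over.",
}

def speak(event):
    # Stay silent on events with no scripted phrase.
    return CHEAP_TALK.get(event, "")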

But the program isn’t a blind cooperator. In fact, the AI can get pretty angry if people don’t behave well. The pioneering computer scientist Alan Turing PhD believed machines could potentially demonstrate human-like intelligence. Since then, AI has been regularly portrayed as a threat to humanity or to human jobs.

To protect people, programmers have tried to code AI to follow legal + ethical principles — like the 3 Laws of Robotics written by Isaac Asimov PhD. Our research demonstrates that a new path is possible.

Machines designed to selfishly maximize their payoffs can — and should — make an autonomous choice to cooperate with humans across a wide range of situations. 2 humans — if they were honest with each other + loyal — would have done as well as 2 machines. But about half of the humans lied at some point. So the AI is learning that moral characteristics are better — since it’s programmed not to lie — and it also learns to maintain cooperation once it emerges.

The goal is to understand the mathematics behind cooperating with people — what attributes AI needs to develop social skills. AI must be able to respond to us — and articulate what it’s doing. It must interact with other people. This research could also help humans with their relationships. In society, relationships break down all the time. People who were friends for years all of a sudden become enemies. Because the AI is often better at reaching these compromises than we are, it could teach us how to get along better.


— featurette —

group: Institute for Advanced Study in Toulouse
tag line: Knowledge across frontiers.
web: home • channel

featurette title: Unlocking robot-human cooperation
watch | featurette


— webpages —

name: Iyad Rahwan • PhD
web: home

profile: Massachusetts Institute of Technology | visit
profile: Max Planck Institute for Human Development | visit


— webpages —

name: Jacob Crandall • PhD
web: home

profile: Brigham Young Univ. | visit


— paper —

publication: Nature Communications
web: home • channel

paper title: Cooperating with machines
read | paper

presented by

group: Springer
web: home • channel


— notes + abbreviations —

AI = artificial intelligence

MIT = Massachusetts Institute of Technology • US
BYU = Brigham Young University • US
IAST = Institute for Advanced Study in Toulouse • France

US = United States


[ post file ]

post title: digest | AI software with social skills teaches humans how to collaborate
deck: Unlocking human-computer cooperation.

collection: the Kurzweil library
tab: stories on progress

[ end of file ]