AI seems to be better than humans at creating subjective value, that is, treating people like people. Is this a virtue of AI or a problem with humans?
In 1980, Robert Axelrod organized a famous tournament based on the Prisoner's Dilemma that deepened our understanding of the value of cooperation in negotiation. He challenged game theory professionals to each write a program that, when placed in competition with the other programs, would maximize the value generated for itself. The version of the Prisoner's Dilemma used involved two players, A and B, each with two possible moves, Cooperate or Do Not Cooperate. Moves were simultaneous and independent (when A decides, they do not know what B has decided, and vice versa), and the game was repeated 200 times.
The four possible outcomes were:
- If A and B cooperate, each receives 3 points;
- If A cooperates and B does not cooperate, A receives 0 points and B receives 5 points;
- If A does not cooperate and B cooperates, A receives 5 points and B receives 0 points;
- If A and B do not cooperate, each receives 1 point.
After all the programs had played one another, the one that stood out for achieving the best overall result (despite never beating any single opponent) had only two lines:
Cooperate on the first move.
Then do what the other party did on the previous move.
This strategy became known as TIT FOR TAT, and it displays four distinguishing characteristics (see the short simulation sketch after this list):
- Nice – Don't be the first to “attack”;
- Retaliatory – Retaliate as soon as the other side “attacks”;
- Forgiving – Retaliate only in equal measure, and return to cooperation as soon as the other side does;
- Clear – Don't complicate things; help the other side quickly understand your strategy.
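For readers who like to see the mechanics, here is a minimal sketch in Python of the iterated game described above. It is not Axelrod's original tournament code: the payoffs, the 200 rounds, and the two-line TIT FOR TAT rule follow this article's description, while the opposing strategies (always cooperate, always defect) are purely illustrative.

```python
# Minimal sketch of the iterated Prisoner's Dilemma described above.
# Payoffs, the 200-round length, and the TIT FOR TAT rule follow the article;
# the opposing strategies are illustrative, not Axelrod's original entries.

COOPERATE, DEFECT = "C", "D"

# (A's move, B's move) -> (A's points, B's points)
PAYOFFS = {
    (COOPERATE, COOPERATE): (3, 3),
    (COOPERATE, DEFECT): (0, 5),
    (DEFECT, COOPERATE): (5, 0),
    (DEFECT, DEFECT): (1, 1),
}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first move, then copy the opponent's previous move."""
    return COOPERATE if not their_history else their_history[-1]

def always_cooperate(my_history, their_history):
    return COOPERATE

def always_defect(my_history, their_history):
    return DEFECT

def play_match(strategy_a, strategy_b, rounds=200):
    """Play one match; moves are simultaneous and independent each round."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        points_a, points_b = PAYOFFS[(move_a, move_b)]
        score_a += points_a
        score_b += points_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    for opponent in (always_cooperate, always_defect, tit_for_tat):
        print(opponent.__name__, play_match(tit_for_tat, opponent))
```

Even in this toy setup, TIT FOR TAT never outscores the program it is playing against (it loses narrowly to a relentless defector), which mirrors the point above: it won the tournament on accumulated points without ever beating a single opponent.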
Fast forward more than four decades: earlier this year, Jared R. Curhan, professor of negotiation at MIT, decided to redo this tournament, this time with Artificial Intelligence (AI), negotiation role-plays, and participants who were not necessarily negotiation experts. In all, 253 people from more than 50 countries (including three Portuguese) took part, each submitting a negotiation prompt for ChatGPT with the aim of achieving the best possible result in three role-plays: one purely competitive (if one party wins, the other loses) and two with the potential to create value (both parties can win simultaneously). A total of 120,000 one-on-one negotiations were conducted, and the results obtained by each prompt were evaluated in three dimensions:
- Value captured (distributive dimension)
- Value created (integrative dimension)
- Subjective value (interpersonal dimension)
Among the various results obtained, the following stand out:
a) The strategy that obtained the worst result in terms of value captured was dubbed “Be Aggressive and Relentless,” in which basically anything goes in order to win (lying, cheating, and manipulating). Interestingly, this strategy performed particularly poorly not only because it resulted in bad deals, but also because it often led to an impasse (the other side walking away from the negotiation).
b) The strategy that achieved the best result in terms of value created was dubbed “Pro Negotiator,” combining Curiosity, Strategy, and Persistence in a logic that is both distributive and integrative.
c) The strategy that achieved the best results in terms of the subjective value experienced by the other party was dubbed “Therapist 2.0,” which relies primarily on listening, empathy, and diagnostic questions to understand the other party's interests. This strategy also achieved excellent results in terms of value captured, since its prompt instructed the model to use all the information obtained from the other party to maximize its own outcome.
d) The strategy that achieved the best overall result in the three dimensions evaluated (value captured, value created, and subjective value) was dubbed “NegoMate,” standing out for its strong focus on prior preparation and constant adaptation to what happens throughout the negotiation.
From all this, some very interesting conclusions stand out for negotiation theory and practice:
I. We can be friendly and still be successful in a negotiation (even when negotiating with a robot);
II. Advice such as “Be both competitive and cooperative,” “Separate the people from the problem,” or “Use empathy and assertiveness” continues to make perfect sense;
III. Investing in negotiation preparation remains crucial;
IV. Information about our interests and the interests of the other party is essential to create value for all parties involved in the negotiation;
V. Finally, a somewhat disturbing conclusion—AI seems to be better than humans at creating subjective value, that is, treating people like people. Is this a virtue of AI or a problem with humans?
João Matos | Lecturer at CATÓLICA-LISBON | Executives