Printed from https://ideas.repec.org/p/hal/journl/hal-01897802.html

Cooperating with machines

Author

Listed:
  • Jacob Crandall

    (Computer Science Department, Brigham Young University, 3361 TMCB, Provo, UT 84602, USA)

  • Mayada Oudah

    (Khalifa University of Science and Technology [Abu Dhabi])

  • Tennom

    (UVA Digital Himalaya Project, University of Virginia, Charlottesville, VA 22904, USA)

  • Fatimah Ishowo-Oloko
  • Sherief Abdallah
  • Jean-François Bonnefon

    (CLLE-ERSS - Cognition, Langues, Langage, Ergonomie [EPHE - École Pratique des Hautes Études, PSL - Université Paris Sciences et Lettres; UT2J - Université Toulouse Jean Jaurès, Université de Toulouse; UBM - Université Bordeaux Montaigne; CNRS - Centre National de la Recherche Scientifique] and TSM - Toulouse School of Management Research [UT Capitole - Université Toulouse Capitole, Université de Toulouse; CNRS; TSM - Toulouse School of Management])

  • Manuel Cebrian

    (Optimisation Research Group - NICTA - National ICT Australia [Sydney] - University of Melbourne)

  • Azim Shariff
  • Michael Goodrich
  • Iyad Rahwan

    (MIT - Massachusetts Institute of Technology)

Abstract

Since Alan Turing envisioned artificial intelligence, technical progress has often been measured by the ability to defeat humans in zero-sum encounters (e.g., Chess, Poker, or Go). Less attention has been given to scenarios in which human–machine cooperation is beneficial but non-trivial, such as scenarios in which human and machine preferences are neither fully aligned nor fully in conflict. Cooperation does not require sheer computational power, but instead is facilitated by intuition, cultural norms, emotions, signals, and pre-evolved dispositions. Here, we develop an algorithm that combines a state-of-the-art reinforcement-learning algorithm with mechanisms for signaling. We show that this algorithm can cooperate with people and other algorithms at levels that rival human cooperation in a variety of two-player repeated stochastic games. These results indicate that general human–machine cooperation is achievable using a non-trivial, but ultimately simple, set of algorithmic mechanisms.
(This abstract was borrowed from another version of this item.)
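
The abstract describes the approach only at a high level: a reinforcement-learning mechanism is paired with a costless signaling ("cheap talk") channel so that the machine can announce plans and react to broken promises. As a loose illustration only (this is not the authors' algorithm, which the published article calls S#; every class name, payoff value, and parameter below is invented for the example), the following Python sketch combines epsilon-greedy selection among three hand-coded expert strategies with truthful pre-round announcements in a repeated prisoner's dilemma:

import random

# Stage-game payoffs for a prisoner's dilemma: (my payoff, opponent payoff).
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

EXPERTS = ("always_cooperate", "tit_for_tat", "always_defect")

class SignalingLearner:
    """Toy agent: epsilon-greedy selection among a few fixed 'expert'
    strategies, plus a cheap-talk channel announcing the planned action."""

    def __init__(self, epsilon=0.1, seed=None):
        self.rng = random.Random(seed)
        self.epsilon = epsilon
        self.value = {e: 0.0 for e in EXPERTS}  # running mean payoff per expert
        self.count = {e: 0 for e in EXPERTS}
        self.current = "tit_for_tat"
        self.last_opponent_action = "C"

    def act(self, opp_signal=None):
        if self.current == "always_cooperate":
            return "C"
        if self.current == "always_defect":
            return "D"
        # Tit-for-tat: trust the opponent's announced plan when one exists,
        # otherwise mirror the last observed action.
        return opp_signal if opp_signal is not None else self.last_opponent_action

    def signal(self):
        # Cheap talk: truthfully announce what we would play with no message.
        return self.act()

    def update(self, my_action, opp_action, opp_signal):
        reward, _ = PAYOFFS[(my_action, opp_action)]
        self.count[self.current] += 1
        self.value[self.current] += (reward - self.value[self.current]) / self.count[self.current]
        self.last_opponent_action = opp_action
        # Re-select an expert when exploring, or when the opponent promised
        # cooperation and then defected (a detected broken promise).
        broke_promise = opp_signal == "C" and opp_action == "D"
        if broke_promise or self.rng.random() < self.epsilon:
            self.current = self.rng.choice(EXPERTS)
        else:
            self.current = max(self.value, key=self.value.get)

if __name__ == "__main__":
    a, b = SignalingLearner(seed=1), SignalingLearner(seed=2)
    for _ in range(200):
        sa, sb = a.signal(), b.signal()  # exchange cheap-talk messages first
        xa, xb = a.act(sb), b.act(sa)    # then choose actions
        a.update(xa, xb, sb)
        b.update(xb, xa, sa)
    print("final experts:", a.current, b.current)

Run in self-play, the two toy agents typically settle on mutually cooperative experts; the point is only to show the structure the abstract describes, a learning loop plus a costless signaling channel, not to reproduce the paper's method or results.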

Suggested Citation

  • Jacob Crandall & Mayada Oudah & Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Post-Print hal-01897802, HAL.
  • Handle: RePEc:hal:journl:hal-01897802
    DOI: 10.1038/s41467-017-02597-8

    Download full text from publisher

    To our knowledge, this item is not available for download. To find whether it is available, there are three options:
    1. Check below whether another version of this item is available online.
    2. Check on the provider's web page whether it is in fact available.
    3. Perform a search for a similarly titled item that would be available.

    Other versions of this item:

    • Jacob W. Crandall & Mayada Oudah & Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael A. Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Nature Communications, Nature, vol. 9(1), pages 1-12, December.
    • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom, 2017. "Cooperating with Machines," IAST Working Papers 17-68, Institute for Advanced Study in Toulouse (IAST).
    • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom, 2017. "Cooperating with Machines," TSE Working Papers 17-806, Toulouse School of Economics (TSE).

    References listed on IDEAS

    1. Karandikar, Rajeeva & Mookherjee, Dilip & Ray, Debraj & Vega-Redondo, Fernando, 1998. "Evolving Aspirations and Cooperation," Journal of Economic Theory, Elsevier, vol. 80(2), pages 292-331, June.
    2. Fudenberg, Drew & Levine, David, 1998. "Learning in games," European Economic Review, Elsevier, vol. 42(3-5), pages 631-639, May.
    3. Nash, John, 1950. "The Bargaining Problem," Econometrica, Econometric Society, vol. 18(2), pages 155-162, April.
    4. Dimitris Iliopoulos & Arend Hintze & Christoph Adami, 2010. "Critical Dynamics in the Evolution of Stochastic Strategies for the Iterated Prisoner's Dilemma," PLOS Computational Biology, Public Library of Science, vol. 6(10), pages 1-8, October.
    5. Drew Fudenberg & David K. Levine, 1998. "The Theory of Learning in Games," MIT Press Books, The MIT Press, edition 1, volume 1, number 0262061945, April.
    6. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    7. David Sally, 1995. "Conversation and Cooperation in Social Dilemmas," Rationality and Society, vol. 7(1), pages 58-92, January.
    8. David G. Rand & Alexander Peysakhovich & Gordon T. Kraft-Todd & George E. Newman & Owen Wurzbacher & Martin A. Nowak & Joshua D. Greene, 2014. "Social heuristics shape intuitive cooperation," Nature Communications, Nature, vol. 5(1), pages 1-12, May.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Jacob K. Goeree & Charles A. Holt, 2001. "Ten Little Treasures of Game Theory and Ten Intuitive Contradictions," American Economic Review, American Economic Association, vol. 91(5), pages 1402-1422, December.
    2. Prajapati, Hari Ram, 2012. "An Application of Game Theory in Strategic Decision of Marriage Occurrence," MPRA Paper 105344, University Library of Munich, Germany, revised 2013.
    3. Mengel, Friederike, 2014. "Learning by (limited) forward looking players," Journal of Economic Behavior & Organization, Elsevier, vol. 108(C), pages 59-77.
    4. Fudenberg, Drew & Imhof, Lorens A., 2006. "Imitation processes with small mutations," Journal of Economic Theory, Elsevier, vol. 131(1), pages 251-262, November.
    5. Takahiro Ezaki & Yutaka Horita & Masanori Takezawa & Naoki Masuda, 2016. "Reinforcement Learning Explains Conditional Cooperation and Its Moody Cousin," PLOS Computational Biology, Public Library of Science, vol. 12(7), pages 1-13, July.
    6. Arechar, Antonio A. & Rand, David G., 2022. "Learning to be selfish? A large-scale longitudinal analysis of Dictator games played on Amazon Mechanical Turk," Journal of Economic Psychology, Elsevier, vol. 90(C).
    7. In, Younghwan, 2014. "Fictitious play property of the Nash demand game," Economics Letters, Elsevier, vol. 122(3), pages 408-412.
    8. Charness, Gary & Dufwenberg, Martin, 2003. "Promises & Partnership," Research Papers in Economics 2003:3, Stockholm University, Department of Economics.
    9. Engwerda, J.C., 2012. "Prospects of Tools from Differential Games in the Study Of Macroeconomics of Climate Change," Other publications TiSEM cac36d07-227b-4cf2-83cb-7, Tilburg University, School of Economics and Management.
    10. Jehiel, Philippe & Samet, Dov, 2005. "Learning to play games in extensive form by valuation," Journal of Economic Theory, Elsevier, vol. 124(2), pages 129-148, October.
    11. Mengel, Friederike, 2012. "Learning across games," Games and Economic Behavior, Elsevier, vol. 74(2), pages 601-619.
    12. Jindani, Sam, 2022. "Learning efficient equilibria in repeated games," Journal of Economic Theory, Elsevier, vol. 205(C).
    13. Emmanuelle Auriol & Jean-Philippe Platteau, 2017. "The explosive combination of religious decentralization and autocracy," The Economics of Transition, The European Bank for Reconstruction and Development, vol. 25(2), pages 313-350, April.
    14. Bayati, Mohsen & Borgs, Christian & Chayes, Jennifer & Kanoria, Yash & Montanari, Andrea, 2015. "Bargaining dynamics in exchange networks," Journal of Economic Theory, Elsevier, vol. 156(C), pages 417-454.
    15. Colin F. Camerer & Ernst Fehr, "undated". "Measuring Social Norms and Preferences using Experimental Games: A Guide for Social Scientists," IEW - Working Papers 097, Institute for Empirical Research in Economics - University of Zurich.
    16. Haiou Zhou, 2009. "Evolutionary Dynamics of the Market Equilibrium with Division of Labor∗," Monash Economics Working Papers 12-09, Monash University, Department of Economics.
    17. Duffy, John, 2006. "Agent-Based Models and Human Subject Experiments," Handbook of Computational Economics, in: Leigh Tesfatsion & Kenneth L. Judd (ed.), Handbook of Computational Economics, edition 1, volume 2, chapter 19, pages 949-1011, Elsevier.
    18. Georgios Chasparis & Jeff Shamma, 2012. "Distributed Dynamic Reinforcement of Efficient Outcomes in Multiagent Coordination and Network Formation," Dynamic Games and Applications, Springer, vol. 2(1), pages 18-50, March.
    19. Widgren, Mika & Napel, Stefan, 2003. "EU Conciliation Committee: Council 56 versus Parliament 6," CEPR Discussion Papers 4071, C.E.P.R. Discussion Papers.
    20. Stefan Napel & Mika Widgrén, 2003. "Bargaining and Distribution of Power in the EU's Conciliation Committee," CESifo Working Paper Series 1029, CESifo.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:hal:journl:hal-01897802. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: CCSD (email available below). General contact details of provider: https://hal.archives-ouvertes.fr/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.