
Using Response Times to Measure Strategic Complexity and the Value of Thinking in Games


  • Gill, David (Purdue University)

  • Prowse, Victoria L. (Purdue University)


Response times are a simple low-cost indicator of the process of reasoning in strategic games (Rubinstein, 2007; Rubinstein, 2016). We leverage the dynamic nature of response-time data from repeated strategic interactions to measure the strategic complexity of a situation by how long people think on average when they face that situation (where we define situations according to the characteristics of play in the previous round). We find that strategic complexity varies significantly across situations, and we find considerable heterogeneity in how responsive subjects' thinking times are to complexity. We also study how variation in response times at the individual level across rounds affects strategic behavior and success. We find that 'overthinking' is detrimental to performance: when a subject thinks for longer than she would normally do in a particular situation, she wins less frequently and earns less. The behavioral mechanism that drives the reduction in performance is a tendency to move away from Nash equilibrium behavior.
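To make the two measures concrete, here is a minimal sketch (not the authors' code; the data layout, the column names, and the use of simple group means in place of the paper's estimation framework are all assumptions). Strategic complexity is proxied by the average response time of everyone who faces a given situation, and 'overthinking' in a round is the gap between a subject's response time and her own situation-specific norm:

```python
# Minimal sketch of the two response-time measures described in the abstract.
# Assumptions (not from the paper): a pandas DataFrame with one row per
# subject-round and columns 'subject', 'situation' (characteristics of
# previous-round play), and 'rt' (response time in seconds). The situation
# labels below are purely illustrative.

import numpy as np
import pandas as pd

def complexity_by_situation(df: pd.DataFrame) -> pd.Series:
    """Strategic complexity of a situation: how long people think on
    average when they face that situation."""
    return df.groupby("situation")["rt"].mean()

def overthinking_residual(df: pd.DataFrame) -> pd.Series:
    """Round-level deviation of a subject's response time from her own
    average in that situation (positive values = 'overthinking')."""
    norm = df.groupby(["subject", "situation"])["rt"].transform("mean")
    return df["rt"] - norm

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "subject": rng.integers(0, 5, 200),
        "situation": rng.choice(["win-stay", "lose-shift", "tie"], 200),
        "rt": rng.gamma(2.0, 3.0, 200),  # toy response times in seconds
    })
    print(complexity_by_situation(df))       # one complexity value per situation
    df["over"] = overthinking_residual(df)   # per-round overthinking measure
    print(df.head())
```

The paper then relates the residual measure to outcomes (win frequency and earnings); the sketch stops at constructing the measures themselves.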

Suggested Citation

  • Gill, David & Prowse, Victoria L., 2017. "Using Response Times to Measure Strategic Complexity and the Value of Thinking in Games," IZA Discussion Papers 10518, Institute for the Study of Labor (IZA).
  • Handle: RePEc:iza:izadps:dp10518

    Download full text from publisher

    Download Restriction: no

    References listed on IDEAS

    1. Jérôme Hergueux & Nicolas Jacquemet, 2015. "Social preferences in the online laboratory: a randomized experiment," Experimental Economics, Springer;Economic Science Association, vol. 18(2), pages 251-283, June.
    2. Neil Stewart & Christoph Ungemach & Adam J. L. Harris & Daniel M. Bartels & Ben R. Newell & Gabriele Paolacci & Jesse Chandler, 2015. "The Average Laboratory Samples a Population of 7,300 Amazon Mechanical Turk Workers," Mathematica Policy Research Reports f97b669c7b3e4c2ab95c9f805, Mathematica Policy Research.
    3. John Horton & David Rand & Richard Zeckhauser, 2011. "The online laboratory: conducting experiments in a real labor market," Experimental Economics, Springer;Economic Science Association, vol. 14(3), pages 399-425, September.
    4. Jenkins, Stephen P, 1995. "Easy Estimation Methods for Discrete-Time Duration Models," Oxford Bulletin of Economics and Statistics, Department of Economics, University of Oxford, vol. 57(1), pages 129-138, February.
    5. Katrin Schmelz & Anthony Ziegelmeyer, 2015. "Social Distance and Control Aversion: Evidence from the Internet and the Laboratory," TWI Research Paper Series 100, Thurgauer Wirtschaftsinstitut, Universität Konstanz.
    6. Anderhub, Vital & Müller, Rudolf & Schmidt, Carsten, 2001. "Design and evaluation of an economic experiment via the Internet," Journal of Economic Behavior & Organization, Elsevier, vol. 46(2), pages 227-247, October.
    7. Chesney, Thomas & Chuah, Swee-Hoon & Hoffmann, Robert, 2009. "Virtual world experimentation: An exploratory study," Journal of Economic Behavior & Organization, Elsevier, vol. 72(1), pages 618-635, October.
    8. Jon Anderson & Stephen Burks & Jeffrey Carpenter & Lorenz Götte & Karsten Maurer & Daniele Nosenzo & Ruth Potter & Kim Rocha & Aldo Rustichini, 2013. "Self-selection and variations in the laboratory measurement of other-regarding preferences across subject pools: evidence from one college student and two adult samples," Experimental Economics, Springer;Economic Science Association, vol. 16(2), pages 170-189, June.
    9. Johannes Abeler & Daniele Nosenzo, 2015. "Self-selection into laboratory experiments: pro-social motives versus monetary incentives," Experimental Economics, Springer;Economic Science Association, vol. 18(2), pages 195-214, June.
    10. Gächter, Simon & Herrmann, Benedikt, 2011. "The limits of self-governance when cooperators get punished: Experimental evidence from urban and rural Russia," European Economic Review, Elsevier, vol. 55(2), pages 193-210, February.
    11. Neil Stewart & Christoph Ungemach & Adam J. L. Harris & Daniel M. Bartels & Ben R. Newell & Gabriele Paolacci & Jesse Chandler, 2015. "The average laboratory samples a population of 7,300 Amazon Mechanical Turk workers," Judgment and Decision Making, Society for Judgment and Decision Making, vol. 10(5), pages 479-491, September.
    12. Jan Stoop & Charles N. Noussair & Daan van Soest, 2012. "From the Lab to the Field: Cooperation among Fishermen," Journal of Political Economy, University of Chicago Press, vol. 120(6), pages 1027-1056.
    13. Krupnikov, Yanna & Levine, Adam Seth, 2014. "Cross-Sample Comparisons and External Validity," Journal of Experimental Political Science, Cambridge University Press, vol. 1(01), pages 59-80, March.
    14. Blair Cleave & Nikos Nikiforakis & Robert Slonim, 2013. "Is there selection bias in laboratory experiments? The case of social and risk preferences," Experimental Economics, Springer;Economic Science Association, vol. 16(3), pages 372-382, September.
    15. Jesse Chandler & Gabriele Paolacci & Eyal Peer & Pam Mueller & Kate A. Ratliff, 2015. "Using Nonnaive Participants Can Reduce Effect Sizes," Mathematica Policy Research Reports bffac982a56e4cfba3659e74a, Mathematica Policy Research.
    16. Jeffrey Carpenter & Erika Seki, 2011. "Do Social Preferences Increase Productivity? Field Experimental Evidence From Fishermen In Toyama Bay," Economic Inquiry, Western Economic Association International, vol. 49(2), pages 612-630, April.
    17. Urs Fischbacher, 2007. "z-Tree: Zurich toolbox for ready-made economic experiments," Experimental Economics, Springer;Economic Science Association, vol. 10(2), pages 171-178, June.
    18. Gächter, Simon & Herrmann, Benedikt & Thöni, Christian, 2004. "Trust, voluntary cooperation, and socio-economic background: survey and experimental evidence," Journal of Economic Behavior & Organization, Elsevier, vol. 55(4), pages 505-531, December.
    19. Simon Gächter & Ernst Fehr, 2000. "Cooperation and Punishment in Public Goods Experiments," American Economic Review, American Economic Association, vol. 90(4), pages 980-994, September.
    20. John A. List, 2004. "Young, Selfish and Male: Field evidence of social preferences," Economic Journal, Royal Economic Society, vol. 114(492), pages 121-149, January.
    21. Michèle Belot & Raymond Duch & Luis Miller, 2010. "Who should be called to the lab? A comprehensive comparison of students and non-students in classic experimental games," Discussion Papers 2010001, University of Oxford, Nuffield College.
    22. Bock, Olaf & Baetge, Ingmar & Nicklisch, Andreas, 2014. "hroot: Hamburg Registration and Organization Online Tool," European Economic Review, Elsevier, vol. 71(C), pages 117-120.
    23. Michal Krawczyk, 2011. "What brings your subjects to the lab? A field experiment," Experimental Economics, Springer;Economic Science Association, vol. 14(4), pages 482-489, November.
    24. Guillén, Pablo & Veszteg, Róbert F., 2012. "On “lab rats”," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 41(5), pages 714-720.
    25. Berinsky, Adam J. & Huber, Gregory A. & Lenz, Gabriel S., 2012. "Evaluating Online Labor Markets for Experimental Research: Amazon.com's Mechanical Turk," Political Analysis, Cambridge University Press, vol. 20(03), pages 351-368, June.
    Full references (including those not matched with items on IDEAS)


    Citations are extracted by the CitEc Project.

    Cited by:

    1. Larbi Alaoui & Antonio Penta, 2017. "Reasoning about others’ reasoning," Economics Working Papers 1587, Department of Economics and Business, Universitat Pompeu Fabra.
    2. Larbi Alaoui & Antonio Penta, 2017. "Reasoning about Others’ Reasoning," Working Papers 1003, Barcelona Graduate School of Economics.

    More about this item


    response time; decision time; thinking time; strategic complexity; game theory; strategic games; repeated games; beauty contest; cognitive ability; personality

    JEL classification:

    • C72 - Mathematical and Quantitative Methods - - Game Theory and Bargaining Theory - - - Noncooperative Games
    • C91 - Mathematical and Quantitative Methods - - Design of Experiments - - - Laboratory, Individual Behavior





    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:iza:izadps:dp10518. See general information about how to correct material in RePEc.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Mark Fallak.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.


    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service hosted by the Research Division of the Federal Reserve Bank of St. Louis. RePEc uses bibliographic data supplied by the respective publishers.