How Can We Improve Our Decisions? Results From Multiple Methods And Experiments
Nick Byrd, Ph.D.
Assistant Professor of Cognitive Science
Geisinger College of Health Sciences
Department of Bioethics and Decision Sciences
Nobel laureates like Daniel Kahneman popularized research about cognitive biases. To better understand and ameliorate these reasoning errors, we have been developing more scalable methods to (a) trace reasoning processes and (b) improve faulty reasoning. So far, we have run at least a dozen experiments involving thousands of participants. Consider some examples:
Thinking aloud is a common critical-thinking exercise in education, computer science, and other analytic decision-making contexts. Alas, legacy methods of recording people as they think out loud are time-consuming. Also, thinking aloud may consume cognitive resources that could otherwise be spent on improving decisions. To address these issues, we partner with startups to develop web apps that remotely (and consensually) record and transcribe people’s reasoning processes (e.g., via the microphone on participants’ smartphones), which has drastically expedited data collection and transcription, from months to hours. Behavioral results have improved our understanding of what reflection tests measure and how people overcome faulty intuitions.
Debate is supposed to enhance intelligence analysis, policymaking, and other forms of critical thinking. However, facilitating debates requires significant human resources. So, we developed web apps to automatically facilitate solitary and discussion-based reflection with varying financial incentives. This allows hundreds of debates to be recorded in parallel from afar, drastically accelerating data collection and transcription. Our quantitative decision analyses find that conversation can improve decisions more than cash incentives do.
Our thinking-aloud, writing, and chatting protocols also yield decision transcripts that contain much more information than standard survey data. Research assistants, crowd workers, and language models can categorize and quantify aspects of these step-by-step decision records. The resulting ratings allow us to quantitatively test the assumptions of cognitive tests and to isolate which reasoning patterns actually predict better decisions.
We are also testing interventions such as information formatting (e.g., argument mapping or data visualization), philosophical reflection (e.g., thought experiments), testing effects (e.g., having people complete some reasoning test items before the primary test of reasoning), nudges (e.g., text message reminders), and boosts (e.g., educational infographics).
Ultimately, few interventions reliably improve decisions. Moreover, some popular interventions prove ineffective under improved research designs (e.g., larger samples, better data, better measures, or more carefully controlled variables). This presentation will dive deeper into the methods and results.
For the Zoom link, email Daryl Cameron at cdc49@psu.edu.
Occurrences
Thursday, November 6, 2025, 2:00 p.m.–3:00 p.m.
Our events and programs are open to all students regardless of sex, gender, sexual orientation, race, or any other protected class.
The College of the Liberal Arts is committed to building a community of belonging for all.