BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//events.la.psu.edu//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:STANDARD
DTSTART:20201101T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20200308T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:17708-0229d95ddb9c662ff2b1a5b376124d30@events.la.psu.edu
DTSTAMP:20260307T150120Z
DTSTART;TZID=America/New_York:20251106T140000
DTEND;TZID=America/New_York:20251106T150000
SUMMARY:Consortium Workshop: Nick Byrd
DESCRIPTION:How Can We Improve Our Decisions? Results From Multiple Me
	thods And Experiments\n\nNick Byrd\, Ph.D.\n\nAssistant Professor of Cog
	nitive Science\n\nGeisinger College of Health Sciences\n\nDepartment of 
	Bioethics and Decision Sciences\n\nNobel laureates like Daniel Kahneman 
	popularized research about cognitive biases. To better understand and am
	eliorate these reasoning errors\, we have been developing more scalable 
	methods to (a) trace reasoning processes and (b) improve faulty reasonin
	g. So far\, we have run at least a dozen experiments involving thousands
	 of participants. Consider some examples:\n\nThinking aloud is a common 
	critical thinking exercise in education\, computer science\, and other a
	nalytic decision-making contexts. Alas\, legacy methods of recording peo
	ple as they think out loud are time-consuming. Also\, thinking aloud may
	 use cognitive resources that could otherwise be spent on improving deci
	sions. To address these issues\, I partner with startups to develop web 
	apps that remotely (and consensually) record and transcribe people’s rea
	soning process (e.g.\, from the microphone on the participants’ smartpho
	ne)\, which has drastically expedited data collection and
	 transcription\, from months to hours. Behavioral results have improved
	 our understanding o
	f what reflection tests measure and how people overcome faulty intuition
	s.\n\nDebate is supposed to enhance intelligence analysis\, policymaking
	\, and other forms of critical thinking. However\, facilitating debates 
	requires significant human resources. So\, we developed web apps to autom
	atically facilitate solitary and discussion-based reflection with varyin
	g financial incentives. This allows hundreds of debates to be recorded i
	n parallel from afar\, drastically accelerating data collection and tran
	scription. Our quantitative decision analyses find that conversation can
	 be better than cash at improving decisions.\n\nOur thinking-al
	oud\, writing\, and chatting protocols also yielded decision transcripts
	 that contain much more information than standard survey data. Research 
	assistants\, crowd workers\, and language models can categorize and quan
	tify aspects of these step-by-step decision records. The resulting ratin
	gs allow us to quantitatively test the assumptions of cognitive tests an
	d isolate which reasoning patterns actually predict better decisions.\n\
	nWe are also testing interventions such as information formatting (e.g.\
	, argument mapping or data visualization)\, philosophical reflection (e.
	g.\, thought experiments)\, testing effects (e.g.\, having people comple
	te some reasoning test items before the primary test of reasoning)\, nud
	ges (e.g.\, text message reminders)\, and boosts (e.g.\, educational inf
	ographics).\n\nUltimately\, few interventions reliably improve decisions
	. And some popular interventions seem ineffective in improved research d
	esigns (e.g.\, with larger samples\, better data\, better measures\, or 
	more controlled variables). This presentation will dive deeper into the 
	methods and results.\n\nFor the Zoom link\, email Daryl Cameron at
	 cdc49@psu.edu.\n\nFor more details: https://events.la.psu.edu/event/con
	sortium-workshop-nick-byrd/
X-ALT-DESC;FMTTYPE=text/html:<html><head></head><body><p><strong>How Can
	 We Improve Our Decisions? Results From Multiple Methods And Experiments
	</strong></p><p>Nick Byrd, Ph.D.</p><p>Assistant Professor of Cognitive 
	Science</p><p>Geisinger College of Health Sciences</p><p>Department of B
	ioethics and Decision Sciences</p><p>Nobel laureates like Daniel Kahnema
	n popularized research about cognitive biases. To better understand and 
	ameliorate these reasoning errors, we have been developing more scalable
	 methods to (a) trace reasoning processes and (b) improve faulty reasoni
	ng. So far, we have run at least a dozen experiments involving thousands
	 of participants. Consider some examples:</p><p>Thinking aloud is a comm
	on critical thinking exercise in education, computer science, and other 
	analytic decision-making contexts. Alas, legacy methods of recording peo
	ple as they think out loud are time-consuming. Also, thinking aloud may
	 use cognitive resources that could otherwise be spent on improving decis
	ions. To address these issues, I partner with startups to develop web ap
	ps that remotely (and consensually) record and transcribe people’s reaso
	ning process (e.g., from the microphone on the participants’ smartphone)
	, which has drastically expedited data collection and transcription,
	 from months to hours. Behavioral results have improved our
	 understanding of wh
	at reflection tests measure and how people overcome faulty intuitions.</
	p><p>Debate is supposed to enhance intelligence analysis, policymaking, 
	and other forms of critical thinking. However, facilitating debates requ
	ires significant human resources. So, we developed web apps to automatica
	lly facilitate solitary and discussion-based reflection with varying fin
	ancial incentives. This allows hundreds of debates to be recorded in par
	allel from afar, drastically accelerating data collection and transcript
	ion. Our quantitative decision analyses find that conversation can be be
	tter than cash at improving decisions.</p><p>Our thinking-aloud
	, writing, and chatting protocols also yielded decision transcripts that
	 contain much more information than standard survey data. Research assis
	tants, crowd workers, and language models can categorize and quantify as
	pects of these step-by-step decision records. The resulting ratings allo
	w us to quantitatively test the assumptions of cognitive tests and isola
	te which reasoning patterns actually predict better decisions.</p><p>We 
	are also testing interventions such as information formatting (e.g., arg
	ument mapping or data visualization), philosophical reflection (e.g., th
	ought experiments), testing effects (e.g., having people complete some r
	easoning test items before the primary test of reasoning), nudges (e.g.,
	 text message reminders), and boosts (e.g., educational infographics).</
	p><p>Ultimately, few interventions reliably improve decisions. And some 
	popular interventions seem ineffective in improved research designs (e.g
	., with larger samples, better data, better measures, or more controlled
	 variables). This presentation will dive deeper into the methods and res
	ults.</p><p>For the Zoom link, email Daryl Cameron at cdc49@psu.edu.</p><p>Fo
	r more details: <a href='https://events.la.psu.edu/event/consortium-work
	shop-nick-byrd/'>https://events.la.psu.edu/event/consortium-workshop-nic
	k-byrd/</a></p></body></html>
URL:https://moralconsortium.psu.edu
LOCATION:413 Welch Building
END:VEVENT
END:VCALENDAR