BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//events.la.psu.edu//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:STANDARD
DTSTART:20201101T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20200308T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:16891-583573b1afa6e776d318794567821841@events.la.psu.edu
DTSTAMP:20260412T053330Z
DTSTART;TZID=America/New_York:20201016T090000
DTEND;TZID=America/New_York:20201016T103000
SUMMARY:Roger Beaty (Penn State) - Using Computational Semantic Models to Assess
	 Verbal Creativity
DESCRIPTION:Using Computational Semantic Models to Assess Verbal Creativ
	ity\n\nConducting creativity research often involves asking several huma
	n raters to judge responses to verbal creativity tasks. Although such su
	bjective scoring methods have proved useful\, they have two inherent lim
	itations—labor cost (raters typically code thousands of responses) and s
	ubjectivity (raters vary on their perceptions of creativity)—raising cla
	ssic psychometric threats to reliability and validity. In this talk\, I 
	attempt to address these limitations by capitalizing on recent developme
	nts in automated scoring of verbal creativity via semantic distance\, a 
	computational method that uses natural language processing to quantify t
	he semantic relatedness of texts. Five studies compared the top-performi
	ng semantic models (e.g.\, GloVe\, continuous bag of words) previously s
	hown to have the highest correspondence to human relatedness judgements.
	 We assessed these semantic models in relation to human creativity ratin
	gs from a canonical verbal creativity task and novelty/creativity rating
	s from two word association tasks. We find that a latent semantic distan
	ce factor—composed of the common variance from five semantic models—rel
	iably predicts human ratings across all creativity tasks\, with semantic
	 distance explaining over 80% of the variance in creativity and novelty 
	ratings. We also replicate an established experimental effect in the cre
	ativity literature and show that semantic distance correlates with other
	 creativity measures\, demonstrating convergent validity. I conclude by 
	describing an open platform that can efficiently compute semantic distan
	ce\, and I discuss potential applications of semantic distance for asses
	sing creative language use.\n\nFor more details: https://events.la.psu.e
	du/event/roger-beaty-penn-state-using-computational-semantic-models-to-a
	ssess-verbal-creativity/
X-ALT-DESC;FMTTYPE=text/html:<html><head></head><body><h2 style="text-al
	ign: center; ">Using Computational Semantic Models to Assess Verbal Crea
	tivity</h2><p>Conducting creativity research often involves asking sever
	al human raters to judge responses to verbal creativity tasks. Although 
	such subjective scoring methods have proved useful, they have two inhere
	nt limitations—labor cost (raters typically code thousands of responses)
	 and subjectivity (raters vary on their perceptions of creativity)—raisi
	ng classic psychometric threats to reliability and validity. In this tal
	k, I attempt to address these limitations by capitalizing on recent deve
	lopments in automated scoring of verbal creativity via semantic distance
	, a computational method that uses natural language processing to quanti
	fy the semantic relatedness of texts. Five studies compared the top-perf
	orming semantic models (e.g., GloVe, continuous bag of words) previously
	 shown to have the highest correspondence to human relatedness judgement
	s. We assessed these semantic models in relation to human creativity rat
	ings from a canonical verbal creativity task and novelty/creativity rati
	ngs from two word association tasks. We find that a latent semantic dist
	ance factor—composed of the common variance from five semantic models—r
	eliably predicts human ratings across all creativity tasks, with semanti
	c distance explaining over 80% of the variance in creativity and novelty
	 ratings. We also replicate an established experimental effect in the cr
	eativity literature and show that semantic distance correlates with othe
	r creativity measures, demonstrating convergent validity. I conclude by 
	describing an open platform that can efficiently compute semantic distan
	ce, and I discuss potential applications of semantic distance for assess
	ing creative language use.</p><p>For more details: <a href='https://even
	ts.la.psu.edu/event/roger-beaty-penn-state-using-computational-semantic-
	models-to-assess-verbal-creativity/'>https://events.la.psu.edu/event/rog
	er-beaty-penn-state-using-computational-semantic-models-to-assess-verbal
	-creativity/</a></p></body></html>
END:VEVENT
END:VCALENDAR