DOI: 10.1101/479089 | Nov 29, 2018 | Paper

Integrating prediction errors at two time scales permits rapid recalibration of speech sound categories

bioRxiv: the Preprint Server for Biology
Itsaso Olasagasti, Anne-Lise Giraud

Abstract

Speech perception can be derived from internal models of how speech sounds are systematically associated with sensory features. When the features associated with a speech sound change, listeners should recalibrate their internal model by appropriately weighing new versus old evidence in a volatility-dependent manner. Models of speech recalibration have classically ignored volatility. Models that explicitly consider volatility have been designed to describe the behavior of human participants in tasks where sensory cues are associated with arbitrary, experimenter-defined categories or rewards. In that setting, a model that maintains a single representation of the category but continuously adapts the learning rate works well. We argue that recalibration of existing natural categories is better described by a model that represents sound categories at different time scales. We illustrate our proposal by modeling the rapid recalibration of speech categories reported by Lüttke et al. (2016).
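
The contrast drawn in the abstract, between a single category representation with an adaptive learning rate and a category represented at two time scales, can be made concrete with a minimal sketch. The code below is an illustrative toy, not the authors' generative model: it assumes a category is summarized by a single scalar feature mean, uses simple delta-rule updates, and lets a fast trace relax back toward a slow trace. All parameter names (lr_fast, lr_slow, decay) and values are assumptions chosen for illustration.

```python
# Toy sketch of recalibration at two time scales (illustrative only).
# Assumption: a speech category is summarized by one scalar feature mean.

def single_scale_update(mean, observation, lr=0.1):
    """Classic delta rule: one category representation, one learning rate."""
    return mean + lr * (observation - mean)

def two_scale_update(fast, slow, observation,
                     lr_fast=0.5, lr_slow=0.01, decay=0.2):
    """Two representations of the same category at different time scales.

    The fast trace tracks recent, possibly transient shifts in the sound's
    acoustics; the slow trace preserves the long-term category. The fast
    trace also relaxes toward the slow one, so recalibration is rapid but
    reversible once the original statistics return.
    """
    fast = fast + lr_fast * (observation - fast)
    slow = slow + lr_slow * (observation - slow)
    fast = fast + decay * (slow - fast)  # fast trace drifts back toward slow
    return fast, slow

if __name__ == "__main__":
    fast = slow = 0.0            # long-term category mean (arbitrary units)
    shifted_tokens = [1.0] * 10  # brief exposure to shifted acoustics
    for x in shifted_tokens:
        fast, slow = two_scale_update(fast, slow, x)
    print(f"fast trace after exposure: {fast:.2f}, slow trace: {slow:.2f}")
```

In this toy version, a brief run of shifted tokens moves the fast trace substantially while leaving the slow trace almost unchanged, which is one way to capture rapid but non-destructive recalibration of an existing category.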
