Abstract:
Speech perception requires the integration of information from multiple phonetic and phonological dimensions. Numerous studies have investigated the mapping between multiple acoustic-phonetic dimensions and single phonological dimensions (e.g., spectral and temporal properties of stop consonants in voicing contrasts). Far fewer studies have addressed relationships between phonological dimensions. Most such studies have focused on the perception of sequences of phones (e.g., 'bid', 'bed', 'bit', 'bet'), though some have focused on multiple phonological dimensions within single phones (e.g., voicing and place of articulation in [p], [b], [t], and [d]). However, strong assumptions about the relevant acoustic-phonetic dimensions and/or the nature of perceptual and decisional information integration limit previous findings in important ways. New methodological developments in the General Recognition Theory framework make it possible to test a number of these assumptions and provide a more complete model of the distinct perceptual and decisional processes in speech sound identification. A Bayesian non-parametric analysis of data from four experiments probing identification of two sets of consonants in onset (syllable-initial) and coda (syllable-final) position indicates that, for most subjects, the integration of phonological information is partially independent in both perception and decision making, and that patterns of independence and interaction vary with the set of phonological dimensions under consideration and with syllable position.