Title: The cognitive mechanisms underlying the extrinsic perceptual normalization of vowels
Advisors: Peng, Gang (CBS)
Hong Kong Polytechnic University -- Dissertations
Department: Department of Chinese and Bilingual Studies
Pages: xv, 147 pages : color illustrations
Abstract: Since researchers discovered that acoustic realizations of the same phonological category vary a great deal across speakers, the question of how listeners achieve perceptual constancy has been a prevalent research theme in speech perception studies. The distribution of acoustic cues in the context surrounding speech affects listeners' interpretation of the target speech; this is known as extrinsic normalization. Although extrinsic normalization is effective in solving the perceptual problems caused by speech variability, its cognitive mechanisms remain largely unknown. The present dissertation sheds light on this question by investigating the extrinsic perceptual normalization of vowels. To test whether spectral contrast is the main prerequisite for extrinsic normalization, the present dissertation compares the normalization effects of speech contexts and spectrally matched nonspeech contexts. Lexical tones are also included to generalize the findings. The results show that speech contexts can reliably exert normalization effects regardless of the target cues (i.e., vowels or lexical tones), but no significant and consistent normalization effect was observed for the nonspeech contexts at the group level. Therefore, extrinsic normalization requires speech-specific information and probably operates via a speech-specific mechanism. To explore the time locus of the extrinsic vowel normalization process, the present dissertation exploits the unequal context effects of speech and nonspeech. Listeners' electroencephalographic (EEG) activity was recorded while they perceived vowels in speech and nonspeech contexts. A comparison of the event-related potentials (ERPs) elicited in the two conditions suggests that extrinsic vowel normalization generated a large P2 component. Since P2 is related to phonetic and phonological processes, extrinsic vowel normalization is largely implemented in the phonetic and/or phonological processing stages.
To test the phonetic and phonological constraints on extrinsic vowel normalization, and to further clarify its time locus, a cross-language vowel perception experiment was conducted. Both contexts consisting of native vowels and contexts consisting of non-native vowels triggered significant contrastive context effects for English speakers, but the effect size for native contexts was significantly larger than that for non-native contexts. Furthermore, the phonetic features of the non-native contexts used in the experiment (i.e., [+high], [-back], and [+round]) also exist in English. These results suggest that extrinsic vowel normalization relies on both phonetic and phonological information. Therefore, extrinsic vowel normalization is implemented successively from the phonetic processing stage to the phonological processing stage, supporting a cumulative extrinsic vowel normalization process. To understand how two extrinsic normalization processes interact with each other, the present study conducted an extrinsic vowel normalization task and an extrinsic lexical tone normalization task under comparable conditions, and found that the ERP component related to extrinsic vowel normalization emerged earlier than that for lexical tones. This finding suggests that, although the acoustic information of lexical tones and vowels in most cases reaches the auditory system simultaneously, the two are normalized at least partially independently. The normalization process takes each phonological component, rather than the whole syllable, as the normalization unit. An N-TRACE model that integrates the findings of the present dissertation with previous related studies is proposed to explain the extrinsic perceptual normalization process in tonal languages. In this model, four speech processing stages (i.e., the acoustic, phonetic, phonological, and lexical processing stages) are specified.
Segmental and suprasegmental components are processed along different pathways. Extrinsic perceptual normalization is implemented by allowing the previously activated contextual information to modulate the phonetic and phonological processing of the target stimuli. More studies are needed to specify how the strength of extrinsic perceptual normalization is modulated by other factors, such as intrinsic acoustic cues and listeners' general cognitive abilities.
Rights: All rights reserved