A System for Adaptive High-Variability Segmental Perceptual Training: Implementation, Effectiveness, Transfer

Qian, M., Chukharev-Hudilainen, E., & Levis, J.

Language Learning & Technology Volume 22, Number 1, ISSN 1094-3501


Many L2 phonological contrasts are difficult to acquire without instruction, and these perceptual difficulties may also be related to intelligibility in production. Instruction on perceptual contrasts is more likely to be successful when phonetically variable input is made available through computer-assisted pronunciation training. However, few computer-assisted programs have demonstrated flexibility in diagnosing and treating individual learner problems, or have made effective use of linguistic resources such as corpora for creating training materials. This study introduces a system for segmental perceptual training that combines corpus-based word-frequency lists, high-variability phonetic input, and text-to-speech technology to automatically create discrimination and identification perception exercises customized for individual learners. The effectiveness of the system is evaluated in an experiment with a pre- and post-test design involving 32 adult Russian-speaking learners of English as a foreign language. The participants' perceptual gains were found to transfer to novel voices, but not to untrained words. Potential factors underlying the absence of word-level transfer are discussed. The training model provides an example for replication in language teaching and research settings.


Qian, M., Chukharev-Hudilainen, E., & Levis, J. (2018). A system for adaptive high-variability segmental perceptual training: Implementation, effectiveness, transfer. Language Learning & Technology, 22(1), 69-96. Retrieved January 22, 2021.

This record was imported from ERIC on January 9, 2019.

ERIC is sponsored by the Institute of Education Sciences (IES) of the U.S. Department of Education.

Copyright for this record is held by the content creator. For more details see ERIC's copyright policy.