Large amounts of untranscribed audio data are generated every day. These audio resources can be used to develop robust acoustic models for a variety of speech-based systems, but manually transcribing the data is resource intensive, requiring funding, time and expertise. Lightly-supervised training techniques provide a means to transcribe audio rapidly, reducing the initial resource investment needed to begin the modelling process. Our findings suggest that lightly-supervised training works well for English, but when moving to an agglutinative language such as isiZulu, the process fails to achieve the performance seen for English. Additionally, performance with phone-based language models is significantly worse than with word-based language models. These results indicate that lightly-supervised training techniques depend strongly on large or well-matched text resources.
Reference:
Kleynhans, N., De Wet, F., & Barnard, E. (2015). Unsupervised acoustic model training: comparing South African English and isiZulu. In: Pattern Recognition Association of South Africa and Robotics and Mechatronics International Conference (PRASA-RobMech), Port Elizabeth, South Africa, 25-26 November 2015. IEEE. http://hdl.handle.net/10204/8629
Copyright: 2015 by IEEE.