US project aims to assess quality of live captioning

Wednesday, 10 November 2010, 16:55

A major American access provider, WGBH’s National Center for Accessible Media (NCAM), is collaborating with Nuance Communications to develop a prototype system which will automatically assess the quality of live captions on news programs.

In the US, most live or real-time captioning is done by stenocaptioners, who use a phonetic keyboard to create captions as a program is broadcast. (In Australia, live captioning is performed both by stenocaptioners and by captioners using speech recognition software.) The quality of live captions can be variable, and earlier this year NCAM conducted an online survey asking caption users to rate different types of caption errors and the degree to which they make news programs hard to follow.

In the new project, which is funded by the US Department of Education, NCAM and Nuance will develop a system of language-processing, data-analysis and benchmarking tools, built on Nuance's Dragon NaturallySpeaking speech recognition software. The project will also draw on advisors from the National Institute of Standards and Technology, Gallaudet University and the National Technical Institute for the Deaf.

The project comes as the Federal Communications Commission (FCC) seeks industry and consumer feedback while it considers setting quality standards for live captions.

