Japan trials live captioning system

Monday, 28 September 2015, 1:24pm

Kyoto University in Japan is trialling a new live captioning system for use in academic conferences, using automatic speech recognition to cut down on the amount of human input needed to deliver live captions.

Image: Camphor tree in front of the Clock Tower at Kyoto University

The drive for this is new accessibility laws scheduled for 2016, which mandate reasonable accommodation provisions for people with disabilities. In a university and conference setting, this means the amount of captioning will need to increase.

A major issue is a shortage of skilled captioners who can undertake this kind of work. Live captioning in Japan is currently produced by two captioners working together, because of the limitations (until now) of automated systems. They caption alternate phrases, using the full extent of a keyboard to produce the Kanji script, which lets them deliver a live caption flow without too much delay. However, this production method relies on two captioners who work well together and can anticipate each other's actions.

The new system of live captioning substitutes a voice recognition system for one of the captioners; the remaining captioner then corrects and edits the output before it is displayed.
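To make the workflow concrete, the following is a minimal, hypothetical sketch of that recognise-then-correct loop; it is not the Kyoto University software, and the function names (asr_transcribe, human_correct, caption_stream) are illustrative stand-ins only.

```python
# Hypothetical sketch: an automatic recogniser drafts each caption and a
# single human corrector fixes it before it is sent to the display.

def asr_transcribe(audio_chunk):
    """Stand-in for a real speech recognition engine."""
    # A real system would return the recogniser's best hypothesis here.
    return "kyoto university is trailing a captioning system"

def human_correct(draft):
    """Stand-in for the remaining captioner's edits to the draft text."""
    return draft.replace("trailing", "trialling").capitalize()

def caption_stream(audio_chunks):
    """Yield display-ready captions: ASR draft -> human correction -> display."""
    for chunk in audio_chunks:
        draft = asr_transcribe(chunk)
        yield human_correct(draft)

for caption in caption_stream([b"audio"]):
    print(caption)
```

The point of the sketch is only that the human moves from typing captions to reviewing them, which is where the reduction in labour comes from.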

The system made its debut on 22 August 2015, and the research team hopes it can reduce the cost of providing live captioning to around a tenth of its current level. To improve the accuracy of the voice recognition system, conference speakers are asked to speak slowly and clearly and to repeat questions from the audience. Speakers are also asked to provide their conference papers ahead of time, which allows proper names and other technical terms to be entered into the system, improving accuracy.
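As an illustration of that last step, here is a hedged sketch of how terms might be harvested from papers supplied in advance and loaded into a recogniser's custom vocabulary; the CustomVocabulary class and extract_candidate_terms function are assumptions for the example, not part of the system described in the article.

```python
# Hypothetical sketch: collect proper names and recurring technical terms
# from conference papers and add them to an ASR engine's user dictionary.
import re
from collections import Counter

def extract_candidate_terms(paper_text, min_count=2):
    """Return capitalised words and frequently repeated tokens as candidates."""
    words = re.findall(r"[A-Za-z][A-Za-z\-]+", paper_text)
    counts = Counter(words)
    capitalised = {w for w in words if w[0].isupper()}
    frequent = {w for w, c in counts.items() if c >= min_count}
    return sorted(capitalised | frequent)

class CustomVocabulary:
    """Toy stand-in for a recogniser's user dictionary or phrase list."""
    def __init__(self):
        self.terms = set()

    def add_terms(self, terms):
        self.terms.update(terms)

papers = ["Kyoto University evaluates live captioning with speech recognition."]
vocab = CustomVocabulary()
for paper in papers:
    vocab.add_terms(extract_candidate_terms(paper))
print(sorted(vocab.terms))
```

In a real deployment the harvested terms would also need their readings (pronunciations) before a Japanese recogniser could use them, a detail the sketch leaves out.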

“This is a very interesting development for countries that don’t have the benefit of all of the automatic voice recognition carried out in English-speaking countries and using European languages where such systems are well developed,” said Media Access Australia CEO Alex Varley.

“A problem that these other languages face is that there aren’t enough people working on such systems to see significant breakthroughs quickly. This work in Japan should help other language groups, particularly those with smaller speaker numbers.”
