Measuring caption quality is an emerging field, with many different systems being tried around the world. Ai-Media's audit was scored using the NER (Number of words, Edition errors and Recognition errors) model developed by Pablo Romero-Fresco and Juan Martinez. This model recognises that different kinds of errors affect viewers differently, and weights the quality measure accordingly. Under this model, which is also used by the UK regulator Ofcom in its quality evaluations, a score of 98% or better is considered a good standard.
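The published NER formula subtracts weighted edition and recognition errors from the word count. As a rough sketch (the function name and the example numbers are illustrative, not from the audit), the calculation looks like this:

```python
# Sketch of the NER calculation: NER = (N - E - R) / N * 100,
# where N is the number of words in the captions, E the edition
# errors and R the recognition errors. In practice each error is
# weighted by severity before being summed.
def ner_score(n_words: int, edition_errors: float, recognition_errors: float) -> float:
    """Return the NER accuracy as a percentage."""
    return (n_words - edition_errors - recognition_errors) / n_words * 100

# Illustrative example: 1500 caption words with 10 weighted edition
# errors and 15 weighted recognition errors.
score = ner_score(1500, 10, 15)
print(f"{score:.2f}%")  # 98.33% - at or above the 98% benchmark
```

In practice, assessors classify each error as minor, standard or serious and assign it a weighting before totalling, so the inputs above would already be weighted sums.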
Thirteen randomly selected programs aired in the first quarter of 2014 were audited by former Australian Caption Centre CEO Robert Scott, a renowned expert on captioning. In a statement Scott said, “Notable in the review was Ai-Media’s commitment to delivering block captions for the Nine Network’s news and current affairs programming. This synchronous presentation of accurate caption text results in increased quality scores and more importantly better comprehension by the viewer.”
Media Access Australia’s recent white paper, Caption Quality: International Approaches to Standards and Measurement features a comprehensive analysis of the NER model and other approaches to caption quality. One of its key conclusions was that proper measurement of quality and public testing and discussion of the results will lead to better outcomes and an improved experience for viewers.
Our CEO Alex Varley said, “I applaud the Nine Network and Ai-Media for publicly releasing information about their caption quality performance. This sends a signal to other broadcasters, regulators and the public that a more open approach and discussion of caption quality is the best way to improve quality. The audit also shows that providing synchronised block captions delivers the best quality for live programs.”