Adding Benchmaxxer Repellent to the Open ASR Leaderboard

“When a measure becomes a target, it ceases to be a good measure.” (Goodhart’s Law)

TL;DR: Appen Inc. and DataoceanAI have provided high-quality English ASR datasets covering scripted and conversational speech across multiple accents. To guard against benchmaxxing and test-set contamination, we will keep these datasets private, preserving a high-quality measure of performance across multiple tasks.

We’re not updating the average WER at this time: by default, the leaderboard’s Average WER remains computed on public datasets only. You can optionally include the private datasets using the toggle to see their impact 👀
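The toggle described above amounts to choosing which per-dataset WER scores enter the macro-average. A minimal sketch of that logic, using hypothetical dataset names and WER values (the real leaderboard computes these per model from its evaluation runs):

```python
def average_wer(wers: dict[str, float], include_private: bool = False,
                private: frozenset[str] = frozenset()) -> float:
    """Macro-average WER over datasets, optionally including private ones."""
    selected = [wer for name, wer in wers.items()
                if include_private or name not in private]
    return sum(selected) / len(selected)

# Hypothetical scores, in % WER.
wers = {"ami": 14.2, "librispeech-clean": 2.1, "appen-conversational": 18.5}
private = frozenset({"appen-conversational"})

public_avg = average_wer(wers, private=private)                        # public only
full_avg = average_wer(wers, include_private=True, private=private)    # with toggle on
```

A macro-average (mean of per-dataset WERs) weights each dataset equally regardless of size, which is why adding or removing one hard conversational set can noticeably shift a model's headline number.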


Since its launch in September 2023, the Open ASR Leaderboard has been visited over 710K times. We’re blown away by the community’s interest and motivation to keep pushing speech recognition 🗣️


Two words sum up the objectives (but also the challenges) of maintaining a benchmark like the Open ASR Leaderboard:

  1. Standardization: models differ in their usage conventions and output formats, e.g. with or without punctuation and casing. Datasets pose the same challenge and can be structured differently. To this end, all test sets have been gathered into a single dataset on the Hub for easy access and consistent evaluation.

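Standardizing punctuation and casing matters because WER is computed on exact word matches. A minimal stand-in normalizer sketches the idea (the leaderboard relies on a more thorough normalizer; this simplified version only lowercases, strips ASCII punctuation, and collapses whitespace):

```python
import re
import string

def normalize(text: str) -> str:
    """Simplified transcript normalizer: lowercase, drop ASCII
    punctuation, collapse runs of whitespace."""
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    return re.sub(r"\s+", " ", text).strip()

# A cased, punctuated reference and a bare hypothesis normalize to
# the same string, so they incur zero word errors.
reference = "Hello, world! How are you?"
hypothesis = "hello world  how are you"
assert normalize(reference) == normalize(hypothesis)
```

Without a shared normalization step, a model that emits rich punctuation would be penalized against a plain-text reference (and vice versa), making cross-model WER comparisons meaningless.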
