Judge Arena: Benchmarking LLMs as Evaluators

LLM-as-a-Judge has emerged as a popular way to grade natural language outputs from LLM applications, but how do we know which models make the best judges?

We’re excited to launch Judge Arena – a platform that lets anyone easily compare models as judges side by side. Just run the judges on a test sample and vote for the judge you agree with most. The results feed into a leaderboard that ranks the best judges.
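Crowdsourced pairwise votes like these are typically aggregated into a leaderboard using an Elo-style rating system (as popularized by Chatbot Arena). The sketch below is illustrative only and not Judge Arena's actual implementation; function names and the K-factor are assumptions:

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that judge A wins a head-to-head vote, per the Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update_elo(rating_a: float, rating_b: float, a_won: bool, k: float = 32) -> tuple[float, float]:
    """Update both judges' ratings after one user vote.

    k controls how fast ratings move; 32 is a common default, not a
    value taken from Judge Arena.
    """
    expected_a = expected_score(rating_a, rating_b)
    actual_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (actual_a - expected_a)
    new_b = rating_b + k * ((1 - actual_a) - (1 - expected_a))
    return new_a, new_b

# Two evenly rated judges; the winner of a vote gains what the loser gives up.
a, b = update_elo(1000, 1000, a_won=True)
print(a, b)  # 1016.0 984.0
```

Repeating this update over many randomized, anonymized votes converges toward a stable ranking, which is why arena-style leaderboards favor large volumes of crowdsourced comparisons over a handful of curated ones.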

[Figure: Judge Arena – crowdsourced, randomized judge comparisons]
