BigCodeArena: Judging code generations end to end with code executions

Terry Yue Zhuo

Evaluating the quality of AI-generated code is notoriously difficult. While humans can easily spot whether a piece of code “looks right,” determining whether it actually works correctly, handles edge cases properly, and produces the intended result requires running and testing it. This is why today we’re thrilled to announce BigCodeArena — the first human-in-the-loop platform for evaluating code generation models with code execution.
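To illustrate the point, here is a minimal, hypothetical example (not from the BigCodeArena post itself) of code that a human reviewer would likely judge as correct on sight, but whose edge-case behavior only surfaces when it is actually executed:

```python
# Hypothetical illustration: code that "looks right" but hides an edge case.
def average(nums):
    """Return the arithmetic mean of a list of numbers."""
    return sum(nums) / len(nums)

# A visual review suggests this is correct, and for typical inputs it is:
assert average([2, 4, 6]) == 4.0

# Only running the code reveals the edge case: an empty list raises
# ZeroDivisionError instead of being handled gracefully.
try:
    average([])
    crashed = False
except ZeroDivisionError:
    crashed = True
assert crashed
```

This is exactly the gap execution-based evaluation closes: reading the code cannot distinguish the happy path from the failure mode, but running it can.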

