LAVE: Zero-shot VQA Evaluation on Docmatix with LLMs – Do We Still Need Fine-Tuning?

By Dana Aubakirova and Andres Marafioti

While developing Docmatix, we noticed that fine-tuning Florence-2 on it yielded strong performance in practice, but low scores on the DocVQA benchmark. To improve the scores, we had to fine-tune the model further on DocVQA itself so that it could learn the answer syntax the benchmark expects.
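To see why answer syntax matters so much, it helps to look at how string-matching metrics score predictions. DocVQA uses ANLS (Average Normalized Levenshtein Similarity), which gives partial credit based on edit distance and zero credit beyond a threshold, so a semantically correct answer phrased differently from the reference is penalized. Below is a minimal sketch of single-reference ANLS; the helper names are illustrative, not taken from the Docmatix or LAVE code.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def anls(prediction: str, reference: str, tau: float = 0.5) -> float:
    """ANLS against a single reference: 1 - normalized edit distance,
    zeroed out when the normalized distance reaches the threshold tau."""
    nl = levenshtein(prediction, reference) / max(len(prediction), len(reference), 1)
    return 1.0 - nl if nl < tau else 0.0

# An exact match scores 1.0; a correct answer in a different
# syntax only gets partial credit, or none at all:
print(anls("September 2013", "September 2013"))
print(anls("Sept. 2013", "September 2013"))
print(anls("13", "September 2013"))
```

This is the mismatch LAVE sidesteps: an LLM judge can recognize that a differently phrased answer is still correct, instead of docking it for surface-form differences.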
