Gemma 3n fully available in the open-source ecosystem!

Gemma 3n was announced as a preview during Google I/O. The on-device community got really excited because this is a model designed from the ground up to run locally on your hardware. On top of that, it’s natively multimodal, supporting image, text, audio, and video inputs 🤯

Today, Gemma 3n is finally available in the most widely used open-source libraries. These include transformers & timm, MLX, llama.cpp (text inputs), transformers.js, ollama, Google AI Edge, and others.

This post quickly goes through practical snippets that demonstrate how to use the model with these libraries, and how easy it is to get started.
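As a first taste, here is a minimal sketch of calling Gemma 3n through the transformers pipeline API. The model id (`google/gemma-3n-E4B-it`), the `image-text-to-text` task name, and the example image URL are assumptions based on the release; check the model card on the Hugging Face Hub for the exact identifiers. The multimodal chat-message format is the structure transformers' multimodal pipelines accept.

```python
def build_messages(image_url: str, question: str) -> list:
    """Build a chat-style message list pairing an image with a text
    question, in the format multimodal transformers pipelines accept."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": image_url},
                {"type": "text", "text": question},
            ],
        }
    ]

if __name__ == "__main__":
    # Heavy step: downloads the checkpoint on first run.
    # Model id is an assumption; verify it on the Hub.
    from transformers import pipeline

    pipe = pipeline("image-text-to-text", model="google/gemma-3n-E4B-it")
    messages = build_messages(
        "https://example.com/cat.jpg",  # placeholder image URL
        "What is in this picture?",
    )
    print(pipe(text=messages)[0]["generated_text"])
```

The same message structure carries over to audio and video inputs by swapping the content `type`; the sections below show the equivalent calls in the other libraries.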
