Introducing the Open Arabic LLM Leaderboard

The Open Arabic LLM Leaderboard (OALL) is designed to address the growing need for specialized benchmarks in the Arabic language processing domain. As the field of Natural Language Processing (NLP) progresses, the focus often remains heavily skewed towards English, leaving a significant resource gap for other languages. The OALL aims to redress this imbalance by providing a platform specifically for evaluating and comparing the performance of Arabic Large Language Models (LLMs), thus promoting research and development in Arabic NLP. This […]

Read more

Hugging Face x LangChain: A new partner package in LangChain

We are thrilled to announce the launch of langchain_huggingface, a partner package in LangChain jointly maintained by Hugging Face and LangChain. This new Python package is designed to bring the latest developments from Hugging Face into LangChain and keep it up to date. All Hugging Face-related classes in LangChain were originally contributed by the community, and while we thrived on this, over time some of them became deprecated for lack of an insider’s perspective. By becoming […]

Read more

PaliGemma – Google’s Cutting-Edge Open Vision Language Model

Updated on 23-05-2024: We have introduced a few changes to the transformers PaliGemma implementation around fine-tuning, which you can find in this notebook. PaliGemma is a new family of vision language models from Google. PaliGemma takes an image and text as input and outputs text. The team at Google has released three types of models: the pretrained (pt) models, the mix models, and the fine-tuned (ft) models, each with different resolutions and available in multiple precisions for convenience. All […]

Read more

Unlocking Longer Generation with Key-Value Cache Quantization

At Hugging Face, we are excited to share with you a new feature that’s going to take your language models to the next level: KV Cache Quantization. TL;DR: KV Cache Quantization reduces memory usage for long-context text generation in LLMs with minimal impact on quality, offering customizable trade-offs between memory efficiency and generation speed. Have you ever tried generating a lengthy piece […]
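The trade-off described above can be illustrated with a minimal, self-contained sketch of the core idea (this is an illustration of symmetric integer quantization in plain Python, not the transformers implementation): cached key/value tensors are stored as low-bit integers plus a scale, and dequantized when the attention step needs them again.

```python
# Illustrative sketch of the idea behind KV cache quantization:
# store cached key/value entries as n-bit signed integers plus one
# scale factor, and dequantize them on use. Not the transformers API.

def quantize(values, n_bits=8):
    """Symmetric quantization: map floats to signed ints plus one scale."""
    qmax = 2 ** (n_bits - 1) - 1              # e.g. 127 for int8
    scale = max(abs(v) for v in values) / qmax or 1.0
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the quantized ints."""
    return [qi * scale for qi in q]

# A toy "cached" key vector, as a past attention step might have produced.
key = [0.12, -0.5, 0.33, 0.0, 0.2]
q, scale = quantize(key)
restored = dequantize(q, scale)

# Memory drops from 16/32-bit floats to n-bit ints; the price is a small
# rounding error per element, bounded by half the quantization step.
max_err = max(abs(a - b) for a, b in zip(key, restored))
assert max_err <= scale / 2
```

Lower bit widths shrink the cache further but widen the quantization step, which is exactly the memory-versus-quality dial the post describes.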

Read more

Hugging Face on AMD Instinct MI300 GPU

Join the next Hugging Cast on June 6th to ask questions to the post authors, watch a live demo deploying Llama 3 on MI300X on Azure, plus a bonus demo deploying models locally on a Ryzen AI PC! Register at https://streamyard.com/watch/iMZUvJnmz8BV. At Hugging Face we want to make it easy to build AI with open models and open source, whichever framework, cloud, and stack you want to use. A key component is […]

Read more

Build AI on premise with Dell Enterprise Hub

Today we announce the Dell Enterprise Hub, a new experience on Hugging Face to easily train and deploy open models on-premise using Dell platforms. Try it out at dell.huggingface.co. Enterprises need to build AI with open models: when building AI systems, open models are the best solution for meeting the security, compliance, and privacy requirements of enterprises. Building upon open models allows companies to understand, own […]

Read more

CyberSecEval 2 – A Comprehensive Evaluation Framework for Cybersecurity Risks and Capabilities of Large Language Models

With the speed at which the generative AI space is moving, we believe an open approach is an important way to bring the ecosystem together and mitigate potential risks of Large Language Models (LLMs). Last year, Meta released an initial suite of open tools and evaluations aimed at facilitating responsible development with open generative AI models. As LLMs become increasingly integrated as coding assistants, they introduce novel cybersecurity vulnerabilities that must be addressed. To tackle this challenge, comprehensive benchmarks are […]

Read more