BERT 101 🤗 State Of The Art NLP Model Explained

BERT, short for Bidirectional Encoder Representations from Transformers, is a Machine Learning (ML) model for natural language processing. It was developed in 2018 by researchers at Google AI Language and serves as a Swiss Army knife solution to 11+ of the most common language tasks, such as sentiment analysis and named entity recognition. Language has historically been difficult for computers to 'understand'. Sure, computers can collect, store, and read text inputs, but they lack basic language context. So, along came […]

Read more

Image search with 🤗 datasets

🤗 datasets is a library that makes it easy to access and share datasets. It also makes it easy to process data efficiently, including working with data that doesn't fit into memory. When datasets was first launched, it was associated mostly with text data. However, recently, datasets has added increased […]

Read more

Accelerate BERT inference with Hugging Face Transformers and AWS Inferentia

notebook: sagemaker/18_inferentia_inference The adoption of BERT and Transformers continues to grow. Transformer-based models are now achieving state-of-the-art performance not only in Natural Language Processing but also in Computer Vision, Speech, and Time-Series. 💬 🖼 🎤 ⏳ Companies are now slowly moving from the experimentation and research phase to the production phase in […]

Read more

Machine Learning Experts – Margaret Mitchell

Hey friends! Welcome to Machine Learning Experts. I'm your host, Britney Muller, and today's guest is none other than Margaret Mitchell (Meg for short). Meg founded & co-led Google's Ethical AI Group, is a pioneer in the field of Machine Learning, has published over 50 papers, and is a leading researcher in Ethical AI. You'll hear Meg talk about the […]

Read more

Don’t Repeat Yourself*

Designing open-source libraries for modern machine learning. "Don't repeat yourself", or DRY, is a well-known principle of software development. The principle originates from "The Pragmatic Programmer", one of the most-read books on code design. The principle's simple message […]

Read more

Habana Labs and Hugging Face Partner to Accelerate Transformer Model Training

Santa Clara and San Francisco, CA, April 12th, 2022. Powered by deep learning, transformer models deliver state-of-the-art performance on a wide range of machine learning tasks, such as natural language processing, computer vision, speech, and more. However, training them at scale often requires a large amount of computing power, making the whole process unnecessarily long, complex, and costly. Today, […]

Read more