A work-in-progress vector version of the MNIST dataset

bezier-mnist: This is a work-in-progress vector version of the MNIST dataset. Here are some samples from the training set. Note that, while these are rasterized, the underlying images can be rendered at any resolution because they are smooth vector graphics. ![A grid of sixteen digit images](https://github.com/unixpickle/bezier-mnist/raw/main/samples.png) I have already converted all of MNIST to Bezier curves. The dataset can be downloaded from this page. There are two files: train.zip and test.zip, each containing a separate JSON file for each […]
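The dataset's exact file format isn't shown in the excerpt, but the primitive behind "smooth vector graphics" is the cubic Bezier curve. A minimal, library-free sketch of evaluating one (the control points below are purely illustrative):

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1].

    Each point is an (x, y) tuple; this is the standard Bernstein form:
    B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3.
    """
    u = 1.0 - t
    b0, b1, b2, b3 = u**3, 3 * u**2 * t, 3 * u * t**2, t**3
    x = b0 * p0[0] + b1 * p1[0] + b2 * p2[0] + b3 * p3[0]
    y = b0 * p0[1] + b1 * p1[1] + b2 * p2[1] + b3 * p3[1]
    return (x, y)

# Rasterizing "at any resolution" just means sampling t as densely as needed.
points = [cubic_bezier((0, 0), (0, 1), (1, 1), (1, 0), i / 10) for i in range(11)]
```

Because the curve is defined analytically, the same four control points serve every output resolution, which is the property the excerpt highlights.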

Read more

Pydantic models for Django

Djantic Documentation: https://jordaneremieff.github.io/djantic/ Requirements: Python 3.7+, Django 3.0+ Pydantic models for Django. This project should be considered a work-in-progress. It should be okay to use, but no specific version support has been determined (#16) and the default model generation behaviour may change across releases. Please use the issues tracker to report any bugs, or if something seems incorrect. Quickstart Install using pip: pip install djantic Generating schemas from models Configure a custom ModelSchema class for a Django model to generate […]
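Djantic's ModelSchema derives a pydantic model from a Django model's fields. The sketch below shows only the pydantic side of that idea: a hand-written schema of the kind a ModelSchema would generate automatically (the model name and fields are hypothetical, not from djantic's docs):

```python
from pydantic import BaseModel

# Hypothetical schema mirroring a Django model with `id` and `email` fields;
# djantic's ModelSchema would produce an equivalent class from the model itself.
class UserSchema(BaseModel):
    id: int
    email: str

user = UserSchema(id=1, email="a@example.com")
```

The payoff of generating such schemas instead of writing them is that field names and types stay in sync with the Django model definition.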

Read more

An Asynchronous Python object-document mapper for MongoDB

beanie Beanie is an asynchronous Python object-document mapper (ODM) for MongoDB, based on Motor and Pydantic. When using Beanie, each database collection has a corresponding Document that is used to interact with that collection. In addition to retrieving data, Beanie allows you to add, update, or delete documents from the collection as well. Beanie saves you time by removing boilerplate code and helps you focus on the parts of your app that actually matter. Data and schema migrations […]

Read more

Unofficial PyTorch implementation of Google AI’s VoiceFilter system

Hi everyone! It’s Seung-won from MINDs Lab, Inc. It’s been a long time since I released this open-source project, and I didn’t expect this repository to attract so much attention for so long. I would like to thank everyone for that attention, and also Mr. Quan Wang (the first author of the VoiceFilter paper) for citing this project in his paper. Actually, this project was done by me when it was only 3 months after I […]

Read more

A library for the Unbounded Interleaved-State Recurrent Neural Network algorithm

UIS-RNN This is the library for the Unbounded Interleaved-State Recurrent Neural Network (UIS-RNN) algorithm, originally proposed in the paper Fully Supervised Speaker Diarization. The work has been introduced on the Google AI Blog. Disclaimer This open-source implementation is slightly different from the internal one which we used to produce the results in the paper, due to dependencies on some internal libraries. We CANNOT share the data, code, or model for the speaker recognition system (d-vector […]

Read more

A sentence embeddings method that provides semantic representations

InferSent InferSent is a sentence embeddings method that provides semantic representations for English sentences. It is trained on natural language inference data and generalizes well to many different tasks. We provide our pre-trained English sentence encoder from our paper and our SentEval evaluation toolkit. Recent changes: Removed train_nli.py and kept only the pretrained models for simplicity. The reason is that I no longer have time to maintain the repo beyond simple scripts to get sentence embeddings. Dependencies This code is written in […]
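Once an encoder like InferSent has mapped sentences to fixed-size vectors, downstream semantic comparison typically reduces to cosine similarity between embeddings. A library-free sketch of that final step (the 3-d vectors are toys; real sentence embeddings are much higher-dimensional):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Identical vectors score 1.0; orthogonal vectors score 0.0.
score = cosine_similarity([1.0, 0.0, 0.0], [1.0, 0.0, 0.0])
```

This is why "provides semantic representations" is useful in practice: sentences with similar meanings land near each other in the embedding space, and a single dot-product-based score surfaces that.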

Read more

Pytorch implementation of Google AI’s 2018 BERT with simple annotation

BERT-pytorch Pytorch implementation of Google AI’s 2018 BERT, with simple annotation BERT 2018 BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding Paper URL: https://arxiv.org/abs/1810.04805 Google AI’s BERT paper shows amazing results on various NLP tasks (new SOTA on 17 NLP tasks), including outperforming the human F1 score on the SQuAD v1.1 QA task. This paper proved that a Transformer (self-attention) based encoder can be a powerful alternative to previous language models, given a proper language-model training method. And more importantly, they showed us that […]
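The "proper language model training method" the excerpt alludes to is masked language modeling: randomly hide a fraction of input tokens and train the encoder to predict the originals. A toy sketch of just the masking step (the 15% rate follows the paper, but this omits the paper's 80/10/10 mask/random/keep refinement and real subword tokenization):

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Return masked tokens plus the (position, original) prediction targets.

    Simplified masked-LM data preparation: each token is independently
    replaced by mask_token with probability mask_rate.
    """
    rng = random.Random(seed)  # seeded for reproducibility in this sketch
    masked, targets = list(tokens), []
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            masked[i] = mask_token
            targets.append((i, tok))
    return masked, targets

masked, targets = mask_tokens("the quick brown fox jumps over the lazy dog".split())
```

Because the model must reconstruct each masked token from both its left and right context, the encoder is trained bidirectionally, which is the paper's core departure from left-to-right language models.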

Read more

The multitask and transfer learning toolkit for natural language processing research

The multitask and transfer learning toolkit for natural language processing research. Why should I use jiant? A few additional things you might want to know about jiant: jiant is configuration file driven jiant is built with PyTorch jiant integrates with datasets to manage task data jiant integrates with transformers to manage models and tokenizers. Getting Started Installation To import jiant from source (recommended for researchers): git clone https://github.com/nyu-mll/jiant.git cd jiant pip install -r requirements.txt # Add the following to your […]

Read more

A library for Multilingual Unsupervised or Supervised word Embeddings

MUSE: Multilingual Unsupervised and Supervised Embeddings A library for Multilingual Unsupervised or Supervised word Embeddings. MUSE is a Python library for multilingual word embeddings, whose goal is to provide the community with: state-of-the-art multilingual word embeddings (fastText embeddings aligned in a common space) large-scale high-quality bilingual dictionaries for training and evaluation We include two methods, one supervised that uses a bilingual dictionary or identical character strings, and one unsupervised that does not use any parallel data (see Word Translation without […]
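The supervised setting described above (aligning two embedding spaces using a bilingual dictionary) is classically solved with orthogonal Procrustes: find the rotation W minimizing ||XW − Y||, which has a closed form via SVD. A numpy sketch of that alignment step, verified on a synthetic rotation (this illustrates the general technique, not MUSE's exact pipeline):

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal matrix W minimizing ||X @ W - Y||_F.

    Rows of X and Y are source- and target-language embeddings for
    word pairs from a bilingual dictionary. Closed form: W = U @ Vt,
    where U, S, Vt = svd(X.T @ Y).
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Synthetic check: build Y as X under a known rotation R, then recover it.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))
theta = 0.3
R = np.eye(4)
R[:2, :2] = [[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]
Y = X @ R
W = procrustes_align(X, Y)
```

Constraining W to be orthogonal preserves distances and angles within each embedding space, which is why the aligned fastText vectors remain usable for monolingual tasks too.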

Read more

A modular framework for vision & language multimodal research

MMF MMF is a modular framework for vision and language multimodal research from Facebook AI Research. MMF contains reference implementations of state-of-the-art vision and language models and has powered multiple research projects at Facebook AI Research. See the full list of projects inside or built on MMF here. MMF is powered by PyTorch, allows distributed training, and is unopinionated, scalable, and fast. Use MMF to bootstrap your next vision and language multimodal research project by following the installation instructions. Take […]

Read more