A Gaussian process (GP) library built in JAX (with objax)

Newt Newt is a Gaussian process (GP) library built in JAX (with objax), developed and actively maintained by Will Wilkinson. Newt provides a unifying view of approximate Bayesian inference for GPs, and allows many models (e.g. GPs, sparse GPs, Markov GPs, sparse Markov GPs) to be combined with the inference method of your choice (VI, EP, Laplace, Linearisation). For a full list of the implemented methods, scroll down to the bottom of this page. Installation In the top directory […]
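To make the GP setting concrete, here is a minimal exact-GP regression sketch in plain JAX. This is not Newt's API; the kernel, data, and noise level are illustrative assumptions, and the point of a library like Newt is precisely to replace this exact-Gaussian case with approximate inference for richer models and likelihoods.

```python
# Minimal exact GP regression in plain JAX (not Newt's API; purely illustrative).
import jax.numpy as jnp
from jax import random

def rbf(x1, x2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel evaluated pairwise between two 1-D input sets.
    d = x1[:, None] - x2[None, :]
    return variance * jnp.exp(-0.5 * (d / lengthscale) ** 2)

key = random.PRNGKey(0)
x = jnp.linspace(-3.0, 3.0, 50)
y = jnp.sin(x) + 0.1 * random.normal(key, (50,))      # noisy observations
x_test = jnp.linspace(-3.0, 3.0, 100)

noise = 0.1 ** 2
K = rbf(x, x) + noise * jnp.eye(50)                   # train covariance + noise
K_s = rbf(x, x_test)                                  # train/test cross-covariance

# Exact GP posterior mean and marginal variance at the test inputs.
alpha = jnp.linalg.solve(K, y)
mean = K_s.T @ alpha
var = jnp.diag(rbf(x_test, x_test) - K_s.T @ jnp.linalg.solve(K, K_s))
```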

Read more

A fast and easy implementation of Transformer with PyTorch

FasySeq FasySeq is shorthand for Fast and easy sequential modeling toolkit. It aims to provide a seq2seq model to researchers and developers that can be trained efficiently and modified easily. This toolkit is based on the Transformer (Vaswani et al.), and more seq2seq models will be added in the future. Dependency PyTorch >= 1.4 NLTK Result … Structure … To Be Updated top-k and top-p sampling multi-GPU inference length penalty in beam search … Preprocess Build Vocabulary createVocab.py NamedArguments Description -f/--file […]
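Since top-k and top-p sampling appear on the to-do list above, here is a hedged sketch of the standard filtering step they refer to, in plain PyTorch. This is not FasySeq's code; the vocabulary size and thresholds are made up.

```python
# Standard top-k / top-p (nucleus) filtering of next-token logits (illustrative sketch).
import torch
import torch.nn.functional as F

def filter_logits(logits, top_k=0, top_p=0.0):
    logits = logits.clone()
    if top_k > 0:
        # Keep only the k largest logits; mask out the rest.
        kth = torch.topk(logits, top_k).values[..., -1, None]
        logits[logits < kth] = float("-inf")
    if top_p > 0.0:
        # Keep the smallest set of tokens whose cumulative probability reaches top_p.
        sorted_logits, sorted_idx = torch.sort(logits, descending=True)
        cum_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
        remove = cum_probs > top_p
        remove[..., 1:] = remove[..., :-1].clone()   # shift so the top token is always kept
        remove[..., 0] = False
        logits[sorted_idx[remove]] = float("-inf")
    return logits

next_logits = torch.randn(32000)                     # e.g. a 32k-token vocabulary
probs = F.softmax(filter_logits(next_logits, top_k=50, top_p=0.9), dim=-1)
next_token = torch.multinomial(probs, num_samples=1)
```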

Read more

Cascaded Sparse Query for Accelerating High-Resolution Small Object Detection

QueryDet-PyTorch This repository is the official implementation of our paper: QueryDet: Cascaded Sparse Query for Accelerating High-Resolution Small Object Detection Requirement a. Install PyTorch 1.4 b. Install APEX for mixed-precision training c. Install our PyTorch-based sparse convolution toolkit d. Install the detectron2 toolkit. Note that we build our approach on version 0.2.1; you may follow the instructions to set up the environment configs e. Install Detectron2_Backbone for MobileNet and ShuffleNet support f. Clone our repository and have fun […]
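As a rough illustration of the cascaded sparse query idea, here is a toy sketch: a cheap head on a coarse feature level proposes locations likely to contain small objects, and the expensive head then runs only at the corresponding high-resolution positions. This is not the repository's sparse-convolution implementation; all shapes, head sizes, and the query count are assumptions.

```python
# Toy sketch of cascaded sparse queries (not QueryDet's actual code).
import torch
import torch.nn as nn

coarse_head = nn.Conv2d(256, 1, kernel_size=1)        # cheap "query" classifier
fine_head = nn.Conv2d(256, 4 + 80, kernel_size=1)      # expensive box + class head

feat_lo = torch.randn(1, 256, 32, 32)                  # low-resolution pyramid level
feat_hi = torch.randn(1, 256, 64, 64)                  # high-resolution pyramid level

with torch.no_grad():
    scores = coarse_head(feat_lo).sigmoid().flatten()  # objectness per coarse cell
keep = scores.topk(64).indices                          # sparse query positions (64 of 1024)
ys, xs = (keep // 32) * 2, (keep % 32) * 2              # their 2x-upscaled locations

# Run the expensive head only on the 2x2 high-res blocks behind each query,
# instead of densely over the whole 64x64 map.
patches = torch.stack([feat_hi[0, :, y:y + 2, x:x + 2]
                       for y, x in zip(ys.tolist(), xs.tolist())])
preds = fine_head(patches)                              # (64, 84, 2, 2) sparse predictions
```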

Read more

A Telegram bot for remotely managing Binance Trade Bot

Binance Trade Bot Manager Telegram A Telegram bot for remotely managing Binance Trade Bot. About I wanted to develop an easy way of managing [Binance Trade Bot] so that I wouldn’t have to constantly ssh into my VPS, and my non-techy friends could enjoy the benefits of automated trading. As of now the bot is able to perform the following actions: [x] šŸ” Check bot status (running / not running) [x] ā–¶ Start Binance Trade Bot [x] ā¹ Stop Binance […]
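For readers curious how such remote commands are typically wired up, here is a hedged sketch using the python-telegram-bot v13-style API. The token, command names, and the subprocess invocation are placeholders, not this project's actual code.

```python
# Hedged sketch of Telegram-controlled start/status commands (not this project's code).
import subprocess
from telegram.ext import Updater, CommandHandler

bot_process = None  # handle to the trade-bot process we manage remotely

def status(update, context):
    running = bot_process is not None and bot_process.poll() is None
    update.message.reply_text("šŸ” running" if running else "šŸ” not running")

def start_bot(update, context):
    global bot_process
    if bot_process is None or bot_process.poll() is not None:
        bot_process = subprocess.Popen(["python", "-m", "binance_trade_bot"])  # placeholder command
    update.message.reply_text("ā–¶ started")

updater = Updater("TOKEN")                              # placeholder bot token
updater.dispatcher.add_handler(CommandHandler("status", status))
updater.dispatcher.add_handler(CommandHandler("start_bot", start_bot))
updater.start_polling()
```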

Read more

Multi-Scale Aligned Distillation for Low-Resolution Detection

Multi-Scale Aligned Distillation for Low-Resolution Detection Lu Qi*, Jason Kuen*, Jiuxiang Gu, Zhe Lin, Yi Wang, Yukang Chen, Yanwei Li, Jiaya Jia This project provides an implementation for the CVPR 2021 paper ā€œMulti-Scale Aligned Distillation for Low-Resolution Detectionā€ based on Detectron2. MSAD aims to detect objects from low-resolution images instead of high-resolution ones, while obtaining performance comparable to detection at high resolution. Our paper uses Slimmable Neural Networks as the pretrained weights. Installation This project is based on Detectron2, which can […]
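A hedged, toy sketch of the general cross-resolution distillation idea: a teacher sees the high-resolution image, a student sees a downsampled copy, and the student's features are trained to match the teacher's at a common scale. This is not the paper's MSAD implementation; the networks, shapes, and pooling-based alignment are illustrative assumptions.

```python
# Toy cross-resolution feature distillation (not the paper's actual MSAD code).
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
                        nn.Conv2d(64, 128, 3, stride=2, padding=1))
student = nn.Sequential(nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
                        nn.Conv2d(64, 128, 3, stride=2, padding=1))

img_hi = torch.randn(2, 3, 256, 256)                   # high-resolution input (teacher)
img_lo = F.interpolate(img_hi, scale_factor=0.5)       # low-resolution input (student)

with torch.no_grad():
    feat_t = teacher(img_hi)                           # (2, 128, 64, 64)
feat_s = student(img_lo)                               # (2, 128, 32, 32)

# Align the teacher's spatial scale to the student's before matching,
# then penalise the difference between the two feature maps.
feat_t = F.adaptive_avg_pool2d(feat_t, feat_s.shape[-2:])
distill_loss = F.mse_loss(feat_s, feat_t)
distill_loss.backward()
```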

Read more

A gym style toolkit for building lightweight NAS systems

gymnastics A ā€œgymā€ style toolkit for building lightweight Neural Architecture Search systems. I know, the name is awful. Installation Preferred option: Install from source: git clone git@github.com:jack-willturner/gymnastics.git cd gymnastics python setup.py install To install the latest release version: pip install gymnastics If you want to use NAS-Bench-101, follow the instructions here to get it set up. Overview Over the course of the final year of my PhD, I worked a lot on Neural Architecture Search (NAS) and built a bunch […]
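As a flavour of the kind of loop such a toolkit supports, here is a hedged random-search sketch over a toy search space. This is not gymnastics' actual API; the search space and the scoring proxy are made up.

```python
# Toy random-search NAS loop (not gymnastics' API): sample, score, keep the best.
import random
import torch
import torch.nn as nn

def sample_architecture():
    # A toy search space: depth and width of a small CNN.
    return {"depth": random.choice([2, 3, 4]), "width": random.choice([16, 32, 64])}

def build(arch):
    layers, c_in = [], 3
    for _ in range(arch["depth"]):
        layers += [nn.Conv2d(c_in, arch["width"], 3, padding=1), nn.ReLU()]
        c_in = arch["width"]
    return nn.Sequential(*layers, nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(c_in, 10))

def score(model):
    # Stand-in for a real fitness signal (short training run or a zero-cost proxy).
    x = torch.randn(8, 3, 32, 32)
    return -model(x).var().item()

best = max((sample_architecture() for _ in range(20)), key=lambda a: score(build(a)))
print("best architecture found:", best)
```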

Read more

The “tl;dr” on a few notable transformer papers

tldr-transformers The tl;dr on a few notable transformer/language model papers + other papers (alignment, memorization, etc.). Models: GPT-*, *BERT*, Adapter-*, *T5, etc. Each set of notes includes links to the paper, the original code implementation (if available) and the Huggingface :hugs: implementation. Here is an example: t5. The transformers papers are presented somewhat chronologically below. Go to the ā€œ:point_right: Notes :point_left:ā€ column below to find the notes for each paper. This repo also includes […]

Read more

A PyTorch library to analyze representations of neural networks

anatome Ἀνατομή is a PyTorch library to analyze internal representations of neural networks. This project is under active development and the codebase is subject to change. Installation anatome requires Python>=3.9.0 PyTorch>=1.9.0 torchvision>=0.10.0 After the installation of PyTorch, install anatome as follows: pip install -U git+https://github.com/moskomule/anatome Representation Similarity To measure the similarity of learned representations, anatome.SimilarityHook is a useful tool. Currently, the following methods are implemented. from anatome import SimilarityHook model = resnet18() hook1 = SimilarityHook(model, "layer3.0.conv1") hook2 = SimilarityHook(model, "layer3.0.conv2") model.eval() […]
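For context on what such similarity hooks compute, here is a hedged standalone sketch of linear CKA, one common representation-similarity measure, in plain PyTorch. This is not anatome's internal implementation; the activation shapes are made up.

```python
# Standalone linear CKA between two activation matrices (illustrative, not anatome's code).
import torch

def linear_cka(x, y):
    # x: (n_samples, d1), y: (n_samples, d2) activation matrices.
    x = x - x.mean(dim=0, keepdim=True)          # center each feature dimension
    y = y - y.mean(dim=0, keepdim=True)
    cross = (y.t() @ x).norm() ** 2              # ||Y^T X||_F^2
    return cross / ((x.t() @ x).norm() * (y.t() @ y).norm())

a = torch.randn(128, 256)                        # e.g. flattened activations from one layer
b = a[:, torch.randperm(256)] * 3.0              # permuted and rescaled copy of the same features
print(linear_cka(a, b))                          # ~1.0: CKA ignores permutation/rotation and scale
```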

Read more

Getting started with NLP using NLTK Library

01010010 01101001 01110100 01101000 01101001 01101011 01100001 Did you understand the above binary code? If yes, then you're a computer. If no, then you're a human. šŸ™‚ I know it's difficult for us to understand binary code the way computers do, because binary code is a machine-understandable language. Likewise, computers don't understand human language. So, how do we make computers understand human language? The answer is Natural Language Processing. With the help of NLP, we can teach computers […]
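As a first taste of what NLP tooling looks like in practice, here is a small NLTK tokenization example. It assumes NLTK is installed; the sample sentence is made up.

```python
# Splitting raw text into sentences and words with NLTK.
import nltk

# One-off downloads of the tokenizer models ("punkt_tab" is only needed on newer NLTK releases).
nltk.download("punkt", quiet=True)
nltk.download("punkt_tab", quiet=True)

text = "Computers don't understand human language, so NLP steps in. It bridges the gap."
print(nltk.word_tokenize(text))    # ['Computers', 'do', "n't", 'understand', 'human', ...]
print(nltk.sent_tokenize(text))    # the text split into its two sentences
```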

Read more

Text Generation Using Bidirectional LSTM – A Walk-through in Tensorflow

This article was published as a part of the Data Science Blogathon. Text Generation Text generation is a Natural Language Processing task that involves automatically generating meaningful text. We can also use text generation for autocomplete. Initially, we provide a prompt: a piece of text used as the base for generation. The model generates text based on this prompt; the predicted text is appended to the prompt and the result is fed in again […]
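A hedged sketch of the generation loop just described, using a Bidirectional LSTM in TensorFlow/Keras. This is not the article's exact code; the vocabulary size, sequence length, and token ids are placeholders, and the model is shown untrained.

```python
# Next-word model + autoregressive generation loop (illustrative sketch only).
import numpy as np
import tensorflow as tf

vocab_size, max_len = 5000, 20
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 64),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128)),
    tf.keras.layers.Dense(vocab_size, activation="softmax"),
])
# ... model.compile(...) and model.fit(...) on (prompt, next-word) pairs would go here ...

def generate(seed_ids, n_words=10):
    ids = list(seed_ids)
    for _ in range(n_words):
        x = np.array([([0] * max_len + ids)[-max_len:]])   # left-pad / truncate the prompt
        next_id = int(np.argmax(model.predict(x, verbose=0)[0]))
        ids.append(next_id)                                # feed the prediction back in
    return ids

print(generate([42, 7, 13]))   # token ids; a real pipeline maps words <-> ids with a tokenizer
```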

Read more