An Introduction to Deep Learning for the Physical Layer

radio-transformer-networks: a usable PyTorch implementation of the noisy autoencoder infrastructure from the paper "An Introduction to Deep Learning for the Physical Layer", written by Kenta Iwasaki on behalf of Gram.AI. Overall a fun experiment in constructing a physical-layer communications system with transmitters and receivers, in which the transmitter encodes a signal efficiently enough that the receiver can still decode it with minimal error despite being inflicted […]
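
For readers who want the gist without opening the repo, here is a minimal sketch of the channel-autoencoder idea (my own illustration, not the repository's code): an encoder maps a message to a power-constrained signal, additive Gaussian noise plays the role of the channel, and a decoder tries to recover the message.

import torch
import torch.nn as nn

class ChannelAutoencoder(nn.Module):
    # Minimal sketch, not the repository's implementation: transmitter -> noisy channel -> receiver.
    def __init__(self, num_messages=16, channel_dim=7, noise_std=0.1):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(num_messages, 64), nn.ReLU(), nn.Linear(64, channel_dim))
        self.decoder = nn.Sequential(nn.Linear(channel_dim, 64), nn.ReLU(), nn.Linear(64, num_messages))
        self.noise_std = noise_std

    def forward(self, one_hot_msg):
        x = self.encoder(one_hot_msg)
        x = x / x.norm(dim=-1, keepdim=True)           # energy constraint on the transmitted signal
        y = x + self.noise_std * torch.randn_like(x)   # AWGN channel
        return self.decoder(y)                         # logits over the original messages

Training such a sketch would minimise cross-entropy between the decoder's logits and the index of the transmitted message.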

Read more

A library for finding knowledge neurons in pretrained transformer models

knowledge-neurons An open source repository replicating the 2021 paper "Knowledge Neurons in Pretrained Transformers" by Dai et al., and extending the technique to autoregressive models as well as MLMs. The Huggingface Transformers library is used as the backend, so any model you want to probe must be implemented there. Currently integrated models:

BERT_MODELS = ["bert-base-uncased", "bert-base-multilingual-uncased"]
GPT2_MODELS = ["gpt2"]
GPT_NEO_MODELS = [
    "EleutherAI/gpt-neo-125M",
    "EleutherAI/gpt-neo-1.3B",
    "EleutherAI/gpt-neo-2.7B",
]

The technique from Dai et al. has been used to locate knowledge neurons in […]
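
Since the excerpt notes that Huggingface Transformers is the backend, a quick sanity check is simply to load one of the listed models through that backend. The snippet below is plain Transformers usage, not the knowledge-neurons API itself.

from transformers import AutoModelForMaskedLM, AutoTokenizer

# Plain Huggingface Transformers code: any model you want to probe has to load like this first.
name = "bert-base-uncased"          # one of the BERT_MODELS listed above
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
outputs = model(**inputs)           # vocabulary logits; the probing technique itself works
                                    # on the model's intermediate feed-forward activations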

Read more

Image data augmentation scheduler for albumentations transforms

albu_scheduler Scheduler for albumentations transforms, based on the PyTorch schedulers interface.

TransformMultiStepScheduler

import albumentations as A
from albu_scheduler import TransformMultiStepScheduler

transform_1 = A.Compose([
    A.RandomCrop(width=256, height=256),
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.2),
])
transform_2 = A.Compose([
    A.RandomCrop(width=128, height=128),
    A.VerticalFlip(p=0.5),
])

scheduled_transform = TransformMultiStepScheduler(transforms=[transform_1, transform_2], milestones=[0, 10])
dataset = Dataset(transform=scheduled_transform)
for epoch in range(100):
    train(…)
    validate(…)
    scheduled_transform.step()

TransformSchedulerOnPlateau

from albu_scheduler import TransformSchedulerOnPlateau

scheduled_transform = TransformSchedulerOnPlateau(transforms=[transform_1, transform_2], mode="max", patience=5)
dataset = Dataset(transform=scheduled_transform)
for epoch in range(100):
    train(…)
    score = validate(…)
    scheduled_transform.step(score)

git clone https://github.com/KiriLev/albu_scheduler
cd albu_scheduler
make install […]
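
To make the milestone mechanism concrete, here is a minimal sketch of how such a multi-step transform scheduler could work (an illustration of the idea, not the library's actual code): step() advances an epoch counter, and the active transform is whichever one the most recently passed milestone selects.

class MiniTransformScheduler:
    # Illustrative sketch only, not albu_scheduler's implementation.
    def __init__(self, transforms, milestones):
        assert len(transforms) == len(milestones)
        self.transforms = transforms
        self.milestones = milestones
        self.epoch = 0

    def _active(self):
        # index of the last milestone that has been reached so far
        idx = sum(m <= self.epoch for m in self.milestones) - 1
        return self.transforms[max(idx, 0)]

    def __call__(self, **data):
        return self._active()(**data)   # delegates to an albumentations Compose

    def step(self):
        self.epoch += 1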

Read more

Feedback Transformer and Expire-Span with python

This repo contains the code for two papers: Feedback Transformer and Expire-Span. The training code is structured for long sequential modeling with Transformer-like architectures.

Requirements: you will need a CUDA-enabled GPU to run the code.

Setup: run the following:

pip install -r requirements.txt

Feedback Transformer

Introduced in "Addressing Some Limitations of Transformers with Feedback Memory".

Running experiments from the paper, enwik8 (numbers are bits per character):

Model                 Params  Valid  Test
Feedback Transformer  77M     0.984  0.962

bash experiments/feedback/enwik8.sh

Algorithmic

Model  3 Variable  5 Variable […]
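
As a rough, hedged sketch of the feedback-memory idea named in the paper title (not this repository's code): instead of each layer keeping its own cache of past activations, every layer at the current step attends to one shared memory built by merging all layers' outputs from earlier steps.

import torch
import torch.nn as nn

class FeedbackStep(nn.Module):
    # Simplified illustration of feedback memory, not the repo's implementation.
    def __init__(self, d_model=256, n_heads=4, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(n_layers))
        self.mix = nn.Parameter(torch.zeros(n_layers + 1))   # learned merge weights

    def forward(self, x_t, memory):
        # x_t: (B, 1, D) input at the current step; memory: (B, T_past, D) with T_past >= 1
        states, h = [x_t], x_t
        for layer in self.layers:
            h = layer(h, memory)              # every layer sees the same feedback memory
            states.append(h)
        w = torch.softmax(self.mix, dim=0)
        merged = sum(wi * si for wi, si in zip(w, states))    # one merged vector per step
        return h, torch.cat([memory, merged], dim=1)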

Read more

CAT-Net: Learning Canonical Appearance Transformations

CAT-Net Code to accompany our paper "How to Train a CAT: Learning Canonical Appearance Transformations for Direct Visual Localization Under Illumination Change".

Dependencies:

numpy
matplotlib
pytorch + torchvision (1.2)
Pillow
progress (for progress bars in train/val/test loops)
tensorboard + tensorboardX (for visualization)
pyslam + liegroups (optional, for running odometry/localization experiments)
OpenCV (optional, for running odometry/localization experiments)

Training the CAT: download the ETHL dataset from here or the Virtual KITTI dataset from here. ETHL only: rename ethl1/2 to ethl1/2_static. ETHL only: […]

Read more

Keeping Your Eye on the Ball Trajectory Attention in Video Transformers with python

Motionformer This is an official PyTorch implementation of the paper Keeping Your Eye on the Ball: Trajectory Attention in Video Transformers. In this repository, we provide PyTorch code for training and testing our proposed Motionformer model. Motionformer uses the proposed trajectory attention to achieve state-of-the-art results on several video action recognition benchmarks such as Kinetics-400 and Something-Something V2. If you find Motionformer useful in your research, please use the following BibTeX entry for citation. @misc{patrick2021keeping, title={Keeping Your Eye on the Ball: Trajectory […]
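
Trajectory attention roughly works in two stages: for every query token, spatial attention inside each frame builds a trajectory of pooled tokens across time, and a second attention then aggregates along that trajectory. The sketch below is a simplified reading of that idea, not the official Motionformer code.

import torch

def trajectory_attention(q, k, v, T, S):
    # Simplified sketch, not the official implementation.
    # q, k, v: (B, T*S, D) space-time tokens for T frames of S patches each.
    B, N, D = q.shape
    scale = D ** -0.5
    # Stage 1: each query attends over the S patches of every frame separately,
    # producing one trajectory token per (query, frame) pair.
    attn = torch.einsum('bnd,bmd->bnm', q, k) * scale                 # (B, N, T*S)
    attn = attn.view(B, N, T, S).softmax(dim=-1)
    traj = torch.einsum('bnts,btsd->bntd', attn, v.view(B, T, S, D))  # (B, N, T, D)
    # Stage 2: aggregate each query's trajectory across time.
    temporal = torch.einsum('bnd,bntd->bnt', q, traj) * scale
    temporal = temporal.softmax(dim=-1)
    return torch.einsum('bnt,bntd->bnd', temporal, traj)              # (B, N, D)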

Read more

Cross Attention in Vision Transformer with python

CAT: Cross Attention in Vision Transformer This is the official implementation of "CAT: Cross Attention in Vision Transformer". Abstract: Since Transformer has found widespread use in NLP, the potential of Transformer in CV has been realized and has inspired many new approaches. However, the computation required by a Transformer after the image is tokenized into patches that replace word tokens is vast (e.g., ViT), which bottlenecks model training and inference. In this paper, we propose a new attention mechanism in Transformer […]
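
To see why that tokenization cost matters, the back-of-the-envelope numbers below (my own illustration, not figures from the paper) show how the size of a full self-attention matrix over 16x16 image patches grows quadratically with the number of tokens.

def attention_cost(image_size, patch_size=16):
    # Illustrative numbers only: entries in one full attention matrix, per head per layer.
    tokens = (image_size // patch_size) ** 2
    return tokens, tokens ** 2

for size in (224, 448, 896):
    tokens, scores = attention_cost(size)
    print(f"{size}x{size} image -> {tokens} tokens -> {scores:,} attention scores")
# 224x224 ->  196 tokens ->    38,416 scores
# 448x448 ->  784 tokens ->   614,656 scores
# 896x896 -> 3136 tokens -> 9,834,496 scores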

Read more

Unofficial TensorFlow implementation of the Keyword Spotting Transformer model

Keyword Spotting Transformer This is an unofficial TensorFlow implementation of the Keyword Spotting Transformer model, trained on the 35-word Speech Commands dataset.

Paper: Keyword Transformer: A Self-Attention Model for Keyword Spotting

Model architecture

Download the dataset with the following commands:

wget https://storage.googleapis.com/download.tensorflow.org/data/speech_commands_v0.02.tar.gz
mkdir data
mv ./speech_commands_v0.02.tar.gz ./data
cd ./data
tar -xf ./speech_commands_v0.02.tar.gz
cd ../

Set up a virtual environment:

virtualenv -p python3 venv
source ./venv/bin/activate

Install dependencies:

pip install -r requirements.txt […]

Read more

A look-ahead multi-entity Transformer for modeling coordinated agents in python

baller2vec++ This is the repository for the paper: Michael A. Alcorn and Anh Nguyen. baller2vec++: A Look-Ahead Multi-Entity Transformer For Modeling Coordinated Agents. arXiv. 2021. To learn statistically dependent agent trajectories, baller2vec++ uses a specially designed self-attention mask to simultaneously process three different sets of feature vectors in a single Transformer. The three sets consist of location feature vectors like those found in baller2vec, look-ahead trajectory feature vectors, and starting location feature vectors. This design allows the […]
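
As a generic illustration of the masking mechanism described above (not the paper's actual mask pattern), the sketch below concatenates three feature-vector sets along the sequence axis and uses a boolean attention mask to control which sets may attend to which inside a single Transformer layer.

import torch
import torch.nn as nn

# Generic illustration only: three feature sets processed together under one mask.
B, D, n = 2, 64, 10
location   = torch.randn(B, n, D)
look_ahead = torch.randn(B, n, D)
start_loc  = torch.randn(B, n, D)
seq = torch.cat([location, look_ahead, start_loc], dim=1)   # (B, 3n, D)

N = seq.size(1)
mask = torch.zeros(N, N, dtype=torch.bool)   # False = attention allowed
mask[:n, n:2 * n] = True                     # hypothetical rule: location tokens may not
                                             # peek at look-ahead trajectory tokens
layer = nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True)
out = layer(seq, src_mask=mask)              # (B, 3n, D)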

Read more

State-of-the-art Natural Language Processing for Jax, PyTorch and TensorFlow

transformers Transformers: State-of-the-art Natural Language Processing for PyTorch, TensorFlow, and JAX. 🤗 Transformers provides thousands of pretrained models to perform tasks on texts such as classification, information extraction, question answering, summarization, translation, text generation and more in over 100 languages. Its aim is to make cutting-edge NLP easier to use for everyone. 🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and then share them with the […]
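
The quickest way to see the "download and use a pretrained model on a given text" promise in action is the library's pipeline API; the example below runs sentiment analysis with the default model (the first call downloads the weights).

from transformers import pipeline

# High-level pipeline API: fetches a default pretrained model on first use.
classifier = pipeline("sentiment-analysis")
print(classifier("🤗 Transformers makes state-of-the-art NLP easy to use."))
# [{'label': 'POSITIVE', 'score': ...}]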

Read more