An implementation of Performer, a linear attention-based transformer, in PyTorch

Performer – PyTorch
An implementation of Performer, a linear attention-based transformer variant built on the Fast Attention Via positive Orthogonal Random features (FAVOR+) approach.

Install

$ pip install performer-pytorch

Then, if you plan on training an autoregressive model, run:

$ pip install -r requirements.txt

Usage

Performer Language Model

import torch
from performer_pytorch import PerformerLM

model = PerformerLM(
    num_tokens = 20000,
    max_seq_len = 2048,  # max sequence length
    dim = 512,           # dimension
    depth = 12,          # layers
    […]
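The constructor call above is truncated. As a rough sketch of how the remaining arguments and a forward pass might look (the heads and causal arguments are taken from the repository README, not from this excerpt):

import torch
from performer_pytorch import PerformerLM

model = PerformerLM(
    num_tokens = 20000,
    max_seq_len = 2048,
    dim = 512,
    depth = 12,
    heads = 8,        # attention heads (assumption from the README)
    causal = True     # autoregressive masking
)

x = torch.randint(0, 20000, (1, 2048))  # a batch of token ids
logits = model(x)                       # shape (1, 2048, 20000)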

Read more

TabNet: Attentive Interpretable Tabular Learning

TabNet: Attentive Interpretable Tabular Learning
This is a PyTorch implementation of TabNet (Arik, S. O., & Pfister, T. (2019). TabNet: Attentive Interpretable Tabular Learning. arXiv preprint arXiv:1908.07442.) https://arxiv.org/pdf/1908.07442.pdf.

Easy installation
You can install using pip by running:

pip install pytorch-tabnet

Source code
If you want to use it locally within a docker container:

git clone [email protected]:dreamquark-ai/tabnet.git
cd tabnet to get inside the repository

CPU only: make start to build and get inside the container
GPU: make start-gpu to build and […]
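For context, a minimal sketch of the scikit-learn-style interface the package exposes (the TabNetClassifier class and fit/predict calls follow the project README; the toy data is a placeholder):

import numpy as np
from pytorch_tabnet.tab_model import TabNetClassifier

# toy tabular data: 100 samples, 10 features, binary labels
X_train = np.random.rand(100, 10).astype(np.float32)
y_train = np.random.randint(0, 2, 100)

clf = TabNetClassifier()
clf.fit(X_train, y_train, max_epochs=10)
preds = clf.predict(X_train)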

Read more

A Python library providing support for higher-order optimization

higher is a library providing support for higher-order optimization, e.g. through unrolled first-order optimization loops, of “meta” aspects of these loops. It provides tools for making existing torch.nn.Module instances “stateless”, meaning that changes to their parameters can be tracked and gradients with respect to intermediate parameters can be taken. It also provides a suite of differentiable optimizers to facilitate the implementation of various meta-learning approaches. Full documentation is available at https://higher.readthedocs.io/en/latest/.

Python version >= 3.5
PyTorch version >= 1.3 […]
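A minimal sketch of the pattern this enables, an unrolled inner loop whose updates stay differentiable, using the innerloop_ctx context manager from the documentation (the model and data here are placeholders):

import torch
import torch.nn.functional as F
import higher

model = torch.nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(4, 10)
y = torch.randint(0, 2, (4,))

with higher.innerloop_ctx(model, opt) as (fmodel, diffopt):
    # fmodel is a stateless copy of model; diffopt applies differentiable updates
    for _ in range(3):
        loss = F.cross_entropy(fmodel(x), y)
        diffopt.step(loss)
    # the outer loss backpropagates through the unrolled inner steps
    meta_loss = F.cross_entropy(fmodel(x), y)
    meta_loss.backward()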

Read more

Generic EfficientNets for PyTorch

(Generic) EfficientNets for PyTorch
A ‘generic’ implementation of EfficientNet, MixNet, MobileNetV3, etc. that covers most of the compute/parameter-efficient architectures derived from the MobileNet V1/V2 block sequence, including those found via automated neural architecture search. All models are implemented by the GenEfficientNet or MobileNetV3 classes, with string-based architecture definitions to configure the block layouts (idea from here).

Models
Implemented models include:

I originally implemented and trained some of these models with code here; this repository contains just the GenEfficientNet models, validation, […]
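The models are also exposed through torch.hub; a minimal sketch of loading a pretrained one (the efficientnet_b0 entry point follows the repository README):

import torch

model = torch.hub.load(
    'rwightman/gen-efficientnet-pytorch',
    'efficientnet_b0',
    pretrained=True,
)
model.eval()

x = torch.randn(1, 3, 224, 224)   # ImageNet-sized input
with torch.no_grad():
    logits = model(x)             # (1, 1000) class logits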

Read more

PyTorch Extension Library of Optimized Autograd Sparse Matrix Operations

PyTorch Sparse
This package consists of a small extension library of optimized sparse matrix operations with autograd support. The package currently provides the following methods:

All included operations work on varying data types and are implemented for both CPU and GPU. To avoid the hassle of creating torch.sparse_coo_tensor, this package defines operations on sparse tensors by simply passing index and value tensors as arguments (with the same shapes as defined in PyTorch). Note that only value comes with autograd support, as index […]
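To illustrate the index/value calling convention, a minimal sketch of a sparse-dense matrix multiply (the spmm signature follows the project README and may differ between versions):

import torch
from torch_sparse import spmm

# a sparse 2x3 matrix in COO form: entries (0,0)=1, (0,2)=2, (1,1)=4
index = torch.tensor([[0, 0, 1],
                      [0, 2, 1]])
value = torch.tensor([1., 2., 4.])
dense = torch.randn(3, 4)

out = spmm(index, value, 2, 3, dense)   # dense (2, 4) result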

Read more

PyTorch Extension Library of Optimized Scatter Operations

PyTorch Scatter
This package consists of a small extension library of highly optimized sparse update (scatter and segment) operations for use in PyTorch, which are missing in the main package. Scatter and segment operations can be roughly described as reduce operations based on a given “group-index” tensor. Segment operations require the “group-index” tensor to be sorted, whereas scatter operations are not subject to this requirement. The package consists of the following operations with reduction types “sum” | “mean” | “min” | “max”:

In addition, we provide the […]
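A minimal sketch of a scatter reduction over a “group-index” tensor, adapted from the scatter_max example in the project README:

import torch
from torch_scatter import scatter_max

src = torch.tensor([[2, 0, 1, 4, 3],
                    [0, 2, 1, 3, 4]])
index = torch.tensor([[4, 5, 4, 2, 3],
                      [0, 0, 2, 2, 1]])

# entries of src sharing a group index are reduced with "max"
out, argmax = scatter_max(src, index, dim=-1)
# out: [[0, 0, 4, 3, 2, 0],
#       [2, 4, 3, 0, 0, 0]]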

Read more

A collection of extensions and data-loaders for few-shot learning & meta-learning in PyTorch

A collection of extensions and data loaders for few-shot learning & meta-learning in PyTorch. Torchmeta contains popular meta-learning benchmarks, fully compatible with both torchvision and PyTorch’s DataLoader.

Features
- A unified interface for both few-shot classification and regression problems, to allow easy benchmarking on multiple problems and reproducibility.
- Helper functions for some popular problems, with default arguments from the literature.
- A thin extension of PyTorch’s Module, called MetaModule, that simplifies the creation of certain meta-learning models (e.g. gradient-based meta-learning methods).
See […]
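A minimal sketch of loading a benchmark with one of those helpers and batching episodes, following the project README (download=True fetches Omniglot on first use):

from torchmeta.datasets.helpers import omniglot
from torchmeta.utils.data import BatchMetaDataLoader

# 5-way, 5-shot episodes from the meta-training split
dataset = omniglot("data", ways=5, shots=5, test_shots=15,
                   meta_train=True, download=True)
dataloader = BatchMetaDataLoader(dataset, batch_size=16)

for batch in dataloader:
    train_inputs, train_targets = batch["train"]   # (16, 25, 1, 28, 28)
    test_inputs, test_targets = batch["test"]      # (16, 75, 1, 28, 28)
    break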

Read more

A recurrent unit that can run over 10 times faster than cuDNN LSTM

sru
SRU is a recurrent unit that can run over 10 times faster than cuDNN LSTM, without loss of accuracy as tested on many tasks.

[Figure: average processing time of LSTM, conv2d, and SRU, tested on a GTX 1070]

For example, the figure above presents the processing time of a single mini-batch of 32 samples. SRU achieves a 10 to 16 times speed-up compared to LSTM, and operates as fast as (or faster than) word-level convolution using conv2d.

Reference: Simple Recurrent Units for Highly […]
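The interface mirrors nn.LSTM with (seq_len, batch, input_size) inputs; a minimal sketch (constructor arguments follow the project README, and the exact shape of the returned state may vary by version and configuration):

import torch
from sru import SRU

x = torch.randn(35, 8, 128)   # (seq_len, batch, input_size)
rnn = SRU(input_size=128, hidden_size=128, num_layers=2)

output, state = rnn(x)        # output: (35, 8, 128); state holds the final cell states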

Read more

Model summary in PyTorch similar to model.summary() in Keras

Keras-style model.summary() in PyTorch
Keras has a neat API to view a visualization of the model, which is very helpful while debugging your network. Here is barebones code to mimic the same in PyTorch. The aim is to provide information complementary to what print(your_model) in PyTorch provides.

Usage

pip install torchsummary
or
git clone https://github.com/sksq96/pytorch-summary

from torchsummary import summary
summary(your_model, input_size=(channels, H, W))

Note that the input_size is required to make a forward […]
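For instance, a minimal sketch on a small CNN (device="cpu" is passed so the example runs without a GPU; whether that argument is needed depends on the installed version):

import torch.nn as nn
from torchsummary import summary

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 10),
)

# prints per-layer output shapes and parameter counts
summary(model, input_size=(1, 28, 28), device="cpu")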

Read more

A collection of optimizers for PyTorch compatible with the optim module

torch-optimizer
torch-optimizer is a collection of optimizers for PyTorch compatible with the optim module.

Simple example

import torch_optimizer as optim

# model = …
optimizer = optim.DiffGrad(model.parameters(), lr=0.001)
optimizer.step()

Installation
The installation process is simple:

$ pip install torch_optimizer

Documentation
https://pytorch-optimizer.rtfd.io

GitHub
https://github.com/jettify/pytorch-optimizer
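To round out the snippet above, a minimal sketch of a full optimization step (the toy model and data are placeholders; DiffGrad is used exactly like a torch.optim optimizer):

import torch
import torch.nn.functional as F
import torch_optimizer as optim

model = torch.nn.Linear(8, 1)
optimizer = optim.DiffGrad(model.parameters(), lr=0.001)

x, y = torch.randn(32, 8), torch.randn(32, 1)
loss = F.mse_loss(model(x), y)

optimizer.zero_grad()
loss.backward()
optimizer.step()   # drop-in replacement for a torch.optim step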

Read more