PyTorch Extension Library of Optimized Autograd Sparse Matrix Operations

PyTorch Sparse. This package is a small extension library of optimized sparse matrix operations with autograd support. It currently consists of a set of methods that work on varying data types and are implemented for both CPU and GPU. To avoid the hassle of creating torch.sparse_coo_tensor, the package defines operations on sparse tensors by simply passing index and value tensors as arguments (with the same shapes as defined in PyTorch). Note that only value comes with autograd support, as index […]
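As a sketch of the index/value interface described above (based on the torch_sparse API; exact signatures may differ between versions), coalescing a sparse matrix given as separate index and value tensors might look like this:

import torch
from torch_sparse import coalesce

# A 3x2 sparse matrix given as raw index/value tensors (duplicates allowed).
index = torch.tensor([[1, 0, 1, 0, 2, 1],
                      [0, 1, 1, 1, 0, 0]])
value = torch.tensor([1., 2., 3., 4., 5., 6.])

# Sum duplicate entries and sort the indices; autograd flows through value only.
index, value = coalesce(index, value, m=3, n=2)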

Read more

PyTorch Extension Library of Optimized Scatter Operations

PyTorch Scatter. This package is a small extension library of highly optimized sparse update (scatter and segment) operations for use in PyTorch, which are missing from the main package. Scatter and segment operations can be roughly described as reduce operations based on a given "group-index" tensor. Segment operations require the "group-index" tensor to be sorted, whereas scatter operations are not subject to this requirement. The package consists of the following operations with reduction types "sum", "mean", "min", and "max". In addition, we provide the […]
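As an illustration of a scatter reduction over an unsorted "group-index" tensor (a sketch following the torch_scatter API), scatter_max reduces src entries that share the same index and also returns the argmax positions:

import torch
from torch_scatter import scatter_max

src = torch.tensor([[2, 0, 1, 4, 3], [0, 2, 1, 3, 4]])
index = torch.tensor([[4, 5, 4, 2, 3], [0, 0, 2, 2, 1]])

# For each row, take the maximum over src entries that share an index;
# the output has size index.max() + 1 along the reduced dimension.
out, argmax = scatter_max(src, index, dim=-1)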

Read more

A collection of extensions and data-loaders for few-shot learning & meta-learning in PyTorch

A collection of extensions and data loaders for few-shot learning and meta-learning in PyTorch. Torchmeta contains popular meta-learning benchmarks, fully compatible with both torchvision and PyTorch's DataLoader. Features include: a unified interface for both few-shot classification and regression problems, to allow easy benchmarking on multiple problems and reproducibility; helper functions for some popular problems, with default arguments from the literature; and a thin extension of PyTorch's Module, called MetaModule, that simplifies the creation of certain meta-learning models (e.g. gradient-based meta-learning methods). See […]
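A minimal sketch of loading a few-shot classification benchmark with Torchmeta (the helper names follow the project's documented API; treat the exact arguments as assumptions):

from torchmeta.datasets.helpers import omniglot
from torchmeta.utils.data import BatchMetaDataLoader

# 5-way, 5-shot Omniglot tasks from the meta-training split.
dataset = omniglot("data", ways=5, shots=5, test_shots=15,
                   meta_train=True, download=True)
dataloader = BatchMetaDataLoader(dataset, batch_size=16, num_workers=4)

for batch in dataloader:
    # Each batch is a dict of tasks with "train" (support) and "test" (query) sets.
    train_inputs, train_targets = batch["train"]
    test_inputs, test_targets = batch["test"]
    break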

Read more

A recurrent unit that can run over 10 times faster than cuDNN LSTM

sru. SRU is a recurrent unit that can run over 10 times faster than cuDNN LSTM, without loss of accuracy on the many tasks it was tested on. For example, measuring the average processing time of a single mini-batch of 32 samples on a GTX 1070, SRU achieves a 10 to 16 times speed-up over LSTM and runs as fast as (or faster than) word-level convolution using conv2d. Reference: Simple Recurrent Units for Highly […]
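A minimal usage sketch, assuming SRU follows the usual PyTorch RNN calling convention of (seq_len, batch, input_size) inputs; note the headline speed-ups above were measured on GPU:

import torch
from sru import SRU

# Input of shape (seq_len, batch, input_size).
x = torch.randn(20, 32, 128)

rnn = SRU(input_size=128, hidden_size=128, num_layers=2)
output, states = rnn(x)  # output: (20, 32, 128); states: final cell states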

Read more

Model summary in PyTorch similar to model.summary() in Keras

Keras-style model.summary() in PyTorch. Keras has a neat API to view a visualization of the model, which is very helpful while debugging your network. Here is barebones code that tries to mimic the same in PyTorch. The aim is to provide information complementary to what print(your_model) provides in PyTorch.

Usage:

pip install torchsummary
or
git clone https://github.com/sksq96/pytorch-summary

from torchsummary import summary
summary(your_model, input_size=(channels, H, W))

Note that the input_size is required to make a forward […]
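For instance, here is a sketch of a per-layer summary for a small CNN (passing device="cpu" is an assumption here so the example runs without a GPU):

import torch.nn as nn
from torchsummary import summary

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 10),
)

# Prints a Keras-like table of layer types, output shapes, and parameter counts.
summary(model, input_size=(3, 32, 32), device="cpu")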

Read more

A collection of optimizers for PyTorch compatible with optim module

torch-optimizer. torch-optimizer is a collection of optimizers for PyTorch, compatible with the optim module.

Simple example:

import torch_optimizer as optim

# model = …
optimizer = optim.DiffGrad(model.parameters(), lr=0.001)
optimizer.step()

Installation is simple, just:

$ pip install torch_optimizer

Documentation: https://pytorch-optimizer.rtfd.io
GitHub: https://github.com/jettify/pytorch-optimizer
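Putting the snippet above into a complete, if toy, training loop as a sketch (DiffGrad is one of the optimizers the package provides, used here as a drop-in replacement for torch.optim):

import torch
import torch.nn.functional as F
import torch_optimizer as optim

model = torch.nn.Linear(10, 1)
optimizer = optim.DiffGrad(model.parameters(), lr=0.001)

x, y = torch.randn(32, 10), torch.randn(32, 1)
for _ in range(100):
    optimizer.zero_grad()
    loss = F.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()  # same interface as torch.optim optimizers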

Read more

A PyTorch implementation of EfficientNet and EfficientNetV2

EfficientNet PyTorch. A PyTorch implementation of EfficientNet and EfficientNetV2 (coming soon!).

Quickstart: install with pip install efficientnet_pytorch and load a pretrained EfficientNet with:

from efficientnet_pytorch import EfficientNet
model = EfficientNet.from_pretrained('efficientnet-b0')

Update (April 2, 2021): The EfficientNetV2 paper has been released! I am working on implementing it as you read this 🙂

About EfficientNetV2: EfficientNetV2 is a new family of convolutional networks that have faster training speed and better parameter efficiency than previous models. To develop this family of models, […]
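Continuing the quickstart, a sketch of running inference on a (dummy) image batch; the 224x224 input resolution for efficientnet-b0 is an assumption here:

import torch
from efficientnet_pytorch import EfficientNet

model = EfficientNet.from_pretrained('efficientnet-b0')
model.eval()

img = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image batch
with torch.no_grad():
    logits = model(img)  # (1, 1000) ImageNet class scores
print(logits.argmax(dim=1))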

Read more

PyTorch Implementation of Differentiable ODE Solvers

PyTorch Implementation of Differentiable ODE Solvers. This library provides ordinary differential equation (ODE) solvers implemented in PyTorch. Backpropagation through ODE solutions is supported using the adjoint method, at constant memory cost. For usage of ODE solvers in deep learning applications, see reference [1]. As the solvers are implemented in PyTorch, the algorithms in this repository fully support running on the GPU.

Installation. To install the latest stable version:
pip install torchdiffeq
To install the latest from GitHub:
pip install git+https://github.com/rtqichen/torchdiffeq

Examples […]
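A minimal sketch of the odeint interface: define the dynamics f(t, y) as a module and evaluate the solution at requested time points (for the constant-memory adjoint mentioned above, the package also exposes odeint_adjoint with the same call signature):

import torch
from torchdiffeq import odeint

class Dynamics(torch.nn.Module):
    # dy/dt = f(t, y); simple exponential decay as a stand-in
    def forward(self, t, y):
        return -0.5 * y

y0 = torch.tensor([2.0])
t = torch.linspace(0., 5., 50)

ys = odeint(Dynamics(), y0, t)  # solution evaluated at each time in t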

Read more

An SVG-to-react-native file converter written in Python

svg-react-native-converter. A converter of SVG files to react-native, written in Python.

🚀 Technologies. Technologies I used to develop this application.

💻 Getting started. Requirements: clone the project and enter its folder:

$ git clone https://github.com/cesarzxk/svg-react-native-converter.git

Follow the steps below:

# To run the code (or double-click):
python ./main.py

🤔 How to contribute. Make a fork of this repository:

# Fork using the GitHub official command line
# If you don't have the GitHub CLI, use the web site to do that.
$ […]

Read more

Challenges and Opportunities in NLP Benchmarking

Over the last few years, models in NLP have become much more powerful, driven by advances in transfer learning. A consequence of this drastic increase in performance is that existing benchmarks have been left behind. Recent models "have outpaced the benchmarks to test for them" (AI Index Report 2021), quickly reaching super-human performance on standard benchmarks such as SuperGLUE and SQuAD. Does this mean that we have solved natural language processing? Far from it. However, the traditional practices for evaluating performance […]

Read more
Read more