WarpedGANSpace: Finding non-linear RBF paths in GAN latent space

Authors' official PyTorch implementation of WarpedGANSpace: Finding Non-linear RBF Paths in GAN Latent Space (ICCV 2021). If you use this code for your research, please cite our paper. Overview: In this work, we discover non-linear interpretable paths in GAN latent space. To do so, we model non-linear paths using RBF-based warping functions which, by warping the latent space, endow it with vector fields (their gradients). We use the latter to traverse the latent space along these paths […]
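To make the idea concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' code) of a single RBF-based warping function f(z) = sum_k alpha_k * exp(-gamma_k * ||z - c_k||^2); its gradient defines a vector field whose normalized direction is followed to traverse the latent space. All names and shapes are illustrative.

import torch

# Hypothetical sketch of one RBF warping function and the path it induces.
# Names (centers, alphas, gammas, eps) are illustrative, not the repo's API.
def rbf_warp(z, centers, alphas, gammas):
    # f(z) = sum_k alpha_k * exp(-gamma_k * ||z - c_k||^2)
    d2 = ((z.unsqueeze(1) - centers.unsqueeze(0)) ** 2).sum(-1)  # (B, K)
    return (alphas * torch.exp(-gammas * d2)).sum(-1)            # (B,)

def traverse(z, centers, alphas, gammas, steps=10, eps=0.1):
    # Follow the normalized gradient of f -- the induced vector field.
    path = [z]
    for _ in range(steps):
        z = z.detach().requires_grad_(True)
        f = rbf_warp(z, centers, alphas, gammas).sum()
        (grad,) = torch.autograd.grad(f, z)
        z = z + eps * grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)
        path.append(z.detach())
    return path

# Toy usage: one warping with K = 8 RBF centers in a 512-d latent space.
z0 = torch.randn(1, 512)
centers, alphas, gammas = torch.randn(8, 512), torch.randn(8), torch.rand(8)
latent_path = traverse(z0, centers, alphas, gammas)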

Read more

NFNets and Adaptive Gradient Clipping for SGD implemented in PyTorch

Paper: https://arxiv.org/abs/2102.06171.pdf Original code: https://github.com/deepmind/deepmind-research/tree/master/nfnets Do star this repository if it helps your work! Note: see this comment for a generic implementation for any optimizer, as a temporary reference for anyone who needs it. Install from PyPI: pip3 install nfnets-pytorch, or install the latest code using: pip3 install git+https://github.com/vballoli/nfnets-pytorch WSConv2d: use WSConv2d and WSConvTranspose2d like any other torch.nn.Conv2d or torch.nn.ConvTranspose2d module: import torch from torch import nn from nfnets import WSConv2d conv = nn.Conv2d(3,6,3) w_conv = WSConv2d(
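Since the excerpt's snippet is cut off, here is a hedged completion assuming only what the text states, namely that WSConv2d is a drop-in replacement for torch.nn.Conv2d; the dummy input shapes and forward pass are illustrative.

import torch
from torch import nn
from nfnets import WSConv2d

# WSConv2d takes the same arguments as nn.Conv2d (per the excerpt above).
conv = nn.Conv2d(3, 6, 3)
w_conv = WSConv2d(3, 6, 3)

# Illustrative forward pass on a dummy batch of 3-channel 32x32 images.
x = torch.randn(4, 3, 32, 32)
out = w_conv(x)   # same output shape as conv(x): (4, 6, 30, 30)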

Read more

LSTM and QRNN Language Model Toolkit for PyTorch

This repository contains the code used for two Salesforce Research papers. The model comes with instructions to train word-level language models over the Penn Treebank (PTB), WikiText-2 (WT2), and WikiText-103 (WT103) datasets, and character-level language models over the Penn Treebank (PTBC) and Hutter Prize (enwik8) datasets. The model can be composed of an LSTM or a Quasi-Recurrent Neural Network (QRNN), which is two or more times faster than the cuDNN LSTM in this setup while achieving equivalent or better […]
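As a rough illustration of the kind of model being trained (a generic sketch, not this repository's code), a word-level LSTM language model in plain PyTorch looks like the following; vocabulary size, embedding size, and layer counts are placeholders.

import torch
from torch import nn

# Minimal word-level LSTM language model sketch (illustrative values only).
class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size=10000, emb_size=400, hidden_size=1150, num_layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_size)
        self.lstm = nn.LSTM(emb_size, hidden_size, num_layers, batch_first=True)
        self.decoder = nn.Linear(hidden_size, vocab_size)

    def forward(self, tokens, hidden=None):
        emb = self.embed(tokens)                 # (batch, seq, emb)
        output, hidden = self.lstm(emb, hidden)  # (batch, seq, hidden)
        return self.decoder(output), hidden      # logits over the vocabulary

model = LSTMLanguageModel()
tokens = torch.randint(0, 10000, (8, 70))        # dummy batch of token ids
logits, _ = model(tokens)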

Read more

A PyTorch implementation of Attentive Recurrent Comparators

PyTorch implementation of Attentive Recurrent Comparators by Shyam et al., with a blog post explaining Attentive Recurrent Comparators and visualizations of attention on same and on different characters. How to run? Download data: a one-time 52 MB download that shouldn't take more than a few minutes. Train: let it train until the accuracy rises to at least 80%. Early stopping is not implemented yet, so you will have to kill the process manually.

Read more

The cross-modality generative model that synthesizes dance from music

Dancing to Music: PyTorch implementation of the cross-modality generative model that synthesizes dance from music. Paper: Hsin-Ying Lee, Xiaodong Yang, Ming-Yu Liu, Ting-Chun Wang, Yu-Ding Lu, Ming-Hsuan Yang, Jan Kautz, Dancing to Music, Neural Information Processing Systems (NeurIPS) 2019. [Paper] [YouTube] [Project] [Blog] [Supp] Example videos: Beat-Matching (1st row: generated dance sequences, 2nd row: music beats, 3rd row: kinematic beats); Multimodality: generate various dance sequences with […]

Read more

Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in PyTorch

COCO-LM Pretraining (wip): Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in PyTorch. They were able to make contrastive learning work in a self-supervised manner for language model pretraining. It seems like a solid successor to ELECTRA. Install: $ pip install coco-lm-pytorch Usage: an example using the x-transformers library: $ pip install x-transformers Then: import torch from coco_lm_pytorch import COCO # (1) instantiate the generator and discriminator, making sure that the generator is roughly a quarter […]
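Continuing the truncated example in the spirit of the excerpt: a sketch that builds a small generator and a larger discriminator with x-transformers and hands them to the COCO wrapper. The TransformerWrapper/Encoder calls are standard x-transformers usage; the COCO keyword arguments shown are assumptions about this library's interface, so check the repository for the exact signature.

import torch
from x_transformers import TransformerWrapper, Encoder
from coco_lm_pytorch import COCO

# (1) generator and discriminator; the generator is roughly a quarter the size
generator = TransformerWrapper(
    num_tokens = 20000,
    max_seq_len = 1024,
    attn_layers = Encoder(dim = 256, depth = 3, heads = 4)
)

discriminator = TransformerWrapper(
    num_tokens = 20000,
    max_seq_len = 1024,
    attn_layers = Encoder(dim = 1024, depth = 12, heads = 16)
)

# (2) wrap both in the COCO pretraining module -- keyword names below are
# assumptions for illustration, not the documented API
trainer = COCO(
    generator,
    discriminator,
    discr_dim = 1024,   # assumed: discriminator embedding dimension
    mask_prob = 0.15    # assumed: masking probability
)

# (3) pass in a dummy batch of token ids and backpropagate the pretraining loss
data = torch.randint(0, 20000, (4, 1024))
loss = trainer(data)
loss.backward()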

Read more

Implementation of OmniNet, Omnidirectional Representations from Transformers, in PyTorch

OmniNet – PyTorch: Implementation of OmniNet, Omnidirectional Representations from Transformers, in PyTorch. The authors propose that we should be attending to all the tokens of the previous layers, leveraging recent efficient-attention advances to achieve this goal. Install: $ pip install omninet-pytorch Usage: import torch from omninet_pytorch import Omninet omninet = Omninet( dim = 512, # model dimension depth = 6, # depth dim_head = 64, # dimension per head heads =
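The snippet above is cut off mid-call; here is a hedged completion and forward pass. The number of heads, sequence length, and mask handling are illustrative guesses, and the constructor may accept or require further arguments, so consult the repository for the documented defaults.

import torch
from omninet_pytorch import Omninet

omninet = Omninet(
    dim = 512,        # model dimension
    depth = 6,        # depth
    dim_head = 64,    # dimension per head
    heads = 8         # number of attention heads (illustrative value)
)

# Illustrative input: a batch of one sequence of 1024 tokens of dimension 512.
x = torch.randn(1, 1024, 512)
mask = torch.ones(1, 1024).bool()
out = omninet(x, mask = mask)   # (1, 1024, 512)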

Read more

Torch Containers simplified in PyTorch

This repository aims to help former Torchies transition more seamlessly to the “Containerless” world of PyTorch by providing a list of PyTorch implementations of Torch Table Layers. Note: as a result of full integration with autograd, PyTorch requires networks to be defined in the following manner: define all layers to be used in the __init__ method of your network, then combine them however you want in the forward method of your network (avoiding in-place Tensor ops). And that’s all there […]
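For instance, a network following that define-in-__init__ / combine-in-forward pattern (a generic sketch, not one of this repository's implementations) looks like:

import torch
from torch import nn

class TwoBranchNet(nn.Module):
    def __init__(self):
        super().__init__()
        # (1) declare every layer up front in __init__
        self.branch_a = nn.Linear(128, 64)
        self.branch_b = nn.Linear(128, 64)
        self.head = nn.Linear(128, 10)

    def forward(self, x):
        # (2) combine layers freely in forward, avoiding in-place tensor ops
        a = torch.relu(self.branch_a(x))
        b = torch.tanh(self.branch_b(x))
        return self.head(torch.cat([a, b], dim=-1))  # replaces a Torch "table" join

net = TwoBranchNet()
out = net(torch.randn(4, 128))  # (4, 10)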

Read more

Compact Bilinear Pooling for PyTorch

This repository has a pure Python implementation of Compact Bilinear Pooling and Count Sketch for PyTorch. This version relies on the FFT implementation provided with PyTorch 0.4.0 onward. For older versions of PyTorch, use the tag v0.3.0. Installation: run the setup.py, for instance: Usage: class compact_bilinear_pooling.CompactBilinearPooling(input1_size, input2_size, output_size, h1 = None, s1 = None, h2 = None, s2 = None) Basic usage: from compact_bilinear_pooling import CountSketch, CompactBilinearPooling input_size = 2048 output_size = 16000 mcb = CompactBilinearPooling(input_size, input_size, output_size).cuda() x = […]
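The basic-usage snippet is truncated at "x = […]"; the following is a hedged completion of the same example. The batch shape of the inputs and the .cuda() placement are illustrative assumptions (drop .cuda() to run on CPU), so verify the expected input shape against the repository.

import torch
from compact_bilinear_pooling import CountSketch, CompactBilinearPooling

input_size = 2048
output_size = 16000
mcb = CompactBilinearPooling(input_size, input_size, output_size).cuda()

# Two feature tensors per sample to be fused by compact bilinear pooling;
# the last dimension must equal input_size, the batch dimension is assumed.
x = torch.rand(4, input_size).cuda()
y = torch.rand(4, input_size).cuda()
z = mcb(x, y)   # fused representation with last dimension output_size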

Read more