Minimal PyTorch implementation of Generative Latent Optimization

This is a reimplementation of the paper by Piotr Bojanowski, Armand Joulin, David Lopez-Paz, and Arthur Szlam: Optimizing the Latent Space of Generative Networks. I'm not one of the authors; I reimplemented parts of the paper in PyTorch to learn about PyTorch and generative models. I also liked the idea in the paper and was surprised that the approach actually works. The implementation of the Laplacian pyramid L1 loss is inspired by https://github.com/mtyka/laploss. DCGAN network architecture […]
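The excerpt stops before the loss details, so here is a minimal sketch of a Laplacian pyramid L1 loss of the kind the README credits to mtyka/laploss. The pyramid depth, bilinear upsampling, and per-level weighting below are my assumptions, not the repo's exact choices:

```python
import torch.nn.functional as F

def laplacian_pyramid(x, levels=3):
    # Repeatedly downsample and keep the band-pass residual at each scale.
    pyramid, current = [], x
    for _ in range(levels - 1):
        down = F.avg_pool2d(current, kernel_size=2)
        up = F.interpolate(down, size=current.shape[-2:], mode='bilinear',
                           align_corners=False)
        pyramid.append(current - up)   # detail lost by downsampling
        current = down
    pyramid.append(current)            # low-frequency residual
    return pyramid

def lap_l1_loss(pred, target, levels=3):
    # L1 between pyramid levels; the 2**(-2*l) weighting is an assumption.
    return sum((2 ** (-2 * l)) * F.l1_loss(p, t)
               for l, (p, t) in enumerate(zip(laplacian_pyramid(pred, levels),
                                              laplacian_pyramid(target, levels))))
```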

Read more

Poincaré Embeddings for Learning Hierarchical Representations

PyTorch implementation of Poincaré Embeddings for Learning Hierarchical Representations. Installation Simply clone this repository via git clone https://github.com/facebookresearch/poincare-embeddings.git cd poincare-embeddings conda env create -f environment.yml source activate poincare python setup.py build_ext --inplace Example: Embedding WordNet Mammals To embed the transitive closure of the WordNet mammals subtree, first generate the data via cd wordnet python transitive_closure.py This will generate the transitive closure of the full noun hierarchy as well as of the mammals subtree of […]
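For context on what the embeddings optimize, this is the Poincaré ball distance from the paper, d(u, v) = arcosh(1 + 2‖u−v‖² / ((1−‖u‖²)(1−‖v‖²))), sketched in plain PyTorch (the clamping epsilon is my addition for numerical safety):

```python
import torch

def poincare_distance(u, v, eps=1e-5):
    # d(u, v) = arcosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    sq_u = u.pow(2).sum(-1).clamp(max=1 - eps)   # keep points inside the ball
    sq_v = v.pow(2).sum(-1).clamp(max=1 - eps)
    sq_dist = (u - v).pow(2).sum(-1)
    x = 1 + 2 * sq_dist / ((1 - sq_u) * (1 - sq_v))
    # arcosh(x) = log(x + sqrt(x^2 - 1))
    return torch.log(x + torch.sqrt(x * x - 1 + eps))
```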

Read more

An implementation of the Adversarial Patch paper

adversarial-patch: PyTorch implementation of the Adversarial Patch paper. Not official and likely to have bugs/errors. How to run: Data set-up: Run attack: python make_patch.py --cuda --netClassifier inceptionv3 --max_count 500 --image_size 299 --patch_type circle --outf log Results: Patch shapes of both circles and squares gave good results (both achieved 100% success on the training set and eventually > 90% success on the test set). I managed to recreate the toaster example in the original […]
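The excerpt names the attack flags but not the update itself, so below is a rough sketch of one patch-optimization step with a circular mask. Applying the patch at a fixed location (the paper randomizes location, scale, and rotation) and the signed-gradient step are simplifications of mine:

```python
import torch
import torch.nn.functional as F

def circle_mask(size):
    # 1 inside a centered circle, 0 outside; cuts a circular patch.
    ys, xs = torch.meshgrid(torch.arange(size), torch.arange(size), indexing='ij')
    c = (size - 1) / 2
    return (((ys - c) ** 2 + (xs - c) ** 2) <= c ** 2).float()

def attack_step(model, image, patch, mask, target_class, lr=0.1):
    # Paste the patch, then move its pixels toward the target class.
    patch = patch.detach().requires_grad_(True)
    patched = image * (1 - mask) + patch * mask
    loss = -F.log_softmax(model(patched), dim=1)[:, target_class].mean()
    loss.backward()
    return (patch - lr * patch.grad.sign()).clamp(0, 1).detach()
```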

Read more

Efficient Neural Architecture Search (ENAS) in PyTorch

PyTorch implementation of Efficient Neural Architecture Search via Parameters Sharing. ENAS reduces the computational requirement (GPU-hours) of Neural Architecture Search (NAS) by 1000x via parameter sharing between models that are subgraphs within a large computational graph. SOTA on Penn Treebank language modeling. Prerequisites Python 3.6+ PyTorch==0.3.1 tqdm, scipy, imageio, graphviz, tensorboardX Usage Install prerequisites with: conda install graphviz pip install -r requirements.txt To train ENAS to discover a recurrent cell for RNN: python main.py --network_type rnn --dataset ptb --controller_optim adam […]
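Since parameter sharing is the whole trick, a toy illustration may help: every architecture the controller samples indexes into one shared pool of weights, so training any subgraph trains them all. The op pool and (op, activation) encoding here are illustrative, not the repo's actual search space:

```python
import torch
import torch.nn as nn

class SharedOps(nn.Module):
    # One shared pool of candidate operations; all sampled
    # architectures reuse these same parameters.
    def __init__(self, dim):
        super().__init__()
        self.ops = nn.ModuleList([nn.Linear(dim, dim),
                                  nn.Linear(dim, dim),
                                  nn.Identity()])
        self.acts = [torch.tanh, torch.relu, torch.sigmoid]

    def forward(self, x, arch):
        # arch: list of (op_index, activation_index) pairs from a controller.
        for op_idx, act_idx in arch:
            x = self.acts[act_idx](self.ops[op_idx](x))
        return x

shared = SharedOps(dim=32)
out_a = shared(torch.randn(4, 32), [(0, 1), (2, 0)])          # one subgraph
out_b = shared(torch.randn(4, 32), [(1, 2), (0, 0), (1, 1)])  # same weights
```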

Read more

An implementation of shampoo, proposed in Shampoo

shampoo.pytorch: An implementation of Shampoo, proposed in Shampoo: Preconditioned Stochastic Tensor Optimization by Vineet Gupta, Tomer Koren and Yoram Singer.

# Suppose the size of the tensor grad is (i, j, k),
# dim_id = 1 and dim = j
grad = grad.transpose_(0, dim_id).contiguous()  # (j, i, k)
transposed_size = grad.size()
grad = grad.view(dim, -1)    # (j, i x k)
grad_t = grad.t()            # (i x k, j)
precond.add_(grad @ grad_t)  # (j, j)
inv_precond.copy_(_matrix_power(precond, -1 / order))  # (j, […]
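Reading the snippet in context: each tensor dimension gets its own preconditioner, and the gradient is folded so that dimension leads before multiplying. A sketch of the full loop, assuming preconditioners initialized to eps * I and an eigendecomposition-based _matrix_power helper (both assumptions of mine):

```python
import torch

def _matrix_power(mat, power):
    # Symmetric matrix power via eigendecomposition (assumed helper).
    vals, vecs = torch.linalg.eigh(mat)
    return vecs @ torch.diag(vals.clamp(min=1e-6).pow(power)) @ vecs.t()

def shampoo_precondition(grad, preconds):
    # preconds[i]: running (dim_i x dim_i) statistic, initialized to eps * I.
    order = grad.dim()
    for dim_id in range(order):
        dim = grad.size(dim_id)
        grad = grad.transpose(0, dim_id).contiguous()  # bring dim to front
        shape = grad.size()
        grad = grad.view(dim, -1)                      # (dim, rest)
        preconds[dim_id].add_(grad @ grad.t())         # accumulate H_i
        grad = _matrix_power(preconds[dim_id], -1 / order) @ grad
        grad = grad.view(shape).transpose(0, dim_id)   # fold back
    return grad
```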

Read more

Unofficial PyTorch reimplementation of Hand-Biomechanical-Constraints

Hand Biomechanical Constraints Pytorch Unofficial PyTorch reimplementation of Hand-Biomechanical-Constraints (ECCV 2020). This project reimplements the following components: 3 kinds of biomechanical soft constraints; integration of BMC into the training procedure (PyTorch version). Usage Download data Download the 3D joint location data joints.zip from Google Drive or Baidu Pan (2pip). These statistics are from the following datasets: Note that the data from these datasets is under their own licenses. Calculate BMC Run the code python calculate_bmc.py You will get bone_len_max.npy bone_len_min.npy for bone length limits curvatures_max.npy curvatures_min.npy […]
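The bone-length statistics the script produces are typically consumed as soft penalties during training. A sketch of such a constraint, assuming joints as a (B, J, 3) tensor and the .npy limits loaded into len_min/len_max (the function and variable names are mine):

```python
import torch

def bone_length_loss(joints, bones, len_min, len_max):
    # joints: (B, J, 3); bones: list of (parent, child) index pairs;
    # len_min/len_max: per-bone limits, e.g. from bone_len_min/max.npy.
    parents = joints[:, [p for p, _ in bones]]
    children = joints[:, [c for _, c in bones]]
    lengths = (children - parents).norm(dim=-1)    # (B, num_bones)
    too_short = (len_min - lengths).clamp(min=0)   # below anatomical range
    too_long = (lengths - len_max).clamp(min=0)    # above anatomical range
    return (too_short + too_long).pow(2).mean()
```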

Read more

Face Identity Disentanglement via Latent Space Mapping

ID-disentanglement-Pytorch PyTorch implementation of the paper Face Identity Disentanglement via Latent Space Mapping for both training and evaluation, with StyleGAN 2. Changes from the original paper Instead of using a discriminator loss for the mapper, we have used several other losses, such as: LPIPS loss (The Unreasonable Effectiveness of Deep Features as a Perceptual Metric, Zhang et al., 2018) MSE loss A different ID loss A different landmark detector The reason for these changes is that the training procedure with […]
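To make the loss swap concrete, here is a rough composite of the listed terms, assuming the lpips package for the perceptual term and some pretrained id_embed network for identity; the weights are placeholders, not the repo's values:

```python
import torch.nn.functional as F
import lpips  # perceptual metric of Zhang et al. 2018 (pip install lpips)

percep = lpips.LPIPS(net='alex')  # expects images scaled to [-1, 1]

def mapper_loss(generated, target, id_embed, w_p=0.8, w_mse=1.0, w_id=0.5):
    # Replaces the paper's discriminator loss with LPIPS + MSE + ID terms.
    loss_p = percep(generated, target).mean()
    loss_mse = F.mse_loss(generated, target)
    loss_id = 1 - F.cosine_similarity(id_embed(generated),
                                      id_embed(target)).mean()
    return w_p * loss_p + w_mse * loss_mse + w_id * loss_id
```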

Read more

Look before you leap: learning landmark features for one-stage visual grounding

LBYL-Net This repo implements the paper Look Before You Leap: Learning Landmark Features For One-Stage Visual Grounding, CVPR 2021. Getting Started Prerequisites python 3.7 pytorch 1.0 cuda 10.0 gcc 4.9.2 or above Installation Clone the repo and install dependencies: git clone https://github.com/svip-lab/LBYLNet.git cd LBYLNet pip install -r requirements.txt You also need to install our landmark feature convolution: cd ext git clone https://github.com/hbb1/landmarkconv.git cd landmarkconv/lib/layers python setup.py install --user We follow the dataset structure of DMS and FAOA. For convenience, we have packed them […]

Read more

Probabilistic Tracklet Scoring and Inpainting for Multiple Object Tracking

ArTIST Probabilistic Tracklet Scoring and Inpainting for Multiple Object Tracking (CVPR 2021). PyTorch implementation of the ArTIST motion model. This repo contains: a training script for the Moving Agent network; a training script for the ArTIST motion model; a demo script for inferring the likelihood of current observations (detections); a demo script for inpainting missing observations/detections. Demo 1: Likelihood estimation of observations Run: python3 demo_scoring.py This will generate the output in the temp/ar/log_p directory, which looks like this: This demo gets as […]
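The scoring demo rests on one idea: an autoregressive model assigns a probability to each incoming detection given the tracklet so far. Below is a toy version with a GRU over box deltas and a categorical head over discretized next deltas; the architecture is illustrative only, not ArTIST's actual design:

```python
import torch
import torch.nn as nn

class MotionScorer(nn.Module):
    # Toy autoregressive motion model: log-likelihood of the next
    # (discretized) box delta given a tracklet's past deltas.
    def __init__(self, num_bins=256, hidden=128):
        super().__init__()
        self.embed = nn.Linear(4, hidden)  # (dx, dy, dw, dh) per step
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_bins)

    def log_likelihood(self, deltas, next_bin):
        # deltas: (B, T, 4) past motion; next_bin: (B,) observed delta's bin.
        h, _ = self.rnn(self.embed(deltas))
        log_probs = torch.log_softmax(self.head(h[:, -1]), dim=-1)
        return log_probs.gather(1, next_bin.unsqueeze(1)).squeeze(1)
```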

Read more

A variety of sequence model architectures from scratch in PyTorch

Sequence Models This repository implements a variety of sequence model architectures from scratch in PyTorch. Effort has been put into making the code well structured so that it can serve as learning material. The training loop implements the learner design pattern from fast.ai in pure PyTorch, with access to the loop provided through callbacks. Detailed logging and graphs are also provided with Python logging and wandb. Additional implementations will be added. Setup Using Miniconda/Anaconda: cd path_to_repo conda create --name --file […]
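The learner-with-callbacks pattern the README describes is easy to picture: the loop stays fixed and fires hooks that callbacks override. A stripped-down sketch (the hook names and attributes are my simplification of the fast.ai pattern, not this repo's API):

```python
class Callback:
    # No-op hooks; subclasses override only the events they need.
    def on_epoch_start(self, learner): pass
    def on_batch_end(self, learner): pass

class Learner:
    def __init__(self, model, opt, loss_fn, data, callbacks=()):
        self.model, self.opt, self.loss_fn = model, opt, loss_fn
        self.data, self.callbacks = data, list(callbacks)

    def fit(self, epochs):
        # The loop itself never changes; logging, schedules, etc.
        # plug in through the callback hooks.
        for self.epoch in range(epochs):
            for cb in self.callbacks: cb.on_epoch_start(self)
            for xb, yb in self.data:
                self.loss = self.loss_fn(self.model(xb), yb)
                self.loss.backward()
                self.opt.step(); self.opt.zero_grad()
                for cb in self.callbacks: cb.on_batch_end(self)
```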

Read more