Unofficial PyTorch implementation of Neural Additive Models (NAM)

nam-pytorch
Unofficial PyTorch implementation of Neural Additive Models (NAM) by Agarwal, et al. [abs, pdf]

Installation
You can access nam-pytorch via pip:
$ pip install nam-pytorch

Usage
import torch
from nam_pytorch import NAM

nam = NAM(
    num_features=784,
    link_func="sigmoid"
)

images = torch.rand(32, 784)
pred = nam(images) # [32, 1]

GitHub
https://github.com/rish-16/nam-pytorch

Read more

PyTorch implementation and pretrained models for XCiT models

Cross-Covariance Image Transformer (XCiT)
PyTorch implementation and pretrained models for XCiT. See XCiT: Cross-Covariance Image Transformer.

Linear complexity in time and memory
Our XCiT models have linear complexity with respect to the number of patches/tokens.
[Figures: peak memory (inference) and milliseconds/image (inference)]

Scaling to high resolution inputs
XCiT can scale to high-resolution inputs thanks both to its cheaper compute requirements and to its better adaptability to higher resolutions at test time (see Figure 3 in the paper).

Detection and Instance Segmentation for Ultra […]
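As a minimal sketch of how a pretrained XCiT variant could be loaded for inference, assuming the timm package and the model name xcit_small_12_p16_224 (neither is stated in the excerpt above):

import torch
import timm  # assumption: a timm release that ships XCiT variants

# Hypothetical example: load a pretrained XCiT model and classify one dummy image.
model = timm.create_model("xcit_small_12_p16_224", pretrained=True)
model.eval()

image = torch.rand(1, 3, 224, 224)  # stand-in for a real preprocessed image
with torch.no_grad():
    logits = model(image)           # [1, 1000] ImageNet class logits
print(logits.argmax(dim=-1))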

Read more

Diverse im2im and vid2vid selfie to anime translation

GANs N’ Roses – Pytorch
Official PyTorch repo for GANs N’ Roses: diverse im2im and vid2vid selfie-to-anime translation.

Abstract: We show how to learn a map that takes a content code, derived from a face image, and a randomly chosen style code to an anime image. We derive an adversarial loss from our simple and effective definitions of style and content. This adversarial loss guarantees the map is diverse: a very wide range of anime can be produced […]
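The mapping described in the abstract can be sketched as below; the class name, layer sizes, and output resolution are hypothetical placeholders for illustration, not the repository's actual architecture:

import torch
import torch.nn as nn

# Hypothetical sketch of a map from (content code, style code) to an image.
# The real GANs N' Roses generator is far larger and is trained with the adversarial
# loss described above; this only illustrates the input/output contract.
class ToyAnimeGenerator(nn.Module):
    def __init__(self, content_dim=256, style_dim=8, out_hw=64):
        super().__init__()
        self.out_hw = out_hw
        self.net = nn.Sequential(
            nn.Linear(content_dim + style_dim, 512),
            nn.ReLU(),
            nn.Linear(512, 3 * out_hw * out_hw),
            nn.Tanh(),
        )

    def forward(self, content_code, style_code):
        # Different random style codes give different anime renderings of the same face.
        out = self.net(torch.cat([content_code, style_code], dim=-1))
        return out.view(-1, 3, self.out_hw, self.out_hw)

content = torch.randn(4, 256)  # stand-in for content codes derived from face images
style = torch.randn(4, 8)      # randomly sampled style codes
anime = ToyAnimeGenerator()(content, style)  # [4, 3, 64, 64]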

Read more

Implementation of Uformer, Attention-based Unet, in Pytorch

Uformer – Pytorch
Implementation of Uformer, Attention-based Unet, in Pytorch. It will only offer the concat-cross-skip connection. This repository will be geared towards use in a project for learning protein structures. Specifically, it will include the ability to condition on time steps (needed for DDPM), as well as 2d relative positional encoding using rotary embeddings (instead of the bias on the attention matrix in the paper).

Install
$ pip install uformer-pytorch

Usage
import torch
from uformer_pytorch import Uformer

model = […]
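Since the usage snippet above is cut off, here is a minimal sketch of how the model might be constructed and called; the keyword arguments are assumptions about the package's constructor, not the truncated original:

import torch
from uformer_pytorch import Uformer

# Hedged sketch: the arguments below (dim, stages, num_blocks, window_size, ...) are
# assumptions about uformer-pytorch's API rather than the snippet that was truncated above.
model = Uformer(
    dim = 64,          # base feature dimension
    stages = 4,        # downsampling / upsampling stages of the U-net
    num_blocks = 2,    # attention blocks per stage
    window_size = 16,  # local attention window size
    dim_head = 64,
    heads = 8,
)

x = torch.randn(1, 3, 256, 256)  # dummy image batch
pred = model(x)                  # expected to match the input shape
print(pred.shape)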

Read more

Official implementation for TransDA

Official implementation for TransDA
Official PyTorch implementation of “Transformer-Based Source-Free Domain Adaptation”.

Prerequisites:
python == 3.6.8
pytorch == 1.1.0
torchvision == 0.3.0
numpy, scipy, sklearn, PIL, argparse, tqdm

Prepare pretrained model
We choose R50-ViT-B_16 as our encoder.
wget https://storage.googleapis.com/vit_models/imagenet21k/R50+ViT-B_16.npz
mkdir ./model/vit_checkpoint/imagenet21k
mv R50+ViT-B_16.npz ./model/vit_checkpoint/imagenet21k/R50+ViT-B_16.npz
Our checkpoints can be found in Dropbox.

Dataset:
Please manually download the datasets Office, Office-Home, VisDA, and Office-Caltech from the official websites, and modify the path of images in each ‘.txt’ file under the folder ‘./data/’. The […]

Read more

Language Translation with Transformer In Python!

This article was published as a part of the Data Science Blogathon.

Introduction
Natural Language Processing (NLP) is a field at the convergence of artificial intelligence and linguistics. The aim is to make computers understand real-world language, or natural language, so that they can perform tasks like Question Answering, Language Translation, and many more. NLP has lots of applications in different fields.
1. NLP enables the recognition and prediction of diseases based on electronic health records.
2. It is used […]
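As a minimal sketch of the kind of model such a translation article builds on, the snippet below runs dummy source and target embeddings through torch.nn.Transformer; the dimensions and toy data are illustrative assumptions, not the article's actual translation pipeline:

import torch
import torch.nn as nn

# Minimal sketch: a seq2seq Transformer over dummy embeddings.
# A real translation model would add token embeddings, positional encodings,
# a vocabulary projection layer, and attention masks.
model = nn.Transformer(d_model=512, nhead=8,
                       num_encoder_layers=3, num_decoder_layers=3)

src = torch.rand(10, 32, 512)  # (source length, batch, embedding dim)
tgt = torch.rand(20, 32, 512)  # (target length, batch, embedding dim)
out = model(src, tgt)          # (20, 32, 512): one vector per target position
print(out.shape)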

Read more

BERT for Natural Language Inference simplified in Pytorch!

This article was published as a part of the Data Science Blogathon.

Introduction to BERT:
BERT stands for Bidirectional Encoder Representations from Transformers. It was introduced in 2018 by Google researchers. BERT achieved state-of-the-art performance on most NLP tasks at the time and drew the attention of the data science community worldwide. It is extensively used today by data science practitioners for various NLP tasks. Details about the working of the BERT model can be found here. Introduction to […]
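A minimal sketch of how BERT can score a premise/hypothesis pair for Natural Language Inference, assuming the Hugging Face transformers package and a generic bert-base-uncased checkpoint (the article's own model choice and label set may differ):

import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Assumption: a vanilla bert-base-uncased checkpoint with a freshly initialized 3-way
# classification head (entailment / neutral / contradiction); a real NLI model would be
# fine-tuned on a dataset such as SNLI or MNLI before its predictions are meaningful.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)
model.eval()

premise = "A man is playing a guitar on stage."
hypothesis = "A person is performing music."

# Encode the sentence pair as [CLS] premise [SEP] hypothesis [SEP]
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape [1, 3]
print(logits.softmax(dim=-1))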

Read more

Introduction to Flair for NLP: A Simple yet Powerful State-of-the-Art NLP Library

Introduction
The last couple of years have been incredible for Natural Language Processing (NLP) as a domain! We have seen multiple breakthroughs – ULMFiT, ELMo, Facebook’s PyText, Google’s BERT, among many others. These have rapidly accelerated state-of-the-art research in NLP (and language modeling, in particular). We can now predict the next sentence, given a sequence of preceding words. What’s even more important is that machines are now beginning to understand the key element that had eluded them for so long. Context! Understanding context […]
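As a minimal sketch of the library this article introduces, the snippet below tags named entities in a sentence with Flair; the "ner" identifier refers to a standard pretrained tagger, though the article itself may walk through different examples:

from flair.data import Sentence
from flair.models import SequenceTagger

# Load a pretrained named-entity-recognition tagger (downloads weights on first use).
tagger = SequenceTagger.load("ner")

sentence = Sentence("George Washington went to Washington.")
tagger.predict(sentence)

# Print the recognized entity spans and their labels.
for entity in sentence.get_spans("ner"):
    print(entity)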

Read more

Introduction to PyTorch-Transformers: An Incredible Library for State-of-the-Art NLP (with Python code)

Overview
We look at the latest state-of-the-art NLP library in this article, called PyTorch-Transformers.
We will also implement PyTorch-Transformers in Python using popular NLP models like Google’s BERT and OpenAI’s GPT-2!
This has the potential to revolutionize the landscape of NLP as we know it.

Introduction
“NLP’s ImageNet moment has arrived.” – Sebastian Ruder
Imagine having the power to build the Natural Language Processing (NLP) model that powers Google Translate. What if I told you this can be done […]
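As a minimal sketch of what the library makes possible, the snippet below loads GPT-2 with pytorch-transformers and greedily predicts the next token of a prompt; the prompt text is an arbitrary illustration, not taken from the article:

import torch
from pytorch_transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "Natural language processing has"  # arbitrary illustrative prompt
input_ids = torch.tensor([tokenizer.encode(text)])

with torch.no_grad():
    logits = model(input_ids)[0]          # [1, seq_len, vocab_size]

# Greedily pick the most likely next token and append it to the prompt.
next_token_id = logits[0, -1].argmax().item()
print(text + tokenizer.decode([next_token_id]))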

Read more

Automatic Image Captioning using Deep Learning (CNN and LSTM) in PyTorch

Introduction
Deep Learning is a rapidly growing field right now – with so many applications coming out day by day. And the best way to get deeper into Deep Learning is to get hands-on with it. Take up as many projects as you can, and try to do them on your own. This will help you grasp the topics in more depth and assist you in becoming a better Deep Learning practitioner. In this article, we will take a look […]

Read more