TorchXRayVision: A library of chest X-ray datasets and models

A library for chest X-ray datasets and models, including pre-trained models. (🎬 promo video about the project) Motivation: while there are many publications focusing on the prediction of radiological and clinical findings from chest X-ray images, much of this work is inaccessible to other researchers. For researchers addressing clinical questions, it is a waste of time to train models from scratch. To address this, TorchXRayVision provides pre-trained models which are trained on large cohorts of […]

Read more

A PyTorch Library for Accelerating 3D Deep Learning Research

Overview: the NVIDIA Kaolin library provides a PyTorch API for working with a variety of 3D representations and includes a growing collection of GPU-optimized operations such as modular differentiable rendering, fast conversions between representations, data loading, 3D checkpoints, and more. The Kaolin library is part of a larger suite of tools for 3D deep learning research. For example, the Omniverse Kaolin App will allow interactive visualization of 3D checkpoints. To find out more about the Kaolin ecosystem, visit the NVIDIA Kaolin Dev […]

Read more

Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks

This repository contains the code used for the word-level language model and unsupervised parsing experiments in the paper Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks, originally forked from the LSTM and QRNN Language Model Toolkit for PyTorch. If you use this code or our results in your research, we'd appreciate it if you cite our paper as follows: @article{shen2018ordered, title={Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks}, author={Shen, Yikang and Tan, Shawn and Sordoni, Alessandro and Courville, Aaron}, journal={arXiv preprint […]
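The core mechanism of Ordered Neurons is the cumax (cumulative softmax) activation, which produces a monotonically non-decreasing gate so that low-indexed hidden units update frequently while high-indexed units persist, inducing a tree-like ordering. A minimal NumPy sketch of that activation (the function name and toy input are illustrative, not taken from the repository):

```python
import numpy as np

def cumax(x):
    """Cumulative softmax: a monotonically non-decreasing gate in [0, 1].

    Ordered Neurons uses this to define master forget/input gates whose
    values rise from 0 to 1 across the hidden dimension, so some units
    are updated often (local structure) and others rarely (long-range
    structure).
    """
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # numerically stable softmax
    return np.cumsum(e / e.sum(axis=-1, keepdims=True), axis=-1)

logits = np.array([2.0, 0.5, -1.0, 0.0])
gate = cumax(logits)
# gate is non-decreasing and its last entry is 1.0
```

Because each entry is a running sum of softmax probabilities, the gate is guaranteed to rise from near 0 to exactly 1, which is what lets the model carve the hidden state into nested segments.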

Read more

LSTM and QRNN Language Model Toolkit for PyTorch

This repository contains the code used for two Salesforce Research papers. The model comes with instructions to train: word-level language models over the Penn Treebank (PTB), WikiText-2 (WT2), and WikiText-103 (WT103) datasets, and character-level language models over the Penn Treebank (PTBC) and Hutter Prize (enwik8) datasets. The model can be composed of an LSTM or a Quasi-Recurrent Neural Network (QRNN), which is two or more times faster than the cuDNN LSTM in this setup while achieving equivalent or better […]

Read more

A PyTorch implementation of Attentive Recurrent Comparators

PyTorch implementation of Attentive Recurrent Comparators by Shyam et al., with a blog post explaining Attentive Recurrent Comparators and visualizations of attention on same and on different characters. How to run: first download the data (a one-time 52 MB download; it shouldn't take more than a few minutes), then train. Let it train until the accuracy rises to at least 80%. Early stopping is not implemented yet, so you will have to kill the process manually.

Read more

Attention-Based Recurrent Neural Network Models for Joint Intent Detection and Slot Filling

PyTorch implementation of "Attention-Based Recurrent Neural Network Models for Joint Intent Detection and Slot Filling" (https://arxiv.org/pdf/1609.01454.pdf). Intent prediction and slot filling are performed in two branches on top of an encoder-decoder model. Dataset: ATIS (you can get the data from here). Train: python3 train.py --data_path 'your data path, e.g. ./data/atis-2.train.w-intent.iob'. GitHub: https://github.com/DSKSD/RNN-for-Joint-NLU
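The paper's idea of two branches over a shared encoding can be sketched with a toy NumPy model: an utterance-level head predicts the intent from the final encoder state, a token-level head predicts a slot label per token, and the two cross-entropy losses are summed into one joint objective. All names and sizes below are hypothetical, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, n_intents, n_slots = 5, 8, 3, 4   # toy sequence length and sizes

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Shared "encoder" states, one vector per token (stand-in for the RNN encoder).
H = rng.normal(size=(T, d))

# Two branches on top of the shared encoding:
W_intent = rng.normal(size=(d, n_intents))   # utterance-level head
W_slot = rng.normal(size=(d, n_slots))       # token-level head

intent_probs = softmax(H[-1] @ W_intent)     # intent from the final state
slot_probs = softmax(H @ W_slot)             # one slot label per token

intent_gold = 1
slot_gold = rng.integers(0, n_slots, size=T)
intent_loss = -np.log(intent_probs[intent_gold])
slot_loss = -np.log(slot_probs[np.arange(T), slot_gold]).mean()
joint_loss = intent_loss + slot_loss         # one objective, trained jointly
```

Training both heads against this single summed loss is what lets the shared encoder learn features useful for both tasks at once.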

Read more

Training RNNs as Fast as CNNs

News: SRU++, a new SRU variant, is released [tech report] [blog]. The experimental code and SRU++ implementation are available on the dev branch, which will be merged into master later. About: SRU is a recurrent unit that can run over 10 times faster than cuDNN LSTM without loss of accuracy, as tested on many tasks. Figure: average processing time of LSTM, conv2d, and SRU, tested on a GTX 1070. For example, the figure above presents the processing time of a single mini-batch of […]
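The SRU recurrence is simple: all input projections depend only on x_t and can be computed for every timestep in parallel, leaving a purely element-wise loop. A NumPy sketch of one layer, following the recurrence described in the paper (the real library fuses this into optimized CUDA kernels; the function and variable names here are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sru_layer(X, W, Wf, bf, Wr, br):
    """One SRU layer over a (T, d) input.

    The three projections depend only on x_t, so they are batched across
    all timesteps at once; the remaining loop is purely element-wise,
    which is the source of SRU's speed advantage over cuDNN LSTM.
    """
    Z = X @ W                     # candidate states, all timesteps in parallel
    F = sigmoid(X @ Wf + bf)      # forget gates, all timesteps in parallel
    R = sigmoid(X @ Wr + br)      # reset gates, all timesteps in parallel
    T_, d = X.shape
    c = np.zeros(d)
    H = np.empty((T_, d))
    for t in range(T_):
        c = F[t] * c + (1.0 - F[t]) * Z[t]              # element-wise only
        H[t] = R[t] * np.tanh(c) + (1.0 - R[t]) * X[t]  # highway connection
    return H, c
```

The highway connection from x_t to the output is what lets deep SRU stacks train stably despite the very simple cell.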

Read more

The cross-modality generative model that synthesizes dance from music

Dancing to Music: PyTorch implementation of the cross-modality generative model that synthesizes dance from music. Paper: Hsin-Ying Lee, Xiaodong Yang, Ming-Yu Liu, Ting-Chun Wang, Yu-Ding Lu, Ming-Hsuan Yang, Jan Kautz. Dancing to Music. Neural Information Processing Systems (NeurIPS) 2019. [Paper] [YouTube] [Project] [Blog] [Supp] Example videos: Beat-Matching (1st row: generated dance sequences, 2nd row: music beats, 3rd row: kinematic beats); Multimodality (generate various dance sequences with […]

Read more

A Virtual Desktop Assistant Written in Python

A Virtual Desktop Assistant written in Python. It's a basic virtual assistant whose purpose is to make work easier: it redirects you to various main sites and performs various important functions for your PC. Just install it on your system and run it in your code editor or IDE. I will soon be updating it as an application for macOS, Linux, and Windows. Until then, you can follow the Contributing Guidelines and contribute […]

Read more

Machine Translation Weekly 88: Text Segmentation and Multilinguality

With the start of the semester, it is also time to renew MT Weekly. My new year's resolution was to make it to 100 issues, so let's see if I can keep it. Today, I will talk about a paper by my colleagues from LMU Munich that will appear in the Findings of EMNLP 2021, which deals with a perpetual problem of NLP: input text segmentation. The title of the paper is Wine is Not v i n. On the Compatibility […]

Read more