Tmux session manager built on libtmux

tmuxp, a tmux session manager built on libtmux. We need help! tmuxp is a trusted session manager for tmux. If you could lend your time to helping answer issues and QA pull requests, please do! See issue #290. New to tmux? The Tao of tmux is available on Leanpub and Amazon Kindle. Read and browse the book for free on the web. Installation: $ pip install --user tmuxp. Load a tmux session: […]
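Since tmuxp is built on libtmux, a session like the one above can also be scripted directly in Python. A minimal sketch, assuming libtmux's Server/Session/Window/Pane objects (names and options may differ slightly between libtmux versions):

```
# Hypothetical sketch: build a small tmux session with libtmux, the library
# tmuxp is built on. Not taken from the tmuxp docs; adjust to your setup.
import libtmux

server = libtmux.Server()
session = server.new_session(session_name="demo", kill_session=True, attach=False)
window = session.new_window(window_name="editor", attach=False)
pane = window.split_window()            # add a second pane to the window
pane.send_keys("htop", enter=True)      # run a command in the new pane
```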

Read more

Dead simple CLI tool to try Python packages

try is an easy-to-use CLI tool for trying out Python packages. Features: install a specific package version from PyPI, install a package from GitHub, install into a virtualenv using a specific version of Python, specify an alternative import name for the package, keep the try environment after the interactive session, launch an interactive Python console with the package already imported, launch an editor instead of the interpreter, and launch an arbitrary Python shell instead of the default one. Usage: try requests; try requests --ipython; try requests --shell ptipython; try requests -p 3.5; try requests […]

Read more

Using Python Optional Arguments When Defining Functions

In this section, you’ll learn how to define a function that takes an optional argument. Functions with optional arguments offer more flexibility in how you can use them. You can call the function with or without the argument, and if there is no argument in the function call, then a default value is used. Default Values Assigned to Input Parameters You can modify the function add_item() so that the parameter quantity has a default value: # optional_params.py shopping_list = {} […]
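The excerpt references a shopping-list example; here is a minimal sketch of what it describes, with add_item() taking an optional quantity parameter that defaults to 1 (the exact body in the article may differ):

```
# optional_params.py
shopping_list = {}

def add_item(item_name, quantity=1):
    """Add an item; quantity is optional and defaults to 1 when omitted."""
    if item_name in shopping_list:
        shopping_list[item_name] += quantity
    else:
        shopping_list[item_name] = quantity
    return shopping_list

add_item("bread")       # called without the optional argument: quantity = 1
add_item("milk", 2)     # called with the optional argument: quantity = 2
```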

Read more

Visual Question Answering in Pytorch

/!\ A new version of the PyTorch code for VQA is available here: https://github.com/Cadene/block.bootstrap.pytorch This repo was made by Remi Cadene (LIP6) and Hedi Ben-Younes (LIP6-Heuritech), two PhD students working on VQA at UPMC-LIP6, and their professors Matthieu Cord (LIP6) and Nicolas Thome (LIP6-CNAM). We developed this code as part of a research paper called MUTAN: Multimodal Tucker Fusion for VQA, which is (as far as we know) the current state of the art on the VQA 1.0 dataset. The goal of this repo is two […]

Read more

Deal or No Deal? End-to-End Learning for Negotiation Dialogues

end-to-end-negotiator This is a PyTorch implementation of the following research papers. The code was developed by Facebook AI Research. It trains neural networks to hold negotiations in natural language, and supports reinforcement learning self-play and rollout-based planning. If you want to use this code in your research, please cite: @inproceedings{DBLP:conf/icml/YaratsL18, author = {Denis Yarats and Mike Lewis}, title = {Hierarchical Text Generation and Planning for Strategic Dialogue}, booktitle = {Proceedings of the 35th International Conference on Machine Learning, […]

Read more

Image classification with synthetic gradient in Pytorch

This is an implementation of Decoupled Neural Interfaces using Synthetic Gradients in PyTorch. The paper uses synthetic gradients to decouple the layers of the network, which is pretty interesting since we no longer suffer from update locking. I tested my model on MNIST and got almost the same performance as the model updated with backpropagation. Requirements: pytorch, python 3.5, torchvision, seaborn (optional), matplotlib (optional). TODO: use multi-threading on GPU to analyze the speed. What are synthetic gradients? We often optimize neural networks by backpropagation, […]
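To make the idea concrete, here is a hypothetical minimal sketch (not the repo's code) of training one layer with a synthetic gradient: a small module predicts dL/dh for the layer's activations so the layer can update immediately, and the predictor is later regressed toward the true gradient:

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class SyntheticGradient(nn.Module):
    """Predicts dL/dh for a hidden activation h (the DNI module)."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, h):
        return self.net(h)

layer = nn.Linear(784, 256)
dni = SyntheticGradient(256)
opt_layer = torch.optim.Adam(layer.parameters(), lr=1e-3)
opt_dni = torch.optim.Adam(dni.parameters(), lr=1e-3)

x = torch.randn(32, 784)                 # dummy MNIST-sized batch
h = torch.relu(layer(x))

# Update the layer with the *predicted* gradient: no need to wait for the
# rest of the network to finish its backward pass (no update locking).
opt_layer.zero_grad()
h.backward(dni(h).detach())
opt_layer.step()

# Once the true dL/dh arrives from the layers above, regress the DNI module
# toward it (a random stand-in is used here for illustration).
true_grad = torch.randn_like(h)
opt_dni.zero_grad()
F.mse_loss(dni(h.detach()), true_grad).backward()
opt_dni.step()
```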

Read more

Improved Training of Wasserstein GANs in pytorch

A PyTorch implementation of the paper "Improved Training of Wasserstein GANs". Requirements: Python, NumPy, SciPy, Matplotlib, a recent NVIDIA GPU, and a recent master version of PyTorch. [x] gan_toy.py: toy datasets (8 Gaussians, 25 Gaussians, Swiss Roll; finished 2017.5.8). [x] gan_language.py: character-level language model (discriminator and generator both use nn.Conv1d; finished 2017.6.23 / 2017.6.27). [x] gan_mnist.py: MNIST (running results while training; discriminator and generator both use nn.Conv1d; finished 2017.6.26). [ ] gan_64x64.py: 64×64 architectures (looking forward […]
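For reference, the paper's central idea is a gradient penalty on interpolates between real and fake samples; here is a hypothetical minimal sketch of such a penalty (my own illustration, not code from this repo):

```
import torch

def gradient_penalty(discriminator, real, fake, lambda_gp=10.0):
    """WGAN-GP style penalty: ((||grad D(x_hat)||_2 - 1)^2).mean() computed
    on random interpolates x_hat between real and fake samples."""
    batch = real.size(0)
    # One mixing coefficient per sample, broadcast over the remaining dims.
    eps = torch.rand(batch, *([1] * (real.dim() - 1)), device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    d_out = discriminator(x_hat)
    grads = torch.autograd.grad(d_out, x_hat,
                                grad_outputs=torch.ones_like(d_out),
                                create_graph=True)[0]
    grads = grads.reshape(batch, -1)
    return lambda_gp * ((grads.norm(2, dim=1) - 1) ** 2).mean()
```

The penalty is added to the critic loss each step, e.g. d_loss = fake_score.mean() - real_score.mean() + gradient_penalty(D, real_batch, fake_batch).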

Read more

Intent parsing and slot filling in PyTorch with seq2seq + attention

PyTorch Seq2Seq Intent Parsing. Reframing intent parsing as a human-to-machine translation task. A work-in-progress successor to torch-seq2seq-intent-parsing. The command language: this is a simple command language developed for the "home assistant" Maia living in my apartment. She's designed as a collection of microservices, with services for lights (Hue), switches (WeMo), and info such as weather and market prices. A command consists of a "service", a "method", and some number of arguments. lights setState office_light on switches getState […]
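As a hypothetical illustration (not code from the repo), splitting a command string into the service, method, and arguments the excerpt describes could look like this:

```
def parse_command(text):
    """Split a Maia-style command into (service, method, args)."""
    service, method, *args = text.split()
    return service, method, args

print(parse_command("lights setState office_light on"))
# -> ('lights', 'setState', ['office_light', 'on'])
```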

Read more

Neural Combinatorial Optimization with Reinforcement Learning In PyTorch

PyTorch implementation of Neural Combinatorial Optimization with Reinforcement Learning. I have implemented the basic RL pretraining model with greedy decoding from the paper. An implementation of the supervised learning baseline model is available here. Instead of a critic network, I got my results below on TSP by using an exponential moving average critic. The critic network is simply commented out in my code right now. From correspondence with a few others, it was determined that the exponential moving average critic […]
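A hypothetical sketch (not the repo's code) of the exponential moving average baseline mentioned above, standing in for a learned critic in the REINFORCE update:

```
import torch

class EMABaseline:
    """Tracks an exponential moving average of the reward (e.g. tour length)."""
    def __init__(self, beta=0.8):
        self.beta = beta
        self.value = None

    def update(self, reward):
        mean_reward = reward.mean().item()
        if self.value is None:
            self.value = mean_reward
        else:
            self.value = self.beta * self.value + (1 - self.beta) * mean_reward
        return self.value

# Usage inside a training step (tour_length and log_prob come from the actor):
baseline = EMABaseline()
tour_length = torch.rand(16) * 10            # stand-in for sampled tour lengths
log_prob = torch.randn(16, requires_grad=True)
advantage = tour_length - baseline.update(tour_length)
loss = (advantage * log_prob).mean()         # REINFORCE loss with an EMA baseline
loss.backward()
```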

Read more

Molecular AutoEncoder in PyTorch

Molecular AutoEncoder in PyTorch Install $ git clone https://github.com/cxhernandez/molencoder.git && cd molencoder $ python setup.py install Download Dataset $ molencoder download --dataset chembl22 Train $ molencoder train --dataset data/chembl22.h5 Add the --cuda flag to enable CUDA. Add --cont to continue training a model from a checkpoint file. Pre-Trained Model A pre-trained reference model is available in the ref/ directory. Currently, it performs with ~98% accuracy on the validation set after 100 epochs of training. However, if you succeed at training a […]

Read more