December 28, 2021 PyTorch

Attention for PyTorch with Linear Memory Footprint

An unofficial PyTorch implementation of https://arxiv.org/abs/2112.05682 that achieves linear memory cost for attention (plus some extra speedup on GPU compared to the paper's reference JAX implementation).
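The trick from the paper: process keys and values in fixed-size chunks and fold each chunk into a running, numerically stable softmax, so the full n × n score matrix is never materialized. Below is a minimal PyTorch sketch of that idea, written independently of this repository's code; the function name chunked_attention, the 2-D shapes, and the chunk size are illustrative assumptions, not the repo's API:

import torch

def chunked_attention(q, k, v, chunk_size=1024):
    """q: (n, d); k, v: (m, d). Returns softmax(q @ k.T / sqrt(d)) @ v
    while holding only one (n, chunk_size) score block at a time."""
    scale = q.shape[-1] ** -0.5
    acc = torch.zeros_like(q)                          # running sum of exp-weighted values
    denom = q.new_zeros(q.shape[0], 1)                 # running softmax normalizer
    running_max = q.new_full((q.shape[0], 1), float("-inf"))
    for i in range(0, k.shape[0], chunk_size):
        k_c, v_c = k[i:i + chunk_size], v[i:i + chunk_size]
        scores = (q @ k_c.T) * scale                   # (n, chunk) block, never (n, m)
        chunk_max = scores.max(dim=-1, keepdim=True).values
        new_max = torch.maximum(running_max, chunk_max)
        correction = torch.exp(running_max - new_max)  # rescale old statistics to the new max
        weights = torch.exp(scores - new_max)
        acc = acc * correction + weights @ v_c
        denom = denom * correction + weights.sum(dim=-1, keepdim=True)
        running_max = new_max
    return acc / denom

# Sanity check against the naive quadratic-memory reference:
q, k, v = (torch.randn(512, 64) for _ in range(3))
ref = torch.softmax((q @ k.T) * q.shape[-1] ** -0.5, dim=-1) @ v
assert torch.allclose(chunked_attention(q, k, v, chunk_size=128), ref, atol=1e-4)

Note that this sketch covers only the forward pass; during training the paper additionally checkpoints each chunk so that backpropagation does not store every intermediate block.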

Installation:

git clone https://github.com/CHARM-Tx/linear_mem_attention-pytorch
cd linear_mem_attention-pytorch
python setup.py install
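Assuming the package then installs under the module name used in the import below, a one-line smoke test for the install would be:

python -c "import linear_mem_attention_torch"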

Usage:

High Level

from linear_mem_attention_torch.

 

To finish reading, please visit the source site.
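Since the excerpt is cut off, here is a purely hypothetical illustration of a high-level call, using the chunked_attention stand-in defined earlier rather than the repository's real (truncated) import:

import torch

# Hypothetical sizes: sequence length 4096, head dimension 64.
q = torch.randn(4096, 64)
k = torch.randn(4096, 64)
v = torch.randn(4096, 64)

out = chunked_attention(q, k, v, chunk_size=512)  # peak memory scales with chunk_size, not 4096**2
print(out.shape)                                  # torch.Size([4096, 64])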
