Asynchronous and synchronous unofficial QvaPay client for asyncio and Python

QvaPay client for Python: an unofficial asynchronous and synchronous QvaPay client for asyncio and Python. This library is still under development, and the interface may change. Features: response models fully annotated with type hints (the internal code is fully type-annotated as well), thanks to Python’s type hints (annotations) and pydantic; asynchronous and synchronous behavior thanks to httpx; 100% coverage; a collaborative, open source project. GitHub: https://github.com/leynier/aioqvapay
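
The blurb shows no code; the sketch below is not aioqvapay’s actual API but a generic illustration of the httpx + pydantic pattern it describes (synchronous and asynchronous variants sharing a typed response model). The endpoint URL and model fields are placeholders.

```python
import asyncio

import httpx
from pydantic import BaseModel


class AppInfo(BaseModel):
    # Illustrative response model; the fields are assumptions, not QvaPay's schema.
    name: str
    balance: float = 0.0


API_URL = "https://example.invalid/api"  # placeholder, not the real QvaPay endpoint


def get_info_sync(client: httpx.Client) -> AppInfo:
    # Synchronous variant built on httpx.Client.
    return AppInfo(**client.get(f"{API_URL}/info").json())


async def get_info_async(client: httpx.AsyncClient) -> AppInfo:
    # Asynchronous variant built on httpx.AsyncClient.
    response = await client.get(f"{API_URL}/info")
    return AppInfo(**response.json())


async def main() -> None:
    async with httpx.AsyncClient() as client:
        print(await get_info_async(client))


if __name__ == "__main__":
    asyncio.run(main())
```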

Read more

A Toastmasters-inspired speech timer for online scrum meetings

Qolor Tyme is a Toastmasters-inspired speech timer for use during online daily scrum meeting updates. It is based on PyQt, a Python API for the Qt (pronounced “cute”) cross-platform GUI framework, hence the name Qolor Tyme. I honestly don’t know why they called it PyQt instead of QtPy, which would have made it “cutie-pie” ¯\_(ツ)_/¯ Installation: use pip: pip install qolor-tyme Running: run it as a package. It accepts an optional timeout interval (default is 90 seconds per Agile […]

Read more

An unofficial API for 1cak.com

1cak is an Indonesian website that provides a lot of fun. Endpoints: Lol -> the 10 most recent posts stored in the database (example: https://onecak.azurewebsites.net/?lol); Shuffle -> select random posts from the database (example: https://onecak.herokuapp.com/?shuffle=5). Info: this app is an experiment, so you may find some weird things or even useless code. If so, please ignore it or open a PR. Thanks. GitHub: https://github.com/dickymuliafiqri/onecak
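
Since the endpoints are plain HTTP GET requests, calling them from Python is straightforward; the sketch below uses requests (not mentioned in the project description) and assumes the endpoints return JSON.

```python
import requests

# Fetch the 10 most recent posts stored in the database.
recent = requests.get("https://onecak.azurewebsites.net/?lol", timeout=10)
print(recent.json())  # assuming the endpoint returns JSON

# Fetch 5 random posts from the database.
random_posts = requests.get("https://onecak.herokuapp.com/?shuffle=5", timeout=10)
print(random_posts.json())  # assuming the endpoint returns JSON
```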

Read more

Robust Video Matting (RVM) in PyTorch

Official repository for the paper Robust High-Resolution Video Matting with Temporal Guidance. RVM is specifically designed for robust human video matting. Unlike existing neural models that process frames as independent images, RVM uses a recurrent neural network to process videos with temporal memory. RVM can perform matting in real time on any video without additional inputs. It achieves 4K at 76 FPS and HD at 104 FPS on an Nvidia GTX 1080 Ti GPU. The project was developed at ByteDance Inc.
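
As a rough sketch of the recurrent design, inference carries recurrent states from one frame to the next. The torch.hub entry point and the exact call signature below are assumptions based on the description above, and the frames are synthetic placeholders.

```python
import torch

# Assumed hub entry point for the MobileNetV3 variant of RVM.
model = torch.hub.load("PeterL1n/RobustVideoMatting", "mobilenetv3").eval()

# Placeholder "video": ten random (1, 3, H, W) frames with values in [0, 1].
video_frames = [torch.rand(1, 3, 288, 512) for _ in range(10)]

rec = [None] * 4  # recurrent states carried across frames (temporal memory)
with torch.no_grad():
    for frame in video_frames:
        # fgr: predicted foreground, pha: alpha matte; rec is updated each step.
        fgr, pha, *rec = model(frame, *rec, downsample_ratio=0.25)
```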

Read more

For something in between a pytorch and a karpathy/micrograd

tinygrad For something in between a pytorch and a karpathy/micrograd. This may not be the best deep learning framework, but it is a deep learning framework. Due to its extreme simplicity, it aims to be the easiest framework to add new accelerators to, with support for both inference and training. Support the simple basic ops, and you get SOTA vision (extra/efficientnet.py) and language (extra/transformer.py) models. We are working on support for the Apple Neural Engine. Eventually, we will build custom […]
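
As a sketch of what “something in between a pytorch and a micrograd” means in practice, a tiny autograd session might look like the following. This follows the style of tinygrad’s README example; exact constructor arguments may differ between versions.

```python
from tinygrad.tensor import Tensor

# Build a small computation graph and backpropagate through it.
x = Tensor.eye(3, requires_grad=True)
y = Tensor([[2.0, 0.0, -2.0]], requires_grad=True)
z = y.matmul(x).sum()
z.backward()

print(x.grad.numpy())  # dz/dx
print(y.grad.numpy())  # dz/dy
```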

Read more

Generate more helpful exception messages for numpy/pytorch matrix algebra expressions

See the article Clarifying exceptions and visualizing tensor operations in deep learning code and the TensorSensor implementation slides (PDF). One of the biggest challenges when writing code to implement deep learning networks, particularly for us newbies, is getting all of the tensor (matrix and vector) dimensions to line up properly. It’s really easy to lose track of tensor dimensionality in complicated expressions involving multiple tensors and tensor operations. Even when just feeding data into predefined TensorFlow network layers, we still need to […]
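
For example, wrapping an expression in TensorSensor’s clarify() context manager is meant to augment the raised exception with the shapes of the offending operands. A minimal sketch, assuming the tsensor package name and the clarify() API described in the article:

```python
import numpy as np
import tsensor  # TensorSensor; package name assumed from the article

n, d, n_neurons = 200, 764, 100
W = np.random.rand(d, n_neurons)
b = np.random.rand(n_neurons, 1)
X = np.random.rand(n, d)

# W @ X.T fails because (764, 100) cannot be multiplied by (764, 200);
# inside clarify(), the error is re-raised annotated with the shapes of
# each operand in the failing sub-expression.
with tsensor.clarify():
    Y = W @ X.T + b
```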

Read more

An implementation of Performer, a linear attention-based transformer, in PyTorch

Performer – Pytorch An implementation of Performer, a linear attention-based transformer variant with the Fast Attention Via positive Orthogonal Random features approach (FAVOR+). Install: $ pip install performer-pytorch Then, if you plan on training an autoregressive model, you must run: $ pip install -r requirements.txt Usage (Performer language model): import torch from performer_pytorch import PerformerLM model = PerformerLM( num_tokens = 20000, max_seq_len = 2048, # max sequence length dim = 512, # dimension depth = 12, # layers […]
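
The excerpt above is cut off mid-snippet; a hedged completion is sketched below. The extra keyword arguments (heads, causal) and the forward-pass shapes are assumptions about performer-pytorch’s interface, not copied from the excerpt.

```python
import torch
from performer_pytorch import PerformerLM

# Sketch based on the truncated snippet above; heads/causal are assumed
# constructor arguments and their values may differ from the library's defaults.
model = PerformerLM(
    num_tokens = 20000,
    max_seq_len = 2048,   # max sequence length
    dim = 512,            # dimension
    depth = 12,           # layers
    heads = 8,            # attention heads (assumed)
    causal = True,        # autoregressive language modelling (assumed)
)

x = torch.randint(0, 20000, (1, 2048))
logits = model(x)  # expected shape: (1, 2048, 20000)
```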

Read more

TabNet: Attentive Interpretable Tabular Learning

TabNet: Attentive Interpretable Tabular Learning This is a PyTorch implementation of TabNet (Arik, S. O., & Pfister, T. (2019). TabNet: Attentive Interpretable Tabular Learning. arXiv preprint arXiv:1908.07442.) https://arxiv.org/pdf/1908.07442.pdf. Easy installation: you can install it using pip by running: pip install pytorch-tabnet Source code: if you want to use it locally within a Docker container: git clone git@github.com:dreamquark-ai/tabnet.git, then cd tabnet to get inside the repository. CPU only: make start to build and get inside the container. GPU: make start-gpu to build and […]
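
The excerpt stops at the setup instructions. A small training sketch using the scikit-learn-style interface that pytorch-tabnet exposes is below; the class and method names are my assumption of that interface, and the data is synthetic.

```python
import numpy as np
from pytorch_tabnet.tab_model import TabNetClassifier  # assumed import path

# Synthetic tabular data: 1000 rows, 20 numeric features, binary target.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20)).astype(np.float32)
y = (X[:, 0] + X[:, 1] > 0).astype(np.int64)
X_train, X_valid = X[:800], X[800:]
y_train, y_valid = y[:800], y[800:]

clf = TabNetClassifier()  # default TabNet hyperparameters
clf.fit(
    X_train, y_train,
    eval_set=[(X_valid, y_valid)],  # monitored for early stopping
    max_epochs=20,
)
preds = clf.predict(X_valid)
```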

Read more

A Python library providing support for higher-order optimization

higher is a library providing support for higher-order optimization, e.g. through unrolled first-order optimization loops, of “meta” aspects of these loops. It provides tools for turning existing torch.nn.Module instances “stateless”, meaning that changes to their parameters can be tracked and gradients with respect to intermediate parameters can be taken. It also provides a suite of differentiable optimizers to facilitate the implementation of various meta-learning approaches. Full documentation is available at https://higher.readthedocs.io/en/latest/. Requirements: Python >= 3.5, PyTorch >= 1.3 […]
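
The documented pattern is an inner-loop context that yields a stateless (functional) copy of the module plus a differentiable optimizer; a minimal MAML-style sketch is below, with placeholder data and loss functions.

```python
import torch
import higher

model = torch.nn.Linear(10, 1)
inner_opt = torch.optim.SGD(model.parameters(), lr=0.1)
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

# Placeholder support (inner) and query (outer) data.
x_support, y_support = torch.randn(32, 10), torch.randn(32, 1)
x_query, y_query = torch.randn(32, 10), torch.randn(32, 1)

meta_opt.zero_grad()
# copy_initial_weights=False lets the meta-gradient flow back to the
# original parameters of `model` (the usual MAML-style setup).
with higher.innerloop_ctx(model, inner_opt, copy_initial_weights=False) as (fmodel, diffopt):
    for _ in range(5):  # unrolled inner loop
        inner_loss = loss_fn(fmodel(x_support), y_support)
        diffopt.step(inner_loss)  # differentiable update of the "fast" weights
    meta_loss = loss_fn(fmodel(x_query), y_query)
    meta_loss.backward()          # backprop through the unrolled updates
meta_opt.step()
```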

Read more

Generic EfficientNets for PyTorch

(Generic) EfficientNets for PyTorch A ‘generic’ implementation of EfficientNet, MixNet, MobileNetV3, etc. that covers most of the compute/parameter-efficient architectures derived from the MobileNet V1/V2 block sequence, including those found via automated neural architecture search. All models are implemented by the GenEfficientNet or MobileNetV3 classes, with string-based architecture definitions to configure the block layouts (idea from here). Models: implemented models include: I originally implemented and trained some of these models with code here; this repository contains just the GenEfficientNet models, validation, […]
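
A common way to instantiate one of these models is via torch.hub against the repository; the sketch below assumes the rwightman/gen-efficientnet-pytorch hub entry point and the efficientnet_b0 model name, which may not match the published hub config exactly.

```python
import torch

# Load a pretrained model through torch.hub; 'efficientnet_b0' is an
# assumed entry-point name.
model = torch.hub.load(
    "rwightman/gen-efficientnet-pytorch",
    "efficientnet_b0",
    pretrained=True,
).eval()

# Dummy forward pass on a single 224x224 RGB image.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)  # ImageNet logits, shape (1, 1000)
print(logits.shape)
```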

Read more