A Mixed Precision Library for JAX in Python

Mixed precision training in JAX Mixed precision training [0] is a technique that mixes the use of full and half precision floating point numbers during training to reduce the memory bandwidth requirements and improve the computational efficiency of a given model. This library implements support for mixed precision training in JAX by providing two key abstractions (mixed precision “policies” and loss scaling). Neural network libraries (such as Haiku) can integrate with jmp and provide “Automatic Mixed Precision (AMP)” support (automating or simplifying the application of policies to modules). All […]
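The two abstractions mentioned above (precision policies and loss scaling) are easy to illustrate outside of jmp. The following is a minimal NumPy sketch of the idea only, not the jmp API: parameters stay in float32, the compute runs in float16, and the loss is scaled up before the half precision backward pass so that small gradients do not underflow.

```python
import numpy as np

# Toy mixed precision "policy" (illustrative names, not the jmp API):
# parameters live in float32, the matmul runs in float16.
PARAM_DTYPE = np.float32
COMPUTE_DTYPE = np.float16

def forward(w, x):
    # Cast inputs to the compute dtype, run the op, cast the output back,
    # as a mixed precision policy would do per module.
    y = w.astype(COMPUTE_DTYPE) @ x.astype(COMPUTE_DTYPE)
    return y.astype(PARAM_DTYPE)

# Why loss scaling exists: a gradient of 1e-8 underflows to zero in float16 ...
tiny_grad = 1e-8
assert np.float16(tiny_grad) == 0.0

# ... but survives if the loss (and hence every gradient) is scaled up before
# the half precision backward pass, then unscaled again in float32.
scale = 2.0 ** 15
scaled = np.float16(tiny_grad * scale)   # now representable in float16
recovered = np.float32(scaled) / scale   # ~1e-8 again
```

In practice the scale is adjusted dynamically (raised while gradients stay finite, dropped on overflow), which is the job of the loss-scaling abstraction the library provides.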

Read more

Attention in Attention Network for Image Super-Resolution

A2N This repository is a PyTorch implementation of the paper “Attention in Attention Network for Image Super-Resolution” [arXiv]. Visual results in the paper are available at Google Drive or Baidu Netdisk (password: 7t74). Unofficial TensorFlow implementation: https://github.com/Anuj040/superres Test Dependencies: PyTorch==0.4.1 (will be updated to support PyTorch>1.0 in the future). You can download the test sets from Google Drive. Put the test data in ../Data/benchmark/. python main.py --scale 4 --data_test Set5 --pre_train ./experiment/model/aan_x4.pt --chop --test_only If you use CPU, please add […]

Read more

Unsupervised Pre-training for Person Re-identification

LUPerson The repository is for our CVPR 2021 paper Unsupervised Pre-training for Person Re-identification. LUPerson Dataset LUPerson is currently the largest unlabeled dataset for person re-identification and is used for unsupervised pre-training. LUPerson consists of 4M images of over 200K identities and covers a much more diverse range of capturing environments. Details can be found at ./LUP. Pre-trained Models Finetuned Results For MGN with ResNet50:

Dataset | mAP | cmc1 | path
MSMT17 | 66.06/79.93 | 85.08/87.63 | MSMT
DukeMTMC | 82.27/91.70 | 90.35/92.82 | Duke
Market1501 | 91.12/96.16 | 96.26/97.12 | Market
CUHK03-L […]

Read more

Bot that automatically answers giga unitel questions

Gigabot+ Bot that automatically answers Giga Unitel questions. Note: not compatible with Windows 7. Installing this tool is very easy: pip install requests python gb.py Start Train Play You can only play if the client is subscribed. Before choosing the “play” option, choose the “train” option, exit the program and then open it again. To use it on Android, install the script in Termux. By: Joa Roque GitHub https://github.com/joaroque/gigabot-plus

Read more

Boosting Co-teaching with Compression Regularization for Label Noise

Nested-Co-teaching ([email protected]) PyTorch implementation of the paper “Boosting Co-teaching with Compression Regularization for Label Noise” [PDF]. If our project is helpful for your research, please consider citing: @inproceedings{chen2021boosting, title={Boosting Co-teaching with Compression Regularization for Label Noise}, author={Chen, Yingyi and Shen, Xi and Hu, Shell Xu and Suykens, Johan AK}, booktitle={CVPR Learning from Limited and Imperfect Data (L2ID) workshop}, year={2021} } Our model can be trained on a single GeForce GTX 1080 Ti GPU (12 GB); this code has been tested with PyTorch […]

Read more

Image data augmentation scheduler for albumentations transforms

albu_scheduler Scheduler for albumentations transforms based on the PyTorch scheduler interface. TransformMultiStepScheduler import albumentations as A from albu_scheduler import TransformMultiStepScheduler transform_1 = A.Compose([ A.RandomCrop(width=256, height=256), A.HorizontalFlip(p=0.5), A.RandomBrightnessContrast(p=0.2), ]) transform_2 = A.Compose([ A.RandomCrop(width=128, height=128), A.VerticalFlip(p=0.5), ]) scheduled_transform = TransformMultiStepScheduler(transforms=[transform_1, transform_2], milestones=[0, 10]) dataset = Dataset(transform=scheduled_transform) for epoch in range(100): train(…) validate(…) scheduled_transform.step() TransformSchedulerOnPlateau from albu_scheduler import TransformSchedulerOnPlateau scheduled_transform = TransformSchedulerOnPlateau(transforms=[transform_1, transform_2], mode="max", patience=5) dataset = Dataset(transform=scheduled_transform) for epoch in range(100): train(…) score = validate(…) scheduled_transform.step(score) git clone https://github.com/KiriLev/albu_scheduler cd albu_scheduler make install […]
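The milestone-based scheduler shown above is simple to sketch in plain Python. The following toy version is our own illustration, not the albu_scheduler source: it wraps a list of transforms and, on each step(), advances an epoch counter that selects which transform is active.

```python
import bisect

class MultiStepTransformScheduler:
    """Toy milestone scheduler (illustrative, not the albu_scheduler source).

    transforms[i] becomes active once the epoch counter reaches milestones[i];
    milestones must be sorted ascending and start at 0.
    """

    def __init__(self, transforms, milestones):
        assert len(transforms) == len(milestones)
        self.transforms = transforms
        self.milestones = milestones
        self.epoch = 0

    def __call__(self, *args, **kwargs):
        # Pick the last milestone not exceeding the current epoch.
        idx = bisect.bisect_right(self.milestones, self.epoch) - 1
        return self.transforms[idx](*args, **kwargs)

    def step(self):
        self.epoch += 1
```

With milestones=[0, 10], calls during epochs 0-9 dispatch to the first transform and calls from epoch 10 onward to the second, mirroring the usage loop in the README excerpt.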

Read more

Performing Sentiment Analysis Using Twitter Data!

Photo by Daddy Mohlala on Unsplash Data is water, purifying it to make it edible is the role of a Data Analyst – Kashish Rastogi In this blog we are going to clean the Twitter text data and visualize it. Table Of Contents: Problem Statement Data Description Cleaning text with NLP Finding what the text contains: with spaCy Cleaning text with the preprocessor library Analysis of the sentiment of the data Data visualization   I am taking the Twitter data which is available here on […]
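Before reaching the spaCy and preprocessor-library steps listed above, basic tweet cleaning can be done with the standard library alone. This is a generic sketch of such a step, not the author's exact pipeline:

```python
import re

def clean_tweet(text: str) -> str:
    """Strip URLs, @mentions and hashtag symbols, then normalize whitespace."""
    text = re.sub(r"https?://\S+", "", text)   # drop URLs
    text = re.sub(r"@\w+", "", text)           # drop @mentions
    text = re.sub(r"#(\w+)", r"\1", text)      # keep the hashtag word, drop '#'
    text = re.sub(r"\s+", " ", text).strip()   # collapse whitespace
    return text.lower()
```

Dedicated libraries such as the preprocessor package mentioned in the post also handle emojis, smileys and reserved words; the regex version above only covers the most common cases.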

Read more

Training BERT Text Classifier on Tensor Processing Unit (TPU)

Training Hugging Face’s most famous model on a TPU for social media Tunisian Arabizi sentiment analysis.   Introduction Arabic speakers usually express themselves in a local dialect on social media, so Tunisians use Tunisian Arabizi, which consists of Arabic written in the Latin alphabet. Sentiment analysis relies on cultural knowledge and word sense together with contextual information. In this project we will work with both the Arabizi dialect and sentiment analysis to solve the problem. The competition is hosted on Zindi, which […]

Read more

Make Every feature Binary: A 135B parameter sparse neural network for massively improved search relevance

Recently, Transformer-based deep learning models like GPT-3 have been getting a lot of attention in the machine learning world. These models excel at understanding semantic relationships, and they have contributed to large improvements in Microsoft Bing’s search experience and to surpassing human performance on the SuperGLUE academic benchmark. However, these models can fail to capture more nuanced relationships between query and document terms that go beyond pure semantics. In this blog post, we are introducing “Make Every feature Binary” (MEB), a large-scale sparse […]
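The core idea suggested by the name, making every feature binary, can be illustrated with a toy construction (ours, not Microsoft's implementation): each (query term, document term) pair becomes one binary indicator feature, and a very wide sparse linear model sums the weights of the features that fired.

```python
from itertools import product

def binary_features(query: str, doc: str) -> set:
    """One binary indicator feature per (query term, document term) pair."""
    q_terms = query.lower().split()
    d_terms = doc.lower().split()
    return {f"{q}|{d}" for q, d in product(q_terms, d_terms)}

def score(features: set, weights: dict) -> float:
    # Sparse linear model: only features that fired contribute their weight.
    return sum(weights.get(f, 0.0) for f in features)
```

A memorized feature like "bat|flying" carrying a negative weight for a baseball query is exactly the kind of term-level, non-semantic signal the post says pure Transformer embeddings can miss; at MEB's scale there are billions of such weights rather than the handful shown here.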

Read more

New Future of Work: Redefining workspaces as hybrid and remote work become more prevalent with Jaime Teevan and Ginger Hudson

Episode 131 | August 4, 2021 For Microsoft researchers, COVID-19 was a call to action. The reimagining of work practices had long been an area of study, but existing and new questions that needed immediate answers surfaced as companies and their employees quickly adjusted to significantly different working conditions. Teams from across the Microsoft organizational chart pooled their unique […]

Read more