Unsupervised Pre-training for Person Re-identification

LUPerson: This repository is for our CVPR 2021 paper "Unsupervised Pre-training for Person Re-identification". LUPerson Dataset: LUPerson is currently the largest unlabeled dataset for person re-identification and is used for unsupervised pre-training. It consists of 4M images of over 200K identities and covers a much more diverse range of capturing environments. Details can be found at ./LUP. Pre-trained Models. Finetuned results for MGN with ResNet50:

| Dataset    | mAP         | cmc1        | path   |
|------------|-------------|-------------|--------|
| MSMT17     | 66.06/79.93 | 85.08/87.63 | MSMT   |
| DukeMTMC   | 82.27/91.70 | 90.35/92.82 | Duke   |
| Market1501 | 91.12/96.16 | 96.26/97.12 | Market |
| CUHK03-L   | […]
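A minimal sketch (not the official LUPerson fine-tuning code) of how such an unsupervised pre-trained ResNet50 checkpoint could be plugged in as a re-ID backbone before fine-tuning; the checkpoint filename, input resolution, and the Market1501 identity count for the classifier head are assumptions for illustration.

import torch
import torch.nn as nn
import torchvision

# Hypothetical checkpoint produced by unsupervised pre-training on LUPerson.
backbone = torchvision.models.resnet50(weights=None)
state = torch.load("lup_pretrained_resnet50.pth", map_location="cpu")
backbone.load_state_dict(state, strict=False)  # ignore keys that do not match

# Drop the ImageNet classification head so the backbone emits 2048-d features,
# then attach an identity classifier for the target re-ID dataset.
backbone.fc = nn.Identity()
classifier = nn.Linear(2048, 751)  # Market1501 has 751 training identities

images = torch.randn(4, 3, 256, 128)  # a common person re-ID input resolution
features = backbone(images)           # (4, 2048) embeddings
logits = classifier(features)         # identity logits used for fine-tuning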

Read more

Bot that automatically answers Giga Unitel questions

Gigabot+: a bot that automatically answers Giga Unitel quiz questions. Note: not compatible with Windows 7. Installing this tool is very easy: pip install requests, then run python gb.py. Menu options: Start, Train, Play. You can only play if the client is subscribed. Before choosing the "Play" option, choose the "Train" option, exit the program, and then open it again. To use it on Android, install the script in Termux. By: Joa Roque. GitHub: https://github.com/joaroque/gigabot-plus

Read more

Boosting Co-teaching with Compression Regularization for Label Noise

Nested-Co-teaching ([email protected]): PyTorch implementation of the paper "Boosting Co-teaching with Compression Regularization for Label Noise" [PDF]. If our project is helpful for your research, please consider citing:

@inproceedings{chen2021boosting,
  title={Boosting Co-teaching with Compression Regularization for Label Noise},
  author={Chen, Yingyi and Shen, Xi and Hu, Shell Xu and Suykens, Johan AK},
  booktitle={CVPR Learning from Limited and Imperfect Data (L2ID) workshop},
  year={2021}
}

Our model can be trained on a single GeForce GTX 1080Ti (12 GB) GPU; this code has been tested with PyTorch […]

Read more

Image data augmentation scheduler for albumentations transforms

albu_scheduler: Scheduler for albumentations transforms based on the PyTorch schedulers interface.

TransformMultiStepScheduler

import albumentations as A
from albu_scheduler import TransformMultiStepScheduler

transform_1 = A.Compose([
    A.RandomCrop(width=256, height=256),
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.2),
])
transform_2 = A.Compose([
    A.RandomCrop(width=128, height=128),
    A.VerticalFlip(p=0.5),
])

scheduled_transform = TransformMultiStepScheduler(transforms=[transform_1, transform_2], milestones=[0, 10])
dataset = Dataset(transform=scheduled_transform)
for epoch in range(100):
    train(...)
    validate(...)
    scheduled_transform.step()

TransformSchedulerOnPlateau

from albu_scheduler import TransformSchedulerOnPlateau

scheduled_transform = TransformSchedulerOnPlateau(transforms=[transform_1, transform_2], mode="max", patience=5)
dataset = Dataset(transform=scheduled_transform)
for epoch in range(100):
    train(...)
    score = validate(...)
    scheduled_transform.step(score)

git clone https://github.com/KiriLev/albu_scheduler
cd albu_scheduler
make install […]

Read more

Performing Sentiment Analysis Using Twitter Data!

Photo by Daddy Mohlala on Unsplash. "Data is like water; purifying it to make it consumable is the role of a Data Analyst" – Kashish Rastogi. In this blog we are going to clean the Twitter text data and visualize it. Table of Contents: Problem Statement; Data Description; Cleaning text with NLP; Finding what the text contains with spaCy; Cleaning text with the preprocessor library; Sentiment analysis of the data; Data visualization. I am taking the Twitter data which is available here on […]
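A minimal sketch of the kind of pipeline the post describes: clean raw tweets with the tweet-preprocessor package and score sentiment. The post does not state which sentiment library it uses, so TextBlob is an assumption here, and the sample tweets are made up.

import preprocessor as p          # pip install tweet-preprocessor
from textblob import TextBlob     # pip install textblob

tweets = [
    "Loving the new update!! 😍 https://t.co/xyz #happy",
    "@support this app keeps crashing... #fail",
]

for raw in tweets:
    cleaned = p.clean(raw)  # strips URLs, mentions, hashtags, and emojis
    polarity = TextBlob(cleaned).sentiment.polarity  # -1 (negative) .. +1 (positive)
    label = "positive" if polarity > 0 else "negative" if polarity < 0 else "neutral"
    print(f"{cleaned!r}: {label} ({polarity:.2f})")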

Read more

Training BERT Text Classifier on Tensor Processing Unit (TPU)

Training Hugging Face's most famous model on a TPU for social media Tunisian Arabizi sentiment analysis. Introduction: Arabic speakers usually express themselves in a local dialect on social media, so Tunisians use Tunisian Arabizi, which consists of Arabic written in the Latin alphabet. Sentiment analysis relies on cultural knowledge and word sense together with contextual information. In this project we will be using both the Arabizi dialect and sentiment analysis to solve the problem. The competition is hosted on Zindi which […]
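A minimal single-TPU-core sketch, assuming the transformers and torch_xla libraries and a multilingual BERT checkpoint; the model name, example Arabizi sentences, and hyperparameters are assumptions, not the actual competition pipeline from the post.

import torch
import torch_xla.core.xla_model as xm  # requires a TPU runtime with torch_xla installed
from transformers import AutoModelForSequenceClassification, AutoTokenizer

device = xm.xla_device()  # the TPU core visible to this process

model_name = "bert-base-multilingual-cased"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Tiny made-up Arabizi batch: 1 = positive, 0 = negative.
texts = ["film behi barsha 3jebni", "service mochkla kbira ma7abitouch"]
labels = torch.tensor([1, 0], device=device)
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt").to(device)

model.train()
optimizer.zero_grad()
loss = model(**batch, labels=labels).loss
loss.backward()
xm.optimizer_step(optimizer)  # TPU-aware optimizer step (marks the XLA step)
print(float(loss))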

Read more

Dialogue in the Wild: Learning from a Deployed Role-Playing Game with Humans and Bots

Abstract Much of NLP research has focused on crowdsourced static datasets and the supervised learning paradigm of training once and then evaluating test performance. As argued in de Vries et al. (2020), crowdsourced data has the issues of lack of naturalness and relevance to real-world use cases, while the static dataset paradigm does not allow for a model to learn from its experiences of using language (Silver et al., 2013). In contrast, one might hope for machine learning systems that […]

Read more

UMS for Multi-turn Response Selection in PyTorch

UMS for Multi-turn Response Selection: PyTorch implementation of the AAAI'21 paper "Do Response Selection Models Really Know What's Next? Utterance Manipulation Strategies for Multi-turn Response Selection".

@inproceedings{whang2021ums,
  title={Do Response Selection Models Really Know What's Next? Utterance Manipulation Strategies for Multi-turn Response Selection},
  author={Whang, Taesun and Lee, Dongyub and Oh, Dongsuk and Lee, Chanhee and Han, Kijong and Lee, Dong-hun […]

Read more

Point-Voxel Correlation Fields for Scene Flow Estimation of Point Clouds

This repository contains the PyTorch implementation for the paper "PV-RAFT: Point-Voxel Correlation Fields for Scene Flow Estimation of Point Clouds" (CVPR 2021) [arXiv].

Installation. Prerequisites: Python 3.8, PyTorch 1.8, torch-scatter, CUDA 10.2, RTX 2080 Ti, tqdm, tensorboard, scipy, imageio, png.

conda create -n pvraft python=3.8
conda activate pvraft
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
conda install tqdm tensorboard scipy imageio
pip install pypng
pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.8.0+cu102.html

Usage. Data Preparation: We follow HPLFlowNet to prepare the FlyingThings3D and KITTI datasets. […]

Read more

Evaluating the Factual Consistency of Abstractive Text Summarization

factCC: Evaluating the Factual Consistency of Abstractive Text Summarization. Authors: Wojciech Kryściński, Bryan McCann, Caiming Xiong, and Richard Socher. Introduction: Currently used metrics for assessing summarization algorithms do not account for whether summaries are factually consistent with source documents. We propose a weakly-supervised, model-based approach for verifying factual consistency and identifying conflicts between source documents and a generated summary. Training data is generated by applying a series of rule-based transformations to the sentences of source documents. The factual consistency model is then trained jointly […]
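An illustrative sketch (not the authors' actual transformation code) of one rule-based transformation of the kind described: perturbing a number in a source sentence to produce a claim that is no longer factually consistent, yielding weakly supervised (sentence, claim, label) training examples. The regex-based "number swap" rule and the sample sentence are assumptions for illustration.

import random
import re
from typing import Optional

def number_swap(sentence: str) -> Optional[str]:
    """Replace one number in the sentence with a different random number."""
    numbers = re.findall(r"\d+", sentence)
    if not numbers:
        return None
    target = random.choice(numbers)
    replacement = str(int(target) + random.randint(1, 9))
    return sentence.replace(target, replacement, 1)

source = "The company laid off 200 employees in 2019."
positive = (source, source, "CONSISTENT")              # identity transformation
negative = (source, number_swap(source), "INCONSISTENT")
print(positive)
print(negative)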

Read more