Regularizing Generative Adversarial Networks under Limited Data

lecam-gan Regularizing Generative Adversarial Networks under Limited Data Implementation for our GAN regularization method. The proposed regularization 1) improves the performance of GANs under limited training data, and 2) complements the existing data augmentation approaches. Please note that this is not an officially supported Google product. Paper Please cite our paper if you find the code or dataset useful for your research. Regularizing Generative Adversarial Networks under Limited Data Hung-Yu Tseng, Lu Jiang, Ce Liu, Ming-Hsuan Yang, Weilong Yang Computer […]
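The excerpt is truncated before the method details; as a rough illustration, here is a minimal PyTorch-style sketch of a LeCam-type regularizer that penalizes discriminator outputs drifting past moving averages of the opposite class. The `EMA` helper, its decay, and the `weight` value are assumptions of this sketch, not the repository's API.

```python
import torch

class EMA:
    """Hypothetical helper: exponential moving average of a scalar."""
    def __init__(self, decay=0.99):
        self.decay, self.value = decay, 0.0

    def update(self, x):
        self.value = self.decay * self.value + (1 - self.decay) * x
        return self.value

# Track the discriminator's average output on real and fake samples.
ema_real, ema_fake = EMA(), EMA()

def lecam_reg(d_real, d_fake, weight=0.3):
    """LeCam-style regularizer (sketch): pull D's real/fake outputs toward
    the moving average of the opposite class. `weight` is illustrative."""
    alpha_real = ema_real.update(d_real.mean().item())
    alpha_fake = ema_fake.update(d_fake.mean().item())
    reg = torch.mean(torch.relu(d_real - alpha_fake) ** 2) \
        + torch.mean(torch.relu(alpha_real - d_fake) ** 2)
    return weight * reg
```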

Read more

A Unified Framework for Self-Supervised Outlier Detection

SSD: A Unified Framework for Self-Supervised Outlier Detection [ICLR 2021] Pdf: https://openreview.net/forum?id=v5gjXpmR8J Code for our ICLR 2021 paper on outlier detection, titled SSD, which does not require class labels for in-distribution training data. We leverage recent advances in self-supervised representation learning followed by cluster-based outlier detection to achieve competitive performance. This repository supports both self-supervised training of networks and outlier detection evaluation of pre-trained networks. It also includes code for the two proposed extensions in the paper, i.e., 1) Few-shot outlier […]
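As a rough sketch of cluster-based outlier detection on learned features (the cluster count, the covariance epsilon, and the function name are illustrative, not the repository's API):

```python
import numpy as np
from sklearn.cluster import KMeans

def ssd_scores(train_feats, test_feats, k=5):
    """Sketch: fit k clusters on in-distribution features, then score each
    test point by its minimum Mahalanobis distance to any cluster."""
    km = KMeans(n_clusters=k, n_init=10).fit(train_feats)
    dim = train_feats.shape[1]
    scores = np.full(len(test_feats), np.inf)
    for c in range(k):
        members = train_feats[km.labels_ == c]
        # Small diagonal term keeps the covariance invertible.
        cov = np.cov(members, rowvar=False) + 1e-6 * np.eye(dim)
        prec = np.linalg.inv(cov)
        d = test_feats - km.cluster_centers_[c]
        scores = np.minimum(scores, np.einsum('ij,jk,ik->i', d, prec, d))
    return scores  # larger score = more likely an outlier
```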

Read more

A friendly guide to NLP: Bag-of-Words with Python example

1. A Quick Example Let’s look at an easy example to understand the concepts previously explained. Suppose we are interested in analyzing reviews of Game of Thrones: Review 1: Game of Thrones is an amazing tv series! Review 2: Game of Thrones is the best tv series! Review 3: Game of Thrones is so great In the table, I show all the calculations needed to obtain the Bag-of-Words representation: each row corresponds to a different review, while the columns are […]
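The same table can be reproduced in a few lines with scikit-learn's CountVectorizer (assuming scikit-learn here; the article itself works the example by hand):

```python
from sklearn.feature_extraction.text import CountVectorizer

# The three reviews from the example above.
reviews = [
    "Game of Thrones is an amazing tv series!",
    "Game of Thrones is the best tv series!",
    "Game of Thrones is so great",
]

vectorizer = CountVectorizer()
bow = vectorizer.fit_transform(reviews)

print(vectorizer.get_feature_names_out())  # learned vocabulary (the columns)
print(bow.toarray())                       # one row of word counts per review
```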

Read more

Don’t Sweep your Learning Rate under the Rug: A Closer Look at Cross-modal Transfer of Pretrained Transformers

July 23, 2021 By: Danielle Rothermel, Margaret Li, Tim Rocktäschel, Jakob Foerster Abstract Self-supervised pre-training of large-scale transformer models on text corpora followed by fine-tuning has achieved state-of-the-art results on a number of natural language processing tasks. Recently, Lu et al. (2021) claimed that frozen pretrained transformers (FPTs) match or outperform training from scratch as well as unfrozen (fine-tuned) pretrained transformers on a set of transfer tasks to other modalities. In our work, we find that this result is, in fact, […]
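For readers unfamiliar with the FPT setup being revisited, the sketch below shows the usual recipe of freezing a pretrained transformer while leaving its layer norms trainable. The GPT-2 checkpoint and the "ln" name filter are assumptions of this sketch (Lu et al. also tune input/output layers), not code from the paper.

```python
from transformers import GPT2Model

model = GPT2Model.from_pretrained("gpt2")
for name, param in model.named_parameters():
    # Freeze attention/feed-forward weights; keep layer norms trainable.
    # The "ln" substring matches GPT-2's ln_1/ln_2/ln_f parameter names.
    param.requires_grad = "ln" in name

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")
```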

Read more

Many-Speakers Single Channel Speech Separation with Optimal Permutation Training

Abstract Single channel speech separation has experienced great progress in the last few years. However, training neural speech separation for a large number of speakers (e.g., more than 10 speakers) is out of reach for the current methods, which rely on Permutation Invariant Training (PIT). In this work, we present a permutation invariant training that employs the Hungarian algorithm in order to train with an O(C^3) time complexity, where C is the number of speakers, in comparison […]
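For context, a PIT loss with the Hungarian assignment can be sketched in a few lines with scipy; the plain MSE pairwise loss here is a stand-in for the separation objectives used in the paper, and the shapes and names are illustrative:

```python
import torch
from scipy.optimize import linear_sum_assignment

def hungarian_pit_loss(est, ref):
    """Permutation-invariant loss (sketch): build the C x C matrix of
    pairwise losses, then solve the assignment in O(C^3) rather than
    scanning all C! permutations. est/ref have shape (C, T)."""
    C = est.shape[0]
    cost = torch.stack([((est[i] - ref[j]) ** 2).mean()
                        for i in range(C) for j in range(C)]).view(C, C)
    rows, cols = linear_sum_assignment(cost.detach().cpu().numpy())
    return cost[torch.as_tensor(rows), torch.as_tensor(cols)].mean()
```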

Read more

DeepViT: Towards Deeper Vision Transformer

DeepViT This repo is the official implementation of “DeepViT: Towards Deeper Vision Transformer”. The repo is based on the timm library (https://github.com/rwightman/pytorch-image-models) by Ross Wightman. Deep Vision Transformer was initially described in an arXiv paper, which observes the attention collapse phenomenon when training deep vision transformers: In this paper, we show that, unlike convolutional neural networks (CNNs) that can be improved by stacking more convolutional layers, the performance of ViTs saturates quickly when they are scaled deeper. More specifically, we empirically observe that […]
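One way to make the collapse observation concrete is to measure how similar attention maps become across blocks; the sketch below is a plausible cross-layer similarity measurement, not the paper's exact metric:

```python
import torch
import torch.nn.functional as F

def cross_layer_attention_similarity(attn_a, attn_b):
    """Cosine similarity between the attention maps of two blocks,
    averaged over batch, heads, and query tokens. High similarity in
    deep layers is the collapse symptom. Shapes: (B, H, N, N)."""
    a = attn_a.flatten(start_dim=2)  # (B, H, N*N)
    b = attn_b.flatten(start_dim=2)
    return F.cosine_similarity(a, b, dim=-1).mean()
```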

Read more

A Prometheus Python client library for asyncio-based applications

aioprometheus aioprometheus is a Prometheus Python client library for asyncio-based applications. It provides metrics collection and serving capabilities, supports multiple data formats, and can push metrics to a gateway. The project documentation can be found on ReadTheDocs. Install $ pip install aioprometheus A Prometheus Push Gateway client and an ASGI service are also included, but their dependencies are not installed by default. You can install them alongside aioprometheus by running: $ pip install aioprometheus[aiohttp] Prometheus 2.0 removed support for the binary protocol, […]
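A minimal metric-collection sketch, with illustrative metric names and labels (the serving/push side is omitted); aioprometheus metrics take their labels as a dict on each update:

```python
from aioprometheus import Counter, Gauge

# Hypothetical metrics for a small service.
requests = Counter("requests_total", "Total number of handled requests.")
in_flight = Gauge("requests_in_flight", "Requests currently being handled.")

requests.inc({"route": "/api"})
in_flight.set({"route": "/api"}, 3)
```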

Read more

A free, online learning platform to make quality education accessible for all

Oppia Oppia is an online learning tool that enables anyone to easily create and share interactive activities (called ‘explorations’). These activities simulate a one-on-one conversation with a tutor, making it possible for students to learn by doing while getting feedback. In addition to developing the Oppia platform, the team is also developing and piloting a set of free and effective lessons on basic mathematics. These lessons are targeted at learners who lack access to educational resources. Oppia is written using […]

Read more

Flask-Rebar combines flask, marshmallow, and swagger for robust REST services

Flask-Rebar Flask-Rebar combines flask, marshmallow, and swagger for robust REST services. Features Request and Response Validation – Flask-Rebar relies on schemas from the popular Marshmallow package to validate incoming requests and marshal outgoing responses. Automatic Swagger Generation – The same schemas used for validation and marshaling are used to automatically generate OpenAPI specifications (a.k.a. Swagger). This also means automatic documentation via Swagger UI. Error Handling – Uncaught exceptions from Flask-Rebar are converted to appropriate HTTP errors. Example: from flask import […]
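The excerpt cuts off at the start of the example; the sketch below is a plausible reconstruction of how the pieces fit together, assuming a recent Flask-Rebar version (the `response_body_schema` keyword) and an illustrative `TodoSchema`, not the README's exact code:

```python
from flask import Flask
from flask_rebar import Rebar
from marshmallow import Schema, fields

rebar = Rebar()
registry = rebar.create_handler_registry()

class TodoSchema(Schema):
    id = fields.Integer()
    description = fields.String()

@registry.handles(
    rule="/todos/<int:todo_id>",
    method="GET",
    response_body_schema=TodoSchema(),  # also drives the Swagger spec
)
def get_todo(todo_id):
    # Normally a database lookup; a literal dict keeps the sketch runnable.
    return {"id": todo_id, "description": "walk the dog"}

app = Flask(__name__)
rebar.init_app(app)
```

Because the same schema both marshals the response and generates the OpenAPI entry, the documentation cannot drift from the implementation.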

Read more

High-Performance Large-Scale Image Recognition Without Normalization

NFNet Pytorch Implementation This repo contains pretrained NFNet models F0-F6 with high ImageNet accuracy from the paper High-Performance Large-Scale Image Recognition Without Normalization. The small models are as accurate as an EfficientNet-B7, but train 8.7 times faster. The large models set a new SOTA top-1 accuracy on ImageNet.

| NFNet | F0 | F1 | F2 | F3 | F4 | F5 | F6+SAM |
|---|---|---|---|---|---|---|---|
| Top-1 accuracy (Brock et al.) | 83.6 | 84.7 | 85.1 | 85.7 | 85.9 | 86.0 | 86.5 |
| Top-1 accuracy (this implementation) | 82.82 | 84.63 | 84.90 | 85.46 | 85.66 | 85.62 | TBD |

All […]

Read more