Articles About Deep Learning

One Loss for All: Deep Hashing with a Single Cosine Similarity based Learning Objective

ArXiv (pdf) Official PyTorch implementation of the paper: “One Loss for All: Deep Hashing with a Single Cosine Similarity based Learning Objective”, NeurIPS 2021. Released on September 29, 2021. This paper proposes a novel deep hashing model with only a single learning objective, a simplification over most state-of-the-art approaches, which generally rely on many losses and regularizers. Specifically, it maximizes the cosine similarity between the continuous codes and their corresponding binary orthogonal codes to ensure both […]
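The single objective is easy to sketch: assign each class a fixed target code in {-1, +1}^K and push the network's continuous output toward it in angle. Below is a minimal PyTorch-style sketch of such a cosine-similarity loss; the target-code matrix, labels, and dimensions are illustrative assumptions, not the paper's exact construction.

```python
import torch
import torch.nn.functional as F

def cosine_hash_loss(continuous_codes, labels, target_codes):
    """Maximize cosine similarity between continuous codes and the binary
    target code assigned to each sample's class.

    continuous_codes: (B, K) float outputs of the hashing network
    labels:           (B,)   integer class labels
    target_codes:     (C, K) fixed {-1, +1} codes, one per class
    """
    targets = target_codes[labels]                       # (B, K) pick each sample's code
    cos = F.cosine_similarity(continuous_codes, targets, dim=1)
    return (1.0 - cos).mean()                            # 0 when codes are perfectly aligned

# Hypothetical usage: 64-bit codes, 100 classes
codes = torch.sign(torch.randn(100, 64))                 # stand-in for binary orthogonal codes
loss = cosine_hash_loss(torch.randn(8, 64), torch.randint(0, 100, (8,)), codes)
```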

Read more

State-of-the-art deep learning and self-supervised learning algorithms for tabular data using PyTorch

deep-table implements various state-of-the-art deep learning and self-supervised learning algorithms for tabular data using PyTorch. Design Architecture As shown below, each pretraining/fine-tuning model is decomposed into two modules: Encoder and Head. Encoder Encoder has Embedding and Backbone. Embedding makes continuous/categorical features tokenized or simply normalized. Backbone processes the tokenized features. Pretraining/Fine-tuning Head Pretraining/Fine-tuning Head uses Encoder module for training. Implemented Methods Available Modules Encoder – Embedding FeatureEmbedding TabTransformerEmbedding Encoder – Backbone MLPBackbone FTTransformerBackbone SAINTBackbone Model – Head Model – Pretraining […]
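The Encoder/Head decomposition described above can be pictured with a short PyTorch sketch. The class and method names here are illustrative stand-ins, not deep-table's actual API.

```python
import torch.nn as nn

class Encoder(nn.Module):
    """Embedding turns raw columns into tokens; Backbone processes the tokens."""
    def __init__(self, embedding: nn.Module, backbone: nn.Module):
        super().__init__()
        self.embedding = embedding
        self.backbone = backbone

    def forward(self, x_continuous, x_categorical):
        tokens = self.embedding(x_continuous, x_categorical)
        return self.backbone(tokens)

class FineTuningHead(nn.Module):
    """A Head wraps an Encoder and maps its representation to the task output."""
    def __init__(self, encoder: Encoder, hidden_dim: int, n_classes: int):
        super().__init__()
        self.encoder = encoder
        self.classifier = nn.Linear(hidden_dim, n_classes)

    def forward(self, x_continuous, x_categorical):
        representation = self.encoder(x_continuous, x_categorical)
        return self.classifier(representation)
```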

Read more

Deep learning with dynamic computation graphs in TensorFlow

TensorFlow Fold is a library for creating TensorFlow models that consume structured data, where the structure of the computation graph depends on the structure of the input data. For example, this model implements TreeLSTMs for sentiment analysis on parse trees of arbitrary shape/size/depth. Fold implements dynamic batching. Batches of arbitrarily shaped computation graphs are transformed to produce a static computation graph. This graph has the same structure regardless of what input it receives, and can be executed efficiently by TensorFlow. […]
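The core trick behind dynamic batching can be illustrated with a small conceptual sketch (this is not the TensorFlow Fold API): nodes from many differently shaped trees are grouped by depth (and, in Fold, by operation type), so each group can be executed as one batched operation inside a fixed static graph.

```python
from collections import defaultdict

def group_nodes_by_depth(trees):
    """Collect nodes from arbitrarily shaped trees into per-depth buckets.

    Each tree is a nested structure: a leaf is a string, an internal node is a
    2-tuple of subtrees. All nodes in one bucket can be evaluated together as a
    single batched operation, deepest bucket first.
    """
    buckets = defaultdict(list)

    def visit(node, depth):
        buckets[depth].append(node)
        if isinstance(node, tuple) and len(node) == 2:   # internal node
            visit(node[0], depth + 1)
            visit(node[1], depth + 1)

    for tree in trees:
        visit(tree, 0)
    return buckets

# Two trees of different shapes still share batched work per depth level.
trees = [("a", ("b", "c")), (("d", "e"), ("f", ("g", "h")))]
print({depth: len(nodes) for depth, nodes in group_nodes_by_depth(trees).items()})
```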

Read more

The project for the most brutal and effective language learning technique

– “The project for the most brutal and effective language learning technique” (c) Alex Kay. The langflow project was created especially for language learning in the most direct way. The method demands passionate and regular work. Its main idea is constant recall: students write sentences in the language they want to improve! This natural way of learning resembles supervised learning, which has become one of the most effective approaches in ML. Just […]

Read more

GLaRA: Graph-based Labeling Rule Augmentation for Weakly Supervised Named Entity Recognition

This repository is the code release for the paper GLaRA: Graph-based Labeling Rule Augmentation for Weakly Supervised Named Entity Recognition, accepted at EACL-2021. This work aims to improve weakly supervised named entity recognition systems by automatically finding new rules that help identify entities in data. The idea, as shown in the following figure, is that if we know rule1: associated with->Disease is an accurate rule and it is semantically related to rule2: cause of->Disease, we should be […]
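The semantic-relatedness step can be sketched roughly as: embed each candidate rule, connect rules whose embeddings are similar, and pass confidence from known-accurate seed rules to their neighbors. The sketch below is an illustrative simplification under assumed names and thresholds, not the paper's actual graph-propagation pipeline.

```python
import numpy as np

def propagate_rule_labels(rule_embeddings, seed_indices, sim_threshold=0.8):
    """Mark candidate rules as promising if they are semantically close to a seed rule.

    rule_embeddings: (R, D) array of rule embeddings (e.g. averaged word vectors)
    seed_indices:    indices of rules already known to be accurate
    """
    norms = np.linalg.norm(rule_embeddings, axis=1, keepdims=True)
    unit = rule_embeddings / np.clip(norms, 1e-8, None)
    sim = unit @ unit.T                                   # cosine-similarity graph over rules
    promising = set()
    for seed in seed_indices:
        neighbors = np.where(sim[seed] >= sim_threshold)[0]
        promising.update(int(i) for i in neighbors)
    return sorted(promising - set(seed_indices))
```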

Read more

Dense Deep Unfolding Network with 3D-CNN Prior for Snapshot Compressive Imaging

This repository is the code for the following paper: Zhuoyuan Wu, Jian Zhang, Chong Mou. Dense Deep Unfolding Network with 3D-CNN Prior for Snapshot Compressive Imaging. ICCV 2021. [PDF] Introduction Snapshot compressive imaging (SCI) aims to record three-dimensional signals via a two-dimensional camera. To build a fast and accurate SCI recovery algorithm, we combine the interpretability of model-based methods with the speed of learning-based ones and present a novel dense deep unfolding network (DUN) with 3D-CNN prior […]
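For context, the standard SCI measurement model compresses T frames into a single 2-D snapshot by summing mask-modulated frames, y = sum_t (mask_t * frame_t); recovery algorithms such as deep unfolding networks invert this mapping. A minimal NumPy sketch of that forward model, with illustrative names and sizes:

```python
import numpy as np

def sci_forward(frames, masks):
    """Snapshot compressive imaging forward model.

    frames: (T, H, W) three-dimensional signal (e.g. T video frames)
    masks:  (T, H, W) coding masks, one per frame
    Returns a single (H, W) two-dimensional measurement: y = sum_t masks[t] * frames[t]
    """
    return (masks * frames).sum(axis=0)

# Hypothetical usage: 8 frames of 64x64 video and random binary masks
y = sci_forward(np.random.rand(8, 64, 64), (np.random.rand(8, 64, 64) > 0.5).astype(float))
```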

Read more

A Simple Baseline for Bayesian Uncertainty in Deep Learning

TensorFlow implementation of “A Simple Baseline for Bayesian Uncertainty in Deep Learning”. Concept: algorithm to utilize SWAG [1]; equation for the weight sampling from SWAG [1]. Results: the red color and the blue color represent the initial state and the current state respectively. Performance (MNIST):

Method        | Accuracy | Precision | Recall  | F1-Score
Final Epoch   | 0.99230  | 0.99231   | 0.99222 | 0.99226
Best Loss     | 0.99350  | 0.99350   | 0.99338 | 0.99344
SWAG (S = 30) | 0.99310  | 0.99305   |         |
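The SWAG weight-sampling rule from the original paper [1] draws weights from a Gaussian built from the SWA running mean, a diagonal covariance, and a low-rank deviation matrix: theta = theta_SWA + (1/sqrt(2)) * Sigma_diag^(1/2) * z1 + (1/sqrt(2(K-1))) * D_hat * z2. A small NumPy sketch of that draw follows; variable names are illustrative, not this repository's code.

```python
import numpy as np

def sample_swag_weights(theta_swa, diag_variance, deviation_matrix):
    """Draw one weight sample from the SWAG posterior approximation.

    theta_swa:        (d,)   SWA running mean of the weights
    diag_variance:    (d,)   diagonal of the covariance estimate
    deviation_matrix: (d, K) columns are deviations of recent iterates from the mean
    """
    d, K = deviation_matrix.shape
    z1 = np.random.randn(d)
    z2 = np.random.randn(K)
    diag_part = np.sqrt(diag_variance) * z1 / np.sqrt(2.0)
    low_rank_part = deviation_matrix @ z2 / np.sqrt(2.0 * (K - 1))
    return theta_swa + diag_part + low_rank_part
```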

Read more

A Structured Self-attentive Sentence Embedding

Implementation of the paper A Structured Self-Attentive Sentence Embedding, published at ICLR 2017: https://arxiv.org/abs/1703.03130. USAGE: For binary sentiment classification on the IMDB dataset run: python classification.py “binary”. For multiclass classification on the Reuters dataset run: python classification.py “multiclass”. You can change the model parameters in the model_params.json file. Other training parameters, such as the number of attention hops, can be configured in the config.json file. If you want to use pretrained GloVe embeddings, set the use_embeddings parameter […]
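The attention step at the heart of the paper computes A = softmax(W_s2 tanh(W_s1 H^T)) over the LSTM hidden states H and uses M = A H as the sentence embedding. A minimal PyTorch sketch of that step is below; the dimensions follow the paper's defaults, but the module itself is an illustrative rewrite, not this repository's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructuredSelfAttention(nn.Module):
    """A = softmax(W_s2 tanh(W_s1 H^T)); sentence embedding M = A H."""
    def __init__(self, hidden_dim, d_a, n_hops):
        super().__init__()
        self.w_s1 = nn.Linear(hidden_dim, d_a, bias=False)
        self.w_s2 = nn.Linear(d_a, n_hops, bias=False)

    def forward(self, H):                                # H: (batch, seq_len, hidden_dim)
        scores = self.w_s2(torch.tanh(self.w_s1(H)))     # (batch, seq_len, n_hops)
        A = F.softmax(scores, dim=1)                     # attention over tokens, per hop
        M = A.transpose(1, 2) @ H                        # (batch, n_hops, hidden_dim)
        return M, A

# Hypothetical usage: 600-dim BiLSTM states, d_a = 350, r = 30 attention hops
attention = StructuredSelfAttention(600, 350, 30)
M, A = attention(torch.randn(4, 50, 600))
```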

Read more

DeepLab ResNet v2 model implementation in PyTorch

DeepLab ResNet v2 model implementation in PyTorch. The DeepLab-ResNet architecture has been replicated exactly from the Caffe implementation. The architecture calculates losses on input images over multiple scales (1x, 0.75x, 0.5x). Losses are calculated individually over these 3 scales; in addition, one more loss is calculated after merging the output score maps from the 3 scales, and these 4 losses are summed to obtain the total loss. Updates 18 July 2017 […]
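The four-term loss can be sketched as follows. This is a hedged illustration of the scheme described above, not the repository's exact code: each per-scale score map gets a cross-entropy loss against the ground truth, a fourth loss is computed on a merged map (assumed here to be an element-wise max of the upsampled maps), and the four losses are summed.

```python
import torch
import torch.nn.functional as F

def multi_scale_loss(score_maps, label):
    """score_maps: list of 3 score maps at scales 1x, 0.75x, 0.5x, each (B, C, h, w)
    label:      (B, H, W) long tensor of ground-truth class indices at full resolution
    Returns the sum of the 3 per-scale losses plus a loss on the merged map."""
    full_size = label.shape[-2:]
    upsampled = [F.interpolate(s, size=full_size, mode="bilinear", align_corners=False)
                 for s in score_maps]
    losses = [F.cross_entropy(s, label) for s in upsampled]          # 3 per-scale losses
    merged = torch.stack(upsampled, dim=0).max(dim=0).values         # assumed max fusion
    losses.append(F.cross_entropy(merged, label))                    # 4th loss on merged map
    return sum(losses)
```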

Read more