An efficient 3D semantic segmentation framework for Urban-scale point clouds like SensatUrban, Campus3D, etc

This is the official implementation of our BEV-Seg3D-Net, an efficient 3D semantic segmentation framework for urban-scale point clouds such as SensatUrban and Campus3D. Features of our framework/model: it leverages various proven 2D segmentation methods for 3D tasks, achieves competitive performance on the SensatUrban benchmark, and offers fast inference, covering roughly 1 km² per minute on an RTX 3090. To be done: add more complex/efficient fusion models; add more backbones such as ResNeXt, HRNet, and DenseNet; add more novel projection methods such as PointPillars. For technical details, […]
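The core idea, rasterizing an urban-scale point cloud into a bird's-eye-view (BEV) grid so that proven 2D segmentation backbones can be applied, can be sketched roughly as follows. This is a minimal illustration only: the 0.5 m grid resolution, the max-height aggregation rule, and the choice of per-point features are assumptions, not the repository's actual projection code.

```python
import numpy as np

def bev_project(points, grid_size=0.5, feature_dim=3):
    """Project an (N, 3+) point cloud into a bird's-eye-view feature grid.

    Minimal sketch: each cell keeps the features of its highest point,
    mimicking a simple height-based BEV rasterization. Grid resolution
    and the max-height rule are illustrative assumptions.
    """
    xy = points[:, :2]
    z = points[:, 2]
    mins = xy.min(axis=0)
    cells = ((xy - mins) / grid_size).astype(np.int64)
    h, w = cells[:, 0].max() + 1, cells[:, 1].max() + 1

    bev = np.zeros((h, w, feature_dim), dtype=np.float32)
    best_z = np.full((h, w), -np.inf, dtype=np.float32)
    for (r, c), zi, feat in zip(cells, z, points[:, :feature_dim]):
        if zi > best_z[r, c]:          # keep the highest point per cell
            best_z[r, c] = zi
            bev[r, c] = feat
    return bev  # (H, W, C) image, ready for a 2D segmentation backbone
```

Per-point labels are then recovered by mapping each point back to its cell and reading the 2D prediction at that location.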

Read more

Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Translation

This repo hosts the code accompanying the camera-ready version of “Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Translation” (EMNLP 2021). Setup: we provide our scripts and modifications to Fairseq. Below, we describe how to run the code and, for instance, reproduce Table 2 of the paper. Data: to view the data as we prepared and used it, switch to the main branch, but we recommend cloning the code from this branch to […]
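The paper's central recipe, updating only the cross-attention parameters of a pretrained translation model while freezing everything else, can be sketched independently of the provided Fairseq scripts. The name filter below relies on Fairseq's convention of calling decoder cross-attention modules `encoder_attn`; treat both the filter and the optimizer settings as assumptions, not the repo's exact training code.

```python
import torch

def freeze_all_but_cross_attention(model: torch.nn.Module):
    """Freeze every parameter except those in cross-attention blocks.

    Rough sketch: Fairseq transformer decoders name their cross-attention
    sub-modules `encoder_attn`, so we match on that substring. Adjust the
    filter for other codebases.
    """
    trainable = []
    for name, param in model.named_parameters():
        if "encoder_attn" in name:
            param.requires_grad = True
            trainable.append(name)
        else:
            param.requires_grad = False
    return trainable

# Usage sketch: pass only the unfrozen parameters to the optimizer.
# optimizer = torch.optim.Adam(
#     [p for p in model.parameters() if p.requires_grad], lr=1e-4)
```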

Read more

GLaRA: Graph-based Labeling Rule Augmentation for Weakly Supervised Named Entity Recognition

This repository is the code release for the paper GLaRA: Graph-based Labeling Rule Augmentation for Weakly Supervised Named Entity Recognition, accepted at EACL 2021. This work aims to improve weakly supervised named entity recognition systems by automatically finding new rules that help identify entities in data. The idea, as shown in the following figure, is that if we know rule1: associated with->Disease is an accurate rule and it is semantically related to rule2: cause of->Disease, we should be […]
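The propagation idea, starting from a small set of accurate seed rules and promoting candidate rules that are semantically close to them, can be illustrated with a simple embedding-similarity sketch. The cosine-similarity criterion, the 0.8 threshold, and the `embed` function below are illustrative assumptions, not the paper's actual graph-based propagation.

```python
import numpy as np

def expand_rules(seed_rules, candidate_rules, embed, threshold=0.8):
    """Return candidate rules whose embedding is close to any seed rule.

    `embed` maps a rule string (e.g. "associated with->Disease") to a
    vector; cosine similarity above `threshold` promotes the candidate.
    Both the embedding function and the threshold are assumptions made
    for illustration.
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    seeds = [embed(r) for r in seed_rules]
    accepted = []
    for rule in candidate_rules:
        v = embed(rule)
        if any(cos(v, s) >= threshold for s in seeds):
            accepted.append(rule)   # e.g. "cause of->Disease"
    return accepted
```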

Read more

Run an FFmpeg command and see the percentage progress and ETA

A command-line program that runs an FFmpeg command and shows the following in addition to the FFmpeg output: percentage progress, speed, and ETA (minutes and seconds). Example output: Progress: 25% | Speed: 22.3x | ETA: 1m 33s. Example invocation: python3 better_ffmpeg_progress.py -c "ffmpeg -i input.mp4 -c:a libmp3lame output.mp3". I have also included a function that can be imported and used in your own Python program or script: run_ffmpeg_show_progress("ffmpeg -i input.mp4 -c:a libmp3lame output.mp3") GitHub https://github.com/CrypticSignal/better-ffmpeg-progress
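Using the importable function exactly as quoted above, a minimal script would look like the sketch below; it assumes better_ffmpeg_progress.py is on the Python path and that input.mp4 exists.

```python
# Minimal usage sketch, assuming better_ffmpeg_progress.py is importable.
from better_ffmpeg_progress import run_ffmpeg_show_progress

# Converts input.mp4's audio to MP3 while printing progress, speed and ETA.
run_ffmpeg_show_progress("ffmpeg -i input.mp4 -c:a libmp3lame output.mp3")
```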

Read more

Live Speech Portraits: Real-Time Photorealistic Talking-Head Animation

This repository contains the implementation of the following paper: Live Speech Portraits: Real-Time Photorealistic Talking-Head Animation, Yuanxun Lu, Jinxiang Chai, Xun Cao (SIGGRAPH Asia 2021). Abstract: To the best of our knowledge, we present the first live system that generates personalized photorealistic talking-head animation driven only by audio signals at over 30 fps. Our system contains three stages. The first stage is a deep neural network that extracts deep audio features along with a manifold projection to project the features […]

Read more

Auralisation of learned features in CNN (for audio)

This repo provides an example of auralisation of CNNs, as demonstrated at ISMIR 2015. Files: auralise.py includes all the functions required for deconvolution; example.py includes the whole pipeline – just clone and run it with python example.py. You might need an older version of Keras, e.g. this (ver 0.3.x). Folders: src_songs includes the three songs I used in my blog post. Usage: load the weights that you want to auralise. I’m using the function W = load_weights() to load […]
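A minimal sketch of that first step is below; note that the import path for `load_weights` is an assumption on my part (the excerpt only says the helper exists), so check auralise.py / example.py in the repo for its actual location.

```python
# Sketch of the usage described above. The import path for `load_weights`
# is hypothetical; the helper lives somewhere in the repo's scripts.
from auralise import load_weights

W = load_weights()              # learned CNN filter weights, layer by layer
print([w.shape for w in W])     # inspect layer shapes before deconvolving a song
```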

Read more

Interpretability and explainability of data and machine learning models

The AI Explainability 360 toolkit is an open-source library that supports interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms that cover different dimensions of explanations along with proxy explainability metrics. The AI Explainability 360 interactive experience provides a gentle introduction to the concepts and capabilities by walking through an example use case for different consumer personas. The tutorials and example notebooks offer a deeper, data scientist-oriented introduction. […]

Read more

Interpreting scikit-learn’s decision tree and random forest predictions

Package for interpreting scikit-learn’s decision tree and random forest predictions. It decomposes each prediction into bias and feature-contribution components, as described in http://blog.datadive.net/interpreting-random-forests/. For a dataset with n features, each prediction is decomposed as prediction = bias + feature_1_contribution + … + feature_n_contribution. It works with scikit-learn’s DecisionTreeRegressor, DecisionTreeClassifier, ExtraTreeRegressor, ExtraTreeClassifier, RandomForestRegressor, RandomForestClassifier, ExtraTreesRegressor, and ExtraTreesClassifier. Free software: BSD license. Installation: the easiest way to install the package is via pip: $ pip install treeinterpreter. Usage […]
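The decomposition above maps directly onto the package's documented predict call; a minimal sketch with a random-forest regressor follows, where the dataset and model settings are placeholders chosen for illustration.

```python
# Minimal sketch: decompose random-forest predictions into bias and
# per-feature contributions with treeinterpreter.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from treeinterpreter import treeinterpreter as ti

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

prediction, bias, contributions = ti.predict(model, X[:2])
# Each prediction equals bias + the sum of its feature contributions.
print(np.allclose(prediction.ravel(),
                  bias.ravel() + contributions.sum(axis=1).ravel()))
```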

Read more

A library that implements fairness-aware machine learning algorithms

themis-ml is a Python library built on top of pandas and sklearn that implements fairness-aware machine learning algorithms. themis-ml defines discrimination as the preference (bias) for or against a set of social groups that results in the unfair treatment of their members with respect to some outcome. It defines fairness as the inverse of discrimination; in the context of a machine learning algorithm, this is measured by the degree to which the algorithm’s predictions favor one social group over another […]
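The fairness notion described, the degree to which predictions favor one social group over another, is commonly quantified as a mean difference in positive outcomes between groups. The sketch below illustrates that idea with plain NumPy; it is a generic, library-agnostic illustration, not themis-ml's own API, and the group encoding is an assumption.

```python
import numpy as np

def mean_difference(y_pred, group):
    """Degree to which positive predictions favor one group over another.

    `y_pred` holds binary predictions (1 = favorable outcome) and `group`
    holds binary group membership (1 = disadvantaged group, an assumed
    encoding). 0 means parity; positive values mean the advantaged group
    is favored. Generic illustration only, not themis-ml's API.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

# Example: group 0 gets a 0.75 positive rate vs 0.25 for group 1 -> 0.5.
print(mean_difference([1, 1, 1, 0, 0, 0, 1, 0], [0, 0, 0, 0, 1, 1, 1, 1]))
```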

Read more