Model Zoo for AI Model Efficiency Toolkit

We provide a collection of popular neural network models and compare their floating-point and quantized performance. The results demonstrate that quantized models can achieve accuracy comparable to floating-point models. Alongside the results, we also provide recipes for quantizing floating-point models using the AI Model Efficiency Toolkit (AIMET).

Introduction

Quantized inference is significantly faster than floating-point inference and enables models to run in a power-efficient manner on mobile and edge devices. We use AIMET, a library that […]
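To make the core idea concrete, here is a minimal, hypothetical sketch of symmetric int8 post-training quantization, the kind of transformation AIMET simulates. This is not AIMET's API (real workflows go through classes such as `QuantizationSimModel`); it only illustrates why quantized weights can stay close to their floating-point values:

```python
# Hypothetical sketch of symmetric per-tensor int8 quantization.
# Not AIMET code; it only illustrates the quantize/dequantize round trip.

def quantize_int8(weights):
    """Map floats to int8 via w_q = round(w / scale), scale = max|w| / 127."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.52, -1.27, 0.03, 0.89, -0.44]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, recovered))
# The round-trip error is bounded by half a quantization step (scale / 2),
# which is why well-calibrated int8 models can match float accuracy closely.
```

The interesting part is the error bound: each dequantized weight differs from the original by at most half a quantization step, so accuracy loss depends mainly on how well the scale is calibrated to the weight range.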

Read more

Bonsai: Gradient Boosted Trees + Bayesian Optimization

Bonsai

Bonsai is a wrapper for the XGBoost and CatBoost model training pipelines that leverages Bayesian optimization for computationally efficient hyperparameter tuning. Despite being a very small package, it exposes nearly all of the configurable parameters in XGBoost and CatBoost, as well as those of the BayesianOptimization package, allowing users to specify custom objectives, metrics, parameter search ranges, and search policies. This is made possible by the strong similarities between the two libraries.

$ pip install bonsai-tree

References/Dependencies: Why use […]
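For readers unfamiliar with the tuning loop Bonsai automates, here is a toy, self-contained sketch. The objective function and search space are hypothetical, and plain random search stands in for Bayesian optimization (the real package wraps XGBoost/CatBoost training and the BayesianOptimization library):

```python
import random

# Toy hyperparameter search over a made-up "validation loss" surface.
# Hypothetical example: Bonsai itself trains real XGBoost/CatBoost models
# and uses Bayesian optimization rather than this simple random search.

random.seed(0)

def validation_loss(learning_rate, max_depth):
    # Hypothetical objective with its minimum at lr = 0.1, depth = 6.
    return (learning_rate - 0.1) ** 2 + 0.01 * (max_depth - 6) ** 2

search_space = {"learning_rate": (0.01, 0.5), "max_depth": (2, 12)}

best_params, best_loss = None, float("inf")
for _ in range(200):
    params = {
        "learning_rate": random.uniform(*search_space["learning_rate"]),
        "max_depth": random.randint(*search_space["max_depth"]),
    }
    loss = validation_loss(**params)
    if loss < best_loss:
        best_params, best_loss = params, loss
```

Bayesian optimization improves on this loop by fitting a surrogate model to past evaluations and proposing the next candidate where the expected improvement is highest, which matters when each evaluation is a full gradient-boosted-tree training run.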

Read more

Auto-generate Streamlit UI from Pydantic Models and Dataclasses

Streamlit Pydantic

Auto-generate Streamlit UI elements from Pydantic models. Streamlit-pydantic makes it easy to auto-generate UI elements from Pydantic models: just define your data model and turn it into a full-fledged UI form. It supports data validation, nested models, and field limitations, and can be easily integrated into any Streamlit app.

Beta Version: Only suggested for experimental usage.

Highlights

🪄 Auto-generated UI elements from Pydantic models.
📇 Out-of-the-box data validation.
📑 Supports nested Pydantic models.
📏 Supports field limits and […]
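The core mechanism is model introspection: walk the model's typed fields and map each type to a widget. A stdlib-only sketch of that idea follows, using dataclasses in place of Pydantic and a widget-spec list in place of real Streamlit widgets (names like `WIDGET_FOR_TYPE` and `generate_form_spec` are illustrative, not part of streamlit-pydantic's API):

```python
from dataclasses import dataclass, fields

# Sketch of the idea behind streamlit-pydantic: introspect a data model's
# typed fields and map each type to a UI widget. Hypothetical stand-in code:
# it uses stdlib dataclasses instead of Pydantic and returns a widget spec
# instead of rendering actual Streamlit widgets.

WIDGET_FOR_TYPE = {
    int: "number_input",
    float: "number_input",
    str: "text_input",
    bool: "checkbox",
}

@dataclass
class ExampleModel:
    text: str
    integer: int
    agree: bool

def generate_form_spec(model_cls):
    """Return (field_name, widget_name) pairs derived from field annotations."""
    return [(f.name, WIDGET_FOR_TYPE[f.type]) for f in fields(model_cls)]

spec = generate_form_spec(ExampleModel)
```

In the real library, the same traversal additionally recurses into nested models and reads field constraints (min/max, regex) to configure widget limits, which is what "supports field limits" refers to.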

Read more

BERT Over BERT for Training Persona-based Dialogue Models from Limited Personalized Data

BoB

This repository provides the implementation details for the ACL 2021 main conference paper: BoB: BERT Over BERT for Training Persona-based Dialogue Models from Limited Personalized Data.

1. Data Preparation

In this work, we carried out persona-based dialogue generation experiments under a persona-dense scenario (English PersonaChat) and a persona-sparse scenario (Chinese PersonalDialog), with the assistance of a series of auxiliary inference datasets. Here we summarize the key information of these datasets and provide the links to download them if […]

Read more

A Joint Sequence(1D)-Fold(3D) Embedding-based Generative Model for Protein Design

Fold2Seq

[ICML 2021] Fold2Seq: A Joint Sequence (1D)-Fold (3D) Embedding-based Generative Model for Protein Design

Environment file: Data and Feature Generation: Go to data/ and check the README there.

How to train the model:
python train.py --data_path $path_to_the_data_dictionary --lr $learning_rate --model_save $path_to_the_saved_model

How to generate sequences:
python inference.py --trained_model $path_to_the_trained_model --output $path_to_the_output_file --data_path $path_to_the_data_dictionary

Fold2Seq generated structures against natural structures:

GitHub: https://github.com/IBM/fold2seq

Read more

Unofficial implementation of PatchCore anomaly detection model

PatchCore Anomaly Detection

Unofficial implementation of the PatchCore (new SOTA) anomaly detection model.

Original paper: Towards Total Recall in Industrial Anomaly Detection (Jun 2021), Karsten Roth, Latha Pemula, Joaquin Zepeda, Bernhard Schölkopf, Thomas Brox, Peter Gehler.
https://arxiv.org/abs/2106.08265
https://paperswithcode.com/sota/anomaly-detection-on-mvtec-ad

Notice (21/06/18): This code is not yet verified. Any feedback is appreciated.

Updates (21/06/21): I used sklearn's SparseRandomProjection (eps=0.9) for random projection. I'm not confident with this. I think the exact value of "b nearest patch-features" is not presented in the paper, so I just set it to 9 (args.n_neighbors). In terms […]
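To illustrate the nearest patch-features idea mentioned above, here is a toy, pure-Python sketch: a test feature is scored by its mean distance to the k nearest features in a memory bank of normal patches. The feature dimensions, points, and k = 2 are invented for illustration; the repo itself uses sklearn nearest-neighbour search with args.n_neighbors = 9 on deep CNN patch features:

```python
import math

# Toy sketch of PatchCore-style scoring (illustrative only): score a test
# patch-feature by its mean distance to the k nearest patch-features stored
# in a memory bank built from defect-free images.

def anomaly_score(feature, memory_bank, k=2):
    """Mean Euclidean distance to the k nearest patch-features in the bank."""
    dists = sorted(math.dist(feature, m) for m in memory_bank)
    return sum(dists[:k]) / k

# Memory bank of 2-D features from "normal" patches (made-up values).
memory_bank = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]

normal_score = anomaly_score((0.05, 0.05), memory_bank)    # close to the bank
anomalous_score = anomaly_score((2.0, 2.0), memory_bank)   # far from the bank
# A feature far from every stored normal patch gets a much higher score,
# which is the signal used to flag anomalous regions.
```

Features lying near the memory bank score low and far-away features score high; thresholding these scores per patch yields the anomaly segmentation map.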

Read more