CPPF: Towards Robust Category-Level 9D Pose Estimation in the Wild (CVPR 2022)

Yang You, Ruoxi Shi, Weiming Wang, Cewu Lu. CPPF is a pure sim-to-real method that achieves 9D pose estimation in the wild. Our model is trained solely on ShapeNet synthetic models (without any real-world background pasting) and can be applied directly to real-world scenarios (e.g., NOCS REAL275, SUN RGB-D). CPPF achieves this by using only local SE(3)-invariant geometric features, and leverages a bottom-up voting scheme, which is quite different from previous end-to-end learning methods. Our model […]
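To make the SE(3)-invariance claim concrete, here is a minimal sketch of a classic point pair feature of the kind such voting schemes build on: the pair distance plus three angles between the normals and the connecting direction. This is an illustrative feature, not necessarily the exact feature set CPPF uses.

```python
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    """Feature of an oriented point pair: (distance, angle(n1, d),
    angle(n2, d), angle(n1, n2)). All four quantities are preserved
    under any rigid (rotation + translation) transform."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    dn = d / dist
    def angle(a, b):
        return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    return np.array([dist, angle(n1, dn), angle(n2, dn), angle(n1, n2)])

# demo: the feature is unchanged under a random rigid transform
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]  # flip one column so Q is a proper rotation
t = rng.normal(size=3)

p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 2.0, 0.5])
n1, n2 = np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])

f_before = point_pair_feature(p1, n1, p2, n2)
f_after = point_pair_feature(Q @ p1 + t, Q @ n1, Q @ p2 + t, Q @ n2)
assert np.allclose(f_before, f_after)
```

Because such features depend only on relative geometry, a network trained on them sees the same inputs for a synthetic model and a transformed real-world instance, which is what makes the sim-to-real transfer plausible.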

Read more

Distributionally Robust Structure Learning for Discrete Pairwise Markov Networks

This is the official implementation of the following paper, accepted to AISTATS 2022: Distributionally Robust Structure Learning for Discrete Pairwise Markov Networks, by Yeshu Li, Zhan Shi, Xinhua Zhang, Brian D. Ziebart. [Proceedings link TBA] The repository's README covers requirements, a quick start, run instructions, and a citation entry; please cite the work if you find it useful in your research. Acknowledgement: this project is based upon work supported by the National Science Foundation under Grant No. 1652530.

Read more

RoMA: Robust Model Adaptation for Offline Model-based Optimization

Implementation of RoMA: Robust Model Adaptation for Offline Model-based Optimization (NeurIPS 2021). Setup: create a conda environment (`conda create -n roma python=3.7`, then `conda activate roma`) and install dependencies with `pip install -r requirement.txt`. Run experiments with `python run.py --task [TASK]`, where the available tasks are TASKS=[ant, superconductor, dkitty, hopper, gfp, molecule]. Citation: `@inproceedings{yu2021roma, title={RoMA: Robust Model Adaptation for Offline` […]
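Collected from the blurb above, the setup and run commands are (the `--task ant` invocation is just one of the listed tasks):

```shell
# create and activate the environment (Python 3.7, per the repo README)
conda create -n roma python=3.7
conda activate roma
pip install -r requirement.txt   # note: the repo's file is named "requirement.txt"

# run one of the available tasks:
# TASKS = [ant, superconductor, dkitty, hopper, gfp, molecule]
python run.py --task ant
```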

Read more

Robust fine-tuning of zero-shot models

This repository contains code for the paper Robust fine-tuning of zero-shot models by Mitchell Wortsman*, Gabriel Ilharco*, Mike Li, Jong Wook Kim, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, Ludwig Schmidt. Abstract Large pre-trained models such as CLIP offer consistent accuracy across a range of data distributions when performing zero-shot inference (i.e., without fine-tuning on a specific dataset). Although existing fine-tuning approaches substantially improve accuracy in-distribution, they also reduce out-of-distribution robustness. We address this tension by introducing a simple and effective […]

Read more