Optimization for Oriented Object Detection via Representation Invariance Loss

By Qi Ming, Zhiqiang Zhou, Lingjuan Miao, Xue Yang, and Yunpeng Dong. The repository hosts the code for our paper Optimization for Oriented Object Detection via Representation Invariance Loss (paper link), built on mmdetection and s2anet.

Introduction: to be updated.

Installation:

```bash
conda create -n ridet python=3.7 -y
source activate ridet
conda install pytorch=1.3 torchvision cudatoolkit=10.0 -c pytorch
pip install -r requirements.txt
python setup.py develop
cd mmdet/ops/orn
python setup.py build_ext --inplace
apt-get update
apt-get install swig
apt-get install zip
cd DOTA_devkit
```

[…]

Read more

Bunch of optimizer implementations in PyTorch

Bunch of optimizer implementations in PyTorch with clean code and strict typing, plus a collection of useful optimization ideas. Most of the implementations follow the original papers, with some tweaks added.

Documentation: https://pytorch-optimizers.readthedocs.io/en/latest/

Install:

```bash
pip3 install pytorch-optimizer
```

Simple usage:

```python
from pytorch_optimizer import Ranger21

...

model = YourModel()
optimizer = Ranger21(model.parameters())

...

for input, output in data:
    optimizer.zero_grad()
    loss = loss_function(output, model(input))
    loss.backward()
    optimizer.step()
```

Supported Optimizers / Useful Resources: several optimization ideas to regularize & stabilize the training. Most of the […]

Read more

Neural Combinatorial Optimization with Reinforcement Learning In PyTorch

PyTorch implementation of Neural Combinatorial Optimization with Reinforcement Learning. I have implemented the basic RL pretraining model with greedy decoding from the paper. An implementation of the supervised learning baseline model is available here. Instead of a critic network, I obtained my TSP results below using an exponential moving average critic; the critic network is simply commented out in my code right now. From correspondence with a few others, it was determined that the exponential moving average critic […]
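For readers unfamiliar with the technique, here is a minimal sketch of a REINFORCE step with an exponential-moving-average baseline standing in for a critic network; the names and the decay value are illustrative assumptions, not the repository's code.

```python
# Minimal sketch (assumed names): REINFORCE with an exponential moving
# average (EMA) baseline in place of a learned critic network.
import torch

ema_baseline = None   # running estimate of the average tour length
beta = 0.8            # EMA decay; assumed hyperparameter

def reinforce_step(tour_lengths, log_probs, optimizer):
    """tour_lengths: (batch,) tour costs; log_probs: (batch,) summed log-probs."""
    global ema_baseline
    batch_mean = tour_lengths.mean().detach()
    if ema_baseline is None:
        ema_baseline = batch_mean
    else:
        ema_baseline = beta * ema_baseline + (1 - beta) * batch_mean
    # Advantage: how much worse each tour is than the running average;
    # minimizing advantage * log_prob pushes the policy toward shorter tours.
    advantage = (tour_lengths - ema_baseline).detach()
    loss = (advantage * log_probs).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```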

Read more

A python library providing support for higher-order optimization

higher is a library providing support for higher-order optimization, e.g. through unrolled first-order optimization loops, of "meta" aspects of these loops. It provides tools for turning existing torch.nn.Module instances "stateless", meaning that changes to their parameters can be tracked and gradients with respect to intermediate parameters can be taken. It also provides a suite of differentiable optimizers to facilitate the implementation of various meta-learning approaches. Full documentation is available at https://higher.readthedocs.io/en/latest/. Requirements: Python version >= 3.5, PyTorch version >= 1.3 […]
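As a quick illustration of the pattern higher enables, here is a one-step inner-loop sketch using its innerloop_ctx API; the model, data, and learning rates are placeholder assumptions.

```python
# Minimal MAML-style sketch with higher: one differentiable inner step,
# then an outer (meta) gradient through that step.
import torch
import higher

model = torch.nn.Linear(10, 1)
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_opt = torch.optim.SGD(model.parameters(), lr=0.1)

x, y = torch.randn(8, 10), torch.randn(8, 1)

meta_opt.zero_grad()
# copy_initial_weights=False keeps the graph back to model.parameters(),
# so the outer loss can differentiate through the inner update.
with higher.innerloop_ctx(model, inner_opt,
                          copy_initial_weights=False) as (fmodel, diffopt):
    inner_loss = torch.nn.functional.mse_loss(fmodel(x), y)
    diffopt.step(inner_loss)                       # differentiable inner step
    outer_loss = torch.nn.functional.mse_loss(fmodel(x), y)
    outer_loss.backward()                          # grads reach model.parameters()
meta_opt.step()
```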

Read more

PyTorch Extension Library of Optimized Scatter Operations

PyTorch Scatter. This package consists of a small extension library of highly optimized sparse update (scatter and segment) operations for use in PyTorch, which are missing in the main package. Scatter and segment operations can be roughly described as reduce operations based on a given "group-index" tensor. Segment operations require the "group-index" tensor to be sorted, whereas scatter operations are not subject to this requirement. The package consists of the following operations with reduction types "sum" | "mean" | "min" | "max". In addition, we provide the […]
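To make the "group-index" idea concrete, here is a small example using the package's scatter_max operation; the tensors are illustrative.

```python
# Scatter reduction: reduce src entries that share the same index value.
import torch
from torch_scatter import scatter_max

src = torch.tensor([5.0, 1.0, 7.0, 2.0, 3.0, 2.0, 1.0, 3.0])
index = torch.tensor([0, 0, 1, 0, 2, 2, 3, 3])  # the "group-index" tensor

# For each group in index, take the max of the matching src entries.
out, argmax = scatter_max(src, index)
print(out)     # tensor([5., 7., 3., 3.])
print(argmax)  # positions in src that produced each group's max
```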

Read more

Cloud Optimized GeoTIFF creation and validation plugin for rasterio

rio-cogeo: Cloud Optimized GeoTIFF (COG) creation and validation plugin for Rasterio. This plugin aims to facilitate the creation and validation of Cloud Optimized GeoTIFFs (COG or COGEO). While it respects the COG specifications, the plugin also enforces several features: internal overviews (users can remove overviews with the option --overview-level 0) and internal tiles (default profiles have 512×512 internal tiles). Important: in GDAL 3.1 a new COG driver has been added (doc, discussion); starting with rio-cogeo version 2.2, a --use-cog-driver option was added to […]
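For the Python side of the plugin, a minimal create-and-validate round trip looks roughly like this; the filenames are placeholders and the validator's return shape has varied across versions.

```python
# Minimal sketch using rio_cogeo's Python API.
from rio_cogeo.cogeo import cog_translate, cog_validate
from rio_cogeo.profiles import cog_profiles

# Convert a regular GeoTIFF into a COG; "deflate" is one of the
# built-in output profiles (512x512 internal tiles by default).
cog_translate("input.tif", "output_cog.tif", cog_profiles.get("deflate"))

# Check the output against the COG layout rules. Newer versions return
# (is_valid, errors, warnings); older ones returned a plain bool.
print(cog_validate("output_cog.tif"))
```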

Read more

A Python profiler to help you optimize your code – make it faster

pyinstrument: Pyinstrument is a Python profiler. A profiler is a tool to help you optimize your code and make it faster. To get the biggest speed increase you should focus on the slowest part of your program, and Pyinstrument helps you find it!

Installation:

```bash
pip install pyinstrument
```

Pyinstrument supports Python 3.7+. To run Pyinstrument from a git checkout, there's a build step. Take a look at Contributing for more info. Documentation: to learn how to use pyinstrument, or to check the reference, […]
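A minimal use of the Python API looks like this; the sleep call is just a stand-in for the code you want to profile.

```python
# Profile a block of code with pyinstrument's Profiler object.
import time
from pyinstrument import Profiler

profiler = Profiler()
profiler.start()

time.sleep(0.25)  # stand-in for the slow code under investigation

profiler.stop()
# Render the recorded call tree with the time spent in each frame.
print(profiler.output_text(unicode=True, color=False))
```

You can also profile a whole script without touching its source by running `pyinstrument your_script.py` from the command line.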

Read more

Enhanced Particle Swarm Optimization (PSO) with Python

pso_particle_swarm_optimization: a fully documented Particle Swarm Optimization (PSO) algorithm implemented in Python. It includes a basic model along with a few advanced features such as updating the inertia weight, the cognitive and social learning coefficients, and the maximum velocity of the particles. Dependencies / Utilities: once the installation is finished (download or […]
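To show what those knobs do, here is a compact PSO sketch with an inertia weight, cognitive/social coefficients, and a velocity cap; all names and hyperparameter values are illustrative assumptions, not the repository's code.

```python
# Minimal PSO: velocity = inertia + cognitive pull + social pull,
# clipped to a maximum velocity, then positions are updated.
import numpy as np

def pso(objective, dim=2, n_particles=30, iters=100,
        w=0.7, c1=1.5, c2=1.5, v_max=0.5, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    rng = np.random.default_rng(0)
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()                                   # personal bests
    pbest_val = np.apply_along_axis(objective, 1, pos)
    gbest = pbest[pbest_val.argmin()].copy()             # global best

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        vel = np.clip(vel, -v_max, v_max)                # maximum velocity
        pos = np.clip(pos + vel, lo, hi)
        vals = np.apply_along_axis(objective, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_x, best_f = pso(lambda x: np.sum(x ** 2))  # sphere-function demo
print(best_x, best_f)
```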

Read more

Jupyter-friendly Python interface for C++ MINUIT2

iminuit: iminuit is a Jupyter-friendly Python interface for the Minuit2 C++ library maintained by CERN's ROOT team. It can be used as a general robust function minimisation method, but is most commonly used for likelihood fits of models to data, and to get model parameter error estimates from likelihood profile analysis. In a nutshell:

```python
from iminuit import Minuit

def fcn(x, y, z):
    return (x - 2) ** 2 + (y - 3) ** 2 + (z - 4) ** 2
```

[…]
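The excerpt cuts off before the minimisation itself; a self-contained run following the usual iminuit v2 pattern might look like this (the starting values are placeholder assumptions).

```python
# Minimise a simple least-squares cost and read back the fit results.
from iminuit import Minuit

def fcn(x, y, z):
    return (x - 2) ** 2 + (y - 3) ** 2 + (z - 4) ** 2

fcn.errordef = Minuit.LEAST_SQUARES  # error definition for a least-squares cost

m = Minuit(fcn, x=0, y=0, z=0)  # starting values (assumed)
m.migrad()                      # run the MIGRAD minimiser
m.hesse()                       # parameter uncertainties from the Hessian
print(m.values)                 # should be close to (2, 3, 4)
```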

Read more

How Do Adam and Training Strategies Help BNNs Optimization?

AdamBNN: this is the PyTorch implementation of our paper "How Do Adam and Training Strategies Help BNNs Optimization?", published in ICML 2021.

![](https://github.com/liuzechun0216/images/raw/master/AdamBNN_github.jpg)

In this work, we explore the intrinsic reasons why Adam is superior to other optimizers like SGD for BNN (binary neural network) optimization and provide analytical explanations that support specific training strategies. By visualizing the optimization trajectory, we show that the optimization traverses an extremely rugged loss landscape and that the second-order momentum in Adam is crucial to revitalize the […]
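As a reminder of the mechanism the paper analyzes, here is an illustrative sketch of the Adam update (not the paper's code) with the second-moment term spelled out.

```python
# Textbook Adam update for a single parameter tensor, highlighting the
# second moment v that the paper argues is crucial for BNN optimization.
import torch

def adam_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam step; t is the 1-based step count for bias correction."""
    m = b1 * m + (1 - b1) * grad          # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * grad ** 2     # second moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)             # bias-corrected estimates
    v_hat = v / (1 - b2 ** t)
    # Parameters in flat or "dead" regions accumulate a tiny v_hat, so
    # this per-parameter rescaling enlarges their effective step size.
    param = param - lr * m_hat / (v_hat.sqrt() + eps)
    return param, m, v
```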

Read more