Optimizing Protein Structure Prediction Model Training and Inference on GPU Clusters

FastFold provides a high-performance implementation of Evoformer with the following characteristics. Excellent kernel performance on GPU platforms. Support for Dynamic Axial Parallelism (DAP), which breaks the memory limit of a single GPU and reduces overall training time; DAP can also significantly speed up inference and makes ultra-long-sequence inference possible. Ease of use: huge performance gains from a few lines of change, and you don't need to care about how the parallel part is implemented. […]

Read more

Implementation of Line-Search Optimization Algorithms in Python

During my time as a scientific assistant at the Karlsruhe Institute of Technology (Germany), I implemented various optimization algorithms for unconstrained nonlinear problems in Python: the gradient descent method, Newton's method, the conjugate gradient method, the BFGS method, and a trust-region method. In addition, I implemented an Armijo line search. The code is written in an object-oriented manner, whereby each method is implemented in its own class (bfgs.py, cg.py, gradv.py, newtonm.py and tr.py) and executed via the ros_test.py script. The script ros_test.py implements the Rosenbrock function, which is minimized to a […]
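As an illustration of the approach (a minimal sketch with my own function names, not the repository's code), gradient descent with an Armijo backtracking line search on the Rosenbrock function might look like this:

```python
import numpy as np

def rosenbrock(x):
    # Classic Rosenbrock function: f(x, y) = 100*(y - x^2)^2 + (1 - x)^2
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def rosenbrock_grad(x):
    # Analytic gradient of the Rosenbrock function
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

def armijo(f, grad, x, d, sigma=1e-4, beta=0.5, t0=1.0):
    # Backtracking: shrink t until f(x + t*d) <= f(x) + sigma*t*grad(x)^T d.
    t, fx, slope = t0, f(x), grad(x) @ d
    while f(x + t * d) > fx + sigma * t * slope:
        t *= beta
    return t

def gradient_descent(f, grad, x0, tol=1e-6, max_iter=20000):
    # Steepest descent with Armijo step sizes
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g
        x = x + armijo(f, grad, x, d) * d
    return x

print(gradient_descent(rosenbrock, rosenbrock_grad, [-1.2, 1.0]))  # approaches [1., 1.]
```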

Read more

Wordle Optimization Project

Our “robo guesser” can solve any Wordle puzzle within 6 guesses, with an average of 4.53 guesses. This automatic guesser is included in this project, along with other functions that provide useful analytics for players who want to improve their guesses without cheating. How to Use Key Functions: all functions in this repository provide basic information via help(function), but below is how to use the key functions. Prerequisites: you will need Python, as well as the packages pandas […]
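As background for what such a guesser has to compute (a hypothetical sketch, not code from this repository), the Wordle feedback rule, including repeated-letter handling, can be written as:

```python
from collections import Counter

def wordle_feedback(guess: str, answer: str) -> str:
    # Returns a 5-character pattern: 'g' (green), 'y' (yellow), 'x' (gray).
    # Greens consume answer letters first, the way Wordle scores repeats.
    feedback = ["x"] * 5
    remaining = Counter()
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            feedback[i] = "g"
        else:
            remaining[a] += 1
    for i, g in enumerate(guess):
        if feedback[i] == "x" and remaining[g] > 0:
            feedback[i] = "y"
            remaining[g] -= 1
    return "".join(feedback)

print(wordle_feedback("crane", "caper"))  # 'gyyxy'
```

A solver then keeps only the candidate words whose feedback against the guess matches the pattern actually observed.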

Read more

A computational optimization project towards the goal of gerrymandering the results of a hypothetical election in the UK

We seek to determine the best possible division of 14 shires on a given map into 5 constituencies, each composed of one or more shires, with the goal of maximising the number of constituencies in which “Joris Bohnson” wins a majority of the total votes inside that constituency. As a result, he is able to send the maximum possible […]
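To make the objective concrete (a hypothetical sketch with made-up vote counts, not the project's data or code), scoring a candidate assignment of the 14 shires to 5 constituencies could look like:

```python
import random

# Hypothetical (votes for "Joris Bohnson", votes against) per shire;
# the real project reads these from the given map.
votes = [(60, 40), (30, 70), (55, 45), (20, 80), (65, 35),
         (45, 55), (70, 30), (40, 60), (52, 48), (35, 65),
         (58, 42), (25, 75), (62, 38), (48, 52)]

def constituencies_won(assignment, k=5):
    # assignment[i] is the constituency (0..k-1) that shire i belongs to.
    totals = [[0, 0] for _ in range(k)]
    for shire, c in enumerate(assignment):
        totals[c][0] += votes[shire][0]
        totals[c][1] += votes[shire][1]
    # Count non-empty constituencies where Bohnson holds a strict majority.
    return sum(1 for f, a in totals if f + a > 0 and f > a)

# Exhaustive search over all 5**14 assignments is large, so a simple
# random search illustrates the objective being maximised.
best, best_score = None, -1
for _ in range(20000):
    cand = [random.randrange(5) for _ in range(14)]
    score = constituencies_won(cand)
    if score > best_score:
        best, best_score = cand, score
print(best_score, best)
```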

Read more

A Momentumized, Adaptive, Dual Averaged Gradient Method for Stochastic Optimization

pip install madgrad. Try it out! A best-of-both-worlds optimizer with the generalization performance of SGD and convergence at least as fast as Adam's, often faster. A drop-in torch.optim implementation, madgrad.MADGRAD, is provided, as well as a FairSeq-wrapped instance. For FairSeq, just import madgrad anywhere in your project files and use the --optimizer madgrad command-line option, together with --weight-decay, --momentum, and optionally --madgrad_eps. The madgrad.py file containing […]
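A minimal sketch of the drop-in torch.optim usage described above (the toy model, data, and hyperparameter values are illustrative, not recommendations):

```python
import torch
import madgrad

# Toy regression setup; swap in your own model and data.
model = torch.nn.Linear(10, 1)
optimizer = madgrad.MADGRAD(model.parameters(), lr=1e-2, momentum=0.9)

x, y = torch.randn(64, 10), torch.randn(64, 1)
for step in range(100):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
```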

Read more

Convex Optimisation MVA course – Assignment

This repository contains the coding files for the third assignment of the MVA Convex Optimisation course. Usage: to reproduce the results displayed in the report, please start by cloning the repository locally with git clone https://github.com/bglbrt/CVXOPTQP.git, then install the required libraries with pip install -r requirements.txt. To test the QP solver method against an open-source solver on randomly generated QP programs, run the dedicated testing file. To obtain convergence plots for randomly generated LASSO dual optimisation problems, run the dedicated plotting […]
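To give a sense of the comparison described (a sketch assuming cvxpy as the open-source baseline; the repository's actual test commands are in its README), one can generate a random convex QP and solve it:

```python
import numpy as np
import cvxpy as cp

# Random convex QP: minimize 0.5 * x^T Q x + p^T x  subject to  A x <= b.
rng = np.random.default_rng(0)
n, m = 10, 20
M = rng.standard_normal((n, n))
Q = M.T @ M                         # positive semidefinite by construction
p = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = np.abs(rng.standard_normal(m))  # x = 0 is feasible, so the QP is solvable

x = cp.Variable(n)
problem = cp.Problem(cp.Minimize(0.5 * cp.quad_form(x, Q) + p @ x), [A @ x <= b])
problem.solve()
print("optimal value:", problem.value)
```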

Read more

A Lightweight Hyperparameter Optimization Tool

The mle-hyperopt package provides a simple and intuitive API for hyperparameter optimization of your Machine Learning Experiment (MLE) pipeline. It supports real, integer & categorical search variables and single- or multi-objective optimization. Core features include the following. API simplicity: the strategy.ask(), strategy.tell() interface & space definition. Strategy diversity: grid, random, coordinate search, SMBO & wrapping around FAIR's nevergrad. Search-space refinement based on the top-performing configs via strategy.refine(top_k=10). Export of configurations to execute via e.g. python train.py --config_fname config.yaml. Storage […]
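A minimal ask/tell loop in the spirit of the API above (the search-space variables, ranges, and dummy evaluation function are illustrative):

```python
from mle_hyperopt import RandomSearch

# Define a search space over real, integer and categorical variables;
# the names and ranges here are made up for illustration.
strategy = RandomSearch(
    real={"lrate": {"begin": 0.1, "end": 0.5, "prior": "uniform"}},
    integer={"batch_size": {"begin": 16, "end": 128, "prior": "uniform"}},
    categorical={"arch": ["mlp", "cnn"]},
)

def run_experiment(config):
    # Stand-in for training a model and returning a validation score.
    return config["lrate"] * config["batch_size"]

configs = strategy.ask(5)                      # propose 5 configurations
scores = [run_experiment(c) for c in configs]  # evaluate each one
strategy.tell(configs, scores)                 # report results back
```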

Read more

When to trust your model: Model-based policy optimization in offline RL settings

This repository contains the code of a version of the model-based RL algorithm MBPO, modified to perform in offline RL settings. Paper: “When to Trust Your Model: Model-Based Policy Optimization”. With much thanks, this code is based on Xingyu-Lin‘s easy-to-read PyTorch implementation of MBPO. See requirements.txt. The code depends on D4RL‘s environments and datasets. Only the hopper, walker, halfcheetah and ant environments are supported right now (if you wish to evaluate in other environments, modify the termination function in predict_env.py). Simply run python main_mbpo.py --env_name=halfcheetah-medium-v0 --seed=1234 […]
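The D4RL dependency mentioned above can be exercised on its own; a small sketch of loading the offline dataset behind halfcheetah-medium-v0 via D4RL's standard qlearning_dataset helper:

```python
import gym
import d4rl  # importing d4rl registers its environments with gym

# Build the environment named in the run command and pull its offline dataset.
env = gym.make("halfcheetah-medium-v0")
dataset = d4rl.qlearning_dataset(env)

# Transitions come back as parallel arrays of observations, actions,
# rewards, next observations and terminal flags.
print(dataset["observations"].shape, dataset["actions"].shape)
```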

Read more