Optimization Research

Research at the intersection of big data, optimization, and explainability

Typal Academy's research focuses on open-source development of optimization-based tools. Our specialty is creating optimization models and algorithms that are tunable, enabling them to be trained for high performance on a particular class of applications when training data is available.

Google Scholar Profile

Contact Us



Learning to Optimize

Key Ideas

"Learning to Optimize" (L2O) is a methodology wherein models are defined with inspiration taken from optimization. Models we consider use predictions/inferences that include an optimization layer that is tunable, enabling the optimization to encode both prior knowledge and available data.

Why implicit L2O?

Many of the works below use implicit models. These are distinct from the explosion of L2O models constructed by unrolling an optimization algorithm for a fixed, finite number of steps. A standard feedforward network prescribes a finite sequence of actions to perform. When an inference is instead defined in terms of an optimization model, it is characterized implicitly by optimality conditions rather than explicitly by actions to perform. This is significant because 1) it leaves open many options for how inferences are computed and 2) it enables strong guarantees on outputs, since they can inherit desired properties from optimization theory (e.g. satisfaction of constraints).
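
For instance, an implicit inference can be computed by iterating any convergent update map to a fixed point, with gradients attached by the Jacobian-free backprop trick from the papers below. This sketch assumes a PyTorch setting; the update map `T` and the stopping rule are illustrative.

```python
import torch


def implicit_inference(T, x, z0, tol=1e-4, max_iter=100):
    """Implicit inference (illustrative sketch): find z* satisfying the
    fixed point condition z* = T(z*, x). The condition, not a fixed
    recipe of steps, defines the output, so any convergent solver works."""
    z = z0
    with torch.no_grad():  # the forward solve needs no computation graph
        for _ in range(max_iter):
            z_next = T(z, x)
            converged = torch.norm(z_next - z) <= tol
            z = z_next
            if converged:
                break
    # Jacobian-free backprop: one extra step at (approximately) z* attaches
    # gradients without differentiating through the iteration history.
    return T(z, x)
```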

L2O Papers

Explainable AI via Learning to Optimize

Faster Predict-and-Optimize with Davis-Yin Splitting

Safeguarded Learned Convex Optimization

Jacobian-Free Backpropagation

Learn to Predict Equilibria via Fixed Point Networks

Feasibility-based Fixed Point Networks

L2O Videos



Zero-Order Optimization

Key Ideas

Recently, we found a way to approximate proximal operators of weakly convex functions using direct oracle sampling, i.e. using only function evaluations. This enables a new class of optimization problems to be solved by embedding zero-order sampling schemes inside optimization algorithms. Moreover, with sufficient sampling, functions can be approximately minimized globally.
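
The sketch below shows the flavor of such a sampling-based proximal estimate, in the spirit of the Hamilton-Jacobi-based proximal operator cited below: Gaussian samples drawn around the input are reweighted by a softmax factor built from function values only. The parameter names, scalings, and defaults are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np


def sampled_prox(f, x, t=1.0, delta=0.1, num_samples=10_000, rng=None):
    """Zero-order proximal estimate (illustrative sketch): approximate
    prox_{tf}(x) using only evaluations of f (a direct oracle). Smaller
    delta sharpens the estimate at the cost of needing more samples."""
    rng = np.random.default_rng() if rng is None else rng
    y = x + np.sqrt(delta * t) * rng.standard_normal((num_samples, x.size))
    f_vals = np.array([f(yi) for yi in y])
    w = np.exp(-(f_vals - f_vals.min()) / delta)  # stabilized softmax weights
    return (w[:, None] * y).sum(axis=0) / w.sum()


# Example: for f = ||.||_1, the estimate approaches soft-thresholding of x.
x_hat = sampled_prox(lambda v: np.abs(v).sum(), np.array([1.0, -2.0]))
```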

ZOO Papers

A Hamilton-Jacobi-based Proximal Operator

Global Solutions to Nonconvex Problems by Evolution of Hamilton-Jacobi PDEs

ZOO Videos