Optimization Research
Research at the intersection of big data, optimization, and explainability
Typal Academy's research efforts focus on open-source development of optimization-based tools. Our specialty is creating optimization models and algorithms that are tunable, enabling high performance on a particular class of applications when training data is available.
We are happy to share our work published via @PNASNews. We give a simple formula for estimating proximal operators when access is only given to (possibly noisy) objective function samples; these estimates can be embedded in #optimization algorithms. https://t.co/luNNxGFUjY
— Typal Academy (@TypalAcademy) August 4, 2023
We are happy to share that our work on #explainableAI was recently published in #ScientificReports. We show how to use #optimization with deep learning to make explainable models and explainable inferences, using certificates of trustworthiness. Check it out: https://t.co/YMfPDZZxw4
— Typal Academy (@TypalAcademy) July 5, 2023
Learning to Optimize
Key Ideas
"Learning to Optimize" (L2O) is a methodology wherein models are defined with inspiration taken from optimization. Models we consider use predictions/inferences that include an optimization layer that is tunable, enabling the optimization to encode both prior knowledge and available data.
Why implicit L2O?
Many of the works below use implicit models. This is distinct from the explosion of L2O models constructed by unrolling an optimization algorithm for a fixed, finite number of steps. A standard feedforward network prescribes a finite sequence of operations to perform; when an inference is instead defined in terms of an optimization model, it is characterized implicitly by optimality conditions rather than explicitly by a list of operations. This is significant because 1) it leaves open many options for how inferences are computed and 2) it enables strong guarantees on outputs, since they can inherit desired properties from optimization theory (e.g. satisfaction of many constraints). A minimal sketch of such an implicit inference is given below.
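To make the contrast concrete, here is a minimal sketch (Python/NumPy) of an implicit inference. The LASSO-style model, the tunable parameter `theta`, and the function names are illustrative assumptions for exposition, not the formulations used in the papers below: the output is defined as the fixed point of a tunable proximal-gradient operator, so it can be computed by any solver, to any accuracy, rather than by a fixed unrolled sequence of layers.

```python
import numpy as np

def implicit_inference(d, A, theta, max_iter=500, tol=1e-8):
    """Compute an inference defined implicitly as the fixed point of a
    tunable proximal-gradient operator T(.; d), i.e. the solution of
        x* = argmin_x 0.5*||A x - d||^2 + theta*||x||_1   (illustrative model).
    Because x* is characterized by optimality conditions (x* = T(x*; d)),
    any solver and any number of iterations may be used to compute it."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # step size from the Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        z = x - step * (A.T @ (A @ x - d))          # gradient step on the data-fit term
        x_new = np.sign(z) * np.maximum(np.abs(z) - step * theta, 0.0)  # prox of theta*||.||_1
        if np.linalg.norm(x_new - x) <= tol * (1.0 + np.linalg.norm(x)):
            break                                    # stop once (approximately) at the fixed point
        x = x_new
    return x

# theta is the tunable parameter that, in L2O, would be learned from data.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[:3] = 1.0
d = A @ x_true + 0.01 * rng.standard_normal(20)
x_hat = implicit_inference(d, A, theta=0.1)
```

Because the output is characterized by optimality conditions, properties such as constraint satisfaction can be enforced exactly (e.g. by including a projection in the operator), independent of how many iterations the solver runs.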
L2O Papers
Faster Predict-and-Optimize with Davis-Yin Splitting
Safeguarded Learned Convex Optimization
L2O Videos
Zero-Order Optimization
Key Ideas
Recently, we found a way to approximate proximal operators of weakly convex functions using only direct oracle samples of the objective. This enables a new class of optimization problems to be solved by embedding zero-order schemes inside optimization algorithms. Additionally, with sufficient sampling, the same estimates can be used to approximately minimize functions globally. A sketch of the sampling idea follows.
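Below is a minimal sketch (Python/NumPy) of the sampling idea: the proximal point is estimated as a softmin-weighted average of Gaussian samples around the query point, using only (possibly noisy) evaluations of the objective. The constants, the sampling distribution, and the test function are illustrative assumptions and do not reproduce the exact formula or guarantees from the published work.

```python
import numpy as np

def sampled_prox(f, x, t=1.0, delta=0.1, n_samples=100_000, rng=None):
    """Estimate prox_{t f}(x) = argmin_y f(y) + ||y - x||^2 / (2 t)
    using only zero-order (possibly noisy) evaluations of f.

    Samples y_i ~ N(x, t*delta*I) are drawn around the query point and
    averaged with weights proportional to exp(-f(y_i)/delta); as delta
    shrinks and the number of samples grows, the weighted average
    concentrates near the proximal point."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    y = x + np.sqrt(t * delta) * rng.standard_normal((n_samples, x.size))
    vals = np.array([f(yi) for yi in y])        # zero-order oracle calls
    w = np.exp(-(vals - vals.min()) / delta)    # softmin weights (shifted for numerical stability)
    w /= w.sum()
    return w @ y                                 # weighted average of the samples

# Illustration: for f(y) = ||y||_1 the exact prox is soft-thresholding,
# sign(x)*max(|x| - t, 0), which gives a closed form to compare against.
x = np.array([0.6, -0.2, 0.05])
estimate = sampled_prox(lambda y: np.abs(y).sum(), x, t=0.3, delta=0.05)
```

In the same spirit, taking t large makes prox_{tf}(x) approach a global minimizer of f (for coercive f), which is the sense in which sufficient sampling yields approximate global minimizers; this is the connection pursued in the HJ PDE paper listed below.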
ZOO Papers
Global Solutions to Nonconvex Problems by Evolution of HJ PDEs