About

Full Stack Mathematician.

Howard builds and deploys algorithms that combine physics-based modeling with data to maximize the performance of complex, constrained systems. He has worked with teams in medical imaging, e-commerce, physics-based animation, and on-chain crypto trading. Through Typal Academy, he shares insights on optimization and its interface with deep learning.

Education

University of California, Los Angeles
  • Ph.D. Mathematics (2021). Thesis: Learning to Optimize with Guarantees.
  • M.A. Mathematics (2018). Qualifying Exams: Numerical Analysis, Applied Differential Equations.
Walla Walla University
  • B.S. Computer Science, Mathematics, Physics (2016). GPA 3.97 / 4.00.

Academic Research

Howard's research originated in convex feasibility problems and iterative projection methods. In graduate school, this expanded to first-order optimization algorithms (e.g. operator splitting) and to using machine learning both to speed up algorithms and to modify optimization problems so they exploit knowledge hidden in historical data (e.g. for inverse problems).
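A convex feasibility problem asks for a point in the intersection of convex sets, and iterative projection methods find one by projecting onto each set in turn. As a minimal sketch (the sets and starting point here are illustrative, not from Howard's work), the method of alternating projections for a unit ball and a hyperplane looks like:

```python
import numpy as np

# Hypothetical example: find a point in the intersection of the
# unit ball B = {x : ||x|| <= 1} and the hyperplane H = {x : a.x = b}
# by alternating projections.

def project_ball(x, radius=1.0):
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def project_hyperplane(x, a, b):
    # Closed-form projection onto {x : a.x = b}
    return x - ((a @ x - b) / (a @ a)) * a

a, b = np.array([1.0, 1.0]), 1.0
x = np.array([3.0, -2.0])  # arbitrary starting point

for _ in range(100):
    x = project_ball(project_hyperplane(x, a, b))

# x now lies (approximately) in both sets
assert np.linalg.norm(x) <= 1.0 + 1e-6
assert abs(a @ x - b) < 1e-6
```

Since the intersection here is nonempty and the sets meet transversally, the iterates converge linearly; real feasibility problems swap in projections for the sets of interest.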

Google Scholar Profile

Optimization Algorithms

First-order and operator-splitting methods for large-scale convex optimization, with a focus on convergence guarantees, robustness and practical implementations.

  • Monotone operator theory and splitting schemes (e.g. DRS, PDHG).
  • Problem reformulations and projection formulas for efficient computation.
  • Applications to inverse problems in industry.
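To make the splitting-scheme idea concrete, here is a minimal Douglas-Rachford splitting (DRS) sketch for minimizing f(x) + g(x), with f a quadratic and g an l1 penalty chosen purely for illustration. The problem's closed-form solution (soft thresholding) lets us check the iteration:

```python
import numpy as np

# Illustrative DRS iteration for  minimize  0.5||x - c||^2 + lam*||x||_1.
# Each function is accessed only through its proximal operator.

def prox_f(z, t, c):
    # prox of t * 0.5||x - c||^2
    return (z + t * c) / (1 + t)

def prox_g(z, t, lam):
    # prox of t * lam * ||x||_1: soft thresholding
    return np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)

c = np.array([2.0, -0.3, 0.7])
lam, t = 0.5, 1.0
z = np.zeros_like(c)

for _ in range(200):
    x = prox_f(z, t, c)          # forward half-step on f
    y = prox_g(2 * x - z, t, lam)  # reflected step on g
    z = z + (y - x)              # fixed-point update

# Known closed-form minimizer: soft thresholding of c at lam
x_star = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
assert np.allclose(x, x_star, atol=1e-6)
```

The same three-line update handles any pair of functions whose proximal operators are cheap, which is what makes splitting methods attractive at large scale.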

Optimization ∩ Deep Learning

Leveraging available data to solve optimization-based problems more effectively. This includes using deep learning to

  • reduce computational costs when optimizing (e.g. repeated solving with similar data),
  • learn data-driven regularizers (e.g. for inverse problems),
  • infer unseen parameters in an optimization problem.

This general area of research has many names: physics-informed machine learning, scientific machine learning, learning-to-optimize, predict-then-optimize and decision-focused learning.
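The key technical step shared by these approaches is differentiating through the solution of an optimization problem so that a downstream loss can train the predictor. A minimal sketch, assuming a toy unconstrained quadratic program with a scalar parameter theta (all names here are illustrative): the optimality condition Q x* = p(theta) gives dx*/dtheta = Q^{-1} dp/dtheta by implicit differentiation.

```python
import numpy as np

# Toy decision-focused learning gradient: differentiate a loss L(x*(theta))
# through the solution of  x*(theta) = argmin_x 0.5 x^T Q x - p(theta)^T x.

Q = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite cost matrix

def p(theta):                             # "predicted" parameter vector
    return np.array([theta, theta ** 2])

def dp(theta):                            # its derivative in theta
    return np.array([1.0, 2 * theta])

def solve(theta):
    # Optimality condition Q x = p(theta)
    return np.linalg.solve(Q, p(theta))

def grad_loss(theta, target):
    x = solve(theta)
    dL_dx = x - target                        # L(x) = 0.5||x - target||^2
    dx_dtheta = np.linalg.solve(Q, dp(theta))  # implicit differentiation
    return dL_dx @ dx_dtheta

theta, target = 1.5, np.array([1.0, 0.0])
g = grad_loss(theta, target)

# Sanity check against a finite-difference approximation
eps = 1e-6
L = lambda th: 0.5 * np.sum((solve(th) - target) ** 2)
g_fd = (L(theta + eps) - L(theta - eps)) / (2 * eps)
assert abs(g - g_fd) < 1e-5
```

Constrained problems replace the linear solve with a differentiated KKT system, but the pattern (solve, then backpropagate through the optimality conditions) is the same.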

Invited Talks

Industrial Problems Seminar, IMA at University of Minnesota

Nov 21, 2025

Convex Optimization, Data and Research in Industry

Optimization problems and big data arise throughout industry. This talk walks through examples of these problems and highlights how optimization solvers can be informed by data (e.g. decision-focused learning). Discussion then transitions to lessons learned from research positions in various fields.

AI Meets Optimization, ICCOPT Session at University of Southern California

Jul 22, 2025

Differentiating through Solutions to Optimization Problems in Decision-Focused Learning

Many real-world problems can be framed as optimization problems, for which well-established algorithms exist. However, these problems often involve key parameters that are not directly observed. Instead, we typically have access to data that is correlated with these parameters, though the relationships are complex and difficult to describe explicitly. This challenge motivates the integration of machine learning with optimization: using machine learning to predict the hidden parameters and optimization to solve the resultant problem. This integration is known as decision-focused learning. In this talk, I will introduce decision-focused learning, with a particular focus on differentiating through solutions to optimization problems and recent advances in effectively scaling these computations.

ACMD Seminar, National Institute of Standards and Technology

Mar 7, 2023

Explainable Models via Data-Driven Optimization

Flexible, human-interpretable machine learning models are gaining interest as applications increasingly require explainable artificial intelligence. This talk overviews recent developments in the "learn to optimize" (L2O) methodology wherein model inferences are defined to be solutions to parameterized optimization problems. The idea is to have domain experts create intuitive optimization models that include both analytic and parameterized terms. This fusion merges data-driven modeling with strong analytic guarantees (e.g. inferences satisfying linear systems of constraints). We will cover the key tools needed to design and implement L2O models along with numerical examples.

What is a Full Stack Mathematician?

Someone who can take vague goals and constraints and own the end-to-end process of developing algorithmic solutions in code that engineers can use. They also provide documentation and reports so people know how to use the math safely and interpret results, e.g. guides for engineers and clear summaries for leaders.

Email Newsletter

Howard writes short posts, each illustrating concepts in his area of work.

Podcast

Howard hosts the podcast "Numerical Optimization" where he interviews mathematicians in various optimization specialties.