no code implementations • 19 Sep 2024 • Joshua Ott, Mykel J. Kochenderfer, Stephen Boyd
Efficiently estimating system dynamics from data is essential for minimizing data collection costs and improving model performance.
1 code implementation • 18 Sep 2024 • Tetiana Parshakova, Trevor Hastie, Stephen Boyd
We show that the inverse of an invertible PSD MLR matrix is also an MLR matrix with the same sparsity in factors, and we use the recursive Sherman-Morrison-Woodbury matrix identity to obtain the factors of the inverse.
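As background, the (non-recursive) Sherman-Morrison-Woodbury identity the factor computation builds on can be checked numerically on a single diagonal-plus-low-rank level; the matrices below are illustrative stand-ins, not the paper's multilevel low rank (MLR) structure:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 6, 2
# diagonal-plus-low-rank PSD matrix D + U U^T (a stand-in for one MLR level)
U = rng.standard_normal((n, r))
D = np.diag(rng.uniform(1.0, 2.0, size=n))
A = D + U @ U.T
# Sherman-Morrison-Woodbury:
# (D + U U^T)^{-1} = D^{-1} - D^{-1} U (I + U^T D^{-1} U)^{-1} U^T D^{-1}
Dinv = np.diag(1.0 / np.diag(D))
cap = np.linalg.inv(np.eye(r) + U.T @ Dinv @ U)  # small r x r "capacitance" matrix
Ainv = Dinv - Dinv @ U @ cap @ U.T @ Dinv
assert np.allclose(Ainv, np.linalg.inv(A))
```

The point of the identity is that only an r x r system is inverted, so the inverse inherits the diagonal-plus-low-rank form.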
no code implementations • 25 Jun 2024 • Rafael Perez Martinez, Stephen Boyd, Srabanti Chowdhury
We conduct simulations across a range of $V_{\text{GS}}$ values to ensure a thorough and robust analysis.
1 code implementation • 24 Jun 2024 • Rafael Perez Martinez, Masaya Iwamoto, Kelly Woo, Zhengliang Bian, Roberto Tinti, Stephen Boyd, Srabanti Chowdhury
We demonstrate the effectiveness of our methodology by successfully modeling two semiconductor devices: a diamond Schottky diode and a GaN-on-SiC HEMT, with the latter involving the ASM-HEMT DC model, which requires simultaneously extracting 35 model parameters to fit the model to the measured data.
1 code implementation • 11 Apr 2024 • Eric Luxenberg, Stephen Boyd
We propose a general method for computing an approximation of EWMM, which requires storing only a window of a fixed number of past samples, and uses an additional quadratic term to approximate the loss associated with the data before the window.
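For the special case of a squared loss, folding the pre-window data into a quadratic term is exact rather than approximate, which makes the windowed scheme easy to sanity-check. The sketch below assumes a scalar-output least-squares model with illustrative dimensions and forgetting factor; for general losses the quadratic tail term is only an approximation:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n, W, beta = 40, 3, 5, 0.9  # illustrative sizes and forgetting factor
A = rng.standard_normal((T, n))
b = rng.standard_normal(T)
w = beta ** np.arange(T - 1, -1, -1)  # exponential weights, most recent sample = 1

# exact EWMM for squared loss: weighted least squares over all T samples
sw = np.sqrt(w)
x_exact = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)[0]

# windowed version: samples older than W are folded into a quadratic term (P, s)
P, s = np.zeros((n, n)), np.zeros(n)
for t in range(T - W):
    P += w[t] * np.outer(A[t], A[t])
    s += w[t] * b[t] * A[t]
Aw, bw, ww = A[T - W:], b[T - W:], w[T - W:]
H = P + (Aw * ww[:, None]).T @ Aw
g = s + (Aw * ww[:, None]).T @ bw
x_win = np.linalg.solve(H, g)
assert np.allclose(x_exact, x_win)  # exact agreement for quadratic losses
```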
1 code implementation • 12 Feb 2024 • Kasper Johansson, Thomas Schmelzer, Stephen Boyd
We propose a new method for finding statistical arbitrages that can contain more assets than just the traditional pair.
1 code implementation • 10 Jan 2024 • Stephen Boyd, Kasper Johansson, Ronald Kahn, Philipp Schiele, Thomas Schmelzer
More than seventy years ago Harry Markowitz formulated portfolio construction as an optimization problem that trades off expected return and risk, defined as the standard deviation of the portfolio returns.
1 code implementation • 30 Oct 2023 • Tetiana Parshakova, Trevor Hastie, Eric Darve, Stephen Boyd
The second is rank allocation, where we choose the ranks of the blocks in each level, subject to the total rank having a given value, which preserves the total storage needed for the MLR matrix.
1 code implementation • 9 Jun 2023 • Eric Luxenberg, Dhruv Malik, Yuanzhi Li, Aarti Singh, Stephen Boyd
We consider robust empirical risk minimization (ERM), where model parameters are chosen to minimize the worst-case empirical loss when each data point varies over a given convex uncertainty set.
1 code implementation • 31 May 2023 • Kasper Johansson, Mehmet Giray Ogut, Markus Pelger, Thomas Schmelzer, Stephen Boyd
We also test covariance predictors on downstream applications such as portfolio optimization methods that depend on the covariance matrix.
1 code implementation • 4 May 2023 • Ziheng Cheng, Junzi Zhang, Akshay Agrawal, Stephen Boyd
Laplacian regularized stratified models (LRSM) are models that utilize the explicit or implicit network structure of the sub-problems as defined by the categorical features called strata (e.g., age, region, time, forecast horizon, etc.).
no code implementations • 15 Feb 2022 • Gabriel Maher, Stephen Boyd, Mykel Kochenderfer, Cristian Matache, Dylan Reuter, Alex Ulitsky, Slava Yukhymuk, Leonid Kopman
We describe a lightweight yet performant system for hyper-parameter optimization that approximately minimizes an overall scalar cost function that is obtained by combining multiple performance objectives using a target-priority-limit scalarizer.
2 code implementations • 9 Mar 2021 • Nicholas Moehle, Jack Gindi, Stephen Boyd, Mykel Kochenderfer
Mean-variance portfolio optimization problems often involve separable nonconvex terms, including penalties on capital gains, integer share constraints, and minimum position and trade sizes.
Portfolio Optimization • Optimization and Control • Portfolio Management
1 code implementation • 3 Mar 2021 • Akshay Agrawal, Alnur Ali, Stephen Boyd
Our software scales to data sets with millions of items and tens of millions of distortion functions.
no code implementations • 11 Feb 2021 • Nicholas Moehle, Stephen Boyd, Andrew Ang
We consider an investment process that includes a number of features, each of which can be active or inactive.
no code implementations • 29 Jan 2021 • Shane Barratt, Yining Dong, Stephen Boyd
Our focus is on low rank forecasters, which break forecasting up into two steps: estimating a vector that can be interpreted as a latent state, given the past, and then estimating the future values of the time series, given the latent state estimate.
1 code implementation • 29 Jan 2021 • Shane Barratt, Stephen Boyd
We consider the problem of predicting the covariance of a zero mean Gaussian vector, based on another feature vector.
no code implementations • 11 Jan 2021 • Jonathan Tuck, Shane Barratt, Stephen Boyd
In this paper we develop models of asset return mean and covariance that depend on some observable market conditions, and use these to construct a trading policy that depends on these conditions and the current portfolio holdings.
no code implementations • 22 Oct 2020 • Junzi Zhang, Jongho Kim, Brendan O'Donoghue, Stephen Boyd
Policy gradient methods are among the most effective methods for large-scale reinforcement learning, and their empirical success has prompted several works that develop the foundation of their global convergence theory.
1 code implementation • 7 Jun 2020 • Akshay Agrawal, Shane Barratt, Stephen Boyd
A convex optimization model predicts an output from an input by solving a convex optimization problem.
1 code implementation • 18 May 2020 • Shane Barratt, Guillermo Angeris, Stephen Boyd
We consider the problem of assigning weights to a set of samples or data records, with the goal of achieving a representative weighting, which happens when certain sample averages of the data are close to prescribed values.
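A minimal version of representative weighting can be written as an equality-constrained least-squares problem solved through its KKT system. The data, target averages, and loss below are illustrative (the paper considers a broader family of losses and regularizers, and this sketch does not enforce nonnegative weights):

```python
import numpy as np

# toy data: 5 samples of one feature; prescribe a weighted mean of 0.6
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
F = np.vstack([x, np.ones_like(x)])  # rows: feature average, normalization
g = np.array([0.6, 1.0])             # prescribed sample averages
n = len(x)
# minimize ||w - 1/n||^2 subject to F w = g, via its KKT system
K = np.block([[np.eye(n), F.T], [F, np.zeros((2, 2))]])
rhs = np.concatenate([np.full(n, 1.0 / n), g])
w = np.linalg.solve(K, rhs)[:n]
assert np.allclose(F @ w, g)   # sample averages match the prescribed values
assert np.all(w > 0)           # happens to hold here; not enforced by this sketch
```

Keeping the weights close to uniform is what makes the weighting "representative" rather than concentrated on a few samples.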
1 code implementation • 18 May 2020 • Shane Barratt, Stephen Boyd
We consider the problem of determining a sequence of payments among a set of entities that clear (if possible) the liabilities among them.
1 code implementation • 4 May 2020 • Jonathan Tuck, Stephen Boyd
We consider the problem of jointly estimating multiple related zero-mean Gaussian distributions from data.
2 code implementations • 27 Apr 2020 • Akshay Agrawal, Stephen Boyd
We use the adjoint of the derivative to implement differentiable log-log convex optimization layers in PyTorch and TensorFlow.
Optimization and Control
1 code implementation • 5 Mar 2020 • Shane Barratt, Jonathan Tuck, Stephen Boyd
We describe a number of convex optimization problems over the convex set of risk neutral price probabilities.
1 code implementation • 1 Mar 2020 • Rahul Trivedi, Guillermo Angeris, Logan Su, Stephen Boyd, Shanhui Fan, Jelena Vuckovic
We illustrate our bounding procedure by studying limits on the scattering cross-sections of dielectric and metallic particles in the absence of material losses.
Optics
1 code implementation • 13 Feb 2020 • Guillermo Angeris, Jelena Vučković, Stephen Boyd
In a physical design problem, the designer chooses values of some physical parameters, within limits, to optimize the resulting field.
Optimization and Control • Computational Physics • Optics
1 code implementation • 29 Jan 2020 • Shane Barratt, Guillermo Angeris, Stephen Boyd
Given an infeasible, unbounded, or pathological convex optimization problem, a natural question to ask is: what is the smallest change we can make to the problem's parameters such that the problem becomes solvable?
Optimization and Control
1 code implementation • 27 Jan 2020 • Jonathan Tuck, Stephen Boyd
This leads to a reduction, sometimes large, of model size when $m \leq n$ and $m \ll K$.
no code implementations • L4DC 2020 • Akshay Agrawal, Shane Barratt, Stephen Boyd, Bartolomeo Stellato
Common examples of such convex optimization control policies (COCPs) include the linear quadratic regulator (LQR), convex model predictive control (MPC), and convex control-Lyapunov or approximate dynamic programming (ADP) policies.
1 code implementation • NeurIPS 2019 • Akshay Agrawal, Brandon Amos, Shane Barratt, Stephen Boyd, Steven Diamond, Zico Kolter
In this paper, we propose an approach to differentiating through disciplined convex programs, a subclass of convex optimization problems used by domain-specific languages (DSLs) for convex optimization.
1 code implementation • 27 Oct 2019 • Shane Barratt, Guillermo Angeris, Stephen Boyd
We consider the problem of minimizing a sum of clipped convex functions; applications include clipped empirical risk minimization and clipped control.
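A simple instance is clipped scalar least squares, where each term min((x - c_i)^2, alpha) is either active or clipped. The alternating scheme below (pick the active set, then minimize the active quadratics) is a minimal illustration and not the paper's algorithm; since the problem is nonconvex, the result depends on the initialization:

```python
import numpy as np

# clipped scalar least squares: minimize sum_i min((x - c_i)^2, alpha)
c = np.array([0.0, 0.1, 0.2, 5.0])  # one outlier at 5.0
alpha = 1.0
x = float(np.median(c))  # initialization matters: the problem is nonconvex
for _ in range(20):
    active = (x - c) ** 2 <= alpha  # terms currently below the clipping level
    if active.any():
        x = c[active].mean()  # minimize only the active (unclipped) quadratics
# the outlier is clipped away, leaving the mean of the inliers
assert abs(x - 0.1) < 1e-9
```

Clipping caps the influence any single data point can have on the fit, which is the appeal in clipped empirical risk minimization.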
no code implementations • 15 Oct 2019 • Youngsuk Park, Sauptik Dhar, Stephen Boyd, Mohak Shah
Under this metric selection, we analyze the theoretical convergence of VM-PG.
2 code implementations • 30 May 2019 • Dave Deriso, Stephen Boyd
We pose the choice of warping function as an optimization problem with several terms in the objective.
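For background, the classical discrete dynamic-programming formulation of time warping (which this line of work generalizes with regularized, continuous warping functions) fits in a few lines; this is the textbook DTW recursion, not the paper's method:

```python
import numpy as np

def dtw_cost(a, b):
    """Dynamic-programming solution of the classical time-warping problem:
    D[i, j] = cost of the best monotone alignment of a[:i] with b[:j]."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            D[i, j] = (a[i - 1] - b[j - 1]) ** 2 + step
    return D[n, m]

a = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
b = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0])  # same shape, delayed by one step
assert dtw_cost(a, b) == 0.0  # warping absorbs the time shift entirely
```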
1 code implementation • 2 May 2019 • Akshay Agrawal, Stephen Boyd
We present a composition rule involving quasiconvex functions that generalizes the classical composition rule for convex functions.
Optimization and Control • Mathematical Software
2 code implementations • 26 Apr 2019 • Jonathan Tuck, Shane Barratt, Stephen Boyd
In a basic and traditional formulation a separate model is fit for each value of the categorical feature, using only the data that has the specific categorical value.
1 code implementation • 19 Apr 2019 • Akshay Agrawal, Shane Barratt, Stephen Boyd, Enzo Busseti, Walaa M. Moursi
These correspond to computing an approximate new solution, given a perturbation to the cone program coefficients (i.e., perturbation analysis), and to computing the gradient of a function of the solution with respect to the coefficients.
Optimization and Control
1 code implementation • 10 Apr 2019 • Shane Barratt, Stephen Boyd
Least squares is by far the simplest and most commonly applied computational method in many fields.
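In its basic form, least squares reduces to minimizing a Euclidean norm; a minimal sketch with illustrative data (fitting a line, the paper itself is about tuning least squares hyper-parameters):

```python
import numpy as np

# fit y ≈ c0 + c1 * t by least squares: minimize ||A x - y||_2
t = np.array([0.0, 1.0, 2.0, 3.0])
y = 1.0 + 2.0 * t  # noiseless line, so the fit is exact
A = np.column_stack([np.ones_like(t), t])
x, *_ = np.linalg.lstsq(A, y, rcond=None)
assert np.allclose(x, [1.0, 2.0])  # recovers intercept 1 and slope 2
```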
1 code implementation • 30 Nov 2018 • Guillermo Angeris, Jelena Vuckovic, Stephen Boyd
Physical design problems, such as photonic inverse design, are typically solved using local optimization methods.
Optics • Optimization and Control • Computational Physics
1 code implementation • 22 Oct 2018 • Shane Barratt, Mykel Kochenderfer, Stephen Boyd
Models for predicting aircraft motion are an important component of modern aeronautical systems.
no code implementations • NeurIPS 2017 • Zhengyuan Zhou, Panayotis Mertikopoulos, Nicholas Bambos, Stephen Boyd, Peter W. Glynn
In this paper, we examine a class of non-convex stochastic optimization problems which we call variationally coherent, and which properly includes pseudo-/quasiconvex and star-convex optimization problems.
2 code implementations • 21 Nov 2017 • Bartolomeo Stellato, Goran Banjac, Paul Goulart, Alberto Bemporad, Stephen Boyd
We present a general purpose solver for convex quadratic programs based on the alternating direction method of multipliers, employing a novel operator splitting technique that requires the solution of a quasi-definite linear system with the same coefficient matrix at almost every iteration.
Optimization and Control
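The splitting idea can be illustrated on a small box-constrained QP; this is a minimal ADMM sketch, not the OSQP implementation (which solves a quasi-definite KKT system, factoring it once and reusing the factorization):

```python
import numpy as np

# minimize 0.5 x'Px + q'x  subject to  l <= x <= u, via ADMM splitting
P = np.array([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
l, u = np.zeros(2), np.ones(2)
rho = 1.0
M = P + rho * np.eye(2)  # fixed coefficient matrix: factor (or invert) once
z, y = np.zeros(2), np.zeros(2)
for _ in range(200):
    x = np.linalg.solve(M, rho * z - y - q)  # unconstrained quadratic subproblem
    z = np.clip(x + y / rho, l, u)           # projection onto the box
    y = y + rho * (x - z)                    # dual update
# here the unconstrained minimizer is infeasible, so the iterates settle at x = 0
assert np.allclose(x, [0.0, 0.0], atol=1e-6)
```

Because the coefficient matrix M never changes across iterations, a single factorization amortizes over the whole solve, which is central to the solver's speed.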
1 code implementation • 13 Sep 2017 • Akshay Agrawal, Robin Verschueren, Steven Diamond, Stephen Boyd
We describe a modular rewriting system for translating optimization problems written in a domain-specific language to forms compatible with low-level solver interfaces.
Optimization and Control • Mathematical Software
no code implementations • 18 Jun 2017 • Zhengyuan Zhou, Panayotis Mertikopoulos, Nicholas Bambos, Stephen Boyd, Peter Glynn
In this paper, we examine the convergence of mirror descent in a class of stochastic optimization problems that are not necessarily convex (or even quasi-convex), and which we call variationally coherent.
no code implementations • 10 Jun 2017 • David Hallac, Sagar Vare, Stephen Boyd, Jure Leskovec
We derive closed-form solutions to efficiently solve the two resulting subproblems in a scalable way, through dynamic programming and the alternating direction method of multipliers (ADMM), respectively.
3 code implementations • 29 Apr 2017 • Stephen Boyd, Enzo Busseti, Steven Diamond, Ronald N. Kahn, Kwangmoo Koh, Peter Nystrup, Jan Speth
The methods we describe in this paper can be thought of as good ways to exploit predictions, no matter how they are made.
2 code implementations • 6 Mar 2017 • David Hallac, Youngsuk Park, Stephen Boyd, Jure Leskovec
Many important problems can be modeled as a system of interconnected entities, where each entity is recording time-dependent observations or measurements.
1 code implementation • 23 Jan 2017 • Steven Diamond, Vincent Sitzmann, Frank Julca-Aguilar, Stephen Boyd, Gordon Wetzstein, Felix Heide
As such, conventional imaging involves processing the RAW sensor measurements in a sequential pipeline of steps, such as demosaicking, denoising, deblurring, tone-mapping and compression.
1 code implementation • 24 Oct 2016 • David Hallac, Peter Nystrup, Stephen Boyd
We consider the problem of breaking a multivariate (vector) time series into segments over which the data is well explained as independent samples from a Gaussian distribution.
Optimization and Control
no code implementations • 21 Sep 2016 • Nicholas Boyd, Trevor Hastie, Stephen Boyd, Benjamin Recht, Michael Jordan
We extend the adaptive regression spline model by incorporating saturation, the natural requirement that a function extend as a constant outside a certain range.
3 code implementations • 12 Sep 2016 • Xinyue Shen, Steven Diamond, Madeleine Udell, Yuantao Gu, Stephen Boyd
A multi-convex optimization problem is one in which the variables can be partitioned into sets over which the problem is convex when the other variables are fixed.
Optimization and Control
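A minimal multi-convex (here, biconvex) example is alternating minimization, where each subproblem is convex with a closed-form solution; the objective below is illustrative, and this sketch is not the paper's solver:

```python
# biconvex objective f(x, y) = (x*y - 1)^2 + 0.1*(x^2 + y^2):
# convex in x for fixed y, and convex in y for fixed x, but not jointly convex
x, y = 2.0, 0.5
for _ in range(100):
    x = y / (y * y + 0.1)  # exact minimizer over x with y held fixed
    y = x / (x * x + 0.1)  # exact minimizer over y with x held fixed
# the iterates approach the stationary point x = y = sqrt(0.9)
assert abs(x - y) < 1e-8 and abs(x * y - 0.9) < 1e-8
```

Each sweep can only decrease the objective, which is why alternating over the convex blocks is the natural heuristic for multi-convex problems.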
no code implementations • ICML 2018 • Qingyun Sun, Mengyuan Yan, David Donoho, Stephen Boyd
A matrix network is a family of matrices, with relatedness modeled by a weighted graph.
no code implementations • ICCV 2015 • Steven Diamond, Stephen Boyd
We introduce a convex optimization modeling framework that transforms a convex optimization problem expressed in a form natural and convenient for the user into an equivalent cone program in a way that preserves fast linear transforms in the original problem.
no code implementations • 4 Mar 2015 • Weijie Su, Stephen Boyd, Emmanuel J. Candes
We derive a second-order ordinary differential equation (ODE) which is the limit of Nesterov's accelerated gradient method.
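For reference, the limiting ODE derived in the paper, for objective $f$, is

$$\ddot{X}(t) + \frac{3}{t}\,\dot{X}(t) + \nabla f(X(t)) = 0,$$

with initial conditions $X(0) = x_0$ and $\dot{X}(0) = 0$; the trajectories of Nesterov's method converge to solutions of this ODE as the step size goes to zero.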
no code implementations • NeurIPS 2014 • Weijie Su, Stephen Boyd, Emmanuel Candes
We derive a second-order ordinary differential equation (ODE), which is the limit of Nesterov’s accelerated gradient method.
1 code implementation • 17 Oct 2014 • Madeleine Udell, Karanveer Mohan, David Zeng, Jenny Hong, Steven Diamond, Stephen Boyd
This paper describes Convex, a convex optimization modeling framework in Julia.
1 code implementation • 1 Oct 2014 • Madeleine Udell, Corinne Horn, Reza Zadeh, Stephen Boyd
Here, we extend the idea of PCA to handle arbitrary data sets consisting of numerical, Boolean, categorical, ordinal, and other data types.
no code implementations • NeurIPS 2012 • Stephen Boyd, Corinna Cortes, Mehryar Mohri, Ana Radovanovic
We introduce a new notion of classification accuracy based on the top $\tau$-quantile values of a scoring function, a relevant criterion in a number of problems arising for search engines.
no code implementations • 18 Jan 2009 • Danny Bickson, Yoav Tock, Argyris Zymnis, Stephen Boyd, Danny Dolev
Using an empirical evaluation we show that our new method outperforms previous approaches, including the truncated Newton method and dual-decomposition methods.
Information Theory • Distributed, Parallel, and Cluster Computing • Optimization and Control