no code implementations • 29 Jan 2021 • Shane Barratt, Yining Dong, Stephen Boyd
Our focus is on low rank forecasters, which break forecasting into two steps: estimating a vector that can be interpreted as a latent state, given the past, and then estimating the future values of the time series, given the latent state estimate.
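The two-step structure can be sketched with reduced-rank regression on synthetic data: a full least squares map from past windows to future windows is truncated to low rank, and the rank-r factors play the roles of the state estimator and the forecaster. The data, window sizes, and rank below are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic multivariate time series driven by two hidden factors.
T, n = 400, 6
z = np.cumsum(rng.standard_normal((T, 2)), axis=0)
x = z @ rng.standard_normal((2, n)) + 0.1 * rng.standard_normal((T, n))

# Build (past window -> future window) regression pairs.
M, H = 5, 3                                   # memory and horizon
P = np.array([x[t - M:t].ravel() for t in range(M, T - H)])
F = np.array([x[t:t + H].ravel() for t in range(M, T - H)])

# Fit the full least squares map, then truncate it to rank r.  The
# factor G maps the past to a low-dimensional latent state, and W
# maps the latent state estimate to the forecast.
B, *_ = np.linalg.lstsq(P, F, rcond=None)
U, s, Vt = np.linalg.svd(B, full_matrices=False)
r = 2
G = U[:, :r] * s[:r]          # past -> latent state
W = Vt[:r]                    # latent state -> future
forecast = (P @ G) @ W        # rank-r forecasts of the future windows
```

Because the synthetic series is driven by two latent factors, the rank-2 forecaster captures most of what the full map does.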
1 code implementation • 29 Jan 2021 • Shane Barratt, Stephen Boyd
We consider the problem of predicting the covariance of a zero-mean Gaussian vector, based on another feature vector.
no code implementations • 11 Jan 2021 • Jonathan Tuck, Shane Barratt, Stephen Boyd
In this paper we develop models of asset return mean and covariance that depend on observable market conditions, and use them to construct a trading policy that depends on those conditions and the current portfolio holdings.
1 code implementation • 7 Jun 2020 • Akshay Agrawal, Shane Barratt, Stephen Boyd
A convex optimization model predicts an output from an input by solving a convex optimization problem.
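A minimal instance of this idea, with an assumed toy problem (not taken from the paper): the model maps an input x to a prediction by solving yhat = argmin_y 0.5*||y - theta @ x||^2 + lam*||y||_1, a convex problem whose solution is soft-thresholding; theta and lam are the model's learnable parameters.

```python
import numpy as np

def convex_opt_model(x, theta, lam):
    """Toy convex optimization model: the prediction is
    yhat = argmin_y 0.5*||y - theta @ x||^2 + lam*||y||_1,
    which has the closed-form soft-thresholding solution."""
    v = theta @ x
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

theta = np.array([[2.0, 0.0], [0.0, -1.0]])
yhat = convex_opt_model(np.array([1.0, 0.3]), theta, lam=0.5)
# soft-threshold of [2.0, -0.3] at 0.5 gives [1.5, 0.0]
```

The closed form is a special case; in general the prediction would be computed by a convex solver, and the parameters fit by differentiating through the solution map.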
1 code implementation • 18 May 2020 • Shane Barratt, Stephen Boyd
We consider the problem of determining a sequence of payments among a set of entities that clear (if possible) the liabilities among them.
1 code implementation • 18 May 2020 • Shane Barratt, Guillermo Angeris, Stephen Boyd
We consider the problem of assigning weights to a set of samples or data records, with the goal of achieving a representative weighting, which happens when certain sample averages of the data are close to prescribed values.
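A stripped-down version of the weighting problem can be solved in closed form: find weights close to uniform whose weighted feature averages equal prescribed targets. The data and targets below are synthetic, and this sketch uses only equality constraints (so weights may go negative); the paper considers richer losses and regularizers.

```python
import numpy as np

# n samples with two features; we want weights w (summing to one)
# whose weighted feature averages hit prescribed target values.
rng = np.random.default_rng(1)
n = 200
X = rng.standard_normal((n, 2))
targets = np.array([0.2, -0.1])          # desired weighted averages

# Constraints: X.T @ w = targets and 1.T @ w = 1.
F = np.vstack([X.T, np.ones((1, n))])
g = np.concatenate([targets, [1.0]])

# Minimize ||w - 1/n||^2 subject to F w = g via the KKT system
#   [ I  F^T ] [w ]   [ 1/n ]
#   [ F   0  ] [nu] = [  g  ].
m = F.shape[0]
K = np.block([[np.eye(n), F.T], [F, np.zeros((m, m))]])
rhs = np.concatenate([np.full(n, 1.0 / n), g])
w = np.linalg.solve(K, rhs)[:n]
```

The resulting w is the Euclidean projection of the uniform weighting onto the constraint set, so the sample averages match the targets exactly.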
1 code implementation • 5 Mar 2020 • Shane Barratt, Jonathan Tuck, Stephen Boyd
We describe a number of convex optimization problems over the convex set of risk neutral price probabilities.
1 code implementation • 29 Jan 2020 • Shane Barratt, Guillermo Angeris, Stephen Boyd
Given an infeasible, unbounded, or pathological convex optimization problem, a natural question to ask is: what is the smallest change we can make to the problem's parameters such that the problem becomes solvable?
Optimization and Control
no code implementations • L4DC 2020 • Akshay Agrawal, Shane Barratt, Stephen Boyd, Bartolomeo Stellato
Common examples of such convex optimization control policies (COCPs) include the linear quadratic regulator (LQR), convex model predictive control (MPC), and convex control-Lyapunov or approximate dynamic programming (ADP) policies.
1 code implementation • NeurIPS 2019 • Akshay Agrawal, Brandon Amos, Shane Barratt, Stephen Boyd, Steven Diamond, Zico Kolter
In this paper, we propose an approach to differentiating through disciplined convex programs, a subclass of convex optimization problems used by domain-specific languages (DSLs) for convex optimization.
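The core mechanism, implicit differentiation through optimality conditions, can be shown on an equality-constrained QP (an assumed toy problem, far simpler than the general cone programs the paper handles): the KKT system is linear, so the Jacobian of the solution with respect to the right-hand side comes from resolving the same system.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 2
L = rng.standard_normal((n, n))
P = L @ L.T + np.eye(n)                  # positive definite objective
q = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# KKT system for x*(b) = argmin 0.5 x'Px + q'x  s.t.  Ax = b:
#   [P A'; A 0] [x; nu] = [-q; b]
K = np.block([[P, A.T], [A, np.zeros((m, m))]])
x = np.linalg.solve(K, np.concatenate([-q, b]))[:n]

# Implicit differentiation: perturbing b gives
#   [P A'; A 0] [dx; dnu] = [0; db],
# so each column of dx/db solves the KKT system with a unit vector
# in the bottom block.
J = np.linalg.solve(K, np.vstack([np.zeros((n, m)), np.eye(m)]))[:n]

# Finite-difference check of the first column.
eps = 1e-6
b2 = b.copy()
b2[0] += eps
x2 = np.linalg.solve(K, np.concatenate([-q, b2]))[:n]
```

Since the solution map is linear in b here, the finite difference matches the implicit Jacobian essentially exactly.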
1 code implementation • 27 Oct 2019 • Shane Barratt, Guillermo Angeris, Stephen Boyd
We consider the problem of minimizing a sum of clipped convex functions; applications include clipped empirical risk minimization and clipped control.
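One natural heuristic for such problems, sketched here on an assumed clipped least squares instance with synthetic outliers, alternates between fixing the set of unclipped terms and solving the resulting ordinary least squares problem; each step cannot increase the clipped objective, since it alternately minimizes an upper bound on it.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 60, 3
A = rng.standard_normal((m, n))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.1 * rng.standard_normal(m)
b[:6] += 10.0                            # a few gross outliers
alpha = 1.0                              # clipping level

def clipped_obj(x):
    # Sum of clipped squared residuals: sum_i min((a_i'x - b_i)^2, alpha).
    return np.minimum((A @ x - b) ** 2, alpha).sum()

# Warm start with ordinary least squares, then alternate: keep only
# residuals below the clipping level, refit on that active set.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
for _ in range(20):
    active = (A @ x - b) ** 2 <= alpha
    x, *_ = np.linalg.lstsq(A[active], b[active], rcond=None)
```

The outliers end up clipped (contributing alpha each), and the fit on the remaining points recovers the underlying model; the heuristic has no global optimality guarantee for this nonconvex problem.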
2 code implementations • 26 Apr 2019 • Jonathan Tuck, Shane Barratt, Stephen Boyd
In a basic and traditional formulation, a separate model is fit for each value of the categorical feature, using only the data with that categorical value.
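The traditional stratified baseline looks like this on assumed synthetic data with one categorical feature (the paper's contribution, Laplacian regularization that couples the per-category models, is not shown here):

```python
import numpy as np

rng = np.random.default_rng(4)

# One numeric feature and one categorical feature with two values;
# the true slope differs across categories.
cats = np.array([0] * 50 + [1] * 50)
x = rng.standard_normal(100)
slope = np.where(cats == 0, 2.0, -1.0)
y = slope * x + 0.05 * rng.standard_normal(100)

# Basic stratified fit: a separate least squares model per
# categorical value, using only that category's data.
models = {}
for c in np.unique(cats):
    mask = cats == c
    models[c], *_ = np.linalg.lstsq(x[mask].reshape(-1, 1),
                                    y[mask], rcond=None)
```

With plentiful data per category this works well; the stratified-models approach addresses the regime where some categories have few or no samples.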
1 code implementation • 19 Apr 2019 • Akshay Agrawal, Shane Barratt, Stephen Boyd, Enzo Busseti, Walaa M. Moursi
These correspond to computing an approximate new solution, given a perturbation to the cone program coefficients (i.e., perturbation analysis), and to computing the gradient of a function of the solution with respect to the coefficients.
Optimization and Control
1 code implementation • 10 Apr 2019 • Shane Barratt, Stephen Boyd
Least squares is by far the simplest and most commonly applied computational method in many fields.
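The basic computation the paper builds on is a one-liner in numpy; the small synthetic problem below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((30, 3))
x_true = np.array([1.0, 2.0, -1.0])
b = A @ x_true + 0.01 * rng.standard_normal(30)

# Least squares solution x = argmin ||Ax - b||_2, computed via a
# numerically stable factorization (preferred over explicitly
# forming and solving the normal equations A'Ax = A'b).
x, res, rank, sv = np.linalg.lstsq(A, b, rcond=None)
```

With a tall, full-rank A the solution is unique and recovers x_true up to the noise level.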
1 code implementation • 22 Oct 2018 • Shane Barratt, Mykel Kochenderfer, Stephen Boyd
Models for predicting aircraft motion are an important component of modern aeronautical systems.
no code implementations • 24 Jul 2018 • Rishi Sharma, Shane Barratt, Stefano Ermon, Vijay Pande
We demonstrate that this strategy is key to obtaining state-of-the-art results in image generation.
1 code implementation • 18 May 2018 • Shane Barratt, Rishi Sharma
Cross-validation is the workhorse of modern applied statistics and machine learning, as it provides a principled framework for selecting the model that maximizes generalization performance.
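The model-selection workflow can be sketched with a hand-rolled k-fold loop over ridge regression on assumed synthetic data; the function, data, and regularization grid below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 120
X = rng.standard_normal((n, 4))
w_true = rng.standard_normal(4)
y = X @ w_true + 0.3 * rng.standard_normal(n)

def kfold_cv_error(lam, k=5):
    """Average held-out squared error of ridge regression with
    regularization lam, estimated by k-fold cross-validation."""
    folds = np.array_split(rng.permutation(n), k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        Xtr, ytr = X[train], y[train]
        # Closed-form ridge solution on the training folds.
        w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(4), Xtr.T @ ytr)
        errs.append(np.mean((X[test] @ w - y[test]) ** 2))
    return np.mean(errs)

# Model selection: pick the lam with the best estimated
# generalization error.
lams = [0.01, 0.1, 1.0, 10.0, 100.0]
best = min(lams, key=kfold_cv_error)
```

Each candidate is scored only on data held out from its fit, which is what makes the selected model an estimate of the generalization-optimal one.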
no code implementations • 14 Jan 2018 • Colin de Vrieze, Shane Barratt, Daniel Tsai, Anant Sahai
Traditional radio systems are strictly co-designed on the lower levels of the OSI stack for compatibility and efficiency.
8 code implementations • 6 Jan 2018 • Shane Barratt, Rishi Sharma
Deep generative models are powerful tools that have produced impressive results in recent years.
1 code implementation • 26 Oct 2017 • Shane Barratt
This paper proposes a neural network design paradigm, termed InterpNET, which can be combined with any existing classification architecture to generate natural language explanations of the classifications.