Search Results for author: Elad Hazan

Found 99 papers, 16 papers with code

FutureFill: Fast Generation from Convolutional Sequence Models

no code implementations 2 Oct 2024 Naman Agarwal, Xinyi Chen, Evan Dogariu, Vlad Feinberg, Daniel Suo, Peter Bartlett, Elad Hazan

We address the challenge of efficient auto-regressive generation in sequence prediction models by introducing FutureFill - a method for fast generation that applies to any sequence prediction algorithm based on convolutional operators.
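
As context for the snippet above, here is a hedged sketch (not the FutureFill algorithm, and with an assumed toy model) of the baseline it targets: naive auto-regressive generation from a causal convolutional model re-applies the convolution to the whole history at every step, so generating K tokens costs roughly O(K^2) filter operations.

```python
# Illustrative baseline only (assumed toy setup, not FutureFill itself):
# naive auto-regressive generation from a causal convolutional sequence model.
import numpy as np

def naive_generate(filters, prompt, num_new):
    """filters: (L, d) causal convolution filters; prompt: list of d-dim vectors."""
    seq = list(prompt)
    for _ in range(num_new):
        t = len(seq)
        L = min(t, len(filters))
        # Each step re-applies the convolution to the history: O(t) work per token
        # with context-length filters, hence O(K^2) to generate K tokens; this is
        # the cost that a fast-generation method like FutureFill aims to cut.
        y = sum(filters[i] * seq[t - 1 - i] for i in range(L))
        seq.append(np.tanh(y))  # stand-in for the model's next-token map
    return np.stack(seq)

tokens = naive_generate(np.random.randn(128, 4), [np.random.randn(4)], num_new=5)
```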

Flash STU: Fast Spectral Transform Units

1 code implementation 16 Sep 2024 Y. Isabel Liu, Windsor Nguyen, Yagiz Devre, Evan Dogariu, Anirudha Majumdar, Elad Hazan

This paper describes an efficient, open source PyTorch implementation of the Spectral Transform Unit.

State Space Models
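
For readers unfamiliar with the Spectral Transform Unit, the following is a rough numpy sketch of the spectral filtering idea it builds on (the Hankel matrix below is the one from Hazan, Singh, and Zhang's 2017 spectral filtering paper; the STU uses a closely related construction, and this is not the Flash STU code):

```python
# Minimal sketch of spectral filtering: featurize an input sequence by
# convolving it with fixed filters taken from a Hankel matrix's eigenvectors.
import numpy as np

def spectral_filters(L, k):
    # Hankel matrix Z[i, j] = 2 / ((i + j)^3 - (i + j)), indices starting at 1.
    idx = np.arange(1, L + 1)
    s = idx[:, None] + idx[None, :]
    Z = 2.0 / (s ** 3 - s)
    _, eigvecs = np.linalg.eigh(Z)
    return eigvecs[:, -k:]                    # top-k eigenvectors as length-L filters

def spectral_features(u, filters):
    """u: (T, d) input sequence -> (T, k*d) features via causal convolution."""
    T, d = u.shape
    L, k = filters.shape
    feats = np.zeros((T, k * d))
    for t in range(T):
        window = u[max(0, t - L + 1): t + 1][::-1]   # most recent input first
        feats[t] = (filters[: len(window)].T @ window).reshape(-1)
    return feats

feats = spectral_features(np.random.randn(32, 4), spectral_filters(L=32, k=8))
```

A learned linear readout on top of such features is, roughly, what a spectral-filtering predictor looks like; the paper's contribution is an efficient PyTorch implementation of the full unit.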

Online Control in Population Dynamics

no code implementations 3 Jun 2024 Noah Golowich, Elad Hazan, Zhou Lu, Dhruv Rohatgi, Y. Jennifer Sun

The study of population dynamics originated with early sociological works but has since extended into many fields, including biology, epidemiology, evolutionary game theory, and economics.

Epidemiology

Second Order Methods for Bandit Optimization and Control

no code implementations 14 Feb 2024 Arun Suggala, Y. Jennifer Sun, Praneeth Netrapalli, Elad Hazan

We show that our algorithm achieves optimal (in terms of horizon) regret bounds for a large class of convex functions that we call $\kappa$-convex.

Decision Making, Decision Making Under Uncertainty +1

Adaptive Regret for Bandits Made Possible: Two Queries Suffice

no code implementations 17 Jan 2024 Zhou Lu, Qiuyi Zhang, Xinyi Chen, Fred Zhang, David Woodruff, Elad Hazan

In this paper, we give query and regret optimal bandit algorithms under the strict notion of strongly adaptive regret, which measures the maximum regret over any contiguous interval $I$.

Hyperparameter Optimization, Multi-Armed Bandits
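
For reference, the strict notion used here can be written as follows (standard definition of strongly adaptive regret, paraphrased rather than quoted from the paper):

$\mathrm{SA\text{-}Regret}_T = \max_{I=[r,s]\subseteq[T]} \Big( \sum_{t=r}^{s} f_t(x_t) - \min_{x\in\mathcal{K}} \sum_{t=r}^{s} f_t(x) \Big)$

i.e. the worst-case regret over every contiguous interval of rounds, not just over the whole horizon.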

Chain of LoRA: Efficient Fine-tuning of Language Models via Residual Learning

no code implementations 8 Jan 2024 Wenhan Xia, Chengwei Qin, Elad Hazan

Fine-tuning is the primary methodology for tailoring pre-trained large language models to specific tasks.

Benchmarking, CoLA +3
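
A hedged sketch of the residual-learning idea suggested by the title (illustrative only; the shapes, toy objective, and training loop below are assumptions, not the authors' code): train a low-rank LoRA-style update, merge it into the frozen weights, then start a fresh low-rank pair on the remaining residual, so the effective weight is the base matrix plus a chain of low-rank corrections.

```python
# Illustrative chain of low-rank (LoRA-style) residual updates on a toy problem.
import numpy as np

rng = np.random.default_rng(0)
d, r, steps, lr = 16, 2, 500, 0.1
W0 = rng.standard_normal((d, d)) / np.sqrt(d)      # frozen "pre-trained" weight
W_target = W0 + rng.standard_normal((d, d)) * 0.1  # weights we would like to reach
X = rng.standard_normal((64, d))                   # toy calibration data

W = W0.copy()
for link in range(3):                               # chain of 3 LoRA modules
    A = rng.standard_normal((r, d)) / np.sqrt(d)    # low-rank pair for this link
    B = np.zeros((d, r))
    for _ in range(steps):                          # fit the current residual
        E = X @ (W + B @ A).T - X @ W_target.T      # prediction error on the data
        G = (X.T @ E).T / len(X)                    # gradient w.r.t. the full matrix
        B -= lr * G @ A.T
        A -= lr * B.T @ G
    W = W + B @ A                                   # merge, then start a new pair

print(np.linalg.norm(W0 - W_target), np.linalg.norm(W - W_target))
```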

Spectral State Space Models

2 code implementations 11 Dec 2023 Naman Agarwal, Daniel Suo, Xinyi Chen, Elad Hazan

This paper studies sequence modeling for prediction tasks with long range dependencies.

State Space Models

Playing Large Games with Oracles and AI Debate

1 code implementation 8 Dec 2023 Xinyi Chen, Angelica Chen, Dean Foster, Elad Hazan

We give a novel efficient algorithm for simultaneous external and internal regret minimization whose regret depends logarithmically on the number of actions.
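
For reference, the two notions minimized simultaneously are, in their standard forms (not quoted from the paper): external regret compares the player's actions to the single best fixed action, while internal (swap) regret compares each action to the best replacement for it,

$\mathrm{Regret}^{\mathrm{ext}}_T = \max_{a^*} \sum_{t=1}^{T} \big(\ell_t(a_t) - \ell_t(a^*)\big), \qquad \mathrm{Regret}^{\mathrm{int}}_T = \max_{a, b} \sum_{t:\, a_t = a} \big(\ell_t(a) - \ell_t(b)\big).$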

An Efficient Interior-Point Method for Online Convex Optimization

no code implementations 21 Jul 2023 Elad Hazan, Nimrod Megiddo

A new algorithm for regret minimization in online convex optimization is described.

A Nonstochastic Control Approach to Optimization

no code implementations 19 Jan 2023 Xinyi Chen, Elad Hazan

Selecting the best hyperparameters for a particular optimization instance, such as the learning rate and momentum, is an important but nonconvex problem.

Projection-free Adaptive Regret with Membership Oracles

no code implementations 22 Nov 2022 Zhou Lu, Nataly Brukhim, Paula Gradu, Elad Hazan

The most common approach is based on the Frank-Wolfe method, which uses linear optimization computations in lieu of projections.
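
To make the "linear optimization in lieu of projections" point concrete, here is a textbook Frank-Wolfe sketch in which the feasible set is accessed only through a linear optimization oracle (this is the generic method, not this paper's algorithm, which works with membership oracles and targets adaptive regret):

```python
# Generic Frank-Wolfe: the only access to the feasible set K is an oracle
# lin_opt(g) returning argmin_{v in K} <g, v>; no projections are needed.
import numpy as np

def frank_wolfe(grad, lin_opt, x0, T):
    x = x0
    for t in range(1, T + 1):
        v = lin_opt(grad(x))              # linear optimization step
        gamma = 2.0 / (t + 2)             # standard step-size schedule
        x = (1 - gamma) * x + gamma * v   # convex combination stays inside K
    return x

# Example: minimize ||x - b||^2 over the probability simplex.
b = np.array([0.7, 0.2, 0.1, 0.0])
grad = lambda x: 2 * (x - b)
lin_opt = lambda g: np.eye(len(g))[np.argmin(g)]   # best vertex of the simplex
x_star = frank_wolfe(grad, lin_opt, x0=np.ones(4) / 4, T=200)
```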

Best of Both Worlds in Online Control: Competitive Ratio and Policy Regret

no code implementations 21 Nov 2022 Gautam Goel, Naman Agarwal, Karan Singh, Elad Hazan

We consider the fundamental problem of online control of a linear dynamical system from two different viewpoints: regret minimization and competitive analysis.

Introduction to Online Nonstochastic Control

no code implementations 17 Nov 2022 Elad Hazan, Karan Singh

In online nonstochastic control, both the cost functions as well as the perturbations from the assumed dynamical model are chosen by an adversary.

Decision Making
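
For orientation, the setting can be summarized as follows (a standard way to write online nonstochastic control, paraphrased): the state evolves as $x_{t+1} = A x_t + B u_t + w_t$ with the perturbations $w_t$ chosen by an adversary, the convex costs $c_t(x_t, u_t)$ are also chosen adversarially, and the controller is evaluated by regret against a policy class $\Pi$,

$\mathrm{Regret}_T = \sum_{t=1}^{T} c_t(x_t, u_t) - \min_{\pi \in \Pi} \sum_{t=1}^{T} c_t(x_t^{\pi}, u_t^{\pi}).$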

Partial Matrix Completion

no code implementations NeurIPS 2023 Elad Hazan, Adam Tauman Kalai, Varun Kanade, Clara Mohri, Y. Jennifer Sun

This work establishes a new framework of partial matrix completion, where the goal is to identify a large subset of the entries that can be completed with high confidence.

Matrix Completion

On the Computational Efficiency of Adaptive and Dynamic Regret Minimization

no code implementations 1 Jul 2022 Zhou Lu, Elad Hazan

In online convex optimization, the player aims to minimize regret, or the difference between her loss and that of the best fixed decision in hindsight over the entire repeated game.

Computational Efficiency
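
In symbols, the (static) regret referred to here is the standard quantity

$\mathrm{Regret}_T = \sum_{t=1}^{T} f_t(x_t) - \min_{x \in \mathcal{K}} \sum_{t=1}^{T} f_t(x),$

and the adaptive and dynamic variants strengthen it by, respectively, taking the maximum over contiguous sub-intervals of $[T]$ and comparing against a time-varying sequence of comparators.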

Adaptive Online Learning of Quantum States

1 code implementation 1 Jun 2022 Xinyi Chen, Elad Hazan, Tongyang Li, Zhou Lu, Xinzhao Wang, Rui Yang

The problem of efficient quantum state learning, also called shadow tomography, aims to comprehend an unknown $d$-dimensional quantum state through POVMs.

Non-convex online learning via algorithmic equivalence

no code implementations 30 May 2022 Udaya Ghai, Zhou Lu, Elad Hazan

We prove an $O(T^{\frac{2}{3}})$ regret bound for non-convex online gradient descent in this setting, answering this open problem.

Adaptive Gradient Methods with Local Guarantees

no code implementations 2 Mar 2022 Zhou Lu, Wenhan Xia, Sanjeev Arora, Elad Hazan

Adaptive gradient methods are the method of choice for optimization in machine learning and are used to train the largest deep models.

Benchmarking

Online Control of Unknown Time-Varying Dynamical Systems

no code implementations NeurIPS 2021 Edgar Minasyan, Paula Gradu, Max Simchowitz, Elad Hazan

On the positive side, we give an efficient algorithm that attains a sublinear regret bound against the class of Disturbance Response policies up to the aforementioned system variability term.

A Regret Minimization Approach to Multi-Agent Control

no code implementations 28 Jan 2022 Udaya Ghai, Udari Madhushani, Naomi Leonard, Elad Hazan

We study the problem of multi-agent control of a dynamical system with known dynamics and adversarial disturbances.

Multiclass Boosting and the Cost of Weak Learning

no code implementations NeurIPS 2021 Nataly Brukhim, Elad Hazan, Shay Moran, Indraneel Mukherjee, Robert E. Schapire

Here, we focus on an especially natural formulation in which the weak hypotheses are assumed to belong to an "easy-to-learn" base class, and the weak learner is an agnostic PAC learner for that class with respect to the standard classification loss.

Provable Regret Bounds for Deep Online Learning and Control

no code implementations 15 Oct 2021 Xinyi Chen, Edgar Minasyan, Jason D. Lee, Elad Hazan

The theory of deep learning focuses almost exclusively on supervised learning, non-convex optimization using stochastic gradient descent, and overparametrized neural networks.

Deep Learning, Second-order methods

Learning Rate Grafting: Transferability of Optimizer Tuning

no code implementations 29 Sep 2021 Naman Agarwal, Rohan Anil, Elad Hazan, Tomer Koren, Cyril Zhang

In the empirical science of training large neural networks, the learning rate schedule is a notoriously challenging-to-tune hyperparameter, which can depend on all other properties (architecture, optimizer, batch size, dataset, regularization, ...) of the problem.

A Boosting Approach to Reinforcement Learning

no code implementations 22 Aug 2021 Nataly Brukhim, Elad Hazan, Karan Singh

Reducing reinforcement learning to supervised learning is a well-studied and effective approach that leverages the benefits of compact function approximation to deal with large-scale Markov decision processes.

reinforcement-learning, Reinforcement Learning +1

Robust Online Control with Model Misspecification

no code implementations 16 Jul 2021 Xinyi Chen, Udaya Ghai, Elad Hazan, Alexandre Megretski

We study online control of an unknown nonlinear dynamical system that is approximated by a time-invariant linear system with model misspecification.

A Regret Minimization Approach to Iterative Learning Control

no code implementations 26 Feb 2021 Naman Agarwal, Elad Hazan, Anirudha Majumdar, Karan Singh

We consider the setting of iterative learning control, or model-based policy learning in the presence of uncertain, time-varying dynamics.

Deluca -- A Differentiable Control Library: Environments, Methods, and Benchmarking

1 code implementation 19 Feb 2021 Paula Gradu, John Hallman, Daniel Suo, Alex Yu, Naman Agarwal, Udaya Ghai, Karan Singh, Cyril Zhang, Anirudha Majumdar, Elad Hazan

We present an open-source library of natively differentiable physics and robotics environments, accompanied by gradient-based control methods and a benchmarking suite.

Benchmarking, OpenAI Gym

Boosting for Online Convex Optimization

no code implementations 18 Feb 2021 Elad Hazan, Karan Singh

In this access model, we give an efficient boosting algorithm that guarantees near-optimal regret against the convex hull of the base class.

Decision Making

Machine Learning for Mechanical Ventilation Control

2 code implementations 12 Feb 2021 Daniel Suo, Naman Agarwal, Wenhan Xia, Xinyi Chen, Udaya Ghai, Alexander Yu, Paula Gradu, Karan Singh, Cyril Zhang, Edgar Minasyan, Julienne LaChance, Tom Zajdel, Manuel Schottdorf, Daniel Cohen, Elad Hazan

We consider the problem of controlling an invasive mechanical ventilator for pressure-controlled ventilation: a controller must let air in and out of a sedated patient's lungs according to a trajectory of airway pressures specified by a clinician.

BIG-bench Machine Learning
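
As a rough illustration of the tracking problem being described (a hypothetical PID-style baseline with made-up plant dynamics and gains, not the learned controller from the paper):

```python
# Hypothetical PID sketch for tracking a clinician-specified pressure trajectory.
# The first-order "lung/valve" response, gains, and targets are illustrative only.
import numpy as np

def pid_track(targets, kp=2.0, ki=0.5, kd=0.1, dt=0.03):
    pressure, integral, prev_err = 0.0, 0.0, 0.0
    trace = []
    for target in targets:
        err = target - pressure
        integral += err * dt
        u = kp * err + ki * integral + kd * (err - prev_err) / dt   # valve command
        prev_err = err
        pressure += dt * (u - 0.5 * pressure)   # toy first-order plant response
        trace.append(pressure)
    return np.array(trace)

# Inhale to 35 cmH2O for half the breath, exhale to 5 cmH2O for the rest.
trace = pid_track(np.r_[np.full(50, 35.0), np.full(50, 5.0)])
```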

Generating Adversarial Disturbances for Controller Verification

no code implementations 12 Dec 2020 Udaya Ghai, David Snyder, Anirudha Majumdar, Elad Hazan

We consider the problem of generating maximally adversarial disturbances for a given controller assuming only blackbox access to it.

Geometric Exploration for Online Control

no code implementations NeurIPS 2020 Orestis Plevrakis, Elad Hazan

We study the control of an unknown linear dynamical system under general convex costs.

Non-Stochastic Control with Bandit Feedback

no code implementations NeurIPS 2020 Paula Gradu, John Hallman, Elad Hazan

We study the problem of controlling a linear dynamical system with adversarial perturbations where the only feedback available to the controller is the scalar loss, and the loss function itself is unknown.

Online Boosting with Bandit Feedback

no code implementations 23 Jul 2020 Nataly Brukhim, Elad Hazan

We consider the problem of online boosting for regression tasks, when only limited information is available to the learner.

regression

Black-Box Control for Linear Dynamical Systems

no code implementations 13 Jul 2020 Xinyi Chen, Elad Hazan

To complete the picture, we investigate the complexity of the online black-box control problem, and give a matching lower bound of $2^{\Omega(\mathcal{L})}$ on the regret, showing that the additional exponential cost is inevitable.

Adaptive Regret for Control of Time-Varying Dynamics

no code implementations 8 Jul 2020 Paula Gradu, Elad Hazan, Edgar Minasyan

Our main contribution is a novel efficient meta-algorithm: it converts a controller with sublinear regret bounds into one with sublinear adaptive regret bounds in the setting of time-varying linear dynamical systems.

Online Agnostic Boosting via Regret Minimization

no code implementations NeurIPS 2020 Nataly Brukhim, Xinyi Chen, Elad Hazan, Shay Moran

Boosting is a widely used machine learning approach based on the idea of aggregating weak learning rules.

Disentangling Adaptive Gradient Methods from Learning Rates

1 code implementation 26 Feb 2020 Naman Agarwal, Rohan Anil, Elad Hazan, Tomer Koren, Cyril Zhang

We investigate several confounding factors in the evaluation of optimization algorithms for deep learning.

Boosting Simple Learners

1 code implementation 31 Jan 2020 Noga Alon, Alon Gonen, Elad Hazan, Shay Moran

(ii) Expressivity: Which tasks can be learned by boosting weak hypotheses from a bounded VC class?

Faster Projection-free Online Learning

no code implementations 30 Jan 2020 Elad Hazan, Edgar Minasyan

In many online learning problems the computational bottleneck for gradient-based methods is the projection operation.

Improper Learning for Non-Stochastic Control

no code implementations 25 Jan 2020 Max Simchowitz, Karan Singh, Elad Hazan

We consider the problem of controlling a possibly unknown linear dynamical system with adversarial perturbations, adversarially chosen convex loss functions, and partially observed states, known as non-stochastic control.

Revisiting the Generalization of Adaptive Gradient Methods

no code implementations ICLR 2020 Naman Agarwal, Rohan Anil, Elad Hazan, Tomer Koren, Cyril Zhang

A commonplace belief in the machine learning community is that using adaptive gradient methods hurts generalization.

BIG-bench Machine Learning

The Nonstochastic Control Problem

no code implementations 27 Nov 2019 Elad Hazan, Sham M. Kakade, Karan Singh

We consider the problem of controlling an unknown linear dynamical system in the presence of (nonstochastic) adversarial perturbations and adversarial convex loss functions.

The gradient complexity of linear regression

no code implementations 6 Nov 2019 Mark Braverman, Elad Hazan, Max Simchowitz, Blake Woodworth

We investigate the computational complexity of several basic linear algebra primitives, including largest eigenvector computation and linear regression, in the computational model that allows access to the data via a matrix-vector product oracle.

regression

Logarithmic Regret for Online Control

no code implementations NeurIPS 2019 Naman Agarwal, Elad Hazan, Karan Singh

We study optimal regret bounds for control in linear dynamical systems under adversarially changing strongly convex cost functions, given the knowledge of transition dynamics.

Lecture Notes: Optimization for Machine Learning

no code implementations 8 Sep 2019 Elad Hazan

Lecture notes on optimization for machine learning, derived from a course at Princeton University and tutorials given in MLSS, Buenos Aires, as well as at the Simons Foundation, Berkeley.

BIG-bench Machine Learning

Introduction to Online Convex Optimization

1 code implementation 7 Sep 2019 Elad Hazan

This manuscript portrays optimization as a process.

Boosting for Control of Dynamical Systems

no code implementations ICML 2020 Naman Agarwal, Nataly Brukhim, Elad Hazan, Zhou Lu

We study the question of how to aggregate controllers for dynamical systems in order to improve their performance.

Private Learning Implies Online Learning: An Efficient Reduction

no code implementations NeurIPS 2019 Alon Gonen, Elad Hazan, Shay Moran

We study the relationship between the notions of differentially private learning and online learning in games.

Open-Ended Question Answering

Online Control with Adversarial Disturbances

no code implementations 23 Feb 2019 Naman Agarwal, Brian Bullins, Elad Hazan, Sham M. Kakade, Karan Singh

We study the control of a linear dynamical system with adversarial disturbances (as opposed to statistical noise).

Extreme Tensoring for Low-Memory Preconditioning

no code implementations ICLR 2020 Xinyi Chen, Naman Agarwal, Elad Hazan, Cyril Zhang, Yi Zhang

State-of-the-art models are now trained with billions of parameters, reaching hardware limits in terms of memory consumption.

Stochastic Optimization

Exponentiated Gradient Meets Gradient Descent

no code implementations 5 Feb 2019 Udaya Ghai, Elad Hazan, Yoram Singer

The hypentropy has a natural spectral counterpart which we use to derive a family of matrix-based updates that bridge gradient methods and the multiplicative method for matrices.
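
For reference, the two updates being bridged are, in their standard forms (not quoted from the paper), gradient descent and the exponentiated-gradient (multiplicative) update over the simplex:

$x_{t+1} = x_t - \eta \nabla f_t(x_t), \qquad w_{t+1,i} = \frac{w_{t,i} \exp\!\big(-\eta \nabla_i f_t(w_t)\big)}{\sum_j w_{t,j} \exp\!\big(-\eta \nabla_j f_t(w_t)\big)}.$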

Provably Efficient Maximum Entropy Exploration

2 code implementations 6 Dec 2018 Elad Hazan, Sham M. Kakade, Karan Singh, Abby Van Soest

Suppose an agent is in a (possibly unknown) Markov Decision Process in the absence of a reward signal, what might we hope that an agent can efficiently learn to do?
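
The intrinsic objective studied can be stated as maximizing the entropy of the state distribution induced by the policy (a standard way to write it):

$\max_{\pi} \; H(d_\pi) = -\sum_{s} d_\pi(s) \log d_\pi(s),$

where $d_\pi$ denotes the (discounted or stationary) state-visitation distribution of $\pi$.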

Learning in Non-convex Games with an Optimization Oracle

no code implementations 17 Oct 2018 Naman Agarwal, Alon Gonen, Elad Hazan

We consider online learning in an adversarial, non-convex setting under the assumption that the learner has an access to an offline optimization oracle.

Efficient Full-Matrix Adaptive Regularization

no code implementations ICLR 2019 Naman Agarwal, Brian Bullins, Xinyi Chen, Elad Hazan, Karan Singh, Cyril Zhang, Yi Zhang

Due to the large number of parameters of machine learning problems, full-matrix preconditioning methods are prohibitively expensive.

Online Improper Learning with an Approximation Oracle

no code implementations NeurIPS 2018 Elad Hazan, Wei Hu, Yuanzhi Li, Zhiyuan Li

We revisit the question of reducing online learning to approximate optimization of the offline problem.

Online Learning of Quantum States

no code implementations NeurIPS 2018 Scott Aaronson, Xinyi Chen, Elad Hazan, Satyen Kale, Ashwin Nayak

Even in the "non-realizable" setting, where there could be arbitrary noise in the measurement outcomes, we show how to output hypothesis states that do significantly worse than the best possible states at most $O(\sqrt{Tn})$ times on the first $T$ measurements.

On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization

1 code implementation ICML 2018 Sanjeev Arora, Nadav Cohen, Elad Hazan

The effect of depth on optimization is decoupled from expressiveness by focusing on settings where additional layers amount to overparameterization - linear neural networks, a well-studied model.

regression

Spectral Filtering for General Linear Dynamical Systems

no code implementations NeurIPS 2018 Elad Hazan, Holden Lee, Karan Singh, Cyril Zhang, Yi Zhang

We give a polynomial-time algorithm for learning latent-state linear dynamical systems without system identification, and without assumptions on the spectral radius of the system's transition matrix.

Towards Provable Control for Unknown Linear Dynamical Systems

no code implementations ICLR 2018 Sanjeev Arora, Elad Hazan, Holden Lee, Karan Singh, Cyril Zhang, Yi Zhang

We study the control of symmetric linear dynamical systems with unknown dynamics and a hidden state.

Learning Linear Dynamical Systems via Spectral Filtering

1 code implementation NeurIPS 2017 Elad Hazan, Karan Singh, Cyril Zhang

We present an efficient and practical algorithm for the online prediction of discrete-time linear dynamical systems with a symmetric transition matrix.

Time Series, Time Series Analysis

Lower Bounds for Higher-Order Convex Optimization

no code implementations 27 Oct 2017 Naman Agarwal, Elad Hazan

State-of-the-art methods in convex and non-convex optimization employ higher-order derivative information, either implicitly or explicitly.

Linear Convergence of a Frank-Wolfe Type Algorithm over Trace-Norm Balls

no code implementations NeurIPS 2017 Zeyuan Allen-Zhu, Elad Hazan, Wei Hu, Yuanzhi Li

We propose a rank-$k$ variant of the classical Frank-Wolfe algorithm to solve convex optimization over a trace-norm ball.

Efficient Regret Minimization in Non-Convex Games

no code implementations ICML 2017 Elad Hazan, Karan Singh, Cyril Zhang

We consider regret minimization in repeated games with non-convex loss functions.

Hyperparameter Optimization: A Spectral Approach

1 code implementation ICLR 2018 Elad Hazan, Adam Klivans, Yang Yuan

In particular, we obtain the first quasi-polynomial time algorithm for learning noisy decision trees with polynomial sample complexity.

Bayesian Optimization, Hyperparameter Optimization

The Limits of Learning with Missing Data

no code implementations NeurIPS 2016 Brian Bullins, Elad Hazan, Tomer Koren

We study regression and classification in a setting where the learning algorithm is allowed to access only a limited number of attributes per example, known as the limited attribute observation model.

Attribute, General Classification +1

Finding Approximate Local Minima Faster than Gradient Descent

1 code implementation 3 Nov 2016 Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, Tengyu Ma

We design a non-convex second-order optimization algorithm that is guaranteed to return an approximate local minimum in time which scales linearly in the underlying dimension and the number of training examples.

BIG-bench Machine Learning

A Non-generative Framework and Convex Relaxations for Unsupervised Learning

no code implementations NeurIPS 2016 Elad Hazan, Tengyu Ma

We give a novel formal theoretical framework for unsupervised learning with two distinctive characteristics.

Faster Eigenvector Computation via Shift-and-Invert Preconditioning

no code implementations 26 May 2016 Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco, Praneeth Netrapalli, Aaron Sidford

We give faster algorithms and improved sample complexities for estimating the top eigenvector of a matrix $\Sigma$, i.e. computing a unit vector $x$ such that $x^T \Sigma x \ge (1-\epsilon)\lambda_1(\Sigma)$. Offline eigenvector estimation: given an explicit $A \in \mathbb{R}^{n \times d}$ with $\Sigma = A^T A$, we show how to compute an $\epsilon$-approximate top eigenvector in time $\tilde O\big(\big[\mathrm{nnz}(A) + \tfrac{d \cdot \mathrm{sr}(A)}{\mathrm{gap}^2}\big] \log \tfrac{1}{\epsilon}\big)$ and $\tilde O\big(\tfrac{\mathrm{nnz}(A)^{3/4} (d \cdot \mathrm{sr}(A))^{1/4}}{\sqrt{\mathrm{gap}}} \log \tfrac{1}{\epsilon}\big)$.

Stochastic Optimization
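
A minimal sketch of the shift-and-invert idea in the title (illustrative only: the exact linear solve and the exact choice of shift below are placeholders for the fast approximate solvers and estimates the paper analyzes): power iteration applied to $(\lambda I - \Sigma)^{-1}$ amplifies the spectral gap of $\Sigma$.

```python
# Shift-and-invert power iteration sketch; np.linalg.solve stands in for the
# approximate stochastic linear-system solvers used in practice.
import numpy as np

def shift_invert_top_eigvec(Sigma, shift, iters=20):
    M = shift * np.eye(Sigma.shape[0]) - Sigma   # requires shift > lambda_1(Sigma)
    x = np.random.randn(Sigma.shape[0])
    for _ in range(iters):
        x = np.linalg.solve(M, x)                # one power-iteration step on M^{-1}
        x /= np.linalg.norm(x)
    return x

A = np.random.randn(200, 50)
Sigma = A.T @ A / 200
v = shift_invert_top_eigvec(Sigma, shift=1.1 * np.linalg.eigvalsh(Sigma)[-1])
```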

Online Learning with Low Rank Experts

no code implementations 21 Mar 2016 Elad Hazan, Tomer Koren, Roi Livni, Yishay Mansour

We consider the problem of prediction with expert advice when the losses of the experts have low-dimensional structure: they are restricted to an unknown $d$-dimensional subspace.

Optimal Black-Box Reductions Between Optimization Objectives

no code implementations NeurIPS 2016 Zeyuan Allen-Zhu, Elad Hazan

The diverse world of machine learning applications has given rise to a plethora of algorithms and optimization methods, finely tuned to the specific regression or classification task at hand.

BIG-bench Machine Learning, General Classification +1

Variance Reduction for Faster Non-Convex Optimization

no code implementations 17 Mar 2016 Zeyuan Allen-Zhu, Elad Hazan

We consider the fundamental problem in non-convex optimization of efficiently reaching a stationary point.

An optimal algorithm for bandit convex optimization

no code implementations 14 Mar 2016 Elad Hazan, Yuanzhi Li

We consider the problem of online convex optimization against an arbitrary adversary with bandit feedback, known as bandit convex optimization.

Second-Order Stochastic Optimization for Machine Learning in Linear Time

4 code implementations 12 Feb 2016 Naman Agarwal, Brian Bullins, Elad Hazan

First-order stochastic methods are the state-of-the-art in large-scale machine learning optimization owing to efficient per-iteration complexity.

BIG-bench Machine Learning, Second-order methods +1
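
A hedged sketch of the kind of linear-time second-order step this line of work builds on (assumed form based on the truncated Neumann series $H^{-1} = \sum_{i \ge 0} (I - H)^i$ for $0 \prec H \preceq I$; it is not the authors' exact estimator, which uses stochastic Hessian-vector products):

```python
# Estimate a Hessian-inverse-vector product H^{-1} g with a truncated Neumann
# series, using only Hessian-vector products (here exact, for illustration).
import numpy as np

def inverse_hvp(hvp, g, depth=200):
    est = g.copy()                    # accumulates sum_{i=0..depth} (I - H)^i g
    term = g.copy()
    for _ in range(depth):
        term = term - hvp(term)       # apply (I - H) to the previous term
        est += term
    return est

d = 10
B = np.random.randn(d, d)
H = B @ B.T + np.eye(d)
H /= np.linalg.eigvalsh(H)[-1] * 1.1  # scale so eigenvalues lie in (0, 1]
g = np.random.randn(d)
x = inverse_hvp(lambda v: H @ v, g)
print(np.linalg.norm(H @ x - g))      # small once the series has converged
```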

Variance-Reduced and Projection-Free Stochastic Optimization

no code implementations 5 Feb 2016 Elad Hazan, Haipeng Luo

The Frank-Wolfe optimization algorithm has recently regained popularity for machine learning applications due to its projection-free property and its ability to handle structured constraints.

Stochastic Optimization

Online Learning for Adversaries with Memory: Price of Past Mistakes

no code implementations NeurIPS 2015 Oren Anava, Elad Hazan, Shie Mannor

In this work we extend the notion of learning with memory to the general Online Convex Optimization (OCO) framework, and present two algorithms that attain low regret.

Fast and Simple PCA via Convex Optimization

no code implementations 18 Sep 2015 Dan Garber, Elad Hazan

The problem of principal component analysis (PCA) is traditionally solved by spectral or algebraic methods.

Faster Convex Optimization: Simulated Annealing with an Efficient Universal Barrier

no code implementations 9 Jul 2015 Jacob Abernethy, Elad Hazan

We show that simulated annealing, a well-studied random walk algorithm, is directly equivalent, in a certain sense, to the central path interior point algorithm for the entropic universal barrier function.

Beyond Convexity: Stochastic Quasi-Convex Optimization

no code implementations NeurIPS 2015 Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz

The Normalized Gradient Descent (NGD) algorithm is an adaptation of Gradient Descent that updates according to the direction of the gradients rather than the gradients themselves.
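
In symbols, the NGD update is (standard form)

$x_{t+1} = x_t - \eta\, \hat{g}_t, \qquad \hat{g}_t = \frac{\nabla f(x_t)}{\lVert \nabla f(x_t) \rVert},$

so the step direction carries the information and the step length is fixed by $\eta$ alone.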

Online Gradient Boosting

no code implementations NeurIPS 2015 Alina Beygelzimer, Elad Hazan, Satyen Kale, Haipeng Luo

We extend the theory of boosting for regression problems to the online learning setting.

regression

The Computational Power of Optimization in Online Learning

no code implementations 8 Apr 2015 Elad Hazan, Tomer Koren

We also give a lower bound showing that this running time cannot be improved (up to log factors) in the oracle model, thereby exhibiting a quadratic speedup as compared to the standard, oracle-free setting where the required time for vanishing regret is $\widetilde{\Theta}(N)$.

On Graduated Optimization for Stochastic Non-Convex Problems

1 code implementation 12 Mar 2015 Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz

We extend our algorithm and analysis to the setting of stochastic non-convex optimization with noisy gradient feedback, attaining the same convergence rate.

Classification with Low Rank and Missing Data

no code implementations 14 Jan 2015 Elad Hazan, Roi Livni, Yishay Mansour

We consider classification and regression tasks where we have missing data and assume that the (clean) data resides in a low rank subspace.

Classification, General Classification +1

The Blinded Bandit: Learning with Adaptive Feedback

no code implementations NeurIPS 2014 Ofer Dekel, Elad Hazan, Tomer Koren

We study an online learning setting where the player is temporarily deprived of feedback each time it switches to a different action.

Bandit Convex Optimization: Towards Tight Bounds

no code implementations NeurIPS 2014 Elad Hazan, Kfir Levy

Bandit Convex Optimization (BCO) is a fundamental framework for decision making under uncertainty, which generalizes many problems from the realm of online and statistical learning.

Decision Making, Decision Making Under Uncertainty +1

Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets

no code implementations 5 Jun 2014 Dan Garber, Elad Hazan

In this paper we consider the special case of optimization over strongly convex sets, for which we prove that the vanilla FW method converges at a rate of $\frac{1}{t^2}$.

Logistic Regression: Tight Bounds for Stochastic and Online Optimization

no code implementations 15 May 2014 Elad Hazan, Tomer Koren, Kfir Y. Levy

We show that in contrast to known asymptotic bounds, as long as the number of prediction/optimization iterations is sub-exponential, the logistic loss provides no improvement over a generic non-smooth loss function such as the hinge loss.

regression

Oracle-Based Robust Optimization via Online Learning

no code implementations 25 Feb 2014 Aharon Ben-Tal, Elad Hazan, Tomer Koren, Shie Mannor

Robust optimization is a common framework in optimization under uncertainty when the problem parameters are not known, but it is rather known that the parameters belong to some given uncertainty set.

Volumetric Spanners: an Efficient Exploration Basis for Learning

no code implementations 21 Dec 2013 Elad Hazan, Zohar Karnin, Raghu Meka

Numerous machine learning problems require an exploration basis - a mechanism to explore the action space.

BIG-bench Machine Learning, Efficient Exploration

Online Convex Optimization Against Adversaries with Memory and Application to Statistical Arbitrage

no code implementations 27 Feb 2013 Oren Anava, Elad Hazan, Shie Mannor

The framework of online learning with memory naturally captures learning problems with temporal constraints, and was previously studied for the experts setting.

Newtron: an Efficient Bandit algorithm for Online Multiclass Prediction

no code implementations NeurIPS 2011 Elad Hazan, Satyen Kale

We prove that the regret of Newtron is $O(\log T)$ when $\alpha$ is a constant that does not vary with horizon $T$, and at most $O(T^{2/3})$ if $\alpha$ is allowed to increase to infinity with $T$.

Approximating Semidefinite Programs in Sublinear Time

no code implementations NeurIPS 2011 Dan Garber, Elad Hazan

In recent years semidefinite optimization has become a tool of major importance in various optimization and machine learning problems.

Beating SGD: Learning SVMs in Sublinear Time

no code implementations NeurIPS 2011 Elad Hazan, Tomer Koren, Nati Srebro

We present an optimization approach for linear SVMs based on a stochastic primal-dual approach, where the primal step is akin to an importance-weighted SGD, and the dual step is a stochastic update on the importance weights.

Beyond Convexity: Online Submodular Minimization

no code implementations NeurIPS 2009 Elad Hazan, Satyen Kale

We consider an online decision problem over a discrete space in which the loss function is submodular.

On Stochastic and Worst-case Models for Investing

no code implementations NeurIPS 2009 Elad Hazan, Satyen Kale

In practice, most investing is done assuming a probabilistic model of stock price returns known as the Geometric Brownian Motion (GBM).

Management

Computational Equivalence of Fixed Points and No Regret Algorithms, and Convergence to Equilibria

no code implementations NeurIPS 2007 Elad Hazan, Satyen Kale

We study the relation between notions of game-theoretic equilibria which are based on stability under a set of deviations, and empirical equilibria which are reached by rational players.
