Search Results for author: Maryam Fazel

Found 39 papers, 4 papers with code

A/B Testing and Best-arm Identification for Linear Bandits with Robustness to Non-stationarity

no code implementations 27 Jul 2023 Zhihan Xiong, Romain Camilleri, Maryam Fazel, Lalit Jain, Kevin Jamieson

For robust identification, it is well-known that if arms are chosen randomly and non-adaptively from a G-optimal design over $\mathcal{X}$ at each time then the error probability decreases as $\exp(-T\Delta^2_{(1)}/d)$, where $\Delta_{(1)} = \min_{x \neq x^*} (x^* - x)^\top \frac{1}{T}\sum_{t=1}^T \theta_t$.
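The quoted error bound can be made concrete with a small numpy sketch. The arms, the drift model, and all constants below are illustrative assumptions, not the paper's experimental setup: we draw a non-stationary sequence $\theta_t$, form the time average, compute the gap $\Delta_{(1)}$, and evaluate $\exp(-T\Delta_{(1)}^2/d)$.

```python
import numpy as np

# Toy instance (an assumption) illustrating the abstract's bound
# exp(-T * Delta_{(1)}^2 / d) for G-optimal-design sampling.
rng = np.random.default_rng(0)
d, T = 3, 1000
X = np.eye(d)                                         # arms: standard basis vectors
thetas = np.array([1.0, 0.5, 0.2]) + 0.01 * rng.standard_normal((T, d))
theta_bar = thetas.mean(axis=0)                       # (1/T) sum_t theta_t

rewards = X @ theta_bar
x_star = X[np.argmax(rewards)]
gaps = (x_star - X) @ theta_bar
delta_1 = np.min(gaps[gaps > 0])                      # Delta_{(1)}: smallest positive gap

error_bound = np.exp(-T * delta_1**2 / d)             # the abstract's error-probability scaling
print(delta_1, error_bound)
```

With these constants the gap is about 0.5, so the bound is astronomically small, which is the point: the probability of misidentifying $x^*$ decays exponentially in $T\Delta_{(1)}^2/d$.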

A Black-box Approach for Non-stationary Multi-agent Reinforcement Learning

no code implementations 12 Jun 2023 Haozhe Jiang, Qiwen Cui, Zhihan Xiong, Maryam Fazel, Simon S. Du

Specifically, we focus on games with bandit feedback, where testing an equilibrium can result in substantial regret even when the gap to be tested is small, and the existence of multiple optimal solutions (equilibria) in stationary games poses extra challenges.

Multi-agent Reinforcement Learning, Reinforcement Learning

No-Regret Online Prediction with Strategic Experts

no code implementations 24 May 2023 Omid Sadeghi, Maryam Fazel

Our goal is to design algorithms that satisfy the following two requirements: 1) $\textit{Incentive-compatible}$: Incentivize the experts to report their beliefs truthfully, and 2) $\textit{No-regret}$: Achieve sublinear regret with respect to the true beliefs of the best fixed set of $m$ experts in hindsight.

Stochastic Contextual Bandits with Long Horizon Rewards

no code implementations 2 Feb 2023 Yuzhen Qin, Yingcong Li, Fabio Pasqualetti, Maryam Fazel, Samet Oymak

The growing interest in complex decision-making and language modeling problems highlights the importance of sample-efficient learning over very long horizons.

Decision Making, Language Modelling, +1

Offline congestion games: How feedback type affects data coverage requirement

no code implementations 24 Oct 2022 Haozhe Jiang, Qiwen Cui, Zhihan Xiong, Maryam Fazel, Simon S. Du

Starting from the facility-level (a.k.a. semi-bandit) feedback, we propose a novel one-unit deviation coverage condition and give a pessimism-type algorithm that can recover an approximate NE.


Iterative Linear Quadratic Optimization for Nonlinear Control: Differentiable Programming Algorithmic Templates

1 code implementation 13 Jul 2022 Vincent Roulet, Siddhartha Srinivasa, Maryam Fazel, Zaid Harchaoui

We present the implementation of nonlinear control algorithms based on linear and quadratic approximations of the objective from a functional viewpoint.

Car Racing

Online SuBmodular + SuPermodular (BP) Maximization with Bandit Feedback

no code implementations 7 Jul 2022 Adhyyan Narang, Omid Sadeghi, Lillian J Ratliff, Maryam Fazel, Jeff Bilmes

At round $i$, a user with unknown utility $h_q$ arrives; the optimizer selects a new item to add to $S_q$, and receives a noisy marginal gain.

Movie Recommendation
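The interaction loop in this abstract, adding one item per round based on a noisy marginal gain, can be sketched in a few lines. The coverage utility and the noise scale below are assumptions chosen for illustration; the paper's algorithms and regret analysis are not reproduced here.

```python
import numpy as np

# Hypothetical sketch of the round structure from the abstract: at each round,
# greedily add the item with the largest *noisy* marginal gain of a submodular
# utility (here a simple coverage function, an assumed stand-in for h_q).
rng = np.random.default_rng(1)
items = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}  # item -> covered elements

def coverage(S):
    return len(set().union(*(items[i] for i in S))) if S else 0

S = []
for _ in range(3):
    # noisy marginal gain of each remaining item
    gains = {i: coverage(S + [i]) - coverage(S) + 0.01 * rng.standard_normal()
             for i in items if i not in S}
    S.append(max(gains, key=gains.get))

print(S, coverage(S))
```

With small noise the loop behaves like the classical greedy algorithm: it picks item 3 first (marginal gain 3), then item 1, covering all five elements.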

Emergent segmentation from participation dynamics and multi-learner retraining

1 code implementation 6 Jun 2022 Sarah Dean, Mihaela Curmei, Lillian J. Ratliff, Jamie Morgenstern, Maryam Fazel

We study the participation and retraining dynamics that arise when both the learners and sub-populations of users are \emph{risk-reducing}, a setting that covers a broad class of updates including gradient descent and multiplicative weights.

Learning in Congestion Games with Bandit Feedback

no code implementations 4 Jun 2022 Qiwen Cui, Zhihan Xiong, Maryam Fazel, Simon S. Du

We propose a centralized algorithm for Markov congestion games, whose sample complexity again has only polynomial dependence on all relevant problem parameters, but not the size of the action set.

Decision-Dependent Risk Minimization in Geometrically Decaying Dynamic Environments

no code implementations 8 Apr 2022 Mitas Ray, Dmitriy Drusvyatskiy, Maryam Fazel, Lillian J. Ratliff

This paper studies the problem of expected loss minimization given a data distribution that is dependent on the decision-maker's action and evolves dynamically in time according to a geometric decay process.

System Identification via Nuclear Norm Regularization

1 code implementation 30 Mar 2022 Yue Sun, Samet Oymak, Maryam Fazel

Hankel regularization encourages the Hankel matrix to be low-rank, which corresponds to the system having low order.

Model Selection
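The rank/order connection behind nuclear norm regularization is easy to see numerically. Below is a minimal sketch, assuming a first-order system with impulse response $h_k = a^k$: the Hankel matrix built from $h$ has rank 1, and the nuclear norm (sum of singular values) is the convex surrogate for that rank.

```python
import numpy as np

# Impulse response of an assumed first-order system x_{k+1} = a x_k: h_k = a^k.
a = 0.8
h = a ** np.arange(10)

# 5x6 Hankel matrix of the Markov parameters: H[i, j] = h[i + j].
H = np.array([[h[i + j] for j in range(6)] for i in range(5)])

svals = np.linalg.svd(H, compute_uv=False)
nuclear_norm = svals.sum()            # convex surrogate for rank
rank = int(np.sum(svals > 1e-10))     # numerical rank
print(rank, round(nuclear_norm, 3))
```

A geometric impulse response gives a rank-1 Hankel matrix, so a single singular value carries the whole nuclear norm; higher-order systems would contribute more singular values.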

Flat minima generalize for low-rank matrix recovery

no code implementations 7 Mar 2022 Lijun Ding, Dmitriy Drusvyatskiy, Maryam Fazel, Zaid Harchaoui

Empirical evidence suggests that for a variety of overparameterized nonlinear models, most notably in neural network training, the growth of the loss around a minimizer strongly impacts its performance.

Matrix Completion

Towards Sample-efficient Overparameterized Meta-learning

1 code implementation NeurIPS 2021 Yue Sun, Adhyyan Narang, Halil Ibrahim Gulluk, Samet Oymak, Maryam Fazel

Specifically, for (1), we first show that learning the optimal representation coincides with the problem of designing a task-aware regularization to promote inductive bias.

Few-Shot Learning, Inductive Bias

Multiplayer Performative Prediction: Learning in Decision-Dependent Games

no code implementations 10 Jan 2022 Adhyyan Narang, Evan Faulkner, Dmitriy Drusvyatskiy, Maryam Fazel, Lillian J. Ratliff

We show that under mild assumptions, the performatively stable equilibria can be found efficiently by a variety of algorithms, including repeated retraining and the repeated (stochastic) gradient method.

Fast First-Order Methods for Monotone Strongly DR-Submodular Maximization

no code implementations 15 Nov 2021 Omid Sadeghi, Maryam Fazel

Then, we study $L$-smooth monotone strongly DR-submodular functions that have bounded curvature, and we show how to exploit such additional structure to obtain algorithms with improved approximation guarantees and faster convergence rates for the maximization problem.
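For readers unfamiliar with DR-submodular maximization, here is a hedged sketch of the standard Frank-Wolfe / continuous-greedy baseline (not this paper's accelerated algorithms). The objective $f(x) = 1 - \prod_i(1 - x_i)$ is monotone and DR-submodular but not concave, and the budgeted box constraint is an assumption for illustration.

```python
import numpy as np

# f(x) = 1 - prod(1 - x_i): probabilistic coverage, monotone DR-submodular.
def f(x):
    return 1.0 - np.prod(1.0 - x)

def grad(x):
    # df/dx_i = prod_{j != i} (1 - x_j)
    p = np.prod(1.0 - x)
    return np.where(1.0 - x > 1e-12, p / (1.0 - x), 1.0)

# Continuous-greedy over the budgeted box {x in [0,1]^n : sum(x) <= b}.
n, b, K = 4, 2.0, 100
x = np.zeros(n)
for _ in range(K):
    g = grad(x)
    # Linear maximization over the constraint set: spend the budget on the
    # largest-gradient coordinates, each capped at 1.
    v = np.zeros(n)
    budget = b
    for i in np.argsort(-g):
        v[i] = min(1.0, budget)
        budget -= v[i]
        if budget <= 0:
            break
    x += v / K                       # step size 1/K
print(round(f(x), 3))
```

On this toy instance the iterate converges to a point putting the full budget on two coordinates, which is optimal here; the paper's contribution is faster rates and better approximation factors under strong DR-submodularity and bounded curvature.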

Selective Sampling for Online Best-arm Identification

no code implementations NeurIPS 2021 Romain Camilleri, Zhihan Xiong, Maryam Fazel, Lalit Jain, Kevin Jamieson

The main results of this work precisely characterize this trade-off between labeled samples and stopping time and provide an algorithm that nearly-optimally achieves the minimal label complexity given a desired stopping time.

Binary Classification

Near-Optimal Randomized Exploration for Tabular Markov Decision Processes

no code implementations 19 Feb 2021 Zhihan Xiong, Ruoqi Shen, Qiwen Cui, Maryam Fazel, Simon S. Du

To achieve the desired result, we develop 1) a new clipping operation to ensure both the probability of being optimistic and the probability of being pessimistic are lower bounded by a constant, and 2) a new recursive formula for the absolute value of estimation errors to analyze the regret.

Sample Efficient Subspace-based Representations for Nonlinear Meta-Learning

no code implementations 14 Feb 2021 Halil Ibrahim Gulluk, Yue Sun, Samet Oymak, Maryam Fazel

We prove that subspace-based representations can be learned in a sample-efficient manner and provably benefit future tasks in terms of sample complexity.

Binary Classification, General Classification, +2

Function Design for Improved Competitive Ratio in Online Resource Allocation with Procurement Costs

no code implementations 23 Dec 2020 Mitas Ray, Omid Sadeghi, Lillian J. Ratliff, Maryam Fazel

We study the problem of online resource allocation, where multiple customers arrive sequentially and the seller must irrevocably allocate resources to each incoming customer while also facing a procurement cost for the total allocation.

A Single Recipe for Online Submodular Maximization with Adversarial or Stochastic Constraints

no code implementations NeurIPS 2020 Omid Sadeghi, Prasanna Raut, Maryam Fazel

In this paper, we consider an online optimization problem in which the reward functions are DR-submodular, and in addition to maximizing the total reward, the sequence of decisions must satisfy some convex constraints on average.

Online DR-Submodular Maximization with Stochastic Cumulative Constraints

no code implementations 29 May 2020 Prasanna Sanjay Raut, Omid Sadeghi, Maryam Fazel

Stochastic long-term constraints arise naturally in applications where there is a limited budget or resource available and resource consumption at each step is governed by stochastically time-varying environments.

Competitive Algorithms for Online Budget-Constrained Continuous DR-Submodular Problems

no code implementations 30 Jun 2019 Omid Sadeghi, Reza Eghbali, Maryam Fazel

In this paper, we study a certain class of online optimization problems, where the goal is to maximize a function that is not necessarily concave and satisfies the Diminishing Returns (DR) property under budget constraints.

Online Continuous DR-Submodular Maximization with Long-Term Budget Constraints

no code implementations 30 Jun 2019 Omid Sadeghi, Maryam Fazel

In this paper, we study a class of online optimization problems with long-term budget constraints where the objective functions are not necessarily concave (nor convex) but they instead satisfy the Diminishing Returns (DR) property.

Escaping from saddle points on Riemannian manifolds

no code implementations NeurIPS 2019 Yue Sun, Nicolas Flammarion, Maryam Fazel

We consider minimizing a nonconvex, smooth function $f$ on a Riemannian manifold $\mathcal{M}$.
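A minimal illustration of the setting, assuming the simplest Riemannian manifold (the unit sphere) and the quadratic $f(x) = x^\top A x$: perturbed Riemannian gradient descent, with noise injected when the gradient is small, escapes the critical points at the top eigenvectors and descends to the bottom one. This is a toy sketch of the phenomenon, not the paper's algorithm or rates.

```python
import numpy as np

# f(x) = x^T A x on the unit sphere; its minimizers are the eigenvectors of A
# with the smallest eigenvalue. The other eigenvectors are critical points
# where plain Riemannian gradient descent would stall.
rng = np.random.default_rng(2)
A = np.diag([3.0, 2.0, -1.0])
x = np.array([1.0, 0.0, 0.0])          # start at a critical point (top eigenvector)
eta = 0.1
for _ in range(200):
    g = 2 * A @ x                      # Euclidean gradient
    g_tan = g - (g @ x) * x            # project onto the tangent space (Riemannian gradient)
    if np.linalg.norm(g_tan) < 1e-3:   # near a critical point: inject a small perturbation
        g_tan += 0.01 * rng.standard_normal(3)
    x = x - eta * g_tan
    x = x / np.linalg.norm(x)          # retraction back to the sphere
print(round(x @ A @ x, 3))
```

Without the perturbation the iterate would stay at the starting critical point forever; with it, $x^\top A x$ converges to the smallest eigenvalue, $-1$.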

Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator

no code implementations ICML 2018 Maryam Fazel, Rong Ge, Sham M. Kakade, Mehran Mesbahi

Direct policy gradient methods for reinforcement learning and continuous control problems are a popular approach for a variety of reasons: 1) they are easy to implement without explicit knowledge of the underlying model; 2) they are an "end-to-end" approach, directly optimizing the performance metric of interest; 3) they inherently allow for richly parameterized policies.

Continuous Control, Policy Gradient Methods
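A hedged toy version of this setting, not the paper's analysis: for a scalar system $x_{t+1} = a x_t + b u_t$ with linear policy $u_t = -k x_t$, the infinite-horizon LQR cost has a closed form, so we can run model-free gradient descent on the gain $k$ (with a finite-difference gradient, a "zeroth-order" flavor) and compare against the optimal gain from the scalar Riccati equation. All constants below are assumptions.

```python
# Scalar LQR: x_{t+1} = a x_t + b u_t, cost sum_t (q x_t^2 + r u_t^2), u_t = -k x_t.
a, b, q, r = 0.9, 0.5, 1.0, 1.0

def cost(k):
    # Closed-form infinite-horizon cost for x_0 = 1 (stable iff |a - b k| < 1).
    c = a - b * k
    return (q + r * k * k) / (1.0 - c * c)

# Gradient descent on the policy gain, using a finite-difference gradient.
k, eps, lr = 0.5, 1e-5, 0.05
for _ in range(500):
    g = (cost(k + eps) - cost(k - eps)) / (2 * eps)
    k -= lr * g

# Optimal gain via fixed-point iteration on the scalar Riccati equation.
p = q
for _ in range(1000):
    p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
k_star = a * b * p / (r + b * b * p)
print(round(k, 3), round(k_star, 3))
```

On this instance gradient descent on the nonconvex cost recovers the Riccati-optimal gain, which is the global-convergence phenomenon the paper proves in general.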

Global Convergence of Policy Gradient Methods for Linearized Control Problems

no code implementations ICLR 2018 Maryam Fazel, Rong Ge, Sham M. Kakade, Mehran Mesbahi

Direct policy gradient methods for reinforcement learning and continuous control problems are a popular approach for a variety of reasons: 1) they are easy to implement without explicit knowledge of the underlying model; 2) they are an "end-to-end" approach, directly optimizing the performance metric of interest; 3) they inherently allow for richly parameterized policies.

Continuous Control, Policy Gradient Methods

Designing smoothing functions for improved worst-case competitive ratio in online optimization

no code implementations NeurIPS 2016 Reza Eghbali, Maryam Fazel

Online optimization covers problems such as online resource allocation, online bipartite matching, adwords (a central problem in e-commerce and advertising), and adwords with separable concave returns.

Relative Density and Exact Recovery in Heterogeneous Stochastic Block Models

no code implementations 15 Dec 2015 Amin Jalali, Qiyang Han, Ioana Dumitriu, Maryam Fazel

For instance, $\log n$ is considered to be the standard lower bound on the cluster size for exact recovery via convex methods for the homogeneous SBM.

Stochastic Block Model

Variational Gram Functions: Convex Analysis and Optimization

no code implementations 16 Jul 2015 Amin Jalali, Maryam Fazel, Lin Xiao

We propose a new class of convex penalty functions, called \emph{variational Gram functions} (VGFs), that can promote pairwise relations, such as orthogonality, among a set of vectors in a vector space.

General Classification
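One simple instance of such a penalty (an assumption chosen for illustration; the paper studies the general variational class): the sum of absolute off-diagonal entries of the Gram matrix $XX^\top$, which is zero exactly when the rows of $X$ are pairwise orthogonal.

```python
import numpy as np

# Orthogonality-promoting penalty on the rows of X: the sum of absolute
# off-diagonal entries of the Gram matrix X X^T.
def vgf_penalty(X):
    G = X @ X.T                           # Gram matrix of the row vectors
    off_diag = G - np.diag(np.diag(G))
    return np.abs(off_diag).sum()

orthogonal = np.array([[1.0, 0.0], [0.0, 1.0]])
correlated = np.array([[1.0, 0.0], [1.0, 0.1]])
print(vgf_penalty(orthogonal), vgf_penalty(correlated))
```

The penalty vanishes on the orthogonal pair and grows with pairwise correlation, which is the "promote pairwise relations" behavior the abstract describes.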

Exponentiated Subgradient Algorithm for Online Optimization under the Random Permutation Model

no code implementations 27 Oct 2014 Reza Eghbali, Jon Swenson, Maryam Fazel

Online optimization problems arise in many resource allocation tasks, where the future demands for each resource and the associated utility functions change over time and are not known a priori, yet resources must be allocated at every point in time despite the future uncertainty.

Universal Convexification via Risk-Aversion

no code implementations 3 Jun 2014 Krishnamurthy Dvijotham, Maryam Fazel, Emanuel Todorov

We develop a framework for convexifying a fairly general class of optimization problems.

Stochastic Optimization

Learning Graphical Models With Hubs

no code implementations 28 Feb 2014 Kean Ming Tan, Palma London, Karthik Mohan, Su-In Lee, Maryam Fazel, Daniela Witten

We consider the problem of learning a high-dimensional graphical model in which certain hub nodes are highly-connected to many other nodes.

Node-Based Learning of Multiple Gaussian Graphical Models

no code implementations 21 Mar 2013 Karthik Mohan, Palma London, Maryam Fazel, Daniela Witten, Su-In Lee

We consider estimation under two distinct assumptions: (1) differences between the K networks are due to individual nodes that are perturbed across conditions, or (2) similarities among the K networks are due to the presence of common hub nodes that are shared across all K networks.

Structured Learning of Gaussian Graphical Models

no code implementations NeurIPS 2012 Karthik Mohan, Mike Chung, Seungyeop Han, Daniela Witten, Su-In Lee, Maryam Fazel

We consider estimation of multiple high-dimensional Gaussian graphical models corresponding to a single set of nodes under several distinct conditions.
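The Gaussian graphical model estimation problem underlying these three papers can be illustrated with a greatly simplified stand-in (an assumption, not the papers' penalized-likelihood estimators): invert a shrunk sample covariance and threshold small entries of the precision matrix to read off the conditional-independence graph. The papers replace this naive recipe with structured penalties that share hubs or common edges across conditions.

```python
import numpy as np

# Naive precision-matrix estimate for a single condition: nonzero off-diagonal
# entries of the inverse covariance correspond to edges of the Gaussian
# graphical model. The true model (an assumption) has a single edge (0, 1).
rng = np.random.default_rng(3)
n, p = 2000, 4
prec_true = np.array([[2.0, 0.6, 0.0, 0.0],
                      [0.6, 2.0, 0.0, 0.0],
                      [0.0, 0.0, 2.0, 0.0],
                      [0.0, 0.0, 0.0, 2.0]])
cov_true = np.linalg.inv(prec_true)
X = rng.multivariate_normal(np.zeros(p), cov_true, size=n)

S = np.cov(X, rowvar=False) + 0.05 * np.eye(p)   # small shrinkage for stability
prec_hat = np.linalg.inv(S)
edges = (np.abs(prec_hat) > 0.3) & ~np.eye(p, dtype=bool)
print(edges.astype(int))
```

With enough samples the recovered edge set matches the true graph; the hard (and interesting) regime the papers address is high dimensions with shared structure across several such models.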
