Search Results for author: Sobhan Miryoosefi

Found 8 papers, 2 papers with code

Landscape-Aware Growing: The Power of a Little LAG

no code implementations • 4 Jun 2024 • Stefani Karp, Nikunj Saunshi, Sobhan Miryoosefi, Sashank J. Reddi, Sanjiv Kumar

Instead, we identify that behavior at initialization can be misleading as a predictor of final performance and present an alternative perspective based on early training dynamics, which we call "landscape-aware growing (LAG)".

Efficient Stagewise Pretraining via Progressive Subnetworks

no code implementations • 8 Feb 2024 • Abhishek Panigrahi, Nikunj Saunshi, Kaifeng Lyu, Sobhan Miryoosefi, Sashank Reddi, Satyen Kale, Sanjiv Kumar

RaPTr achieves better pre-training loss for BERT and UL2 language models while requiring 20-33% fewer FLOPs compared to standard training, and is competitive or better than other efficient training methods.
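The progressive-subnetwork idea behind this line of work can be illustrated with a minimal sketch: at each training step only a random subset of layers is active, and the subset grows across stages until the full network is trained. All names below (`sample_subnetwork`, `stagewise_schedule`, the keep-fractions, the choice to always keep the first and last layers) are illustrative assumptions, not the authors' RaPTr implementation.

```python
import random

def sample_subnetwork(num_layers, keep_fraction):
    """Sample a random subset of layer indices to train this step.

    As an illustrative assumption, the first and last layers are always
    kept; keep_fraction controls how many middle layers stay active.
    """
    middle = list(range(1, num_layers - 1))
    k = round(keep_fraction * len(middle))
    kept_middle = sorted(random.sample(middle, k))
    return [0] + kept_middle + [num_layers - 1]

def stagewise_schedule(num_layers, stage_fractions):
    """Yield (stage_index, active_layers) pairs; later stages keep more layers."""
    for i, frac in enumerate(stage_fractions):
        yield i, sample_subnetwork(num_layers, frac)

# Example: a 12-layer model trained in three stages of growing depth.
for stage, layers in stagewise_schedule(12, [0.25, 0.5, 1.0]):
    print(f"stage {stage}: {len(layers)} active layers")
```

Since a shrunken subnetwork runs fewer layers per step, early stages cost proportionally fewer FLOPs, which is the source of the reported 20-33% savings.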

Provable Reinforcement Learning with a Short-Term Memory

no code implementations • 8 Feb 2022 • Yonathan Efroni, Chi Jin, Akshay Krishnamurthy, Sobhan Miryoosefi

Real-world sequential decision making problems commonly involve partial observability, which requires the agent to maintain a memory of history in order to infer the latent states, plan, and make good decisions.

Tasks: Decision Making, Reinforcement Learning +1
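The short-term memory approach described above can be sketched as a fixed-length buffer over the most recent observations, which the agent treats as its state. This is a minimal illustration of the general idea of conditioning on an m-step history window; the class name and padding convention are assumptions for this sketch, not the paper's construction.

```python
from collections import deque

class ShortTermMemory:
    """Keep the last m observations and expose them as a single state tuple."""

    def __init__(self, m):
        self.m = m
        self.buffer = deque(maxlen=m)  # old observations fall off automatically

    def observe(self, obs):
        """Record a new observation and return the current m-step state."""
        self.buffer.append(obs)
        return self.state()

    def state(self):
        # Pad with None on the left until m observations have arrived.
        pad = [None] * (self.m - len(self.buffer))
        return tuple(pad + list(self.buffer))

# Example: with m=3, the state is always the last three observations.
mem = ShortTermMemory(3)
for obs in ["a", "b", "c", "d"]:
    print(mem.observe(obs))
```

An agent can then run any standard (fully observable) RL algorithm on these windowed states, trading a larger effective state space for the ability to disambiguate latent states.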

A Simple Reward-free Approach to Constrained Reinforcement Learning

no code implementations • 12 Jul 2021 • Sobhan Miryoosefi, Chi Jin

In constrained reinforcement learning (RL), a learning agent seeks not only to optimize the overall reward but also to satisfy additional safety, diversity, or budget constraints.

Tasks: Reinforcement Learning (RL)
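The constrained RL setting above is commonly formalized as a constrained MDP; the episodic notation below is a standard formulation assumed here, not taken from the paper:

```latex
\max_{\pi} \; \mathbb{E}_{\pi}\!\left[\sum_{t=1}^{H} r(s_t, a_t)\right]
\quad \text{s.t.} \quad
\mathbb{E}_{\pi}\!\left[\sum_{t=1}^{H} c_i(s_t, a_t)\right] \le \tau_i,
\qquad i = 1, \dots, m,
```

where $r$ is the reward, each $c_i$ is a constraint cost (e.g. a safety or budget signal) with threshold $\tau_i$, and $H$ is the horizon.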

Bellman Eluder Dimension: New Rich Classes of RL Problems, and Sample-Efficient Algorithms

no code implementations • NeurIPS 2021 • Chi Jin, Qinghua Liu, Sobhan Miryoosefi

Finding the minimal structural assumptions that empower sample-efficient learning is one of the most important research directions in Reinforcement Learning (RL).

Tasks: Reinforcement Learning (RL)
