Search Results for author: Rasool Fakoor

Found 19 papers, 8 papers with code

Memory-augmented Attention Modelling for Videos

1 code implementation • 7 Nov 2016 • Rasool Fakoor, Abdel-rahman Mohamed, Margaret Mitchell, Sing Bing Kang, Pushmeet Kohli

We present a method to improve video description generation by modeling higher-order interactions between video frames and described concepts.

Video Description
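
For readers unfamiliar with attention over video frames, here is a minimal sketch of soft temporal attention conditioned on a decoder state, in the spirit of the attention modelling above. All module names and dimensions are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of soft temporal attention over video frame features,
# conditioned on the description decoder's state. Names and dimensions
# are assumptions for illustration, not the paper's code.
import torch
import torch.nn as nn

class FrameAttention(nn.Module):
    def __init__(self, frame_dim=512, state_dim=256, attn_dim=128):
        super().__init__()
        self.proj_frame = nn.Linear(frame_dim, attn_dim)
        self.proj_state = nn.Linear(state_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, frames, state):
        # frames: (batch, n_frames, frame_dim); state: (batch, state_dim)
        e = torch.tanh(self.proj_frame(frames) + self.proj_state(state).unsqueeze(1))
        alpha = torch.softmax(self.score(e).squeeze(-1), dim=1)  # attention weights
        context = (alpha.unsqueeze(-1) * frames).sum(dim=1)      # weighted frame summary
        return context, alpha

ctx, w = FrameAttention()(torch.randn(2, 40, 512), torch.randn(2, 256))
```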

Reinforcement Learning To Adapt Speech Enhancement to Instantaneous Input Signal Quality

no code implementations • 29 Nov 2017 • Rasool Fakoor, Xiaodong He, Ivan Tashev, Shuayb Zarar

Today, the optimal performance of existing noise-suppression algorithms, both data-driven and those based on classic statistical methods, is range-bound to specific levels of instantaneous input signal-to-noise ratio.

reinforcement-learning • Reinforcement Learning (RL) +1

Differentiable Greedy Networks

no code implementations • 30 Oct 2018 • Thomas Powers, Rasool Fakoor, Siamak Shakeri, Abhinav Sethy, Amanjit Kainth, Abdel-rahman Mohamed, Ruhi Sarikaya

Optimal selection of a subset of items from a given set is a hard problem that requires combinatorial optimization.

Claim Verification • Combinatorial Optimization +1
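
A minimal sketch of the relaxation idea behind differentiable greedy selection: replace each hard greedy argmax with a temperature-controlled softmax, masking out items that were already picked, so gradients can flow through the selection. This is an illustrative stand-in under those assumptions, not the paper's exact network.

```python
# Soft greedy subset selection: repeated masked softmax "picks" stand in
# for the hard argmax steps of the greedy algorithm, keeping the whole
# selection differentiable. Illustrative sketch only.
import torch

def soft_greedy_select(scores, k, temperature=0.5):
    """scores: (n,) item utilities; returns k soft one-hot selection vectors."""
    mask = torch.zeros_like(scores)
    picks = []
    for _ in range(k):
        # Suppress already-selected items, then take a differentiable "argmax".
        probs = torch.softmax((scores - 1e9 * mask) / temperature, dim=0)
        picks.append(probs)
        mask = mask + probs.detach()  # approximately remove the chosen item
    return torch.stack(picks)

sel = soft_greedy_select(torch.randn(10, requires_grad=True), k=3)
```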

P3O: Policy-on Policy-off Policy Optimization

1 code implementation • 5 May 2019 • Rasool Fakoor, Pratik Chaudhari, Alexander J. Smola

Extensive experiments on the Atari-2600 and MuJoCo benchmark suites show that this simple technique is effective in reducing the sample complexity of state-of-the-art algorithms.

Reinforcement Learning (RL)
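
As the title suggests, P3O mixes on-policy and off-policy updates. A hedged sketch of such a combined objective is below: an on-policy policy-gradient term from fresh rollouts plus an off-policy term from replayed data with clipped importance weights. The exact weighting and the way P3O controls the importance ratios differ; tensor names and the clipping rule here are assumptions.

```python
# Illustrative combined on/off-policy loss. Not P3O's exact objective:
# the paper controls the on/off mixing and importance ratios differently.
import torch

def combined_policy_loss(logp_on, adv_on, logp_off, logp_behav, adv_off, clip=1.0):
    on_term = -(logp_on * adv_on).mean()                    # on-policy gradient (fresh rollouts)
    rho = torch.exp(logp_off - logp_behav).clamp(max=clip)  # clipped importance weights
    off_term = -(rho * adv_off).mean()                      # off-policy gradient (replay buffer)
    return on_term + off_term
```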

Meta-Q-Learning

2 code implementations • ICLR 2020 • Rasool Fakoor, Pratik Chaudhari, Stefano Soatto, Alexander J. Smola

This paper introduces Meta-Q-Learning (MQL), a new off-policy algorithm for meta-Reinforcement Learning (meta-RL).

Continuous Control • Meta Reinforcement Learning +1
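
A common way to make off-policy Q-learning "meta" is to condition the value function on a context vector that summarizes the recent trajectory of the current task; the sketch below illustrates that idea. Dimensions, module names, and the recurrent encoder choice are assumptions for illustration, not the MQL reference code.

```python
# Context-conditioned Q-function: a recurrent encoder summarizes recent
# (obs, action, reward) transitions into a task context z, and the critic
# conditions on it. Illustrative sketch under assumed dimensions.
import torch
import torch.nn as nn

class ContextQ(nn.Module):
    def __init__(self, obs_dim=17, act_dim=6, ctx_dim=32):
        super().__init__()
        self.encoder = nn.GRU(obs_dim + act_dim + 1, ctx_dim, batch_first=True)
        self.q = nn.Sequential(
            nn.Linear(obs_dim + act_dim + ctx_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, obs, act, recent):
        # recent: (batch, T, obs+act+reward) transitions from the current task
        _, h = self.encoder(recent)   # task context from history
        z = h.squeeze(0)              # (batch, ctx_dim)
        return self.q(torch.cat([obs, act, z], dim=-1))

q_val = ContextQ()(torch.randn(4, 17), torch.randn(4, 6), torch.randn(4, 20, 24))
```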

TraDE: Transformers for Density Estimation

no code implementations • 6 Apr 2020 • Rasool Fakoor, Pratik Chaudhari, Jonas Mueller, Alexander J. Smola

We present TraDE, a self-attention-based architecture for auto-regressive density estimation with continuous and discrete valued data.

Density Estimation • Out-of-Distribution Detection
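
The general recipe is easy to sketch: a causally masked transformer reads x_{<d} and outputs the parameters of p(x_d | x_{<d}), so the joint density factorizes autoregressively. The toy below emits a single Gaussian per coordinate; TraDE's output model and conditioning are richer, and all sizes here are assumptions.

```python
# Toy self-attention autoregressive density estimator: shift the input
# right, apply a causal mask, and read off per-coordinate Gaussian
# parameters. A simplified sketch, not the paper's architecture.
import math
import torch
import torch.nn as nn

class TinyARDensity(nn.Module):
    def __init__(self, dim=8, width=64, heads=4, layers=2):
        super().__init__()
        self.inp = nn.Linear(1, width)
        self.pos = nn.Parameter(torch.zeros(dim, width))
        block = nn.TransformerEncoderLayer(width, heads, 128, batch_first=True)
        self.tf = nn.TransformerEncoder(block, layers)
        self.out = nn.Linear(width, 2)   # mean and log-std per coordinate
        self.dim = dim

    def log_prob(self, x):               # x: (batch, dim)
        # Shift right so position d only sees x_1..x_{d-1}.
        shifted = torch.cat([torch.zeros_like(x[:, :1]), x[:, :-1]], dim=1)
        h = self.inp(shifted.unsqueeze(-1)) + self.pos
        mask = nn.Transformer.generate_square_subsequent_mask(self.dim)
        h = self.tf(h, mask=mask)
        mu, log_std = self.out(h).unbind(-1)
        z = (x - mu) / log_std.exp()
        return (-0.5 * z**2 - log_std - 0.5 * math.log(2 * math.pi)).sum(-1)

lp = TinyARDensity().log_prob(torch.randn(4, 8))
```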

Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation

1 code implementation • NeurIPS 2020 • Rasool Fakoor, Jonas Mueller, Nick Erickson, Pratik Chaudhari, Alexander J. Smola

Automated machine learning (AutoML) can produce complex model ensembles by stacking, bagging, and boosting many individual models like trees, deep networks, and nearest neighbor estimators.

AutoML • Data Augmentation
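
The distillation recipe can be illustrated generically: label augmented inputs with the teacher ensemble, then fit a simple student to those labels. The Gaussian jitter below is a placeholder assumption; the paper uses a learned, tabular-aware augmentation scheme.

```python
# Generic teacher-student distillation with augmented data. The random
# forest stands in for an AutoML ensemble, and Gaussian jitter stands in
# for the paper's tabular augmenter; both are illustrative choices.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

teacher = RandomForestClassifier(n_estimators=100).fit(X, y)  # stand-in "ensemble"

# Augment: jitter real rows, then label everything with teacher probabilities.
X_aug = np.vstack([X, X + 0.1 * rng.normal(size=X.shape)])
soft_labels = teacher.predict_proba(X_aug)[:, 1]

# Distill into a simple student (here via thresholded soft labels; a fuller
# implementation would regress the probabilities directly).
student = LogisticRegression().fit(X_aug, (soft_labels > 0.5).astype(int))
```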

DDPG++: Striving for Simplicity in Continuous-control Off-Policy Reinforcement Learning

no code implementations • 26 Jun 2020 • Rasool Fakoor, Pratik Chaudhari, Alexander J. Smola

This paper prescribes a suite of techniques for off-policy Reinforcement Learning (RL) that simplify the training process and reduce the sample complexity.

Continuous Control • reinforcement-learning +1

Regioned Episodic Reinforcement Learning

no code implementations • 1 Jan 2021 • Jiarui Jin, Cong Chen, Ming Zhou, Weinan Zhang, Rasool Fakoor, David Wipf, Yong Yu, Jun Wang, Alex Smola

Goal-oriented reinforcement learning algorithms are often good at exploration, not exploitation, while episodic algorithms excel at exploitation, not exploration.

reinforcement-learning • Reinforcement Learning (RL)

TraDE: A Simple Self-Attention-Based Density Estimator

no code implementations • 1 Jan 2021 • Rasool Fakoor, Pratik Anil Chaudhari, Jonas Mueller, Alex Smola

We present TraDE, a self-attention-based architecture for auto-regressive density estimation with continuous and discrete valued data.

Density Estimation • Out-of-Distribution Detection

Explore with Dynamic Map: Graph Structured Reinforcement Learning

no code implementations • 1 Jan 2021 • Jiarui Jin, Sijin Zhou, Weinan Zhang, Rasool Fakoor, David Wipf, Tong He, Yong Yu, Zheng Zhang, Alex Smola

In reinforcement learning, a map with states and transitions built from historical trajectories is often helpful for exploration and exploitation.

reinforcement-learning • Reinforcement Learning (RL)
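
A generic version of such a map is straightforward to build: nodes are visited states and edges are observed transitions, weighted by visit counts. The sketch below is a plain construction under those assumptions, not the paper's dynamic-map algorithm.

```python
# Build a transition map from logged trajectories; the resulting graph
# supports goal-directed queries such as shortest paths. Generic sketch,
# not the paper's specific method.
import networkx as nx

def build_map(trajectories):
    g = nx.DiGraph()
    for traj in trajectories:                 # traj: list of hashable states
        for s, s_next in zip(traj, traj[1:]):
            if g.has_edge(s, s_next):
                g[s][s_next]["count"] += 1    # reinforce a seen transition
            else:
                g.add_edge(s, s_next, count=1)
    return g

g = build_map([[(0, 0), (0, 1), (1, 1)], [(0, 0), (1, 0), (1, 1)]])
path = nx.shortest_path(g, (0, 0), (1, 1))    # map supports planning queries
```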

Continuous Doubly Constrained Batch Reinforcement Learning

1 code implementation • NeurIPS 2021 • Rasool Fakoor, Jonas Mueller, Kavosh Asadi, Pratik Chaudhari, Alexander J. Smola

Reliant on too many experiments to learn good actions, current Reinforcement Learning (RL) algorithms have limited applicability in real-world settings, which can be too expensive to allow exploration.

reinforcement-learning • Reinforcement Learning (RL)

Flexible Model Aggregation for Quantile Regression

1 code implementation • 26 Feb 2021 • Rasool Fakoor, Taesup Kim, Jonas Mueller, Alexander J. Smola, Ryan J. Tibshirani

Quantile regression is a fundamental problem in statistical learning motivated by a need to quantify uncertainty in predictions, or to model a diverse population without being overly reductive.

Econometrics • Prediction Intervals +1
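
Quantile regression minimizes the pinball loss: for target level tau, under-predictions cost tau per unit and over-predictions cost (1 - tau) per unit, so the minimizer is the conditional tau-quantile. A small self-contained check:

```python
# Pinball (quantile) loss; minimizing it over constants recovers the
# empirical tau-quantile, verified here by a simple grid search.
import numpy as np

def pinball_loss(y, y_hat, tau):
    diff = y - y_hat
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

y = np.random.default_rng(0).normal(size=1000)
grid = np.linspace(-3, 3, 601)
best = grid[np.argmin([pinball_loss(y, c, 0.9) for c in grid])]
# `best` lands close to np.quantile(y, 0.9)
```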

Graph-Enhanced Exploration for Goal-oriented Reinforcement Learning

no code implementations • ICLR 2022 • Jiarui Jin, Sijin Zhou, Weinan Zhang, Tong He, Yong Yu, Rasool Fakoor

Goal-oriented Reinforcement Learning (GoRL) is a promising approach for scaling up RL techniques on sparse-reward environments that require long-horizon planning.

Continuous Control • graph construction +2

Faster Deep Reinforcement Learning with Slower Online Network

1 code implementation • 10 Dec 2021 • Kavosh Asadi, Rasool Fakoor, Omer Gottesman, Taesup Kim, Michael L. Littman, Alexander J. Smola

In this paper we endow two popular deep reinforcement learning algorithms, namely DQN and Rainbow, with updates that incentivize the online network to remain in the proximity of the target network.

reinforcement-learning • Reinforcement Learning (RL)
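
One way to keep the online network near the target network is a proximal penalty added to the TD loss, which effectively slows the online parameters. The sketch below shows that form; the coefficient and exact wiring are assumptions rather than the paper's precise recipe.

```python
# TD loss plus a proximal term pulling the online parameters toward the
# target network's. Illustrative form; the coefficient c is an assumption.
import torch

def proximal_td_loss(td_error, online_net, target_net, c=0.01):
    prox = sum((p - pt.detach()).pow(2).sum()
               for p, pt in zip(online_net.parameters(), target_net.parameters()))
    return td_error.pow(2).mean() + 0.5 * c * prox

net, tgt = torch.nn.Linear(4, 2), torch.nn.Linear(4, 2)
loss = proximal_td_loss(torch.randn(32), net, tgt)
```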

Task-Agnostic Continual Reinforcement Learning: Gaining Insights and Overcoming Challenges

2 code implementations • 28 May 2022 • Massimo Caccia, Jonas Mueller, Taesup Kim, Laurent Charlin, Rasool Fakoor

We pose two hypotheses: (1) task-agnostic methods might provide advantages in settings with limited data, computation, or high dimensionality, and (2) faster adaptation may be particularly beneficial in continual learning settings, helping to mitigate the effects of catastrophic forgetting.

Continual Learning • Continuous Control +3

TAIL: Task-specific Adapters for Imitation Learning with Large Pretrained Models

no code implementations • 9 Oct 2023 • Zuxin Liu, Jesse Zhang, Kavosh Asadi, Yao Liu, Ding Zhao, Shoham Sabach, Rasool Fakoor

Inspired by recent advancements in parameter-efficient fine-tuning in language domains, we explore efficient fine-tuning techniques -- e.g., Bottleneck Adapters, P-Tuning, and Low-Rank Adaptation (LoRA) -- in TAIL to adapt large pretrained models for new tasks with limited demonstration data.

Continual Learning • Imitation Learning
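
Of the adapter styles mentioned, LoRA is the easiest to sketch: freeze a pretrained linear layer and learn a rank-r update W + (alpha/r) * B A. The wrapper below follows the standard LoRA recipe; the layer sizes are illustrative, and this is a generic sketch rather than TAIL's implementation.

```python
# Minimal LoRA wrapper: the base weights stay frozen, and only the
# low-rank factors A and B are trained. Standard LoRA recipe; sizes
# are illustrative.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                  # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no-op at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(512, 512))
out = layer(torch.randn(2, 512))                     # only A and B receive gradients
```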
