Search Results for author: Seyed Mohammad Asghari

Found 11 papers, 5 papers with code

Efficient Exploration for LLMs

no code implementations • 1 Feb 2024 • Vikranth Dwaracherla, Seyed Mohammad Asghari, Botao Hao, Benjamin Van Roy

We present evidence of substantial benefit from efficient exploration in gathering human feedback to improve large language models.

Efficient Exploration • Thompson Sampling

Approximate Thompson Sampling via Epistemic Neural Networks

1 code implementation • 18 Feb 2023 • Ian Osband, Zheng Wen, Seyed Mohammad Asghari, Vikranth Dwaracherla, Morteza Ibrahimi, Xiuyuan Lu, Benjamin Van Roy

Further, we demonstrate that the epinet -- a small additive network that estimates uncertainty -- matches the performance of large ensembles at orders of magnitude lower computational cost.

Thompson Sampling
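The idea of approximate Thompson sampling with an epistemic neural network can be illustrated with a toy sketch: sample an epistemic index per decision and act greedily under the sampled model. The linear "network" below is a hypothetical stand-in for a trained ENN, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy epistemic "network": a linear model whose output depends on an
# epistemic index z, so different z values correspond to different
# plausible models. (Illustrative stand-in, not a trained ENN.)
n_actions, d_index = 4, 8
base = rng.normal(size=n_actions)                        # point estimates of action values
uncertainty = rng.normal(size=(n_actions, d_index)) * 0.5

def enn_values(z):
    """Action-value estimates conditioned on epistemic index z."""
    return base + uncertainty @ z

def thompson_action():
    """Approximate Thompson sampling: sample one index, act greedily."""
    z = rng.normal(size=d_index)
    return int(np.argmax(enn_values(z)))

actions = [thompson_action() for _ in range(1000)]
```

Sampling a fresh index per decision is what makes the exploration Thompson-like: actions that are plausibly optimal under some index still get tried.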

Fine-Tuning Language Models via Epistemic Neural Networks

1 code implementation • 3 Nov 2022 • Ian Osband, Seyed Mohammad Asghari, Benjamin Van Roy, Nat McAleese, John Aslanides, Geoffrey Irving

Language models often pre-train on large unsupervised text corpora, then fine-tune on additional task-specific data.

Active Learning • Language Modelling

Robustness of Epinets against Distributional Shifts

no code implementations • 1 Jul 2022 • Xiuyuan Lu, Ian Osband, Seyed Mohammad Asghari, Sven Gowal, Vikranth Dwaracherla, Zheng Wen, Benjamin Van Roy

However, these improvements are relatively small compared to the outstanding issues in distributionally-robust deep learning.

Ensembles for Uncertainty Estimation: Benefits of Prior Functions and Bootstrapping

no code implementations • 8 Jun 2022 • Vikranth Dwaracherla, Zheng Wen, Ian Osband, Xiuyuan Lu, Seyed Mohammad Asghari, Benjamin Van Roy

In machine learning, an agent needs to estimate uncertainty in order to explore efficiently, adapt, and make effective decisions.
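The combination named in the title -- an ensemble whose members each add a fixed random prior function and train on a bootstrap resample -- can be sketched as follows. The ridge-regression members and all sizes are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_ridge(X, y, lam=1e-3):
    """Closed-form ridge regression for the trainable part of a member."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Synthetic regression data (toy example).
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

K = 10
priors = [rng.normal(size=3) for _ in range(K)]   # fixed, untrained prior functions
members = []
for p in priors:
    idx = rng.integers(0, len(X), size=len(X))    # bootstrap resample
    w = fit_ridge(X[idx], y[idx] - X[idx] @ p)    # fit the residual after the prior
    members.append(w + p)                         # member prediction = trainable + prior

preds = np.stack([X @ w for w in members])        # shape (K, n)
mean, std = preds.mean(axis=0), preds.std(axis=0) # std serves as an uncertainty estimate
```

The fixed priors keep members diverse where data is scarce, while bootstrapping injects diversity from the data itself; the ensemble spread `std` is the resulting uncertainty signal.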

Epistemic Neural Networks

1 code implementation • NeurIPS 2023 • Ian Osband, Zheng Wen, Seyed Mohammad Asghari, Vikranth Dwaracherla, Morteza Ibrahimi, Xiuyuan Lu, Benjamin Van Roy

We introduce the epinet: an architecture that can supplement any conventional neural network, including large pretrained models, and can be trained with modest incremental computation to estimate uncertainty.
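The abstract's description -- a small additive network supplementing a conventional network -- can be sketched in a few lines. All shapes, the tanh feature map, and the untrained random weights below are assumptions for illustration; the paper's actual architecture and training differ.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal epinet-style sketch: a base network produces features and logits;
# a small additive network maps (features, epistemic index z) to a correction,
# so varying z yields an ensemble-like spread of outputs at little extra cost.
d_in, d_feat, d_out, d_index = 5, 16, 3, 7

W_base = rng.normal(size=(d_in, d_feat)) * 0.3            # base network (toy, untrained)
W_head = rng.normal(size=(d_feat, d_out)) * 0.3
W_epi = rng.normal(size=(d_feat + d_index, d_out)) * 0.1  # small additive network

def forward(x, z):
    feat = np.tanh(x @ W_base)
    logits = feat @ W_head                       # conventional prediction
    epi_in = np.concatenate([feat, z])           # features + epistemic index
    return logits + epi_in @ W_epi               # additive correction

x = rng.normal(size=d_in)
samples = np.stack([forward(x, rng.normal(size=d_index)) for _ in range(100)])
uncertainty = samples.std(axis=0)                # spread across indices
```

Because only the small additive network conditions on the index, sampling many indices is far cheaper than evaluating an ensemble of full networks.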

Learning to Code: Coded Caching via Deep Reinforcement Learning

no code implementations • 9 Dec 2019 • Navid Naderializadeh, Seyed Mohammad Asghari

We consider a system comprising a file library and a network with a server and multiple users equipped with cache memories.

Reinforcement Learning +1
