Search Results for author: Miguel Sarabia

Found 6 papers, 3 papers with code

Sample-Efficient Preference-based Reinforcement Learning with Dynamics Aware Rewards

1 code implementation • 28 Feb 2024 • Katherine Metcalf, Miguel Sarabia, Natalie Mackraz, Barry-John Theobald

Preference-based reinforcement learning (PbRL) aligns robot behavior with human preferences via a reward function learned from binary preference feedback over pairs of agent behaviors.

reinforcement-learning
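
As a rough illustration of the preference-learning step described above (not the paper's implementation), the sketch below fits a reward model to binary preferences over pairs of trajectory segments with a Bradley-Terry style loss; the network sizes, tensor shapes, and function names are assumptions for the example only.

```python
# Illustrative sketch of reward learning from binary preferences (PbRL).
# Not the paper's code; shapes and architecture are assumptions.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def segment_return(self, obs, act):
        # obs: (T, obs_dim), act: (T, act_dim) -> scalar predicted return of a segment
        return self.net(torch.cat([obs, act], dim=-1)).sum()

def preference_loss(model, seg_a, seg_b, prefer_a: bool):
    # Bradley-Terry style loss: the preferred segment should receive the higher return.
    r_a = model.segment_return(*seg_a)
    r_b = model.segment_return(*seg_b)
    logits = torch.stack([r_a, r_b]).unsqueeze(0)   # shape (1, 2)
    target = torch.tensor([0 if prefer_a else 1])   # index of the preferred segment
    return nn.functional.cross_entropy(logits, target)
```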

Novel-View Acoustic Synthesis from 3D Reconstructed Rooms

1 code implementation • 23 Oct 2023 • Byeongjoo Ahn, Karren Yang, Brian Hamilton, Jonathan Sheaffer, Anurag Ranjan, Miguel Sarabia, Oncel Tuzel, Jen-Hao Rick Chang

Given audio recordings from 2-4 microphones and the 3D geometry and material of a scene containing multiple unknown sound sources, we estimate the sound anywhere in the scene.
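
The sketch below is only a toy illustration of the rendering setup, not the paper's method: it assumes each source signal is already separated and substitutes a free-field delay-and-attenuation impulse response for the geometry- and material-aware acoustics the paper estimates.

```python
# Toy illustration: render sound at a novel listener position by convolving
# per-source signals with (here, free-field placeholder) room impulse responses.
import numpy as np
from scipy.signal import fftconvolve

SPEED_OF_SOUND = 343.0  # m/s

def free_field_rir(src_pos, listener_pos, fs=16000, length=4000):
    """Placeholder RIR: one delayed, distance-attenuated impulse.
    A real system would account for scene geometry and materials."""
    dist = np.linalg.norm(np.asarray(src_pos) - np.asarray(listener_pos))
    delay = int(round(dist / SPEED_OF_SOUND * fs))
    rir = np.zeros(max(length, delay + 1))
    rir[delay] = 1.0 / max(dist, 1e-3)
    return rir

def render_at_position(source_signals, source_positions, listener_pos, fs=16000):
    rendered = [
        fftconvolve(sig, free_field_rir(pos, listener_pos, fs), mode="full")
        for sig, pos in zip(source_signals, source_positions)
    ]
    n = max(len(r) for r in rendered)
    return sum(np.pad(r, (0, n - len(r))) for r in rendered)
```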

Spatial LibriSpeech: An Augmented Dataset for Spatial Audio Learning

1 code implementation • 18 Aug 2023 • Miguel Sarabia, Elena Menyaylenko, Alessandro Toso, Skyler Seto, Zakaria Aldeneh, Shadi Pirhosseinloo, Luca Zappella, Barry-John Theobald, Nicholas Apostoloff, Jonathan Sheaffer

We present Spatial LibriSpeech, a spatial audio dataset with over 650 hours of 19-channel audio, first-order ambisonics, and optional distractor noise.

Position
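
As a hedged example of working with first-order ambisonics (FOA) audio of the kind this dataset provides (not code from the dataset's release), the snippet below steers a virtual cardioid microphone from the FOA channels; the ACN channel order and SN3D normalization are assumptions to verify against the dataset documentation.

```python
# Illustrative sketch: steer a virtual cardioid microphone from first-order
# ambisonics. Assumes ACN channel order (W, Y, Z, X) with SN3D normalization.
import numpy as np

def foa_virtual_cardioid(foa, azimuth, elevation):
    """foa: array of shape (4, num_samples); angles in radians."""
    w, y, z, x = foa  # ACN order assumed
    ux = np.cos(elevation) * np.cos(azimuth)
    uy = np.cos(elevation) * np.sin(azimuth)
    uz = np.sin(elevation)
    # Cardioid = 0.5 * (omnidirectional + figure-of-eight toward the look direction)
    return 0.5 * (w + ux * x + uy * y + uz * z)
```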

Rewards Encoding Environment Dynamics Improves Preference-based Reinforcement Learning

no code implementations • 12 Nov 2022 • Katherine Metcalf, Miguel Sarabia, Barry-John Theobald

In this work, we demonstrate that encoding environment dynamics in the reward function (REED) dramatically reduces the number of preference labels required in state-of-the-art preference-based RL frameworks.

Reinforcement Learning (RL)
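
A minimal sketch of the REED idea as stated in the abstract (not the authors' code): share an encoder between the preference-learned reward head and a dynamics objective, here a simple next-state prediction standing in for whatever self-supervised objective the paper actually uses.

```python
# Illustrative sketch: a reward representation that also predicts the next state,
# so the reward encoder is encouraged to capture environment dynamics.
import torch
import torch.nn as nn

class DynamicsAwareReward(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim + act_dim, hidden), nn.ReLU())
        self.next_state_head = nn.Linear(hidden, obs_dim)  # dynamics objective
        self.reward_head = nn.Linear(hidden, 1)            # preference objective

    def forward(self, obs, act):
        z = self.encoder(torch.cat([obs, act], dim=-1))
        return self.reward_head(z), self.next_state_head(z)

def dynamics_loss(model, obs, act, next_obs):
    # Auxiliary loss trained alongside the preference loss on the same encoder.
    _, pred_next = model(obs, act)
    return nn.functional.mse_loss(pred_next, next_obs)
```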

On the role of Lip Articulation in Visual Speech Perception

no code implementations • 18 Mar 2022 • Zakaria Aldeneh, Masha Fedzechkina, Skyler Seto, Katherine Metcalf, Miguel Sarabia, Nicholas Apostoloff, Barry-John Theobald

Previous research has shown that traditional metrics used to optimize and assess models for generating lip motion from speech are not good indicators of subjective opinion of animation quality.
