Search Results for author: Jarred Barber

Found 10 papers, 5 papers with code

Sparse Gaussian Processes via Parametric Families of Compactly-supported Kernels

no code implementations • 5 Jun 2020 • Jarred Barber

Gaussian processes are powerful models for probabilistic machine learning, but are limited in application by their $O(N^3)$ inference complexity.

Gaussian Processes
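
The $O(N^3)$ cost mentioned above comes from factorizing the dense $N \times N$ kernel matrix during exact GP inference; a compactly-supported kernel makes that matrix sparse, which is the structure this line of work exploits. Below is a minimal NumPy/SciPy sketch contrasting the two, using a generic Wendland-type kernel with an arbitrary cutoff radius and nugget rather than the paper's parametric family.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

# Toy 1-D inputs and targets.
rng = np.random.default_rng(0)
N = 2000
X = np.sort(rng.uniform(0.0, 10.0, N))
y = np.sin(X) + 0.1 * rng.standard_normal(N)

def rbf_kernel(X, lengthscale=0.5):
    # Dense squared-exponential kernel: every entry is nonzero.
    d = X[:, None] - X[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

# Exact GP inference needs a factorization of the dense N x N matrix: O(N^3).
K_dense = rbf_kernel(X) + 1e-2 * np.eye(N)
L = np.linalg.cholesky(K_dense)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # GP weight vector K^{-1} y

def wendland_kernel(X, radius=0.5):
    # Compactly-supported (Wendland C2) kernel: exactly zero beyond `radius`,
    # so the Gram matrix is sparse.
    d = np.abs(X[:, None] - X[None, :]) / radius
    k = np.clip(1.0 - d, 0.0, None) ** 4 * (4.0 * d + 1.0)
    return sparse.csc_matrix(k)

K_sparse = wendland_kernel(X) + 1e-2 * sparse.identity(N, format="csc")
alpha_sparse = splu(K_sparse).solve(y)  # sparse factorization, far cheaper

print(f"sparse kernel nonzero fraction: {K_sparse.nnz / N**2:.3f}")
```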

Enhancing Few-Shot Image Classification with Unlabelled Examples

2 code implementations • 17 Jun 2020 • Peyman Bateni, Jarred Barber, Jan-Willem van de Meent, Frank Wood

We develop a transductive meta-learning method that uses unlabelled instances to improve few-shot image classification performance.

Classification • Clustering • +4
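
The transductive idea above is to let unlabelled query examples sharpen the per-class estimates built from the few labelled shots. The sketch below illustrates the generic mechanism on toy features — a soft k-means refinement of class means — not the authors' exact procedure; the assignment rule, feature dimensionality, and step count are placeholders.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def transductive_prototypes(support_x, support_y, query_x, n_classes, n_steps=5):
    """Refine per-class means using unlabelled query features.

    Start from labelled class means, then alternate soft-assigning the
    unlabelled queries and re-estimating means from labelled plus softly
    assigned points.
    """
    protos = np.stack([support_x[support_y == c].mean(axis=0)
                       for c in range(n_classes)])
    for _ in range(n_steps):
        # Soft assignment of queries by negative squared distance.
        d2 = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
        resp = softmax(-d2, axis=1)              # [n_query, n_classes]
        # Weighted re-estimation of each class mean.
        for c in range(n_classes):
            num = support_x[support_y == c].sum(axis=0) + resp[:, c] @ query_x
            den = (support_y == c).sum() + resp[:, c].sum()
            protos[c] = num / den
    return protos

# Toy episode: 3 classes, 5 labelled shots each, 30 unlabelled queries.
rng = np.random.default_rng(0)
means = rng.normal(size=(3, 16)) * 3.0
support_x = np.concatenate([m + rng.normal(size=(5, 16)) for m in means])
support_y = np.repeat(np.arange(3), 5)
query_x = np.concatenate([m + rng.normal(size=(10, 16)) for m in means])
print(transductive_prototypes(support_x, support_y, query_x, n_classes=3).shape)
```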

Improving Few-Shot Visual Classification with Unlabelled Examples

2 code implementations • 28 Sep 2020 • Peyman Bateni, Jarred Barber, Jan-Willem van de Meent, Frank Wood

We propose a transductive meta-learning method that uses unlabelled instances to improve few-shot image classification performance.

Classification • Clustering • +2

End-to-end Alexa Device Arbitration

no code implementations • 8 Dec 2021 • Jarred Barber, Yifeng Fan, Tao Zhang

We introduce a variant of the speaker localization problem, which we call device arbitration.

Beyond Simple Meta-Learning: Multi-Purpose Models for Multi-Domain, Active and Continual Few-Shot Learning

2 code implementations • 13 Jan 2022 • Peyman Bateni, Jarred Barber, Raghav Goyal, Vaden Masrani, Jan-Willem van de Meent, Leonid Sigal, Frank Wood

The first method, Simple CNAPS, employs a hierarchically regularized Mahalanobis-distance-based classifier combined with a state-of-the-art neural adaptive feature extractor to achieve strong performance on the Meta-Dataset, mini-ImageNet and tiered-ImageNet benchmarks.

Active Learning • continual few-shot learning • +3
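
Simple CNAPS, as described above, scores query features by Mahalanobis distance to class means with regularized class covariances. The sketch below shows a plain Mahalanobis classifier whose per-class covariance is shrunk toward a pooled estimate — a simple stand-in for the paper's hierarchical regularization; the fixed shrinkage weight and ridge term are illustrative choices.

```python
import numpy as np

def mahalanobis_classify(support_x, support_y, query_x, n_classes, shrinkage=0.5):
    """Classify queries by Mahalanobis distance to class means.

    Each class covariance is shrunk toward the covariance pooled over the
    whole support set (a stand-in for hierarchical regularization).
    """
    pooled_cov = np.cov(support_x, rowvar=False)
    dim = support_x.shape[1]
    dists = np.zeros((len(query_x), n_classes))
    for c in range(n_classes):
        xc = support_x[support_y == c]
        mean_c = xc.mean(axis=0)
        cov_c = np.cov(xc, rowvar=False) if len(xc) > 1 else np.zeros((dim, dim))
        cov = shrinkage * cov_c + (1 - shrinkage) * pooled_cov + 1e-3 * np.eye(dim)
        inv = np.linalg.inv(cov)
        diff = query_x - mean_c
        dists[:, c] = np.einsum("nd,de,ne->n", diff, inv, diff)
    return dists.argmin(axis=1)

# Toy episode: 3 classes, 5 shots each, 60 queries from the same Gaussians.
rng = np.random.default_rng(1)
means = rng.normal(size=(3, 8)) * 4.0
support_x = np.concatenate([m + rng.normal(size=(5, 8)) for m in means])
support_y = np.repeat(np.arange(3), 5)
query_x = np.concatenate([m + rng.normal(size=(20, 8)) for m in means])
pred = mahalanobis_classify(support_x, support_y, query_x, n_classes=3)
print("accuracy:", (pred == np.repeat(np.arange(3), 20)).mean())
```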

Challenges and Opportunities in Multi-device Speech Processing

no code implementations • 27 Jun 2022 • Gregory Ciccarelli, Jarred Barber, Arun Nair, Israel Cohen, Tao Zhang

We review current solutions and technical challenges for automatic speech recognition, keyword spotting, device arbitration, speech enhancement, and source localization in multi-device home environments to provide context for the INTERSPEECH 2022 special session, "Challenges and opportunities for signal processing and machine learning for multiple smart devices".

Automatic Speech Recognition • Automatic Speech Recognition (ASR) • +3

Muse: Text-To-Image Generation via Masked Generative Transformers

4 code implementations • 2 Jan 2023 • Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, Jose Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T. Freeman, Michael Rubinstein, Yuanzhen Li, Dilip Krishnan

Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to its use of discrete tokens and fewer required sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to its use of parallel decoding.

 Ranked #1 on Text-to-Image Generation on MS-COCO (FID metric)

Language Modelling • Large Language Model • +1
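
The parallel decoding mentioned above refers to predicting many masked discrete tokens per step and keeping only the most confident ones, rather than generating one token at a time. The sketch below shows that generic confidence-based iterative unmasking loop (in the style of MaskGIT) with a random stand-in for the token-prediction transformer; the cosine schedule, step count, and sampling rule are illustrative, not Muse's exact configuration.

```python
import numpy as np

def parallel_decode(predict_logits, seq_len, vocab_size, mask_id, n_steps=8, rng=None):
    """Iteratively fill in masked discrete tokens, several per step.

    `predict_logits(tokens)` is a placeholder for a masked-token transformer:
    it returns logits of shape [seq_len, vocab_size]. At each step we keep
    the most confident predictions and re-mask the rest, following a cosine
    unmasking schedule.
    """
    rng = rng or np.random.default_rng(0)
    tokens = np.full(seq_len, mask_id)                 # start fully masked
    for step in range(n_steps):
        logits = predict_logits(tokens)                # [seq_len, vocab]
        probs = np.exp(logits - logits.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)
        sampled = np.array([rng.choice(vocab_size, p=p) for p in probs])
        confidence = probs[np.arange(seq_len), sampled]
        confidence[tokens != mask_id] = np.inf         # never re-mask fixed tokens
        # Cosine schedule: fraction of positions still masked after this step.
        n_keep_masked = int(np.cos(np.pi / 2 * (step + 1) / n_steps) * seq_len)
        order = np.argsort(confidence)                 # least confident first
        new_tokens = sampled.copy()
        new_tokens[tokens != mask_id] = tokens[tokens != mask_id]
        new_tokens[order[:n_keep_masked]] = mask_id    # re-mask low confidence
        tokens = new_tokens
    return tokens

# Demo with a random "model" standing in for the transformer.
VOCAB, SEQ, MASK = 1024, 256, 1024
dummy = np.random.default_rng(0)

def fake_model(tokens):
    # Stand-in for a masked-token transformer: random logits.
    return dummy.normal(size=(SEQ, VOCAB))

out = parallel_decode(fake_model, SEQ, VOCAB, MASK)
print(out.shape, (out == MASK).sum())  # (256,) 0 -- every position decoded
```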

SPADE: Self-supervised Pretraining for Acoustic DisEntanglement

no code implementations • 3 Feb 2023 • John Harvill, Jarred Barber, Arun Nair, Ramin Pishehvar

Self-supervised representation learning approaches have grown in popularity due to the ability to train models on large amounts of unlabeled data and have demonstrated success in diverse fields such as natural language processing, computer vision, and speech.

Disentanglement

Leveraging Unpaired Data for Vision-Language Generative Models via Cycle Consistency

no code implementations • 5 Oct 2023 • Tianhong Li, Sangnie Bhardwaj, Yonglong Tian, Han Zhang, Jarred Barber, Dina Katabi, Guillaume Lajoie, Huiwen Chang, Dilip Krishnan

We demonstrate image generation and captioning performance on par with state-of-the-art text-to-image and image-to-text models using orders of magnitude less paired image-text data (only 3M pairs).

Text-to-Image Generation
