Search Results for author: Brian Kulis

Found 30 papers, 8 papers with code

Supervised Metric Learning for Retrieval via Contextual Similarity Optimization

1 code implementation • 4 Oct 2022 • Christopher Liao, Theodoros Tsiligkaridis, Brian Kulis

We propose a novel alternative approach, \emph{contextual similarity optimization}, inspired by work in unsupervised metric learning.

Contrastive Learning • Image Retrieval • +1

Latency Control for Keyword Spotting

no code implementations • 15 Jun 2022 • Christin Jose, Joseph Wang, Grant P. Strimel, Mohammad Omar Khursheed, Yuriy Mishchenko, Brian Kulis

We also show that when our approach is used in conjunction with a max-pooling loss, we are able to improve relative false accepts by 25% at a fixed latency when compared to cross-entropy loss.

Keyword Spotting

Pick up the PACE: Fast and Simple Domain Adaptation via Ensemble Pseudo-Labeling

1 code implementation • 26 May 2022 • Christopher Liao, Theodoros Tsiligkaridis, Brian Kulis

Domain Adaptation (DA) has received widespread attention from deep learning researchers in recent years because of its potential to improve test accuracy with out-of-distribution labeled data.

Domain Adaptation

Faster Algorithms for Learning Convex Functions

no code implementations • 2 Nov 2021 • Ali Siahkamari, Durmus Alp Emre Acar, Christopher Liao, Kelly Geyer, Venkatesh Saligrama, Brian Kulis

For the task of convex Lipschitz regression, we establish that our proposed algorithm converges with iteration complexity of $O(n\sqrt{d}/\epsilon)$ for a dataset $\bm X \in \mathbb R^{n\times d}$ and $\epsilon > 0$.

Metric Learning

Tiny-CRNN: Streaming Wakeword Detection In A Low Footprint Setting

no code implementations • 29 Sep 2021 • Mohammad Omar Khursheed, Christin Jose, Rajath Kumar, GengShen Fu, Brian Kulis, Santosh Kumar Cheekatmalla

In this work, we propose Tiny-CRNN (Tiny Convolutional Recurrent Neural Network) models applied to the problem of wakeword detection, and augment them with scaled dot product attention.
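Scaled dot-product attention is a standard building block; a minimal NumPy sketch of the operation the excerpt refers to (shapes and the self-attention toy setup are illustrative, not the paper's Tiny-CRNN implementation):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                      # (t_q, t_k) similarity matrix
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V                                 # weighted sum of values

# Toy example: 4 time steps of 8-dim recurrent features attending over themselves.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(H, H, H)
print(out.shape)  # (4, 8)
```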

$\beta$-Annealed Variational Autoencoder for glitches

no code implementations • 20 Jul 2021 • Sivaramakrishnan Sankarapandian, Brian Kulis

Gravitational wave detectors such as LIGO and Virgo are susceptible to various types of instrumental and environmental disturbances known as glitches which can mask and mimic gravitational waves.

Disentanglement

Substitutional Neural Image Compression

no code implementations • 16 May 2021 • Xiao Wang, Wei Jiang, Wei Wang, Shan Liu, Brian Kulis, Peter Chin

The key idea is to replace the image to be compressed with a substitutional one that outperforms the original one in a desired way.

Image Compression

Small Footprint Convolutional Recurrent Networks for Streaming Wakeword Detection

no code implementations • 25 Nov 2020 • Mohammad Omar Khursheed, Christin Jose, Rajath Kumar, GengShen Fu, Brian Kulis, Santosh Kumar Cheekatmalla

In this work, we propose small footprint Convolutional Recurrent Neural Network models applied to the problem of wakeword detection and augment them with scaled dot product attention.

Real-time Localized Photorealistic Video Style Transfer

no code implementations • 20 Oct 2020 • Xide Xia, Tianfan Xue, Wei-Sheng Lai, Zheng Sun, Abby Chang, Brian Kulis, Jiawen Chen

We present a novel algorithm for transferring artistic styles of semantically meaningful local regions of an image onto local regions of a target video while preserving its photorealism.

Style Transfer • Video Segmentation • +2

Piecewise Linear Regression via a Difference of Convex Functions

2 code implementations • ICML 2020 • Ali Siahkamari, Aditya Gangrade, Brian Kulis, Venkatesh Saligrama

We present a new piecewise linear regression methodology that utilizes fitting a difference of convex functions (DC functions) to the data.
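Any continuous piecewise linear function can be written as a difference of two max-of-affine (convex) functions; a minimal sketch of evaluating such a DC representation (the hat-function example is illustrative, not the paper's fitting procedure):

```python
import numpy as np

def dc_function(x, A1, b1, A2, b2):
    """Evaluate f(x) = max_i(a1_i . x + b1_i) - max_j(a2_j . x + b2_j),
    a difference of two convex (max-of-affine) piecewise linear functions."""
    return np.max(A1 @ x + b1) - np.max(A2 @ x + b2)

# Example: the hat function f(x) = 1 - |x| as g(x) - h(x)
# with g(x) = 1 (affine, hence convex) and h(x) = max(x, -x) = |x|.
A1, b1 = np.array([[0.0]]), np.array([1.0])
A2, b2 = np.array([[1.0], [-1.0]]), np.zeros(2)
for x in [-2.0, 0.0, 0.5]:
    print(dc_function(np.array([x]), A1, b1, A2, b2))  # 1 - |x|: -1.0, 1.0, 0.5
```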

Deep Divergence Learning

no code implementations • ICML 2020 • Kubra Cilingir, Rachel Manzelli, Brian Kulis

Classical linear metric learning methods have recently been extended along two distinct lines: deep metric learning methods for learning embeddings of the data using neural networks, and Bregman divergence learning approaches for extending learning Euclidean distances to more general divergence measures such as divergences over distributions.

Metric Learning

Joint Bilateral Learning for Real-time Universal Photorealistic Style Transfer

3 code implementations • ECCV 2020 • Xide Xia, Meng Zhang, Tianfan Xue, Zheng Sun, Hui Fang, Brian Kulis, Jiawen Chen

Photorealistic style transfer is the task of transferring the artistic style of an image onto a content target, producing a result that is plausibly taken with a camera.

Style Transfer

Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses

1 code implementation • 20 Aug 2019 • Xiao Wang, Siyue Wang, Pin-Yu Chen, Yanzhi Wang, Brian Kulis, Xue Lin, Peter Chin

However, one critical drawback of current defenses is that the robustness enhancement comes at the cost of noticeable performance degradation on legitimate data, e.g., a large drop in test accuracy.

Adversarial Robustness

Learning to Approximate a Bregman Divergence

2 code implementations • NeurIPS 2020 • Ali Siahkamari, Xide Xia, Venkatesh Saligrama, David Castanon, Brian Kulis

Bregman divergences generalize measures such as the squared Euclidean distance and the KL divergence, and arise throughout many areas of machine learning.

Metric Learning
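The defining formula is simple enough to state directly: for a strictly convex $\phi$, $D_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla\phi(y), x - y \rangle$. A minimal sketch showing the two special cases the excerpt mentions (the helper names are illustrative, not the paper's code):

```python
import numpy as np

def bregman(phi, grad_phi, x, y):
    """D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    return phi(x) - phi(y) - grad_phi(y) @ (x - y)

x = np.array([0.2, 0.8])
y = np.array([0.5, 0.5])

# phi(z) = ||z||^2 recovers the squared Euclidean distance ...
sq = lambda z: z @ z
d_euc = bregman(sq, lambda z: 2 * z, x, y)
print(np.isclose(d_euc, np.sum((x - y) ** 2)))  # True

# ... and negative entropy recovers the KL divergence (x, y on the simplex).
negent = lambda z: np.sum(z * np.log(z))
d_kl = bregman(negent, lambda z: np.log(z) + 1, x, y)
print(np.isclose(d_kl, np.sum(x * np.log(x / y))))  # True
```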

Learning Compact Networks via Adaptive Network Regularization

no code implementations • NIPS Workshop CDNNRIA 2018 • Sivaramakrishnan Sankarapandian, Anil Kag, Rachel Manzelli, Brian Kulis

We describe a training strategy that grows the number of units during training, and show on several benchmark datasets that our model yields architectures that are smaller than those obtained when tuning the number of hidden units on a standard fixed architecture.

Conditioning Deep Generative Raw Audio Models for Structured Automatic Music

no code implementations • 26 Jun 2018 • Rachel Manzelli, Vijay Thakkar, Ali Siahkamari, Brian Kulis

Existing automatic music generation approaches that feature deep learning can be broadly classified into two types: raw audio models and symbolic models.

Music Generation

W-Net: A Deep Model for Fully Unsupervised Image Segmentation

10 code implementations • 22 Nov 2017 • Xide Xia, Brian Kulis

While significant attention has been recently focused on designing supervised deep semantic segmentation algorithms for vision tasks, there are many domains in which sufficient supervised pixel-level labels are difficult to obtain.

Image Segmentation • Semantic Segmentation • +1

Dynamic Clustering Algorithms via Small-Variance Analysis of Markov Chain Mixture Models

no code implementations • 26 Jul 2017 • Trevor Campbell, Brian Kulis, Jonathan How

Bayesian nonparametrics are a class of probabilistic models in which the model size is inferred from data.

Stable Distribution Alignment Using the Dual of the Adversarial Distance

no code implementations • ICLR 2018 • Ben Usman, Kate Saenko, Brian Kulis

Our empirical results suggest that using the dual formulation for the restricted family of linear discriminators results in a more stable convergence to a desirable solution when compared with the performance of a primal min-max GAN-like objective and an MMD objective under the same restrictions.

Domain Adaptation

A Sufficient Statistics Construction of Bayesian Nonparametric Exponential Family Conjugate Models

no code implementations • 10 Jan 2016 • Robert Finn, Brian Kulis

Second, we bridge the divide between the discrete and continuous likelihoods by illustrating a canonical construction for stochastic processes whose Lévy measure densities are from positive exponential families, and then demonstrate that these processes in fact form the prior, likelihood, and posterior in a conjugate family.

Learning Theory

Revisiting Kernelized Locality-Sensitive Hashing for Improved Large-Scale Image Retrieval

no code implementations • CVPR 2015 • Ke Jiang, Qichao Que, Brian Kulis

We present a simple but powerful reinterpretation of kernelized locality-sensitive hashing (KLSH), a general and popular method developed in the vision community for performing approximate nearest-neighbor searches in an arbitrary reproducing kernel Hilbert space (RKHS).

Image Retrieval
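Plain LSH with random hyperplanes illustrates the mechanism that KLSH lifts to an RKHS; a minimal sketch (this is the standard Euclidean random-projection construction, not the paper's kernelized variant, and the toy database is illustrative):

```python
import numpy as np

def lsh_codes(X, W):
    """b-bit binary codes: the sign pattern of b random linear projections."""
    return (X @ W > 0).astype(np.uint8)

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 16))     # toy "database" of 100 points
W = rng.standard_normal((16, 12))      # 12 random hyperplanes -> 12-bit codes
codes = lsh_codes(X, W)

q = X[0]                               # query identical to the first point
q_code = lsh_codes(q[None, :], W)[0]
hamming = np.count_nonzero(codes != q_code, axis=1)  # code distance per point
print(int(np.argmin(hamming)))         # 0: the matching point has distance 0
```

Nearby points tend to fall on the same side of most random hyperplanes, so small Hamming distance between codes is a cheap proxy for nearness in the original space.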

Power-Law Graph Cuts

no code implementations • 29 Oct 2014 • Xiangyang Zhou, Jiaxin Zhang, Brian Kulis

Despite strong performance for a number of clustering tasks, spectral graph cut algorithms still suffer from several limitations: first, they require the number of clusters to be known in advance, but this information is often unknown a priori; second, they tend to produce clusters with uniform sizes.

Image Segmentation • Semantic Segmentation

Gamma Processes, Stick-Breaking, and Variational Inference

no code implementations • 4 Oct 2014 • Anirban Roychowdhury, Brian Kulis

In this paper, we present a variational inference framework for models involving gamma process priors.

Variational Inference

Small-Variance Asymptotics for Hidden Markov Models

no code implementations • NeurIPS 2013 • Anirban Roychowdhury, Ke Jiang, Brian Kulis

Starting with the standard HMM, we first derive a “hard” inference algorithm analogous to k-means that arises when particular variances in the model tend to zero.

Dynamic Clustering via Asymptotics of the Dependent Dirichlet Process Mixture

1 code implementation • NeurIPS 2013 • Trevor Campbell, Miao Liu, Brian Kulis, Jonathan P. How, Lawrence Carin

This paper presents a novel algorithm, based upon the dependent Dirichlet process mixture model (DDPMM), for clustering batch-sequential data containing an unknown number of evolving clusters.

Small-Variance Asymptotics for Exponential Family Dirichlet Process Mixture Models

no code implementations • NeurIPS 2012 • Ke Jiang, Brian Kulis, Michael I. Jordan

Links between probabilistic and non-probabilistic learning algorithms can arise by performing small-variance asymptotics, i.e., letting the variance of particular distributions in a graphical model go to zero.
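The best-known product of this limit is the DP-means algorithm (Kulis & Jordan, 2012): k-means-style updates in which a point whose squared distance to every center exceeds a penalty λ opens a new cluster, so the number of clusters is inferred rather than fixed. A minimal sketch (λ and the toy data are illustrative):

```python
import numpy as np

def dp_means(X, lam, n_iter=20):
    """k-means-like updates, except a point whose squared distance to every
    center exceeds lam opens a new cluster instead of joining one."""
    centers = [X[0].copy()]
    for _ in range(n_iter):
        assign = []
        for x in X:
            d2 = [np.sum((x - c) ** 2) for c in centers]
            j = int(np.argmin(d2))
            if d2[j] > lam:                # too far from all centers:
                centers.append(x.copy())   # open a new cluster at x
                j = len(centers) - 1
            assign.append(j)
        assign = np.array(assign)
        centers = [X[assign == j].mean(axis=0)
                   for j in range(len(centers)) if np.any(assign == j)]
    return np.array(centers), assign

# Two well-separated blobs: with lam between the within- and between-blob
# squared distances, exactly two clusters emerge.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (30, 2)), rng.normal(5, 0.1, (30, 2))])
centers, assign = dp_means(X, lam=1.0)
print(len(centers))  # 2
```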

Inductive Regularized Learning of Kernel Functions

no code implementations • NeurIPS 2010 • Prateek Jain, Brian Kulis, Inderjit S. Dhillon

Our result shows that the learned kernel matrices parameterize a linear transformation kernel function and can be applied inductively to new data points.

Dimensionality Reduction • General Classification • +1

Learning to Hash with Binary Reconstructive Embeddings

no code implementations • NeurIPS 2009 • Brian Kulis, Trevor Darrell

Fast retrieval methods are increasingly critical for many large-scale analysis tasks, and there have been several recent methods that attempt to learn hash functions for fast and accurate nearest neighbor searches.

Online Metric Learning and Fast Similarity Search

no code implementations • NeurIPS 2008 • Prateek Jain, Brian Kulis, Inderjit S. Dhillon, Kristen Grauman

Metric learning algorithms can provide useful distance functions for a variety of domains, and recent work has shown good accuracy for problems where the learner can access all distance constraints at once.

Metric Learning
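Metric learning of this kind typically parameterizes the distance by a positive semidefinite matrix $A$, giving the Mahalanobis form $d_A(x, y) = (x-y)^\top A (x-y)$. A minimal sketch of evaluating such a learned distance (here $A$ is hand-set for illustration, not learned from constraints):

```python
import numpy as np

def mahalanobis(x, y, A):
    """Squared Mahalanobis distance d_A(x, y) = (x - y)^T A (x - y)."""
    d = x - y
    return d @ A @ d

x, y = np.array([1.0, 0.0]), np.array([0.0, 0.0])

# A = I recovers the squared Euclidean distance; a reweighted A stretches axes.
print(mahalanobis(x, y, np.eye(2)))            # 1.0
print(mahalanobis(x, y, np.diag([4.0, 1.0])))  # 4.0
```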
