
1 code implementation • 30 Jun 2022 • Vishal Athreya Baskaran, Jolene Ranek, Siyuan Shan, Natalie Stanley, Junier B. Oliva

Modern high-throughput single-cell immune profiling technologies, such as flow and mass cytometry and single-cell RNA sequencing, can readily measure the expression of a large number of protein or gene features across millions of cells in a multi-patient cohort.

1 code implementation • 28 Jan 2022 • Ryan R. Strauss, Junier B. Oliva

Arbitrary conditioning is an important problem in unsupervised learning, where we seek to model the conditional densities $p(\mathbf{x}_u \mid \mathbf{x}_o)$ that underlie some data, for all possible non-intersecting subsets $o, u \subset \{1, \dots , d\}$.
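When the joint distribution is Gaussian, these arbitrary conditionals have a closed form, which makes a useful reference point. The sketch below is an illustration of that special case, not the paper's learned model; it computes the parameters of $p(\mathbf{x}_u \mid \mathbf{x}_o)$ for any disjoint index sets via the Schur complement:

```python
import numpy as np

def gaussian_conditional(mu, Sigma, o, u, x_o):
    """Parameters of p(x_u | x_o) for a joint Gaussian N(mu, Sigma).

    o, u: disjoint index lists of observed / unobserved dimensions.
    Returns (cond_mean, cond_cov)."""
    mu, Sigma = np.asarray(mu, float), np.asarray(Sigma, float)
    S_oo = Sigma[np.ix_(o, o)]
    S_uo = Sigma[np.ix_(u, o)]
    S_uu = Sigma[np.ix_(u, u)]
    K = S_uo @ np.linalg.inv(S_oo)   # regression coefficients of x_u on x_o
    cond_mean = mu[u] + K @ (x_o - mu[o])
    cond_cov = S_uu - K @ S_uo.T     # Schur complement of S_oo
    return cond_mean, cond_cov
```

For non-Gaussian data no such closed form exists, which is what motivates learning these conditionals with a flexible model.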

1 code implementation • EMNLP 2021 • Somnath Basu Roy Chowdhury, Sayan Ghosh, Yiyuan Li, Junier B. Oliva, Shashank Srivastava, Snigdha Chaturvedi

Contextual representations learned by language models can often encode undesirable attributes, like demographic associations of the users, while being trained for an unrelated target task.

no code implementations • 9 Jul 2021 • Yang Li, Siyuan Shan, Qin Liu, Junier B. Oliva

Our framework can easily handle a large number of features using a hierarchical acquisition policy and is more robust to OOD inputs with the help of an OOD detector for partially observed data.

no code implementations • 11 Feb 2021 • Yang Li, Junier B. Oliva

Modeling dependencies among features is fundamental for many machine learning tasks.

1 code implementation • NeurIPS 2021 • Ryan R. Strauss, Junier B. Oliva

A more general and useful problem is arbitrary conditional density estimation, which aims to model any possible conditional distribution over a set of covariates, reflecting the more realistic setting of inference based on prior knowledge.

1 code implementation • 5 Feb 2021 • Siyuan Shan, Yang Li, Junier B. Oliva

Time series imputation is a fundamental task for understanding time series with missing data.

no code implementations • 18 Jan 2021 • David K. Lim, Naim U. Rashid, Junier B. Oliva, Joseph G. Ibrahim

Deep Learning (DL) methods have dramatically increased in popularity in recent years.

no code implementations • 6 Oct 2020 • Yang Li, Junier B. Oliva

Many real-world situations allow for the acquisition of additional relevant information when making an assessment with limited or uncertain data.

no code implementations • NeurIPS 2020 • Yang Li, Haidong Yi, Christopher M. Bender, Siyuan Shan, Junier B. Oliva

Reasoning over an instance composed of a set of vectors, like a point cloud, requires that one accounts for intra-set dependent features among elements.
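A common baseline for such set-valued inputs is a permutation-invariant sum decomposition, rho(sum_i phi(x_i)), in the style of Deep Sets (Zaheer et al., 2017). The sketch below shows that baseline, not this paper's architecture; note that sum-pooling treats elements independently, which is exactly the limitation the abstract points at:

```python
import numpy as np

def deep_sets(X, phi, rho):
    """Permutation-invariant set function: rho(sum_i phi(x_i)).

    X: (n, d) array of set elements; phi maps the whole array rowwise;
    rho maps the pooled embedding to the output."""
    return rho(phi(X).sum(axis=0))
```

Because the pooled sum is unchanged under any reordering of rows, the output is identical for any permutation of the input set.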

no code implementations • 13 Jun 2020 • Yang Li, Junier B. Oliva

To trade off the improvement with the cost of acquisition, we leverage an information theoretic metric, conditional mutual information, to select the most informative feature to acquire.
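As a toy illustration of this acquisition rule (with discrete variables and exact probability tables, unlike the learned estimates in the paper), one can score each candidate feature by its mutual information with the target and acquire the argmax; conditional mutual information reduces to plain mutual information when nothing has been observed yet, and otherwise averages the per-condition mutual information:

```python
import numpy as np

def mutual_info(pxy):
    """Mutual information (in nats) of a 2-D joint probability table."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

def cond_mutual_info(pxyc):
    """I(X; Y | C) for a 3-D table indexed (x, y, c)."""
    pc = pxyc.sum(axis=(0, 1))
    total = 0.0
    for c, w in enumerate(pc):
        if w > 0:
            total += w * mutual_info(pxyc[:, :, c] / w)
    return total

def select_feature(joints):
    """Greedy step: acquire the candidate j with the largest I(X_j; Y).

    joints[j] is the 2-D joint table p(x_j, y)."""
    scores = [mutual_info(p) for p in joints]
    return int(np.argmax(scores)), scores
```

In this exact-table setting a feature that nearly copies the target scores high, while an independent coin scores exactly zero, so the policy acquires the informative feature first.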

no code implementations • 7 Jun 2020 • Yifeng Shi, Christopher M. Bender, Junier B. Oliva, Marc Niethammer

Clustering and prediction are two primary tasks in the fields of unsupervised and supervised learning, respectively.

no code implementations • ICML 2020 • Christopher M. Bender, Yang Li, Yifeng Shi, Michael K. Reiter, Junier B. Oliva

In this work we develop a novel Bayesian neural network methodology to achieve strong adversarial robustness without the need for online adversarial training.

1 code implementation • 13 Sep 2019 • Yang Li, Shoaib Akbar, Junier B. Oliva

Understanding the dependencies among features of a dataset is at the core of most unsupervised learning tasks.

1 code implementation • NeurIPS 2019 • Eunbyung Park, Junier B. Oliva

We propose meta-curvature (MC), a framework to learn curvature information for better generalization and fast model adaptation.
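In its simplest form, learning curvature amounts to meta-learning a matrix that transforms the inner-loop gradient before the update (the paper uses more efficient factored transforms). The one-matrix sketch below is an illustration of the idea, not the paper's implementation:

```python
import numpy as np

def mc_update(theta, grad, M, lr=0.1):
    """One inner-loop step with a meta-learned curvature matrix M:
    the raw gradient is linearly transformed before the SGD update."""
    return theta - lr * (M @ grad)
```

On a quadratic loss 0.5 * theta.T @ A @ theta, setting M to the inverse Hessian turns the transformed step into a Newton step, which reaches the minimum in one update; meta-learning aims to find such well-conditioned transforms from data rather than by hand.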

no code implementations • 4 Feb 2019 • Yang Li, Tianxiang Gao, Junier B. Oliva

In this work, we propose to learn a generative model using both learned features (through a latent space) and memories (through neighbors).

no code implementations • ICML 2018 • Junier B. Oliva, Avinava Dubey, Manzil Zaheer, Barnabás Póczos, Ruslan Salakhutdinov, Eric P. Xing, Jeff Schneider

Further, through a comprehensive study over both real-world and synthetic data, we show that jointly leveraging transformations of variables and autoregressive conditional models results in a considerable improvement in performance.
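The transformation side of this combination rests on the change-of-variables formula: if z = T(x) with z drawn from a base density, then log p(x) = log p_base(T(x)) + log |det dT/dx|. The sketch below illustrates that principle with a trivially simple elementwise affine transform and a standard normal base; the paper's models use richer learned transforms and autoregressive conditionals:

```python
import numpy as np

def flow_logpdf(x, a, b):
    """Change-of-variables log-density: z = a*x + b with z ~ N(0, I),
    so log p(x) = sum_i [ log N(a_i x_i + b_i; 0, 1) + log a_i ].

    Requires a > 0 elementwise (the Jacobian is diagonal with entries a_i)."""
    z = a * x + b
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi))
    return float(np.sum(log_base + np.log(a)))
```

For a = 1/sigma and b = -mu/sigma this recovers exactly the N(mu, sigma^2) log-density, which is a quick sanity check on the Jacobian term.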

no code implementations • 30 May 2017 • Junier B. Oliva, Kumar Avinava Dubey, Barnabas Poczos, Eric Xing, Jeff Schneider

Afterwards, an RNN is used to compute the conditional distributions of the latent covariates.

2 code implementations • ICML 2017 • Junier B. Oliva, Barnabas Poczos, Jeff Schneider

Sophisticated gated recurrent neural network architectures like LSTMs and GRUs have been shown to be highly effective in a myriad of applications.
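For reference, one step of a standard GRU cell (Cho et al., 2014) can be written in a few lines; this is the baseline gated architecture the sentence refers to, not the unit proposed in the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, p):
    """One step of a standard GRU.

    p: dict of weight matrices Wz, Wr, Wh (hidden x input),
    Uz, Ur, Uh (hidden x hidden) and bias vectors bz, br, bh."""
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h + p["bz"])       # update gate
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h + p["br"])       # reset gate
    h_tilde = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h) + p["bh"])  # candidate
    return (1 - z) * h + z * h_tilde                        # gated blend
```

The gates z and r are what distinguish these architectures from a plain RNN: they let the cell decide, per dimension, how much of the old state to keep and how much of the candidate to admit.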

1 code implementation • NeurIPS 2016 • Kirthevasan Kandasamy, Gautam Dasarathy, Junier B. Oliva, Jeff Schneider, Barnabas Poczos

However, in many cases, cheap approximations to $f$ may be obtainable.

1 code implementation • 20 Mar 2016 • Kirthevasan Kandasamy, Gautam Dasarathy, Junier B. Oliva, Jeff Schneider, Barnabas Poczos

However, in many cases, cheap approximations to $f$ may be obtainable.

no code implementations • 13 Nov 2015 • Junier B. Oliva, Danica J. Sutherland, Barnabás Póczos, Jeff Schneider

The use of distributions and high-level features from deep architectures has become commonplace in modern computer vision.

no code implementations • 24 Sep 2015 • Danica J. Sutherland, Junier B. Oliva, Barnabás Póczos, Jeff Schneider

This work develops the first random features for pdfs whose dot product approximates kernels using these non-Euclidean metrics, allowing estimators using such kernels to scale to large datasets by working in a primal space, without computing large Gram matrices.
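The construction builds on random Fourier features (Rahimi and Recht, 2007), whose dot products approximate a shift-invariant kernel in expectation. The sketch below shows the standard Euclidean version for the Gaussian RBF kernel with unit bandwidth, which is the starting point the paper extends to non-Euclidean metrics on pdfs:

```python
import numpy as np

def rff(X, W, b):
    """Random Fourier feature map phi(x) = sqrt(2/D) * cos(W x + b).

    With W ~ N(0, I) rowwise and b ~ Uniform(0, 2*pi),
    phi(x) . phi(y) approximates exp(-||x - y||^2 / 2)."""
    return np.sqrt(2.0 / W.shape[0]) * np.cos(X @ W.T + b)
```

Working with phi directly in the primal space replaces an n-by-n Gram matrix with an n-by-D feature matrix, which is what makes kernel estimators scale to large datasets.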

no code implementations • 10 Nov 2013 • Junier B. Oliva, Willie Neiswanger, Barnabas Poczos, Jeff Schneider, Eric Xing

We study the problem of distribution to real-value regression, where one aims to regress a mapping $f$ that takes in a distribution input covariate $P\in \mathcal{I}$ (for a non-parametric family of distributions $\mathcal{I}$) and outputs a real-valued response $Y=f(P) + \epsilon$.

no code implementations • 10 Nov 2013 • Junier B. Oliva, Barnabas Poczos, Timothy Verstynen, Aarti Singh, Jeff Schneider, Fang-Cheng Yeh, Wen-Yih Tseng

We present the FuSSO, a functional analogue to the LASSO, that efficiently finds a sparse set of functional input covariates to regress a real-valued response against.
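Sparsity over whole functional covariates is typically induced by a group-lasso penalty on each covariate's block of basis-projection coefficients. The sketch below shows only the block soft-thresholding (proximal) step of such a penalty, under the simplifying assumption that each functional covariate corresponds to one coefficient block; it is an illustration of the sparsity mechanism, not the FuSSO solver itself:

```python
import numpy as np

def group_soft_threshold(beta, groups, lam):
    """Proximal operator of lam * sum_g ||beta_g||_2.

    Shrinks each coefficient block toward zero and zeroes out blocks
    whose norm falls below lam, dropping those covariates entirely."""
    out = np.zeros_like(beta)
    for g in groups:
        norm = np.linalg.norm(beta[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * beta[g]
    return out
```

Blocks survive or vanish as a unit, which is how the estimator selects a sparse set of input functions rather than a sparse set of individual coefficients.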
