Search Results for author: Akash Srivastava

Found 35 papers, 12 papers with code

Private Synthetic Data Meets Ensemble Learning

no code implementations15 Oct 2023 Haoyuan Sun, Navid Azizan, Akash Srivastava, Hao Wang

When machine learning models are trained on synthetic data and then deployed on real data, there is often a performance drop due to the distribution shift between synthetic and real data.

Ensemble Learning
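The snippet above gestures at why ensembling can help under a synthetic-to-real shift: independent synthetic draws yield independently perturbed predictors whose average has lower variance. A minimal numpy sketch of that intuition (the data-generating setup here is entirely hypothetical, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: several synthetic datasets drawn with extra noise
# (mimicking the synthesis gap), real relationship y = 2x.
def synth_dataset(n=200):
    x = rng.normal(0.0, 1.0, n)
    y = 2.0 * x + rng.normal(0.0, 1.0, n)   # noisy synthetic labels
    return x, y

def fit_slope(x, y):
    # Least-squares slope for a no-intercept linear model.
    return (x @ y) / (x @ x)

# Ensemble: average predictors trained on independent synthetic draws.
slopes = [fit_slope(*synth_dataset()) for _ in range(25)]
ensemble_slope = float(np.mean(slopes))
```

Each individual slope is noisy; averaging 25 of them shrinks the variance of the estimate by roughly a factor of 25, which is the basic mechanism ensembling exploits here.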

Learning from Invalid Data: On Constraint Satisfaction in Generative Models

no code implementations27 Jun 2023 Giorgio Giannone, Lyle Regenwetter, Akash Srivastava, Dan Gutfreund, Faez Ahmed

This is particularly problematic when the generated data must satisfy constraints, for example, to meet product specifications in engineering design or to adhere to the laws of physics in a natural scene.

A Probabilistic Framework for Modular Continual Learning

no code implementations11 Jun 2023 Lazar Valkov, Akash Srivastava, Swarat Chaudhuri, Charles Sutton

To address this challenge, we develop a modular CL framework, called PICLE, that accelerates search by using a probabilistic model to cheaply compute the fitness of each composition.

Continual Learning

Improving Tuning-Free Real Image Editing with Proximal Guidance

1 code implementation8 Jun 2023 Ligong Han, Song Wen, Qi Chen, Zhixing Zhang, Kunpeng Song, Mengwei Ren, Ruijiang Gao, Anastasis Stathopoulos, Xiaoxiao He, Yuxiao Chen, Di Liu, Qilong Zhangli, Jindong Jiang, Zhaoyang Xia, Akash Srivastava, Dimitris Metaxas

Null-text inversion (NTI) optimizes null embeddings to align the reconstruction and inversion trajectories with larger CFG scales, enabling real image editing with cross-attention control.

Estimating the Density Ratio between Distributions with High Discrepancy using Multinomial Logistic Regression

no code implementations1 May 2023 Akash Srivastava, Seungwook Han, Kai Xu, Benjamin Rhodes, Michael U. Gutmann

We show that if these auxiliary densities are constructed such that they overlap with $p$ and $q$, then a multi-class logistic regression allows for estimating $\log p/q$ on the domain of any of the $K+2$ distributions and resolves the distribution shift problems of the current state-of-the-art methods.

Binary Classification Density Ratio Estimation +4
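The construction described above can be sketched end-to-end on a toy problem: two bridging Gaussians play the role of the auxiliary densities, and a single softmax classifier over all four classes gives the log-ratio as a logit difference. Everything below (the choice of Gaussians, the (x, x²) features, the training loop) is an illustrative assumption, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# p = N(0,1), q = N(3,1), plus two auxiliary Gaussians that overlap both.
means = [0.0, 1.0, 2.0, 3.0]               # class 0 = p, class 3 = q
n = 4000
x = np.concatenate([rng.normal(m, 1.0, n) for m in means])
y = np.repeat(np.arange(4), n)

# Features (x, x^2) make the softmax model well-specified for Gaussians.
raw = np.column_stack([x, x**2])
mu_f, sd_f = raw.mean(0), raw.std(0)
F = np.column_stack([(raw - mu_f) / sd_f, np.ones(len(raw))])

# Plain full-batch gradient descent on the multinomial cross-entropy.
W = np.zeros((3, 4))
onehot = np.eye(4)[y]
for _ in range(5000):
    logits = F @ W
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    W -= 0.5 * F.T @ (P - onehot) / len(y)

def log_ratio(query):
    """Estimated log p(query)/q(query): the logit difference between the
    p and q classes (valid here because all classes have equal counts)."""
    f = np.array([query, query**2])
    f = np.append((f - mu_f) / sd_f, 1.0)
    logits = f @ W
    return logits[0] - logits[3]
```

For these two Gaussians the true log-ratio is 4.5 − 3x, so the estimate should be large and positive near x = 0, near zero at x = 1.5, and strongly negative at x = 3, despite p and q themselves barely overlapping.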

Multi-Symmetry Ensembles: Improving Diversity and Generalization via Opposing Symmetries

1 code implementation4 Mar 2023 Charlotte Loh, Seungwook Han, Shivchander Sudalairaj, Rumen Dangovski, Kai Xu, Florian Wenzel, Marin Soljacic, Akash Srivastava

In this work, we present Multi-Symmetry Ensembles (MSE), a framework for constructing diverse ensembles by capturing the multiplicity of hypotheses along symmetry axes, which explore the hypothesis space beyond stochastic perturbations of model weights and hyperparameters.

Representation Learning Uncertainty Quantification

Beyond Statistical Similarity: Rethinking Metrics for Deep Generative Models in Engineering Design

no code implementations6 Feb 2023 Lyle Regenwetter, Akash Srivastava, Dan Gutfreund, Faez Ahmed

This paper doubles as a review and a practical guide to evaluation metrics for deep generative models (DGMs) in engineering design.

Drug Discovery Learning Theory +1

On the Importance of Calibration in Semi-supervised Learning

no code implementations10 Oct 2022 Charlotte Loh, Rumen Dangovski, Shivchander Sudalairaj, Seungwook Han, Ligong Han, Leonid Karlinsky, Marin Soljacic, Akash Srivastava

State-of-the-art (SOTA) semi-supervised learning (SSL) methods have been highly successful in leveraging a mix of labeled and unlabeled data by combining techniques of consistency regularization and pseudo-labeling.

LINKS: A dataset of a hundred million planar linkage mechanisms for data-driven kinematic design

1 code implementation30 Aug 2022 Amin Heyrani Nobari, Akash Srivastava, Dan Gutfreund, Faez Ahmed

LINKS is made up of various components including 100 million mechanisms, the simulation data for each mechanism, normalized paths generated by each mechanism, a curated set of paths, the code used to generate the data and simulate mechanisms, and a live web demo for interactive design of linkage mechanisms.

Retrieval

A Bayesian-Symbolic Approach to Reasoning and Learning in Intuitive Physics

no code implementations NeurIPS 2021 Kai Xu, Akash Srivastava, Dan Gutfreund, Felix Sosa, Tomer Ullman, Josh Tenenbaum, Charles Sutton

In this paper, we propose a Bayesian-symbolic framework (BSP) for physical reasoning and learning that is close to human-level sample-efficiency and accuracy.

Bayesian Inference Bilevel Optimization +3

Targeted Neural Dynamical Modeling

2 code implementations NeurIPS 2021 Cole Hurwitz, Akash Srivastava, Kai Xu, Justin Jude, Matthew G. Perich, Lee E. Miller, Matthias H. Hennig

These approaches, however, are limited in their ability to capture the underlying neural dynamics (e.g. linear) and in their ability to relate the learned dynamics back to the observed behaviour (e.g. no time lag).

Equivariant Contrastive Learning

2 code implementations28 Oct 2021 Rumen Dangovski, Li Jing, Charlotte Loh, Seungwook Han, Akash Srivastava, Brian Cheung, Pulkit Agrawal, Marin Soljačić

In state-of-the-art self-supervised learning (SSL), pre-training produces semantically good representations by encouraging them to be invariant under meaningful transformations prescribed from human knowledge.

Contrastive Learning Self-Supervised Learning

Equivariant Self-Supervised Learning: Encouraging Equivariance in Representations

no code implementations ICLR 2022 Rumen Dangovski, Li Jing, Charlotte Loh, Seungwook Han, Akash Srivastava, Brian Cheung, Pulkit Agrawal, Marin Soljacic

In state-of-the-art self-supervised learning (SSL), pre-training produces semantically good representations by encouraging them to be invariant under meaningful transformations prescribed from human knowledge.

Self-Supervised Learning

Scaling Densities For Improved Density Ratio Estimation

no code implementations29 Sep 2021 Akash Srivastava, Seungwook Han, Benjamin Rhodes, Kai Xu, Michael U. Gutmann

As such, estimating density ratios accurately using only samples from $p$ and $q$ is of high significance and has led to a flurry of recent work in this direction.

Binary Classification Density Ratio Estimation

A Bayesian-Symbolic Approach to Learning and Reasoning for Intuitive Physics

no code implementations1 Jan 2021 Kai Xu, Akash Srivastava, Dan Gutfreund, Felix Sosa, Tomer Ullman, Joshua B. Tenenbaum, Charles Sutton

As such, learning the laws reduces to symbolic regression, and Bayesian inference methods are used to obtain the distribution of unobserved properties.

Bayesian Inference Common Sense Reasoning +2

Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modelling

no code implementations25 Oct 2020 Akash Srivastava, Yamini Bansal, Yukun Ding, Cole Hurwitz, Kai Xu, Bernhard Egger, Prasanna Sattigeri, Josh Tenenbaum, David D. Cox, Dan Gutfreund

Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.

Disentanglement
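For context, the aggregate-posterior penalty such methods build on is typically a weighted KL term, as in beta-VAE. A minimal sketch of that objective (beta and the N(0, I) prior are standard choices assumed here, not details taken from this paper):

```python
import numpy as np

def beta_vae_loss(recon_nll, mu, logvar, beta=4.0):
    """Reconstruction term plus a beta-weighted KL penalty on the
    diagonal-Gaussian posterior q(z|x) against a N(0, I) prior.

    mu, logvar: arrays of shape (batch, latent_dim) from the encoder.
    Closed-form KL for diagonal Gaussians:
        KL = -0.5 * sum(1 + logvar - mu^2 - exp(logvar))
    """
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar), axis=-1)
    return recon_nll + beta * kl.mean()
```

With beta > 1 the KL term dominates, which encourages factorized latents at the cost of reconstruction quality; the paper's multi-stage framing targets exactly that trade-off.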

Sequential Transfer Machine Learning in Networks: Measuring the Impact of Data and Neural Net Similarity on Transferability

no code implementations29 Mar 2020 Robin Hirt, Akash Srivastava, Carlos Berg, Niklas Kühl

As the number of data sets in business networks grows and not every neural net transfer is successful, indicators are needed for a transfer's impact on target performance, i.e. its transferability.

CZ-GEM: A FRAMEWORK FOR DISENTANGLED REPRESENTATION LEARNING

no code implementations ICLR 2020 Akash Srivastava, Yamini Bansal, Yukun Ding, Bernhard Egger, Prasanna Sattigeri, Josh Tenenbaum, David D. Cox, Dan Gutfreund

In this work, we tackle a slightly more intricate scenario where the observations are generated from a conditional distribution of some known control variate and some latent noise variate.

Disentanglement

BreGMN: scaled-Bregman Generative Modeling Networks

no code implementations1 Jun 2019 Akash Srivastava, Kristjan Greenewald, Farzaneh Mirzazadeh

Well-definedness of f-divergences, however, requires the distributions of the data and model to overlap completely in every time step of training.

Generative Ratio Matching Networks

no code implementations ICLR 2020 Akash Srivastava, Kai Xu, Michael U. Gutmann, Charles Sutton

In this work, we take their insight of using kernels as fixed adversaries further and present a novel method for training deep generative models that does not involve saddlepoint optimization.

Variational Inference In Pachinko Allocation Machines

no code implementations21 Apr 2018 Akash Srivastava, Charles Sutton

The Pachinko Allocation Machine (PAM) is a deep topic model that allows representing rich correlation structures among topics by a directed acyclic graph over topics.

Variational Inference

HOUDINI: Lifelong Learning as Program Synthesis

2 code implementations NeurIPS 2018 Lazar Valkov, Dipak Chaudhari, Akash Srivastava, Charles Sutton, Swarat Chaudhuri

We present a neurosymbolic framework for the lifelong learning of algorithmic tasks that mix perception and procedural reasoning.

Program Synthesis Transfer Learning

VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning

1 code implementation NeurIPS 2017 Akash Srivastava, Lazar Valkov, Chris Russell, Michael U. Gutmann, Charles Sutton

Deep generative models provide powerful tools for modelling distributions over complicated manifolds, such as those of natural images.

Autoencoding Variational Inference For Topic Models

6 code implementations4 Mar 2017 Akash Srivastava, Charles Sutton

A promising approach to address this problem is autoencoding variational Bayes (AEVB), but it has proven difficult to apply to topic models in practice.

Test Topic Models +1
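One standard trick for applying AEVB to topic models is to replace the Dirichlet prior, which lacks an easy reparameterization, with a logistic-normal Laplace approximation in the softmax basis. A sketch of that construction (the formulas below are the usual softmax-basis approximation; treat them as an assumption rather than this paper's exact implementation):

```python
import numpy as np

def dirichlet_laplace_approx(alpha):
    """Match Dirichlet(alpha) with a logistic normal in the softmax basis.

    Returns the mean and the diagonal of the covariance of the
    approximating Gaussian over K unconstrained logits:
        mu_k  = log a_k - (1/K) sum_i log a_i
        var_k = (1/a_k)(1 - 2/K) + (1/K^2) sum_i (1/a_i)
    Samples z ~ N(mu, diag(var)) pushed through softmax then
    approximate draws from the Dirichlet, and z is reparameterizable.
    """
    alpha = np.asarray(alpha, dtype=float)
    K = alpha.size
    log_a = np.log(alpha)
    mu = log_a - log_a.mean()
    var = (1.0 / alpha) * (1.0 - 2.0 / K) + (1.0 / K**2) * (1.0 / alpha).sum()
    return mu, var
```

A symmetric alpha gives a zero-mean Gaussian, and sparser priors (small alpha) give larger variances, mirroring the heavier tails of the Dirichlet.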

Clustering with a Reject Option: Interactive Clustering as Bayesian Prior Elicitation

no code implementations19 Jun 2016 Akash Srivastava, James Zou, Ryan P. Adams, Charles Sutton

A good clustering can help a data analyst to explore and understand a data set, but what constitutes a good clustering may depend on domain-specific and application-specific criteria.

Clustering

Clustering with a Reject Option: Interactive Clustering as Bayesian Prior Elicitation

no code implementations22 Feb 2016 Akash Srivastava, James Zou, Charles Sutton

A good clustering can help a data analyst to explore and understand a data set, but what constitutes a good clustering may depend on domain-specific and application-specific criteria.

Clustering Computational Efficiency
