Search Results for author: Harsh Shrivastava

Found 15 papers, 7 papers with code

Generative Kaleidoscopic Networks

1 code implementation · 19 Feb 2024 · Harsh Shrivastava

In other words, the neural networks learn a many-to-one mapping and this effect is more prominent as we increase the number of layers or the depth of the neural network.

Federated Learning with Neural Graphical Models

no code implementations · 20 Sep 2023 · Urszula Chajewska, Harsh Shrivastava

We develop an FL framework that maintains a global NGM model, which learns the averaged information from the local NGM models while keeping the training data within each client's environment.

Federated Learning
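The aggregation step at the heart of such a framework can be illustrated with a minimal federated-averaging sketch. The `fedavg` helper and the toy client arrays below are hypothetical illustrations of the general technique, not the paper's implementation:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client parameter arrays (FedAvg-style).

    client_weights: list of 1-D numpy arrays, one per client.
    client_sizes: number of training samples per client, used as weights.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)            # (n_clients, n_params)
    return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()

# Two hypothetical clients; the larger client pulls the average toward it.
w_global = fedavg([np.array([0.0, 2.0]), np.array([4.0, 6.0])], [1, 3])
```

Only the aggregated parameters cross the client boundary; the raw training data never does.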

Knowledge Propagation over Conditional Independence Graphs

no code implementations · 10 Aug 2023 · Urszula Chajewska, Harsh Shrivastava

A Conditional Independence (CI) graph is a special type of Probabilistic Graphical Model (PGM) in which feature connections are modeled as an undirected graph and the edge weights give the partial correlation strength between features.
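Those edge weights can be recovered from a precision (inverse covariance) matrix via the standard identity rho_ij = -Theta_ij / sqrt(Theta_ii * Theta_jj). A small numpy sketch with a toy 3-variable matrix (illustrative, not taken from the paper):

```python
import numpy as np

def partial_correlations(precision):
    """Partial correlation matrix from a precision (inverse covariance) matrix.

    rho_ij = -Theta_ij / sqrt(Theta_ii * Theta_jj); the off-diagonal entries
    are the edge weights of the corresponding CI graph.
    """
    d = np.sqrt(np.diag(precision))
    rho = -precision / np.outer(d, d)
    np.fill_diagonal(rho, 1.0)
    return rho

# Toy precision matrix: variables 0 and 2 are conditionally independent
# given variable 1 (their precision entry is zero), so their edge is absent.
theta = np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  2.0]])
rho = partial_correlations(theta)
```

A zero in the precision matrix translates directly into a missing edge in the CI graph, which is what makes these graphs useful for reading off conditional independencies.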

DiversiGATE: A Comprehensive Framework for Reliable Large Language Models

no code implementations · 22 Jun 2023 · Shima Imani, Ali Beyram, Harsh Shrivastava

In this paper, we introduce DiversiGATE, a unified framework that consolidates diverse methodologies for LLM verification.

Arithmetic Reasoning GSM8K +1

Are uGLAD? Time will tell!

1 code implementation · 21 Mar 2023 · Shima Imani, Harsh Shrivastava

Segmentation of multivariate time series data is a technique for identifying meaningful patterns or changes in the time series that can signal a shift in the system's behavior.

EEG Segmentation +1

MathPrompter: Mathematical Reasoning using Large Language Models

no code implementations · 4 Mar 2023 · Shima Imani, Liang Du, Harsh Shrivastava

Large Language Models (LLMs) have limited performance when solving arithmetic reasoning tasks and often provide incorrect answers.

Arithmetic Reasoning Math +2
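MathPrompter improves reliability by generating several independent solutions to the same problem and trusting an answer only when they agree. A toy stand-in for that consensus check (the `consensus_answer` helper is hypothetical, not the paper's code):

```python
from collections import Counter

def consensus_answer(candidates, min_agreement=2):
    """Return the most common candidate answer if enough independent solver
    runs agree on it, otherwise None. A toy stand-in for cross-checking
    multiple generated solutions before trusting an arithmetic answer.
    """
    value, count = Counter(candidates).most_common(1)[0]
    return value if count >= min_agreement else None

# Three hypothetical solver outputs for the same word problem.
assert consensus_answer([42, 42, 41]) == 42   # two runs agree: accept
assert consensus_answer([1, 2, 3]) is None    # no agreement: abstain
```

Abstaining when solutions disagree trades coverage for a lower rate of confidently wrong answers.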

Neural Graph Revealers

1 code implementation · 27 Feb 2023 · Harsh Shrivastava, Urszula Chajewska

Sparse graph recovery methods work well when the data follow their assumptions, but they are often not designed to support downstream probabilistic queries.

Methods for Recovering Conditional Independence Graphs: A Survey

1 code implementation · 13 Nov 2022 · Harsh Shrivastava, Urszula Chajewska

Conditional Independence (CI) graphs are a type of probabilistic graphical model used primarily to gain insights into feature relationships.

Neural Graphical Models

2 code implementations · 2 Oct 2022 · Harsh Shrivastava, Urszula Chajewska

Theoretically these models can represent very complex dependency functions, but in practice often simplifying assumptions are made due to computational limitations associated with graph operations.

Multi-Task Learning

uGLAD: Sparse graph recovery by optimizing deep unrolled networks

4 code implementations · 23 May 2022 · Harsh Shrivastava, Urszula Chajewska, Robin Abraham, Xinshi Chen

Our model, uGLAD, builds upon and extends the state-of-the-art model GLAD to the unsupervised setting.

Multi-Task Learning

Echo State Speech Recognition

no code implementations · 18 Feb 2021 · Harsh Shrivastava, Ankush Garg, Yuan Cao, Yu Zhang, Tara Sainath

We propose automatic speech recognition (ASR) models inspired by echo state network (ESN), in which a subset of recurrent neural networks (RNN) layers in the models are randomly initialized and untrained.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +1
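The defining trait of an echo state network is that the recurrent weights stay fixed after random initialization; only a readout on top of the reservoir states is trained. A minimal numpy sketch of running such an untrained reservoir over an input sequence (toy sizes and initialization scheme assumed, not the paper's ASR architecture):

```python
import numpy as np

def reservoir_states(inputs, n_reservoir=50, spectral_radius=0.9, seed=0):
    """Run a fixed, randomly initialized reservoir over an input sequence.

    The recurrent weights are random and never trained, as in an echo state
    network; only a linear readout on these states would be learned.
    """
    rng = np.random.default_rng(seed)
    n_in = inputs.shape[1]
    w_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_in))
    w = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
    # Rescale so the largest eigenvalue magnitude equals spectral_radius,
    # a standard condition associated with the echo state property.
    w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        x = np.tanh(w_in @ u + w @ x)   # recurrent update, weights frozen
        states.append(x)
    return np.stack(states)

states = reservoir_states(np.random.default_rng(1).normal(size=(20, 3)))
```

Because the recurrent weights are never updated, training reduces to fitting the readout, which is far cheaper than backpropagating through the full RNN.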

Learning What Not to Model: Gaussian Process Regression with Negative Constraints

no code implementations · 1 Jan 2021 · Gaurav Shrivastava, Harsh Shrivastava, Abhinav Shrivastava

But, what if for an input point '$\bar{\mathbf{x}}$', we want to constrain the GP to avoid a target regression value '$\bar{y}(\bar{\mathbf{x}})$' (a negative datapair)?

Navigate regression
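For context, the quantity a negative constraint would push away from an undesired target is the standard GP posterior mean. A minimal numpy sketch of that unconstrained baseline (RBF kernel and toy data assumed; the paper's negative-constraint mechanism is not reproduced here):

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean of plain GP regression: K_* (K + noise*I)^{-1} y.

    A negative-constraint variant would additionally steer this mean away
    from undesired target values at specified inputs.
    """
    k = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    return rbf(x_test, x_train) @ np.linalg.solve(k, y_train)

x = np.array([0.0, 1.0, 2.0])
y = np.sin(x)
mean = gp_posterior_mean(x, y, np.array([1.0]))  # close to sin(1) at a training point
```

With small observation noise, the posterior mean nearly interpolates the training targets, which is the behavior a negative datapair would locally override.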

Cooperative neural networks (CoNN): Exploiting prior independence structure for improved classification

no code implementations · NeurIPS 2018 · Harsh Shrivastava, Eugene Bart, Bob Price, Hanjun Dai, Bo Dai, Srinivas Aluru

We propose a new approach, called cooperative neural networks (CoNN), which uses a set of cooperatively trained neural networks to capture latent representations that exploit a given prior independence structure.

General Classification text-classification +1

GLAD: Learning Sparse Graph Recovery

1 code implementation · ICLR 2020 · Harsh Shrivastava, Xinshi Chen, Binghong Chen, Guanghui Lan, Srinivas Aluru, Han Liu, Le Song

Recently, there has been a surge of interest in learning algorithms directly from data; in this case, the goal is to learn a mapping from the empirical covariance matrix to the sparse precision matrix.

Inductive Bias
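A crude classical baseline for that covariance-to-precision mapping is direct inversion followed by hard thresholding; GLAD instead learns the mapping with unrolled optimization. The sketch below shows only the naive baseline (toy data and a hypothetical helper, not GLAD itself):

```python
import numpy as np

def sparse_precision_baseline(samples, threshold=0.1):
    """Map samples to a sparse precision matrix by inverting the empirical
    covariance and zeroing small entries -- a naive baseline for the
    covariance-to-precision mapping that GLAD learns from data.
    """
    cov = np.cov(samples, rowvar=False)
    # Small ridge term keeps the inversion stable for near-singular covariances.
    theta = np.linalg.inv(cov + 1e-3 * np.eye(cov.shape[0]))
    theta[np.abs(theta) < threshold] = 0.0
    return theta

# Toy data: 500 samples of 4 independent standard normals, so the true
# precision matrix is (approximately) the identity.
rng = np.random.default_rng(0)
theta = sparse_precision_baseline(rng.normal(size=(500, 4)))
```

Such a baseline is brittle at small sample sizes, which is part of the motivation for learning the recovery procedure instead of hand-designing it.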
