no code implementations • 6 Mar 2024 • Yibo Jiang, Goutham Rajendran, Pradeep Ravikumar, Bryon Aragam, Victor Veitch
To that end, we introduce a simple latent variable model to abstract and formalize the concept dynamics of next-token prediction.
no code implementations • 14 Feb 2024 • Goutham Rajendran, Simon Buchholz, Bryon Aragam, Bernhard Schölkopf, Pradeep Ravikumar
In this work, we relate these two approaches and study how to learn human-interpretable concepts from data.
1 code implementation • 29 Nov 2023 • Goutham Rajendran, Patrik Reizinger, Wieland Brendel, Pradeep Ravikumar
We investigate the relationship between system identification and intervention design in dynamical systems.
no code implementations • NeurIPS 2023 • Simon Buchholz, Goutham Rajendran, Elan Rosenfeld, Bryon Aragam, Bernhard Schölkopf, Pradeep Ravikumar
We study the problem of learning causal representations from unknown, latent interventions in a general setting, where the latent distribution is Gaussian but the mixing function is completely general.
no code implementations • 9 Feb 2023 • Goutham Rajendran
In this work, we analyze the performance of the SoS hierarchy on fundamental problems stemming from statistics, theoretical computer science and statistical physics.
no code implementations • 6 Sep 2022 • Goutham Rajendran, Madhur Tulsiani
Using our general framework, we derive bounds for "sparse graph matrices", which were obtained only recently by Jones et al. [FOCS 2021] via a nontrivial application of the trace power method and were a core component of their work.
1 code implementation • 17 Aug 2022 • Goutham Rajendran, Wei Zou
Therefore, the models we develop for various tasks should be robust to such noisy data, a need that has given rise to the thriving field of robust machine learning.
Tasks: Automatic Speech Recognition (ASR) +1
no code implementations • 20 Jun 2022 • Bohdan Kivva, Goutham Rajendran, Pradeep Ravikumar, Bryon Aragam
We prove identifiability of a broad class of deep latent variable models that (a) have universal approximation capabilities and (b) are the decoders of variational autoencoders that are commonly used in practice.
no code implementations • NeurIPS 2021 • Goutham Rajendran, Bohdan Kivva, Ming Gao, Bryon Aragam
Greedy algorithms have long been a workhorse for learning graphical models, and more broadly for learning statistical models with sparse structure.
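To make the greedy idea concrete, here is a minimal illustrative sketch (not the paper's algorithm): greedy neighborhood selection for one variable in a sparse Gaussian model, where we repeatedly add the candidate variable most correlated with the current regression residual. The function name, threshold, and stopping rule are hypothetical choices for this example.

```python
import numpy as np

def greedy_neighborhood(X, target, max_degree=3, min_corr=0.1):
    """Greedily select neighbors of column `target` in data matrix X.

    Illustrative sketch of greedy sparse structure learning: at each
    step, add the variable with the highest absolute correlation to the
    current least-squares residual of the target, then refit.
    """
    n, d = X.shape
    y = X[:, target].copy()
    candidates = [j for j in range(d) if j != target]
    selected = []
    residual = y - y.mean()
    for _ in range(max_degree):
        # Score each remaining candidate by |corr(residual, X_j)|.
        scores = {j: abs(np.corrcoef(residual, X[:, j])[0, 1])
                  for j in candidates}
        best = max(scores, key=scores.get)
        if scores[best] < min_corr:  # no candidate explains the residual
            break
        selected.append(best)
        candidates.remove(best)
        # Refit least squares on the selected set; update the residual.
        A = X[:, selected]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = y - A @ coef
    return selected
```

On synthetic data where the target is a noisy sum of two other columns, the greedy loop recovers exactly those two columns and then stops, which is the sparsity-exploiting behavior the entry alludes to.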
1 code implementation • NeurIPS 2021 • Bohdan Kivva, Goutham Rajendran, Pradeep Ravikumar, Bryon Aragam
We study the problem of reconstructing a causal graphical model from data in the presence of latent variables.