Search Results for author: Volodymyr Kuleshov

Found 39 papers, 18 papers with code

Caduceus: Bi-Directional Equivariant Long-Range DNA Sequence Modeling

1 code implementation 5 Mar 2024 Yair Schiff, Chia-Hsiang Kao, Aaron Gokaslan, Tri Dao, Albert Gu, Volodymyr Kuleshov

Large-scale sequence modeling has sparked rapid advances that now extend into biology and genomics.

QuIP#: Even Better LLM Quantization with Hadamard Incoherence and Lattice Codebooks

1 code implementation 6 Feb 2024 Albert Tseng, Jerry Chee, Qingyao Sun, Volodymyr Kuleshov, Christopher De Sa

Second, QuIP# uses vector quantization techniques to take advantage of the ball-shaped sub-Gaussian distribution that incoherent weights possess: specifically, we introduce a set of hardware-efficient codebooks based on the highly symmetric $E_8$ lattice, which achieves the optimal 8-dimensional unit ball packing.

Quantization
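
A minimal sketch (not the QuIP# codebase) of nearest-point rounding in the $E_8$ lattice that underlies the codebooks mentioned above, using the standard Conway-Sloane decoder that treats $E_8$ as $D_8 \cup (D_8 + \tfrac{1}{2})$:

```python
import numpy as np

def round_to_D8(x):
    """Nearest point of D8 = {z in Z^8 : sum(z) even} to x."""
    f = np.rint(x)                                 # coordinate-wise rounding
    if f.sum() % 2 != 0:                           # parity violated: re-round the worst coordinate
        i = np.argmax(np.abs(x - f))
        f[i] += np.sign(x[i] - f[i]) or 1.0        # move to the second-nearest integer
    return f

def round_to_E8(x):
    """Nearest point of E8 = D8 ∪ (D8 + 1/2) to x."""
    c0 = round_to_D8(x)
    c1 = round_to_D8(x - 0.5) + 0.5
    return c0 if np.sum((x - c0) ** 2) <= np.sum((x - c1) ** 2) else c1

w = np.random.randn(8)                             # an 8-dim block of (incoherent) weights
print(round_to_E8(w))                              # its nearest E8 lattice point
```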

DySLIM: Dynamics Stable Learning by Invariant Measure for Chaotic Systems

no code implementations 6 Feb 2024 Yair Schiff, Zhong Yi Wan, Jeffrey B. Parker, Stephan Hoyer, Volodymyr Kuleshov, Fei Sha, Leonardo Zepeda-Núñez

Learning dynamics from dissipative chaotic systems is notoriously difficult due to their inherent instability, as formalized by their positive Lyapunov exponents, which exponentially amplify errors in the learned dynamics.
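
As a rough numerical illustration of this sensitivity (not taken from the paper), the sketch below integrates two nearby trajectories of the Lorenz system and watches their separation grow roughly like $e^{\lambda t}$:

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, steps = 1e-3, 20000
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])          # tiny perturbation of the initial state

for k in range(steps):
    a = a + dt * lorenz(a)                  # forward Euler, good enough for illustration
    b = b + dt * lorenz(b)
    if k % 5000 == 0:
        print(f"t={k * dt:5.1f}  separation={np.linalg.norm(a - b):.3e}")
# the separation grows roughly as exp(lambda * t), with lambda ≈ 0.9 for Lorenz
```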

Denoising Diffusion Variational Inference: Diffusion Models as Expressive Variational Posteriors

no code implementations 5 Jan 2024 Top Piriyakulkij, Yingheng Wang, Volodymyr Kuleshov

We propose denoising diffusion variational inference (DDVI), an approximate inference algorithm for latent variable models which relies on diffusion models as flexible variational posteriors.

Denoising · Variational Inference

Active Preference Inference using Language Models and Probabilistic Reasoning

no code implementations 19 Dec 2023 Top Piriyakulkij, Volodymyr Kuleshov, Kevin Ellis

To enable this ability for instruction-tuned large language models (LLMs), one may prompt them to ask users questions to infer their preferences, transforming the language models into more robust, interactive systems.

Decision Making

Local Discovery by Partitioning: Polynomial-Time Causal Discovery Around Exposure-Outcome Pairs

no code implementations 25 Oct 2023 Jacqueline Maasch, Weishen Pan, Shantanu Gupta, Volodymyr Kuleshov, Kyra Gan, Fei Wang

Causal discovery is crucial for causal inference in observational studies: it can enable the identification of valid adjustment sets (VAS) for unbiased effect estimation.

Causal Discovery · Causal Inference +1

CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images

1 code implementation 25 Oct 2023 Aaron Gokaslan, A. Feder Cooper, Jasmine Collins, Landan Seguin, Austin Jacobson, Mihir Patel, Jonathan Frankle, Cory Stephenson, Volodymyr Kuleshov

This task presents two challenges: (1) high-resolution CC images lack the captions necessary to train text-to-image generative models; (2) CC images are relatively scarce.

Transfer Learning

Text Embeddings Reveal (Almost) As Much As Text

1 code implementation 10 Oct 2023 John X. Morris, Volodymyr Kuleshov, Vitaly Shmatikov, Alexander M. Rush

How much private information do text embeddings reveal about the original text?

ModuLoRA: Finetuning 2-Bit LLMs on Consumer GPUs by Integrating with Modular Quantizers

2 code implementations 28 Sep 2023 Junjie Yin, Jiahao Dong, Yingheng Wang, Christopher De Sa, Volodymyr Kuleshov

We propose a memory-efficient finetuning algorithm for large language models (LLMs) that supports finetuning LLMs with 65B parameters in 2/3/4-bit precision on as little as one 24GB GPU.

Instruction Following · Natural Language Inference +3
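
A minimal sketch of the idea of putting a trainable low-rank (LoRA-style) update on top of a frozen base weight; in ModuLoRA the base weight would come from an external low-bit quantizer and be dequantized on the fly, which is only indicated by a comment here, not implemented:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Sketch: frozen base weight + trainable low-rank update y = Wx + (alpha/r) * B A x."""
    def __init__(self, base_weight, r=16, alpha=32):
        super().__init__()
        # In ModuLoRA, base_weight would be supplied by a modular 2/3/4-bit quantizer
        # and dequantized on the fly; here it is simply a frozen dense tensor.
        self.register_buffer("W", base_weight)
        out_features, in_features = base_weight.shape
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return x @ self.W.t() + self.scale * (x @ self.A.t()) @ self.B.t()

layer = LoRALinear(torch.randn(64, 128))
x = torch.randn(8, 128)
print(layer(x).shape)        # torch.Size([8, 64]); only A and B receive gradients
```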

InfoDiffusion: Representation Learning Using Information Maximizing Diffusion Models

no code implementations 14 Jun 2023 Yingheng Wang, Yair Schiff, Aaron Gokaslan, Weishen Pan, Fei Wang, Christopher De Sa, Volodymyr Kuleshov

While diffusion models excel at generating high-quality samples, their latent variables typically lack semantic meaning and are not suitable for representation learning.

Representation Learning

Calibrated Propensity Scores for Causal Effect Estimation

no code implementations 1 Jun 2023 Shachi Deshpande, Volodymyr Kuleshov

Propensity scores are commonly used to balance observed covariates while estimating treatment effects.
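
A minimal sketch of the standard inverse-propensity-weighted (IPW) effect estimate that such propensity scores feed into; the isotonic recalibration via scikit-learn's CalibratedClassifierCV is a simple stand-in for the paper's calibration procedure, not its implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.calibration import CalibratedClassifierCV

def ipw_ate(X, t, y):
    """Inverse-propensity-weighted ATE with (re)calibrated propensity scores."""
    model = CalibratedClassifierCV(LogisticRegression(max_iter=1000),
                                   method="isotonic", cv=5)
    model.fit(X, t)
    e = np.clip(model.predict_proba(X)[:, 1], 1e-3, 1 - 1e-3)   # e(x) = P(T=1 | X=x)
    return np.mean(t * y / e - (1 - t) * y / (1 - e))

# toy data: treatment depends on a confounder, outcome depends on both
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))
t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
y = 2.0 * t + X[:, 0] + rng.normal(size=5000)
print(ipw_ate(X, t, y))      # should land close to the true effect of 2
```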

Adversarial Calibrated Regression for Online Decision Making

no code implementations 23 Feb 2023 Volodymyr Kuleshov, Shachi Deshpande

Accurately estimating uncertainty is an essential component of decision-making and forecasting in machine learning.

Bayesian Optimization · Decision Making +2

Model Criticism for Long-Form Text Generation

1 code implementation 16 Oct 2022 Yuntian Deng, Volodymyr Kuleshov, Alexander M. Rush

Language models have demonstrated the ability to generate highly fluent text; however, it remains unclear whether their output retains coherent high-level structure (e.g., story progression).

Text Generation

Semi-Autoregressive Energy Flows: Exploring Likelihood-Free Training of Normalizing Flows

no code implementations 14 Jun 2022 Phillip Si, Zeyi Chen, Subham Sekhar Sahoo, Yair Schiff, Volodymyr Kuleshov

Training normalizing flow generative models can be challenging due to the need to calculate computationally expensive determinants of Jacobians.

Two-sample testing
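
For reference, the expense comes from the change-of-variables objective: exact maximum likelihood requires $\log p_X(x) = \log p_Z(f_\theta(x)) + \log\lvert\det J_{f_\theta}(x)\rvert$, and a dense Jacobian makes the determinant an $O(d^3)$ computation. A tiny numpy illustration with an unconstrained linear flow (likelihood-free objectives of the kind the paper explores aim to sidestep this term):

```python
import numpy as np

d = 256
rng = np.random.default_rng(0)
W = rng.normal(size=(d, d)) / np.sqrt(d)     # an unconstrained linear "flow" z = W x
x = rng.normal(size=d)

z = W @ x
log_pz = -0.5 * (z @ z) - 0.5 * d * np.log(2 * np.pi)   # standard normal base density
sign, logabsdet = np.linalg.slogdet(W)       # O(d^3) for a dense Jacobian
log_px = log_pz + logabsdet
print(log_px)
```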

Backpropagation through Combinatorial Algorithms: Identity with Projection Works

2 code implementations 30 May 2022 Subham Sekhar Sahoo, Anselm Paulus, Marin Vlastelica, Vít Musil, Volodymyr Kuleshov, Georg Martius

Embedding discrete solvers as differentiable layers has given modern deep learning architectures combinatorial expressivity and discrete reasoning capabilities.

Density Estimation · Graph Matching +3
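
A minimal PyTorch sketch of the idea named in the title: call the discrete solver in the forward pass and pass the incoming gradient through unchanged (identity) in the backward pass. The toy "solver" here is a simple arg-max projection onto one-hot vertices, not one of the paper's benchmarks:

```python
import torch

class SolverWithIdentityBackward(torch.autograd.Function):
    @staticmethod
    def forward(ctx, scores):
        # discrete "solver": project scores onto the nearest one-hot vertex (arg-max)
        y = torch.zeros_like(scores)
        y.scatter_(-1, scores.argmax(dim=-1, keepdim=True), 1.0)
        return y

    @staticmethod
    def backward(ctx, grad_output):
        # treat the solver as the identity map when differentiating
        return grad_output

scores = torch.randn(4, 10, requires_grad=True)
y = SolverWithIdentityBackward.apply(scores)
loss = ((y - torch.eye(10)[:4]) ** 2).sum()
loss.backward()
print(scores.grad.shape)     # gradients reach the network that produced the scores
```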

Semi-Parametric Inducing Point Networks and Neural Processes

2 code implementations 24 May 2022 Richa Rastogi, Yair Schiff, Alon Hacohen, Zhaozhi Li, Ian Lee, Yuntian Deng, Mert R. Sabuncu, Volodymyr Kuleshov

We introduce semi-parametric inducing point networks (SPIN), a general-purpose architecture that can query the training set at inference time in a compute-efficient manner.

Imputation · Meta-Learning

Deep Multi-Modal Structural Equations For Causal Effect Estimation With Unstructured Proxies

no code implementations 18 Mar 2022 Shachi Deshpande, Kaiwen Wang, Dhruv Sreenivas, Zheng Li, Volodymyr Kuleshov

Oftentimes, the confounders are unobserved, but we have access to large amounts of additional unstructured data (images, text) that contain valuable proxy signal about the missing confounders.

Causal Inference · Time Series Analysis

Calibrated and Sharp Uncertainties in Deep Learning via Density Estimation

no code implementations 14 Dec 2021 Volodymyr Kuleshov, Shachi Deshpande

Accurate probabilistic predictions can be characterized by two properties -- calibration and sharpness.

Density Estimation
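
A minimal sketch of what calibration means operationally for regression uncertainties: for each level p, the model's predicted p-quantile should cover about a p fraction of observations. Miscalibrated predictions can then be recalibrated (e.g., with isotonic regression, in the spirit of earlier recalibration work by the same authors); only the diagnostic is shown here:

```python
import numpy as np
from scipy.stats import norm

def quantile_calibration(y, mu, sigma, levels=np.linspace(0.05, 0.95, 19)):
    """Empirical coverage of predicted Gaussian quantiles vs. nominal level."""
    pit = norm.cdf(y, loc=mu, scale=sigma)          # probability integral transform
    return {p: float(np.mean(pit <= p)) for p in levels}

# toy example: the model is overconfident (predicted sigma too small)
rng = np.random.default_rng(0)
mu = rng.normal(size=2000)
y = mu + rng.normal(scale=2.0, size=2000)           # true noise std = 2
for p, cov in quantile_calibration(y, mu, sigma=1.0).items():
    print(f"nominal {p:.2f}  empirical {cov:.2f}")   # empirical coverage deviates from nominal
```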

Quantifying and Understanding Adversarial Examples in Discrete Input Spaces

no code implementations 12 Dec 2021 Volodymyr Kuleshov, Evgenii Nikishin, Shantanu Thakoor, Tingfung Lau, Stefano Ermon

In this work, we seek to understand and extend adversarial examples across domains in which inputs are discrete, particularly across new domains, such as computational biology.

Attribute · Sentiment Analysis

Calibrated Uncertainty Estimation Improves Bayesian Optimization

no code implementations 8 Dec 2021 Shachi Deshpande, Volodymyr Kuleshov

Bayesian optimization is a sequential procedure for obtaining the global optimum of black-box functions without knowing a priori their true form.

Bayesian Optimization · Hyperparameter Optimization
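
A minimal sketch of the sequential procedure being described, using a Gaussian-process surrogate and an expected-improvement acquisition; the paper's contribution, recalibrating the surrogate's uncertainty estimates, is not shown here:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def f(x):                                    # black-box objective (unknown to the optimizer)
    return np.sin(3 * x) + 0.1 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(3, 1))          # a few random initial evaluations
y = f(X).ravel()
grid = np.linspace(-3, 3, 500).reshape(-1, 1)

for _ in range(15):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement (minimization)
    x_next = grid[np.argmax(ei)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next).ravel())

print("best x:", X[np.argmin(y)], "best f:", y.min())
```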

Clinical Evidence Engine: Proof-of-Concept For A Clinical-Domain-Agnostic Decision Support Infrastructure

no code implementations 31 Oct 2021 BoJian Hou, Hao Zhang, Gur Ladizhinsky, Stephen Yang, Volodymyr Kuleshov, Fei Wang, Qian Yang

As a result, clinicians cannot easily or rapidly scrutinize the CDSS recommendation when facing a difficult diagnosis or treatment decision in practice.

Towards Uncertainties in Deep Learning that Are Accurate and Calibrated

no code implementations 29 Sep 2021 Volodymyr Kuleshov, Shachi Deshpande

Predictive uncertainties can be characterized by two properties---calibration and sharpness.

regression

A Multi-Modal and Multitask Benchmark in the Clinical Domain

no code implementations 1 Jan 2021 Yong Huang, Edgar Mariano Marroquin, Volodymyr Kuleshov

Here, we introduce Multi-Modal Multitask MIMIC-III (M3) — a dataset and benchmark for evaluating machine learning algorithms in the healthcare domain.

BIG-bench Machine Learning · Decompensation +2

Temporal FiLM: Capturing Long-Range Sequence Dependencies with Feature-Wise Modulations.

1 code implementation NeurIPS 2019 Sawyer Birnbaum, Volodymyr Kuleshov, Zayd Enam, Pang Wei W. Koh, Stefano Ermon

Learning representations that accurately capture long-range dependencies in sequential inputs --- including text, audio, and genomic data --- is a key problem in deep learning.

Audio Super-Resolution · Super-Resolution +2

Temporal FiLM: Capturing Long-Range Sequence Dependencies with Feature-Wise Modulations

1 code implementation 14 Sep 2019 Sawyer Birnbaum, Volodymyr Kuleshov, Zayd Enam, Pang Wei Koh, Stefano Ermon

Learning representations that accurately capture long-range dependencies in sequential inputs -- including text, audio, and genomic data -- is a key problem in deep learning.

Ranked #2 on Audio Super-Resolution on Voice Bank corpus (VCTK) (using extra training data)

Audio Super-Resolution · Super-Resolution +2
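
A rough sketch of a temporal feature-wise modulation layer of the kind the title refers to: an RNN runs over block-pooled activations and emits per-block scale and shift parameters that modulate the sequence. Dimensions, pooling, and the choice of LSTM here are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class TemporalFiLM(nn.Module):
    """Sketch: RNN-produced, per-block feature-wise scale and shift."""
    def __init__(self, channels, block_len):
        super().__init__()
        self.block_len = block_len
        self.rnn = nn.LSTM(channels, channels, batch_first=True)
        self.to_gamma_beta = nn.Linear(channels, 2 * channels)

    def forward(self, x):                          # x: (batch, time, channels)
        b, t, c = x.shape
        blocks = x.view(b, t // self.block_len, self.block_len, c)
        pooled = blocks.mean(dim=2)                # one summary vector per block
        h, _ = self.rnn(pooled)
        gamma, beta = self.to_gamma_beta(h).chunk(2, dim=-1)
        out = gamma.unsqueeze(2) * blocks + beta.unsqueeze(2)
        return out.view(b, t, c)

x = torch.randn(4, 128, 32)
print(TemporalFiLM(32, block_len=16)(x).shape)     # torch.Size([4, 128, 32])
```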

Adversarial Constraint Learning for Structured Prediction

1 code implementation 27 May 2018 Hongyu Ren, Russell Stewart, Jiaming Song, Volodymyr Kuleshov, Stefano Ermon

Constraint-based learning reduces the burden of collecting labels by having users specify general properties of structured outputs, such as constraints imposed by physical laws.

Pose Estimation · Structured Prediction +3

Adversarial Examples for Natural Language Classification Problems

no code implementations ICLR 2018 Volodymyr Kuleshov, Shantanu Thakoor, Tingfung Lau, Stefano Ermon

Modern machine learning algorithms are often susceptible to adversarial examples — maliciously crafted inputs that are undetectable by humans but that fool the algorithm into producing undesirable behavior.

BIG-bench Machine Learning · Classification +4

Audio Super Resolution using Neural Networks

4 code implementations 2 Aug 2017 Volodymyr Kuleshov, S. Zayd Enam, Stefano Ermon

We introduce a new audio processing technique that increases the sampling rate of signals such as speech or music using deep convolutional neural networks.

Ranked #3 on Audio Super-Resolution on Voice Bank corpus (VCTK) (using extra training data)

Audio Super-Resolution
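
A small sketch of the kind of building block used in such networks: a 1-D convolution followed by a "subpixel" shuffle that trades channels for temporal resolution. The kernel size and channel counts are illustrative; the actual model stacks many such blocks with skip connections:

```python
import torch
import torch.nn as nn

class SubpixelUpsample1d(nn.Module):
    """Conv1d + 1-D subpixel shuffle: (B, C, T) -> (B, C, r*T)."""
    def __init__(self, channels, r=2):
        super().__init__()
        self.r = r
        self.conv = nn.Conv1d(channels, channels * r, kernel_size=9, padding=4)

    def forward(self, x):
        b, c, t = x.shape
        y = self.conv(x)                            # (B, C*r, T)
        y = y.view(b, c, self.r, t)                 # split the channel dimension
        return y.permute(0, 1, 3, 2).reshape(b, c, t * self.r)

lowres = torch.randn(1, 32, 4000)                   # e.g. features of a low-sample-rate signal
print(SubpixelUpsample1d(32)(lowres).shape)         # torch.Size([1, 32, 8000])
```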

Estimating Uncertainty Online Against an Adversary

no code implementations 13 Jul 2016 Volodymyr Kuleshov, Stefano Ermon

Assessing uncertainty is an important step towards ensuring the safety and reliability of machine learning systems.

General Classification · Medical Diagnosis +1

Calibrated Structured Prediction

1 code implementation NeurIPS 2015 Volodymyr Kuleshov, Percy S. Liang

In user-facing applications, displaying calibrated confidence measures---probabilities that correspond to true frequency---can be as important as obtaining high accuracy.

Medical Diagnosis · Optical Character Recognition +4

Tensor Factorization via Matrix Factorization

1 code implementation 29 Jan 2015 Volodymyr Kuleshov, Arun Tejasvi Chaganty, Percy Liang

Tensor factorization arises in many machine learning applications, such as knowledge base modeling and parameter estimation in latent variable models.
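
A small sketch of the reduction the title refers to, using Jennrich-style simultaneous diagonalization as a simple stand-in for the paper's method: project the tensor along random vectors to obtain matrices whose eigenvectors recover the factors.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 10, 3
A = rng.normal(size=(d, k))                      # ground-truth factors, T = sum_i a_i ⊗ a_i ⊗ a_i
T = np.einsum("ir,jr,kr->ijk", A, A, A)

u, v = rng.normal(size=d), rng.normal(size=d)
M1 = np.einsum("ijk,k->ij", T, u)                # random slices (projections) of the tensor
M2 = np.einsum("ijk,k->ij", T, v)

# eigenvectors of M1 @ pinv(M2) are, up to scale and permutation, the columns of A
eigvals, eigvecs = np.linalg.eig(M1 @ np.linalg.pinv(M2))
top = eigvecs[:, np.argsort(-np.abs(eigvals))[:k]].real
# each recovered column correlates ~1 (in absolute value) with one true column
print(np.abs(np.corrcoef(top.T, A.T)[:k, k:]))
```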

Algorithms for multi-armed bandit problems

no code implementations 25 Feb 2014 Volodymyr Kuleshov, Doina Precup

Although the design of clinical trials has been one of the principal practical problems motivating research on multi-armed bandits, bandit algorithms have never been evaluated as potential treatment allocation strategies.

Multi-Armed Bandits
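
A minimal UCB1 sketch of the kind of index policy evaluated in such comparisons; the arms here are Bernoulli "treatments" with unknown success probabilities:

```python
import numpy as np

def ucb1(success_probs, horizon=5000, seed=0):
    rng = np.random.default_rng(seed)
    k = len(success_probs)
    counts, sums = np.zeros(k), np.zeros(k)
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                                   # play every arm once first
        else:
            ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(ucb))
        reward = rng.binomial(1, success_probs[arm])      # e.g. a treatment success/failure
        counts[arm] += 1
        sums[arm] += reward
    return counts

print(ucb1([0.45, 0.55, 0.60]))   # allocation concentrates on the best arm over time
```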
