Search Results for author: Richard G. Baraniuk

Found 75 papers, 20 papers with code

NeuroView: Explainable Deep Network Decision Making

no code implementations • 15 Oct 2021 • CJ Barberan, Randall Balestriero, Richard G. Baraniuk

Each member of the family is derived from a standard DN architecture by vector quantizing the unit output values and feeding them into a global linear classifier.

Classification · Decision Making
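A minimal sketch of the recipe described in the abstract above, assuming random arrays as stand-ins for a trained network's per-unit activations and using sign quantization in place of the paper's vector quantization step:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-layer unit outputs for 200 samples (in practice
# these would come from a trained deep network's layers).
layer_acts = [rng.standard_normal((200, d)) for d in (64, 32, 16)]
y = rng.integers(0, 2, size=200)  # binary labels

# Quantize each unit's output (sign codes stand in for VQ) and
# concatenate all layers into one code vector per sample.
codes = np.concatenate([np.sign(a) for a in layer_acts], axis=1)

# Global linear classifier over the quantized unit states.
clf = LogisticRegression(max_iter=1000).fit(codes, y)
print("train accuracy:", clf.score(codes, y))
```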

NFT-K: Non-Fungible Tangent Kernels

1 code implementation • 11 Oct 2021 • Sina Alemohammad, Hossein Babaei, CJ Barberan, Naiming Liu, Lorenzo Luzi, Blake Mason, Richard G. Baraniuk

To further contribute interpretability with respect to classification and the layers, we develop a new network as a combination of multiple neural tangent kernels, one modeling each layer of the deep neural network individually; this contrasts with past work, which attempts to represent the entire network via a single neural tangent kernel.
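A sketch of the combination step only, under simplifying assumptions: each layer contributes its own kernel, and the model is a weighted sum of those kernels. The per-layer Gram matrices below come from hypothetical random feature maps rather than actual neural tangent kernels:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
y = rng.standard_normal(100)

# Hypothetical per-layer feature maps; each induces its own kernel
# K_l = Phi_l @ Phi_l.T (stand-ins for per-layer tangent kernels).
feats = [np.tanh(X @ rng.standard_normal((10, 50))) for _ in range(3)]
kernels = [F @ F.T for F in feats]

# One kernel per layer, combined with (here, uniform) weights.
alphas = np.ones(len(kernels)) / len(kernels)
K = sum(a * K_l for a, K_l in zip(alphas, kernels))

# Kernel ridge regression with the combined kernel.
coef = np.linalg.solve(K + 1e-3 * np.eye(len(y)), y)
print("train residual norm:", np.linalg.norm(K @ coef - y))
```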

Evaluating generative networks using Gaussian mixtures of image features

no code implementations • 8 Oct 2021 • Lorenzo Luzi, Carlos Ortiz Marrero, Nile Wynar, Richard G. Baraniuk, Michael J. Henry

We define a performance measure, which we call WaM, on two sets of images: we use Inception-v3 (or another classifier) to featurize the images, estimate two GMMs, and use the restricted 2-Wasserstein distance to compare the GMMs.
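The single-Gaussian case of such a comparison has a closed form (the same 2-Wasserstein formula that underlies FID); the paper's restricted 2-Wasserstein distance extends the comparison to full GMMs. A minimal sketch of that building block, with random arrays standing in for Inception-v3 features:

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2(m1, C1, m2, C2):
    """Closed-form 2-Wasserstein distance between N(m1,C1) and N(m2,C2)."""
    root_C2 = sqrtm(C2)
    cross = sqrtm(root_C2 @ C1 @ root_C2).real
    w2_sq = np.sum((m1 - m2) ** 2) + np.trace(C1 + C2 - 2 * cross)
    return np.sqrt(max(w2_sq, 0.0))

rng = np.random.default_rng(0)
# Stand-ins for featurized real and generated image sets.
A = rng.standard_normal((500, 8))
B = 0.5 + rng.standard_normal((400, 8))
print(gaussian_w2(A.mean(0), np.cov(A.T), B.mean(0), np.cov(B.T)))
```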

Math Word Problem Generation with Mathematical Consistency and Problem Context Constraints

no code implementations • 9 Sep 2021 • Zichao Wang, Andrew S. Lan, Richard G. Baraniuk

We study the problem of generating arithmetic math word problems (MWPs) given a math equation that specifies the mathematical computation and a context that specifies the problem scenario.

A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning

no code implementations • 6 Sep 2021 • Yehuda Dar, Vidya Muthukumar, Richard G. Baraniuk

The rapid recent progress in machine learning (ML) has raised a number of scientific questions that challenge the longstanding dogma of the field.

The Flip Side of the Reweighted Coin: Duality of Adaptive Dropout and Regularization

1 code implementation • 14 Jun 2021 • Daniel LeJeune, Hamid Javadi, Richard G. Baraniuk

Among the most successful methods for sparsifying deep (neural) networks are those that adaptively mask the network weights throughout training.

Extreme Compressed Sensing of Poisson Rates from Multiple Measurements

1 code implementation • 15 Mar 2021 • Pavan K. Kota, Daniel LeJeune, Rebekah A. Drezek, Richard G. Baraniuk

Here, we present the first exploration of the MMV problem where signals are independently drawn from a sparse, multivariate Poisson distribution.
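The measurement model is straightforward to simulate. A sketch with assumed dimensions and a hypothetical 0/1 sensing matrix: a k-sparse rate vector, independent Poisson snapshots drawn from it, and one measurement vector per snapshot:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k, T = 50, 20, 3, 10  # signal dim, measurements, sparsity, snapshots

# Sparse Poisson rate vector: k active entries shared across snapshots.
support = rng.choice(n, size=k, replace=False)
lam = np.zeros(n)
lam[support] = rng.uniform(2.0, 5.0, size=k)

# Multiple measurement vectors: column t is A @ x_t, where
# x_t ~ Poisson(lam) is drawn independently for each snapshot.
A = rng.binomial(1, 0.5, size=(m, n))       # hypothetical sensing matrix
X = rng.poisson(lam[:, None], size=(n, T))  # independent signal draws
Y = A @ X
print("measurements Y:", Y.shape)
```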

Transfer Learning Can Outperform the True Prior in Double Descent Regularization

no code implementations • 9 Mar 2021 • Yehuda Dar, Richard G. Baraniuk

We study a fundamental transfer learning process from source to target linear regression tasks, including overparameterized settings where there are more learned parameters than data samples.

Transfer Learning

Instructions and Guide for Diagnostic Questions: The NeurIPS 2020 Education Challenge

no code implementations • 23 Jul 2020 • Zichao Wang, Angus Lamb, Evgeny Saveliev, Pashmina Cameron, Yordan Zaykov, José Miguel Hernández-Lobato, Richard E. Turner, Richard G. Baraniuk, Craig Barton, Simon Peyton Jones, Simon Woodhead, Cheng Zhang

In this competition, participants will focus on the students' answer records to these multiple-choice diagnostic questions, with the aim of 1) accurately predicting which answers the students provide; 2) accurately predicting which questions have high quality; and 3) determining a personalized sequence of questions for each student that best predicts the student's answers.

Ensembles of Generative Adversarial Networks for Disconnected Data

no code implementations • 25 Jun 2020 • Lorenzo Luzi, Randall Balestriero, Richard G. Baraniuk

They can be represented in two ways: with an ensemble of networks or with a single network with a truncated latent space.

Analytical Probability Distributions and EM-Learning for Deep Generative Networks

no code implementations • NeurIPS 2020 • Randall Balestriero, Sebastien Paris, Richard G. Baraniuk

Deep Generative Networks (DGNs) with probabilistic modeling of their output and latent space are currently trained via Variational Autoencoders (VAEs).

Anomaly Detection · Imputation +1

Interpretable Super-Resolution via a Learned Time-Series Representation

no code implementations • 13 Jun 2020 • Randall Balestriero, Herve Glotin, Richard G. Baraniuk

We develop an interpretable and learnable Wigner-Ville distribution that produces a super-resolved quadratic signal representation for time-series analysis.

Super-Resolution · Time Series +1

Double Double Descent: On Generalization Errors in Transfer Learning between Linear Regression Tasks

no code implementations • 12 Jun 2020 • Yehuda Dar, Richard G. Baraniuk

Our non-asymptotic analysis shows that the generalization error of the target task follows a two-dimensional double descent trend (with respect to the number of free parameters in each of the tasks) that is controlled by the transfer learning factors.

Transfer Learning

An Improved Semi-Supervised VAE for Learning Disentangled Representations

no code implementations • 12 Jun 2020 • Weili Nie, Zichao Wang, Ankit B. Patel, Richard G. Baraniuk

Learning interpretable and disentangled representations is a crucial yet challenging task in representation learning.

Representation Learning

MomentumRNN: Integrating Momentum into Recurrent Neural Networks

2 code implementations • NeurIPS 2020 • Tan M. Nguyen, Richard G. Baraniuk, Andrea L. Bertozzi, Stanley J. Osher, Bao Wang

Designing deep neural networks is an art that often involves an expensive search over candidate architectures.
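A rough sketch of the spirit of the idea, not the authors' exact cell: the input drive of a vanilla recurrent update is accumulated through a momentum state. The update form and coefficients below are illustrative assumptions; the released code (2 implementations above) is authoritative:

```python
import numpy as np

def momentum_rnn_step(h, v, x, U, W, mu=0.9, s=1.0):
    """One step of a momentum-augmented recurrent cell (a sketch).

    A vanilla cell computes h = tanh(U h + W x); here the input drive
    W x is accumulated through a momentum state v before entering the
    recurrence, in the spirit of MomentumRNN.
    """
    v = mu * v + s * (W @ x)
    h = np.tanh(U @ h + v)
    return h, v

rng = np.random.default_rng(0)
d_h, d_x = 16, 8
U = rng.standard_normal((d_h, d_h)) / np.sqrt(d_h)
W = rng.standard_normal((d_h, d_x)) / np.sqrt(d_x)
h, v = np.zeros(d_h), np.zeros(d_h)
for x in rng.standard_normal((5, d_x)):  # a short input sequence
    h, v = momentum_rnn_step(h, v, x, U, W)
print("final hidden state norm:", np.linalg.norm(h))
```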

Attention Word Embedding

no code implementations • COLING 2020 • Shashank Sonkar, Andrew E. Waters, Richard G. Baraniuk

Word embedding models learn semantically rich vector representations of words and are widely used to initialize natural language processing (NLP) models.

Word Similarity

qDKT: Question-centric Deep Knowledge Tracing

no code implementations • 25 May 2020 • Shashank Sonkar, Andrew E. Waters, Andrew S. Lan, Phillip J. Grimaldi, Richard G. Baraniuk

Knowledge tracing (KT) models, e.g., the deep knowledge tracing (DKT) model, track an individual learner's acquisition of skills over time by examining the learner's performance on questions related to those skills.

Knowledge Tracing · Language Modelling

Deep Learning Techniques for Inverse Problems in Imaging

no code implementations • 12 May 2020 • Gregory Ongie, Ajil Jalal, Christopher A. Metzler, Richard G. Baraniuk, Alexandros G. Dimakis, Rebecca Willett

Recent work in machine learning shows that deep neural networks can be used to solve a wide variety of inverse problems arising in computational imaging.

Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks

1 code implementation • ICLR 2020 • Haoran You, Chaojian Li, Pengfei Xu, Yonggan Fu, Yue Wang, Xiaohan Chen, Richard G. Baraniuk, Zhangyang Wang, Yingyan Lin

Finally, we leverage the existence of EB tickets and the proposed mask distance to develop efficient training methods, which are achieved by first identifying EB tickets via low-cost schemes, and then continuing to train merely the EB tickets towards the target accuracy.

Educational Question Mining At Scale: Prediction, Analysis and Personalization

no code implementations • 12 Mar 2020 • Zichao Wang, Sebastian Tschiatschek, Simon Woodhead, José Miguel Hernández-Lobato, Simon Peyton Jones, Richard G. Baraniuk, Cheng Zhang

Online education platforms enable teachers to share a large number of educational resources such as questions to form exercises and quizzes for students.

Subspace Fitting Meets Regression: The Effects of Supervision and Orthonormality Constraints on Double Descent of Generalization Errors

no code implementations • ICML 2020 • Yehuda Dar, Paul Mayer, Lorenzo Luzi, Richard G. Baraniuk

We study the linear subspace fitting problem in the overparameterized setting, where the estimated subspace can perfectly interpolate the training examples.

Scheduled Restart Momentum for Accelerated Stochastic Gradient Descent

1 code implementation • 24 Feb 2020 • Bao Wang, Tan M. Nguyen, Andrea L. Bertozzi, Richard G. Baraniuk, Stanley J. Osher

Nesterov accelerated gradient (NAG) improves the convergence rate of gradient descent (GD) for convex optimization using a specially designed momentum; however, it accumulates error when an inexact gradient is used (such as in SGD), slowing convergence at best and diverging at worst.

General Classification · Image Classification
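A sketch of the scheduled-restart idea on a deterministic toy quadratic; the Nesterov-style coefficient t/(t+3) and the restart period here are illustrative assumptions, and the paper's setting is stochastic gradients in deep network training:

```python
import numpy as np

def srsgd(grad, x0, lr=0.1, restart_every=40, n_iters=200):
    """NAG-style updates whose momentum schedule restarts periodically."""
    x, x_prev = x0.copy(), x0.copy()
    for k in range(n_iters):
        t = k % restart_every          # restart the momentum schedule
        mu = t / (t + 3.0)             # Nesterov-style coefficient
        y = x + mu * (x - x_prev)      # look-ahead point
        x_prev, x = x, y - lr * grad(y)
    return x

# Example: minimize an ill-conditioned quadratic 0.5 * x^T A x.
A = np.diag([1.0, 10.0])
print("solution:", srsgd(lambda x: A @ x, np.array([5.0, 5.0])))
```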

InfoCNF: An Efficient Conditional Continuous Normalizing Flow with Adaptive Solvers

no code implementations • 9 Dec 2019 • Tan M. Nguyen, Animesh Garg, Richard G. Baraniuk, Anima Anandkumar

Continuous Normalizing Flows (CNFs) have emerged as promising deep generative models for a wide range of tasks thanks to their invertibility and exact likelihood estimation.

Conditional Image Generation · Time Series

The Implicit Regularization of Ordinary Least Squares Ensembles

1 code implementation • 10 Oct 2019 • Daniel LeJeune, Hamid Javadi, Richard G. Baraniuk

Ensemble methods that average over a collection of independent predictors, each limited to a subsample of both the examples and the features of the training data, command a significant presence in machine learning (the ever-popular random forest, for example), yet the nature of the subsampling effect, particularly of the features, is not well understood.
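The estimator under study is simple to write down. A sketch with assumed sizes: many ordinary least squares fits, each restricted to a random subsample of examples (rows) and features (columns), averaged at prediction time:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
y = X @ beta + rng.standard_normal(n)

# Average many OLS predictors, each fit on a random subsample of
# both the examples and the features of the training data.
n_models, n_rows, n_cols = 100, 100, 10
preds = np.zeros(n)
for _ in range(n_models):
    rows = rng.choice(n, size=n_rows, replace=False)
    cols = rng.choice(p, size=n_cols, replace=False)
    coef, *_ = np.linalg.lstsq(X[np.ix_(rows, cols)], y[rows], rcond=None)
    preds += X[:, cols] @ coef
preds /= n_models
print("ensemble train MSE:", np.mean((preds - y) ** 2))
```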

Drawing early-bird tickets: Towards more efficient training of deep networks

1 code implementation • 26 Sep 2019 • Haoran You, Chaojian Li, Pengfei Xu, Yonggan Fu, Yue Wang, Xiaohan Chen, Richard G. Baraniuk, Zhangyang Wang, Yingyan Lin

In this paper, we discover for the first time that winning tickets can be identified at a very early training stage, which we term early-bird (EB) tickets, via low-cost training schemes (e.g., early stopping and low-precision training) at large learning rates.

Implicit Rugosity Regularization via Data Augmentation

no code implementations • 28 May 2019 • Daniel LeJeune, Randall Balestriero, Hamid Javadi, Richard G. Baraniuk

Deep (neural) networks have been applied productively in a wide range of supervised and unsupervised learning tasks.

Data Augmentation

Thresholding Graph Bandits with GrAPL

1 code implementation • 22 May 2019 • Daniel LeJeune, Gautam Dasarathy, Richard G. Baraniuk

The main goal is to efficiently identify a subset of arms in a multi-armed bandit problem whose means are above a specified threshold.

Decision Making

IdeoTrace: A Framework for Ideology Tracing with a Case Study on the 2016 U.S. Presidential Election

no code implementations • 21 May 2019 • Indu Manickam, Andrew S. Lan, Gautam Dasarathy, Richard G. Baraniuk

We apply this framework to the last two months of the election period for a group of 47,508 Twitter users and demonstrate that both liberal and conservative users became more polarized over time.

Neural Rendering Model: Joint Generation and Prediction for Semi-Supervised Learning

no code implementations • ICLR 2019 • Nhat Ho, Tan Nguyen, Ankit B. Patel, Anima Anandkumar, Michael I. Jordan, Richard G. Baraniuk

The conjugate prior yields a new regularizer for learning based on the paths rendered in the generative model for training CNNs: the Rendering Path Normalization (RPN).

Neural Rendering

Representing Formal Languages: A Comparison Between Finite Automata and Recurrent Neural Networks

no code implementations • 27 Feb 2019 • Joshua J. Michalenko, Ameesh Shah, Abhinav Verma, Richard G. Baraniuk, Swarat Chaudhuri, Ankit B. Patel

We investigate the internal representations that a recurrent neural network (RNN) uses while learning to recognize a regular formal language.

Adaptive Estimation for Approximate k-Nearest-Neighbor Computations

1 code implementation • 25 Feb 2019 • Daniel LeJeune, Richard G. Baraniuk, Reinhard Heckel

Algorithms often carry out equally many computations for "easy" and "hard" problem instances.

Sub-linear Memory Sketches for Near Neighbor Search on Streaming Data

no code implementations • 18 Feb 2019 • Benjamin Coleman, Richard G. Baraniuk, Anshumali Shrivastava

We present the first sublinear memory sketch that can be queried to find the nearest neighbors in a dataset.

Density Estimation

A Bayesian Perspective of Convolutional Neural Networks through a Deconvolutional Generative Model

no code implementations • 1 Nov 2018 • Tan Nguyen, Nhat Ho, Ankit Patel, Anima Anandkumar, Michael I. Jordan, Richard G. Baraniuk

This conjugate prior yields a new regularizer based on paths rendered in the generative model for training CNNs: the Rendering Path Normalization (RPN).

Object Classification

From Hard to Soft: Understanding Deep Network Nonlinearities via Vector Quantization and Statistical Inference

no code implementations • ICLR 2019 • Randall Balestriero, Richard G. Baraniuk

We show that, under a GMM, piecewise affine, convex nonlinearities like ReLU, absolute value, and max-pooling can be interpreted as solutions to certain natural "hard" VQ inference problems, while sigmoid, hyperbolic tangent, and softmax can be interpreted as solutions to corresponding "soft" VQ inference problems.

Quantization
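One slice of this correspondence is easy to verify numerically: max-pooling (a "hard" choice of one unit) is the high-inverse-temperature limit of a softmax-weighted average (its "soft" counterpart). A small sketch:

```python
import numpy as np

def hard_pool(z):
    """Hard VQ view: commit to the single largest unit (max-pooling)."""
    return np.max(z)

def soft_pool(z, beta=1.0):
    """Soft VQ view: softmax-weighted average of the units.

    Recovers mean pooling as beta -> 0 and hard max-pooling
    as beta -> infinity.
    """
    w = np.exp(beta * z - np.max(beta * z))  # stable softmax weights
    return np.sum(w / w.sum() * z)

z = np.array([0.1, 2.0, -1.0, 1.8])
for beta in (0.1, 1.0, 10.0, 100.0):
    print(f"beta={beta:6.1f}  soft={soft_pool(z, beta):.3f}  "
          f"hard={hard_pool(z):.3f}")
```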

MISSION: Ultra Large-Scale Feature Selection using Count-Sketches

1 code implementation • 12 Jun 2018 • Amirali Aghazadeh, Ryan Spring, Daniel LeJeune, Gautam Dasarathy, Anshumali Shrivastava, Richard G. Baraniuk

We demonstrate that MISSION accurately and efficiently performs feature selection on real-world, large-scale datasets with billions of dimensions.

Feature Selection

Unsupervised Learning with Stein's Unbiased Risk Estimator

1 code implementation • 26 May 2018 • Christopher A. Metzler, Ali Mousavi, Reinhard Heckel, Richard G. Baraniuk

We show that, in the context of image recovery, SURE and its generalizations can be used to train convolutional neural networks (CNNs) for a range of image denoising and recovery problems without any ground truth data.

Image Denoising
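The training objective can be sketched compactly. Below is a Monte Carlo SURE loss in PyTorch for the additive white Gaussian noise model y = x + n, with the divergence term estimated by a single random probe; the linear "denoiser" is a hypothetical stand-in for a CNN:

```python
import torch

def mc_sure_loss(denoiser, y, sigma, eps=1e-3):
    """Monte Carlo SURE: unbiased estimate of the denoiser's MSE on
    y = x + N(0, sigma^2 I), computed without the ground truth x."""
    n = y.numel()
    fy = denoiser(y)
    b = torch.randn_like(y)  # random probe for the divergence term
    div = (b * (denoiser(y + eps * b) - fy)).sum() / eps
    return ((fy - y) ** 2).sum() / n - sigma ** 2 + (2 * sigma ** 2 / n) * div

# Toy usage with a hypothetical shrinkage "denoiser".
denoise = lambda y: 0.8 * y
y = torch.randn(1000) + 0.1 * torch.randn(1000)
print(mc_sure_loss(denoise, y, sigma=0.1))
```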

Semi-Supervised Learning via New Deep Network Inversion

no code implementations • 12 Nov 2017 • Randall Balestriero, Vincent Roger, Herve G. Glotin, Richard G. Baraniuk

We exploit a recently derived inversion scheme for arbitrary deep neural networks to develop a new semi-supervised learning framework that applies to a wide range of systems and problems.

DeepCodec: Adaptive Sensing and Recovery via Deep Convolutional Neural Networks

no code implementations • 11 Jul 2017 • Ali Mousavi, Gautam Dasarathy, Richard G. Baraniuk

In this paper we develop a novel computational sensing framework for sensing and recovering structured signals.

Compressive Sensing

Learned D-AMP: Principled Neural Network based Compressive Image Recovery

1 code implementation • NeurIPS 2017 • Christopher A. Metzler, Ali Mousavi, Richard G. Baraniuk

The LDAMP network is easy to train, can be applied to a variety of different measurement matrices, and comes with a state-evolution heuristic that accurately predicts its performance.

Denoising

Data-Mining Textual Responses to Uncover Misconception Patterns

no code implementations • 24 Mar 2017 • Joshua J. Michalenko, Andrew S. Lan, Richard G. Baraniuk

An important, yet largely unstudied, problem in student data analysis is to detect misconceptions from students' responses to open-response questions.

Learning to Invert: Signal Recovery via Deep Convolutional Networks

2 code implementations • 14 Jan 2017 • Ali Mousavi, Richard G. Baraniuk

The promise of compressive sensing (CS) has been offset by two significant challenges.

Compressive Sensing

Semi-Supervised Learning with the Deep Rendering Mixture Model

no code implementations • 6 Dec 2016 • Tan Nguyen, Wanjia Liu, Ethan Perez, Richard G. Baraniuk, Ankit B. Patel

Semi-supervised learning algorithms reduce the high cost of acquiring labeled training data by using both labeled and unlabeled data during learning.

Variational Inference

A Probabilistic Framework for Deep Learning

no code implementations • NeurIPS 2016 • Ankit B. Patel, Tan Nguyen, Richard G. Baraniuk

We develop a probabilistic framework for deep learning based on the Deep Rendering Mixture Model (DRMM), a new generative probabilistic model that explicitly captures variations in data due to latent task nuisance variables.

General Classification

Consistent Parameter Estimation for LASSO and Approximate Message Passing

no code implementations • 3 Nov 2015 • Ali Mousavi, Arian Maleki, Richard G. Baraniuk

For instance, the following basic questions have not yet been studied in the literature: (i) How does the size of the active set $\|\hat{\beta}^\lambda\|_0/p$ behave as a function of $\lambda$?
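Question (i) is easy to probe empirically. A sketch using scikit-learn's lasso_path on synthetic data to trace the active-set fraction along the regularization path:

```python
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(0)
n, p, k = 100, 200, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = 1.0
y = X @ beta + 0.1 * rng.standard_normal(n)

# Trace |supp(beta_hat(lambda))| / p along the LASSO path.
alphas, coefs, _ = lasso_path(X, y)  # coefs: (n_features, n_alphas)
active_frac = (np.abs(coefs) > 1e-10).sum(axis=0) / p
for lam, frac in list(zip(alphas, active_frac))[::20]:
    print(f"lambda={lam:.4f}  active fraction={frac:.3f}")
```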

A Deep Learning Approach to Structured Signal Recovery

no code implementations • 17 Aug 2015 • Ali Mousavi, Ankit B. Patel, Richard G. Baraniuk

In this paper, we develop a new framework for sensing and recovering structured signals.

Compressive Sensing · Denoising

oASIS: Adaptive Column Sampling for Kernel Matrix Approximation

no code implementations • 19 May 2015 • Raajen Patel, Thomas A. Goldstein, Eva L. Dyer, Azalia Mirhoseini, Richard G. Baraniuk

Kernel matrices (e.g., Gram or similarity matrices) are essential for many state-of-the-art approaches to classification, clustering, and dimensionality reduction.

Dimensionality Reduction · General Classification

A Probabilistic Theory of Deep Learning

no code implementations • 2 Apr 2015 • Ankit B. Patel, Tan Nguyen, Richard G. Baraniuk

A grand challenge in machine learning is the development of computational algorithms that match or outperform humans in perceptual inference tasks that are complicated by nuisance variation.

Object Recognition · Speech Recognition

RankMap: A Platform-Aware Framework for Distributed Learning from Dense Datasets

1 code implementation • 27 Mar 2015 • Azalia Mirhoseini, Eva L. Dyer, Ebrahim M. Songhori, Richard G. Baraniuk, Farinaz Koushanfar

This paper introduces RankMap, a platform-aware end-to-end framework for efficient execution of a broad class of iterative learning algorithms for massive and dense datasets.

Distributed Computing

Video Compressive Sensing for Spatial Multiplexing Cameras using Motion-Flow Models

no code implementations • 9 Mar 2015 • Aswin C. Sankaranarayanan, Lina Xu, Christoph Studer, Yun Li, Kevin Kelly, Richard G. Baraniuk

In this paper, we propose the CS multi-scale video (CS-MUVI) sensing and recovery framework for high-quality video acquisition and recovery using SMCs.

Compressive Sensing · Optical Flow Estimation +1

Mathematical Language Processing: Automatic Grading and Feedback for Open Response Mathematical Questions

no code implementations • 18 Jan 2015 • Andrew S. Lan, Divyanshu Vats, Andrew E. Waters, Richard G. Baraniuk

Our data-driven framework for mathematical language processing (MLP) leverages solution data from a large number of learners to evaluate the correctness of their solutions, assign partial-credit scores, and provide feedback to each learner on the likely locations of any errors.

SPRITE: A Response Model For Multiple Choice Testing

no code implementations • 12 Jan 2015 • Ryan Ning, Andrew E. Waters, Christoph Studer, Richard G. Baraniuk

In this work, we propose a novel methodology for unordered categorical IRT that we call SPRITE (short for stochastic polytomous response item model), which: (i) analyzes both ordered and unordered categories, (ii) offers interpretable outputs, and (iii) provides improved data fitting compared to existing models.

Quantized Matrix Completion for Personalized Learning

no code implementations • 18 Dec 2014 • Andrew S. Lan, Christoph Studer, Richard G. Baraniuk

The recently proposed SPARse Factor Analysis (SPARFA) framework for personalized learning performs factor analysis on ordinal or binary-valued (e.g., correct/incorrect) graded learner responses to questions.

Matrix Completion

Tag-Aware Ordinal Sparse Factor Analysis for Learning and Content Analytics

no code implementations • 18 Dec 2014 • Andrew S. Lan, Christoph Studer, Andrew E. Waters, Richard G. Baraniuk

SPARse Factor Analysis (SPARFA) is a novel framework for machine learning-based learning analytics, which estimates a learner's knowledge of the concepts underlying a domain, and content analytics, which estimates the relationships among a collection of questions and those concepts.

Collaborative Filtering

Convex Biclustering

no code implementations • 5 Aug 2014 • Eric C. Chi, Genevera I. Allen, Richard G. Baraniuk

In the biclustering problem, we seek to simultaneously group observations and features.

Collaborative Filtering

From Denoising to Compressed Sensing

2 code implementations • 16 Jun 2014 • Christopher A. Metzler, Arian Maleki, Richard G. Baraniuk

A key element in D-AMP is the use of an appropriate Onsager correction term in its iterations, which coerces the signal perturbation at each iteration to be very close to the white Gaussian noise that denoisers are typically designed to remove.

Denoising
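A sketch of the D-AMP recursion with soft-thresholding standing in for a generic denoiser. The Onsager correction is the `z * div / m` term below; the divergence of the denoiser is estimated with the standard Monte Carlo black-box recipe, an assumption rather than the authors' exact implementation:

```python
import numpy as np

def soft(x, tau):
    """Soft-thresholding, a stand-in for a generic denoiser D."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def damp(y, A, n_iters=30, seed=0):
    m, n = A.shape
    rng = np.random.default_rng(seed)
    x, z = np.zeros(n), y.copy()
    for _ in range(n_iters):
        sigma = np.linalg.norm(z) / np.sqrt(m)  # effective noise level
        r = x + A.T @ z                         # pseudo-data for D
        x = soft(r, sigma)
        # Monte Carlo estimate of the denoiser's divergence at r.
        b = rng.standard_normal(n)
        div = b @ (soft(r + 1e-3 * b, sigma) - x) / 1e-3
        z = y - A @ x + z * div / m             # Onsager correction
    return x

rng = np.random.default_rng(1)
n, m, k = 200, 80, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x0
print("recovery error:", np.linalg.norm(damp(y, A) - x0))
```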

Sparse Bilinear Logistic Regression

no code implementations • 15 Apr 2014 • Jianing V. Shi, Yangyang Xu, Richard G. Baraniuk

In this paper, we introduce the concept of sparse bilinear logistic regression for decision problems involving explanatory variables that are two-dimensional matrices.

Active Learning for Undirected Graphical Model Selection

no code implementations • 13 Apr 2014 • Divyanshu Vats, Robert D. Nowak, Richard G. Baraniuk

This paper studies graphical model selection, i.e., the problem of estimating a graph of statistical relationships among a collection of random variables.

Active Learning · Model Selection

Path Thresholding: Asymptotically Tuning-Free High-Dimensional Sparse Regression

no code implementations • 23 Feb 2014 • Divyanshu Vats, Richard G. Baraniuk

In this paper, we address the challenging problem of selecting tuning parameters for high-dimensional sparse regression.

Model Selection

Video Compressive Sensing for Dynamic MRI

no code implementations • 30 Jan 2014 • Jianing V. Shi, Wotao Yin, Aswin C. Sankaranarayanan, Richard G. Baraniuk

We apply this framework to accelerate the acquisition process of dynamic MRI and show it achieves the best reconstruction accuracy with the least computational time compared with existing algorithms in the literature.

Compressive Sensing · Video Compressive Sensing

Time-varying Learning and Content Analytics via Sparse Factor Analysis

no code implementations • 19 Dec 2013 • Andrew S. Lan, Christoph Studer, Richard G. Baraniuk

We propose SPARFA-Trace, a new machine learning-based framework for time-varying learning and content analytics for education applications.

Collaborative Filtering · Knowledge Tracing

Swapping Variables for High-Dimensional Sparse Regression with Correlated Measurements

no code implementations • 5 Dec 2013 • Divyanshu Vats, Richard G. Baraniuk

We consider the high-dimensional sparse linear regression problem of accurately estimating a sparse vector using a small number of linear measurements that are contaminated by noise.

Parameterless Optimal Approximate Message Passing

no code implementations • 31 Oct 2013 • Ali Mousavi, Arian Maleki, Richard G. Baraniuk

In particular, both the final reconstruction error and the convergence rate of the algorithm crucially rely on how the threshold parameter is set at each step of the algorithm.

Compressive Sensing

Asymptotic Analysis of LASSO's Solution Path with Implications for Approximate Message Passing

no code implementations • 23 Sep 2013 • Ali Mousavi, Arian Maleki, Richard G. Baraniuk

This paper concerns the performance of the LASSO (also known as basis pursuit denoising) for recovering sparse signals from undersampled, randomized, noisy measurements.

Denoising

Joint Topic Modeling and Factor Analysis of Textual Information and Graded Response Data

no code implementations • 8 May 2013 • Andrew S. Lan, Christoph Studer, Andrew E. Waters, Richard G. Baraniuk

In order to better interpret the estimated latent concepts, SPARFA relies on a post-processing step that utilizes user-defined tags (e.g., topics or keywords) available for each question.

Greedy Feature Selection for Subspace Clustering

no code implementations • 19 Mar 2013 • Eva L. Dyer, Aswin C. Sankaranarayanan, Richard G. Baraniuk

To learn a union of subspaces from a collection of data, sets of signals in the collection that belong to the same subspace must be identified in order to obtain accurate estimates of the subspace structures present in the data.

Feature Selection

Compressive Acquisition of Dynamic Scenes

no code implementations • 23 Jan 2012 • Aswin C. Sankaranarayanan, Pavan K. Turaga, Rama Chellappa, Richard G. Baraniuk

Compressive sensing (CS) is a new approach for the acquisition and recovery of sparse signals and images that enables sampling rates significantly below the classical Nyquist rate.

Compressive Sensing
