Search Results for author: Richard G. Baraniuk

Found 86 papers, 23 papers with code

Momentum Transformer: Closing the Performance Gap Between Self-attention and Its Linearization

no code implementations • 1 Aug 2022 • Tan Nguyen, Richard G. Baraniuk, Robert M. Kirby, Stanley J. Osher, Bao Wang

Transformers have achieved remarkable success in sequence modeling and beyond but suffer from quadratic computational and memory complexities with respect to the length of the input sequence.

Image Generation Machine Translation

Benign Overparameterization in Membership Inference with Early Stopping

no code implementations • 27 May 2022 • Jasper Tan, Daniel LeJeune, Blake Mason, Hamid Javadi, Richard G. Baraniuk

In this work, we study the effects that the number of training epochs and the number of parameters have on a neural network's vulnerability to membership inference (MI) attacks, which aim to extract potentially private information about the training data.

DeepTensor: Low-Rank Tensor Decomposition with Deep Network Priors

no code implementations • 7 Apr 2022 • Vishwanath Saragadam, Randall Balestriero, Ashok Veeraraghavan, Richard G. Baraniuk

DeepTensor is a computationally efficient framework for low-rank decomposition of matrices and tensors using deep generative networks.

Hyperspectral Image Denoising Image Classification +2

Singular Value Perturbation and Deep Network Optimization

no code implementations • 7 Mar 2022 • Rudolf H. Riedi, Randall Balestriero, Richard G. Baraniuk

Building on our earlier work connecting deep networks with continuous piecewise-affine splines, we develop an exact local linear representation of a deep network layer for a family of modern deep networks that includes ConvNets at one end of a spectrum and ResNets, DenseNets, and other networks with skip connections at the other.
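
For context, a minimal statement of the continuous piecewise-affine (CPA) view of deep networks that the snippet builds on, in our own notation and for the skip-free special case (the paper extends this to networks with skip connections):

```latex
% CPA view (notation ours; skip-free special case): the input space is
% partitioned into regions \Omega on which the network f is exactly affine,
f(x) = A_{\Omega}\,x + b_{\Omega}, \qquad x \in \Omega,
\qquad A_{\Omega} = W^{(L)} D_{\Omega}^{(L-1)} W^{(L-1)} \cdots D_{\Omega}^{(1)} W^{(1)},
% where each D_{\Omega}^{(\ell)} is the diagonal 0/1 activation pattern of
% layer \ell on region \Omega and the W^{(\ell)} are the layer weights.
```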

NeuroView-RNN: It's About Time

no code implementations • 23 Feb 2022 • CJ Barberan, Sina Alemohammad, Naiming Liu, Randall Balestriero, Richard G. Baraniuk

A key interpretability issue with RNNs is that it is unclear how each time step's hidden state quantitatively contributes to the decision-making process.

Decision Making Time Series

On Local Distributions in Graph Signal Processing

no code implementations • 22 Feb 2022 • T. Mitchell Roddenberry, Fernando Gama, Richard G. Baraniuk, Santiago Segarra

Leveraging this, we can seamlessly compare graphs of different sizes that come from different models, yielding results on the convergence of spectral densities, the transferability of filters across arbitrary graphs, and the continuity of graph signal properties with respect to the distribution of local substructures.

Open-Ended Knowledge Tracing

no code implementations • 21 Feb 2022 • Naiming Liu, Zichao Wang, Richard G. Baraniuk, Andrew Lan

We define a set of evaluation metrics for this domain and conduct a series of quantitative and qualitative experiments to test the boundaries of open-ended knowledge tracing methods on a real-world student code dataset.

Knowledge Tracing Navigate +1

MINER: Multiscale Implicit Neural Representations

1 code implementation • 7 Feb 2022 • Vishwanath Saragadam, Jasper Tan, Guha Balakrishnan, Richard G. Baraniuk, Ashok Veeraraghavan

We introduce a new neural signal model designed for efficient high-resolution representation of large-scale signals.

Image Reconstruction

Parameters or Privacy: A Provable Tradeoff Between Overparameterization and Membership Inference

no code implementations • 2 Feb 2022 • Jasper Tan, Blake Mason, Hamid Javadi, Richard G. Baraniuk

A surprising phenomenon in modern machine learning is the ability of a highly overparameterized model to generalize well (small error on the test data) even when it is trained to memorize the training data (zero error on the training data).

Inference Attack Membership Inference Attack

Improving Transformers with Probabilistic Attention Keys

1 code implementation • 16 Oct 2021 • Tam Nguyen, Tan M. Nguyen, Dung D. Le, Duy Khuong Nguyen, Viet-Anh Tran, Richard G. Baraniuk, Nhat Ho, Stanley J. Osher

Inspired by this observation, we propose Transformer with a Mixture of Gaussian Keys (Transformer-MGK), a novel transformer architecture that replaces redundant heads in transformers with a mixture of keys at each head.

Language Modelling Natural Language Processing

NeuroView: Explainable Deep Network Decision Making

no code implementations • 15 Oct 2021 • CJ Barberan, Randall Balestriero, Richard G. Baraniuk

Each member of the family is derived from a standard DN architecture by vector quantizing the unit output values and feeding them into a global linear classifier.

Decision Making

NFT-K: Non-Fungible Tangent Kernels

1 code implementation • 11 Oct 2021 • Sina Alemohammad, Hossein Babaei, CJ Barberan, Naiming Liu, Lorenzo Luzi, Blake Mason, Richard G. Baraniuk

To further improve interpretability with respect to classification and the network's layers, we develop a new network as a combination of multiple neural tangent kernels, one modeling each layer of the deep neural network individually, in contrast to past work that represents the entire network via a single neural tangent kernel.

Evaluating generative networks using Gaussian mixtures of image features

no code implementations • 8 Oct 2021 • Lorenzo Luzi, Carlos Ortiz Marrero, Nile Wynar, Richard G. Baraniuk, Michael J. Henry

We define a performance measure, which we call WaM, on two sets of images by using Inception-v3 (or another classifier) to featurize the images, estimate two GMMs, and use the restricted $2$-Wasserstein distance to compare the GMMs.
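
At the level of individual mixture components, the pipeline above reduces to the closed-form 2-Wasserstein distance between Gaussians; the restricted 2-Wasserstein distance between two GMMs is then a discrete optimal-transport problem with this quantity as the ground cost. A minimal sketch of the Gaussian building block (function name ours):

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2_sq(mu1, cov1, mu2, cov2):
    """Closed-form squared 2-Wasserstein distance between N(mu1, cov1)
    and N(mu2, cov2):
      W2^2 = ||mu1 - mu2||^2
             + Tr(cov1 + cov2 - 2 (cov2^{1/2} cov1 cov2^{1/2})^{1/2})."""
    s2 = sqrtm(cov2)                      # matrix square root of cov2
    cross = sqrtm(s2 @ cov1 @ s2)         # cross term; may be complex numerically
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(cov1 + cov2) - 2.0 * np.trace(cross).real)
```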

Math Word Problem Generation with Mathematical Consistency and Problem Context Constraints

no code implementations EMNLP 2021 Zichao Wang, Andrew S. Lan, Richard G. Baraniuk

We study the problem of generating arithmetic math word problems (MWPs) given a math equation that specifies the mathematical computation and a context that specifies the problem scenario.

Arithmetic Question Generation

A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning

no code implementations • 6 Sep 2021 • Yehuda Dar, Vidya Muthukumar, Richard G. Baraniuk

The rapid recent progress in machine learning (ML) has raised a number of scientific questions that challenge the longstanding dogma of the field.

The Flip Side of the Reweighted Coin: Duality of Adaptive Dropout and Regularization

1 code implementation • NeurIPS 2021 • Daniel LeJeune, Hamid Javadi, Richard G. Baraniuk

Among the most successful methods for sparsifying deep (neural) networks are those that adaptively mask the network weights throughout training.

NePTuNe: Neural Powered Tucker Network for Knowledge Graph Completion

1 code implementation • 15 Apr 2021 • Shashank Sonkar, Arzoo Katiyar, Richard G. Baraniuk

Knowledge graphs link entities through relations to provide a structured representation of real-world facts.

Link Prediction

Extreme Compressed Sensing of Poisson Rates from Multiple Measurements

1 code implementation • 15 Mar 2021 • Pavan K. Kota, Daniel LeJeune, Rebekah A. Drezek, Richard G. Baraniuk

Here, we present the first exploration of the MMV problem where signals are independently drawn from a sparse, multivariate Poisson distribution.

The Common Intuition to Transfer Learning Can Win or Lose: Case Studies for Linear Regression

no code implementations • 9 Mar 2021 • Yehuda Dar, Daniel LeJeune, Richard G. Baraniuk

We define a transfer learning approach to the target task as a linear regression optimization with a regularization on the distance between the to-be-learned target parameters and the already-learned source parameters.

Philosophy Transfer Learning
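
Read literally, the snippet describes ridge-style regularization toward the already-learned source parameters. A hedged sketch of such an objective in our own notation (the paper may use a different norm or a more general formulation):

```latex
% Target-task regression regularized toward the source parameters
% \beta_{src} (notation ours):
\hat{\beta}_{\mathrm{target}}
  = \arg\min_{\beta}\; \|y - X\beta\|_2^2
  + \lambda\,\|\beta - \beta_{\mathrm{src}}\|_2^2 .
```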

Instructions and Guide for Diagnostic Questions: The NeurIPS 2020 Education Challenge

no code implementations • 23 Jul 2020 • Zichao Wang, Angus Lamb, Evgeny Saveliev, Pashmina Cameron, Yordan Zaykov, José Miguel Hernández-Lobato, Richard E. Turner, Richard G. Baraniuk, Craig Barton, Simon Peyton Jones, Simon Woodhead, Cheng Zhang

In this competition, participants will focus on the students' answer records to these multiple-choice diagnostic questions, with the aim of 1) accurately predicting which answers the students provide; 2) accurately predicting which questions have high quality; and 3) determining a personalized sequence of questions for each student that best predicts the student's answers.

Misconceptions Multiple-choice

Ensembles of Generative Adversarial Networks for Disconnected Data

no code implementations • 25 Jun 2020 • Lorenzo Luzi, Randall Balestriero, Richard G. Baraniuk

Disconnected data distributions can be represented in two ways: with an ensemble of networks or with a single network with a truncated latent space.

Analytical Probability Distributions and EM-Learning for Deep Generative Networks

no code implementations • NeurIPS 2020 • Randall Balestriero, Sebastien Paris, Richard G. Baraniuk

Deep Generative Networks (DGNs) with probabilistic modeling of their output and latent space are currently trained via Variational Autoencoders (VAEs).

Anomaly Detection Imputation +1

Interpretable Super-Resolution via a Learned Time-Series Representation

no code implementations • 13 Jun 2020 • Randall Balestriero, Herve Glotin, Richard G. Baraniuk

We develop an interpretable and learnable Wigner-Ville distribution that produces a super-resolved quadratic signal representation for time-series analysis.

Super-Resolution Time Series +1
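
For reference, the classical Wigner-Ville distribution that the learnable, super-resolved representation builds on (standard definition; the paper's contribution is an interpretable, learnable variant of this object):

```latex
% Wigner-Ville distribution of a signal x (standard definition):
WV_x(t, \omega) = \int_{-\infty}^{\infty}
  x\!\left(t + \tfrac{\tau}{2}\right)\, x^{*}\!\left(t - \tfrac{\tau}{2}\right)
  e^{-i \omega \tau}\, d\tau .
```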

An Improved Semi-Supervised VAE for Learning Disentangled Representations

no code implementations • 12 Jun 2020 • Weili Nie, Zichao Wang, Ankit B. Patel, Richard G. Baraniuk

Learning interpretable and disentangled representations is a crucial yet challenging task in representation learning.

Disentanglement

Double Double Descent: On Generalization Errors in Transfer Learning between Linear Regression Tasks

no code implementations • 12 Jun 2020 • Yehuda Dar, Richard G. Baraniuk

We analytically characterize the generalization error of the target task in terms of the salient factors in the transfer learning architecture, i.e., the number of examples available, the number of (free) parameters in each of the tasks, the number of parameters transferred from the source to target task, and the relation between the two tasks.

Transfer Learning

MomentumRNN: Integrating Momentum into Recurrent Neural Networks

2 code implementations • NeurIPS 2020 • Tan M. Nguyen, Richard G. Baraniuk, Andrea L. Bertozzi, Stanley J. Osher, Bao Wang

Designing deep neural networks is an art that often involves an expensive search over candidate architectures.

Attention Word Embedding

no code implementations • COLING 2020 • Shashank Sonkar, Andrew E. Waters, Richard G. Baraniuk

Word embedding models learn semantically rich vector representations of words and are widely used to initialize natural language processing (NLP) models.

Word Similarity

qDKT: Question-centric Deep Knowledge Tracing

no code implementations • 25 May 2020 • Shashank Sonkar, Andrew E. Waters, Andrew S. Lan, Phillip J. Grimaldi, Richard G. Baraniuk

Knowledge tracing (KT) models, e.g., the deep knowledge tracing (DKT) model, track an individual learner's acquisition of skills over time by examining the learner's performance on questions related to those skills.

Knowledge Tracing Language Modelling

Deep Learning Techniques for Inverse Problems in Imaging

no code implementations • 12 May 2020 • Gregory Ongie, Ajil Jalal, Christopher A. Metzler, Richard G. Baraniuk, Alexandros G. Dimakis, Rebecca Willett

Recent work in machine learning shows that deep neural networks can be used to solve a wide variety of inverse problems arising in computational imaging.

Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks

1 code implementation • ICLR 2020 • Haoran You, Chaojian Li, Pengfei Xu, Yonggan Fu, Yue Wang, Xiaohan Chen, Richard G. Baraniuk, Zhangyang Wang, Yingyan Lin

Finally, we leverage the existence of EB tickets and the proposed mask distance to develop efficient training methods, which are achieved by first identifying EB tickets via low-cost schemes, and then continuing to train merely the EB tickets towards the target accuracy.

Educational Question Mining At Scale: Prediction, Analysis and Personalization

no code implementations • 12 Mar 2020 • Zichao Wang, Sebastian Tschiatschek, Simon Woodhead, José Miguel Hernández-Lobato, Simon Peyton Jones, Richard G. Baraniuk, Cheng Zhang

Online education platforms enable teachers to share a large number of educational resources such as questions to form exercises and quizzes for students.

Subspace Fitting Meets Regression: The Effects of Supervision and Orthonormality Constraints on Double Descent of Generalization Errors

no code implementations • ICML 2020 • Yehuda Dar, Paul Mayer, Lorenzo Luzi, Richard G. Baraniuk

We study the linear subspace fitting problem in the overparameterized setting, where the estimated subspace can perfectly interpolate the training examples.

Scheduled Restart Momentum for Accelerated Stochastic Gradient Descent

1 code implementation • 24 Feb 2020 • Bao Wang, Tan M. Nguyen, Andrea L. Bertozzi, Richard G. Baraniuk, Stanley J. Osher

Nesterov accelerated gradient (NAG) improves the convergence rate of gradient descent (GD) for convex optimization using a specially designed momentum; however, it accumulates error when an inexact gradient is used (such as in SGD), slowing convergence at best and diverging at worst.

General Classification Image Classification
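
The mechanism in the snippet, Nesterov momentum whose momentum counter is periodically reset, is easy to sketch. A minimal illustration in Python; the function name and restart schedule here are ours and only illustrative, not the paper's exact recipe:

```python
import numpy as np

def srsgd(grad, x0, lr=0.1, restart_every=40, n_iters=200):
    """Nesterov momentum with scheduled restarts (illustrative sketch):
    resetting the momentum counter bounds the error that accumulates
    when gradients are inexact, as with SGD."""
    x, v = x0.copy(), x0.copy()
    t = 1
    for k in range(n_iters):
        x_next = v - lr * grad(v)              # gradient step at lookahead point
        momentum = (t - 1) / (t + 2)           # Nesterov-style momentum weight
        v = x_next + momentum * (x_next - x)   # lookahead extrapolation
        x = x_next
        t = 1 if (k + 1) % restart_every == 0 else t + 1  # scheduled restart
    return x
```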

InfoCNF: An Efficient Conditional Continuous Normalizing Flow with Adaptive Solvers

no code implementations • 9 Dec 2019 • Tan M. Nguyen, Animesh Garg, Richard G. Baraniuk, Anima Anandkumar

Continuous Normalizing Flows (CNFs) have emerged as promising deep generative models for a wide range of tasks thanks to their invertibility and exact likelihood estimation.

Conditional Image Generation Time Series

The Implicit Regularization of Ordinary Least Squares Ensembles

1 code implementation • 10 Oct 2019 • Daniel LeJeune, Hamid Javadi, Richard G. Baraniuk

Ensemble methods that average over a collection of independent predictors, each limited to a subsample of both the examples and the features of the training data, command a significant presence in machine learning, the ever-popular random forest being a prime example; yet the nature of the subsampling effect, particularly on the features, is not well understood.

Drawing Early-Bird Tickets: Towards More Efficient Training of Deep Networks

1 code implementation • 26 Sep 2019 • Haoran You, Chaojian Li, Pengfei Xu, Yonggan Fu, Yue Wang, Xiaohan Chen, Richard G. Baraniuk, Zhangyang Wang, Yingyan Lin

In this paper, we discover for the first time that the winning tickets can be identified at the very early training stage, which we term early-bird (EB) tickets, via low-cost training schemes (e.g., early stopping and low-precision training) at large learning rates.

InfoCNF: Efficient Conditional Continuous Normalizing Flow Using Adaptive Solvers

no code implementations • 25 Sep 2019 • Tan M. Nguyen, Animesh Garg, Richard G. Baraniuk, Anima Anandkumar

Continuous Normalizing Flows (CNFs) have emerged as promising deep generative models for a wide range of tasks thanks to their invertibility and exact likelihood estimation.

Conditional Image Generation Time Series

Out-of-Distribution Detection Using Neural Rendering Generative Models

no code implementations • 10 Jul 2019 • Yujia Huang, Sihui Dai, Tan Nguyen, Richard G. Baraniuk, Anima Anandkumar

Our results show that when trained on CIFAR-10, lower likelihood (of latent variables) is assigned to SVHN images.

Neural Rendering OOD Detection +1

Implicit Rugosity Regularization via Data Augmentation

no code implementations • 28 May 2019 • Daniel LeJeune, Randall Balestriero, Hamid Javadi, Richard G. Baraniuk

Deep (neural) networks have been applied productively in a wide range of supervised and unsupervised learning tasks.

Data Augmentation

Thresholding Graph Bandits with GrAPL

1 code implementation • 22 May 2019 • Daniel LeJeune, Gautam Dasarathy, Richard G. Baraniuk

The main goal is to efficiently identify a subset of arms in a multi-armed bandit problem whose means are above a specified threshold.

Decision Making

IdeoTrace: A Framework for Ideology Tracing with a Case Study on the 2016 U.S. Presidential Election

no code implementations • 21 May 2019 • Indu Manickam, Andrew S. Lan, Gautam Dasarathy, Richard G. Baraniuk

We apply this framework to the last two months of the election period for a group of 47,508 Twitter users and demonstrate that both liberal and conservative users became more polarized over time.

Neural Rendering Model: Joint Generation and Prediction for Semi-Supervised Learning

no code implementations • ICLR 2019 • Nhat Ho, Tan Nguyen, Ankit B. Patel, Anima Anandkumar, Michael I. Jordan, Richard G. Baraniuk

The conjugate prior yields a new regularizer for learning based on the paths rendered in the generative model for training CNNs: the Rendering Path Normalization (RPN).

Neural Rendering

Representing Formal Languages: A Comparison Between Finite Automata and Recurrent Neural Networks

no code implementations • 27 Feb 2019 • Joshua J. Michalenko, Ameesh Shah, Abhinav Verma, Richard G. Baraniuk, Swarat Chaudhuri, Ankit B. Patel

We investigate the internal representations that a recurrent neural network (RNN) uses while learning to recognize a regular formal language.

Adaptive Estimation for Approximate k-Nearest-Neighbor Computations

1 code implementation • 25 Feb 2019 • Daniel LeJeune, Richard G. Baraniuk, Reinhard Heckel

Algorithms often carry out equally many computations for "easy" and "hard" problem instances.

Sub-linear Memory Sketches for Near Neighbor Search on Streaming Data

no code implementations • 18 Feb 2019 • Benjamin Coleman, Richard G. Baraniuk, Anshumali Shrivastava

We present the first sublinear memory sketch that can be queried to find the nearest neighbors in a dataset.

Density Estimation

A Bayesian Perspective of Convolutional Neural Networks through a Deconvolutional Generative Model

no code implementations • 1 Nov 2018 • Tan Nguyen, Nhat Ho, Ankit Patel, Anima Anandkumar, Michael I. Jordan, Richard G. Baraniuk

This conjugate prior yields a new regularizer based on paths rendered in the generative model for training CNNs: the Rendering Path Normalization (RPN).

From Hard to Soft: Understanding Deep Network Nonlinearities via Vector Quantization and Statistical Inference

no code implementations • ICLR 2019 • Randall Balestriero, Richard G. Baraniuk

We show that, under a GMM, piecewise affine, convex nonlinearities like ReLU, absolute value, and max-pooling can be interpreted as solutions to certain natural "hard" VQ inference problems, while sigmoid, hyperbolic tangent, and softmax can be interpreted as solutions to corresponding "soft" VQ inference problems.

Quantization
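
One way to see the hard/soft correspondence for ReLU: treat ReLU as a hard choice between the affine pieces {0, x}, then relax the choice to a softmax over the same pieces, which yields a sigmoid-gated unit. A hedged numpy illustration of this single case (our reading of the abstract, not the paper's full construction):

```python
import numpy as np

def hard_relu(x):
    # "Hard" VQ: pick the larger of the two affine pieces {0, x}.
    return np.maximum(0.0, x)

def soft_relu(x, tau=1.0):
    # "Soft" VQ: weight the pieces {0, x} by a softmax with temperature tau.
    # The softmax weight on the x piece is sigmoid(x / tau), so the expected
    # output is x * sigmoid(x / tau), a swish/SiLU-type gate; as tau -> 0
    # this recovers hard ReLU.
    w = 1.0 / (1.0 + np.exp(-x / tau))
    return x * w
```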

MISSION: Ultra Large-Scale Feature Selection using Count-Sketches

1 code implementation • 12 Jun 2018 • Amirali Aghazadeh, Ryan Spring, Daniel LeJeune, Gautam Dasarathy, Anshumali Shrivastava, Richard G. Baraniuk

We demonstrate that MISSION accurately and efficiently performs feature selection on real-world, large-scale datasets with billions of dimensions.

feature selection Machine Learning

Unsupervised Learning with Stein's Unbiased Risk Estimator

1 code implementation • 26 May 2018 • Christopher A. Metzler, Ali Mousavi, Reinhard Heckel, Richard G. Baraniuk

We show that, in the context of image recovery, SURE and its generalizations can be used to train convolutional neural networks (CNNs) for a range of image denoising and recovery problems without any ground truth data.

Astronomy Image Denoising
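
For reference, the SURE identity that makes ground-truth-free training possible under Gaussian noise (standard statement, notation ours; in practice the divergence term is typically approximated with a Monte Carlo probe):

```latex
% Stein's unbiased risk estimate for a denoiser f applied to
% y = x + n, with n ~ N(0, \sigma^2 I) in R^n; no ground-truth x needed:
\mathrm{SURE}(f, y) =
  \frac{1}{n}\,\|f(y) - y\|_2^2 - \sigma^2
  + \frac{2\sigma^2}{n}\,\operatorname{div}_y f(y),
\qquad
\operatorname{div}_y f(y) = \sum_{i=1}^{n} \frac{\partial f_i(y)}{\partial y_i}.
```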

Semi-Supervised Learning via New Deep Network Inversion

no code implementations • 12 Nov 2017 • Randall Balestriero, Vincent Roger, Herve G. Glotin, Richard G. Baraniuk

We exploit a recently derived inversion scheme for arbitrary deep neural networks to develop a new semi-supervised learning framework that applies to a wide range of systems and problems.

DeepCodec: Adaptive Sensing and Recovery via Deep Convolutional Neural Networks

no code implementations • 11 Jul 2017 • Ali Mousavi, Gautam Dasarathy, Richard G. Baraniuk

In this paper we develop a novel computational sensing framework for sensing and recovering structured signals.

Compressive Sensing

Learned D-AMP: Principled Neural Network based Compressive Image Recovery

1 code implementation • NeurIPS 2017 • Christopher A. Metzler, Ali Mousavi, Richard G. Baraniuk

The LDAMP network is easy to train, can be applied to a variety of different measurement matrices, and comes with a state-evolution heuristic that accurately predicts its performance.

Denoising

Data-Mining Textual Responses to Uncover Misconception Patterns

no code implementations • 24 Mar 2017 • Joshua J. Michalenko, Andrew S. Lan, Richard G. Baraniuk

An important, yet largely unstudied, problem in student data analysis is to detect misconceptions from students' responses to open-response questions.

Misconceptions Natural Language Processing

Learning to Invert: Signal Recovery via Deep Convolutional Networks

2 code implementations • 14 Jan 2017 • Ali Mousavi, Richard G. Baraniuk

The promise of compressive sensing (CS) has been offset by two significant challenges.

Compressive Sensing

Semi-Supervised Learning with the Deep Rendering Mixture Model

no code implementations • 6 Dec 2016 • Tan Nguyen, Wanjia Liu, Ethan Perez, Richard G. Baraniuk, Ankit B. Patel

Semi-supervised learning algorithms reduce the high cost of acquiring labeled training data by using both labeled and unlabeled data during learning.

Variational Inference

A Probabilistic Framework for Deep Learning

no code implementations • NeurIPS 2016 • Ankit B. Patel, Tan Nguyen, Richard G. Baraniuk

We develop a probabilistic framework for deep learning based on the Deep Rendering Mixture Model (DRMM), a new generative probabilistic model that explicitly captures variations in data due to latent task nuisance variables.

General Classification

Consistent Parameter Estimation for LASSO and Approximate Message Passing

no code implementations • 3 Nov 2015 • Ali Mousavi, Arian Maleki, Richard G. Baraniuk

For instance, the following basic questions have not yet been studied in the literature: (i) How does the size of the active set $\|\hat{\beta}^\lambda\|_0/p$ behave as a function of $\lambda$?
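
For context, the estimator the active-set question refers to (standard LASSO definition, consistent with the snippet's notation; $\|\hat{\beta}^\lambda\|_0$ counts the nonzero coordinates):

```latex
% LASSO (basis pursuit denoising) estimate at regularization level \lambda:
\hat{\beta}^{\lambda} = \arg\min_{\beta \in \mathbb{R}^{p}}
  \tfrac{1}{2}\,\|y - X\beta\|_2^2 + \lambda\,\|\beta\|_1 .
```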

A Deep Learning Approach to Structured Signal Recovery

no code implementations • 17 Aug 2015 • Ali Mousavi, Ankit B. Patel, Richard G. Baraniuk

In this paper, we develop a new framework for sensing and recovering structured signals.

Compressive Sensing Denoising

oASIS: Adaptive Column Sampling for Kernel Matrix Approximation

no code implementations • 19 May 2015 • Raajen Patel, Thomas A. Goldstein, Eva L. Dyer, Azalia Mirhoseini, Richard G. Baraniuk

Kernel matrices (e.g., Gram or similarity matrices) are essential for many state-of-the-art approaches to classification, clustering, and dimensionality reduction.

Dimensionality Reduction General Classification

A Probabilistic Theory of Deep Learning

1 code implementation • 2 Apr 2015 • Ankit B. Patel, Tan Nguyen, Richard G. Baraniuk

A grand challenge in machine learning is the development of computational algorithms that match or outperform humans in perceptual inference tasks that are complicated by nuisance variation.

Object Recognition speech-recognition +1

RankMap: A Platform-Aware Framework for Distributed Learning from Dense Datasets

1 code implementation • 27 Mar 2015 • Azalia Mirhoseini, Eva L. Dyer, Ebrahim M. Songhori, Richard G. Baraniuk, Farinaz Koushanfar

This paper introduces RankMap, a platform-aware end-to-end framework for efficient execution of a broad class of iterative learning algorithms for massive and dense datasets.

Distributed Computing

Video Compressive Sensing for Spatial Multiplexing Cameras using Motion-Flow Models

no code implementations • 9 Mar 2015 • Aswin C. Sankaranarayanan, Lina Xu, Christoph Studer, Yun Li, Kevin Kelly, Richard G. Baraniuk

In this paper, we propose the CS multi-scale video (CS-MUVI) sensing and recovery framework for high-quality video acquisition and recovery using SMCs.

Compressive Sensing Optical Flow Estimation +1

Mathematical Language Processing: Automatic Grading and Feedback for Open Response Mathematical Questions

no code implementations • 18 Jan 2015 • Andrew S. Lan, Divyanshu Vats, Andrew E. Waters, Richard G. Baraniuk

Our data-driven framework for mathematical language processing (MLP) leverages solution data from a large number of learners to evaluate the correctness of their solutions, assign partial-credit scores, and provide feedback to each learner on the likely locations of any errors.

Natural Language Processing Science / Technology

SPRITE: A Response Model For Multiple Choice Testing

no code implementations • 12 Jan 2015 • Ryan Ning, Andrew E. Waters, Christoph Studer, Richard G. Baraniuk

In this work, we propose a novel methodology for unordered categorical IRT that we call SPRITE (short for stochastic polytomous response item model) that: (i) analyzes both ordered and unordered categories, (ii) offers interpretable outputs, and (iii) provides improved data fitting compared to existing models.

Multiple-choice

Tag-Aware Ordinal Sparse Factor Analysis for Learning and Content Analytics

no code implementations • 18 Dec 2014 • Andrew S. Lan, Christoph Studer, Andrew E. Waters, Richard G. Baraniuk

SPARse Factor Analysis (SPARFA) is a novel framework for machine learning-based learning analytics, which estimates a learner's knowledge of the concepts underlying a domain, and content analytics, which estimates the relationships among a collection of questions and those concepts.

Collaborative Filtering Machine Learning +1

Quantized Matrix Completion for Personalized Learning

no code implementations • 18 Dec 2014 • Andrew S. Lan, Christoph Studer, Richard G. Baraniuk

The recently proposed SPARse Factor Analysis (SPARFA) framework for personalized learning performs factor analysis on ordinal or binary-valued (e.g., correct/incorrect) graded learner responses to questions.

Matrix Completion

Convex Biclustering

no code implementations • 5 Aug 2014 • Eric C. Chi, Genevera I. Allen, Richard G. Baraniuk

In the biclustering problem, we seek to simultaneously group observations and features.

Collaborative Filtering

From Denoising to Compressed Sensing

2 code implementations • 16 Jun 2014 • Christopher A. Metzler, Arian Maleki, Richard G. Baraniuk

A key element in D-AMP is the use of an appropriate Onsager correction term in its iterations, which coerces the signal perturbation at each iteration to be very close to the white Gaussian noise that denoisers are typically designed to remove.

Denoising
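
A minimal sketch of a D-AMP-style iteration showing where the Onsager correction enters. Here `denoise(x, sigma)` is a stand-in for any plug-in denoiser, and the divergence in the correction term is approximated with a Monte Carlo probe, a common trick for black-box denoisers; this is illustrative, not the authors' reference implementation:

```python
import numpy as np

def damp(y, A, denoise, n_iters=10, eps=1e-3):
    """Denoising-based AMP sketch: the Onsager term (div / m) * z keeps the
    effective noise in the pseudo-data approximately white Gaussian, which
    is what generic denoisers are designed to remove."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iters):
        sigma = np.linalg.norm(z) / np.sqrt(m)      # effective noise level
        r = x + A.T @ z                             # pseudo-data for the denoiser
        x_new = denoise(r, sigma)
        # Monte Carlo estimate of the denoiser's divergence at r:
        probe = np.random.randn(n)
        div = probe @ (denoise(r + eps * probe, sigma) - x_new) / eps
        z = y - A @ x_new + (div / m) * z           # residual + Onsager correction
        x = x_new
    return x
```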

Sparse Bilinear Logistic Regression

no code implementations • 15 Apr 2014 • Jianing V. Shi, Yangyang Xu, Richard G. Baraniuk

In this paper, we introduce the concept of sparse bilinear logistic regression for decision problems involving explanatory variables that are two-dimensional matrices.

Active Learning for Undirected Graphical Model Selection

no code implementations • 13 Apr 2014 • Divyanshu Vats, Robert D. Nowak, Richard G. Baraniuk

This paper studies graphical model selection, i.e., the problem of estimating a graph of statistical relationships among a collection of random variables.

Active Learning Model Selection

Path Thresholding: Asymptotically Tuning-Free High-Dimensional Sparse Regression

no code implementations • 23 Feb 2014 • Divyanshu Vats, Richard G. Baraniuk

In this paper, we address the challenging problem of selecting tuning parameters for high-dimensional sparse regression.

Model Selection

Video Compressive Sensing for Dynamic MRI

no code implementations • 30 Jan 2014 • Jianing V. Shi, Wotao Yin, Aswin C. Sankaranarayanan, Richard G. Baraniuk

We apply this framework to accelerate the acquisition process of dynamic MRI and show it achieves the best reconstruction accuracy with the least computational time compared with existing algorithms in the literature.

Compressive Sensing Video Compressive Sensing

Time-varying Learning and Content Analytics via Sparse Factor Analysis

no code implementations • 19 Dec 2013 • Andrew S. Lan, Christoph Studer, Richard G. Baraniuk

We propose SPARFA-Trace, a new machine learning-based framework for time-varying learning and content analytics for education applications.

Collaborative Filtering Knowledge Tracing

Swapping Variables for High-Dimensional Sparse Regression with Correlated Measurements

no code implementations • 5 Dec 2013 • Divyanshu Vats, Richard G. Baraniuk

We consider the high-dimensional sparse linear regression problem of accurately estimating a sparse vector using a small number of linear measurements that are contaminated by noise.

Parameterless Optimal Approximate Message Passing

no code implementations • 31 Oct 2013 • Ali Mousavi, Arian Maleki, Richard G. Baraniuk

In particular, both the final reconstruction error and the convergence rate of the algorithm crucially rely on how the threshold parameter is set at each step of the algorithm.

Compressive Sensing

Asymptotic Analysis of LASSO's Solution Path with Implications for Approximate Message Passing

no code implementations • 23 Sep 2013 • Ali Mousavi, Arian Maleki, Richard G. Baraniuk

This paper concerns the performance of the LASSO (also known as basis pursuit denoising) for recovering sparse signals from undersampled, randomized, noisy measurements.

Denoising

Joint Topic Modeling and Factor Analysis of Textual Information and Graded Response Data

no code implementations • 8 May 2013 • Andrew S. Lan, Christoph Studer, Andrew E. Waters, Richard G. Baraniuk

In order to better interpret the estimated latent concepts, SPARFA relies on a post-processing step that utilizes user-defined tags (e.g., topics or keywords) available for each question.

Machine Learning

Greedy Feature Selection for Subspace Clustering

no code implementations • 19 Mar 2013 • Eva L. Dyer, Aswin C. Sankaranarayanan, Richard G. Baraniuk

To learn a union of subspaces from a collection of data, sets of signals in the collection that belong to the same subspace must be identified in order to obtain accurate estimates of the subspace structures present in the data.

feature selection

Compressive Acquisition of Dynamic Scenes

no code implementations • 23 Jan 2012 • Aswin C. Sankaranarayanan, Pavan K Turaga, Rama Chellappa, Richard G. Baraniuk

Compressive sensing (CS) is a new approach for the acquisition and recovery of sparse signals and images that enables sampling rates significantly below the classical Nyquist rate.

Compressive Sensing