no code implementations • 7 Mar 2024 • Peimeng Guan, Naveed Iqbal, Mark A. Davenport, Mudassir Masood
Model-based deep learning methods such as \emph{loop unrolling} (LU) and \emph{deep equilibrium model} (DEQ) extensions offer outstanding performance in solving inverse problems (IP).
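To make the loop-unrolling idea concrete, the toy NumPy sketch below (a hypothetical illustration, not the paper's architecture) treats each iteration of gradient descent on a least-squares data-fit term as one network "layer"; in a trained LU model the step size, and typically a learned regularization operator applied between layers, would be trainable parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem y = A x_true (well-conditioned for simplicity)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
y = A @ x_true

def unrolled_reconstruction(y, A, n_layers=500):
    """Run a fixed number of gradient steps on 0.5 * ||Ax - y||^2.

    In a trained loop-unrolled network the step size (and any learned
    inter-layer regularizer) would be learned from data.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = x - step * (A.T @ (A @ x - y))   # one "layer" = one gradient step
    return x

x_hat = unrolled_reconstruction(y, A)
```

DEQ extensions replace the fixed number of layers with the fixed point of one such layer applied repeatedly.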
no code implementations • 6 Dec 2023 • Coleman DeLude, Mark A. Davenport, Justin Romberg
Alongside a careful discussion of this model and how to choose its parameters, we show how to fit the model to new blocks of samples as they are received, producing a streaming output.

no code implementations • 19 Jul 2023 • Peimeng Guan, Naveed Iqbal, Mark A. Davenport, Mudassir Masood
Due to the sparse nature of the reflectivity sequence, spike-promoting regularizers such as the $\ell_1$-norm are frequently used.
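A minimal sketch of this kind of spike-promoting recovery, assuming a toy convolutional forward model and plain ISTA (illustrative only, not necessarily the algorithm developed in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse reflectivity sequence observed through convolution with a wavelet
n = 100
x_true = np.zeros(n)
x_true[[10, 40, 70]] = [1.0, -0.8, 0.6]
wavelet = np.array([0.5, 1.0, 0.5])
A = np.zeros((n + len(wavelet) - 1, n))
for j in range(n):
    A[j:j + len(wavelet), j] = wavelet      # convolution as a matrix
y = A @ x_true + 0.01 * rng.standard_normal(A.shape[0])

def ista(y, A, lam=0.05, n_iter=1000):
    """Iterative soft-thresholding for min 0.5 ||Ax - y||^2 + lam ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - (A.T @ (A @ x - y)) / L     # gradient step on the data fit
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

x_hat = ista(y, A)
```

The $\ell_1$ penalty drives almost all entries of `x_hat` to exactly zero, leaving estimates concentrated at the true spike locations.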
no code implementations • 3 May 2023 • Chiraag Kaushik, Andrew D. McRae, Mark A. Davenport, Vidya Muthukumar
The support vector machine (SVM) is a supervised learning algorithm that finds a maximum-margin linear classifier, often after mapping the data to a high-dimensional feature space via the kernel trick.
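The classic XOR example illustrates why the mapping matters: the data are not linearly separable in the plane, but an explicit degree-2 feature map (whose inner products a polynomial kernel computes implicitly) makes a separating hyperplane available. A small hypothetical check:

```python
import numpy as np

# XOR-style data: the label is the product of the coordinate signs
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
y = np.array([1, -1, -1, 1])

def phi(x):
    """Explicit feature map matching (up to constants) the cross-term
    of a degree-2 polynomial kernel: (x1, x2, x1 * x2)."""
    return np.array([x[0], x[1], x[0] * x[1]])

# In the lifted space the hyperplane w = (0, 0, 1) separates the classes,
# even though no line in the original plane can.
w = np.array([0.0, 0.0, 1.0])
margins = np.array([y_i * (w @ phi(x_i)) for x_i, y_i in zip(X, y)])
```

Every margin is strictly positive, confirming separability after the map; the kernel trick lets an SVM do this without ever forming `phi` explicitly.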
no code implementations • 10 Oct 2022 • Peimeng Guan, Jihui Jin, Justin Romberg, Mark A. Davenport
In inverse problems we aim to reconstruct some underlying signal of interest from potentially corrupted and often ill-posed measurements.
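To see why ill-posedness matters, the illustrative NumPy sketch below compares plain least squares with Tikhonov-regularized reconstruction on an operator with rapidly decaying singular values; the operator and noise level are made-up toy values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ill-conditioned forward operator: plain least squares amplifies noise
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -6, n)                  # rapidly decaying singular values
A = U @ np.diag(s) @ V.T
x_true = rng.standard_normal(n)
y = A @ x_true + 1e-4 * rng.standard_normal(n)

def tikhonov(y, A, lam):
    """Solve min ||Ax - y||^2 + lam ||x||^2 in closed form."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

x_ls = np.linalg.lstsq(A, y, rcond=None)[0]   # unregularized
x_reg = tikhonov(y, A, lam=1e-6)
err_ls = np.linalg.norm(x_ls - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
```

The regularized estimate trades a small bias for a large reduction in noise amplification along the weakly observed directions.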
no code implementations • 8 Feb 2022 • Nauman Ahad, Eva L. Dyer, Keith B. Hengen, Yao Xie, Mark A. Davenport
We present a novel change point detection framework that uses true change point instances as supervision for learning a ground metric; Sinkhorn divergences under this learned metric can then be used in two-sample tests on sliding windows to detect change points in an online manner.
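For reference, an entropy-regularized optimal transport cost can be computed with Sinkhorn's fixed-point iterations. The toy sketch below uses a fixed squared-distance ground cost (the paper instead learns this metric) and recovers the expected transport cost between two point masses:

```python
import numpy as np

def sinkhorn_cost(a, b, C, eps=0.5, n_iter=200):
    """Entropy-regularized OT cost between histograms a and b with
    ground-cost matrix C, via Sinkhorn's alternating scalings."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = np.diag(u) @ K @ np.diag(v)   # transport plan
    return np.sum(P * C)

# Two point masses on the grid {0, ..., 4}; moving unit mass from 0 to 4
# under squared-distance cost should cost 4^2 = 16.
x = np.arange(5, dtype=float)
C = (x[:, None] - x[None, :]) ** 2
a = np.array([1.0, 0, 0, 0, 0])
b = np.array([0, 0, 0, 0, 1.0])
cost = sinkhorn_cost(a, b, C)
```

In the paper's framework the entries of `C` would come from a learned ground metric rather than raw squared distances.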
no code implementations • 4 Feb 2022 • Namrata Nadagouda, Austin Xu, Mark A. Davenport
Motivated by this, we propose a novel unified query framework that can be applied to any problem in which a key component is learning a representation of the data that reflects similarity.
no code implementations • 9 Nov 2021 • Andrew D. McRae, Santhosh Karnik, Mark A. Davenport, Vidya Muthukumar
Our results recover prior independent-features results (with a much simpler analysis), but they furthermore show that harmless interpolation can occur in more general settings such as features that are a bounded orthonormal system.
1 code implementation • NeurIPS 2021 • Feng Zhu, Andrew R. Sedler, Harrison A. Grier, Nauman Ahad, Mark A. Davenport, Matthew T. Kaufman, Andrea Giovannucci, Chethan Pandarinath
We test SBTT applied to sequential autoencoders and demonstrate more efficient and higher-fidelity characterization of neural population dynamics in electrophysiological and calcium imaging data.
no code implementations • 22 Mar 2021 • Santhosh Karnik, Justin Romberg, Mark A. Davenport
This is useful in problems where many samples are taken and, consequently, many tapers are desirable.
no code implementations • 24 Sep 2020 • Nauman Ahad, Mark A. Davenport
We show that change points provide examples of similar/dissimilar pairs of sequences which, when coupled with labeled data, can be used in a semi-supervised classification setting.
no code implementations • NeurIPS 2020 • Austin Xu, Mark A. Davenport
The underlying assumption in this model is that a smaller distance between $\mathbf{u}$ and an item $\mathbf{x_j}$ indicates a stronger preference for $\mathbf{x_j}$.
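A minimal numeric illustration of this ideal-point model, with made-up coordinates: whichever item lies closer to the user point $\mathbf{u}$ is the one the model predicts is preferred.

```python
import numpy as np

# Hypothetical ideal-point model: user u prefers the closer item
u = np.array([0.2, 0.8])
items = np.array([[0.1, 0.9],    # item 0: near u
                  [0.9, 0.1]])   # item 1: far from u
dists = np.linalg.norm(items - u, axis=1)
preferred = int(np.argmin(dists))  # smaller distance => stronger preference
```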
no code implementations • 18 May 2020 • Liangbei Xu, Mark A. Davenport
In this paper we propose a novel approach to knowledge tracing that combines techniques from matrix factorization with recent progress in recurrent neural networks (RNNs) to effectively track the state of a student's knowledge.
no code implementations • 20 Mar 2020 • Rakshith S Srinivasa, Mark A. Davenport, Justin Romberg
We consider sketched approximate matrix multiplication and ridge regression in the novel setting of localized sketching, where at any given point, only part of the data matrix is available.
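A rough sketch of localized sketching for ridge regression, assuming independent Gaussian sketches applied block-by-block so that the full data matrix never has to be available at once (illustrative; the paper's constructions and guarantees are more general):

```python
import numpy as np

rng = np.random.default_rng(4)

n, d = 2000, 20
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(n)
lam = 1.0

def ridge(A, b, lam):
    """Closed-form solution of min ||Ax - b||^2 + lam ||x||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

# Localized sketching: each block of rows gets its own small Gaussian
# sketch, computed as that part of the data becomes available.
n_blocks, m_i = 4, 100
SA_parts, Sb_parts = [], []
for idx in np.split(np.arange(n), n_blocks):
    S_i = rng.standard_normal((m_i, len(idx))) / np.sqrt(m_i)
    SA_parts.append(S_i @ A[idx])
    Sb_parts.append(S_i @ b[idx])
SA, Sb = np.vstack(SA_parts), np.concatenate(Sb_parts)

x_full = ridge(A, b, lam)                 # ridge on all 2000 rows
x_sketch = ridge(SA, Sb, lam)             # ridge on 400 sketched rows
rel_err = np.linalg.norm(x_sketch - x_full) / np.linalg.norm(x_full)
```

The sketched solve works on 400 rows instead of 2000 yet stays close to the full ridge solution.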
no code implementations • 11 Jul 2019 • Andrew D. McRae, Mark A. Davenport
This paper considers the problem of estimating a low-rank matrix from the observation of all or a subset of its entries in the presence of Poisson noise.
1 code implementation • 10 May 2019 • Gregory H. Canal, Andrew K. Massimino, Mark A. Davenport, Christopher J. Rozell
Suppose that we wish to estimate a user's preference vector $w$ from paired comparisons of the form "does user $w$ prefer item $p$ or item $q$?," where both the user and items are embedded in a low-dimensional Euclidean space with distances that reflect user and item similarities.
1 code implementation • 19 Feb 2018 • Andrew K. Massimino, Mark A. Davenport
Suppose that we wish to estimate a vector $\mathbf{x}$ from a set of binary paired comparisons of the form "$\mathbf{x}$ is closer to $\mathbf{p}$ than to $\mathbf{q}$" for various choices of vectors $\mathbf{p}$ and $\mathbf{q}$.
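Each such comparison is, after expanding the squared norms, a linear (halfspace) constraint on $\mathbf{x}$: $\|\mathbf{x}-\mathbf{p}\|^2 < \|\mathbf{x}-\mathbf{q}\|^2$ is equivalent to $2(\mathbf{q}-\mathbf{p})^\top \mathbf{x} < \|\mathbf{q}\|^2 - \|\mathbf{p}\|^2$. A quick numerical check of this identity (illustrative sketch, not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(5)

# "x is closer to p than to q" is a halfspace constraint on x:
#   ||x - p||^2 < ||x - q||^2   <=>   2 (q - p) @ x < ||q||^2 - ||p||^2
def closer_to_p(x, p, q):
    return np.linalg.norm(x - p) < np.linalg.norm(x - q)

def halfspace(x, p, q):
    return 2 * (q - p) @ x < q @ q - p @ p

d = 4
agree = all(
    closer_to_p(x, p, q) == halfspace(x, p, q)
    for x, p, q in (rng.standard_normal((3, d)) for _ in range(100))
)
```

This linearity is what lets binary paired comparisons be treated like one-bit linear measurements of $\mathbf{x}$.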
no code implementations • NeurIPS 2016 • Liangbei Xu, Mark A. Davenport
Low-rank matrix factorizations arise in a wide variety of applications -- including recommendation systems, topic models, and source separation, to name just a few.
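As a minimal illustration of low-rank structure (not a method from the paper), the best rank-$r$ approximation of a matrix is obtained by truncating its SVD (the Eckart-Young theorem), which also denoises a low-rank matrix observed with small perturbations:

```python
import numpy as np

rng = np.random.default_rng(6)

# A rank-3 matrix observed with small additive noise
m, n, r = 40, 30, 3
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
noisy = M + 0.01 * rng.standard_normal((m, n))

# Best rank-r approximation: keep the top r singular triples
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
M_hat = U[:, :r] * s[:r] @ Vt[:r, :]
rel_err = np.linalg.norm(M_hat - M) / np.linalg.norm(M)
```

Recommendation systems, topic models, and source separation all exploit this same principle with factors adapted to their observation models.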
1 code implementation • 19 Jun 2015 • Mark A. Davenport, Andrew K. Massimino, Deanna Needell, Tina Woolf
Suppose that we wish to estimate a vector $\mathbf{x} \in \mathbb{C}^n$ from a small number of noisy linear measurements of the form $\mathbf{y} = \mathbf{A x} + \mathbf{z}$, where $\mathbf{z}$ represents measurement noise.
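The measurement model can be simulated directly; in the overdetermined noisy case, ordinary least squares already recovers $\mathbf{x}$ up to the noise level (an illustrative sketch with made-up dimensions, not the estimator studied in the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

# Noisy complex linear measurements y = A x + z with m > n
m, n = 60, 20
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
z = 0.01 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))
y = A @ x + z

# Least-squares recovery; accuracy is limited only by the noise z
x_hat = np.linalg.lstsq(A, y, rcond=None)[0]
rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```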
Information Theory
no code implementations • CVPR 2014 • Hongteng Xu, Hongyuan Zha, Mark A. Davenport
In this paper, we present a novel method to synthesize dynamic texture sequences from extremely few samples, e.g., merely two possibly disparate frames, leveraging both Markov Random Fields (MRFs) and manifold learning.