no code implementations • 16 Oct 2024 • Daqian Bao, Alex Saad-Falcon, Justin Romberg
Compared to traditional scene models in radar signal processing, our RIFT model achieves up to a 188% improvement in scene reconstruction while using only a 10% data footprint.
no code implementations • 11 Oct 2024 • Brighton Ancelin, Alex Saad-Falcon, Kason Ancelin, Justin Romberg
We propose new algorithms to efficiently average a collection of points on a Grassmannian manifold in both the centralized and decentralized settings.
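For concreteness, a minimal sketch of one standard centralized approach (the chordal, or projection-matrix, mean) — not necessarily the algorithms proposed in the paper: average the orthogonal projectors onto the subspaces and take the top-$k$ eigenvectors of the result.

```python
import numpy as np

def chordal_grassmann_mean(subspaces, k):
    """Average points on the Grassmannian Gr(k, n).

    subspaces : list of (n, k) matrices with orthonormal columns.
    Returns an (n, k) orthonormal basis for the averaged subspace.
    """
    # Average the orthogonal projectors U U^T associated with each point.
    P_bar = sum(U @ U.T for U in subspaces) / len(subspaces)
    # The chordal mean is spanned by the top-k eigenvectors of the average.
    eigvals, eigvecs = np.linalg.eigh(P_bar)
    return eigvecs[:, -k:]

# Toy usage: average noisy copies of a common 3-dimensional subspace of R^20.
rng = np.random.default_rng(0)
U0, _ = np.linalg.qr(rng.standard_normal((20, 3)))
points = [np.linalg.qr(U0 + 0.1 * rng.standard_normal((20, 3)))[0] for _ in range(10)]
U_mean = chordal_grassmann_mean(points, k=3)
```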
no code implementations • 24 Jun 2024 • Nakul Singh, Coleman DeLude, Mark A. Davenport, Justin Romberg
We introduce a new method for robust beamforming, where the goal is to estimate a signal from array samples when there is uncertainty in the angle of arrival.
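A common baseline for handling angle-of-arrival uncertainty is diagonal loading of the MVDR beamformer; the sketch below is that baseline, shown only to illustrate the setting, and is not the estimator developed in the paper.

```python
import numpy as np

def loaded_mvdr_weights(R, a_nominal, loading=1e-1):
    """MVDR beamformer with diagonal loading, a standard robustification
    against mismatch between the nominal and true steering vectors."""
    n = R.shape[0]
    R_loaded = R + loading * np.trace(R) / n * np.eye(n)
    w = np.linalg.solve(R_loaded, a_nominal)
    return w / (a_nominal.conj() @ w)

# Toy usage: 8-element uniform linear array, nominal arrival angle 20 degrees.
n, theta = 8, np.deg2rad(20.0)
a = np.exp(1j * np.pi * np.arange(n) * np.sin(theta))   # half-wavelength spacing
R = np.eye(n) + 0.5 * np.outer(a, a.conj())              # stand-in array covariance
w = loaded_mvdr_weights(R, a)
estimate = lambda snapshot: w.conj() @ snapshot           # apply to array samples
```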
no code implementations • 13 Jun 2024 • Coleman DeLude, Joe Driscoll, Mandovi Mukherjee, Nael Rahman, Uday Kamal, Xiangyu Mao, Sharjeel Khan, Hariharan Sivaraman, Eric Huang, Jeffrey McHarg, Madhavan Swaminathan, Santosh Pande, Saibal Mukhopadhyay, Justin Romberg
For an emulation scenario consisting of $M$ objects all interacting with one another, the tapped delay line model's computational requirements scale as $O(M^3)$ per sample: there are $O(M^2)$ channels, each with $O(M)$ complexity.
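A toy sketch of where the $O(M^3)$-per-sample cost comes from (a hypothetical per-sample loop, not the paper's emulator): $O(M^2)$ pairwise channels, each an $O(M)$-tap FIR filter.

```python
import numpy as np

def emulate_sample(tx_samples, taps, delay_lines):
    """One emulation step for M mutually interacting objects.

    tx_samples  : (M,) complex samples transmitted by each object this step.
    taps        : (M, M, M) FIR coefficients; taps[i, j] is the O(M)-tap
                  channel from object j to object i.
    delay_lines : (M, M, M) per-channel delay-line state, updated in place.
    Cost: O(M^2) channels, each an O(M)-tap filter -> O(M^3) per sample.
    """
    M = tx_samples.shape[0]
    rx = np.zeros(M, dtype=complex)
    for i in range(M):
        for j in range(M):
            if i == j:
                continue
            # Push the newest transmitted sample into this channel's delay line.
            delay_lines[i, j] = np.roll(delay_lines[i, j], 1)
            delay_lines[i, j, 0] = tx_samples[j]
            rx[i] += taps[i, j] @ delay_lines[i, j]       # O(M) per channel
    return rx

# Usage: M = 9 objects -> 72 channels, each a 9-tap filter, evaluated every sample.
M = 9
rx = emulate_sample(np.ones(M, dtype=complex),
                    np.zeros((M, M, M)), np.zeros((M, M, M), dtype=complex))
```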
no code implementations • 13 Jun 2024 • Mandovi Mukherjee, Xiangyu Mao, Nael Rahman, Coleman DeLude, Joe Driscoll, Sudarshan Sharma, Payman Behnam, Uday Kamal, Jongseok Woo, Daehyun Kim, Sharjeel Khan, Jianming Tong, Jamin Seo, Prachi Sinha, Madhavan Swaminathan, Tushar Krishna, Santosh Pande, Justin Romberg, Saibal Mukhopadhyay
The FPGA-based implementation, evaluated on a Xilinx ZCU104 board, demonstrates a $9$-node test case (two transmitters, one receiver, and six passive reflectors) with an emulation range of $1.13$ km to $27.3$ km at $215$ MHz bandwidth.
no code implementations • 4 Jun 2024 • Chiraag Kaushik, Justin Romberg, Vidya Muthukumar
The classical iteratively reweighted least-squares (IRLS) algorithm aims to recover an unknown signal from linear measurements by solving a sequence of weighted least-squares problems, where the weights are recursively updated at each step.
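A minimal sketch of the classical IRLS iteration for sparse recovery (the textbook version with a fixed smoothing parameter; the paper studies variants of this scheme):

```python
import numpy as np

def irls_sparse(A, y, p=1.0, eps=1e-3, iters=50):
    """Classical IRLS: promote sparsity by solving a sequence of weighted
    least-squares problems with weights updated from the current iterate."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]          # unweighted initialization
    for _ in range(iters):
        w = (x**2 + eps**2) ** (p / 2.0 - 1.0)        # reweighting; eps keeps w bounded
        D = 1.0 / w                                    # D = W^{-1}
        # Minimize sum_i w_i x_i^2 subject to A x = y:
        # closed form x = D A^T (A D A^T)^{-1} y.
        x = D * (A.T @ np.linalg.solve((A * D) @ A.T, y))
    return x

# Toy usage: recover a 5-sparse vector from 40 random measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
x_hat = irls_sparse(A, A @ x_true)
```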
no code implementations • 3 May 2024 • Sihan Zeng, Thinh T. Doan, Justin Romberg
Multi-task reinforcement learning (RL) aims to find a single policy that effectively solves multiple tasks at the same time.
no code implementations • 15 Feb 2024 • Alex Saad-Falcon, Brighton Ancelin, Justin Romberg
Tracking signals in dynamic environments presents difficulties in both analysis and implementation.
no code implementations • 6 Dec 2023 • Coleman DeLude, Mark A. Davenport, Justin Romberg
Alongside a careful discussion of this model and how to choose its parameters, we show how to fit the model to new blocks of samples as they are received, producing a streaming output.
no code implementations • 4 Apr 2023 • Styliani I. Kampezidou, Justin Romberg, Kyriakos G. Vamvoudakis, Dimitri N. Mavris
In this work, a novel Stackelberg game theoretic framework is proposed for trading energy bidirectionally between the demand-response (DR) aggregator and the prosumers.
no code implementations • 21 Oct 2022 • Coleman DeLude, Rakshith Sharma, Santhosh Karnik, Christopher Hood, Mark Davenport, Justin Romberg
We show that by using these models, our adapted algorithms can successfully localize broadband sources under a variety of adverse operating scenarios.
no code implementations • 10 Oct 2022 • Peimeng Guan, Jihui Jin, Justin Romberg, Mark A. Davenport
In inverse problems, we aim to reconstruct some underlying signal of interest from potentially corrupted measurements of an often ill-posed forward model.
no code implementations • 2 Aug 2022 • Justin Romberg
We present an online algorithm for reconstructing a signal from a set of non-uniform samples.
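A sliding-window least-squares sketch of the setting (not the paper's online algorithm): fit a truncated Fourier model to each incoming batch of non-uniform samples and evaluate it on a uniform grid.

```python
import numpy as np

def window_reconstruct(sample_times, sample_values, eval_times, K=10, T=1.0):
    """Fit a truncated Fourier series to non-uniformly spaced samples in a
    window of length T and evaluate the reconstruction on a uniform grid."""
    def fourier_features(t):
        k = np.arange(1, K + 1)
        return np.hstack([np.ones((t.size, 1)),
                          np.cos(2 * np.pi * np.outer(t, k) / T),
                          np.sin(2 * np.pi * np.outer(t, k) / T)])
    coeffs, *_ = np.linalg.lstsq(fourier_features(sample_times), sample_values,
                                 rcond=None)
    return fourier_features(eval_times) @ coeffs

# Streaming usage: process each new window of samples as it arrives,
# re-fitting (or warm-starting from) the previous window's coefficients.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 1, 60))                      # non-uniform sample locations
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 5 * t)
x_hat = window_reconstruct(t, x, np.linspace(0, 1, 200))
```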
no code implementations • 14 Jun 2022 • Coleman DeLude, Santhosh Karnik, Mark Davenport, Justin Romberg
In modern applications, multi-sensor arrays are subject to an ever-present demand to accommodate signals with higher bandwidths.
no code implementations • 27 May 2022 • Sihan Zeng, Thinh T. Doan, Justin Romberg
We study the problem of finding the Nash equilibrium in a two-player zero-sum Markov game.
no code implementations • 28 Oct 2021 • Brighton Ancelin, Sohail Bahmani, Justin Romberg
We consider the "all-for-one" decentralized learning problem for generalized linear models.
no code implementations • 21 Oct 2021 • Sihan Zeng, Thinh T. Doan, Justin Romberg
To solve this constrained optimization program, we study an online actor-critic variant of a classic primal-dual method where the gradients of both the primal and dual functions are estimated using samples from a single trajectory generated by the underlying time-varying Markov processes.
no code implementations • 29 Sep 2021 • Sihan Zeng, Thinh T. Doan, Justin Romberg
First, we look at the infinite-horizon average-reward MDP with finite state and action spaces and derive a convergence rate of $O(k^{-2/5})$ for the online actor-critic algorithm under function approximation, which recovers the best known rate derived specifically for this problem.
no code implementations • 22 Mar 2021 • Santhosh Karnik, Justin Romberg, Mark A. Davenport
This is useful in problems where many samples are taken, and thus, using many tapers is desirable.
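The tapers in question are discrete prolate spheroidal (Slepian) sequences; below is a minimal multitaper spectral estimate using SciPy's DPSS routine, shown as a usage illustration rather than the fast construction developed in the paper.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, NW=4.0, K=7):
    """Thomson multitaper spectral estimate: average the periodograms of the
    signal windowed by the first K Slepian (DPSS) tapers."""
    tapers = dpss(x.size, NW, Kmax=K)                 # shape (K, N)
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return spectra.mean(axis=0)

# Usage: estimate the spectrum of a noisy two-tone signal.
n = np.arange(4096)
x = np.sin(0.2 * np.pi * n) + 0.5 * np.sin(0.21 * np.pi * n) \
    + 0.1 * np.random.default_rng(0).standard_normal(n.size)
S_hat = multitaper_psd(x)
```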
no code implementations • 26 Jan 2021 • Sajad Khodadadian, Thinh T. Doan, Justin Romberg, Siva Theja Maguluri
In this paper, we characterize the \emph{global} convergence of an online natural actor-critic algorithm in the tabular setting using a single trajectory of samples.
no code implementations • 28 Oct 2020 • Sihan Zeng, Thinh T. Doan, Justin Romberg
We study a decentralized variant of stochastic approximation, a data-driven approach for finding the root of an operator under noisy measurements.
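A toy sketch of the generic decentralized stochastic approximation template (local noisy operator evaluation plus consensus averaging with neighbors), on a made-up root-finding problem; it illustrates the setup rather than the paper's specific scheme or analysis.

```python
import numpy as np

def decentralized_sa(targets, W, steps=2000, alpha=0.05, noise=0.1, rng=None):
    """Each agent i seeks the root of the averaged operator
    F(x) = mean_i (targets[i] - x) using only noisy local evaluations
    and consensus averaging with its neighbors.

    targets : (N, d) array; agent i's local operator is F_i(x) = targets[i] - x.
    W       : (N, N) doubly stochastic mixing (consensus) matrix.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    N, d = targets.shape
    X = np.zeros((N, d))                           # each row is an agent's iterate
    for _ in range(steps):
        noisy_F = targets - X + noise * rng.standard_normal((N, d))
        X = W @ X + alpha * noisy_F                # consensus step + local SA step
    return X                                       # rows hover near targets.mean(axis=0)

# Toy usage: 4 agents on a ring, each holding a different local target in R^2.
W = np.array([[.5, .25, 0, .25], [.25, .5, .25, 0],
              [0, .25, .5, .25], [.25, 0, .25, .5]])
X_final = decentralized_sa(np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]]), W)
```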
no code implementations • 15 Jun 2020 • Rakshith S Srinivasa, Cao Xiao, Lucas Glass, Justin Romberg, Jimeng Sun
The attention mechanism has demonstrated superior performance for inference over nodes in graph neural networks (GNNs); however, it imposes a high computational burden during both training and inference.
no code implementations • NeurIPS 2020 • Andrew McRae, Justin Romberg, Mark Davenport
We consider the theory of regression on a manifold using reproducing kernel Hilbert space methods.
no code implementations • 8 Jun 2020 • Sihan Zeng, Aqeel Anwar, Thinh Doan, Arijit Raychowdhury, Justin Romberg
We develop a mathematical framework for solving multi-task reinforcement learning (MTRL) problems based on a type of policy gradient method.
no code implementations • 24 Mar 2020 • Thinh T. Doan, Lam M. Nguyen, Nhan H. Pham, Justin Romberg
Motivated by broad applications in reinforcement learning and machine learning, this paper considers the popular stochastic gradient descent (SGD) when the gradients of the underlying objective function are sampled from Markov processes.
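A minimal sketch of the sampling model: SGD for a toy least-squares problem in which the sampled row index follows a Markov chain (a biased but ergodic random walk over the data) rather than being drawn i.i.d.

```python
import numpy as np

def markov_sgd(A, y, P, steps=5000, alpha=1e-2, rng=None):
    """SGD for least squares where the sampled row index follows a Markov
    chain with transition matrix P instead of being drawn i.i.d."""
    if rng is None:
        rng = np.random.default_rng(0)
    m, n = A.shape
    x = np.zeros(n)
    i = 0                                          # initial state of the chain
    for _ in range(steps):
        i = rng.choice(m, p=P[i])                  # Markovian sample of the next row
        grad = (A[i] @ x - y[i]) * A[i]            # stochastic gradient from that row
        x -= alpha * grad
    return x

# Toy usage: random-walk sampling over the rows of a consistent least-squares system.
rng = np.random.default_rng(3)
m, n = 50, 10
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = A @ x_true
P = np.full((m, m), 1e-3)
np.fill_diagonal(P, 0.0)
P[np.arange(m), (np.arange(m) + 1) % m] += 1.0     # mostly step to the next row
P /= P.sum(axis=1, keepdims=True)
x_hat = markov_sgd(A, y, P, rng=rng)
```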
no code implementations • 20 Mar 2020 • Rakshith S Srinivasa, Mark A. Davenport, Justin Romberg
We consider sketched approximate matrix multiplication and ridge regression in the novel setting of localized sketching, where at any given point, only part of the data matrix is available.
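A sketch of the localized setting for ridge regression: a block-diagonal sketch compresses each block of rows independently, so no single sketching step needs the full data matrix. This is an illustration of the setting, not necessarily the paper's exact estimator or sketch design.

```python
import numpy as np

def localized_sketch_ridge(A, y, lam, block=200, s=50, rng=None):
    """Ridge regression with a block-diagonal ("localized") sketch: each
    block of rows of (A, y) is compressed by its own small Gaussian sketch."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = A.shape
    SA_blocks, Sy_blocks = [], []
    for start in range(0, n, block):
        Ab, yb = A[start:start + block], y[start:start + block]
        S = rng.standard_normal((s, Ab.shape[0])) / np.sqrt(s)
        SA_blocks.append(S @ Ab)
        Sy_blocks.append(S @ yb)
    SA, Sy = np.vstack(SA_blocks), np.concatenate(Sy_blocks)
    return np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sy)

# Toy usage: compare against the unsketched ridge solution.
rng = np.random.default_rng(4)
A = rng.standard_normal((2000, 30))
x_true = rng.standard_normal(30)
y = A @ x_true + 0.1 * rng.standard_normal(2000)
x_sketch = localized_sketch_ridge(A, y, lam=1.0, rng=rng)
x_full = np.linalg.solve(A.T @ A + 1.0 * np.eye(30), A.T @ y)
```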
no code implementations • NeurIPS 2019 • Rakshith Sharma Srinivasa, Kiryung Lee, Marius Junge, Justin Romberg
We address a low-rank matrix recovery problem where each column of a rank-$r$ matrix $X$ of size $(d_1, d_2)$ is compressed beyond the point of recovery to size $L$ with $L \ll d_1$.
no code implementations • 9 Nov 2019 • Foroozan Karimzadeh, Ningyuan Cao, Brian Crafton, Justin Romberg, Arijit Raychowdhury
Deep neural networks (DNNs) have emerged as the state-of-the-art algorithms in a broad range of applications.
no code implementations • 26 Aug 2019 • Sohail Bahmani, Justin Romberg
We propose a formulation for nonlinear recurrent models that includes simple parametric models of recurrent neural networks as a special case.
no code implementations • 25 Jul 2019 • Thinh T. Doan, Siva Theja Maguluri, Justin Romberg
Our main contribution is to provide a finite-time analysis of the performance of this distributed ${\sf TD}(\lambda)$ algorithm for both constant and time-varying step sizes.
no code implementations • 20 Feb 2019 • Thinh T. Doan, Siva Theja Maguluri, Justin Romberg
In this problem, a group of agents works cooperatively to evaluate the value function for the global discounted cumulative reward, which is composed of the local rewards observed by the agents.
Optimization and Control
no code implementations • 19 Feb 2019 • Shaojie Xu, Anvesha Amaravati, Justin Romberg, Arijit Raychowdhury
We propose a novel appearance-based gesture recognition algorithm using compressed domain signal processing techniques.
1 code implementation • 19 Feb 2019 • Shaojie Xu, Sihan Zeng, Justin Romberg
Deep learning models have significantly improved the visual quality and accuracy of compressive sensing recovery.
1 code implementation • 17 Jun 2018 • Alireza Aghasi, Afshin Abdi, Justin Romberg
We develop a fast, tractable technique called Net-Trim for simplifying a trained neural network.
no code implementations • 17 Feb 2017 • Sohail Bahmani, Justin Romberg
We consider the question of estimating a solution to a system of equations that involve convex nonlinearities, a problem that is common in machine learning and signal processing.
1 code implementation • NeurIPS 2017 • Alireza Aghasi, Afshin Abdi, Nam Nguyen, Justin Romberg
This program seeks a sparse set of weights at each layer that keeps the layer inputs and outputs consistent with the originally trained model.
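To convey the flavor of the layer-wise program, here is an $\ell_1$-regularized refit that keeps a layer's responses to the training inputs close to those of the originally trained layer, solved with a simple ISTA loop. For simplicity the responses are matched before the ReLU; Net-Trim's actual convex program constrains the post-activation outputs.

```python
import numpy as np

def soft_threshold(Z, t):
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def sparse_layer_refit(X, Y, lam=0.1, iters=500):
    """ISTA for min_W 0.5 ||X W - Y||_F^2 + lam ||W||_1: find a sparse weight
    matrix whose responses to the layer inputs X stay close to the original
    layer's responses Y."""
    W = np.zeros((X.shape[1], Y.shape[1]))
    step = 1.0 / np.linalg.norm(X, 2) ** 2         # 1 / Lipschitz constant of the gradient
    for _ in range(iters):
        grad = X.T @ (X @ W - Y)
        W = soft_threshold(W - step * grad, step * lam)
    return W

# Toy usage: re-trim one dense layer of a "trained" network.
rng = np.random.default_rng(5)
X = rng.standard_normal((1000, 64))                # layer inputs collected from data
W_orig = rng.standard_normal((64, 32))             # originally trained weights
W_sparse = sparse_layer_refit(X, X @ W_orig, lam=5.0)
```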
no code implementations • 13 Oct 2016 • Sohail Bahmani, Justin Romberg
We propose a flexible convex relaxation for the phase retrieval problem that operates in the natural domain of the signal.
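One well-known instance of this idea (the PhaseMax program): maximize correlation with an anchor vector subject to the phaseless measurements bounding $|a_i^\top x|$. A cvxpy sketch for real-valued signals; in practice the anchor would come from, e.g., a spectral initialization.

```python
import numpy as np
import cvxpy as cp

def phasemax(A, b, anchor):
    """Convex relaxation in the signal's natural domain (PhaseMax-style):
    maximize <anchor, x> subject to |a_i^T x| <= b_i for all measurements."""
    x = cp.Variable(A.shape[1])
    problem = cp.Problem(cp.Maximize(anchor @ x), [cp.abs(A @ x) <= b])
    problem.solve()
    return x.value

# Toy usage (real-valued): the anchor is a crude estimate correlated with x_true.
rng = np.random.default_rng(6)
n, m = 30, 240
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = np.abs(A @ x_true)                              # phaseless measurements
anchor = x_true + rng.standard_normal(n)            # rough, noisy anchor
x_hat = phasemax(A, b, anchor / np.linalg.norm(anchor))
```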
no code implementations • 26 May 2016 • Anvesha A, Shaojie Xu, Ningyuan Cao, Justin Romberg, Arijit Raychowdhury
In this paper we propose an energy-efficient camera-based gesture recognition system powered by light energy for "always on" applications.
no code implementations • 29 Mar 2016 • Alireza Aghasi, Barmak Heshmat, Albert Redo-Sanchez, Justin Romberg, Ramesh Raskar
Heavy sweep distortion induced by alignments and inter-reflections of the layers of a sample is a major burden in recovering 2D and 3D information in time-resolved spectral imaging.
no code implementations • 23 Feb 2016 • Alireza Aghasi, Justin Romberg
We present a mathematical and algorithmic scheme for learning the principal geometric elements in an image or 3D object.
no code implementations • 14 Jun 2013 • M. Salman Asif, Justin Romberg
In this paper, we discuss two such streaming systems and a homotopy-based algorithm for quickly solving the associated L1-norm minimization programs: 1) Recovery of a smooth, time-varying signal for which, instead of using block transforms, we use lapped orthogonal transforms for sparse representation.
1 code implementation • 21 Nov 2012 • Ali Ahmed, Benjamin Recht, Justin Romberg
That is, we show that if $\boldsymbol{x}$ is drawn from a random subspace of dimension $N$, and $\boldsymbol{w}$ is a vector in a subspace of dimension $K$ whose basis vectors are "spread out" in the frequency domain, then nuclear norm minimization recovers $\boldsymbol{w}\boldsymbol{x}^*$ without error.
Information Theory
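A cvxpy sketch of the lifted program behind this result: recover the rank-one matrix $\boldsymbol{w}\boldsymbol{x}^*$ from linear measurements by nuclear norm minimization. The measurement operator below is a generic random one (the paper's operator comes from circular convolution and the subspace models above), so this illustrates the recovery principle rather than the paper's setup.

```python
import numpy as np
import cvxpy as cp

def nuclear_norm_recovery(measurements, y, shape):
    """Recover a low-rank matrix from linear measurements y_l = <A_l, X>
    by nuclear norm minimization."""
    X = cp.Variable(shape)
    constraints = [cp.sum(cp.multiply(A_l, X)) == y_l
                   for A_l, y_l in zip(measurements, y)]
    cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()
    return X.value

# Toy usage: lift the rank-one product w x^T and recover it from 5*(K+N) measurements.
rng = np.random.default_rng(7)
K, N = 8, 10
W_true = np.outer(rng.standard_normal(K), rng.standard_normal(N))
measurements = [rng.standard_normal((K, N)) for _ in range(5 * (K + N))]
y = np.array([np.sum(A_l * W_true) for A_l in measurements])
W_hat = nuclear_norm_recovery(measurements, y, (K, N))
```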
2 code implementations • 9 Mar 2009 • Muhammad Salman Asif, Justin Romberg
We consider cases where the underlying signal changes slightly between measurements, and where new measurements of a fixed signal are sequentially added to the system.
Information Theory
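The paper's recursive updates are exact homotopy (path-following) steps; the sketch below only illustrates the sequential-measurement setting with a warm-started ISTA re-solve each time a new measurement row arrives, which is simpler but is not the homotopy algorithm itself.

```python
import numpy as np

def ista_lasso(A, y, lam, x0=None, iters=300):
    """ISTA for min_x 0.5 ||A x - y||^2 + lam ||x||_1, warm-started at x0."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

# Sequential usage: as each new measurement of the fixed signal arrives,
# re-solve the L1 program warm-started at the previous solution.
rng = np.random.default_rng(8)
n = 100
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
A = rng.standard_normal((30, n))
y = A @ x_true
x_hat = ista_lasso(A, y, lam=0.1)
for _ in range(10):                                 # ten new measurements, one by one
    a_new = rng.standard_normal(n)
    A, y = np.vstack([A, a_new]), np.append(y, a_new @ x_true)
    x_hat = ista_lasso(A, y, lam=0.1, x0=x_hat, iters=50)
```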