no code implementations • 1 Mar 2024 • Jacob Spainhour, Korben Smart, Stephen Becker, Nick Bottenus
Objective: The transmit encoding model for synthetic aperture imaging is a robust and flexible framework for understanding the effect of acoustic transmission on ultrasound image reconstruction.
no code implementations • 17 Feb 2024 • Nuojin Cheng, Stephen Becker
Bayesian optimization is a widely used technique for optimizing black-box functions, with Expected Improvement (EI) being the most commonly utilized acquisition function in this domain.
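For reference, the standard closed-form EI under a Gaussian posterior can be sketched as follows (this is the textbook formula, not any variant specific to this paper):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Closed-form EI for minimization, E[max(f_best - f(x), 0)],
    under a Gaussian posterior f(x) ~ N(mu, sigma^2)."""
    sigma = np.maximum(sigma, 1e-12)  # guard against a degenerate posterior
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```

The two terms trade off exploitation, (f_best - mu) * cdf(z), against exploration, sigma * pdf(z), which is why EI is the default acquisition function in most Bayesian optimization libraries.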
no code implementations • 22 Jun 2023 • Riccardo Balin, Filippo Simini, Cooper Simpson, Andrew Shao, Alessandro Rigazzi, Matthew Ellis, Stephen Becker, Alireza Doostan, John A. Evans, Kenneth E. Jansen
Recent years have seen many successful applications of machine learning (ML) to facilitate fluid dynamic computations.
1 code implementation • 25 May 2023 • Nuojin Cheng, Osman Asif Malik, Subhayan De, Stephen Becker, Alireza Doostan
An effective algorithm is proposed to maximize the variational lower bound of the HF log-likelihood in the presence of limited HF data, resulting in the synthesis of HF realizations with a reduced computational cost.
no code implementations • 18 Jan 2023 • Alex McManus, Stephen Becker, Nicholas Dwork
Parallel imaging with linear predictability takes advantage of information present in multiple receive coils to accurately reconstruct the image with fewer samples.
2 code implementations • 9 Nov 2022 • Kevin Doherty, Cooper Simpson, Stephen Becker, Alireza Doostan
We present a new convolution layer for deep learning architectures which we call QuadConv -- an approximation to continuous convolution via quadrature.
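The core idea, approximating the continuous convolution (k*f)(x) = ∫ k(x - y) f(y) dy by a weighted sum over quadrature nodes, can be illustrated in one dimension. This is a plain NumPy sketch of the quadrature approximation only, not the learnable layer from the paper:

```python
import numpy as np

def quad_conv(kernel, f, nodes, weights, x_out):
    """Approximate (k*f)(x) = integral of k(x - y) f(y) dy by a quadrature
    rule: sum_i w_i * k(x - y_i) * f(y_i), evaluated at each requested x."""
    return np.array([np.sum(weights * kernel(x - nodes) * f(nodes))
                     for x in x_out])

# Trapezoid rule on [-5, 5]; convolving a unit Gaussian with f = 1 gives ~1.
nodes = np.linspace(-5.0, 5.0, 501)
h = nodes[1] - nodes[0]
weights = np.full(nodes.size, h)
weights[[0, -1]] = h / 2
gauss = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)
out = quad_conv(gauss, np.ones_like, nodes, weights, np.array([0.0]))
```

In the paper this operation sits inside a deep architecture; the sketch only shows why quadrature frees the convolution from a fixed grid of sample points.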
1 code implementation • 9 May 2021 • Osman Asif Malik, Venkatalakshmi Vyjayanthi Narumanchi, Stephen Becker, Todd W. Murray
It is therefore much faster than the iterative method used by Idier et al. We also propose a new representation of the imaged object based on Dirac delta expansion functions.
no code implementations • 12 Feb 2021 • Zhishen Huang, Stephen Becker
Stochastic gradient Langevin dynamics (SGLD) has gained the attention of optimization researchers due to its global optimization properties.
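The SGLD update is a gradient step plus injected Gaussian noise; the noise term is what gives the method its global, sampling-based character. A minimal sketch, assuming exact gradients for clarity:

```python
import numpy as np

def sgld_step(theta, grad, eta, rng):
    """One SGLD update: theta <- theta - eta*grad(theta) + sqrt(2*eta)*xi,
    so for small eta the iterates approximately sample from exp(-f)."""
    xi = rng.normal(size=np.shape(theta))
    return theta - eta * grad(theta) + np.sqrt(2.0 * eta) * xi

# For f(x) = x^2/2 the stationary law is N(0, 1), so long-run iterates
# should have roughly zero mean and unit variance.
rng = np.random.default_rng(0)
theta, samples = 3.0, []
for t in range(30000):
    theta = sgld_step(theta, lambda x: x, 0.05, rng)
    if t > 5000:
        samples.append(float(theta))
```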
no code implementations • 7 Jan 2021 • Mitchell Krock, William Kleiber, Dorit Hammerling, Stephen Becker
The basis graphical lasso writes a univariate Gaussian process as a linear combination of basis functions weighted with entries of a Gaussian graphical vector whose graph is estimated from optimizing an $\ell_1$ penalized likelihood.
1 code implementation • 16 Oct 2020 • Osman Asif Malik, Stephen Becker
We provide high-probability relative-error guarantees for the sampled least squares problems.
Numerical Analysis
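One standard route to such relative-error guarantees is leverage-score sampling: draw rows with probability proportional to their leverage scores and reweight before solving the smaller problem. This is a generic sketch of that idea, not necessarily the paper's exact scheme:

```python
import numpy as np

def sampled_least_squares(A, b, m, rng):
    """Approximate argmin ||Ax - b|| by solving a reweighted m-row
    subproblem, with rows drawn proportionally to their leverage scores."""
    Q, _ = np.linalg.qr(A)                    # leverage scores via a thin QR
    p = np.sum(Q**2, axis=1)
    p /= p.sum()
    idx = rng.choice(A.shape[0], size=m, p=p)
    w = 1.0 / np.sqrt(m * p[idx])             # importance-sampling weights
    x, *_ = np.linalg.lstsq(w[:, None] * A[idx], w * b[idx], rcond=None)
    return x

# Sampling 400 of 2000 rows recovers nearly the full least-squares solution.
rng = np.random.default_rng(0)
A = rng.normal(size=(2000, 10))
b = A @ rng.normal(size=10) + 0.01 * rng.normal(size=2000)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
x_hat = sampled_least_squares(A, b, 400, rng)
rel_err = float(np.linalg.norm(x_hat - x_exact) / np.linalg.norm(x_exact))
```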
no code implementations • 21 Jul 2020 • Zhishen Huang, Stephen Becker
Sketching is a stochastic dimension reduction method that preserves geometric structures of data and has applications in high-dimensional regression, low rank approximation and graph sparsification.
no code implementations • 10 Feb 2020 • Will Shand, Stephen Becker
We discuss the problem of performing similarity search over function spaces.
1 code implementation • 19 Nov 2019 • Osman Asif Malik, Stephen Becker
In the recent paper [Jin, Kolda & Ward, arXiv:1909.04801], it is proved that the Kronecker fast Johnson-Lindenstrauss transform (KFJLT) is, in fact, a Johnson-Lindenstrauss transform, which had previously only been conjectured.
Numerical Analysis
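The practical appeal of such embeddings is that a Kronecker-structured vector x1 ⊗ x2 can be sketched without ever forming it. The sketch below is a Gaussian row-wise (Khatri-Rao) analogue of this idea, standing in for the KFJLT's subsampled-Fourier factors:

```python
import numpy as np

def kron_sketch(x1, x2, k, rng):
    """Sketch x1 (x) x2 down to k dimensions using independent Gaussian maps
    per factor: row i of the implicit map is the Kronecker product of row i
    of S1 with row i of S2, so the full Kronecker vector is never formed."""
    S1 = rng.normal(size=(k, x1.size))
    S2 = rng.normal(size=(k, x2.size))
    return (S1 @ x1) * (S2 @ x2) / np.sqrt(k)

# The sketch preserves the Euclidean norm of the (600-dim) Kronecker vector.
rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=20), rng.normal(size=30)
y = kron_sketch(x1, x2, 5000, rng)
ratio = float(np.linalg.norm(y) / np.linalg.norm(np.kron(x1, x2)))
```

The embedding dimension needed for a given distortion is worse for this structured map than for a dense Gaussian map, which is exactly the "sharp analysis of the embedding dimension" question the paper addresses for the KFJLT.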
no code implementations • 17 Oct 2019 • Emiliano Dall'Anese, Andrea Simonetto, Stephen Becker, Liam Madden
Approaches for the design of time-varying or online first-order optimization methods are discussed, with emphasis on algorithms that can handle errors in the gradient, as may arise when the gradient is estimated.
no code implementations • 25 Jul 2019 • James Folberth, Stephen Becker
Under reasonable conditions, our feature elimination strategy will eventually eliminate all zero features from the problem.
1 code implementation • 17 May 2019 • Osman Asif Malik, Stephen Becker
We present a method for randomizing formulas for bilinear computation of matrix products.
Data Structures and Algorithms Numerical Analysis
1 code implementation • 10 Mar 2019 • Eric Kightley, Stephen Becker
We present a one-pass sparsified Gaussian mixture model (SGMM).
1 code implementation • 29 Jan 2019 • Osman Asif Malik, Stephen Becker
We propose a new fast randomized algorithm for interpolative decomposition of matrices which utilizes CountSketch.
Numerical Analysis 15-02
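CountSketch itself is simple enough to state in a few lines: each row is hashed to one of k buckets and accumulated with a random sign, so applying the sketch costs a single pass over the data. A generic implementation, independent of the paper's interpolative-decomposition machinery:

```python
import numpy as np

def countsketch(A, k, rng):
    """Apply a k x n CountSketch map to the rows of A: row i is hashed to a
    random bucket and added with a random sign. Cost is one pass over A."""
    n = A.shape[0]
    buckets = rng.integers(0, k, size=n)
    signs = rng.choice([-1.0, 1.0], size=n)
    SA = np.zeros((k, A.shape[1]))
    np.add.at(SA, buckets, signs[:, None] * A)   # unbuffered scatter-add
    return SA

# The sketch preserves Euclidean norms in expectation.
rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 1))
SA = countsketch(A, 512, rng)
ratio = float(np.linalg.norm(SA) / np.linalg.norm(A))
```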
no code implementations • 24 Jan 2019 • Zhishen Huang, Stephen Becker
We consider the problem of finding local minimizers in non-convex and non-smooth optimization.
1 code implementation • NeurIPS 2018 • Osman Asif Malik, Stephen Becker
We propose two randomized algorithms for low-rank Tucker decomposition of tensors.
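A bare-bones randomized HOSVD in the same spirit estimates each factor matrix from a Gaussian range sketch of the corresponding unfolding (a generic sketch for illustration; the paper's two algorithms are more refined):

```python
import numpy as np

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1),
                       0, mode)

def randomized_tucker(X, ranks, rng):
    """Randomized HOSVD: each factor is an orthonormal basis for a Gaussian
    range sketch of the mode-k unfolding; the core is X contracted with the
    transposed factors."""
    factors = []
    for mode, r in enumerate(ranks):
        Xm = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        Q, _ = np.linalg.qr(Xm @ rng.normal(size=(Xm.shape[1], r)))
        factors.append(Q)
    G = X
    for mode, U in enumerate(factors):
        G = mode_product(G, U.T, mode)
    return G, factors

# A tensor with exact multilinear rank (2, 2, 2) is recovered exactly.
rng = np.random.default_rng(0)
X = rng.normal(size=(2, 2, 2))
for mode in range(3):
    X = mode_product(X, rng.normal(size=(10, 2)), mode)
G, factors = randomized_tucker(X, (2, 2, 2), rng)
Xhat = G
for mode, U in enumerate(factors):
    Xhat = mode_product(Xhat, U, mode)
rel_err = float(np.linalg.norm(Xhat - X) / np.linalg.norm(X))
```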
1 code implementation • 17 Apr 2018 • Farhad Pourkamali-Anaraki, James Folberth, Stephen Becker
The $\ell_0$ model is non-convex but needs only memory linear in $n$; it is solved via orthogonal matchinging pursuit, though it cannot handle the case of affine subspaces.
no code implementations • 8 Aug 2017 • Farhad Pourkamali-Anaraki, Stephen Becker
The Nyström method is a popular technique that uses a small number of landmark points to compute a fixed-rank approximation of large kernel matrices that arise in machine learning problems.
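The construction itself fits in a few lines: pick landmark indices I, form C = K[:, I] and W = K[I, I], and use K ≈ C W⁺ Cᵀ. This is the textbook version; landmark selection is where methods in this line of work differ:

```python
import numpy as np

def nystrom(K, landmarks):
    """Fixed-rank Nystrom approximation K ~= C @ pinv(W) @ C.T built from
    the columns of K indexed by the landmark points."""
    C = K[:, landmarks]
    W = K[np.ix_(landmarks, landmarks)]
    return C @ np.linalg.pinv(W) @ C.T

# If rank(K) <= number of landmarks (in generic position), the
# approximation is exact.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))
K = X @ X.T                                   # rank-3 PSD kernel matrix
landmarks = rng.choice(60, size=10, replace=False)
rel_err = float(np.linalg.norm(nystrom(K, landmarks) - K)
                / np.linalg.norm(K))
```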
no code implementations • 20 Dec 2016 • Farhad Pourkamali-Anaraki, Stephen Becker
Moreover, we introduce a randomized algorithm for generating landmark points that is scalable to large-scale data sets.
no code implementations • 26 Aug 2016 • Farhad Pourkamali-Anaraki, Stephen Becker
Kernel-based K-means clustering has gained popularity due to its simplicity and the power of its implicit non-linear representation of the data.
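The key computation is that distances to cluster means in the implicit feature space require only kernel evaluations: ||φ(x_i) − m_c||² = K_ii − 2·mean_{j∈c} K_ij + mean_{j,l∈c} K_jl. A small sketch of that step alone (the full Lloyd-style iteration is omitted):

```python
import numpy as np

def kernel_kmeans_dist(K, labels, k):
    """Squared feature-space distance from every point to every cluster
    mean, computed from the kernel matrix alone (the kernel trick)."""
    D = np.zeros((K.shape[0], k))
    for c in range(k):
        idx = np.where(labels == c)[0]
        D[:, c] = (np.diag(K) - 2.0 * K[:, idx].mean(axis=1)
                   + K[np.ix_(idx, idx)].mean())
    return D

# With a linear kernel this must agree with ordinary Euclidean distances.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
labels = np.array([0, 1] * 10)
D = kernel_kmeans_dist(X @ X.T, labels, 2)
direct = np.stack([((X - X[labels == c].mean(axis=0)) ** 2).sum(axis=1)
                   for c in range(2)], axis=1)
max_diff = float(np.abs(D - direct).max())
```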
no code implementations • 1 Mar 2016 • Aleksandr Y. Aravkin, Stephen Becker
We focus on the robust principal component analysis (RPCA) problem, and review a range of old and new convex formulations for the problem and its variants.
2 code implementations • 31 Oct 2015 • Farhad Pourkamali-Anaraki, Stephen Becker
We analyze a compression scheme for large data sets that randomly keeps a small percentage of the components of each data sample.
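In spirit, the scheme looks like the following Bernoulli-mask sketch (shown for illustration; the paper keeps a fixed percentage of components per sample, and the 1/γ rescaling is what keeps inner products unbiased):

```python
import numpy as np

def sparsify(X, gamma, rng):
    """Keep each entry independently with probability gamma and rescale by
    1/gamma, so entries (and hence inner products) remain unbiased."""
    mask = rng.random(X.shape) < gamma
    return np.where(mask, X / gamma, 0.0)

rng = np.random.default_rng(0)
X = np.ones((500, 200))
Xs = sparsify(X, 0.2, rng)
mean_val = float(Xs.mean())           # unbiased: stays close to 1.0
zero_frac = float((Xs == 0).mean())   # about 80% of entries dropped
```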
no code implementations • 16 Oct 2015 • Stephen Becker, Ban Kawas, Marek Petrik, Karthikeyan N. Ramamurthy
While maintaining computational efficiency, our models provide robust solutions that are more accurate, relative to solutions of uncompressed least-squares, than those of classical compressed variants.
no code implementations • 5 Apr 2015 • Farhad Pourkamali-Anaraki, Stephen Becker, Shannon M. Hughes
Performing signal processing tasks on compressive measurements of data has received great attention in recent years.
no code implementations • NeurIPS 2014 • Cho-Jui Hsieh, Inderjit S. Dhillon, Pradeep K. Ravikumar, Stephen Becker, Peder A. Olsen
In this paper, we develop a family of algorithms for optimizing “superposition-structured” or “dirty” statistical estimators for high-dimensional problems involving the minimization of the sum of a smooth loss function with a hybrid regularization.
no code implementations • NeurIPS 2014 • John J. Bruer, Joel A. Tropp, Volkan Cevher, Stephen Becker
This paper proposes a tradeoff between sample complexity and computation time that applies to statistical estimators based on convex optimization.
no code implementations • 4 Nov 2014 • Volkan Cevher, Stephen Becker, Mark Schmidt
This article reviews recent advances in convex optimization algorithms for Big Data, which aim to reduce the computational, storage, and communications bottlenecks.
1 code implementation • 4 Jun 2014 • Aleksandr Aravkin, Stephen Becker, Volkan Cevher, Peder Olsen
We introduce a new convex formulation for stable principal component pursuit (SPCP) to decompose noisy signals into low-rank and sparse representations.
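The decomposition can be illustrated with a naive alternating-prox sketch of a Lagrangian relative of SPCP, minimizing ½||L+S−Y||²_F + τ||L||_* + λ||S||_1 over L and S (the paper studies better-conditioned convex variants and solvers):

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: the prox of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca(Y, lam, tau, iters=50):
    """Alternating exact minimization over L (via SVT) and S (via soft
    thresholding) for 0.5*||L+S-Y||_F^2 + tau*||L||_* + lam*||S||_1."""
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    for _ in range(iters):
        L = svt(Y - S, tau)                                   # prox in L
        S = np.sign(Y - L) * np.maximum(np.abs(Y - L) - lam, 0.0)  # prox in S
    return L, S

# Alternating minimization decreases the objective monotonically from L=S=0.
rng = np.random.default_rng(0)
Y = rng.normal(size=(30, 30))
L, S = rpca(Y, lam=0.5, tau=2.0)
obj_start = float(0.5 * np.linalg.norm(Y) ** 2)
obj_end = float(0.5 * np.linalg.norm(L + S - Y) ** 2
                + 2.0 * np.linalg.svd(L, compute_uv=False).sum()
                + 0.5 * np.abs(S).sum())
```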
no code implementations • 7 Jun 2012 • Anastasios Kyrillidis, Stephen Becker, Volkan Cevher, Christoph Koch
Most learning methods with rank or sparsity constraints use convex relaxations, which lead to optimization with the nuclear norm or the $\ell_1$-norm.
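Those relaxations are typically solved with proximal gradient methods; for the $\ell_1$ case the whole solver is a gradient step followed by soft thresholding. A generic ISTA sketch, included as context for the relaxations the paper moves beyond:

```python
import numpy as np

def ista(A, b, lam, iters=2000):
    """Proximal gradient (ISTA) for min 0.5*||Ax - b||^2 + lam*||x||_1:
    a gradient step followed by the soft-thresholding prox of the l1 norm."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - b) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

# A 3-sparse signal from 50 random measurements is recovered up to the small
# shrinkage bias introduced by the l1 penalty.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 100)) / np.sqrt(50)
x0 = np.zeros(100)
x0[[5, 37, 80]] = [1.0, -1.0, 1.5]
x = ista(A, A @ x0, lam=0.01)
err = float(np.linalg.norm(x - x0))
```

The shrinkage bias visible here is one motivation for the non-convex, hard-constrained alternatives the paper studies.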