Search Results for author: Aleksandr Y. Aravkin

Found 33 papers, 3 papers with code

A unified sparse optimization framework to learn parsimonious physics-informed models from data

4 code implementations • 25 Jun 2019 • Kathleen Champion, Peng Zheng, Aleksandr Y. Aravkin, Steven L. Brunton, J. Nathan Kutz

This flexible approach can be tailored to the unique challenges associated with a wide range of applications and data sets, providing a powerful ML-based framework for learning governing models for physical systems from data.
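
The abstract above does not spell out the machinery, but the core idea in this line of work is sparse regression over a library of candidate terms. The sketch below is a minimal illustration of that general idea, not the authors' SR3-based implementation: it applies sequentially thresholded least squares to data simulated from a damped linear oscillator, and the library, threshold, and Euler simulation are assumptions chosen for the demo.

```python
import numpy as np

# Simulate a damped linear oscillator x' = A x with forward Euler steps.
A_true = np.array([[0.0, 1.0], [-2.0, -0.1]])
dt, n = 0.01, 5000
X = np.zeros((n, 2)); X[0] = [1.0, 0.0]
for k in range(n - 1):
    X[k + 1] = X[k] + dt * (A_true @ X[k])
dXdt = np.gradient(X, dt, axis=0)            # numerical time derivatives

# Library of candidate terms: [1, x1, x2, x1^2, x1*x2, x2^2].
x1, x2 = X[:, 0], X[:, 1]
Theta = np.column_stack([np.ones(n), x1, x2, x1**2, x1 * x2, x2**2])

def stlsq(Theta, dXdt, threshold=0.05, iters=10):
    """Sequentially thresholded least squares: zero small coefficients, refit."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(iters):
        Xi[np.abs(Xi) < threshold] = 0.0
        for j in range(dXdt.shape[1]):       # refit each equation on its active terms
            active = np.abs(Xi[:, j]) >= threshold
            if active.any():
                Xi[active, j] = np.linalg.lstsq(Theta[:, active], dXdt[:, j],
                                                rcond=None)[0]
    return Xi

print(stlsq(Theta, dXdt))   # nonzero rows should match the linear terms of A_true
```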

Generalized system identification with stable spline kernels

1 code implementation • 30 Sep 2013 • Aleksandr Y. Aravkin, James V. Burke, Gianluigi Pillonetto

This paper extends linear system identification to a wide class of nonsmooth stable spline estimators, where regularization functionals and data misfits can be selected from a rich set of piecewise linear-quadratic (PLQ) penalties.
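
As a rough companion to the abstract, the sketch below estimates an FIR impulse response with the first-order stable spline (TC) kernel and an ordinary quadratic misfit. The kernel decay rate, regularization weight, and synthetic data are illustrative assumptions; the nonsmooth PLQ misfits and penalties that are the paper's actual contribution are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data: output of a stable FIR system driven by white noise, plus measurement noise.
m, N = 50, 400
g_true = 0.8 ** np.arange(m)                       # decaying impulse response
u = rng.standard_normal(N)
Phi = np.array([[u[t - k] if t - k >= 0 else 0.0 for k in range(m)]
                for t in range(N)])                # regression matrix of past inputs
y = Phi @ g_true + 0.1 * rng.standard_normal(N)

# First-order stable spline (TC) kernel: K[i, j] = alpha ** max(i, j), 0 < alpha < 1.
alpha, gamma = 0.9, 1.0
idx = np.arange(m)
K = alpha ** np.maximum.outer(idx, idx)

# Quadratic-misfit estimate:  argmin ||y - Phi g||^2 + gamma * g' K^{-1} g.
g_hat = K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + gamma * np.eye(N), y)
print(np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true))   # relative error
```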

Time Series Using Exponential Smoothing Cells

1 code implementation • 9 Jun 2017 • Avner Abrami, Aleksandr Y. Aravkin, Younghun Kim

We propose a flexible model for time series analysis, using exponential smoothing cells for overlapping time windows.

Time Series • Time Series Analysis
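
Below is a toy version of the basic ingredient named in the abstract: simple exponential smoothing applied to overlapping windows of the series, with the overlaps averaged per time point. The window length, stride, smoothing constant, and averaging rule are assumptions for illustration, not the cell-based model proposed in the paper.

```python
import numpy as np

def exp_smooth_path(y, alpha=0.3):
    """Simple exponential smoothing; returns the smoothed path of a window."""
    out = np.empty(len(y))
    out[0] = y[0]
    for k in range(1, len(y)):
        out[k] = alpha * y[k] + (1 - alpha) * out[k - 1]
    return out

def overlapping_smooth(y, window=24, stride=6, alpha=0.3):
    """Smooth each overlapping window, then average the overlaps per time point."""
    total, count = np.zeros(len(y)), np.zeros(len(y))
    for start in range(0, len(y) - window + 1, stride):
        sl = slice(start, start + window)
        total[sl] += exp_smooth_path(y[sl], alpha)
        count[sl] += 1
    count[count == 0] = np.nan        # trailing points not covered by any window
    return total / count

rng = np.random.default_rng(1)
t = np.arange(200)
y = 10 + 0.02 * t + np.sin(2 * np.pi * t / 24) + rng.normal(scale=0.5, size=t.size)
print(overlapping_smooth(y)[:5])
```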

Sparse Principal Component Analysis via Variable Projection

no code implementations • 1 Apr 2018 • N. Benjamin Erichson, Peng Zheng, Krithika Manohar, Steven L. Brunton, J. Nathan Kutz, Aleksandr Y. Aravkin

Sparse principal component analysis (SPCA) has emerged as a powerful technique for modern data analysis, providing improved interpretation of low-rank structures by identifying localized spatial structures in the data and disambiguating between distinct time scales.

Computational Efficiency
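
For orientation only, here is a toy alternating scheme for a sparse-PCA-style objective: orthonormal directions updated by an orthogonal Procrustes step and sparse weights by a proximal gradient step with soft thresholding. The objective scaling, step size, and $\ell_1$ weight are assumptions, and this is not the variable projection algorithm developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))
X[:, :3] += 3.0 * rng.standard_normal((200, 1))    # shared structure in 3 columns

def toy_sparse_pca(X, k=2, lam=1.0, iters=300):
    """Alternate a Procrustes step for A and a soft-threshold step for B on
    (1/n)||X - X B A'||_F^2 + lam * ||B||_1  with  A'A = I."""
    n, p = X.shape
    S = X.T @ X / n                                 # sample covariance
    step = 1.0 / (2.0 * np.linalg.norm(S, 2))       # safe gradient step size
    B = np.linalg.svd(X, full_matrices=False)[2][:k].T   # warm start from PCA
    A = B.copy()
    for _ in range(iters):
        U, _, Vt = np.linalg.svd(S @ B, full_matrices=False)
        A = U @ Vt                                  # orthogonal Procrustes update
        grad = 2.0 * S @ (B - A)                    # gradient of the smooth term
        B = B - step * grad
        B = np.sign(B) * np.maximum(np.abs(B) - step * lam, 0.0)   # l1 prox
    return A, B

A, B = toy_sparse_pca(X)
print(np.round(B[:, 0], 2))   # first loading; entries outside the block shrink toward 0
```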

Mean Reverting Portfolios via Penalized OU-Likelihood Estimation

no code implementations • 17 Mar 2018 • Jize Zhang, Tim Leung, Aleksandr Y. Aravkin

We study an optimization-based approach to construct a mean-reverting portfolio of assets.
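
To make the OU-likelihood ingredient concrete, the snippet below writes out the (unpenalized) Ornstein-Uhlenbeck negative log-likelihood for a single simulated series and minimizes it numerically. The log-parameterization, simulation settings, and use of scipy.optimize.minimize are assumptions for the demo; the paper's penalized, portfolio-level formulation is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulate an OU process dX = theta*(mu - X) dt + sigma dW by exact discretization.
theta, mu, sigma, dt, n = 3.0, 1.0, 0.5, 1.0 / 252, 2000
x = np.empty(n); x[0] = mu
for k in range(n - 1):
    m = mu + (x[k] - mu) * np.exp(-theta * dt)
    v = sigma**2 * (1 - np.exp(-2 * theta * dt)) / (2 * theta)
    x[k + 1] = m + np.sqrt(v) * rng.standard_normal()

def ou_nll(params, x, dt):
    """Negative log-likelihood of the OU transition density (log-parameterized)."""
    log_theta, mu, log_sigma = params
    theta, sigma = np.exp(log_theta), np.exp(log_sigma)
    m = mu + (x[:-1] - mu) * np.exp(-theta * dt)
    v = sigma**2 * (1 - np.exp(-2 * theta * dt)) / (2 * theta)
    return -np.sum(norm.logpdf(x[1:], loc=m, scale=np.sqrt(v)))

res = minimize(ou_nll, x0=[0.0, 0.0, np.log(0.3)], args=(x, dt), method="Nelder-Mead")
print("theta, mu, sigma:", np.exp(res.x[0]), res.x[1], np.exp(res.x[2]))
```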

Fast Robust Methods for Singular State-Space Models

no code implementations • 7 Mar 2018 • Jonathan Jonker, Aleksandr Y. Aravkin, James V. Burke, Gianluigi Pillonetto, Sarah Webster

We therefore suggest that the proposed approach be the default choice for estimating state space models outside of the Gaussian context, regardless of whether the error covariances are singular or not.

Time Series • Time Series Analysis

Learning Robust Representations for Computer Vision

no code implementations • 31 Jul 2017 • Peng Zheng, Aleksandr Y. Aravkin, Karthikeyan Natesan Ramamurthy, Jayaraman J. Thiagarajan

Unsupervised learning techniques in computer vision often require learning latent representations, such as low-dimensional linear and non-linear subspaces.

Clustering • Representation Learning

Estimating Shape Parameters of Piecewise Linear-Quadratic Problems

no code implementations • 6 Jun 2017 • Peng Zheng, Aleksandr Y. Aravkin, Karthikeyan Natesan Ramamurthy

The normalization constant inherent in this requirement helps to inform the optimization over shape parameters, giving a joint optimization problem over these as well as primary parameters of interest.

Boosting as a kernel-based method

no code implementations • 8 Aug 2016 • Aleksandr Y. Aravkin, Giulio Bottegal, Gianluigi Pillonetto

We show that boosting with this learner is equivalent to estimation with a special boosting kernel that depends on $K$, as well as on the regression matrix, noise variance, and hyperparameters.

General Classification • regression
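
The flavor of this equivalence can be checked numerically in a simple special case: L2-boosting with a kernel ridge base learner (smoother matrix S = K (K + gamma*I)^{-1}) produces, after k rounds, exactly the linear smoother (I - (I - S)^k). The Gaussian kernel, gamma, and k below are assumptions for the demo, not the paper's general setting.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 80
t = np.sort(rng.uniform(0, 1, n))
y = np.sin(6 * t) + 0.2 * rng.standard_normal(n)

# Kernel ridge smoother S = K (K + gamma I)^{-1} with a Gaussian kernel.
K = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.05**2))
gamma = 1.0
S = K @ np.linalg.inv(K + gamma * np.eye(n))

# k rounds of L2-boosting: repeatedly fit the base learner to the residuals.
k = 5
f = np.zeros(n)
for _ in range(k):
    f = f + S @ (y - f)

# The same fit as a single linear smoother built from a modified ("boosting") kernel.
f_closed = (np.eye(n) - np.linalg.matrix_power(np.eye(n) - S, k)) @ y
print(np.max(np.abs(f - f_closed)))   # agrees to round-off
```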

Beating level-set methods for 3D seismic data interpolation: a primal-dual alternating approach

no code implementations • 9 Jul 2016 • Rajiv Kumar, Oscar López, Damek Davis, Aleksandr Y. Aravkin, Felix J. Herrmann

Acquisition cost is a crucial bottleneck for seismic workflows, and low-rank formulations for data interpolation allow practitioners to 'fill in' data volumes from critically subsampled data acquired in the field.

Dynamic matrix factorization with social influence

no code implementations • 21 Apr 2016 • Aleksandr Y. Aravkin, Kush R. Varshney, Liu Yang

Matrix factorization is a key component of collaborative filtering-based recommendation systems because it allows us to complete sparse user-by-item ratings matrices under a low-rank assumption that encodes the belief that similar users give similar ratings and that similar items garner similar ratings.

Collaborative Filtering • Recommendation Systems
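
As background for the quoted sentence, the sketch below completes a small synthetic ratings matrix by alternating least squares on a rank-r factorization, using only the observed entries. The rank, regularization weight, and synthetic data are assumptions, and the social-influence dynamics that are the paper's contribution are not modeled.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, r = 60, 40, 3

# Synthetic rank-r ratings with 80% of the entries missing.
U_true = rng.standard_normal((n_users, r))
V_true = rng.standard_normal((n_items, r))
R = U_true @ V_true.T
mask = rng.random((n_users, n_items)) < 0.2        # observed entries

def als_complete(R, mask, r=3, lam=0.1, iters=30):
    """Alternating least squares on the observed entries of a rank-r model."""
    n, m = R.shape
    U = rng.standard_normal((n, r))
    V = rng.standard_normal((m, r))
    for _ in range(iters):
        for i in range(n):                          # update each user factor
            Vo = V[mask[i]]
            U[i] = np.linalg.solve(Vo.T @ Vo + lam * np.eye(r), Vo.T @ R[i, mask[i]])
        for j in range(m):                          # update each item factor
            Uo = U[mask[:, j]]
            V[j] = np.linalg.solve(Uo.T @ Uo + lam * np.eye(r), Uo.T @ R[mask[:, j], j])
    return U, V

U, V = als_complete(R, mask, r=r)
err = np.linalg.norm((U @ V.T - R)[~mask]) / np.linalg.norm(R[~mask])
print(f"relative error on unobserved entries: {err:.3f}")
```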

Dual Smoothing and Level Set Techniques for Variational Matrix Decomposition

no code implementations • 1 Mar 2016 • Aleksandr Y. Aravkin, Stephen Becker

We focus on the robust principal component analysis (RPCA) problem, and review a range of old and new convex formulations for the problem and its variants.

Robust EM kernel-based methods for linear system identification

no code implementations • 21 Nov 2014 • Giulio Bottegal, Aleksandr Y. Aravkin, Håkan Hjalmarsson, Gianluigi Pillonetto

In this paper, we introduce a novel method to robustify kernel-based system identification methods.

Automatic Inference of the Quantile Parameter

no code implementations • 12 Nov 2015 • Karthikeyan Natesan Ramamurthy, Aleksandr Y. Aravkin, Jayaraman J. Thiagarajan

However, loss functions such as quantile and quantile Huber generalize the symmetric $\ell_1$ and Huber losses to the asymmetric setting, for a fixed quantile parameter.
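
To make the quoted sentence concrete, the functions below implement the quantile (pinball) loss and one common parameterization of the quantile Huber loss, i.e. asymmetric generalizations of the $\ell_1$ and Huber penalties for a fixed quantile parameter tau. The smoothing convention near zero is an assumption here, and the paper's inference procedure for tau is not included.

```python
import numpy as np

def quantile_loss(r, tau=0.3):
    """Pinball loss: the asymmetric generalization of the l1 penalty."""
    return np.where(r >= 0, tau * r, (tau - 1) * r)

def quantile_huber_loss(r, tau=0.3, kappa=0.5):
    """Asymmetric Huber: quadratic on [kappa*(tau-1), kappa*tau], linear outside."""
    r = np.asarray(r, dtype=float)
    out = r**2 / (2 * kappa)                               # smooth region near zero
    out = np.where(r > kappa * tau, tau * r - kappa * tau**2 / 2, out)
    out = np.where(r < kappa * (tau - 1),
                   (tau - 1) * r - kappa * (tau - 1)**2 / 2, out)
    return out

r = np.linspace(-2, 2, 9)
print(quantile_loss(r))          # kinked at zero, steeper on the negative side
print(quantile_huber_loss(r))    # same tail slopes, smooth near zero
```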

Beyond L2-Loss Functions for Learning Sparse Models

no code implementations • 26 Mar 2014 • Karthikeyan Natesan Ramamurthy, Aleksandr Y. Aravkin, Jayaraman J. Thiagarajan

We propose an algorithm to learn dictionaries and obtain sparse codes when the data reconstruction fidelity is measured using any smooth PLQ cost function.

Clustering • Retrieval +2

Smoothing Dynamic Systems with State-Dependent Covariance Matrices

no code implementations • 19 Nov 2012 • Aleksandr Y. Aravkin, James V. Burke

One of the basic assumptions required to apply the Kalman smoothing framework is that error covariance matrices are known and given.

Computational Efficiency
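
For readers who want the baseline being generalized, here is a minimal Kalman filter with a Rauch-Tung-Striebel backward pass for a toy local-level model with fixed, known covariances; the state-dependent covariance machinery studied in the paper is not implemented, and the model matrices are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy local-level model: x_{k+1} = x_k + w_k,  y_k = x_k + v_k.
A, H = np.array([[1.0]]), np.array([[1.0]])
Q, R = np.array([[0.01]]), np.array([[0.25]])
n = 100
x_true = np.cumsum(np.sqrt(Q[0, 0]) * rng.standard_normal(n))
y = x_true + np.sqrt(R[0, 0]) * rng.standard_normal(n)

def kalman_smoother(y, A, H, Q, R, x0, P0):
    """Kalman filter followed by a Rauch-Tung-Striebel backward pass."""
    n, d = len(y), A.shape[0]
    xf, Pf = np.zeros((n, d)), np.zeros((n, d, d))   # filtered estimates
    xp, Pp = np.zeros((n, d)), np.zeros((n, d, d))   # one-step predictions
    x_prev, P_prev = x0, P0
    for k in range(n):
        xp[k] = A @ x_prev
        Pp[k] = A @ P_prev @ A.T + Q
        S = H @ Pp[k] @ H.T + R
        G = Pp[k] @ H.T @ np.linalg.inv(S)           # Kalman gain
        xf[k] = xp[k] + G @ (np.atleast_1d(y[k]) - H @ xp[k])
        Pf[k] = (np.eye(d) - G @ H) @ Pp[k]
        x_prev, P_prev = xf[k], Pf[k]
    xs, Ps = xf.copy(), Pf.copy()
    for k in range(n - 2, -1, -1):                   # backward smoothing pass
        C = Pf[k] @ A.T @ np.linalg.inv(Pp[k + 1])
        xs[k] = xf[k] + C @ (xs[k + 1] - xp[k + 1])
        Ps[k] = Pf[k] + C @ (Ps[k + 1] - Pp[k + 1]) @ C.T
    return xs

xs = kalman_smoother(y, A, H, Q, R, x0=np.zeros(1), P0=np.eye(1))
print(np.mean((xs[:, 0] - x_true) ** 2))             # smoothed MSE vs. the true state
```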

Fast methods for denoising matrix completion formulations, with applications to robust seismic data interpolation

no code implementations • 20 Feb 2013 • Aleksandr Y. Aravkin, Rajiv Kumar, Hassan Mansour, Ben Recht, Felix J. Herrmann

In this paper, we consider matrix completion formulations designed to hit a target data-fitting error level provided by the user, and propose an algorithm called LR-BPDN that is able to exploit factorized formulations to solve the corresponding optimization problem.

Collaborative Filtering • Denoising +1

Semistochastic Quadratic Bound Methods

no code implementations • 5 Sep 2013 • Aleksandr Y. Aravkin, Anna Choromanska, Tony Jebara, Dimitri Kanevsky

Batch methods based on the quadratic bound were recently proposed for this class of problems, and performed favorably in comparison to state-of-the-art techniques.

Outlier robust system identification: a Bayesian kernel-based approach

no code implementations • 21 Dec 2013 • Giulio Bottegal, Aleksandr Y. Aravkin, Håkan Hjalmarsson, Gianluigi Pillonetto

In this paper, we propose an outlier-robust regularized kernel-based method for linear system identification.

Accelerating Hessian-free optimization for deep neural networks by implicit preconditioning and sampling

no code implementations • 5 Sep 2013 • Tara N. Sainath, Lior Horesh, Brian Kingsbury, Aleksandr Y. Aravkin, Bhuvana Ramabhadran

This study aims at speeding up Hessian-free training, both by means of decreasing the amount of data used for training, as well as through reduction of the number of Krylov subspace solver iterations used for implicit estimation of the Hessian.

Improvements to deep convolutional neural networks for LVCSR

no code implementations • 5 Sep 2013 • Tara N. Sainath, Brian Kingsbury, Abdel-rahman Mohamed, George E. Dahl, George Saon, Hagen Soltau, Tomas Beran, Aleksandr Y. Aravkin, Bhuvana Ramabhadran

We find that with these improvements, particularly with fMLLR and dropout, we are able to achieve an additional 2-3% relative improvement in WER on a 50-hour Broadcast News task over our previous best CNN baseline.

Speech Recognition

The connection between Bayesian estimation of a Gaussian random field and RKHS

no code implementations • 22 Jan 2013 • Aleksandr Y. Aravkin, Bradley M. Bell, James V. Burke, Gianluigi Pillonetto

Reconstruction of a function from noisy data is often formulated as a regularized optimization problem over an infinite-dimensional reproducing kernel Hilbert space (RKHS).
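
The finite-sample content of this connection is that the Gaussian process posterior mean and the RKHS-regularized (kernel ridge) estimate are the same formula once the regularization weight is set to the measurement-noise variance; the short sketch below writes that common estimator out. The Gaussian kernel, length scale, and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
X = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(n)
Xs = np.linspace(0, 1, 5)                       # a few prediction points

def gauss_kernel(a, b, ell=0.15):
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * ell**2))

K, Ks = gauss_kernel(X, X), gauss_kernel(Xs, X)
noise_var = 0.1 ** 2                            # GP measurement-noise variance
lam = noise_var                                 # RKHS regularization weight

# Read one way: GP posterior mean  E[f(x*) | y] = k*' (K + sigma^2 I)^{-1} y.
# Read the other way: the representer-theorem solution of
#   min_f  sum_i (y_i - f(x_i))^2 + lam * ||f||_H^2,   with lam = sigma^2.
alpha = np.linalg.solve(K + lam * np.eye(n), y)
print(Ks @ alpha)                               # the common estimate at Xs
```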

Sparse/Robust Estimation and Kalman Smoothing with Nonsmooth Log-Concave Densities: Modeling, Computation, and Theory

no code implementations • 19 Jan 2013 • Aleksandr Y. Aravkin, James V. Burke, Gianluigi Pillonetto

We introduce a class of quadratic support (QS) functions, many of which play a crucial role in a variety of applications, including machine learning, robust statistical inference, sparsity promotion, and Kalman smoothing.

Computational Efficiency • Time Series Analysis

Computer Assisted Localization of a Heart Arrhythmia

no code implementations • 9 Jul 2018 • Chris Vogl, Peng Zheng, Stephen P. Seslar, Aleksandr Y. Aravkin

We consider the problem of locating a point-source heart arrhythmia using data from a standard diagnostic procedure, where a reference catheter is placed in the heart and arrival times are recorded from a second, diagnostic catheter as it moves around within the heart.

Anatomy

A Unified Framework for Sparse Relaxed Regularized Regression: SR3

no code implementations • 14 Jul 2018 • Peng Zheng, Travis Askham, Steven L. Brunton, J. Nathan Kutz, Aleksandr Y. Aravkin

We demonstrate the advantages of SR3 (computational efficiency, higher accuracy, faster convergence rates, greater flexibility) across a range of regularized regression problems with synthetic and real data, including applications in compressed sensing, LASSO, matrix completion, TV regularization, and group sparsity.

Computational Efficiency • Matrix Completion +3
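
A minimal sketch of the relaxed formulation for the LASSO case, assuming the objective (1/2)||Ax - b||^2 + lam*||w||_1 + (kappa/2)||x - w||^2 with an exact x-solve and a soft-thresholding w-update. The problem sizes, lam, and kappa are demo assumptions, and the paper's other regularizers and stopping rules are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 100, 200, 10
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true + 0.01 * rng.standard_normal(m)

def sr3_lasso(A, b, lam=0.1, kappa=1.0, iters=200):
    """Alternate on 0.5*||Ax-b||^2 + lam*||w||_1 + 0.5*kappa*||x-w||^2."""
    m, n = A.shape
    w = np.zeros(n)
    AtA_kI = A.T @ A + kappa * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA_kI, Atb + kappa * w)                 # smooth x-update
        w = np.sign(x) * np.maximum(np.abs(x) - lam / kappa, 0.0)    # l1 prox on w
    return x, w

x, w = sr3_lasso(A, b)
print(np.count_nonzero(w), "nonzeros in w;",
      np.count_nonzero((w != 0) != (x_true != 0)), "support mismatches")
```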

Adaptive As-Natural-As-Possible Image Stitching

no code implementations • CVPR 2015 • Chung-Ching Lin, Sharathchandra U. Pankanti, Karthikeyan Natesan Ramamurthy, Aleksandr Y. Aravkin

Computing the warp is fully automated and uses a combination of local homography and global similarity transformations, both of which are estimated with respect to the target.

Image Stitching

Data-Driven Aerospace Engineering: Reframing the Industry with Machine Learning

no code implementations • 24 Aug 2020 • Steven L. Brunton, J. Nathan Kutz, Krithika Manohar, Aleksandr Y. Aravkin, Kristi Morgansen, Jennifer Klemisch, Nicholas Goebel, James Buttrick, Jeffrey Poskin, Agnes Blom-Schieber, Thomas Hogan, Darren McDonald

Indeed, emerging methods in machine learning may be thought of as data-driven optimization techniques that are ideal for high-dimensional, non-convex, constrained, multi-objective optimization problems, and that improve with increasing volumes of data.

BIG-bench Machine Learning

A Proof of Principle: Multi-Modality Radiotherapy Optimization

no code implementations • 12 Nov 2019 • Roman Levin, Aleksandr Y. Aravkin, Minsun Kim

In this paper, we propose a mathematical framework to optimize full radiation dose distributions and fractionation schedules of multiple radiation modalities, aiming to maximize the damage to the tumor while limiting the damage to the normal tissue to the corresponding tolerance level.

Optimization and Control • Medical Physics

$\ell_1$-Norm Minimization with Regula Falsi Type Root Finding Methods

no code implementations • 1 May 2021 • Metin Vural, Aleksandr Y. Aravkin, Sławomir Stańczak

Sparse level-set formulations allow practitioners to find the minimum $\ell_1$-norm solution subject to likelihood constraints.

Vocal Bursts Type Prediction
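
The level-set idea in the abstract can be sketched as root finding on the value function v(tau) = min over ||x||_1 <= tau of ||Ax - b||_2, searching for the tau at which v(tau) meets a target misfit. The sketch below uses an $\ell_1$-ball projection, projected gradient for the subproblem, and a plain regula falsi update for the root; all three are simple stand-ins for illustration, not the methods developed in the paper, and the bracket's upper endpoint is set from the known synthetic solution purely for convenience.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 80, 200
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
b = A @ x_true + 0.01 * rng.standard_normal(m)
sigma = 0.02 * np.sqrt(m)               # target misfit, ~2x the expected noise norm
step = 1.0 / np.linalg.norm(A, 2) ** 2  # gradient step for the subproblem

def project_l1(v, tau):
    """Euclidean projection onto the l1 ball of radius tau."""
    if tau <= 0:
        return np.zeros_like(v)
    if np.sum(np.abs(v)) <= tau:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - tau)[0][-1]
    theta = (css[rho] - tau) / (rho + 1)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def value_fn(tau, iters=2000):
    """v(tau) = min ||Ax - b||_2 over ||x||_1 <= tau, via projected gradient."""
    x = np.zeros(n)
    for _ in range(iters):
        x = project_l1(x - step * A.T @ (A @ x - b), tau)
    return np.linalg.norm(A @ x - b)

# Regula falsi (false position) on f(tau) = v(tau) - sigma over a bracket.
tau_lo, tau_hi = 0.0, np.sum(np.abs(x_true))           # bracket for the demo
f_lo, f_hi = value_fn(tau_lo) - sigma, value_fn(tau_hi) - sigma
tau = tau_hi
for _ in range(20):
    tau = tau_hi - f_hi * (tau_hi - tau_lo) / (f_hi - f_lo)   # secant point
    f = value_fn(tau) - sigma
    if abs(f) < 1e-3:
        break
    if f > 0:
        tau_lo, f_lo = tau, f
    else:
        tau_hi, f_hi = tau, f

print(f"tau = {tau:.3f}, misfit = {value_fn(tau):.4f}, target = {sigma:.4f}")
```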
