Search Results for author: Hoang Tran

Found 11 papers, 4 papers with code

Orthogonally weighted $\ell_{2,1}$ regularization for rank-aware joint sparse recovery: algorithm and analysis

1 code implementation • 21 Nov 2023 • Armenak Petrosyan, Konstantin Pieper, Hoang Tran

We propose and analyze an efficient algorithm for solving the joint sparse recovery problem using a new regularization-based method, named orthogonally weighted $\ell_{2, 1}$ ($\mathit{ow}\ell_{2, 1}$), which is specifically designed to take into account the rank of the solution matrix.

Dictionary Learning
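
To make the penalty concrete, here is a small sketch of the plain $\ell_{2,1}$ norm (sum of row-wise Euclidean norms) together with a hypothetical "orthogonally weighted" variant; the specific weighting by the pseudo-inverse square root of $X^\top X$ is an assumption suggested by the method's name, not necessarily the paper's exact definition.

```python
import numpy as np

def l21_norm(X):
    # l_{2,1} norm: sum of the Euclidean norms of the rows of X
    return np.sum(np.linalg.norm(X, axis=1))

def ow_l21_norm(X):
    # Hypothetical orthogonally weighted variant: right-multiply X by the
    # pseudo-inverse square root of X^T X before taking the l_{2,1} norm.
    # This weighting is an assumption based on the method's name, not the
    # paper's exact definition.
    G = X.T @ X
    w, V = np.linalg.eigh(G)
    w_inv_sqrt = np.where(w > 1e-12, 1.0 / np.sqrt(np.clip(w, 1e-12, None)), 0.0)
    W = V @ np.diag(w_inv_sqrt) @ V.T
    return l21_norm(X @ W)

X = np.array([[3.0, 4.0], [0.0, 0.0], [1.0, 0.0]])
print(l21_norm(X))  # 5.0 + 0.0 + 1.0 = 6.0
```

The rank awareness enters through the weighting: rescaling by a function of $X^\top X$ ties the row-wise penalty to the column space of the solution matrix.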

Point Cloud Data Simulation and Modelling with Aize Workspace

no code implementations • 19 Jan 2023 • Boris Mocialov, Eirik Eythorsson, Reza Parseh, Hoang Tran, Vegard Flovik

This work takes a look at data models often used in digital twins and presents preliminary results specifically from surface reconstruction and semantic segmentation models trained using simulated data.

Segmentation • Semantic Segmentation • +1

Momentum Aggregation for Private Non-convex ERM

no code implementations • 12 Oct 2022 • Hoang Tran, Ashok Cutkosky

We introduce new algorithms and convergence guarantees for privacy-preserving non-convex Empirical Risk Minimization (ERM) on smooth $d$-dimensional objectives.

Privacy Preserving

Differentially Private Online-to-Batch for Smooth Losses

no code implementations • 12 Oct 2022 • Qinzi Zhang, Hoang Tran, Ashok Cutkosky

We develop a new reduction that converts any online convex optimization algorithm suffering $O(\sqrt{T})$ regret into an $\epsilon$-differentially private stochastic convex optimization algorithm with the optimal convergence rate $\tilde O(1/\sqrt{T} + \sqrt{d}/\epsilon T)$ on smooth losses in linear time, forming a direct analogy to the classical non-private "online-to-batch" conversion.
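
For context, the classical non-private "online-to-batch" conversion the abstract alludes to is simple: run an online learner on stochastic gradients and return the average iterate. The sketch below shows only that baseline (the paper's contribution is the differentially private analogue, which is not reproduced here); the online gradient descent rule and step sizes are illustrative choices.

```python
import numpy as np

def online_to_batch(online_step, sample_grad, T, x0):
    """Classical (non-private) online-to-batch conversion: run an online
    algorithm for T rounds on stochastic gradients and return the running
    average of its iterates."""
    x = x0.copy()
    avg = np.zeros_like(x0)
    for t in range(1, T + 1):
        g = sample_grad(x)          # stochastic gradient at current iterate
        x = online_step(x, g, t)    # one step of the online learner
        avg += (x - avg) / t        # running average of iterates
    return avg

# Toy usage: online gradient descent with a 1/sqrt(t) step size on a quadratic.
rng = np.random.default_rng(0)
sample_grad = lambda x: 2 * x + 0.1 * rng.standard_normal(x.shape)
ogd = lambda x, g, t: x - g / np.sqrt(t)
x_bar = online_to_batch(ogd, sample_grad, T=2000, x0=np.ones(3))
```

An online learner with $O(\sqrt{T})$ regret yields an $O(1/\sqrt{T})$ convergence rate for the averaged iterate, which is the analogy the private reduction extends.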

Model Calibration of the Liquid Mercury Spallation Target using Evolutionary Neural Networks and Sparse Polynomial Expansions

no code implementations • 18 Feb 2022 • Majdi I. Radaideh, Hoang Tran, Lianshan Lin, Hao Jiang, Drew Winder, Sarma Gorti, Guannan Zhang, Justin Mach, Sarah Cousineau

Given that some of the calibrated parameters that show a good agreement with the experimental data can be nonphysical mercury properties, we need a more advanced two-phase flow model to capture bubble dynamics and mercury cavitation.

Better SGD using Second-order Momentum

1 code implementation • 4 Mar 2021 • Hoang Tran, Ashok Cutkosky

We develop a new algorithm for non-convex stochastic optimization that finds an $\epsilon$-critical point using the optimal $O(\epsilon^{-3})$ number of stochastic gradient and Hessian-vector product computations.

Stochastic Optimization
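
The following sketch illustrates the general device the title points at: a momentum (gradient) estimate corrected with a Hessian-vector product along the last step, a standard variance-reduction trick. The step-size and momentum schedules here are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def hvp_momentum_sgd(grad, hvp, x0, lr=0.05, beta=0.9, steps=200, rng=None):
    """Hedged sketch of SGD with second-order momentum: the momentum
    estimate is corrected with a Hessian-vector product along the last
    step before mixing in a fresh stochastic gradient."""
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    d = grad(x, rng)                       # initial gradient estimate
    for _ in range(steps):
        x_new = x - lr * d
        g = grad(x_new, rng)
        # Correct the old momentum for the move x -> x_new using an HVP,
        # then mix with the fresh stochastic gradient.
        d = beta * (d + hvp(x_new, x_new - x, rng)) + (1 - beta) * g
        x = x_new
    return x

# Toy quadratic f(x) = 0.5 * ||x||^2 with noisy oracles.
grad = lambda x, rng: x + 0.01 * rng.standard_normal(x.shape)
hvp = lambda x, v, rng: v  # Hessian is the identity for this f
x_star = hvp_momentum_sgd(grad, hvp, x0=np.ones(4))
```

The HVP correction keeps the momentum estimate tracking the gradient at the current iterate rather than a stale one, which is what enables the improved $O(\epsilon^{-3})$ oracle complexity in this line of work.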

AdaDGS: An adaptive black-box optimization method with a nonlocal directional Gaussian smoothing gradient

1 code implementation • 3 Nov 2020 • Hoang Tran, Guannan Zhang

The local gradient points in the direction of the steepest slope within an infinitesimal neighborhood.
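
A minimal sketch of the nonlocal directional-Gaussian-smoothing (DGS) gradient that AdaDGS builds on: smooth the objective with a 1-D Gaussian along each coordinate direction and estimate the directional derivative by Gauss-Hermite quadrature. AdaDGS additionally adapts the smoothing radius and step size; that adaptive logic is omitted here, and the coordinate directions and parameters are illustrative.

```python
import numpy as np

def dgs_gradient(f, x, sigma=0.5, n_quad=7):
    """Directional Gaussian smoothing gradient: for each direction e_i,
    estimate d/ds E_{u~N(0,1)}[f(x + sigma*u*e_i)] via Gauss-Hermite
    quadrature, giving a nonlocal surrogate gradient."""
    d = len(x)
    nodes, weights = np.polynomial.hermite.hermgauss(n_quad)
    grad = np.zeros(d)
    for i in range(d):
        e = np.zeros(d); e[i] = 1.0
        vals = np.array([f(x + np.sqrt(2) * sigma * t * e) for t in nodes])
        # Quadrature estimate of E[u * f(x + sigma*u*e_i)] / sigma
        grad[i] = np.sum(weights * np.sqrt(2) * nodes * vals) / (np.sqrt(np.pi) * sigma)
    return grad

g = dgs_gradient(lambda z: np.sum(z**2), np.array([1.0, -2.0]))
```

Because the quadrature samples the objective over a finite radius `sigma`, the estimate averages over small-scale fluctuations instead of following the infinitesimal local slope.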

Analysis of The Ratio of $\ell_1$ and $\ell_2$ Norms in Compressed Sensing

no code implementations • 13 Apr 2020 • Yiming Xu, Akil Narayan, Hoang Tran, Clayton G. Webster

We first propose a novel criterion that guarantees that an $s$-sparse signal is the local minimizer of the $\ell_1/\ell_2$ objective; our criterion is interpretable and useful in practice.
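
The $\ell_1/\ell_2$ objective itself is easy to state: it is scale-invariant, and for an $s$-sparse vector it is at most $\sqrt{s}$, so minimizing it promotes sparsity. A small sketch (the example vectors are illustrative, not from the paper):

```python
import numpy as np

def l1_over_l2(x):
    # The l1/l2 objective: a scale-invariant sparsity surrogate.
    # For an s-sparse vector it is at most sqrt(s), with equality when
    # the nonzero entries have equal magnitude.
    x = np.asarray(x, dtype=float)
    return np.linalg.norm(x, 1) / np.linalg.norm(x, 2)

sparse = np.array([1.0, 0.0, 0.0, 1.0])   # 2-sparse: ratio = sqrt(2)
dense = np.ones(4)                         # fully dense: ratio = sqrt(4) = 2
```

Scale invariance ($\ell_1/\ell_2$ of $cx$ equals that of $x$) is what distinguishes this objective from plain $\ell_1$ minimization.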

Accelerating Reinforcement Learning with a Directional-Gaussian-Smoothing Evolution Strategy

no code implementations • 21 Feb 2020 • Jiaxing Zhang, Hoang Tran, Guannan Zhang

Evolution strategy (ES) has shown great promise in many challenging reinforcement learning (RL) tasks, rivaling other state-of-the-art deep RL methods.

Reinforcement Learning (RL)

A Novel Evolution Strategy with Directional Gaussian Smoothing for Blackbox Optimization

1 code implementation • 7 Feb 2020 • Jiaxin Zhang, Hoang Tran, Dan Lu, Guannan Zhang

Standard ES methods with $d$-dimensional Gaussian smoothing suffer from the curse of dimensionality due to the high variance of Monte Carlo (MC) based gradient estimators.
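
The baseline estimator being criticized can be sketched directly: the standard ES gradient is a Monte Carlo estimate of the Gaussian-smoothed gradient over all $d$ dimensions at once, and its variance grows with $d$. This is the standard estimator (with common antithetic sampling), not the paper's directional-smoothing method; the parameters are illustrative.

```python
import numpy as np

def es_gradient(f, x, sigma=0.1, n_samples=200, rng=None):
    """Standard ES gradient estimator via d-dimensional Gaussian smoothing
    with Monte Carlo samples. Its variance grows with the dimension d,
    which is the curse-of-dimensionality issue the paper addresses."""
    rng = rng or np.random.default_rng(0)
    d = len(x)
    u = rng.standard_normal((n_samples, d))
    # Antithetic (mirrored) sampling reduces variance somewhat.
    fwd = np.array([f(x + sigma * ui) for ui in u])
    bwd = np.array([f(x - sigma * ui) for ui in u])
    return ((fwd - bwd)[:, None] * u).mean(axis=0) / (2 * sigma)

g = es_gradient(lambda z: np.sum(z**2), np.ones(5))
```

Even on this smooth quadratic, the per-coordinate noise of the estimate scales with the total dimension, which motivates replacing the full-dimensional MC estimator with per-direction quadrature.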

On Cross-validation for Sparse Reduced Rank Regression

no code implementations • 30 Dec 2018 • Yiyuan She, Hoang Tran

In high-dimensional data analysis, regularization methods pursuing sparsity and/or low rank have received a lot of attention recently.

regression
