Search Results for author: Huahua Wang

Found 8 papers, 0 papers with code

MultiXNet: Multiclass Multistage Multimodal Motion Prediction

no code implementations • 3 Jun 2020 • Nemanja Djuric, Henggang Cui, Zhaoen Su, Shangxuan Wu, Huahua Wang, Fang-Chieh Chou, Luisa San Martin, Song Feng, Rui Hu, Yang Xu, Alyssa Dayan, Sidney Zhang, Brian C. Becker, Gregory P. Meyer, Carlos Vallespi-Gonzalez, Carl K. Wellington

One of the critical pieces of the self-driving puzzle is understanding the surroundings of a self-driving vehicle (SDV) and predicting how these surroundings will change in the near future.

Motion Prediction • Position

Parallel Direction Method of Multipliers

no code implementations • NeurIPS 2014 • Huahua Wang, Arindam Banerjee, Zhi-Quan Luo

In this paper, we propose a parallel randomized block coordinate method named Parallel Direction Method of Multipliers (PDMM) to solve the optimization problems with multi-block linear constraints.
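A toy sketch of the randomized block-update pattern the abstract describes (not the paper's exact PDMM algorithm): minimize a sum of block-separable quadratics subject to one coupling linear constraint, updating a single randomly chosen block per iteration followed by a damped dual step. The problem, step sizes, and variable names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_blocks, dim = 3, 4
b = rng.normal(size=(n_blocks, dim))    # per-block targets
c = np.ones(dim)                        # coupling constraint: sum_i x_i = c
rho, tau = 1.0, 0.5                     # penalty and dual damping (illustrative)

x = np.zeros((n_blocks, dim))
y = np.zeros(dim)                       # dual variable for sum_i x_i = c

for _ in range(2000):
    i = rng.integers(n_blocks)          # pick one block uniformly at random
    s_rest = x.sum(axis=0) - x[i]       # contribution of the other blocks
    # closed-form block minimizer of the augmented Lagrangian in x_i:
    #   argmin 0.5||x_i - b_i||^2 + y.x_i + (rho/2)||x_i + s_rest - c||^2
    x[i] = (b[i] - y - rho * (s_rest - c)) / (1.0 + rho)
    y += tau * rho * (x.sum(axis=0) - c)   # damped dual ascent

print(np.abs(x.sum(axis=0) - c).max())  # primal feasibility gap; should be tiny
```

For this strongly convex toy problem the fixed point is x_i = b_i − y* with y* = (Σ b_i − c)/n, which the iteration approaches quickly.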

Randomized Block Coordinate Descent for Online and Stochastic Optimization

no code implementations • 1 Jul 2014 • Huahua Wang, Arindam Banerjee

One is online or stochastic gradient descent (OGD/SGD), and the other is randomized block coordinate descent (RBCD).
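To illustrate the RBCD side of this comparison, here is a minimal sketch (illustrative, not the paper's algorithm): randomized coordinate descent on a least-squares objective, sampling one coordinate per iteration and minimizing exactly along it while maintaining the residual incrementally.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 8))
b = rng.normal(size=50)

# Per-coordinate Lipschitz constants of grad f(x) = A.T @ (A @ x - b)
L = (A * A).sum(axis=0)

x = np.zeros(8)
r = A @ x - b                     # maintain the residual A@x - b incrementally
for _ in range(5000):
    j = rng.integers(8)           # sample one coordinate uniformly
    g = A[:, j] @ r               # j-th partial derivative
    step = g / L[j]               # exact minimization along coordinate j
    x[j] -= step
    r -= step * A[:, j]           # O(rows) residual update

x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.abs(x - x_star).max())   # should be near zero
```

Each iteration touches a single column of A, which is what makes coordinate methods attractive when full gradients are expensive.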

Stochastic Optimization

Large Scale Distributed Sparse Precision Estimation

no code implementations • NeurIPS 2013 • Huahua Wang, Arindam Banerjee, Cho-Jui Hsieh, Pradeep K. Ravikumar, Inderjit S. Dhillon

We consider the problem of sparse precision matrix estimation in high dimensions using the CLIME estimator, which has several desirable theoretical properties.
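CLIME estimates each column of the precision matrix by an l1-minimization under an elementwise constraint, ||S β − e_j||∞ ≤ λ, which is a small linear program per column. A toy sketch (population covariance, no sampling noise; λ and the problem sizes are illustrative assumptions, and this is not the paper's distributed solver):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
p = 4
Omega_true = (np.eye(p) + 0.3 * np.diag(np.ones(p - 1), 1)
              + 0.3 * np.diag(np.ones(p - 1), -1))   # tridiagonal precision
S = np.linalg.inv(Omega_true)     # population covariance
lam = 0.05                        # constraint level (illustrative)

Omega = np.zeros((p, p))
for j in range(p):                # one small LP per column
    e = np.zeros(p)
    e[j] = 1.0
    # split beta = beta_plus - beta_minus with both parts nonnegative
    cvec = np.ones(2 * p)         # objective: l1 norm of beta
    A_ub = np.block([[S, -S], [-S, S]])
    b_ub = np.concatenate([lam + e, lam - e])   # |S beta - e|_inf <= lam
    res = linprog(cvec, A_ub=A_ub, b_ub=b_ub)   # vars default to >= 0
    Omega[:, j] = res.x[:p] - res.x[p:]

print(np.abs(S @ Omega - np.eye(p)).max())  # feasibility: at most lam
```

The column-wise decoupling is what makes large-scale parallel and distributed variants natural, which is the direction the paper pursues.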

Bethe-ADMM for Tree Decomposition based Parallel MAP Inference

no code implementations • 26 Sep 2013 • Qiang Fu, Huahua Wang, Arindam Banerjee

We present a parallel MAP inference algorithm called Bethe-ADMM based on two ideas: tree-decomposition of the graph and the alternating direction method of multipliers (ADMM).
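The ADMM half of that recipe alternates a smooth-subproblem update, a proximal update, and a dual ascent step. A generic scaled-dual ADMM sketch on a lasso problem (purely illustrative of the alternating updates; Bethe-ADMM's actual subproblems come from the tree-decomposed Bethe free energy, not from lasso):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(40, 10))
b = rng.normal(size=40)
lam, rho = 0.5, 1.0               # l1 weight and ADMM penalty (illustrative)

# Solve min 0.5||Ax-b||^2 + lam*||z||_1  s.t.  x = z  (scaled-dual ADMM)
x = np.zeros(10)
z = np.zeros(10)
u = np.zeros(10)
M = np.linalg.inv(A.T @ A + rho * np.eye(10))   # cached x-update matrix
for _ in range(500):
    x = M @ (A.T @ b + rho * (z - u))           # quadratic x-update
    v = x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft-threshold
    u = u + x - z                               # scaled dual ascent

print(np.abs(x - z).max())   # primal residual; should be tiny
```

In Bethe-ADMM the same alternation runs with one subproblem per tree, so all trees can be updated in parallel between dual steps.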

Tree Decomposition

Online Alternating Direction Method (longer version)

no code implementations • 17 Jun 2013 • Huahua Wang, Arindam Banerjee

Online optimization has emerged as a powerful tool in large-scale optimization.

Bregman Alternating Direction Method of Multipliers

no code implementations • NeurIPS 2014 • Huahua Wang, Arindam Banerjee

The mirror descent algorithm (MDA) generalizes gradient descent by using a Bregman divergence to replace squared Euclidean distance.
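With the negative-entropy Bregman divergence on the simplex, mirror descent reduces to the multiplicative exponentiated-gradient update, which makes the "replace squared Euclidean distance" idea concrete. A toy sketch minimizing a linear cost over the probability simplex (cost vector and step size are illustrative):

```python
import numpy as np

c = np.array([0.7, 0.2, 0.9, 0.4])   # linear cost; minimized at index 1
x = np.full(4, 0.25)                  # start at the uniform distribution
eta = 0.5                             # step size (illustrative)

for _ in range(200):
    g = c                             # gradient of <c, x>
    x = x * np.exp(-eta * g)          # mirror step under negative entropy
    x = x / x.sum()                   # Bregman projection back to the simplex
    # equivalently: x_i proportional to x_i * exp(-eta * c_i)

print(x.round(3))  # mass concentrates on the smallest-cost coordinate
```

Choosing the divergence to match the geometry of the feasible set (entropy for the simplex, squared Euclidean for R^n) is the same design lever the Bregman ADMM of this paper applies to the ADMM subproblems.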
