Search Results for author: Shenghuo Zhu

Found 28 papers, 2 papers with code

Robust Gaussian Process Regression for Real-Time High Precision GPS Signal Enhancement

no code implementations · 3 Jun 2019 · Ming Lin, Xiaomin Song, Qi Qian, Hao Li, Liang Sun, Shenghuo Zhu, Rong Jin

We validate the superiority of the proposed method in our real-time high precision positioning system against several popular state-of-the-art robust regression methods.

regression

RobustSTL: A Robust Seasonal-Trend Decomposition Algorithm for Long Time Series

1 code implementation · 5 Dec 2018 · Qingsong Wen, Jingkun Gao, Xiaomin Song, Liang Sun, Huan Xu, Shenghuo Zhu

Based on the extracted trend, we apply non-local seasonal filtering to extract the seasonality component.

Anomaly Detection Time Series +1

Parallel Restarted SGD with Faster Convergence and Less Communication: Demystifying Why Model Averaging Works for Deep Learning

no code implementations · 17 Jul 2018 · Hao Yu, Sen Yang, Shenghuo Zhu

Ideally, parallel mini-batch SGD can achieve a linear speed-up of the training time (with respect to the number of workers) compared with SGD over a single worker.
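The model-averaging scheme analyzed in this paper can be illustrated with a toy sketch: each worker runs SGD locally and, every few steps, the models are averaged and broadcast back. All names, constants, and the scalar quadratic objective below are illustrative, not the paper's experimental setup.

```python
# Illustrative local SGD with periodic model averaging ("restarts").
# Toy objective per worker: f(w) = 0.5 * (w - target)^2 with Gaussian
# gradient noise; every `sync_every` steps the workers' models are
# averaged and broadcast back.
import random

def local_sgd(num_workers=4, steps=200, sync_every=10, lr=0.1,
              target=3.0, seed=0):
    rng = random.Random(seed)
    workers = [0.0] * num_workers            # each worker's local model
    for t in range(1, steps + 1):
        for i in range(num_workers):
            grad = (workers[i] - target) + rng.gauss(0.0, 0.1)
            workers[i] -= lr * grad
        if t % sync_every == 0:              # model averaging step
            avg = sum(workers) / num_workers
            workers = [avg] * num_workers
    return sum(workers) / num_workers

w_final = local_sgd()                        # converges near the optimum 3.0
```

Averaging only every `sync_every` steps is what cuts communication relative to synchronizing mini-batch gradients at every iteration.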

Large-scale Distance Metric Learning with Uncertainty

no code implementations · CVPR 2018 · Qi Qian, Jiasheng Tang, Hao Li, Shenghuo Zhu, Rong Jin

Furthermore, we can show that the metric is learned from latent examples only, but it can preserve the large margin property even for the original data.

Metric Learning

Learning with Non-Convex Truncated Losses by SGD

no code implementations · 21 May 2018 · Yi Xu, Shenghuo Zhu, Sen Yang, Chi Zhang, Rong Jin, Tianbao Yang

Learning with a convex loss function has been a dominating paradigm for many years.

Robust Optimization over Multiple Domains

no code implementations · 19 May 2018 · Qi Qian, Shenghuo Zhu, Jiasheng Tang, Rong Jin, Baigui Sun, Hao Li

Hence, we propose to learn the model and the adversarial distribution simultaneously with a stochastic algorithm for efficiency.

BIG-bench Machine Learning Cloud Computing +1

Multinomial Logit Bandit with Linear Utility Functions

no code implementations · 8 May 2018 · Mingdong Ou, Nan Li, Shenghuo Zhu, Rong Jin

In each round, the player selects a $K$-cardinality subset from $N$ candidate items, and receives a reward governed by a multinomial logit (MNL) choice model that accounts for both item utility and the substitution property among items.
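The MNL choice model underlying the reward can be sketched in a few lines: given the utilities of the offered items, each outcome (including "no purchase", conventionally with utility 0) is chosen with probability proportional to the exponential of its utility. The utilities below are made-up numbers for illustration; this is the standard MNL formula, not the paper's bandit algorithm.

```python
import math

def mnl_choice_probs(utilities):
    # MNL choice probabilities over an offered assortment.
    # Index 0 is the reserved "no purchase" option with utility 0,
    # whose weight is exp(0) = 1.
    weights = [1.0] + [math.exp(u) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

# Offer a 3-item assortment with illustrative utilities:
probs = mnl_choice_probs([0.5, 1.0, -0.2])
```

Note how the probabilities capture substitution: adding a high-utility item to the assortment lowers every other item's choice probability, since all weights share one normalizer.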

Extremely Low Bit Neural Network: Squeeze the Last Bit Out with ADMM

no code implementations · 24 Jul 2017 · Cong Leng, Hao Li, Shenghuo Zhu, Rong Jin

Although deep learning models are highly effective for various learning tasks, their high computational costs prohibit the deployment to scenarios where either memory or computational resources are limited.

object-detection Object Detection +1

Similarity Learning via Adaptive Regression and Its Application to Image Retrieval

no code implementations · 6 Dec 2015 · Qi Qian, Inci M. Baytas, Rong Jin, Anil Jain, Shenghuo Zhu

The similarity between pairs of images can be measured by the distances between their high dimensional representations, and the problem of learning the appropriate similarity is often addressed by distance metric learning.

Image Retrieval Metric Learning +2

Towards Making High Dimensional Distance Metric Learning Practical

no code implementations · 15 Sep 2015 · Qi Qian, Rong Jin, Lijun Zhang, Shenghuo Zhu

In this work, we present a dual random projection framework for DML with high-dimensional data that explicitly addresses the limitation of dimensionality reduction for DML.

Dimensionality Reduction Metric Learning +1

Theory of Dual-sparse Regularized Randomized Reduction

no code implementations · 15 Apr 2015 · Tianbao Yang, Lijun Zhang, Rong Jin, Shenghuo Zhu

In this paper, we study randomized reduction methods, which reduce high-dimensional features into a low-dimensional space by randomized methods (e.g., random projection, random hashing), for large-scale high-dimensional classification.

General Classification

On Data Preconditioning for Regularized Loss Minimization

no code implementations · 13 Aug 2014 · Tianbao Yang, Rong Jin, Shenghuo Zhu, Qihang Lin

In this work, we study data preconditioning, a well-known and long-existing technique, for boosting the convergence of first-order methods for regularized loss minimization.

CUR Algorithm with Incomplete Matrix Observation

no code implementations · 22 Mar 2014 · Rong Jin, Shenghuo Zhu

Our goal is to develop a low rank approximation algorithm, similar to CUR, based on (i) randomly sampled rows and columns from A, and (ii) randomly sampled entries from A.

Matrix Completion
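The classical CUR decomposition the paper builds on can be sketched as follows: approximate A from a sampled column submatrix C and row submatrix R, linking them with a middle matrix U built from pseudoinverses. This toy version uses the full matrix A to form U, so it does not implement the paper's contribution (working from incomplete entry observations); indices and dimensions are illustrative.

```python
import numpy as np

def cur_approx(A, row_idx, col_idx):
    # CUR-style approximation A ~ C @ U @ R from sampled columns C and
    # sampled rows R, with U = pinv(C) @ A @ pinv(R).
    C = A[:, col_idx]
    R = A[row_idx, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C @ U @ R

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 8))  # rank-2 matrix
A_hat = cur_approx(A, row_idx=[0, 1, 2], col_idx=[0, 1, 2])
rel_err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
```

For an exactly low-rank matrix whose sampled rows and columns span its row and column spaces, this reconstruction is exact up to floating-point error.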

Fine-Grained Visual Categorization via Multi-stage Metric Learning

no code implementations · CVPR 2015 · Qi Qian, Rong Jin, Shenghuo Zhu, Yuanqing Lin

To this end, we propose a multi-stage metric learning framework that divides the large-scale high-dimensional learning problem into a series of simple subproblems, achieving $\mathcal{O}(d)$ computational complexity.

Fine-Grained Visual Categorization Metric Learning

Analysis of Distributed Stochastic Dual Coordinate Ascent

no code implementations · 4 Dec 2013 · Tianbao Yang, Shenghuo Zhu, Rong Jin, Yuanqing Lin

Extraordinary performance has been observed and reported for the well-motivated updates, referred to as the practical updates, compared with the naive updates.

Efficient Object Detection and Segmentation for Fine-Grained Recognition

no code implementations · CVPR 2013 · Anelia Angelova, Shenghuo Zhu

The algorithm first detects low-level regions that could potentially belong to the object and then performs a full-object segmentation through propagation.

Object object-detection +2

Stochastic gradient descent algorithms for strongly convex functions at O(1/T) convergence rates

no code implementations · 9 May 2013 · Shenghuo Zhu

With a weighting scheme proportional to $t$, a traditional stochastic gradient descent (SGD) algorithm achieves a high-probability convergence rate of $O(\kappa/T)$ for strongly convex functions, instead of $O(\kappa \ln(T)/T)$.
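The weighting idea can be sketched on a one-dimensional strongly convex toy problem: run SGD with a standard O(1/t) step size and return the running average of the iterates with weights proportional to t, rather than the uniform average. The objective, noise level, and step-size constant below are illustrative assumptions, not the paper's analysis.

```python
import random

def weighted_sgd(steps=500, mu=1.0, target=2.0, seed=1):
    # SGD on f(w) = (mu/2) * (w - target)^2 with noisy gradients,
    # returning the running average of the iterates with weight t
    # on the t-th iterate.
    rng = random.Random(seed)
    w, avg, weight_sum = 0.0, 0.0, 0.0
    for t in range(1, steps + 1):
        grad = mu * (w - target) + rng.gauss(0.0, 0.5)
        w -= (2.0 / (mu * (t + 1))) * grad   # O(1/t) step size
        weight_sum += t
        avg += (t / weight_sum) * (w - avg)  # t-weighted running mean
    return avg

w_avg = weighted_sgd()                       # close to the optimum 2.0
```

Weighting by t down-weights the noisy early iterates, which is what removes the ln(T) factor from the uniform-averaging rate.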

One-Pass AUC Optimization

no code implementations · 7 May 2013 · Wei Gao, Rong Jin, Shenghuo Zhu, Zhi-Hua Zhou

AUC is an important performance measure and many algorithms have been devoted to AUC optimization, mostly by minimizing a surrogate convex loss on a training data set.

Efficient Distance Metric Learning by Adaptive Sampling and Mini-Batch Stochastic Gradient Descent (SGD)

no code implementations · 3 Apr 2013 · Qi Qian, Rong Jin, Jin-Feng Yi, Lijun Zhang, Shenghuo Zhu

Although stochastic gradient descent (SGD) has been successfully applied to improve the efficiency of DML, it can still be computationally expensive: to ensure that the solution is a PSD matrix, it must project the updated distance metric onto the PSD cone at every iteration, an expensive operation.

Computational Efficiency Metric Learning
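The expensive per-iteration step described above, projecting a matrix onto the PSD cone, is standardly done via an eigendecomposition with negative eigenvalues clipped to zero. This is a sketch of that generic projection only, not of the paper's adaptive-sampling or mini-batch scheme; the example matrix is made up.

```python
import numpy as np

def project_psd(M):
    # Project a symmetric matrix onto the PSD cone by zeroing out
    # negative eigenvalues. The O(d^3) eigendecomposition is what
    # makes this step costly at every SGD iteration.
    M = (M + M.T) / 2.0                      # symmetrize first
    vals, vecs = np.linalg.eigh(M)
    vals = np.clip(vals, 0.0, None)          # clip negative eigenvalues
    return (vecs * vals) @ vecs.T

P = project_psd(np.array([[2.0, 0.0],
                          [0.0, -1.0]]))
```

Here the negative eigenvalue -1 is clipped, leaving the PSD matrix diag(2, 0).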

Large Scale Strongly Supervised Ensemble Metric Learning, with Applications to Face Verification and Retrieval

1 code implementation · 25 Dec 2012 · Chang Huang, Shenghuo Zhu, Kai Yu

Learning Mahalanobis distance metrics in a high-dimensional feature space is very difficult, especially when structural sparsity and low rank are enforced to improve computational efficiency in the testing phase.

Face Verification Metric Learning +1

Stochastic Gradient Descent with Only One Projection

no code implementations · NeurIPS 2012 · Mehrdad Mahdavi, Tianbao Yang, Rong Jin, Shenghuo Zhu, Jin-Feng Yi

Although many variants of stochastic gradient descent have been proposed for large-scale convex optimization, most of them require projecting the solution at each iteration to ensure that the obtained solution stays within the feasible domain.
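A toy picture of the "one projection" idea: run unconstrained SGD, average the iterates, and project onto the feasible set a single time at the end. This naive sketch is only the motivating picture, not the paper's algorithm (which handles the constraint more carefully to retain convergence guarantees); the objective and constraint interval below are made up.

```python
import random

def sgd_one_projection(steps=300, lr=0.05, seed=2):
    # Plain SGD on f(w) = 0.5 * (w - 5)^2, ignoring the constraint
    # |w| <= 1 during the iterations, then projecting the averaged
    # iterate onto [-1, 1] just once at the end.
    rng = random.Random(seed)
    w, avg = 0.0, 0.0
    for t in range(1, steps + 1):
        grad = (w - 5.0) + rng.gauss(0.0, 0.1)
        w -= lr * grad
        avg += (w - avg) / t                 # running average of iterates
    return max(-1.0, min(1.0, avg))          # the single projection

w_proj = sgd_one_projection()
```

Since the unconstrained minimum (5) lies outside the feasible interval, the single final projection clamps the averaged iterate to the boundary at 1.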

An Efficient Primal-Dual Prox Method for Non-Smooth Optimization

no code implementations · 24 Jan 2012 · Tianbao Yang, Mehrdad Mahdavi, Rong Jin, Shenghuo Zhu

We study the non-smooth optimization problems in machine learning, where both the loss function and the regularizer are non-smooth functions.

BIG-bench Machine Learning

Deep Coding Network

no code implementations · NeurIPS 2010 · Yuanqing Lin, Tong Zhang, Shenghuo Zhu, Kai Yu

This paper proposes a principled extension of the traditional single-layer flat sparse coding scheme, where a two-layer coding scheme is derived based on theoretical analysis of nonlinear functional approximation that extends recent results for local coordinate coding.

Stochastic Relational Models for Large-scale Dyadic Data using MCMC

no code implementations · NeurIPS 2008 · Shenghuo Zhu, Kai Yu, Yihong Gong

Stochastic relational models provide a rich family of choices for learning and predicting dyadic data between two sets of entities.

Bayesian Inference Collaborative Filtering

Predictive Matrix-Variate t Models

no code implementations · NeurIPS 2007 · Shenghuo Zhu, Kai Yu, Yihong Gong

It is becoming increasingly important to learn from a partially-observed random matrix and predict its missing elements.

Missing Elements Model Selection
