Search Results for author: Zhi-Hua Zhou

Found 119 papers, 13 papers with code

Cost-effectively Identifying Causal Effect When Only Response Variable Observable

no code implementations ICML 2020 Tian-Zuo Wang, Xi-Zhu Wu, Sheng-Jun Huang, Zhi-Hua Zhou

In many real tasks, we care about how to make decisions rather than mere predictions on an event, e.g., how to increase the revenue next month instead of merely knowing it will drop.

Decision Making

Learning with Feature and Distribution Evolvable Streams

no code implementations ICML 2020 Zhen-Yu Zhang, Peng Zhao, Yuan Jiang, Zhi-Hua Zhou

Besides the feature space evolving, it is noteworthy that the data distribution often changes in streaming data.

Efficient Methods for Non-stationary Online Learning

no code implementations 16 Sep 2023 Peng Zhao, Yan-Feng Xie, Lijun Zhang, Zhi-Hua Zhou

In this paper, we present efficient methods for optimizing dynamic regret and adaptive regret, which reduce the number of projections per round from $\mathcal{O}(\log T)$ to $1$.

Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach

no code implementations NeurIPS 2023 Yu-Hu Yan, Peng Zhao, Zhi-Hua Zhou

Our approach is based on a multi-layer online ensemble framework incorporating novel ingredients, including a carefully designed optimism for unifying diverse function types and cascaded corrections for algorithmic stability.

Weakly Supervised AUC Optimization: A Unified Partial AUC Approach

no code implementations 23 May 2023 Zheng Xie, Yu Liu, Hao-Yuan He, Ming Li, Zhi-Hua Zhou

Since acquiring perfect supervision is usually difficult, real-world machine learning tasks often confront inaccurate, incomplete, or inexact supervision, collectively referred to as weak supervision.

A Theoretical Perspective of Machine Learning with Computational Resource Concerns

no code implementations 3 May 2023 Zhi-Hua Zhou

Conventional theoretical machine learning studies generally assume explicitly or implicitly that there are enough or even infinitely supplied computational resources.

Learning Theory Scheduling

Revisiting Weighted Strategy for Non-stationary Parametric Bandits

no code implementations 5 Mar 2023 Jing Wang, Peng Zhao, Zhi-Hua Zhou

We propose a refined analysis framework, which simplifies the derivation and importantly produces a simpler weight-based algorithm that is as efficient as window/restart-based algorithms while retaining the same regret as previous studies.

Learnware: Small Models Do Big

no code implementations 7 Oct 2022 Zhi-Hua Zhou, Zhi-Hao Tan

There are complaints about current machine learning techniques, such as the requirement of a huge amount of training data and proficient training skills, the difficulty of continual learning, the risk of catastrophic forgetting, the leakage of private/proprietary data, etc.

Continual Learning

Dynamic Regret of Online Markov Decision Processes

no code implementations 26 Aug 2022 Peng Zhao, Long-Fei Li, Zhi-Hua Zhou

For these three models, we propose novel online ensemble algorithms and establish their dynamic regret guarantees respectively, in which the results for episodic (loop-free) SSP are provably minimax optimal in terms of time horizon and certain non-stationarity measure.

Adapting to Online Label Shift with Provable Guarantees

no code implementations 5 Jul 2022 Yong Bai, Yu-Jie Zhang, Peng Zhao, Masashi Sugiyama, Zhi-Hua Zhou

In this paper, we formulate and investigate the problem of online label shift (OLaS): the learner trains an initial model from the labeled offline data and then deploys it to an unlabeled online environment where the underlying label distribution changes over time but the label-conditional density does not.

On the Intrinsic Structures of Spiking Neural Networks

no code implementations 21 Jun 2022 Shao-Qun Zhang, Jia-Yi Chen, Jin-Hui Wu, Gao Zhang, Huan Xiong, Bin Gu, Zhi-Hua Zhou

Initially, we unveil two pivotal components of intrinsic structures: the integration operation and firing-reset mechanism, by elucidating their influence on the expressivity of SNNs.

Open-environment Machine Learning

no code implementations 1 Jun 2022 Zhi-Hua Zhou

With the great success of machine learning, more and more practical tasks, particularly those involving open-environment scenarios where important factors are subject to change, called open-environment machine learning (Open ML) in this article, are presented to the community.

BIG-bench Machine Learning

Corralling a Larger Band of Bandits: A Case Study on Switching Regret for Linear Bandits

no code implementations 12 Feb 2022 Haipeng Luo, Mengxiao Zhang, Peng Zhao, Zhi-Hua Zhou

The CORRAL algorithm of Agarwal et al. (2017) and its variants (Foster et al., 2020a) achieve this goal with a regret overhead of order $\widetilde{O}(\sqrt{MT})$ where $M$ is the number of base algorithms and $T$ is the time horizon.

No-Regret Learning in Time-Varying Zero-Sum Games

no code implementations 30 Jan 2022 Mengxiao Zhang, Peng Zhao, Haipeng Luo, Zhi-Hua Zhou

Learning from repeated play in a fixed two-player zero-sum game is a classic problem in game theory and online learning.

Adaptivity and Non-stationarity: Problem-dependent Dynamic Regret for Online Convex Optimization

no code implementations 29 Dec 2021 Peng Zhao, Yu-Jie Zhang, Lijun Zhang, Zhi-Hua Zhou

Specifically, we introduce novel online algorithms that can exploit smoothness and replace the dependence on $T$ in dynamic regret with problem-dependent quantities: the variation in gradients of loss functions, the cumulative loss of the comparator sequence, and the minimum of these two terms.

Actively Identifying Causal Effects with Latent Variables Given Only Response Variable Observable

no code implementations NeurIPS 2021 Tian-Zuo Wang, Zhi-Hua Zhou

In many real tasks, it is generally desired to study the causal effect on a specific target (response variable) only, with no need to identify the thorough causal effects involving all variables.

Theoretical Exploration of Flexible Transmitter Model

no code implementations 11 Nov 2021 Jin-Hui Wu, Shao-Qun Zhang, Yuan Jiang, Zhi-Hua Zhou

Neural network models generally involve two important components, i.e., network architecture and neuron model.

ARISE: ApeRIodic SEmi-parametric Process for Efficient Markets without Periodogram and Gaussianity Assumptions

no code implementations 8 Nov 2021 Shao-Qun Zhang, Zhi-Hua Zhou

Mimicking and learning the long-term memory of efficient markets is a fundamental problem at the intersection of machine learning and financial economics for sequential data.

BIG-bench Machine Learning Time Series +1

Result Diversification by Multi-objective Evolutionary Algorithms with Theoretical Guarantees

1 code implementation 18 Oct 2021 Chao Qian, Dan-Xuan Liu, Zhi-Hua Zhou

Experiments on the applications of web-based search, multi-label feature selection and document summarization show the superior performance of the GSEMO over the state-of-the-art algorithms (i.e., the greedy algorithm and local search) under both static and dynamic environments.

Document Summarization Evolutionary Algorithms +2

LIFE: Learning Individual Features for Multivariate Time Series Prediction with Missing Values

no code implementations 30 Sep 2021 Zhao-Yu Zhang, Shao-Qun Zhang, Yuan Jiang, Zhi-Hua Zhou

Multivariate time series (MTS) prediction is ubiquitous in real-world fields, but MTS data often contains missing values.

Time Series Time Series Prediction

Towards Understanding Theoretical Advantages of Complex-Reaction Networks

no code implementations 15 Aug 2021 Shao-Qun Zhang, Wei Gao, Zhi-Hua Zhou

Complex-valued neural networks have attracted increasing attention in recent years, while the advantages of complex-valued neural networks over real-valued networks remain an open question.

Seeing Differently, Acting Similarly: Heterogeneously Observable Imitation Learning

no code implementations 17 Jun 2021 Xin-Qiang Cai, Yao-Xiang Ding, Zi-Xuan Chen, Yuan Jiang, Masashi Sugiyama, Zhi-Hua Zhou

In many real-world imitation learning tasks, the demonstrator and the learner have to act under different observation spaces.

Imitation Learning

Towards an Understanding of Benign Overfitting in Neural Networks

no code implementations 6 Jun 2021 Zhu Li, Zhi-Hua Zhou, Arthur Gretton

Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss; yet surprisingly, they possess near-optimal prediction performance, contradicting classical learning theory.

Learning Theory

Non-stationary Online Learning with Memory and Non-stochastic Control

no code implementations 7 Feb 2021 Peng Zhao, Yu-Hu Yan, Yu-Xiang Wang, Zhi-Hua Zhou

We study the problem of Online Convex Optimization (OCO) with memory, which allows loss functions to depend on past decisions and thus captures temporal effects of learning problems.

Towards Convergence Rate Analysis of Random Forests for Classification

no code implementations NeurIPS 2020 Wei Gao, Zhi-Hua Zhou

We get a convergence rate of $O(n^{-1/(d+2)}(\ln n)^{1/(d+2)})$ for the variant of random forests, which reaches the minimax rate, except for a factor of $(\ln n)^{1/(d+2)}$, of the optimal plug-in classifier under the $L$-Lipschitz assumption.

Classification General Classification

Isolation Distributional Kernel: A New Tool for Point & Group Anomaly Detection

1 code implementation 24 Sep 2020 Kai Ming Ting, Bi-Cun Xu, Takashi Washio, Zhi-Hua Zhou

Existing approaches based on kernel mean embedding, which convert a point kernel to a distributional kernel, have two key issues: the point kernel employed has a feature map with intractable dimensionality; and it is data independent.

Group Anomaly Detection

Storage Fit Learning with Feature Evolvable Streams

no code implementations 22 Jul 2020 Bo-Jian Hou, Yu-Hu Yan, Peng Zhao, Zhi-Hua Zhou

Our framework is able to fit its behavior to different storage budgets when learning with feature evolvable streams with unlabeled data.

Dynamic Regret of Convex and Smooth Functions

no code implementations NeurIPS 2020 Peng Zhao, Yu-Jie Zhang, Lijun Zhang, Zhi-Hua Zhou

We investigate online convex optimization in non-stationary environments and choose the dynamic regret as the performance measure, defined as the difference between cumulative loss incurred by the online algorithm and that of any feasible comparator sequence.
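In standard notation (assumed here, not quoted from the paper), the dynamic regret described above reads:

```latex
\text{D-Regret}_T(u_1,\dots,u_T) \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} f_t(u_t),
```

where $x_t$ is the algorithm's decision at round $t$, $f_t$ the loss function revealed at that round, and $u_1,\dots,u_T$ any feasible comparator sequence.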

Soft Gradient Boosting Machine

no code implementations 7 Jun 2020 Ji Feng, Yi-Xuan Xu, Yuan Jiang, Zhi-Hua Zhou

Gradient Boosting Machine has proven to be a successful function approximator and has been widely used in a variety of areas.

Incremental Learning

Flexible Transmitter Network

no code implementations 8 Apr 2020 Shao-Qun Zhang, Zhi-Hua Zhou

To exhibit its power and potential, we present the Flexible Transmitter Network (FTNet), which is built on the most common fully-connected feed-forward architecture taking the FT model as the basic building block.

Time Series Analysis

AliExpress Learning-To-Rank: Maximizing Online Model Performance without Going Online

no code implementations 25 Mar 2020 Guangda Huzhang, Zhen-Jia Pang, Yongqing Gao, Yawen Liu, Weijie Shen, Wen-Ji Zhou, Qing Da, An-Xiang Zeng, Han Yu, Yang Yu, Zhi-Hua Zhou

The framework consists of an evaluator that generalizes to evaluate recommendations involving the context, a generator that maximizes the evaluator score by reinforcement learning, and a discriminator that ensures the generalization of the evaluator.


Exploratory Machine Learning with Unknown Unknowns

no code implementations 5 Feb 2020 Yu-Jie Zhang, Peng Zhao, Zhi-Hua Zhou

In conventional supervised learning, a training dataset is given with ground-truth labels from a known label set, and the learned model will classify unseen instances to the known labels.

BIG-bench Machine Learning

Model Reuse with Reduced Kernel Mean Embedding Specification

no code implementations 20 Jan 2020 Xi-Zhu Wu, Wenkai Xu, Song Liu, Zhi-Hua Zhou

Given a publicly available pool of machine learning models constructed for various tasks, when a user plans to build a model for her own machine learning application, is it possible to build upon models in the pool such that the previous efforts on these existing models can be reused rather than starting from scratch?

BIG-bench Machine Learning

Bridging Machine Learning and Logical Reasoning by Abductive Learning

1 code implementation NeurIPS 2019 Wang-Zhou Dai, Qiu-Ling Xu, Yang Yu, Zhi-Hua Zhou

In the area of artificial intelligence (AI), the two abilities are usually realised by machine learning and logic programming, respectively.

BIG-bench Machine Learning Logical Reasoning

A Refined Margin Distribution Analysis for Forest Representation Learning

no code implementations NeurIPS 2019 Shen-Huan Lyu, Liang Yang, Zhi-Hua Zhou

In this paper, we formulate the forest representation learning approach called CasDF as an additive model which boosts the augmented feature instead of the prediction.

Representation Learning

Improving deep forest by confidence screening

no code implementations the 18th IEEE International Conference on Data Mining 2019 Ming Pang, Kai-Ming Ting, Peng Zhao, Zhi-Hua Zhou

Most studies about deep learning are based on neural network models, where many layers of parameterized nonlinear differentiable modules are trained by back propagation.

Representation Learning

Multi-Label Learning with Deep Forest

no code implementations 15 Nov 2019 Liang Yang, Xi-Zhu Wu, Yuan Jiang, Zhi-Hua Zhou

In multi-label learning, each instance is associated with multiple labels and the crucial task is how to leverage label correlations in building models.

Multi-Label Learning

An Unbiased Risk Estimator for Learning with Augmented Classes

no code implementations NeurIPS 2020 Yu-Jie Zhang, Peng Zhao, Zhi-Hua Zhou

This paper studies the problem of learning with augmented classes (LAC), where augmented classes unobserved in the training data might emerge in the testing phase.

Bifurcation Spiking Neural Network

no code implementations 18 Sep 2019 Shao-Qun Zhang, Zhao-Yu Zhang, Zhi-Hua Zhou

Inspired by this insight, by enabling the spike generation function to have adaptable eigenvalues rather than parametric control rates, we develop the Bifurcation Spiking Neural Network (BSNN), which has an adaptive firing rate and is insensitive to the setting of control rates.

Time Series Time Series Analysis

Imitation Learning from Pixel-Level Demonstrations by HashReward

no code implementations 9 Sep 2019 Xin-Qiang Cai, Yao-Xiang Ding, Yuan Jiang, Zhi-Hua Zhou

One of the key issues for imitation learning lies in making the policy learned from limited samples generalize well in the whole state-action space.

Dimensionality Reduction Imitation Learning

Bandit Convex Optimization in Non-stationary Environments

no code implementations 29 Jul 2019 Peng Zhao, Guanghui Wang, Lijun Zhang, Zhi-Hua Zhou

In this paper, we investigate BCO in non-stationary environments and choose the dynamic regret as the performance measure, which is defined as the difference between the cumulative loss incurred by the algorithm and that of any feasible comparator sequence.

Decision Making

Dual Adaptivity: A Universal Algorithm for Minimizing the Adaptive Regret of Convex Functions

no code implementations NeurIPS 2021 Lijun Zhang, Guanghui Wang, Wei-Wei Tu, Zhi-Hua Zhou

Along this line of research, this paper presents the first universal algorithm for minimizing the adaptive regret of convex functions.

Evaluating the Robustness of Nearest Neighbor Classifiers: A Primal-Dual Perspective

1 code implementation 10 Jun 2019 Lu Wang, Xuanqing Liu, Jin-Feng Yi, Zhi-Hua Zhou, Cho-Jui Hsieh

Furthermore, we show that dual solutions for these QP problems could give us a valid lower bound of the adversarial perturbation that can be used for formal robustness verification, giving us a nice view of attack/verification for NN models.


Joint Semantic Domain Alignment and Target Classifier Learning for Unsupervised Domain Adaptation

no code implementations 10 Jun 2019 Dong-Dong Chen, Yisen Wang, Jin-Feng Yi, Zaiyi Chen, Zhi-Hua Zhou

Unsupervised domain adaptation aims to transfer the classifier learned from the source domain to the target domain in an unsupervised manner.

Unsupervised Domain Adaptation

Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder

1 code implementation NeurIPS 2019 Ji Feng, Qi-Zhi Cai, Zhi-Hua Zhou

In this work, we consider a challenging training-time attack that modifies training data with bounded perturbation, hoping to manipulate the behavior (targeted or non-targeted) of any corresponding trained classifier during test time when facing clean samples.


Forest Representation Learning Guided by Margin Distribution

no code implementations 7 May 2019 Shen-Huan Lv, Liang Yang, Zhi-Hua Zhou

In this paper, we reformulate the forest representation learning approach as an additive model which boosts the augmented feature instead of the prediction.

Representation Learning

Optimal Margin Distribution Network

no code implementations ICLR 2019 Shen-Huan Lv, Lu Wang, Zhi-Hua Zhou

Recent research about margin theory has proved that maximizing the minimum margin like support vector machines does not necessarily lead to better performance, and instead, it is crucial to optimize the margin distribution.

Prediction with Unpredictable Feature Evolution

no code implementations 27 Apr 2019 Bo-Jian Hou, Lijun Zhang, Zhi-Hua Zhou

Learning with feature evolution studies the scenario where the features of the data streams can evolve, i.e., old features vanish and new features emerge.

Matrix Completion

Adaptive Regret of Convex and Smooth Functions

no code implementations 26 Apr 2019 Lijun Zhang, Tie-Yan Liu, Zhi-Hua Zhou

We investigate online convex optimization in changing environments, and choose the adaptive regret as the performance measure.

Reliable Weakly Supervised Learning: Maximize Gain and Maintain Safeness

no code implementations 22 Apr 2019 Lan-Zhe Guo, Yu-Feng Li, Ming Li, Jin-Feng Yi, Bo-Wen Zhou, Zhi-Hua Zhou

We guide the optimization of label quality through a small amount of validation data, ensuring the safeness of performance while maximizing the performance gain.

Weakly-supervised Learning

Stochastic Approximation of Smooth and Strongly Convex Functions: Beyond the $O(1/T)$ Convergence Rate

no code implementations 27 Jan 2019 Lijun Zhang, Zhi-Hua Zhou

Finally, we emphasize that our proof is constructive and each risk bound is equipped with an efficient stochastic algorithm attaining that bound.

Improving Generalization of Deep Neural Networks by Leveraging Margin Distribution

no code implementations ICLR 2019 Shen-Huan Lyu, Lu Wang, Zhi-Hua Zhou

We utilize a convex margin distribution loss function on the deep neural networks to validate our theoretical results by optimizing the margin ratio.

Representation Learning


Preference Based Adaptation for Learning Objectives

no code implementations NeurIPS 2018 Yao-Xiang Ding, Zhi-Hua Zhou

In many real-world learning tasks, it is hard to directly optimize the true performance measures, meanwhile choosing the right surrogate objectives is also difficult.

Multi-Label Learning

Learning with Interpretable Structure from Gated RNN

no code implementations 25 Oct 2018 Bo-Jian Hou, Zhi-Hua Zhou

With the learned FSA and via experiments on artificial and real datasets, we find that the FSA is more trustworthy than the RNN from which it was learned, which gives FSA a chance to substitute for RNNs in applications involving human lives or dangerous facilities.

Clustering text-classification +1

Adaptive Online Learning in Dynamic Environments

no code implementations NeurIPS 2018 Lijun Zhang, Shiyin Lu, Zhi-Hua Zhou

In this paper, we study online convex optimization in dynamic environments, and aim to bound the dynamic regret with respect to any sequence of comparators.

Handling Concept Drift via Model Reuse

no code implementations 8 Sep 2018 Peng Zhao, Le-Wen Cai, Zhi-Hua Zhou

In many real-world applications, data are often collected in the form of a stream, and thus the distribution usually changes over time, which is referred to as concept drift in the literature.

Rectify Heterogeneous Models with Semantic Mapping

no code implementations ICML 2018 Han-Jia Ye, De-Chuan Zhan, Yuan Jiang, Zhi-Hua Zhou

On the way to the robust learner for real-world applications, there are still great challenges, including considering unknown environments with limited data.

Multi-Layered Gradient Boosting Decision Trees

1 code implementation NeurIPS 2018 Ji Feng, Yang Yu, Zhi-Hua Zhou

Multi-layered representation is believed to be the key ingredient of deep neural networks especially in cognitive tasks like computer vision.

Representation Learning

Matrix Co-completion for Multi-label Classification with Missing Features and Labels

no code implementations 23 May 2018 Miao Xu, Gang Niu, Bo Han, Ivor W. Tsang, Zhi-Hua Zhou, Masashi Sugiyama

We consider a challenging multi-label classification problem where both the feature matrix $X$ and the label matrix $Y$ have missing entries.

General Classification Matrix Completion +1

$\ell_1$-regression with Heavy-tailed Distributions

no code implementations NeurIPS 2018 Lijun Zhang, Zhi-Hua Zhou

In this paper, we consider the problem of linear regression with heavy-tailed distributions.


Tunneling Neural Perception and Logic Reasoning through Abductive Learning

1 code implementation 4 Feb 2018 Wang-Zhou Dai, Qiu-Ling Xu, Yang Yu, Zhi-Hua Zhou

Perception and reasoning are basic human abilities that are seamlessly connected as part of human intelligence.

Subset Selection under Noise

no code implementations NeurIPS 2017 Chao Qian, Jing-Cheng Shi, Yang Yu, Ke Tang, Zhi-Hua Zhou

The problem of selecting the best $k$-element subset from a universe is involved in many applications.

Maximizing Submodular or Monotone Approximately Submodular Functions by Multi-objective Evolutionary Algorithms

no code implementations 20 Nov 2017 Chao Qian, Yang Yu, Ke Tang, Xin Yao, Zhi-Hua Zhou

To provide a general theoretical explanation of the behavior of EAs, it is desirable to study their performance on general classes of combinatorial optimization problems.

Combinatorial Optimization Evolutionary Algorithms

AutoEncoder by Forest

2 code implementations 26 Sep 2017 Ji Feng, Zhi-Hua Zhou

Auto-encoding is an important task which is typically realized by deep neural networks (DNNs) such as convolutional neural networks (CNNs).

Theoretical Foundation of Co-Training and Disagreement-Based Algorithms

no code implementations 15 Aug 2017 Wei Wang, Zhi-Hua Zhou

Disagreement-based approaches generate multiple classifiers and exploit the disagreement among them with unlabeled data to improve learning performance.

Multi-Class Optimal Margin Distribution Machine

no code implementations ICML 2017 Teng Zhang, Zhi-Hua Zhou

It still remains open for multi-class classification; due to the complexity of the margin in the multi-class setting, optimizing its distribution by mean and variance can also be difficult.

Binary Classification Classification +2

Learning with Feature Evolvable Streams

no code implementations NeurIPS 2017 Bo-Jian Hou, Lijun Zhang, Zhi-Hua Zhou

To benefit from the recovered features, we develop two ensemble methods.

Distribution-Free One-Pass Learning

no code implementations 8 Jun 2017 Peng Zhao, Zhi-Hua Zhou

Moreover, as the whole data volume is unknown when constructing the model, it is desirable to scan each data item only once, with storage independent of the data volume.

Deep Descriptor Transforming for Image Co-Localization

no code implementations 8 May 2017 Xiu-Shen Wei, Chen-Lin Zhang, Yao Li, Chen-Wei Xie, Jianxin Wu, Chunhua Shen, Zhi-Hua Zhou

Reusable model design becomes desirable with the rapid expansion of machine learning applications.

Effects of the optimisation of the margin distribution on generalisation in deep architectures

no code implementations 19 Apr 2017 Lech Szymanski, Brendan McCane, Wei Gao, Zhi-Hua Zhou

Despite being so vital to the success of Support Vector Machines, the principle of separating-margin maximisation is not used in deep learning.

Multi-Label Learning with Global and Local Label Correlation

no code implementations 4 Apr 2017 Yue Zhu, James T. Kwok, Zhi-Hua Zhou

In fact, in real-world applications, both cases may occur: some label correlations are globally applicable while others are shared only within a local group of instances.

Multi-Label Learning

Deep Forest

19 code implementations 28 Feb 2017 Zhi-Hua Zhou, Ji Feng

This study opens the door of deep learning based on non-differentiable modules, and exhibits the possibility of constructing deep models without using backpropagation.

Learning to Generate Posters of Scientific Papers by Probabilistic Graphical Models

no code implementations 21 Feb 2017 Yu-ting Qiang, Yanwei Fu, Xiao Yu, Yanwen Guo, Zhi-Hua Zhou, Leonid Sigal

In order to bridge the gap between panel attributes and the composition within each panel, we also propose a recursive page splitting algorithm to generate the panel layout for a poster.

Dynamic Regret of Strongly Adaptive Methods

no code implementations ICML 2018 Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou

To cope with changing environments, recent developments in online learning have introduced the concepts of adaptive regret and dynamic regret independently.

What Makes Objects Similar: A Unified Multi-Metric Learning Approach

no code implementations NeurIPS 2016 Han-Jia Ye, De-Chuan Zhan, Xue-Min Si, Yuan Jiang, Zhi-Hua Zhou

In UM2L, a type of combination operator is introduced to characterize distance from multiple perspectives, thus introducing flexibility for representing and utilizing both spatial and semantic linkages.

Metric Learning

Unorganized Malicious Attacks Detection

no code implementations NeurIPS 2018 Ming Pang, Wei Gao, Min Tao, Zhi-Hua Zhou

This work considers a different attack style: unorganized malicious attacks, where attackers individually utilize a small number of user profiles to attack different items without any organizer.

Matrix Completion Recommendation Systems

Crowdsourcing with Unsure Option

no code implementations 1 Sep 2016 Yao-Xiang Ding, Zhi-Hua Zhou

One of the fundamental problems in crowdsourcing is the trade-off between the number of workers needed for high-accuracy aggregation and the budget to pay.

A Unified View of Multi-Label Performance Measures

no code implementations ICML 2017 Xi-Zhu Wu, Zhi-Hua Zhou

Multi-label classification deals with the problem where each instance is associated with multiple class labels.

Classification General Classification +1

Efficient Training for Positive Unlabeled Learning

1 code implementation 24 Aug 2016 Emanuele Sansone, Francesco G. B. De Natale, Zhi-Hua Zhou

Positive unlabeled (PU) learning is useful in various practical situations, where there is a need to learn a classifier for a class of interest from an unlabeled data set, which may contain anomalies as well as samples from unknown classes.

Learning Theory

Improved Dynamic Regret for Non-degenerate Functions

no code implementations NeurIPS 2017 Lijun Zhang, Tianbao Yang, Jin-Feng Yi, Rong Jin, Zhi-Hua Zhou

When multiple gradients are accessible to the learner, we first demonstrate that the dynamic regret of strongly convex functions can be upper bounded by the minimum of the path-length and the squared path-length.
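In the usual notation (assumed here), with path-length $P_T$ and squared path-length $S_T$ of the comparator sequence $u_1,\dots,u_T$, the bound described above reads:

```latex
\sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(u_t) \;\le\; O\!\left(\min\{P_T,\, S_T\}\right),
\qquad
P_T = \sum_{t=2}^{T}\|u_t - u_{t-1}\|_2, \quad
S_T = \sum_{t=2}^{T}\|u_t - u_{t-1}\|_2^2 .
```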

On the Resistance of Nearest Neighbor to Random Noisy Labels

no code implementations 26 Jul 2016 Wei Gao, Bin-Bin Yang, Zhi-Hua Zhou

The theoretical results show that, for asymmetric noises, k-nearest neighbor is robust enough to classify most data correctly, except for a handful of examples, whose labels are totally misled by random noises.

A Lower Bound Analysis of Population-based Evolutionary Algorithms for Pseudo-Boolean Functions

no code implementations 10 Jun 2016 Chao Qian, Yang Yu, Zhi-Hua Zhou

Our results imply that the increase of population size, while usually desired in practice, bears the risk of increasing the lower bound of the running time and thus should be carefully considered.

Evolutionary Algorithms

Classification under Streaming Emerging New Classes: A Solution using Completely Random Trees

no code implementations 30 May 2016 Xin Mu, Kai Ming Ting, Zhi-Hua Zhou

This is the first time, as far as we know, that completely random trees are used as a single common core to solve all three subproblems: unsupervised learning, supervised learning, and model update in data streams.

Classification General Classification

One-Pass Learning with Incremental and Decremental Features

no code implementations 30 May 2016 Chenping Hou, Zhi-Hua Zhou

In many real tasks the features are evolving, with some features vanishing and some other features being augmented.

Selective Convolutional Descriptor Aggregation for Fine-Grained Image Retrieval

1 code implementation 18 Apr 2016 Xiu-Shen Wei, Jian-Hao Luo, Jianxin Wu, Zhi-Hua Zhou

Moreover, on general image retrieval datasets, SCDA achieves comparable retrieval results with state-of-the-art general image retrieval approaches.

Image Retrieval Object Proposal Generation +1

Optimal Margin Distribution Machine

no code implementations 12 Apr 2016 Teng Zhang, Zhi-Hua Zhou

Support vector machine (SVM) has been one of the most popular learning algorithms, with the central idea of maximizing the minimum margin, i.e., the smallest distance from the instances to the classification boundary.

Learning to Generate Posters of Scientific Papers

no code implementations 5 Apr 2016 Yu-ting Qiang, Yanwei Fu, Yanwen Guo, Zhi-Hua Zhou, Leonid Sigal

Then, given inferred layout and attributes, composition of graphical elements within each panel is synthesized.

Minimal Gated Unit for Recurrent Neural Networks

no code implementations 31 Mar 2016 Guo-Bing Zhou, Jianxin Wu, Chen-Lin Zhang, Zhi-Hua Zhou

Recently, recurrent neural networks (RNNs) have been very successful in handling sequence data.
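For orientation, the single-gate idea behind MGU (one forget gate doing the work of GRU's two gates) can be sketched in NumPy; the equations follow the common MGU formulation, and the weight initialization and names are illustrative, not the paper's:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class MGUCell:
    """Minimal Gated Unit: a GRU-like recurrent cell with a single
    forget gate shared between gating and the candidate state."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(n_hidden)
        self.Wf = rng.uniform(-s, s, (n_hidden, n_in))      # input -> gate
        self.Uf = rng.uniform(-s, s, (n_hidden, n_hidden))  # hidden -> gate
        self.bf = np.zeros(n_hidden)
        self.Wh = rng.uniform(-s, s, (n_hidden, n_in))      # input -> candidate
        self.Uh = rng.uniform(-s, s, (n_hidden, n_hidden))  # gated hidden -> candidate
        self.bh = np.zeros(n_hidden)

    def step(self, x, h):
        f = sigmoid(self.Wf @ x + self.Uf @ h + self.bf)              # the single gate
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (f * h) + self.bh)  # candidate state
        return (1.0 - f) * h + f * h_tilde                            # gated update

cell = MGUCell(n_in=3, n_hidden=4)
h = np.zeros(4)
for x in np.random.default_rng(1).standard_normal((5, 3)):  # a toy length-5 sequence
    h = cell.step(x, h)
```

Because each update is a convex combination of the previous state and a tanh candidate, the hidden state stays bounded in $(-1, 1)$.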

Subset Selection by Pareto Optimization

no code implementations NeurIPS 2015 Chao Qian, Yang Yu, Zhi-Hua Zhou

Selecting the optimal subset from a large set of variables is a fundamental problem in various learning tasks such as feature selection, sparse regression, dictionary learning, etc.

Dictionary Learning feature selection +1

Sparse Learning for Large-scale and High-dimensional Data: A Randomized Convex-concave Optimization Approach

no code implementations 12 Nov 2015 Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou

In this paper, we develop a randomized algorithm and theory for learning a sparse model from large-scale and high-dimensional data, which is usually formulated as an empirical risk minimization problem with a sparsity-inducing regularizer.

Sparse Learning

Stochastic Proximal Gradient Descent for Nuclear Norm Regularization

no code implementations 5 Nov 2015 Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou

In this paper, we utilize stochastic optimization to reduce the space complexity of convex composite optimization with a nuclear norm regularizer, where the variable is a matrix of size $m \times n$.
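The building block of such proximal methods is the proximal operator of the nuclear norm, which soft-thresholds the singular values. A minimal NumPy sketch of that step (not the paper's full stochastic algorithm; names are illustrative):

```python
import numpy as np

def prox_nuclear(A, lam):
    """Proximal operator of lam * ||.||_* : shrink each singular
    value of A by lam, flooring at zero (singular value thresholding)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)   # soft-thresholding
    return (U * s_shrunk) @ Vt            # scale columns of U, recombine

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))
B = prox_nuclear(A, lam=0.5)
# B has the same singular vectors as A, with singular values shrunk by 0.5
```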

Stochastic Optimization

Transductive Optimization of Top k Precision

no code implementations 20 Oct 2015 Li-Ping Liu, Thomas G. Dietterich, Nan Li, Zhi-Hua Zhou

This paper introduces a new approach, Transductive Top K (TTK), that seeks to minimize the hinge loss over all training instances under the constraint that exactly $k$ test instances are predicted as positive.

Binary Classification Information Retrieval +2
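The transductive constraint above, that exactly $k$ test instances are predicted positive, amounts to thresholding decision scores at the $k$-th largest value. A minimal sketch of only this prediction rule, assuming scores come from some already-trained scoring model (the paper itself optimizes the hinge loss jointly with the constraint):

```python
def predict_exactly_k(scores, k):
    """Predict +1 for exactly the k highest-scoring test instances."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    positive = set(order[:k])
    return [1 if i in positive else -1 for i in range(len(scores))]

labels = predict_exactly_k([0.2, 0.9, 0.5, 0.1], k=2)  # -> [-1, 1, 1, -1]
```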

Online Stochastic Linear Optimization under One-bit Feedback

no code implementations25 Sep 2015 Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou

In this paper, we study a special bandit setting of online stochastic linear optimization, where only one-bit of information is revealed to the learner at each round.

Multi-Label Active Learning from Crowds

no code implementations4 Aug 2015 Shao-Yuan Li, Yuan Jiang, Zhi-Hua Zhou

Multi-label active learning is a hot topic in reducing the label cost by optimally choosing the most valuable instance to query its label from an oracle.

Active Learning

Analysis of Nuclear Norm Regularization for Full-rank Matrix Completion

no code implementations26 Apr 2015 Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou

To the best of our knowledge, this is the first time such a relative bound has been proved for the regularized formulation of matrix completion.

Low-Rank Matrix Completion

Scalable Stochastic Alternating Direction Method of Multipliers

no code implementations12 Feb 2015 Shen-Yi Zhao, Wu-Jun Li, Zhi-Hua Zhou

There exists only one stochastic method, called SA-ADMM, which can achieve convergence rate $O(1/T)$ on general convex problems.

CUR Algorithm for Partially Observed Matrices

no code implementations4 Nov 2014 Miao Xu, Rong Jin, Zhi-Hua Zhou

In particular, the proposed algorithm computes the low rank approximation of the target matrix based on (i) the randomly sampled rows and columns, and (ii) a subset of observed entries that are randomly sampled from the matrix.

Matrix Completion

Top Rank Optimization in Linear Time

no code implementations NeurIPS 2014 Nan Li, Rong Jin, Zhi-Hua Zhou

Recent efforts in bipartite ranking have focused on optimizing ranking accuracy at the top of the ranked list.

Inductive Logic Boosting

no code implementations25 Feb 2014 Wang-Zhou Dai, Zhi-Hua Zhou

Structure learning of these systems is an intersection area of Inductive Logic Programming (ILP) and statistical learning (SL).

Inductive logic programming Relational Reasoning

Dropout Rademacher Complexity of Deep Neural Networks

no code implementations16 Feb 2014 Wei Gao, Zhi-Hua Zhou

Great successes of deep neural networks have been witnessed in various real applications.

Speedup Matrix Completion with Side Information: Application to Multi-Label Learning

no code implementations NeurIPS 2013 Miao Xu, Rong Jin, Zhi-Hua Zhou

In standard matrix completion theory, it is required to have at least $O(n\ln^2 n)$ observed entries to perfectly recover a low-rank matrix $M$ of size $n\times n$, leading to a large number of observations when $n$ is large.

Matrix Completion Multi-Label Learning

Analyzing Evolutionary Optimization in Noisy Environments

no code implementations20 Nov 2013 Chao Qian, Yang Yu, Zhi-Hua Zhou

On a representative problem where the noise has a strong negative effect, we examine two commonly employed mechanisms in EAs dealing with noise, the re-evaluation and the threshold selection strategies.

Evolutionary Algorithms

Large Margin Distribution Machine

no code implementations5 Nov 2013 Teng Zhang, Zhi-Hua Zhou

In this paper, we propose the Large margin Distribution Machine (LDM), which tries to achieve a better generalization performance by optimizing the margin distribution.

Fast Multi-Instance Multi-Label Learning

no code implementations8 Oct 2013 Sheng-Jun Huang, Zhi-Hua Zhou

Although the MIML problem is complicated, MIMLfast is able to achieve excellent performance by exploiting label relations with shared space and discovering sub-concepts for complicated labels.

Multi-Label Learning

One-Pass AUC Optimization

no code implementations7 May 2013 Wei Gao, Rong Jin, Shenghuo Zhu, Zhi-Hua Zhou

AUC is an important performance measure and many algorithms have been devoted to AUC optimization, mostly by minimizing a surrogate convex loss on a training data set.

Convex and Scalable Weakly Labeled SVMs

no code implementations6 Mar 2013 Yu-Feng Li, Ivor W. Tsang, James T. Kwok, Zhi-Hua Zhou

In this paper, we study the problem of learning from weakly labeled data, where labels of the training examples are incomplete.

Clustering Information Retrieval +1

Nyström Method vs Random Fourier Features: A Theoretical and Empirical Comparison

no code implementations NeurIPS 2012 Tianbao Yang, Yu-Feng Li, Mehrdad Mahdavi, Rong Jin, Zhi-Hua Zhou

Both random Fourier features and the Nyström method have been successfully applied to efficient kernel learning.

On the Consistency of AUC Pairwise Optimization

no code implementations3 Aug 2012 Wei Gao, Zhi-Hua Zhou

Based on this result, we prove that exponential loss and logistic loss are consistent with AUC, but hinge loss is inconsistent.

Active Learning by Querying Informative and Representative Examples

no code implementations NeurIPS 2010 Sheng-Jun Huang, Rong Jin, Zhi-Hua Zhou

Most active learning approaches select either informative or representative unlabeled instances to query their labels.

Active Learning Informativeness

Multi-View Active Learning in the Non-Realizable Case

no code implementations NeurIPS 2010 Wei Wang, Zhi-Hua Zhou

The sample complexity of active learning under the realizability assumption has been well-studied.

Active Learning

On the Doubt about Margin Explanation of Boosting

no code implementations19 Sep 2010 Wei Gao, Zhi-Hua Zhou

Margin theory provides one of the most popular explanations to the success of \texttt{AdaBoost}, where the central point lies in the recognition that \textit{margin} is the key for characterizing the performance of \texttt{AdaBoost}.

Isolation forest

no code implementations15 Dec 2008 Fei Tony Liu, Kai Ming Ting, Zhi-Hua Zhou

Most existing model-based approaches to anomaly detection construct a profile of normal instances, then identify instances that do not conform to the normal profile as anomalies.

Anomaly Detection Unsupervised Anomaly Detection with Specified Settings -- 0.1% anomaly +4
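The isolation idea inverts that profile-based view: random axis-aligned splits isolate anomalies near the root of a tree, so a shorter expected path length yields a higher anomaly score. A minimal pure-Python sketch following the usual presentation of the method (no subsampling; constants and helper names are illustrative, not the authors' code):

```python
import math
import random

EULER = 0.5772156649

def c(n):
    """Average unsuccessful-search path length in a BST of n nodes."""
    return 2 * (math.log(n - 1) + EULER) - 2 * (n - 1) / n if n > 1 else 0.0

def build_tree(X, rng, depth, max_depth):
    if depth >= max_depth or len(X) <= 1:
        return ('leaf', len(X))
    q = rng.randrange(len(X[0]))                      # random feature
    lo, hi = min(x[q] for x in X), max(x[q] for x in X)
    if lo == hi:
        return ('leaf', len(X))
    p = rng.uniform(lo, hi)                           # random split point
    left = [x for x in X if x[q] < p]
    right = [x for x in X if x[q] >= p]
    return ('node', q, p,
            build_tree(left, rng, depth + 1, max_depth),
            build_tree(right, rng, depth + 1, max_depth))

def path_length(tree, x, depth=0):
    if tree[0] == 'leaf':
        return depth + c(tree[1])                     # adjust for unsplit leaves
    _, q, p, left, right = tree
    return path_length(left if x[q] < p else right, x, depth + 1)

def isolation_forest(X, n_trees=50, seed=0):
    rng = random.Random(seed)
    max_depth = math.ceil(math.log2(len(X)))
    return [build_tree(X, rng, 0, max_depth) for _ in range(n_trees)]

def anomaly_score(forest, x, n):
    e = sum(path_length(t, x) for t in forest) / len(forest)
    return 2 ** (-e / c(n))                           # nearer 1 = more anomalous

# a tight cluster plus one far-away point
rng = random.Random(1)
X = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(128)] + [[10.0, 10.0]]
forest = isolation_forest(X)
score_out = anomaly_score(forest, [10.0, 10.0], len(X))
score_in = anomaly_score(forest, [0.0, 0.0], len(X))
```

The outlier is separated by the first few random cuts with high probability, so its average path length, and hence its score, separates cleanly from the cluster's.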

Semi-Supervised Regression with Co-Training

2 code implementations1 Jun 2005 Zhi-Hua Zhou, Ming Li

In many practical machine learning and data mining applications, unlabeled training examples are readily available but labeled ones are fairly expensive to obtain.

BIG-bench Machine Learning regression
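The COREG-style scheme can be sketched with two kNN regressors that differ only in their distance metric: each one pseudo-labels the unlabeled point on which it is most confident and hands it to its teammate. The confidence proxy below (low label variance among the nearest neighbors) is a simplification of the paper's error-reduction criterion, and all names are illustrative.

```python
def knn_predict(X, y, x, k, dist):
    idx = sorted(range(len(X)), key=lambda i: dist(X[i], x))[:k]
    return sum(y[i] for i in idx) / len(idx)

d_euc = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))   # squared Euclidean
d_man = lambda a, b: sum(abs(u - v) for u, v in zip(a, b))     # Manhattan

def coreg_sketch(Xl, yl, Xu, rounds=5, k=3):
    sets = [(list(Xl), list(yl)), (list(Xl), list(yl))]
    dists = [d_euc, d_man]
    pool = [list(x) for x in Xu]
    for _ in range(rounds):
        for j in (0, 1):
            if not pool:
                break
            Xj, yj, dj = sets[j][0], sets[j][1], dists[j]
            def label_var(x):   # low neighbor-label variance ~ high confidence
                idx = sorted(range(len(Xj)), key=lambda i: dj(Xj[i], x))[:k]
                m = sum(yj[i] for i in idx) / len(idx)
                return sum((yj[i] - m) ** 2 for i in idx)
            x = min(pool, key=label_var)
            pool.remove(x)
            # pseudo-label with model j, then grow the teammate's labeled set
            sets[1 - j][0].append(x)
            sets[1 - j][1].append(knn_predict(Xj, yj, x, k, dj))
    return lambda x: 0.5 * (knn_predict(*sets[0], x, k, dists[0])
                            + knn_predict(*sets[1], x, k, dists[1]))

# toy 1-d regression with a small labeled set and an unlabeled pool
Xl = [[0.0], [1.0], [2.0], [3.0]]
yl = [0.0, 1.0, 2.0, 3.0]
model = coreg_sketch(Xl, yl, Xu=[[0.5], [1.5], [2.5]])
```

Using two different metrics keeps the regressors diverse, which is what makes exchanging pseudo-labels informative rather than self-reinforcing.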
