no code implementations • ICML 2020 • Lan-Zhe Guo, Zhen-Yu Zhang, Yuan Jiang, Yufeng Li, Zhi-Hua Zhou
Deep semi-supervised learning (SSL) has been shown to be very effective.
no code implementations • ICML 2020 • Zhen-Yu Zhang, Peng Zhao, Yuan Jiang, Zhi-Hua Zhou
Besides the evolving feature space, it is noteworthy that the data distribution often changes in streaming data.
no code implementations • ICML 2020 • Tian-Zuo Wang, Xi-Zhu Wu, Sheng-Jun Huang, Zhi-Hua Zhou
In many real tasks, we care about how to make decisions rather than mere predictions on an event, e.g., how to increase the revenue next month instead of knowing it will drop.
no code implementations • 11 Dec 2024 • Wen-Chao Hu, Wang-Zhou Dai, Yuan Jiang, Zhi-Hua Zhou
Neuro-Symbolic (NeSy) AI could be regarded as an analogy to human dual-process cognition, modeling the intuitive System 1 with neural networks and the algorithmic System 2 with symbolic reasoning.
no code implementations • 8 Nov 2024 • Zhilong Zhang, Ruifeng Chen, Junyin Ye, Yihao Sun, Pengyuan Wang, JingCheng Pang, Kaiyuan Li, Tianshuo Liu, Haoxin Lin, Yang Yu, Zhi-Hua Zhou
Incorporating these two techniques, we present Whale-ST, a scalable spatial-temporal transformer-based world model with enhanced generalizability.
no code implementations • 5 Nov 2024 • Long-Fei Li, Peng Zhao, Zhi-Hua Zhou
Building on this, we propose a novel algorithm that combines the benefits of both methods.
no code implementations • 17 Aug 2024 • Yan-Feng Xie, Peng Zhao, Zhi-Hua Zhou
Recent efforts in neural network optimization suggest a generalized smoothness condition, allowing smoothness to correlate with gradient norms.
no code implementations • 21 Jul 2024 • Tian-Zuo Wang, Lue Tao, Zhi-Hua Zhou
Identifying causal relations is crucial for a variety of downstream tasks.
no code implementations • 27 May 2024 • Long-Fei Li, Yu-Jie Zhang, Peng Zhao, Zhi-Hua Zhou
The best-known result of Hwang and Oh [2023] has achieved an $\widetilde{\mathcal{O}}(\kappa^{-1}dH^2\sqrt{K})$ regret upper bound, where $\kappa$ is a problem-dependent quantity, $d$ is the feature dimension, $H$ is the episode length, and $K$ is the number of episodes.
no code implementations • 7 Mar 2024 • Long-Fei Li, Peng Zhao, Zhi-Hua Zhou
We study reinforcement learning with linear function approximation, unknown transition, and adversarial losses in the bandit feedback setting.
1 code implementation • 24 Jan 2024 • Zhi-Hao Tan, Jian-Dong Liu, Xiao-Dong Bi, Peng Tan, Qin-Cheng Zheng, Hai-Tian Liu, Yi Xie, Xiao-Chuan Zou, Yang Yu, Zhi-Hua Zhou
The learnware paradigm proposed by Zhou [2016] aims to enable users to reuse numerous existing well-trained models instead of building machine learning models from scratch, with the hope of solving new user tasks even beyond models' original purposes.
no code implementations • 16 Sep 2023 • Peng Zhao, Yan-Feng Xie, Lijun Zhang, Zhi-Hua Zhou
In this paper, we present efficient methods for optimizing dynamic regret and adaptive regret, which reduce the number of projections per round from $\mathcal{O}(\log T)$ to $1$.
no code implementations • NeurIPS 2023 • Yu-Hu Yan, Peng Zhao, Zhi-Hua Zhou
Our approach is based on a multi-layer online ensemble framework incorporating novel ingredients, including a carefully designed optimism for unifying diverse function types and cascaded corrections for algorithmic stability.
no code implementations • 23 May 2023 • Zheng Xie, Yu Liu, Hao-Yuan He, Ming Li, Zhi-Hua Zhou
Since acquiring perfect supervision is usually difficult, real-world machine learning tasks often confront inaccurate, incomplete, or inexact supervision, collectively referred to as weak supervision.
no code implementations • 3 May 2023 • Zhi-Hua Zhou
Conventional theoretical machine learning studies generally assume, explicitly or implicitly, that there are sufficient or even unlimited computational resources.
no code implementations • 5 Mar 2023 • Jing Wang, Peng Zhao, Zhi-Hua Zhou
We propose a refined analysis framework, which simplifies the derivation and importantly produces a simpler weight-based algorithm that is as efficient as window/restart-based algorithms while retaining the same regret as previous studies.
no code implementations • NeurIPS 2023 • Lijun Zhang, Haomin Bai, Peng Zhao, Tianbao Yang, Zhi-Hua Zhou
To reduce the number of samples required in each round from $m$ to 1, we cast GDRO as a two-player game, where one player conducts SMD and the other executes an online algorithm for non-oblivious multi-armed bandits, maintaining the same sample complexity.
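As a concrete illustration of this two-player formulation, here is a minimal Python sketch under assumed synthetic data: the model player takes a stochastic mirror descent step (Euclidean case, i.e., SGD) on one sampled point, while an EXP3-style bandit player reweights the groups; all names, losses, and step sizes are hypothetical choices, not the paper's implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    m, dim, T = 5, 10, 2000                   # number of groups, feature dimension, rounds
    # hypothetical per-group data (X_i, y_i) with labels in {-1, +1}
    groups = [(rng.normal(size=(200, dim)), rng.choice([-1.0, 1.0], 200)) for _ in range(m)]

    w = np.zeros(dim)                         # model player's decision
    logits = np.zeros(m)                      # bandit player's weights over groups
    eta_w, eta_q = 0.05, 0.05

    for t in range(T):
        q = np.exp(logits - logits.max()); q /= q.sum()
        i = rng.choice(m, p=q)                # the bandit picks a group: one sample per round
        X, y = groups[i]
        j = rng.integers(len(y))
        loss = max(0.0, 1.0 - y[j] * (X[j] @ w))   # hinge loss on the sampled point
        if loss > 0.0:
            w -= eta_w * (-y[j] * X[j])            # SMD (here: SGD) step for the model player
        logits[i] += eta_q * loss / q[i]           # importance-weighted update: mass moves to hard groups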
no code implementations • 7 Oct 2022 • Zhi-Hua Zhou, Zhi-Hao Tan
There are complaints about current machine learning techniques, such as the requirement of a huge amount of training data and proficient training skills, the difficulty of continual learning, the risk of catastrophic forgetting, the leakage of data privacy/proprietary information, etc.
no code implementations • 26 Aug 2022 • Peng Zhao, Long-Fei Li, Zhi-Hua Zhou
For these three models, we propose novel online ensemble algorithms and establish their dynamic regret guarantees respectively, in which the results for episodic (loop-free) SSP are provably minimax optimal in terms of time horizon and certain non-stationarity measure.
no code implementations • 5 Jul 2022 • Yong Bai, Yu-Jie Zhang, Peng Zhao, Masashi Sugiyama, Zhi-Hua Zhou
In this paper, we formulate and investigate the problem of \emph{online label shift} (OLaS): the learner trains an initial model from the labeled offline data and then deploys it to an unlabeled online environment where the underlying label distribution changes over time but the label-conditional density does not.
no code implementations • 21 Jun 2022 • Shao-Qun Zhang, Jia-Yi Chen, Jin-Hui Wu, Gao Zhang, Huan Xiong, Bin Gu, Zhi-Hua Zhou
We first unveil two pivotal components of intrinsic structures, namely the integration operation and the firing-reset mechanism, by elucidating their influence on the expressivity of SNNs.
no code implementations • 1 Jun 2022 • Zhi-Hua Zhou
With the great success of machine learning, more and more practical tasks are now presented to the community, particularly those involving open-environment scenarios where important factors are subject to change, called open-environment machine learning (Open ML) in this article.
no code implementations • 12 Feb 2022 • Haipeng Luo, Mengxiao Zhang, Peng Zhao, Zhi-Hua Zhou
The CORRAL algorithm of Agarwal et al. (2017) and its variants (Foster et al., 2020a) achieve this goal with a regret overhead of order $\widetilde{O}(\sqrt{MT})$ where $M$ is the number of base algorithms and $T$ is the time horizon.
no code implementations • 30 Jan 2022 • Mengxiao Zhang, Peng Zhao, Haipeng Luo, Zhi-Hua Zhou
Learning from repeated play in a fixed two-player zero-sum game is a classic problem in game theory and online learning.
1 code implementation • 29 Dec 2021 • Peng Zhao, Yu-Jie Zhang, Lijun Zhang, Zhi-Hua Zhou
Specifically, we introduce novel online algorithms that can exploit smoothness and replace the dependence on $T$ in dynamic regret with problem-dependent quantities: the variation in gradients of loss functions, the cumulative loss of the comparator sequence, and the minimum of these two terms.
no code implementations • NeurIPS 2021 • Tian-Zuo Wang, Zhi-Hua Zhou
In many real tasks, it is generally desired to study the causal effect on a specific target (response variable) only, with no need to identify the full causal effects involving all variables.
no code implementations • 11 Nov 2021 • Jin-Hui Wu, Shao-Qun Zhang, Yuan Jiang, Zhi-Hua Zhou
Neural network models generally involve two important components, i.e., network architecture and neuron model.
no code implementations • 8 Nov 2021 • Shao-Qun Zhang, Zhi-Hua Zhou
Mimicking and learning the long-term memory of efficient markets is a fundamental problem in the interaction between machine learning and financial economics on sequential data.
1 code implementation • 18 Oct 2021 • Chao Qian, Dan-Xuan Liu, Zhi-Hua Zhou
Experiments on the applications of web-based search, multi-label feature selection and document summarization show the superior performance of the GSEMO over the state-of-the-art algorithms (i.e., the greedy algorithm and local search) under both static and dynamic environments.
no code implementations • 30 Sep 2021 • Zhao-Yu Zhang, Shao-Qun Zhang, Yuan Jiang, Zhi-Hua Zhou
Multivariate time series (MTS) prediction is ubiquitous in real-world fields, but MTS data often contains missing values.
no code implementations • 15 Aug 2021 • Shao-Qun Zhang, Wei Gao, Zhi-Hua Zhou
Complex-valued neural networks have attracted increasing attention in recent years, while it remains an open question what advantages they have over real-valued networks.
no code implementations • 17 Jun 2021 • Xin-Qiang Cai, Yao-Xiang Ding, Zi-Xuan Chen, Yuan Jiang, Masashi Sugiyama, Zhi-Hua Zhou
In many real-world imitation learning tasks, the demonstrator and the learner have to act under different observation spaces.
no code implementations • 6 Jun 2021 • Zhu Li, Zhi-Hua Zhou, Arthur Gretton
Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss; yet surprisingly, they possess near-optimal prediction performance, contradicting classical learning theory.
no code implementations • 7 Feb 2021 • Peng Zhao, Yu-Hu Yan, Yu-Xiang Wang, Zhi-Hua Zhou
We study the problem of Online Convex Optimization (OCO) with memory, which allows loss functions to depend on past decisions and thus captures temporal effects of learning problems.
no code implementations • NeurIPS 2020 • Wei Gao, Zhi-Hua Zhou
We obtain a convergence rate of $O(n^{-1/(d+2)}(\ln n)^{1/(d+2)})$ for the variant of random forests, which reaches the minimax rate of the optimal plug-in classifier under the $L$-Lipschitz assumption, except for a factor of $(\ln n)^{1/(d+2)}$.
1 code implementation • 24 Sep 2020 • Kai Ming Ting, Bi-Cun Xu, Takashi Washio, Zhi-Hua Zhou
Existing approaches based on kernel mean embedding, which convert a point kernel to a distributional kernel, have two key issues: the point kernel employed has a feature map with intractable dimensionality; and it is {\em data independent}.
no code implementations • 22 Jul 2020 • Bo-Jian Hou, Yu-Hu Yan, Peng Zhao, Zhi-Hua Zhou
Our framework is able to fit its behavior to different storage budgets when learning with feature evolvable streams with unlabeled data.
no code implementations • NeurIPS 2020 • Peng Zhao, Yu-Jie Zhang, Lijun Zhang, Zhi-Hua Zhou
We investigate online convex optimization in non-stationary environments and choose the dynamic regret as the performance measure, defined as the difference between cumulative loss incurred by the online algorithm and that of any feasible comparator sequence.
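For concreteness, the measure just described is, in the usual notation with decisions $x_t$ and any feasible comparator sequence $u_1, \dots, u_T$:

    \[ \text{D-Regret}_T = \sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(u_t). \]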
no code implementations • 7 Jun 2020 • Ji Feng, Yi-Xuan Xu, Yuan Jiang, Zhi-Hua Zhou
Gradient Boosting Machine has proven to be one successful function approximator and has been widely used in a variety of areas.
no code implementations • 8 Apr 2020 • Shao-Qun Zhang, Zhi-Hua Zhou
To exhibit its power and potential, we present the Flexible Transmitter Network (FTNet), which is built on the most common fully-connected feed-forward architecture taking the FT model as the basic building block.
no code implementations • 25 Mar 2020 • Guangda Huzhang, Zhen-Jia Pang, Yongqing Gao, Yawen Liu, Weijie Shen, Wen-Ji Zhou, Qing Da, An-Xiang Zeng, Han Yu, Yang Yu, Zhi-Hua Zhou
The framework consists of an evaluator that generalizes to evaluate recommendations involving the context, a generator that maximizes the evaluator score by reinforcement learning, and a discriminator that ensures the generalization of the evaluator.
no code implementations • 5 Feb 2020 • Peng Zhao, Jia-Wei Shan, Yu-Jie Zhang, Zhi-Hua Zhou
In conventional supervised learning, a training dataset is given with ground-truth labels from a known label set, and the learned model will classify unseen instances to known labels.
no code implementations • 20 Jan 2020 • Xi-Zhu Wu, Wenkai Xu, Song Liu, Zhi-Hua Zhou
Given a publicly available pool of machine learning models constructed for various tasks, when a user plans to build a model for her own machine learning application, is it possible to build upon models in the pool such that the previous efforts on these existing models can be reused rather than starting from scratch?
1 code implementation • NeurIPS 2019 • Wang-Zhou Dai, Qiu-Ling Xu, Yang Yu, Zhi-Hua Zhou
In the area of artificial intelligence (AI), the two abilities are usually realised by machine learning and logic programming, respectively.
no code implementations • NeurIPS 2019 • Shen-Huan Lyu, Liang Yang, Zhi-Hua Zhou
In this paper, we formulate the forest representation learning approach called \textsc{CasDF} as an additive model which boosts the augmented feature instead of the prediction.
no code implementations • the 18th IEEE International Conference on Data Mining 2019 • Ming Pang, Kai-Ming Ting, Peng Zhao, Zhi-Hua Zhou
Most studies of deep learning are based on neural network models, in which many layers of parameterized nonlinear differentiable modules are trained by backpropagation.
no code implementations • 15 Nov 2019 • Liang Yang, Xi-Zhu Wu, Yuan Jiang, Zhi-Hua Zhou
In multi-label learning, each instance is associated with multiple labels and the crucial task is how to leverage label correlations in building models.
no code implementations • NeurIPS 2020 • Yu-Jie Zhang, Peng Zhao, Zhi-Hua Zhou
This paper studies the problem of learning with augmented classes (LAC), where augmented classes unobserved in the training data might emerge in the testing phase.
no code implementations • 18 Sep 2019 • Shao-Qun Zhang, Zhao-Yu Zhang, Zhi-Hua Zhou
Inspired by this insight, by enabling the spike generation function to have adaptable eigenvalues rather than parametric control rates, we develop the Bifurcation Spiking Neural Network (BSNN), which has an adaptive firing rate and is insensitive to the setting of control rates.
no code implementations • 9 Sep 2019 • Xin-Qiang Cai, Yao-Xiang Ding, Yuan Jiang, Zhi-Hua Zhou
One of the key issues in imitation learning lies in making the policy learned from limited samples generalize well over the whole state-action space.
no code implementations • 29 Jul 2019 • Peng Zhao, Guanghui Wang, Lijun Zhang, Zhi-Hua Zhou
In this paper, we investigate BCO in non-stationary environments and choose the \emph{dynamic regret} as the performance measure, which is defined as the difference between the cumulative loss incurred by the algorithm and that of any feasible comparator sequence.
no code implementations • NeurIPS 2021 • Lijun Zhang, Guanghui Wang, Wei-Wei Tu, Zhi-Hua Zhou
Along this line of research, this paper presents the first universal algorithm for minimizing the adaptive regret of convex functions.
1 code implementation • 10 Jun 2019 • Lu Wang, Xuanqing Liu, Jin-Feng Yi, Zhi-Hua Zhou, Cho-Jui Hsieh
Furthermore, we show that dual solutions for these QP problems could give us a valid lower bound of the adversarial perturbation that can be used for formal robustness verification, giving us a nice view of attack/verification for NN models.
no code implementations • 10 Jun 2019 • Dong-Dong Chen, Yisen Wang, Jin-Feng Yi, Zaiyi Chen, Zhi-Hua Zhou
Unsupervised domain adaptation aims to transfer the classifier learned from the source domain to the target domain in an unsupervised manner.
no code implementations • 31 May 2019 • Wen-Ji Zhou, Yang Yu, Yingfeng Chen, Kai Guan, Tangjie Lv, Changjie Fan, Zhi-Hua Zhou
Experience reuse is key to sample-efficient reinforcement learning.
1 code implementation • NeurIPS 2019 • Ji Feng, Qi-Zhi Cai, Zhi-Hua Zhou
In this work, we consider one challenging training time attack by modifying training data with bounded perturbation, hoping to manipulate the behavior (both targeted and non-targeted) of any corresponding trained classifier during test time when facing clean samples.
no code implementations • 7 May 2019 • Shen-Huan Lv, Liang Yang, Zhi-Hua Zhou
In this paper, we reformulate the forest representation learning approach as an additive model which boosts the augmented feature instead of the prediction.
no code implementations • ICLR 2019 • Shen-Huan Lv, Lu Wang, Zhi-Hua Zhou
Recent research about margin theory has proved that maximizing the minimum margin like support vector machines does not necessarily lead to better performance, and instead, it is crucial to optimize the margin distribution.
no code implementations • 27 Apr 2019 • Bo-Jian Hou, Lijun Zhang, Zhi-Hua Zhou
Learning with feature evolution studies the scenario where the features of the data streams can evolve, i.e., old features vanish and new features emerge.
no code implementations • 26 Apr 2019 • Lijun Zhang, Tie-Yan Liu, Zhi-Hua Zhou
We investigate online convex optimization in changing environments, and choose the adaptive regret as the performance measure.
no code implementations • 22 Apr 2019 • Lan-Zhe Guo, Yu-Feng Li, Ming Li, Jin-Feng Yi, Bo-Wen Zhou, Zhi-Hua Zhou
We guide the optimization of label quality through a small amount of validation data, ensuring safe performance while maximizing the performance gain.
no code implementations • 27 Jan 2019 • Lijun Zhang, Zhi-Hua Zhou
Finally, we emphasize that our proof is constructive and each risk bound is equipped with an efficient stochastic algorithm attaining that bound.
1 code implementation • ICLR 2019 • Shen-Huan Lyu, Lu Wang, Zhi-Hua Zhou
We utilize a convex margin distribution loss function on the deep neural networks to validate our theoretical results by optimizing the margin ratio.
no code implementations • NeurIPS 2018 • Yao-Xiang Ding, Zhi-Hua Zhou
In many real-world learning tasks, it is hard to directly optimize the true performance measures, meanwhile choosing the right surrogate objectives is also difficult.
no code implementations • NeurIPS 2018 • Lijun Zhang, Zhi-Hua Zhou
In this paper, we consider the problem of linear regression with heavy-tailed distributions.
no code implementations • 25 Oct 2018 • Bo-Jian Hou, Zhi-Hua Zhou
With the learned FSA and via experiments on artificial and real datasets, we find that FSA is more trustworthy than the RNN from which it was learned, which gives FSA a chance to substitute for RNNs in applications involving human lives or dangerous facilities.
no code implementations • NeurIPS 2018 • Lijun Zhang, Shiyin Lu, Zhi-Hua Zhou
In this paper, we study online convex optimization in dynamic environments, and aim to bound the dynamic regret with respect to any sequence of comparators.
no code implementations • 8 Sep 2018 • Peng Zhao, Le-Wen Cai, Zhi-Hua Zhou
In many real-world applications, data are often collected in the form of streams, and thus the distribution usually changes in nature, which is referred to as concept drift in the literature.
1 code implementation • Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining 2018 • Kai Ming Ting, Yue Zhu, Zhi-Hua Zhou
This paper investigates data dependent kernels that are derived directly from data.
no code implementations • ICML 2018 • Han-Jia Ye, De-Chuan Zhan, Yuan Jiang, Zhi-Hua Zhou
On the way to a robust learner for real-world applications, there are still great challenges, including handling unknown environments with limited data.
1 code implementation • NeurIPS 2018 • Ji Feng, Yang Yu, Zhi-Hua Zhou
Multi-layered representation is believed to be the key ingredient of deep neural networks especially in cognitive tasks like computer vision.
no code implementations • 23 May 2018 • Miao Xu, Gang Niu, Bo Han, Ivor W. Tsang, Zhi-Hua Zhou, Masashi Sugiyama
We consider a challenging multi-label classification problem where both the feature matrix $X$ and the label matrix $Y$ have missing entries.
no code implementations • 11 May 2018 • Ya-Lin Zhang, Jun Zhou, Wenhao Zheng, Ji Feng, Longfei Li, Ziqi Liu, Ming Li, Zhiqiang Zhang, Chaochao Chen, Xiaolong Li, Zhi-Hua Zhou, YUAN, QI
This model can block fraudulent transactions involving a large amount of money each day.
1 code implementation • 4 Feb 2018 • Wang-Zhou Dai, Qiu-Ling Xu, Yang Yu, Zhi-Hua Zhou
Perception and reasoning are basic human abilities that are seamlessly connected as part of human intelligence.
no code implementations • NeurIPS 2017 • Chao Qian, Jing-Cheng Shi, Yang Yu, Ke Tang, Zhi-Hua Zhou
The problem of selecting the best $k$-element subset from a universe is involved in many applications.
no code implementations • 20 Nov 2017 • Chao Qian, Yang Yu, Ke Tang, Xin Yao, Zhi-Hua Zhou
To provide a general theoretical explanation of the behavior of EAs, it is desirable to study their performance on general classes of combinatorial optimization problems.
2 code implementations • 26 Sep 2017 • Ji Feng, Zhi-Hua Zhou
Auto-encoding is an important task which is typically realized by deep neural networks (DNNs) such as convolutional neural networks (CNNs).
no code implementations • 15 Aug 2017 • Wei Wang, Zhi-Hua Zhou
Disagreement-based approaches generate multiple classifiers and exploit the disagreement among them with unlabeled data to improve learning performance.
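The idea lends itself to a short sketch (a simplified co-training-style loop in Python; the two feature views, confidence threshold, and base learners are illustrative assumptions, not this paper's exact protocol):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def disagreement_ssl(X1, X2, y, labeled_idx, rounds=5, conf=0.95):
        # Two classifiers, one per feature view; each round, confident predictions
        # on unlabeled points are added to a shared labeled pool (simplified scheme).
        labeled, y = set(labeled_idx), y.copy()
        for _ in range(rounds):
            idx = sorted(labeled)
            c1 = LogisticRegression(max_iter=1000).fit(X1[idx], y[idx])
            c2 = LogisticRegression(max_iter=1000).fit(X2[idx], y[idx])
            unlabeled = [i for i in range(len(y)) if i not in labeled]
            if not unlabeled:
                break
            for clf, X in ((c1, X1), (c2, X2)):
                proba = clf.predict_proba(X[unlabeled])
                for i, p in zip(unlabeled, proba):
                    if p.max() >= conf and i not in labeled:
                        y[i] = clf.classes_[p.argmax()]   # pseudo-label a confident point
                        labeled.add(i)
        return c1, c2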
no code implementations • ICML 2017 • Teng Zhang, Zhi-Hua Zhou
This still remains open for multi-class classification, and due to the complexity of the margin for multi-class classification, optimizing its distribution by mean and variance can also be difficult.
no code implementations • 20 Jul 2017 • Xiu-Shen Wei, Chen-Lin Zhang, Jianxin Wu, Chunhua Shen, Zhi-Hua Zhou
Reusable model design becomes desirable with the rapid expansion of computer vision and machine learning applications.
Ranked #11 on Single-object discovery on COCO_20k
no code implementations • NeurIPS 2017 • Bo-Jian Hou, Lijun Zhang, Zhi-Hua Zhou
To benefit from the recovered features, we develop two ensemble methods.
no code implementations • 8 Jun 2017 • Peng Zhao, Zhi-Hua Zhou
Moreover, as the whole data volume is unknown when constructing the model, it is desired to scan each data item only once, with storage independent of the data volume.
no code implementations • 8 May 2017 • Xiu-Shen Wei, Chen-Lin Zhang, Yao Li, Chen-Wei Xie, Jianxin Wu, Chunhua Shen, Zhi-Hua Zhou
Reusable model design becomes desirable with the rapid expansion of machine learning applications.
no code implementations • 19 Apr 2017 • Lech Szymanski, Brendan McCane, Wei Gao, Zhi-Hua Zhou
Despite being so vital to the success of Support Vector Machines, the principle of separating-margin maximisation is not used in deep learning.
no code implementations • 4 Apr 2017 • Yue Zhu, James T. Kwok, Zhi-Hua Zhou
In fact, in real-world applications, both cases may occur: some label correlations are globally applicable while some are shared only in a local group of instances.
19 code implementations • 28 Feb 2017 • Zhi-Hua Zhou, Ji Feng
This study opens the door of deep learning based on non-differentiable modules, and exhibits the possibility of constructing deep models without using backpropagation.
no code implementations • 21 Feb 2017 • Yu-ting Qiang, Yanwei Fu, Xiao Yu, Yanwen Guo, Zhi-Hua Zhou, Leonid Sigal
In order to bridge the gap between panel attributes and the composition within each panel, we also propose a recursive page splitting algorithm to generate the panel layout for a poster.
no code implementations • ICML 2018 • Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou
To cope with changing environments, recent developments in online learning have introduced the concepts of adaptive regret and dynamic regret independently.
no code implementations • NeurIPS 2016 • Han-Jia Ye, De-Chuan Zhan, Xue-Min Si, Yuan Jiang, Zhi-Hua Zhou
In UM2L, a type of combination operator is introduced to characterize distance from multiple perspectives, thus introducing flexibility for representing and utilizing both spatial and semantic linkages.
no code implementations • NeurIPS 2018 • Ming Pang, Wei Gao, Min Tao, Zhi-Hua Zhou
This work considers a different attack style: unorganized malicious attacks, where attackers individually utilize a small number of user profiles to attack different items without any organizer.
no code implementations • ICML 2017 • Xi-Zhu Wu, Zhi-Hua Zhou
Multi-label classification deals with the problem where each instance is associated with multiple class labels.
no code implementations • 1 Sep 2016 • Yao-Xiang Ding, Zhi-Hua Zhou
One of the fundamental problems in crowdsourcing is the trade-off between the number of workers needed for high-accuracy aggregation and the budget to pay.
1 code implementation • 24 Aug 2016 • Emanuele Sansone, Francesco G. B. De Natale, Zhi-Hua Zhou
Positive unlabeled (PU) learning is useful in various practical situations, where there is a need to learn a classifier for a class of interest from an unlabeled data set, which may contain anomalies as well as samples from unknown classes.
no code implementations • NeurIPS 2017 • Lijun Zhang, Tianbao Yang, Jin-Feng Yi, Rong Jin, Zhi-Hua Zhou
When multiple gradients are accessible to the learner, we first demonstrate that the dynamic regret of strongly convex functions can be upper bounded by the minimum of the path-length and the squared path-length.
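For reference, the path-length and squared path-length of a comparator sequence $u_1, \dots, u_T$ are usually defined as (stated here for concreteness):

    \[ P_T = \sum_{t=2}^{T} \|u_t - u_{t-1}\|, \qquad S_T = \sum_{t=2}^{T} \|u_t - u_{t-1}\|^2, \]

so the claimed guarantee reads $O(\min\{P_T, S_T\})$.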
no code implementations • 26 Jul 2016 • Wei Gao, Bin-Bin Yang, Zhi-Hua Zhou
The theoretical results show that, for asymmetric noises, k-nearest neighbor is robust enough to classify most data correctly, except for a handful of examples whose labels are totally misled by random noise.
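This claim is easy to probe empirically (a hedged sketch; dataset, noise rate, and $k$ are arbitrary choices, not the paper's experimental setup):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_classification(n_samples=3000, n_features=10, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

    rng = np.random.default_rng(0)
    y_noisy = ytr.copy()
    flip = (ytr == 0) & (rng.random(len(ytr)) < 0.3)   # asymmetric noise: flip 30% of class 0
    y_noisy[flip] = 1

    knn = KNeighborsClassifier(n_neighbors=15).fit(Xtr, y_noisy)
    print("accuracy on clean test data:", knn.score(Xte, yte))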
no code implementations • 10 Jun 2016 • Chao Qian, Yang Yu, Zhi-Hua Zhou
Our results imply that the increase of population size, while usually desired in practice, bears the risk of increasing the lower bound of the running time and thus should be carefully considered.
no code implementations • 30 May 2016 • Chenping Hou, Zhi-Hua Zhou
In many real tasks the features are evolving, with some features vanishing and other features being augmented.
no code implementations • 30 May 2016 • Xin Mu, Kai Ming Ting, Zhi-Hua Zhou
This is the first time, as far as we know, that completely random trees are used as a single common core to solve all three subproblems: unsupervised learning, supervised learning, and model update in data streams.
1 code implementation • 18 Apr 2016 • Xiu-Shen Wei, Jian-Hao Luo, Jianxin Wu, Zhi-Hua Zhou
Moreover, on general image retrieval datasets, SCDA achieves comparable retrieval results with state-of-the-art general image retrieval approaches.
no code implementations • 12 Apr 2016 • Teng Zhang, Zhi-Hua Zhou
Support vector machine (SVM) has been one of the most popular learning algorithms, with the central idea of maximizing the minimum margin, i.e., the smallest distance from the instances to the classification boundary.
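For a trained linear SVM with weights $w$ and bias $b$, that smallest distance is $\min_i |w^\top x_i + b| / \|w\|$, which can be computed directly (a sketch on toy data, not tied to this paper):

    import numpy as np
    from sklearn.datasets import make_blobs
    from sklearn.svm import SVC

    X, y = make_blobs(n_samples=200, centers=2, random_state=0)
    svm = SVC(kernel="linear", C=1.0).fit(X, y)

    w, b = svm.coef_[0], svm.intercept_[0]
    dist = np.abs(X @ w + b) / np.linalg.norm(w)   # distance of each instance to the boundary
    print("minimum margin:", dist.min())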
no code implementations • 5 Apr 2016 • Yu-ting Qiang, Yanwei Fu, Yanwen Guo, Zhi-Hua Zhou, Leonid Sigal
Then, given inferred layout and attributes, composition of graphical elements within each panel is synthesized.
no code implementations • 31 Mar 2016 • Guo-Bing Zhou, Jianxin Wu, Chen-Lin Zhang, Zhi-Hua Zhou
Recently, recurrent neural networks (RNNs) have been very successful in handling sequence data.
no code implementations • NeurIPS 2015 • Chao Qian, Yang Yu, Zhi-Hua Zhou
Selecting the optimal subset from a large set of variables is a fundamental problem in various learning tasks such as feature selection, sparse regression, dictionary learning, etc.
no code implementations • 12 Nov 2015 • Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou
In this paper, we develop a randomized algorithm and theory for learning a sparse model from large-scale and high-dimensional data, which is usually formulated as an empirical risk minimization problem with a sparsity-inducing regularizer.
no code implementations • 5 Nov 2015 • Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou
In this paper, we utilize stochastic optimization to reduce the space complexity of convex composite optimization with a nuclear norm regularizer, where the variable is a matrix of size $m \times n$.
no code implementations • 20 Oct 2015 • Li-Ping Liu, Thomas G. Dietterich, Nan Li, Zhi-Hua Zhou
This paper introduces a new approach, Transductive Top K (TTK), that seeks to minimize the hinge loss over all training instances under the constraint that exactly $k$ test instances are predicted as positive.
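The exactly-$k$ constraint itself is straightforward to enforce at prediction time by thresholding at the $k$-th largest score (a sketch of the constraint only, not of TTK's transductive hinge-loss optimization; the function name is hypothetical):

    import numpy as np

    def predict_exactly_k(scores, k):
        # Predict positive exactly the k test instances with the highest scores.
        order = np.argsort(scores)[::-1]
        labels = -np.ones(len(scores), dtype=int)
        labels[order[:k]] = 1
        return labels

    scores = np.array([0.9, -0.2, 0.4, 0.1, 0.7])
    print(predict_exactly_k(scores, k=2))   # -> [ 1 -1 -1 -1  1]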
no code implementations • 25 Sep 2015 • Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou
In this paper, we study a special bandit setting of online stochastic linear optimization, where only one bit of information is revealed to the learner at each round.
no code implementations • 4 Aug 2015 • Shao-Yuan Li, Yuan Jiang, Zhi-Hua Zhou
Multi-label active learning is a hot topic in reducing the label cost by optimally choosing the most valuable instance to query its label from an oracle.
no code implementations • 26 Apr 2015 • Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou
To the best of our knowledge, this is the first relative bound that has been proved for the regularized formulation of matrix completion.
no code implementations • 12 Feb 2015 • Shen-Yi Zhao, Wu-Jun Li, Zhi-Hua Zhou
There exists only one stochastic method, called SA-ADMM, which can achieve convergence rate $O(1/T)$ on general convex problems.
no code implementations • 4 Nov 2014 • Miao Xu, Rong Jin, Zhi-Hua Zhou
In particular, the proposed algorithm computes the low rank approximation of the target matrix based on (i) the randomly sampled rows and columns, and (ii) a subset of observed entries that are randomly sampled from the matrix.
no code implementations • NeurIPS 2014 • Nan Li, Rong Jin, Zhi-Hua Zhou
Recent efforts of bipartite ranking are focused on optimizing ranking accuracy at the top of the ranked list.
no code implementations • 25 Feb 2014 • Wang-Zhou Dai, Zhi-Hua Zhou
Structure learning of these systems is an intersection area of Inductive Logic Programming (ILP) and statistical learning (SL).
no code implementations • 16 Feb 2014 • Wei Gao, Zhi-Hua Zhou
Great successes of deep neural networks have been witnessed in various real applications.
no code implementations • NeurIPS 2013 • Miao Xu, Rong Jin, Zhi-Hua Zhou
In standard matrix completion theory, it is required to have at least $O(n\ln^2 n)$ observed entries to perfectly recover a low-rank matrix $M$ of size $n\times n$, leading to a large number of observations when $n$ is large.
no code implementations • 20 Nov 2013 • Chao Qian, Yang Yu, Zhi-Hua Zhou
On a representative problem where the noise has a strong negative effect, we examine two commonly employed mechanisms in EAs dealing with noise, the re-evaluation and the threshold selection strategies.
no code implementations • 5 Nov 2013 • Teng Zhang, Zhi-Hua Zhou
In this paper, we propose the Large margin Distribution Machine (LDM), which tries to achieve a better generalization performance by optimizing the margin distribution.
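A minimal sketch of optimizing the margin distribution through its first- and second-order statistics (a simplified surrogate in the spirit of LDM, not its exact objective; trade-off weights and the step size are hypothetical):

    import numpy as np

    def margin_distribution_step(w, X, y, lam1=1.0, lam2=1.0, lr=0.01):
        # Margins m_i = y_i * <w, x_i>: push the margin mean up, the variance down.
        m = y * (X @ w)
        g_mean = (y[:, None] * X).mean(axis=0)                                   # grad of mean(m)
        g_var = 2.0 * ((m - m.mean())[:, None] * (y[:, None] * X)).mean(axis=0)  # grad of var(m)
        grad = -lam1 * g_mean + lam2 * g_var + w                                 # plus L2 regularization
        return w - lr * grad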
no code implementations • 8 Oct 2013 • Sheng-Jun Huang, Zhi-Hua Zhou
Although the MIML problem is complicated, MIMLfast is able to achieve excellent performance by exploiting label relations with shared space and discovering sub-concepts for complicated labels.
no code implementations • 7 May 2013 • Wei Gao, Rong Jin, Shenghuo Zhu, Zhi-Hua Zhou
AUC is an important performance measure and many algorithms have been devoted to AUC optimization, mostly by minimizing a surrogate convex loss on a training data set.
no code implementations • 6 Mar 2013 • Yu-Feng Li, Ivor W. Tsang, James T. Kwok, Zhi-Hua Zhou
In this paper, we study the problem of learning from weakly labeled data, where labels of the training examples are incomplete.
no code implementations • NeurIPS 2012 • Tianbao Yang, Yu-Feng Li, Mehrdad Mahdavi, Rong Jin, Zhi-Hua Zhou
Both random Fourier features and the Nyström method have been successfully applied to efficient kernel learning.
no code implementations • 3 Aug 2012 • Wei Gao, Zhi-Hua Zhou
Based on this result, we prove that exponential loss and logistic loss are consistent with AUC, but hinge loss is inconsistent.
no code implementations • NeurIPS 2010 • Sheng-Jun Huang, Rong Jin, Zhi-Hua Zhou
Most active learning approaches select either informative or representative unlabeled instances to query their labels.
no code implementations • NeurIPS 2010 • Wei Wang, Zhi-Hua Zhou
The sample complexity of active learning under the realizability assumption has been well-studied.
no code implementations • 19 Sep 2010 • Wei Gao, Zhi-Hua Zhou
Margin theory provides one of the most popular explanations to the success of \texttt{AdaBoost}, where the central point lies in the recognition that \textit{margin} is the key for characterizing the performance of \texttt{AdaBoost}.
no code implementations • 15 Dec 2008 • Fei Tony Liu, Kai Ming Ting, Zhi-Hua Zhou
Most existing model-based approaches to anomaly detection construct a profile of normal instances, then identify instances that do not conform to the normal profile as anomalies.
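This is the isolation forest (iForest) method; scikit-learn ships an implementation that can be used directly (a usage sketch on toy data, not the authors' original code):

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, size=(500, 2)),     # normal instances
                   rng.uniform(-6, 6, size=(10, 2))])   # a few scattered anomalies

    iso = IsolationForest(n_estimators=100, contamination=0.02, random_state=0).fit(X)
    labels = iso.predict(X)                   # +1 = inlier, -1 = anomaly
    print("flagged as anomalies:", np.where(labels == -1)[0])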
2 code implementations • 1 Jun 2005 • Zhi-Hua Zhou, Ming Li
In many practical machine learning and data mining applications, unlabeled training examples are readily available but labeled ones are fairly expensive to obtain.