no code implementations • 28 Mar 2024 • Ming Yan, Joey Tianyi Zhou, Ivor W. Tsang
Specifically, our stance detection approach leverages target background knowledge collaboratively from different knowledge sources with the help of knowledge alignment.
no code implementations • 23 Feb 2024 • Yanjun Zhao, Sizhe Dang, Haishan Ye, Guang Dai, Yi Qian, Ivor W. Tsang
Fine-tuning large language models (LLMs) with classic first-order optimizers entails prohibitive GPU memory due to the backpropagation process.
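Zeroth-order (forward-only) optimizers are a common way around this memory cost: the gradient is estimated from loss evaluations alone, so no backpropagation graph is stored. A minimal SPSA-style sketch, illustrative only and not this paper's exact algorithm (all names here are ours):

```python
import numpy as np

def spsa_grad(loss, theta, eps=1e-3, rng=None):
    """Estimate grad(loss)(theta) from two forward passes only.

    No backprop graph is kept, so memory stays at inference level.
    """
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal(theta.shape)              # random perturbation direction
    g = (loss(theta + eps * z) - loss(theta - eps * z)) / (2 * eps)
    return g * z                                      # directional gradient estimate

# Toy quadratic: the true gradient at theta is 2 * theta.
theta = np.array([1.0, -2.0])
est = spsa_grad(lambda t: np.sum(t**2), theta)
# est is an unbiased (in expectation over z) estimate of [2.0, -4.0]
```

Averaging many such estimates over independent perturbations recovers the true gradient, which is what makes forward-only fine-tuning feasible at inference-level memory.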
no code implementations • 6 Feb 2024 • Bohao Qu, Xiaofeng Cao, Qing Guo, Yi Chang, Ivor W. Tsang, Chengqi Zhang
In this study, we present a transductive inference approach on a reward information propagation graph, which enables the effective estimation of rewards for unlabelled data in offline reinforcement learning.
no code implementations • 22 Jan 2024 • Jinliang Deng, Xuan Song, Ivor W. Tsang, Hui Xiong
Through this work, we advocate a paradigm shift in LTSF, emphasizing the importance of tailoring the model to the inherent dynamics of time series data, a timely reminder that in the realm of LTSF, bigger is not invariably better.
no code implementations • 29 Dec 2023 • Tuan-Anh Vu, Duc Thanh Nguyen, Qing Guo, Binh-Son Hua, Nhat Minh Chung, Ivor W. Tsang, Sai-Kit Yeung
Such cross-domain representations are desirable in segmenting camouflaged objects where visual cues are subtle to distinguish the objects from the background, especially in segmenting novel objects which are not seen in training.
no code implementations • 21 Dec 2023 • Bowen Xing, Ivor W. Tsang
The attributes contain the background and property information of the target, which can help to enrich the semantics of the review context and the target.
no code implementations • 22 Nov 2023 • Bowen Xing, Ivor W. Tsang
For the first stage, we propose single-task supervised contrastive learning, and for the second stage, we propose co-guiding supervised contrastive learning, which considers the two tasks' mutual guidances in the contrastive learning procedure.
no code implementations • 10 Nov 2023 • Mingwei Xu, Xiaofeng Cao, Ivor W. Tsang, James T. Kwok
In this paper, we replace the aforementioned weighting method with a new strategy that considers the generalization bounds of each local model.
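For context, the conventional weighting that such methods replace is typically sample-count averaging in the FedAvg style; a minimal sketch of that baseline (function names are hypothetical):

```python
import numpy as np

def fedavg(local_params, num_samples):
    """Aggregate local model parameters weighted by local dataset size."""
    w = np.asarray(num_samples, dtype=float)
    w /= w.sum()                                  # classic size-proportional weights
    return sum(wi * p for wi, p in zip(w, local_params))

# Two clients with 30 and 10 samples: weights are 0.75 and 0.25.
global_model = fedavg([np.array([1.0, 0.0]), np.array([0.0, 1.0])], [30, 10])
# → [0.75, 0.25]
```

The paper's proposal replaces these size-proportional weights with weights derived from each local model's generalization bound.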
1 code implementation • 2 Nov 2023 • Yinghua Yao, Yuangang Pan, Jing Li, Ivor W. Tsang, Xin Yao
Therein, the interested clustering factor and the confounding factor are coarsely considered in the raw feature space, where the correlation between the data and the confounding factor is ideally assumed to be linear for convenient solutions.
no code implementations • 31 Aug 2023 • Kairui Hu, Ming Yan, Joey Tianyi Zhou, Ivor W. Tsang, Wen Haw Chong, Yong Keong Yap
In response to these identified gaps, we introduce the Ladder-of-Thought (LoT) for the stance detection task.
no code implementations • 21 Aug 2023 • Jun Chen, Haishan Ye, Mengmeng Wang, Tianxin Huang, Guang Dai, Ivor W. Tsang, Yong Liu
This paper proposes a decentralized Riemannian conjugate gradient descent (DRCGD) method that aims at minimizing a global function over the Stiefel manifold.
no code implementations • 15 Jun 2023 • Bowen Xing, Ivor W. Tsang
In this paper, we put forward a new framework, whose core is relational temporal graph reasoning. We propose a speaker-aware temporal graph (SATG) and a dual-task relational temporal graph (DRTG) to facilitate relational temporal modeling in dialog understanding and dual-task reasoning.
no code implementations • 7 Jun 2023 • Bowen Xing, Ivor W. Tsang
Finally, we propose a Co-evolving Graph Reasoning Network (CGR-Net) that implements our MTL framework and conducts Co-evolving Reasoning on MRG.
no code implementations • 25 May 2023 • Yihao Huang, Yue Cao, Tianlin Li, Felix Juefei-Xu, Di Lin, Ivor W. Tsang, Yang Liu, Qing Guo
Second, we extend representative adversarial attacks against SAM and study the influence of different prompts on robustness.
1 code implementation • 22 May 2023 • Jinliang Deng, Xiusi Chen, Renhe Jiang, Du Yin, Yi Yang, Xuan Song, Ivor W. Tsang
The core issue in MTS forecasting is how to effectively model complex spatial-temporal patterns.
1 code implementation • 28 Apr 2023 • Jing Li, Yuangang Pan, Yueming Lyu, Yinghua Yao, Yulei Sui, Ivor W. Tsang
Unlike existing model tuning methods, where the target data is always available for calculating model gradients, the model providers in EXPECTED only see some feedback, which could be as simple as scalars such as inference accuracy or usage rate.
no code implementations • 24 Apr 2023 • Yaxin Shi, Xiaowei Zhou, Ping Liu, Ivor W. Tsang
Furthermore, we propose the use of transition consistency, defined on the transition variable, to enable regularization of consistency on unobserved translations, which is omitted in previous works.
no code implementations • 5 Apr 2023 • Kim Yong Tan, Yueming Lyu, Yew Soon Ong, Ivor W. Tsang
This requires the ANN search algorithm to support fast online data deletion and insertion.
no code implementations • 2 Apr 2023 • Cheng Chen, Yueming Lyu, Ivor W. Tsang
However, conventional partial-label learning (PLL) methods are still vulnerable to the high ratio of noisy partial labels, especially in a large labelling space.
no code implementations • 28 Feb 2023 • Bohao Qu, Xiaofeng Cao, Jielong Yang, Hechang Chen, Chang Yi, Ivor W. Tsang, Yew-Soon Ong
To resolve this problem, this paper tries to learn the diverse policies from the history of state-action pairs under a non-Markovian environment, in which a policy dispersion scheme is designed for seeking diverse policy representation.
1 code implementation • 19 Feb 2023 • Jiangchao Yao, Bo Han, Zhihan Zhou, Ya Zhang, Ivor W. Tsang
We solve this problem by introducing a Latent Class-Conditional Noise model (LCCN) to parameterize the noise transition under a Bayesian framework.
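A noise transition matrix T, with T[i, j] = P(noisy label j | clean label i), links the clean class posterior to the observed noisy posterior; a minimal numpy sketch of that standard relation (not LCCN's Bayesian inference itself; the matrix below is hypothetical):

```python
import numpy as np

# Hypothetical 3-class transition matrix: rows are clean classes,
# columns are noisy classes; each row sums to 1.
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

def noisy_posterior(clean_probs, T):
    """P(noisy = j | x) = sum_i P(clean = i | x) * T[i, j]."""
    return clean_probs @ T

clean = np.array([1.0, 0.0, 0.0])   # model is certain the clean label is class 0
print(noisy_posterior(clean, T))    # → [0.8 0.1 0.1]
```

Parameterizing T (here fixed by hand) and inferring it jointly with the classifier is what noise-transition approaches such as LCCN formalize.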
no code implementations • 7 Feb 2023 • Jun Chen, Hanwen Chen, Mengmeng Wang, Guang Dai, Ivor W. Tsang, Yong Liu
By introducing a partial differential equation on metrics, i.e., the Ricci flow, we establish the dynamical stability and convergence of the LNE metric with the $L^2$-norm perturbation.
no code implementations • 9 Jan 2023 • Yuhao Liu, Qing Guo, Lan Fu, Zhanghan Ke, Ke Xu, Wei Feng, Ivor W. Tsang, Rynson W. H. Lau
Hence, in this paper, we propose to remove shadows at the image structure level.
no code implementations • 13 Dec 2022 • Peiyao Zhao, Yuangang Pan, Xin Li, Xu Chen, Ivor W. Tsang, Lejian Liao
Inspired by the impressive success of contrastive learning (CL), a variety of graph augmentation strategies have been employed to learn node representations in a self-supervised manner.
1 code implementation • 19 Oct 2022 • Bowen Xing, Ivor W. Tsang
In this paper, we propose a novel model termed Co-guiding Net, which implements a two-stage framework achieving mutual guidance between the two tasks.
no code implementations • 19 Oct 2022 • Bowen Xing, Ivor W. Tsang
Therefore, in this paper, we first construct a Heterogeneous Label Graph (HLG) containing two kinds of topologies: (1) statistical dependencies based on labels' co-occurrence patterns and hierarchies in slot labels; (2) rich relations among the label nodes.
no code implementations • 29 Jul 2022 • Xiaofeng Cao, Weixin Bu, Sheng-Jun Huang, Min-Ling Zhang, Ivor W. Tsang, Yew Soon Ong, James T. Kwok
In the future, learning on small data that approximates the generalization ability of big data will be one of the ultimate goals of AI, requiring machines to recognize objectives and scenarios from small data as humans do.
no code implementations • 30 Jun 2022 • Xiaofeng Cao, Weiyang Liu, Ivor W. Tsang
Finally, we demonstrate the empirical performance of MHEAL in a wide range of applications on data-efficient learning, including deep clustering, distribution matching, version space sampling and deep active learning.
no code implementations • 30 Jun 2022 • Xiaofeng Cao, Yaming Guo, Ivor W. Tsang, James T. Kwok
An inherent assumption is that this learning manner can drive those updates toward the optimal hypothesis.
1 code implementation • 8 Jun 2022 • Xiaowei Zhou, Ivor W. Tsang, Jie Yin
To achieve a better trade-off between standard accuracy and adversarial robustness, we propose a novel adversarial training framework called LAtent bounDary-guided aDvErsarial tRaining (LADDER) that adversarially trains DNN models on latent boundary-guided adversarial examples.
no code implementations • 23 May 2022 • Bowen Xing, Ivor W. Tsang
In this paper, we propose a novel model termed Neural Subgraph Explorer, which (1) reduces the noisy information via pruning target-irrelevant nodes on the syntax graph; (2) introduces beneficial first-order connections between the target and its related words into the obtained graph.
no code implementations • 1 Apr 2022 • Yan Zhang, Changyu Li, Ivor W. Tsang, Hui Xu, Lixin Duan, Hongzhi Yin, Wen Li, Jie Shao
Motivated by the idea of meta-augmentation, in this paper we treat a user's preference over items as a task and propose a Diverse Preference Augmentation framework with multiple source domains based on meta-learning (referred to as MetaDPA) to i) generate diverse ratings in a new domain of interest (known as the target domain) to handle overfitting under sparse interactions, and to ii) learn a preference model in the target domain via a meta-learning scheme to alleviate cold-start issues.
1 code implementation • Findings (ACL) 2022 • Bowen Xing, Ivor W. Tsang
To implement our framework, we propose a novel model dubbed DARER, which first generates the context-, speaker- and temporal-sensitive utterance representations via modeling SATG, then conducts recurrent dual-task relational reasoning on DRTG, in which process the estimated label distributions act as key clues in prediction-level interactions.
no code implementations • 15 Dec 2021 • Jing Li, Yuangang Pan, Ivor W. Tsang
The prediction uncertainty is typically expressed as the entropy computed from the transformed probabilities in the output space.
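The entropy in question is the Shannon entropy of the softmax output; a minimal sketch:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()           # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def predictive_entropy(logits):
    """Shannon entropy (in nats) of the softmax distribution over classes."""
    p = softmax(logits)
    return -np.sum(p * np.log(p + 1e-12))

# A uniform distribution maximizes uncertainty; a peaked one minimizes it.
print(predictive_entropy(np.array([0.0, 0.0, 0.0])))   # → log(3) ≈ 1.0986
print(predictive_entropy(np.array([10.0, 0.0, 0.0])))  # close to 0
```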
no code implementations • 26 Nov 2021 • Yinghua Yao, Yuangang Pan, Ivor W. Tsang, Xin Yao
In particular, we simultaneously train two modules: a generator that translates an input image to the desired image with smooth subtle changes with respect to the interested attributes; and a ranker that ranks rival preferences consisting of the input image and the desired image.
no code implementations • 20 Nov 2021 • Zhixiong Yue, Feiyang Ye, Yu Zhang, Christy Liang, Ivor W. Tsang
We theoretically study the safeness of both learning strategies in the DSMTL model to show that the proposed methods can achieve some versions of safe multi-task learning.
1 code implementation • 20 Nov 2021 • Baijiong Lin, Feiyang Ye, Yu Zhang, Ivor W. Tsang
Multi-Task Learning (MTL) has achieved success in various fields.
no code implementations • 24 Sep 2021 • Xiaowei Zhou, Jie Yin, Ivor W. Tsang
Graph neural networks have emerged as a powerful model for graph representation learning to undertake graph-level prediction tasks.
1 code implementation • 2 Sep 2021 • Jinliang Deng, Xiusi Chen, Renhe Jiang, Xuan Song, Ivor W. Tsang
Therefore, there are two fundamental views which can be used to analyze MTS data, namely the spatial view and the temporal view.
1 code implementation • 5 Aug 2021 • Bowen Xing, Ivor W. Tsang
Aspect-level sentiment classification (ASC) aims to predict the fine-grained sentiment polarity towards a given aspect mentioned in a review.
1 code implementation • 14 Jul 2021 • Yinghua Yao, Yuangang Pan, Ivor W. Tsang, Xin Yao
This paper proposes Differential-Critic Generative Adversarial Network (DiCGAN) to learn the distribution of user-desired data when only partial instead of the entire dataset possesses the desired property.
no code implementations • 2 Jul 2021 • Maosen Li, Siheng Chen, Yanning Shen, Genjia Liu, Ivor W. Tsang, Ya Zhang
This paper considers predicting future statuses of multiple agents in an online fashion by exploiting dynamic interactions in the system.
no code implementations • 21 Jun 2021 • Bowen Xing, Ivor W. Tsang
In these aspect-aware context encoders, the semantics of the given aspect is used to regulate the information flow.
no code implementations • 6 May 2021 • Xiaofeng Cao, Ivor W. Tsang
This optimization solver is in general ineffective when the student learner does not disclose any cue of the learning parameters.
no code implementations • 6 May 2021 • Xiaofeng Cao, Ivor W. Tsang
We present geometric Bayesian active learning by disagreements (GBALD), a framework that performs BALD on its core-set construction interacting with model uncertainty estimation.
no code implementations • 23 Nov 2020 • Weiwei Liu, Haobo Wang, Xiaobo Shen, Ivor W. Tsang
Exabytes of data are generated daily by humans, leading to the growing need for new efforts in dealing with the grand challenges for multi-label learning brought by big data.
1 code implementation • 9 Nov 2020 • Bo Han, Quanming Yao, Tongliang Liu, Gang Niu, Ivor W. Tsang, James T. Kwok, Masashi Sugiyama
Classical machine learning implicitly assumes that labels of the training data are sampled from a clean distribution, which can be too restrictive for real-world scenarios.
no code implementations • 2 Nov 2020 • Yan Zhang, Ivor W. Tsang, Lixin Duan
Cold-start has been a critical issue in recommender systems with the explosion of data in e-commerce.
no code implementations • 2 Nov 2020 • Yan Zhang, Ivor W. Tsang, Hongzhi Yin, Guowu Yang, Defu Lian, Jingjing Li
Specifically, we first pre-train robust item representations from item content data by a Denoising Auto-encoder instead of other deterministic deep learning frameworks; then we fine-tune the entire framework by adding a pairwise loss objective with discrete constraints; moreover, DPH aims to minimize a pairwise ranking loss that is consistent with the ultimate goal of recommendation.
no code implementations • NeurIPS 2020 • Yueming Lyu, Yuan Yuan, Ivor W. Tsang
We theoretically prove a lower and an upper bound of the minimum pairwise distance of any non-degenerate rank-1 lattice.
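A rank-1 lattice with n points and generating vector z places points x_i = (i·z mod n)/n in the unit cube; the quantity bounded in the paper is the minimum pairwise (toroidal) distance of such a point set. A brute-force sketch with a hypothetical generating vector:

```python
import numpy as np
from itertools import combinations

def rank1_lattice(n, z):
    """Points x_i = (i * z mod n) / n, i = 0..n-1, in [0,1)^d."""
    i = np.arange(n)[:, None]
    return (i * np.asarray(z) % n) / n

def min_pairwise_distance(points):
    """Minimum Euclidean distance on the torus [0,1)^d (with wrap-around)."""
    best = np.inf
    for a, b in combinations(points, 2):
        d = np.abs(a - b)
        d = np.minimum(d, 1 - d)        # toroidal wrap
        best = min(best, np.linalg.norm(d))
    return best

pts = rank1_lattice(8, [1, 3])          # n=8, generating vector z=(1,3): hypothetical
print(min_pairwise_distance(pts))
```

Because the pairwise differences of a rank-1 lattice form the lattice itself, this minimum equals the smallest toroidal norm of a nonzero lattice point, which is what makes it analytically tractable.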
2 code implementations • NeurIPS 2020 • Maosen Li, Siheng Chen, Ya Zhang, Ivor W. Tsang
Based on trainable hierarchical representations of a graph, GXN enables the interchange of intermediate features across scales to promote information flow.
1 code implementation • 22 Sep 2020 • Hao Zhang, Joey Tianyi Zhou, Tianying Wang, Ivor W. Tsang, Rick Siow Mong Goh
To facilitate the training of N-ary ECOC with deep learning base learners, we further propose three different variants of parameter sharing architectures for deep N-ary ECOC.
1 code implementation • ICML 2020 • Xingrui Yu, Yueming Lyu, Ivor W. Tsang
Thus, our module provides the imitation agent both the intrinsic intention of the demonstrator and a better exploration ability, which is critical for the agent to outperform the demonstrator.
no code implementations • 24 May 2020 • Yaxin Shi, Yuangang Pan, Donna Xu, Ivor W. Tsang
Multi-view alignment, achieving one-to-one correspondence of multi-view inputs, is critical in many real-world multi-view applications, especially for cross-view data analysis problems.
no code implementations • 30 Mar 2020 • Jing Li, Yuangang Pan, Yulei Sui, Ivor W. Tsang
This paper studies, for the first time, how pairwise information can be leaked to attackers during distance metric learning, and develops differential pairwise privacy (DPP), generalizing the definition of standard differential privacy, for secure metric learning.
no code implementations • 9 Feb 2020 • Yang Zhang, Ivor W. Tsang, Jun Li, Ping Liu, Xiaobo Lu, Xin Yu
The coarse-level FHnet generates a frontal coarse HR face and then the fine-level FHnet makes use of the facial component appearance prior, i.e., fine-grained facial components, to attain a frontal HR face image with authentic details.
1 code implementation • 1 Feb 2020 • Zhuanghua Liu, Ivor W. Tsang
However, in state-of-the-art first-order attacks, adversarial examples with sign gradients retain the sign information of each gradient component but discard the relative magnitude between components.
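The sign-gradient construction referred to here is the FGSM family: the perturbation keeps only the sign of each gradient component and discards its magnitude. A minimal sketch on a toy linear model (the setup is hypothetical):

```python
import numpy as np

def fgsm_perturbation(grad_x, eps):
    """Sign-gradient step: keeps each component's sign, discards its magnitude."""
    return eps * np.sign(grad_x)

# Toy linear classifier: loss = -y * (w @ x), so grad wrt x is -y * w.
w = np.array([0.5, -2.0, 0.01])
x = np.array([1.0, 1.0, 1.0])
y = 1.0
grad_x = -y * w
x_adv = x + fgsm_perturbation(grad_x, eps=0.1)
print(x_adv)   # every coordinate moves by exactly ±0.1, even where |grad| is tiny
```

Note how the third coordinate, whose gradient magnitude is only 0.01, is perturbed as strongly as the second: this is exactly the lost relative-magnitude information the sentence describes.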
no code implementations • ICLR 2020 • Yan Zhang, Ivor W. Tsang, Lixin Duan, Guowu Yang
Cold-start and efficiency issues of the Top-k recommendation are critical to large-scale recommender systems.
no code implementations • 9 Oct 2019 • Yueming Lyu, Ivor W. Tsang
Empirically, our method with full matrix updates achieves competitive performance compared with CMA-ES, one of the state-of-the-art methods, on benchmark test problems.
no code implementations • 26 Jul 2019 • Xiaofeng Xu, Ivor W. Tsang, Chuancai Liu
Unfortunately, previous attribute selection methods operate on the seen data, and their selected attributes generalize poorly to the unseen data, which is unavailable during the training stage of ZSL tasks.
3 code implementations • 23 Jul 2019 • Xu Chen, Siheng Chen, Huangjie Zheng, Jiangchao Yao, Kenan Cui, Ya Zhang, Ivor W. Tsang
NANG learns a unifying latent representation which is shared by both node attributes and graph structures and can be translated to different modalities.
no code implementations • 16 Jul 2019 • Xiaowei Zhou, Ivor W. Tsang, Jie Yin
The proposed LAD method improves the robustness of a DNN model through adversarial training on generated adversarial examples.
1 code implementation • 29 May 2019 • Yuangang Pan, WeiJie Chen, Gang Niu, Ivor W. Tsang, Masashi Sugiyama
Specifically, the properties of our CoarsenRank are summarized as follows: (1) CoarsenRank is designed for mild model misspecification, assuming that ideal preferences (consistent with the model assumption) exist in a neighborhood of the actual preferences.
no code implementations • ICLR 2020 • Yueming Lyu, Ivor W. Tsang
Although the 0-1 loss has some robust properties, it is difficult to optimize.
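The difficulty is that the 0-1 loss is piecewise constant, so its gradient is zero almost everywhere; convex surrogates such as the hinge loss are usually optimized instead. A small comparison sketch:

```python
import numpy as np

def zero_one_loss(margin):
    """1 if the prediction is wrong (margin <= 0), else 0; gradient is 0 a.e."""
    return (margin <= 0).astype(float)

def hinge_loss(margin):
    """Convex upper bound on the 0-1 loss, with useful gradients."""
    return np.maximum(0.0, 1.0 - margin)

margins = np.array([-2.0, -0.5, 0.5, 2.0])
print(zero_one_loss(margins))   # → [1. 1. 0. 0.]
print(hinge_loss(margins))      # → [3.  1.5 0.5 0. ]
# hinge >= zero-one everywhere, so minimizing hinge bounds the 0-1 risk.
```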
no code implementations • 24 May 2019 • Yueming Lyu, Yuan Yuan, Ivor W. Tsang
In this work, we investigate black-box optimization from the perspective of frequentist kernel methods.
no code implementations • ICLR 2019 • Yuan Yuan, Yueming Lyu, Xi Shen, Ivor W. Tsang, Dit-yan Yeung
The MAAN employs a novel marginalized average aggregation (MAA) module and learns a set of latent discriminative probabilities in an end-to-end fashion.
Ranked #11 on Weakly Supervised Action Localization on ActivityNet-1.3 (mAP@0.5 metric)
no code implementations • 20 May 2019 • Xiaofeng Xu, Ivor W. Tsang, Xiaofeng Cao, Ruiheng Zhang, Chuancai Liu
In most existing attribute-based research, class-specific attributes (CSA), which are class-level annotations, are usually adopted due to their low annotation cost: each class, rather than each individual image, is annotated.
no code implementations • ICLR 2019 • Yuangang Pan, Avinash K Singh, Ivor W. Tsang, Chin-Teng Lin
Furthermore, a transition matrix is introduced to characterize the reliability of each channel used in EEG data, which helps in learning brain dynamics preferences only from informative EEG channels.
1 code implementation • 6 Mar 2019 • Jiangchao Yao, Ya Zhang, Ivor W. Tsang, Jun Sun
We further generalize LCCN for open-set noisy labels and the semi-supervised setting.
Ranked #34 on Image Classification on Clothing1M (using extra training data)
3 code implementations • 14 Jan 2019 • Xingrui Yu, Bo Han, Jiangchao Yao, Gang Niu, Ivor W. Tsang, Masashi Sugiyama
Learning with noisy labels is one of the hottest problems in weakly-supervised learning.
Ranked #13 on Learning with noisy labels on CIFAR-100N
no code implementations • 2 Jan 2019 • Donna Xu, Yaxin Shi, Ivor W. Tsang, Yew-Soon Ong, Chen Gong, Xiaobo Shen
Multi-output learning aims to simultaneously predict multiple outputs given an input.
no code implementations • 30 Sep 2018 • Bo Han, Ivor W. Tsang, Xiaokui Xiao, Ling Chen, Sai-fu Fung, Celina P. Yu
PRESTIGE bridges private updates of the primal variable (by private sampling) with the gradual curriculum learning (CL).
no code implementations • 28 Sep 2018 • Xiaofeng Cao, Ivor W. Tsang, Xiaofeng Xu, Guandong Xu
By discovering the connections between hypothesis and input distribution, we map the volume of version space into the number density and propose a target-independent distribution-splitting strategy with the following advantages: 1) provide theoretical guarantees on reducing label complexity and error rate as volume-splitting; 2) break the curse of initial hypothesis; 3) provide model guidance for a target-independent AL algorithm in real AL tasks.
no code implementations • 22 Aug 2018 • Xi Peng, Yunnan Li, Ivor W. Tsang, Hongyuan Zhu, Jiancheng Lv, Joey Tianyi Zhou
The second is implementing discrete $k$-means with a differentiable neural network that embraces the advantages of parallel computing, online clustering, and clustering-favorable representation learning.
no code implementations • 24 Jul 2018 • Xiaofeng Cao, Ivor W. Tsang, Guandong Xu
In this paper, we approximate the version space to a structured hypersphere that covers most of the hypotheses, and then divide the available AL sampling approaches into two kinds of strategies: Outer Volume Sampling and Inner Volume Sampling.
no code implementations • 10 Jul 2018 • Huangjie Zheng, Jiangchao Yao, Ya Zhang, Ivor W. Tsang, Jia Wang
In information theory, Fisher information and Shannon information (entropy) are respectively used to quantify the uncertainty associated with the distribution modeling and the uncertainty in specifying the outcome of given variables.
no code implementations • 23 May 2018 • Miao Xu, Gang Niu, Bo Han, Ivor W. Tsang, Zhi-Hua Zhou, Masashi Sugiyama
We consider a challenging multi-label classification problem where both the feature matrix $\mathbf{X}$ and the label matrix $\mathbf{Y}$ have missing entries.
no code implementations • 3 May 2018 • Yaxin Shi, Donna Xu, Yuangang Pan, Ivor W. Tsang, Shirui Pan
In this paper, we propose a general Partial Heterogeneous Context Label Embedding (PHCLE) framework to address these challenges.
no code implementations • 17 Apr 2018 • Xiaofeng Xu, Ivor W. Tsang, Chuancai Liu
Extensive experiments on five ZSL benchmark datasets and the large-scale ImageNet dataset demonstrate that the proposed complementary attributes and rank aggregation can significantly and robustly improve existing ZSL methods and achieve the state-of-the-art performance.
1 code implementation • 26 Feb 2018 • Fanhua Shang, Kaiwen Zhou, Hongying Liu, James Cheng, Ivor W. Tsang, Lijun Zhang, DaCheng Tao, Licheng Jiao
In this paper, we propose a simple variant of the original SVRG, called variance reduced stochastic gradient descent (VR-SGD).
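SVRG, the method VR-SGD builds on, reduces gradient variance by anchoring each stochastic gradient at a periodically recomputed full gradient. A minimal least-squares sketch of the basic SVRG update (illustrative of the general scheme, not of the paper's VR-SGD variant):

```python
import numpy as np

def svrg(A, b, lr=0.1, epochs=100, rng=None):
    """Minimize (1/2n)||A w - b||^2 with basic SVRG updates."""
    rng = rng or np.random.default_rng(0)
    n, d = A.shape
    w = np.zeros(d)
    grad_i = lambda w, i: A[i] * (A[i] @ w - b[i])      # per-sample gradient
    for _ in range(epochs):
        w_snap = w.copy()
        full = A.T @ (A @ w_snap - b) / n               # full gradient at snapshot
        for _ in range(n):
            i = rng.integers(n)
            # variance-reduced direction: unbiased, and it shrinks to zero
            # as both w and the snapshot approach the optimum
            g = grad_i(w, i) - grad_i(w_snap, i) + full
            w -= lr * g
    return w

# Consistent system: the least-squares solution is exactly [1, 2].
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
w = svrg(A, b)
```

The correction term `- grad_i(w_snap, i) + full` is what lets SVRG use a constant step size where plain SGD would need a decaying one.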
no code implementations • 19 Feb 2018 • Huangjie Zheng, Jiangchao Yao, Ya Zhang, Ivor W. Tsang
While enormous progress has been made on Variational Autoencoders (VAEs) in recent years, VAEs with deep networks, like other deep models, suffer from degeneration, which seriously weakens the correlation between the input and the corresponding latent codes and deviates from the goal of representation learning.
no code implementations • 29 Nov 2017 • Donna Xu, Ivor W. Tsang, Ying Zhang
The experiments demonstrate that our online PQ model is both time-efficient and effective for ANN search in dynamic large-scale databases compared with baseline methods, and that partial PQ codebook updates further reduce the update cost.
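Product quantization (PQ) splits each vector into subvectors and quantizes each against its own small codebook; a minimal encode/decode sketch (the codebooks below are fixed by hand for illustration, not learned online as in the paper):

```python
import numpy as np

# Two subspaces of dimension 2; each has a tiny hypothetical codebook of 4 centroids.
codebooks = [np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]),
             np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 1.0], [2.0, 2.0]])]

def pq_encode(x, codebooks):
    """Return one centroid index per subspace (the compact PQ code)."""
    subs = np.split(x, len(codebooks))
    return [int(np.argmin(np.linalg.norm(cb - s, axis=1)))
            for cb, s in zip(codebooks, subs)]

def pq_decode(code, codebooks):
    """Reconstruct an approximate vector from its PQ code."""
    return np.concatenate([cb[i] for cb, i in zip(codebooks, code)])

x = np.array([0.9, 0.1, 0.6, 0.4])
code = pq_encode(x, codebooks)       # → [2, 1]
print(pq_decode(code, codebooks))    # → [1.  0.  0.5 0.5]
```

A "partial codebook update" in this picture means re-learning only some of the per-subspace codebooks when the data distribution drifts, rather than all of them.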
no code implementations • 13 May 2016 • Joey Tianyi Zhou, Xinxing Xu, Sinno Jialin Pan, Ivor W. Tsang, Zheng Qin, Rick Siow Mong Goh
Specifically, we extend the standard learning to hash method, Iterative Quantization (ITQ), in a transfer learning manner, namely ITQ+.
no code implementations • 5 May 2016 • Bo Han, Ivor W. Tsang, Ling Chen
The convergence of Stochastic Gradient Descent (SGD) using convex loss functions has been widely studied.
no code implementations • 18 Mar 2016 • Joey Tianyi Zhou, Ivor W. Tsang, Shen-Shyang Ho, Klaus-Robert Muller
The coding matrix design plays a fundamental role in the prediction performance of the error correcting output codes (ECOC)-based multi-class task.
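In ECOC, the coding matrix assigns each class a codeword over binary dichotomies, and prediction decodes the base learners' outputs to the nearest codeword. A minimal binary decoding sketch with a hypothetical 4-class matrix:

```python
import numpy as np

# Hypothetical coding matrix: 4 classes x 5 binary dichotomizers (+1 / -1).
M = np.array([[+1, +1, +1, +1, +1],
              [-1, -1, +1, +1, -1],
              [+1, -1, -1, +1, -1],
              [-1, +1, -1, -1, +1]])

def ecoc_decode(outputs, M):
    """Pick the class whose codeword has minimum Hamming distance to outputs."""
    hamming = np.sum(outputs != M, axis=1)
    return int(np.argmin(hamming))

# Even with one base learner's bit flipped, decoding still recovers class 2.
noisy_outputs = np.array([+1, -1, -1, -1, -1])   # class 2's codeword, bit 4 flipped
print(ecoc_decode(noisy_outputs, M))             # → 2
```

The design question the paper studies is how to choose M so that codewords are well separated, since larger row separation tolerates more base-learner errors.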
no code implementations • 9 Dec 2015 • Qi Mao, Li Wang, Ivor W. Tsang, Yijun Sun
As showcases, models that can learn a spanning tree or a weighted undirected $\ell_1$ graph are proposed, and a new learning algorithm is developed that learns a set of principal points and a graph structure from data, simultaneously.
no code implementations • CVPR 2014 • Zhongwen Xu, Ivor W. Tsang, Yi Yang, Zhigang Ma, Alexander G. Hauptmann
We address the challenging problem of utilizing related exemplars for complex event detection while multiple features are available.
no code implementations • 6 Mar 2013 • Yu-Feng Li, Ivor W. Tsang, James T. Kwok, Zhi-Hua Zhou
In this paper, we study the problem of learning from weakly labeled data, where labels of the training examples are incomplete.
no code implementations • 20 Feb 2013 • Mingkui Tan, Ivor W. Tsang, Li Wang
In Part I \cite{TanPMLPart1}, a Matching Pursuit LASSO ({MPL}) algorithm has been presented for solving large-scale sparse recovery (SR) problems.
no code implementations • 24 Sep 2012 • Mingkui Tan, Ivor W. Tsang, Li Wang
In this paper, we present a new adaptive feature scaling scheme for ultrahigh-dimensional feature selection on Big Data.
no code implementations • 5 Mar 2011 • Qi Mao, Ivor W. Tsang
Analyses comparing with state-of-the-art feature selection methods show that the proposed method is superior to others.
no code implementations • 4 Mar 2011 • Qi Mao, Ivor W. Tsang
To alleviate this issue, in this paper, we propose a novel multiple template learning paradigm to learn structured prediction and the importance of each template simultaneously, so that hundreds of arbitrary templates could be added into the learning model without caution.