no code implementations • EACL (WASSA) 2021 • Wazir Ali, Naveed Ali, Yong Dai, Jay Kumar, Saifullah Tumrani, Zenglin Xu
In this paper, we develop a Sindhi subjective lexicon by merging existing English resources: the NRC lexicon, a list of opinion words, SentiWordNet, a Sindhi-English bilingual dictionary, and a collection of Sindhi modifiers.
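The abstract only names the resources being merged; the following is a minimal sketch of what such a dictionary-based transfer could look like, assuming simple in-memory lookups. The data structures and scoring rule are illustrative, not the authors' actual pipeline.

```python
# Hypothetical sketch: project English polarity lexicons into Sindhi
# through a bilingual dictionary. Resource formats are assumed.

def build_sindhi_lexicon(nrc_lexicon, opinion_words, sentiwordnet, en_sd_dict):
    """nrc_lexicon / sentiwordnet: dict en_word -> polarity score in [-1, 1]
    opinion_words: dict en_word -> 'positive' | 'negative'
    en_sd_dict: dict en_word -> list of Sindhi translations."""
    sindhi_lexicon = {}
    for en_word, translations in en_sd_dict.items():
        # Collect every polarity signal available for the English word.
        scores = []
        if en_word in nrc_lexicon:
            scores.append(nrc_lexicon[en_word])
        if en_word in sentiwordnet:
            scores.append(sentiwordnet[en_word])
        if en_word in opinion_words:
            scores.append(1.0 if opinion_words[en_word] == 'positive' else -1.0)
        if not scores:
            continue
        avg = sum(scores) / len(scores)
        # Transfer the averaged polarity to each Sindhi translation.
        for sd_word in translations:
            sindhi_lexicon.setdefault(sd_word, []).append(avg)
    # Resolve words that received multiple scores by averaging again.
    return {w: sum(v) / len(v) for w, v in sindhi_lexicon.items()}
```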
no code implementations • RANLP 2021 • Wazir Ali, Zenglin Xu, Jay Kumar
In this paper, we introduce the SiPOS dataset for part-of-speech tagging in the low-resource Sindhi language with quality baselines.
no code implementations • 21 Jan 2023 • Zhe Li, Zhongwen Rao, Lujia Pan, Pengyun Wang, Zenglin Xu
Multivariate time series forecasting has become an increasingly popular topic in various applications and scenarios.
Contrastive Learning • Multivariate Time Series Forecasting • +1
1 code implementation • 20 Dec 2022 • Zhuo Zhang, Yuanhang Yang, Yong Dai, Lizhen Qu, Zenglin Xu
To facilitate the research of PETuning in FL, we also develop a federated tuning framework FedPETuning, which allows practitioners to exploit different PETuning methods under the FL training paradigm conveniently.
no code implementations • 19 Dec 2022 • Zi Gong, Yinpeng Guo, Pingyi Zhou, Cuiyun Gao, Yasheng Wang, Zenglin Xu
On the other hand, few studies have explored the effects of multi-programming-lingual (MultiPL) pre-training on code completion, especially its impact on low-resource programming languages.
1 code implementation • 30 Oct 2022 • Jing Xu, Xu Luo, Xinglin Pan, Wenjie Pei, Yanan Li, Zenglin Xu
In this paper, we find that this problem usually occurs when the positions of support samples are in the vicinity of task centroid -- the mean of all class centroids in the task.
no code implementations • 11 Oct 2022 • Yuanhang Yang, Shiyi Qi, Cuiyun Gao, Zenglin Xu, Yulan He, Qifan Wang, Chuanyi Liu
Transformer-based models have achieved great success on sentence pair modeling tasks, such as answer selection and natural language inference (NLI).
no code implementations • 11 Oct 2022 • Terry Yue Zhuo, Yaqing Liao, Yuecheng Lei, Lizhen Qu, Gerard de Melo, Xiaojun Chang, Yazhou Ren, Zenglin Xu
We introduce ViLPAct, a novel vision-language benchmark for human activity planning.
no code implementations • 16 Jun 2022 • Langzhang Liang, Zenglin Xu, Zixing Song, Irwin King, Jieping Ye
In detail, by studying the long-tailed distribution of node degrees in the graph, we propose a novel normalization method for GNNs, termed ResNorm (Reshaping the long-tailed distribution into a normal-like distribution via normalization).
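The snippet gives only the name and motivation. Below is a minimal, hypothetical PyTorch sketch of degree-aware feature scaling that illustrates the general idea of reshaping the influence of node degree; it is not the ResNorm formulation from the paper.

```python
import torch

def degree_aware_normalize(h, deg, alpha=0.5, eps=1e-6):
    """Toy degree-aware normalization for GNN node features.

    h:   (num_nodes, dim) node representations
    deg: (num_nodes,) node degrees
    alpha controls how strongly high-degree nodes are down-weighted.
    Illustrative stand-in only, not the paper's ResNorm."""
    # Standardize each feature dimension across nodes.
    h = (h - h.mean(dim=0, keepdim=True)) / (h.std(dim=0, keepdim=True) + eps)
    # Rescale per node so that high-degree (head) nodes do not dominate.
    scale = deg.clamp(min=1).float().pow(-alpha).unsqueeze(-1)
    return h * scale
```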
1 code implementation • 16 Jun 2022 • Xu Luo, Jing Xu, Zenglin Xu
When facing novel few-shot tasks in the test-time datasets, this transformation can greatly improve the generalization ability of learned image representations, while being agnostic to the choice of training algorithms and datasets.
1 code implementation • 28 May 2022 • Yu Pan, Zeyong Su, Ao Liu, Jingquan Wang, Nannan Li, Zenglin Xu
To address this problem, we propose a universal weight initialization paradigm, which generalizes the Xavier and Kaiming methods and is widely applicable to arbitrary TCNNs.
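As a rough illustration of the variance-matching idea behind such an initialization (the paper's exact rule may differ): if a convolution weight is factorized into two low-rank factors and the reconstructed weight should have the Kaiming target variance 2/fan_in, each zero-mean factor can be given variance sqrt((2/fan_in)/rank) so that their product recovers the target.

```python
import torch

def init_factorized_conv(out_ch, in_ch, k, rank):
    """Hypothetical sketch: initialize W = A @ B (a rank-`rank` factorization
    of a k x k conv kernel) so that the reconstructed weight matches the
    Kaiming target variance 2 / fan_in. Not the paper's exact paradigm."""
    fan_in = in_ch * k * k
    target_var = 2.0 / fan_in                 # Kaiming (ReLU) target variance
    # Var(W_ij) = rank * Var(A) * Var(B) for zero-mean independent factors,
    # so give each factor variance sqrt(target_var / rank).
    factor_std = (target_var / rank) ** 0.25
    A = torch.randn(out_ch, rank) * factor_std
    B = torch.randn(rank, in_ch * k * k) * factor_std
    W = (A @ B).reshape(out_ch, in_ch, k, k)  # reconstructed kernel
    return A, B, W
```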
no code implementations • 26 May 2022 • Dun Zeng, Shiyu Liu, Siqi Liang, Zonghang Li, Zenglin Xu
Malicious attackers and an honest-but-curious server can steal private client data from uploaded gradients in federated learning.
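For readers unfamiliar with how gradients leak data, here is a compact sketch of a "deep leakage from gradients"-style gradient-matching attack in PyTorch: the attacker optimizes dummy inputs so that their gradients match the gradients a client uploaded. The model, shapes, optimizer, and step counts are placeholders, not the attack studied in the paper.

```python
import torch
import torch.nn.functional as F

def gradient_inversion(model, target_grads, x_shape, num_classes, steps=300):
    """Sketch of a gradient-matching attack: recover a client's input by
    optimizing dummy data until its gradients match the uploaded ones."""
    dummy_x = torch.randn(x_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)
    opt = torch.optim.Adam([dummy_x, dummy_y], lr=0.1)

    for _ in range(steps):
        opt.zero_grad()
        pred = model(dummy_x)
        loss = F.cross_entropy(pred, dummy_y.softmax(dim=-1))
        dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Match the dummy gradients to the intercepted client gradients.
        match = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, target_grads))
        match.backward()
        opt.step()
    return dummy_x.detach(), dummy_y.detach()
```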
no code implementations • 5 May 2022 • Fangfei Lin, Bing Bai, Kun Bai, Yazhou Ren, Peng Zhao, Zenglin Xu
Then, we embed the representations into a hyperbolic space and optimize the hyperbolic embeddings via a continuous relaxation of hierarchical clustering loss.
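As background for the hyperbolic step, the sketch below shows the Poincaré-ball distance that such embeddings typically optimize; it is a generic formulation, not the paper's specific clustering loss. Roughly speaking, points near the ball's boundary behave like deeper nodes of a hierarchy, which is why a continuous relaxation of a hierarchical clustering objective can be optimized directly on these distances.

```python
import torch

def poincare_distance(u, v, eps=1e-5):
    """Distance between points u, v inside the unit Poincare ball:
    d(u, v) = arccosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))"""
    sq_u = (u * u).sum(-1).clamp(max=1 - eps)
    sq_v = (v * v).sum(-1).clamp(max=1 - eps)
    sq_dist = ((u - v) ** 2).sum(-1)
    x = 1 + 2 * sq_dist / ((1 - sq_u) * (1 - sq_v))
    return torch.acosh(x.clamp(min=1 + eps))
```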
1 code implementation • 12 Mar 2022 • Linyang Li, Yong Dai, Duyu Tang, Xipeng Qiu, Zenglin Xu, Shuming Shi
We present a Chinese BERT model dubbed MarkBERT that uses word information in this work.
Chinese Named Entity Recognition • Named Entity Recognition • +6
no code implementations • 17 Feb 2022 • Jingquan Wang, Jing Xu, Yu Pan, Zenglin Xu
Few-shot learning aims to classify unseen classes with only a limited number of labeled data.
1 code implementation • 14 Feb 2022 • Zi Gong, Cuiyun Gao, Yasheng Wang, Wenchao Gu, Yun Peng, Zenglin Xu
We further show how the proposed SCRIPT captures the structural relative dependencies.
1 code implementation • 3 Feb 2022 • Zonghang Li, Yihong He, Hongfang Yu, Jiawen Kang, Xiaoping Li, Zenglin Xu, Dusit Niyato
In this paper, we propose FedGS, a hierarchical cloud-edge-end FL framework for 5G-empowered industries, to improve industrial FL performance on non-i.i.d. data.
no code implementations • 31 Jan 2022 • Shenglai Zeng, Zonghang Li, Hongfang Yu, Yihong He, Zenglin Xu, Dusit Niyato, Han Yu
In this paper, we propose a data heterogeneity-robust FL approach, FedGSP, to address this challenge by leveraging a novel concept of dynamic Sequential-to-Parallel (STP) collaborative training.
no code implementations • 14 Dec 2021 • Jing Xu, Xinglin Pan, Xu Luo, Wenjie Pei, Zenglin Xu
To alleviate this problem, we present a simple yet effective feature rectification method by exploring the category correlation between novel and base classes as the prior knowledge.
1 code implementation • 13 Dec 2021 • Lili Pan, Mingming Meng, Yazhou Ren, Yali Zheng, Zenglin Xu
To answer this question, this paper proposes a new SPL method: easy and underrepresented examples first, for learning DDMs.
no code implementations • 9 Nov 2021 • Chaozheng Wang, Shuzheng Gao, Cuiyun Gao, Pengyun Wang, Wenjie Pei, Lujia Pan, Zenglin Xu
Real-world data usually present long-tailed distributions.
no code implementations • 18 Oct 2021 • Langzhang Liang, Cuiyun Gao, Shiyi Chen, Shishi Duan, Yu Pan, Junjin Zheng, Lei Wang, Zenglin Xu
Graph Convolutional Networks (GCNs) are powerful for processing graph-structured data and have achieved state-of-the-art performance in several tasks such as node classification, link prediction, and graph classification.
1 code implementation • 24 Jul 2021 • Dun Zeng, Siqi Liang, Xiangjing Hu, Hui Wang, Zenglin Xu
Federated learning (FL) is a machine learning field in which researchers aim to facilitate the model learning process among multiple parties without violating privacy protection regulations.
1 code implementation • NeurIPS 2021 • Qingzhong Ai, Lirong He, Shiyu Liu, Zenglin Xu
To address this issue, we propose Bayesian Pseudocoresets Exemplar VAE (ByPE-VAE), a new variant of VAE with a prior based on Bayesian pseudocoreset.
1 code implementation • 20 Jul 2021 • Xu Luo, Yuxuan Chen, Liangjian Wen, Lili Pan, Zenglin Xu
The goal of few-shot classification is to classify new categories with few labeled examples within each class.
no code implementations • 20 Jul 2021 • Qingzhong Ai, Shiyu Liu, Lirong He, Zenglin Xu
In practice, we notice that the kernel used in SVGD-based methods has a decisive effect on the empirical performance.
1 code implementation • NeurIPS 2021 • Xu Luo, Longhui Wei, Liangjian Wen, Jinrong Yang, Lingxi Xie, Zenglin Xu, Qi Tian
The category gap between training and evaluation has been characterised as one of the main obstacles to the success of Few-Shot Learning (FSL).
1 code implementation • 16 Jun 2021 • Xianghong Fang, Haoli Bai, Jian Li, Zenglin Xu, Michael Lyu, Irwin King
We further design discrete latent space for the variational attention and mathematically show that our model is free from posterior collapse.
no code implementations • 10 May 2021 • Xinglin Pan, Jing Xu, Yu Pan, Liangjian Wen, WenXiang Lin, Kun Bai, Zenglin Xu
Convolutional Neural Networks (CNNs) have achieved tremendous success in a number of learning tasks including image classification.
no code implementations • 9 May 2021 • Yong Dai, Jian Liu, Jian Zhang, Hongguang Fu, Zenglin Xu
The first mechanism is a selective domain adaptation (SDA) method, which transfers knowledge from the closest source domain.
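The snippet does not say how "closest" is measured; one simple, hypothetical way to operationalize it is a feature-mean (linear-kernel MMD) distance between each source domain and the target, sketched below. This is an illustrative proxy, not necessarily the paper's criterion.

```python
import numpy as np

def pick_closest_source(source_feats, target_feats):
    """source_feats: dict name -> (n_i, d) feature matrix of each source domain
    target_feats:    (m, d) feature matrix of the target domain.
    Closeness is measured as the distance between mean feature vectors
    (a linear-kernel MMD); smaller means closer."""
    target_mean = target_feats.mean(axis=0)
    distances = {name: np.linalg.norm(feats.mean(axis=0) - target_mean)
                 for name, feats in source_feats.items()}
    closest = min(distances, key=distances.get)
    return closest, distances
```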
1 code implementation • 11 Apr 2021 • Yu Pan, Maolin Wang, Zenglin Xu
Tensor Decomposition Networks (TDNs) prevail for their inherent compact architectures.
1 code implementation • 8 Apr 2021 • Juncheng Lv, Zhao Kang, Xiao Lu, Zenglin Xu
To tackle these problems, we use pairwise similarity to weight the reconstruction loss to capture local structure information, while the similarity is learned by the self-expression layer.
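One plausible reading of this construction, sketched here with placeholder tensors: the self-expression layer learns a coefficient matrix C with Z ≈ CZ, a symmetric similarity is derived from C, and that similarity weights a pairwise reconstruction loss so that locally similar points are reconstructed consistently. This is an illustrative interpretation, not the authors' exact objective.

```python
import torch

def self_expressive_losses(z, x, x_recon, C, lam=1.0):
    """z:          (n, d) latent codes from the encoder
    x, x_recon:    (n, p) inputs and their decoded reconstructions
    C:             (n, n) learnable self-expression coefficients (diagonal ~ 0)."""
    # Self-expression: every code should be a combination of the others.
    selfexp = ((z - C @ z) ** 2).sum() + lam * (C ** 2).sum()
    # Pairwise similarity derived from the learned coefficients.
    S = (C.abs() + C.abs().t()) / 2
    # Weight the reconstruction error of each pair by its similarity,
    # encouraging locally similar samples to share reconstructions.
    pair_err = ((x.unsqueeze(1) - x_recon.unsqueeze(0)) ** 2).sum(-1)  # (n, n)
    weighted_recon = (S * pair_err).sum() / S.sum().clamp(min=1e-8)
    return selfexp, weighted_recon
```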
no code implementations • 10 Mar 2021 • Ping Guo, Kaizhu Huang, Zenglin Xu
In this work, we generalize the reaction-diffusion equation from statistical physics, the Schrödinger equation from quantum mechanics, and the Helmholtz equation from paraxial optics into neural partial differential equations (NPDEs), which can be considered the fundamental equations in the field of artificial intelligence research.
no code implementations • 5 Mar 2021 • Lili Pan, Peijun Tang, Zhiyong Chen, Zenglin Xu
Disentanglement is defined as the problem of learning a representation that can separate the distinct, informative factors of variation in data.
no code implementations • 28 Feb 2021 • Xiangli Yang, Zixing Song, Irwin King, Zenglin Xu
Deep semi-supervised learning is a fast-growing field with a range of practical applications.
1 code implementation • 26 Feb 2021 • Zixing Song, Xiangli Yang, Zenglin Xu, Irwin King
An important class of SSL methods naturally represents data as graphs such that the label information of unlabelled samples can be inferred from the graphs; this corresponds to graph-based semi-supervised learning (GSSL) methods.
1 code implementation • 25 Jan 2021 • Jing Xu, Tszhang Guo, Yong Xu, Zenglin Xu, Kun Bai
Deep Convolutional Neural Networks (DCNNs) and their variants have been widely used in large-scale face recognition (FR) recently.
3 code implementations • 3 Jan 2021 • Jing Xu, Yu Pan, Xinglin Pan, Steven Hoi, Zhang Yi, Zenglin Xu
The ResNet and its variants have achieved remarkable successes in various computer vision tasks.
Ranked #3 on Medical Image Classification on NCT-CRC-HE-100K
no code implementations • 1 Jan 2021 • Xinglin Pan, Jing Xu, Yu Pan, WenXiang Lin, Liangjian Wen, Zenglin Xu
Convolutional Neural Networks (CNNs) have achieved tremendous success in a number of learning tasks, e.g., image classification.
no code implementations • 1 Jan 2021 • Xu Luo, Yuxuan Chen, Liangjian Wen, Lili Pan, Zenglin Xu
Few-shot learning aims to recognize new classes with few annotated instances within each category.
no code implementations • 30 Dec 2020 • Wazir Ali, Jay Kumar, Zenglin Xu, Congjian Luo, Junyu Lu, Junming Shao, Rajesh Kumar, Yazhou Ren
Word segmentation is a fundamental and indispensable prerequisite for many languages.
no code implementations • 10 Oct 2020 • Jinmian Ye, Guangxi Li, Di Chen, Haiqin Yang, Shandian Zhe, Zenglin Xu
Deep neural networks (DNNs) have achieved outstanding performance in a wide range of applications, e.g., image classification and natural language processing.
no code implementations • 22 Sep 2020 • Nannan Li, Yu Pan, Yaran Chen, Zixiang Ding, Dongbin Zhao, Zenglin Xu
Interestingly, we discover that part of the rank elements is sensitive and usually aggregate in a narrow region, namely an interest region.
no code implementations • 31 Aug 2020 • Zhao Kang, Chong Peng, Qiang Cheng, Xinwang Liu, Xi Peng, Zenglin Xu, Ling Tian
Furthermore, most existing graph-based methods conduct clustering and semi-supervised classification on the graph learned from the original data matrix, which does not have an explicit cluster structure; thus, they might not achieve optimal performance.
1 code implementation • 26 Jul 2020 • Jie Xu, Yazhou Ren, Guofeng Li, Lili Pan, Ce Zhu, Zenglin Xu
Firstly, the embedded representations of multiple views are learned individually by deep autoencoders.
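The first step of this pipeline can be pictured as one autoencoder per view whose bottlenecks are later fused and clustered; a minimal PyTorch sketch of that per-view stage follows, with layer sizes and view names as placeholders.

```python
import torch
import torch.nn as nn

class ViewAutoencoder(nn.Module):
    """One autoencoder per view; the bottleneck z is the view-specific
    embedding that later stages would fuse and cluster."""
    def __init__(self, in_dim, hidden_dim=256, z_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                                     nn.Linear(hidden_dim, z_dim))
        self.decoder = nn.Sequential(nn.Linear(z_dim, hidden_dim), nn.ReLU(),
                                     nn.Linear(hidden_dim, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

# One autoencoder per view, each trained on its own reconstruction loss.
views = {"view_a": 784, "view_b": 512}          # placeholder dimensions
autoencoders = {name: ViewAutoencoder(d) for name, d in views.items()}
```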
1 code implementation • 14 Jul 2020 • Lun Yiu Nie, Cuiyun Gao, Zhicong Zhong, Wai Lam, Yang Liu, Zenglin Xu
In this paper, we propose a novel Contextualized code representation learning strategy for commit message Generation (CoreGen).
no code implementations • 11 Jul 2020 • Zhao Kang, Xiao Lu, Jian Liang, Kun Bai, Zenglin Xu
In this work, we propose a new representation learning method that explicitly models and leverages sample relations, which in turn is used as supervision to guide the representation learning.
no code implementations • 10 Jun 2020 • Yong Dai, Jian Liu, Xiancong Ren, Zenglin Xu
Existing MS-UDA algorithms either exploit only the shared features, i.e., the domain-invariant information, or rely on weak assumptions in NLP, e.g., the smoothness assumption.
Multi-Source Unsupervised Domain Adaptation • Sentiment Analysis • +1
no code implementations • 12 May 2020 • Yitian Li, Ruini Xue, Mengmeng Zhu, Jing Xu, Zenglin Xu
Many complex network structures have been proposed recently, and many of them concentrate on multi-branch features to achieve high performance.
1 code implementation • ICLR 2020 • Liangjian Wen, Yiji Zhou, Lirong He, Mingyuan Zhou, Zenglin Xu
To this end, we propose the Mutual Information Gradient Estimator (MIGE) for representation learning based on the score estimation of implicit distributions.
no code implementations • LREC 2020 • Wazir Ali, Junyu Lu, Zenglin Xu
We introduce SiNER: a named entity recognition (NER) dataset for the low-resource Sindhi language with quality baselines.
no code implementations • 21 Apr 2020 • Xianghong Fang, Haoli Bai, Zenglin Xu, Michael Lyu, Irwin King
Variational autoencoders have been widely applied to natural language generation; however, there are two long-standing problems: information under-representation and posterior collapse.
no code implementations • ECCV 2020 • Lili Pan, Shijie Ai, Yazhou Ren, Zenglin Xu
Deep discriminative models (e.g., deep regression forests, deep neural decision forests) have recently achieved remarkable success in solving problems such as facial age estimation and head pose estimation.
no code implementations • 3 Dec 2019 • Zhao Kang, Xiao Lu, Yiwei Lu, Chong Peng, Zenglin Xu
Leveraging the underlying low-dimensional structure of data, low-rank and sparse modeling approaches have achieved great success in a wide range of applications.
no code implementations • 3 Dec 2019 • Juncheng Lv, Zhao Kang, Boyu Wang, Luping Ji, Zenglin Xu
Multi-view clustering is an important approach to analyze multi-view data in an unsupervised way.
no code implementations • 28 Nov 2019 • Wazir Ali, Jay Kumar, Junyu Lu, Zenglin Xu
Our intrinsic evaluation results demonstrate the high quality of our generated Sindhi word embeddings using SG, CBoW, and GloVe compared to SdfastText word representations.
2 code implementations • 21 Nov 2019 • Zhao Kang, Wangtao Zhou, Zhitong Zhao, Junming Shao, Meng Han, Zenglin Xu
A plethora of multi-view subspace clustering (MVSC) methods have been proposed over the past few years.
no code implementations • 20 Nov 2019 • Lirong He, Ziyi Guo, Kai-Zhu Huang, Zenglin Xu
In a worst-case scenario, MPM tries to minimize an upper bound of misclassification probabilities, considering the global information (i.e., the mean and covariance information of each class).
no code implementations • 15 Nov 2019 • Shufei Zhang, Kai-Zhu Huang, Zenglin Xu
We propose to exploit an energy function to describe the stability and prove that reducing such energy guarantees the robustness against adversarial examples.
1 code implementation • 16 Sep 2019 • Zhao Kang, Guoxin Shi, Shudong Huang, Wenyu Chen, Xiaorong Pu, Joey Tianyi Zhou, Zenglin Xu
Most existing methods don't pay attention to the quality of the graphs and perform graph learning and spectral clustering separately.
1 code implementation • 13 Sep 2019 • Zhao Kang, Zipeng Guo, Shudong Huang, Siying Wang, Wenyu Chen, Yuanzhang Su, Zenglin Xu
Most existing multi-view clustering methods explore the heterogeneous information in the space where the data points lie.
1 code implementation • ACL 2019 • Junyu Lu, Chenbin Zhang, Zeying Xie, Guang Ling, Tom Chao Zhou, Zenglin Xu
Response selection plays an important role in fully automated dialogue systems.
no code implementations • 17 Jun 2019 • Liangjian Wen, Xuanyang Zhang, Haoli Bai, Zenglin Xu
Recurrent neural networks (RNNs) have recently achieved remarkable successes in a number of applications.
no code implementations • 21 May 2019 • Zhao Kang, Honghui Xu, Boyu Wang, Hongyuan Zhu, Zenglin Xu
A key step of graph-based approach is the similarity graph construction.
1 code implementation • CVPR 2019 • Jian Liang, Yuren Cao, Chenbin Zhang, Shiyu Chang, Kun Bai, Zenglin Xu
Authentication is the task of confirming the correspondence between data instances and personal identities.
no code implementations • ICLR 2019 • Xuanyang Zhang, Hao Liu, Zhanxing Zhu, Zenglin Xu
Deep neural networks have achieved outstanding performance in many real-world applications at the expense of huge computational resources.
no code implementations • 14 Mar 2019 • Zhao Kang, Liangjian Wen, Wenyu Chen, Zenglin Xu
By formulating graph construction and kernel learning in a unified framework, the graph and consensus kernel can be iteratively enhanced by each other.
1 code implementation • 11 Mar 2019 • Zhao Kang, Yiwei Lu, Yuanzhang Su, Changsheng Li, Zenglin Xu
Data similarity is a key concept in many data-driven applications.
1 code implementation • Neurocomputing 2019 • Yazhou Ren, Kangrong Hu, Xinyi Dai, Lili Pan, Steven C. H. Hoi, Zenglin Xu
Deep embedded clustering (DEC) is one of the state-of-the-art deep clustering methods.
no code implementations • 30 Dec 2018 • Xianghong Fang, Haoli Bai, Ziyi Guo, Bin Shen, Steven Hoi, Zenglin Xu
In this paper, we propose a new unsupervised domain adaptation method named Domain-Adversarial Residual-Transfer (DART) learning of Deep Neural Networks to tackle cross-domain image classification tasks.
1 code implementation • 17 Dec 2018 • Zhao Kang, Haiqi Pan, Steven C. H. Hoi, Zenglin Xu
The proposed model is able to significantly boost the performance of data clustering, semi-supervised classification, and data recovery, primarily due to two key factors: 1) enhanced low-rank recovery by exploiting the graph smoothness assumption, and 2) improved graph construction by exploiting clean data recovered by robust PCA.
no code implementations • 17 Dec 2018 • Lili Pan, Shen Cheng, Jian Liu, Yazhou Ren, Zenglin Xu
We study the problem of multimodal generative modelling of images based on generative adversarial networks (GANs).
1 code implementation • 11 Dec 2018 • Yazhou Ren, Ni Wang, Mingxia Li, Zenglin Xu
Recently, deep clustering, which is able to perform feature learning that favors clustering tasks via deep neural networks, has achieved remarkable performance in image clustering applications.
Ranked #1 on Image Clustering on LetterA-J
1 code implementation • NIPS Workshop CDNNRIA 2018 • Yu Pan, Jing Xu, Maolin Wang, Jinmian Ye, Fei Wang, Kun Bai, Zenglin Xu
Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) networks and Gated Recurrent Unit (GRU) networks, have achieved promising performance in sequential data modeling.
1 code implementation • 24 Aug 2018 • Yazhou Ren, Xiaofan Que, Dezhong Yao, Zenglin Xu
Despite the success of traditional MTC models, they either easily get stuck in local optima or are sensitive to outliers and noisy data.
no code implementations • 20 Jun 2018 • Zhao Kang, Xiao Lu, Jin-Feng Yi, Zenglin Xu
There are two possible reasons for the failure: (i) most existing MKL methods assume that the optimal kernel is a linear combination of base kernels, which may not hold true; and (ii) some kernel weights are inappropriately assigned due to noise and carelessly designed algorithms.
1 code implementation • 21 May 2018 • Zhonghui You, Jinmian Ye, Kunming Li, Zenglin Xu, Ping Wang
In this paper, we introduce a novel regularization method called Adversarial Noise Layer (ANL) and its efficient version called Class Adversarial Noise Layer (CANL), which are able to significantly improve the generalization ability of CNNs by adding carefully crafted noise into the intermediate layer activations.
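As a rough sketch of the general idea of injecting crafted noise into intermediate activations (not the exact ANL/CANL construction), one can perturb a hidden activation in the direction of the loss gradient during training, analogous to an adversarial perturbation applied inside the network. The model_front/model_back split and hyperparameters below are hypothetical.

```python
import torch
import torch.nn.functional as F

def train_step_with_activation_noise(model_front, model_back, x, y, eps=0.01):
    """Two-pass sketch: (1) a forward/backward pass obtains the gradient of
    the loss w.r.t. an intermediate activation h, (2) the network is trained
    on h perturbed in that gradient direction. Caller should zero parameter
    gradients before backpropagating the returned loss."""
    h = model_front(x)
    h.retain_grad()                       # keep grad for this non-leaf tensor
    loss = F.cross_entropy(model_back(h), y)
    loss.backward()
    noise = eps * h.grad.sign().detach()  # crafted, adversarial-style noise

    # Second pass: train on the perturbed intermediate activation.
    h_noisy = model_front(x) + noise
    loss_noisy = F.cross_entropy(model_back(h_noisy), y)
    return loss_noisy                     # caller: loss_noisy.backward(); step
```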
no code implementations • 13 Jan 2018 • Linnan Wang, Jinmian Ye, Yiyang Zhao, Wei Wu, Ang Li, Shuaiwen Leon Song, Zenglin Xu, Tim Kraska
Given the limited GPU DRAM, SuperNeurons not only provisions the necessary memory for the training, but also dynamically allocates the memory for convolution workspaces to achieve the high performance.
no code implementations • 15 Dec 2017 • Guangxi Li, Jinmian Ye, Haiqin Yang, Di Chen, Shuicheng Yan, Zenglin Xu
Recently, deep neural networks (DNNs) have been regarded as the state-of-the-art classification methods in a wide range of applications, especially in image classification.
no code implementations • CVPR 2018 • Jinmian Ye, Linnan Wang, Guangxi Li, Di Chen, Shandian Zhe, Xinqi Chu, Zenglin Xu
On three challenging tasks, including Action Recognition in Videos, Image Captioning and Image Generation, BT-RNN outperforms TT-RNN and the standard RNN in terms of both prediction accuracy and convergence rate.
no code implementations • 16 Nov 2017 • Dan Ma, Bin Liu, Zhao Kang, Jiayu Zhou, Jianke Zhu, Zenglin Xu
Generating high fidelity identity-preserving faces with different facial attributes has a wide range of applications.
1 code implementation • 12 Nov 2017 • Zhao Kang, Chong Peng, Qiang Cheng, Zenglin Xu
Second, the discrete solution may deviate from the spectral solution since k-means method is well-known as sensitive to the initialization of cluster centers.
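For context, a minimal spectral-clustering pipeline is sketched below; the k-means discretization at the end is exactly where the initialization sensitivity mentioned in the abstract arises, which is why multiple restarts (n_init) are commonly used. This is the standard pipeline, not the paper's proposed method.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(W, k, n_init=20, seed=0):
    """W: (n, n) symmetric affinity matrix; k: number of clusters.
    Pipeline: normalized Laplacian -> bottom-k eigenvectors ->
    row-normalize -> k-means. Multiple k-means restarts mitigate (but do
    not remove) sensitivity to the initialization of cluster centers."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L_sym = np.eye(len(W)) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(L_sym)      # ascending eigenvalues
    U = eigvecs[:, :k]                            # smallest k eigenvectors
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=k, n_init=n_init, random_state=seed).fit_predict(U)
```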
no code implementations • 24 May 2017 • Hao Liu, Haoli Bai, Lirong He, Zenglin Xu
Inheriting these advantages of stochastic neural sequential models, we propose a structured and stochastic sequential neural network, which models both the long-term dependencies via recurrent neural networks and the uncertainty in the segmentation and labels via discrete random variables.
no code implementations • 11 Nov 2016 • Guangxi Li, Zenglin Xu, Linnan Wang, Jinmian Ye, Irwin King, Michael Lyu
Probabilistic Temporal Tensor Factorization (PTTF) is an effective algorithm to model the temporal tensor data.
no code implementations • 3 Nov 2016 • Bin Liu, Zenglin Xu, Yingming Li
Another assumption of these methods is that a predefined rank should be known.
no code implementations • NeurIPS 2016 • Shandian Zhe, Kai Zhang, Pengyuan Wang, Kuang-Chih Lee, Zenglin Xu, Yuan Qi, Zoubin Ghahramani
Tensor factorization is a powerful tool to analyse multi-way data.
no code implementations • NeurIPS 2013 • Shouyuan Chen, Michael R. Lyu, Irwin King, Zenglin Xu
For the noisy cases, we also prove error bounds for a constrained convex program for recovering the tensors.
no code implementations • 26 Apr 2013 • Shandian Zhe, Zenglin Xu, Yuan Qi
To unify these two tasks, we present a new sparse Bayesian approach for joint association study and disease diagnosis.
no code implementations • 15 Mar 2012 • Kaizhu Huang, Rong Jin, Zenglin Xu, Cheng-Lin Liu
Most existing distance metric learning methods assume perfect side information that is usually given in pairwise or triplet constraints.
no code implementations • NeurIPS 2009 • Zenglin Xu, Rong Jin, Jianke Zhu, Irwin King, Michael Lyu, Zhirong Yang
In this framework, SVM and TSVM can be regarded as a learning machine without regularization and one with full regularization from the unlabeled data, respectively.
no code implementations • NeurIPS 2009 • Zhirong Yang, Irwin King, Zenglin Xu, Erkki Oja
Based on this finding, we present a parameterized subset of similarity functions for choosing the best tail-heaviness for HSSNE; (2) we present a fixed-point optimization algorithm that can be applied to all heavy-tailed functions and does not require the user to set any parameters; and (3) we present two empirical studies, one for unsupervised visualization showing that our optimization algorithm runs as fast and as good as the best known t-SNE implementation and the other for semi-supervised visualization showing quantitative superiority using the homogeneity measure as well as qualitative advantage in cluster separation over t-SNE.
no code implementations • NeurIPS 2008 • Zenglin Xu, Rong Jin, Irwin King, Michael Lyu
We consider the problem of multiple kernel learning (MKL), which can be formulated as a convex-concave problem.
no code implementations • NeurIPS 2007 • Zenglin Xu, Rong Jin, Jianke Zhu, Irwin King, Michael Lyu
We consider the problem of Support Vector Machine transduction, which involves a combinatorial problem with exponential computational complexity in the number of unlabeled examples.