no code implementations • 3 Oct 2024 • Xinpeng Li, Zile Jiang, Kai Ming Ting, Ye Zhu
Automatic Modulation Classification (AMC) is a crucial technique in modern non-cooperative communication networks and plays a key role in a range of civil and military applications.
1 code implementation • 1 Oct 2024 • Kaichen Zhou, Yang Cao, Taewhan Kim, Hao Zhao, Hao Dong, Kai Ming Ting, Ye Zhu
To address this gap, we introduce the Realistic Anomaly Detection (RAD) dataset, the first multi-view RGB-based anomaly detection dataset specifically collected using a real robot arm, providing unique and realistic data scenarios.
no code implementations • 14 Sep 2024 • Hang Zhang, Yang Xu, Lei Gong, Ye Zhu, Kai Ming Ting
This paper introduces Distributed Clustering based on Distributional Kernel (KDC), a new framework for clustering in a distributed network. KDC produces the final clusters based on the similarity between the distributions of the initial clusters, as measured by a distributional kernel K, and it is the only framework that satisfies all three of the following properties.
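As a rough illustration of the idea, the sketch below estimates the similarity between two initial clusters from the similarity of their empirical distributions; a Gaussian point kernel is used as a hypothetical stand-in for the distributional kernel K, and the function names are illustrative rather than the paper's API.

```python
import numpy as np

def point_kernel(x, y, gamma=1.0):
    # Gaussian point kernel, a hypothetical stand-in for the kernel underlying K
    return np.exp(-gamma * np.sum((x - y) ** 2))

def distributional_similarity(cluster_a, cluster_b, gamma=1.0):
    # Empirical distribution-to-distribution similarity: mean pairwise kernel value
    return float(np.mean([point_kernel(x, y, gamma)
                          for x in cluster_a for y in cluster_b]))

# Toy example: two initial clusters, e.g. produced on different nodes of the network
rng = np.random.default_rng(0)
cluster_1 = rng.normal(loc=0.0, scale=0.5, size=(30, 2))
cluster_2 = rng.normal(loc=0.2, scale=0.5, size=(30, 2))
print(distributional_similarity(cluster_1, cluster_2))
```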
no code implementations • 16 Mar 2024 • Yang Cao, Haolong Xiang, Hang Zhang, Ye Zhu, Kai Ming Ting
Anomaly detection is a longstanding and active research area that has many applications in domains such as finance, security, and manufacturing.
no code implementations • 8 Oct 2023 • Zi Jing Wang, Ye Zhu, Kai Ming Ting
Independent of the distance measure employed, existing clustering algorithms face a further challenge: either limited effectiveness or high time complexity.
1 code implementation • 17 Jan 2023 • Zhong Zhuang, Kai Ming Ting, Guansong Pang, Shuaibin Song
A treatment called Subgraph Centralization for graph anomaly detection is proposed to address all the above weaknesses.
no code implementations • 1 Jan 2023 • Yufan Wang, Kai Ming Ting, Yuanyi Shang
Existing measures and representations for trajectories have two longstanding fundamental shortcomings, i.e., they are computationally expensive and they cannot guarantee the 'uniqueness' property of a distance function: $dist(X, Y) = 0$ if and only if $X = Y$, where $X$ and $Y$ are two trajectories.
2 code implementations • 30 Dec 2022 • Yang Cao, Ye Zhu, Kai Ming Ting, Flora D. Salim, Hong Xian Li, Luxing Yang, Gang Li
Detecting abrupt changes in data distribution is one of the most significant tasks in streaming data analysis.
no code implementations • 29 Sep 2021 • Kai Ming Ting, Takashi Washio, Ye Zhu, Yang Xu
The curse of dimensionality has been studied in different aspects.
no code implementations • 12 Oct 2020 • Xin Han, Ye Zhu, Kai Ming Ting, Gang Li
In this paper, we identify the root cause of this issue and show that the use of a data-dependent kernel (instead of a distance measure or an existing kernel) provides an effective means to address it.
1 code implementation • 24 Sep 2020 • Kai Ming Ting, Bi-Cun Xu, Takashi Washio, Zhi-Hua Zhou
Existing approaches based on kernel mean embedding, which convert a point kernel to a distributional kernel, have two key issues: the point kernel employed has a feature map with intractable dimensionality, and it is data independent.
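For context, a common empirical estimator of the kernel mean embedding similarity between two samples $P$ and $Q$ is the mean pairwise value of the underlying point kernel $\kappa$; this is a standard formulation rather than necessarily the exact one used in the paper:

$$\widehat{K}(P, Q) \;=\; \langle \hat{\mu}_P, \hat{\mu}_Q \rangle \;=\; \frac{1}{|P|\,|Q|} \sum_{x \in P} \sum_{y \in Q} \kappa(x, y),$$

where $\hat{\mu}_P$ denotes the empirical mean map of $P$ in the feature space of $\kappa$.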
no code implementations • 28 Apr 2020 • Durgesh Samariya, Sunil Aryal, Kai Ming Ting
In this paper, we introduce a new score called SiNNE, which is independent of the dimensionality of subspaces.
1 code implementation • 14 Feb 2020 • Kai Ming Ting, Jonathan R. Wells, Ye Zhu
This paper introduces a new similarity measure called point-set kernel which computes the similarity between an object and a set of objects.
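A minimal sketch of a point-to-set similarity, assuming a simple averaging form with a Gaussian kernel as a placeholder for the kernel actually used in the paper:

```python
import numpy as np

def point_set_similarity(x, S, kernel):
    # Average point-kernel value between object x and every member of the set S
    return float(np.mean([kernel(x, y) for y in S]))

gaussian = lambda a, b: np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2))

S = [[0.0, 0.0], [0.1, 0.2], [-0.1, 0.1]]
print(point_set_similarity([0.05, 0.05], S, gaussian))
```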
no code implementations • 2 Jul 2019 • Kai Ming Ting, Jonathan R. Wells, Takashi Washio
A current key approach focuses on ways to produce an approximate finite-dimensional feature map, under the assumption that the kernel used has a feature map with intractable dimensionality, an assumption traditionally held in kernel-based methods.
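For readers unfamiliar with that approach, the snippet below shows two standard ways of building an approximate finite-dimensional feature map for the RBF kernel in scikit-learn; it illustrates the general technique rather than anything specific to this paper.

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem, RBFSampler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))

# Nystroem: data-dependent low-rank approximation of the RBF kernel's feature map
Z1 = Nystroem(kernel="rbf", gamma=0.5, n_components=50, random_state=0).fit_transform(X)

# Random Fourier features: data-independent approximation of the same kernel
Z2 = RBFSampler(gamma=0.5, n_components=50, random_state=0).fit_transform(X)

# Inner products in the approximate feature space estimate the kernel value
print(Z1[0] @ Z1[1], Z2[0] @ Z2[1])
```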
1 code implementation • 30 Jun 2019 • Xiaoyu Qin, Kai Ming Ting, Ye Zhu, Vincent CS Lee
A new type of clusters called mass-connected clusters is formally defined.
1 code implementation • 24 Jun 2019 • Ye Zhu, Kai Ming Ting
This paper presents a new insight into improving the performance of t-distributed Stochastic Neighbour Embedding (t-SNE) by using the Isolation Kernel instead of the Gaussian kernel.
no code implementations • 9 Feb 2019 • Sunil Aryal, Kai Ming Ting, Takashi Washio, Gholamreza Haffari
To measure the similarity of two documents in the bag-of-words (BoW) vector representation, different term weighting schemes are used to improve the performance of cosine similarity, the most widely used inter-document similarity measure in text mining.
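A minimal example of the setting described, using tf-idf as one such term weighting scheme before computing cosine similarity; the documents are made up for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "isolation kernel for document similarity",
    "a data dependent kernel derived from data",
]

# Term-weighted BoW vectors (tf-idf is one common weighting scheme)
tfidf = TfidfVectorizer().fit_transform(docs)
print(cosine_similarity(tfidf[0], tfidf[1]))
```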
1 code implementation • 8 Oct 2018 • Ye Zhu, Kai Ming Ting, Yuan Jin, Maia Angelova
This paper focuses on density-based clustering, particularly the Density Peak (DP) algorithm and the density-connectivity-based DBSCAN, and proposes a new method that takes advantage of the individual strengths of these two methods to yield a density-based hierarchical clustering algorithm.
1 code implementation • 5 Oct 2018 • Ye Zhu, Kai Ming Ting, Mark Carman, Maia Angelova
To match the implicit assumption, we propose to transform a given dataset such that the transformed clusters have approximately the same density, while all regions of locally low density become regions of globally low density, thereby homogenising cluster density while preserving the cluster structure of the dataset.
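One simple way to make density homogenisation concrete is to rescale each attribute by its empirical CDF so that every marginal becomes approximately uniform; this is only an illustrative sketch and not the paper's transform.

```python
import numpy as np

def cdf_rescale(X):
    # Rescale each attribute by its empirical CDF; each marginal becomes ~uniform on (0, 1]
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    Z = np.empty_like(X)
    for j in range(d):
        ranks = np.argsort(np.argsort(X[:, j]))   # ranks 0..n-1
        Z[:, j] = (ranks + 1) / n
    return Z

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (100, 2)),      # dense cluster
               rng.normal(3, 1.0, (100, 2))])     # sparse cluster
Z = cdf_rescale(X)
```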
1 code implementation • Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining 2018 • Kai Ming Ting, Yue Zhu, Zhi-Hua Zhou
This paper investigates data dependent kernels that are derived directly from data.
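A minimal sketch of one way a kernel can be derived directly from data: the similarity of two points is the fraction of random Voronoi partitionings, built from points sampled from the data itself, in which the two points fall into the same cell. The parameters psi and t and the exact construction here are simplifying assumptions, not the paper's definitive implementation.

```python
import numpy as np

def data_dependent_kernel(X, psi=16, t=100, seed=0):
    # similarity(i, j) = fraction of t random partitionings in which points i and j
    # share a Voronoi cell; cells are induced by psi points sampled from the data
    rng = np.random.default_rng(seed)
    n = len(X)
    K = np.zeros((n, n))
    for _ in range(t):
        centres = X[rng.choice(n, size=psi, replace=False)]
        d = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
        cell = d.argmin(axis=1)                   # nearest sampled centre per point
        K += (cell[:, None] == cell[None, :])
    return K / t

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
K = data_dependent_kernel(X, psi=16, t=50)        # K[i, i] == 1 for all i
```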
no code implementations • 3 Jul 2017 • Jonathan R. Wells, Kai Ming Ting
We show that a recent outlying aspects miner can run orders of magnitude faster by simply replacing its density estimator with the proposed one, enabling it to handle large datasets with thousands of dimensions that would otherwise be out of reach.
no code implementations • 30 May 2016 • Xin Mu, Kai Ming Ting, Zhi-Hua Zhou
This is the first time, as far as we know, that completely random trees are used as a single common core to solve all three sub-problems: unsupervised learning, supervised learning, and model update in data streams.
no code implementations • 15 Dec 2008 • Fei Tony Liu, Kai Ming Ting, Zhi-Hua Zhou
Most existing model-based approaches to anomaly detection construct a profile of normal instances, then identify instances that do not conform to the normal profile as anomalies.
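Assuming this entry refers to isolation-based anomaly detection, a minimal usage sketch with scikit-learn's IsolationForest is shown below; it contrasts with profile-based approaches in that anomalies are simply the points that are easiest to isolate by random splits.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (500, 2)),        # normal instances
               rng.uniform(-6, 6, (10, 2))])      # a few scattered anomalies

# No profile of normal instances is built; anomalies are isolated by random splits
clf = IsolationForest(n_estimators=100, contamination=0.02, random_state=0).fit(X)
scores = clf.decision_function(X)                 # lower = more anomalous
labels = clf.predict(X)                           # -1 for anomalies, 1 for normal points
```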