1 code implementation • 12 Jun 2018 • Chapman Siu, Richard Yi Da Xu
The framework aims to promote diversity based on a kernel computed at the feature level, through at most three stages: feature sampling, local criteria and global criteria for feature selection.
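A hypothetical sketch of one kernel-based diversity criterion of this flavour, using the log-determinant of an RBF Gram matrix over candidate features (a DPP-style score; this criterion is my assumption for illustration, not necessarily the paper's):

```python
# Score the diversity of a candidate feature subset via the log-determinant
# of an RBF kernel Gram matrix over the feature columns (hypothetical sketch).
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Pairwise squared distances between feature columns of X.
    sq = np.sum(X**2, axis=0)
    d2 = sq[:, None] + sq[None, :] - 2 * X.T @ X
    return np.exp(-gamma * d2)

def diversity_score(X, eps=1e-6):
    # Higher log-det => the selected features are less redundant.
    K = rbf_kernel(X)
    return np.linalg.slogdet(K + eps * np.eye(K.shape[0]))[1]

X = np.random.randn(100, 5)   # 100 samples, 5 candidate features
print(diversity_score(X))
```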
no code implementations • 18 Jul 2017 • Junyu Xuan, Jie Lu, Guangquan Zhang, Richard Yi Da Xu
The cooperative hierarchical structure is a common and significant data structure observed in, or adopted by, many research areas, such as: text mining (author-paper-word) and multi-label classification (label-instance-feature).
no code implementations • 27 Jun 2016 • Cheng Luo, Richard Yi Da Xu, Yang Xiang
One of the appealing properties of dependent random measures is that the atoms of the posterior distribution are shared amongst groups, and hence groups can borrow information from each other.
no code implementations • 16 Apr 2016 • Cheng Luo, Yang Xiang, Richard Yi Da Xu
The key novelty of this model is that we place a temporal constraint amongst the nearby discrete measures $\{G_j\}$ in the form of a symmetric Kullback-Leibler (KL) divergence with a fixed bound $B$.
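A minimal sketch of this constraint, under my simplifying assumption that the measures are represented as probability vectors over shared atoms:

```python
# Check the symmetric KL constraint between two nearby discrete measures
# G_i and G_j, represented here as probability vectors over shared atoms.
import numpy as np

def symmetric_kl(p, q, eps=1e-12):
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p))

G_i = np.array([0.5, 0.3, 0.2])
G_j = np.array([0.4, 0.4, 0.2])
B = 0.5
print(symmetric_kl(G_i, G_j) <= B)   # constraint satisfied for this pair
```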
no code implementations • 9 Feb 2016 • Richard Yi Da Xu, Francois Caron, Arnaud Doucet
We introduce here a class of Bayesian nonparametric models to address this problem.
no code implementations • 12 Jul 2015 • Junyu Xuan, Jie Lu, Guangquan Zhang, Richard Yi Da Xu, Xiangfeng Luo
Under this same framework, two classes of correlation function are proposed: (1) using the bivariate beta distribution and (2) using a copula function.
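An illustrative sketch of the copula idea, coupling two beta-distributed variables through a Gaussian copula (the Gaussian choice and the beta parameters are assumptions for illustration; the paper's exact bivariate beta and copula constructions may differ):

```python
# Couple two beta marginals through a Gaussian copula and inspect the
# induced correlation (illustrative sketch only).
import numpy as np
from scipy.stats import norm, beta

rho = 0.7                                    # copula correlation
cov = np.array([[1.0, rho], [rho, 1.0]])
z = np.random.multivariate_normal([0, 0], cov, size=10_000)
u = norm.cdf(z)                              # correlated uniform marginals
x = beta.ppf(u[:, 0], a=2.0, b=5.0)          # beta-distributed marginals
y = beta.ppf(u[:, 1], a=2.0, b=5.0)
print(np.corrcoef(x, y)[0, 1])
```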
no code implementations • 30 Mar 2015 • Junyu Xuan, Jie Lu, Guangquan Zhang, Richard Yi Da Xu, Xiangfeng Luo
Traditional Relational Topic Models provide a way to discover the hidden topics from a document network.
no code implementations • 30 Mar 2015 • Junyu Xuan, Jie Lu, Guangquan Zhang, Richard Yi Da Xu, Xiangfeng Luo
One branch of these works is the so-called Author Topic Model (ATM), which incorporates the authors' interests as side information into the classical topic model.
no code implementations • 10 Mar 2015 • Ava Bargi, Richard Yi Da Xu, Massimo Piccardi
This infinite adaptive online approach is capable of segmenting and classifying the sequential data over an unlimited number of classes, while meeting the memory and delay constraints of streaming contexts.
no code implementations • 12 Jun 2013 • Xuhui Fan, Longbing Cao, Richard Yi Da Xu
To this end, we introduce a Copula Mixed-Membership Stochastic Blockmodel (cMMSB), where an individual copula function is employed to jointly model the membership pairs of those nodes within the subgroup of interest.
no code implementations • 6 Oct 2013 • Xuhui Fan, Richard Yi Da Xu, Longbing Cao, Yin Song
In this work, we propose an informative relational model (InfRM) framework to incorporate rich information in a network at multiple granularities, including metadata about each entity and various forms of link data.
no code implementations • 2 Jul 2013 • Ava Bargi, Richard Yi Da Xu, Massimo Piccardi
In this paper, we propose a non-parametric conditional factor regression (NCFR) model for domains with high-dimensional input and response.
no code implementations • 13 Jun 2013 • Xuhui Fan, Longbing Cao, Richard Yi Da Xu
Directional and pairwise measurements are often used to model inter-relationships in a social network setting.
no code implementations • 15 Jul 2018 • Shuai Jiang, Kan Li, Richard Yi Da Xu
Low rank matrix factorisation is often used in recommender systems as a way of extracting latent features.
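A minimal sketch of the underlying idea, fitting a low-rank factorisation to the observed entries of a ratings matrix by gradient steps (illustrative, not the paper's exact algorithm):

```python
# Low-rank matrix factorisation for a ratings matrix: R ≈ U @ V.T,
# fit only on observed entries, with L2 regularisation.
import numpy as np

R = np.array([[5, 3, 0], [4, 0, 1], [1, 1, 5]], dtype=float)
mask = R > 0                       # treat zeros as unobserved
k, lr, lam = 2, 0.01, 0.1
U = 0.1 * np.random.randn(R.shape[0], k)
V = 0.1 * np.random.randn(R.shape[1], k)

for _ in range(2000):
    E = mask * (R - U @ V.T)       # error on observed entries only
    U += lr * (E @ V - lam * U)
    V += lr * (E.T @ U - lam * V)

print(np.round(U @ V.T, 2))        # reconstructed (and imputed) ratings
```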
no code implementations • 15 Jul 2020 • Ziyue Zhang, Shuai Jiang, Congzhentao Huang, Yang Li, Richard Yi Da Xu
To address this challenge, we propose a Teacher-Student GAN model (TS-GAN) to adapt to different domains and guide the ReID backbone to learn better ReID information.
no code implementations • ECCV 2020 • Congzhentao Huang, Shuai Jiang, Yang Li, Ziyue Zhang, Jason Traish, Chen Deng, Sam Ferguson, Richard Yi Da Xu
To address this phenomenon, we propose a novel end-to-end training scheme that brings the three separate modules into a single model.
no code implementations • 25 Nov 2020 • Wei Huang, Weitao Du, Richard Yi Da Xu, Chunrui Liu
We claim that, depending on the separation conditions of the data, the gradient descent iterates converge to a flatter minimum in the catapult phase.
no code implementations • 12 Jan 2021 • Ziyue Zhang, Shuai Jiang, Congzhentao Huang, Richard Yi Da Xu
In this paper, we propose a novel two-stream network with a lightweight resolution association ReID feature transformation (RAFT) module and a self-weighted attention (SWA) ReID module to evaluate features under different resolutions.
no code implementations • ICLR 2022 • Wei Huang, Yayong Li, Weitao Du, Jie Yin, Richard Yi Da Xu, Ling Chen, Miao Zhang
Inspired by our theoretical insights on trainability, we propose Critical DropEdge, a connectivity-aware and graph-adaptive sampling method, to alleviate the exponential decay problem more fundamentally.
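For context, a sketch of plain uniform DropEdge; Critical DropEdge itself is connectivity-aware and graph-adaptive rather than uniform, so this shows only the basic edge-sampling mechanism it builds on:

```python
# Uniform DropEdge: keep each edge independently with probability 1 - p.
import numpy as np

def drop_edge(edge_index, p=0.2, rng=None):
    # edge_index: 2 x E array of (source, target) pairs.
    rng = rng or np.random.default_rng()
    keep = rng.random(edge_index.shape[1]) >= p
    return edge_index[:, keep]

edges = np.array([[0, 1, 2, 3], [1, 2, 3, 0]])   # a 4-node cycle
print(drop_edge(edges, p=0.5))
```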
no code implementations • 5 Aug 2021 • Sen Pei, Richard Yi Da Xu, Shiming Xiang, Gaofeng Meng
We compare the proposed method with Unrolled GAN (Metz et al. 2016), BourGAN (Xiao, Zhong, and Zheng 2018), PacGAN (Lin et al. 2018), VEEGAN (Srivastava et al. 2017) and ALI (Dumoulin et al. 2016) on a 2D synthetic dataset, and the results show that the diversity penalty module can help the GAN capture many more modes of the data distribution.
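A hypothetical sketch of what such a diversity penalty can look like, encouraging generated samples to be as spread out as their latent codes to discourage mode collapse; the paper's module may be defined differently:

```python
# Penalise generated outputs that collapse relative to their latent codes.
import torch

def diversity_penalty(z, fake_images):
    zf = z.flatten(1)
    xf = fake_images.flatten(1)
    dz = torch.cdist(zf, zf) + 1e-8      # pairwise latent distances
    dx = torch.cdist(xf, xf)             # pairwise output distances
    return -(dx / dz).mean()             # minimised => outputs spread out

z = torch.randn(8, 64)
fake = torch.randn(8, 3, 32, 32)         # stand-in for generator(z)
print(diversity_penalty(z, fake))
```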
no code implementations • 19 Sep 2021 • Chapman Siu, Jason Traish, Richard Yi Da Xu
We propose Multi-Agent Regularized Q-learning (MARQ), which applies regularization to Multi-Agent Reinforcement Learning rather than learning explicit cooperative structures.
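A loose sketch of the stated idea: a standard temporal-difference loss plus a regulariser on the Q-values in place of an explicit cooperation structure. The regulariser below is a placeholder, not MARQ's actual term:

```python
# Per-agent TD loss plus a (placeholder) regulariser on Q-values.
import torch

def marq_loss(q, q_target, reward, gamma=0.99, beta=0.1):
    # q, q_target: per-agent action values, shape (batch, n_agents)
    td_target = reward + gamma * q_target.detach()
    td_loss = ((q - td_target) ** 2).mean()
    reg = q.abs().mean()                 # placeholder regularisation term
    return td_loss + beta * reg

q = torch.randn(32, 4, requires_grad=True)
print(marq_loss(q, torch.randn(32, 4), torch.randn(32, 1)))
```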
no code implementations • 19 Sep 2021 • Chapman Siu, Jason Traish, Richard Yi Da Xu
We demonstrate the flexibility of this approach and how it can be adapted to online contexts, where the environment is available for collecting experiences, as well as to a variety of other contexts.
no code implementations • 19 Sep 2021 • Chapman Siu, Jason Traish, Richard Yi Da Xu
This paper introduces Greedy UnMix (GUM) for cooperative multi-agent reinforcement learning (MARL).
no code implementations • 25 Sep 2019 • Wanming Huang, Shuai Jiang, Xuan Liang, Ian Oppermann, Richard Yi Da Xu
Instead of defining p(x|k, θ) explicitly, we devise a modified GAN that lets us define the distribution through p(z|k, θ), where z is the corresponding latent representation of x, together with p(k|x, θ) given by an additional classification network trained with the GAN in an "end-to-end" fashion.
no code implementations • 4 Feb 2022 • Wei Huang, Chunrui Liu, Yilan Chen, Tianyu Liu, Richard Yi Da Xu
In addition to being a pure generalization-bound analysis tool, the PAC-Bayesian bound can also be incorporated into an objective function used to train a probabilistic neural network, making it a powerful and relevant framework that can numerically provide a tight generalization bound for supervised learning.
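A minimal sketch of a PAC-Bayes-style training objective: empirical risk plus a KL complexity term between a Gaussian posterior and prior over weights. Bound constants are omitted and the model is a bare linear layer, so this is illustrative rather than the paper's bound:

```python
# Train-time PAC-Bayes-style objective: risk + KL(posterior || prior) / n.
import torch
import torch.nn.functional as F

mu = torch.zeros(10, requires_grad=True)         # posterior mean
log_sigma = torch.zeros(10, requires_grad=True)  # posterior log-std
prior_sigma = 1.0
n = 1000                                         # training-set size

def objective(x, y):
    w = mu + log_sigma.exp() * torch.randn_like(mu)   # sample weights
    risk = F.mse_loss(x @ w, y)                       # empirical risk
    # Closed-form KL between diagonal Gaussians N(mu, sigma^2), N(0, p^2).
    kl = (log_sigma.exp() ** 2 + mu ** 2) / (2 * prior_sigma ** 2) \
         - log_sigma - 0.5 + torch.log(torch.tensor(prior_sigma))
    return risk + kl.sum() / n

x, y = torch.randn(64, 10), torch.randn(64)
print(objective(x, y))
```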
no code implementations • 17 Jan 2023 • Sen Pei, Jiaxi Sun, Richard Yi Da Xu, Bin Fan, Shiming Xiang, Gaofeng Meng
Generally, existing approaches to out-of-distribution (OOD) detection mainly focus on the statistical difference between the features of OOD and in-distribution (ID) data extracted by the classifiers.
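As a sketch of this classifier-statistics viewpoint, a common baseline scores samples by the classifier's maximum softmax probability and flags low-confidence inputs as OOD (illustrative of the family of approaches described, not the paper's method):

```python
# Maximum softmax probability (MSP) baseline for OOD scoring.
import numpy as np

def msp_score(logits):
    # Lower maximum softmax probability suggests an OOD input.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    return probs.max(axis=1)

logits = np.array([[4.0, 0.1, 0.2],    # confident => likely ID
                   [0.9, 1.0, 1.1]])   # diffuse    => possibly OOD
print(msp_score(logits))
```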
no code implementations • 18 Jan 2023 • Hong-Bo Xie, Caoyuan Li, Shuliang Wang, Richard Yi Da Xu, Kerrie Mengersen
Construction of dictionaries using nonnegative matrix factorisation (NMF) has extensive applications in signal processing and machine learning.
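A minimal sketch of dictionary construction with NMF, factoring a nonnegative data matrix V into dictionary atoms and per-sample codes with V ≈ WH (using scikit-learn; the paper's Bayesian treatment of NMF differs):

```python
# Dictionary learning via NMF: V ≈ codes @ dictionary, all factors nonnegative.
import numpy as np
from sklearn.decomposition import NMF

V = np.abs(np.random.randn(100, 40))          # nonnegative data matrix
model = NMF(n_components=8, init="nndsvd", max_iter=500)
codes = model.fit_transform(V)                # per-sample codes (100 x 8)
dictionary = model.components_                # dictionary atoms (8 x 40)
print(np.linalg.norm(V - codes @ dictionary)) # reconstruction error
```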
no code implementations • 26 Sep 2023 • Haotian Li, Lingzhi Wang, Yuliang Wei, Richard Yi Da Xu, Bailing Wang
Knowledge graph completion is the task of filling in missing triples based on the information available in a knowledge graph.
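A sketch of a classic triple scorer (TransE), under which a triple (h, r, t) is plausible when the translation h + r lands near t in embedding space; the paper's model may score triples differently:

```python
# TransE-style triple scoring: -||h + r - t|| (higher is more plausible).
import numpy as np

rng = np.random.default_rng(0)
entity_emb = rng.normal(size=(5, 16))     # 5 entities, 16-dim embeddings
relation_emb = rng.normal(size=(3, 16))   # 3 relations

def transe_score(h, r, t):
    return -np.linalg.norm(entity_emb[h] + relation_emb[r] - entity_emb[t])

print(transe_score(0, 1, 2))
```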
2 code implementations • 13 Apr 2020 • Wei Huang, Weitao Du, Richard Yi Da Xu
The prevailing thinking is that orthogonal weights are crucial to enforcing dynamical isometry and speeding up training.
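In PyTorch, enforcing this ingredient is a one-liner per layer (a usage sketch, not the paper's experimental setup):

```python
# Orthogonal weight initialisation, the standard route to dynamical isometry.
import torch.nn as nn

layer = nn.Linear(256, 256)
nn.init.orthogonal_(layer.weight)
```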
1 code implementation • 19 Dec 2019 • Wei Huang, Richard Yi Da Xu, Weitao Du, Yutian Zeng, Yunce Zhao
In recent years, mean field theory has been applied to the study of neural networks and has achieved a great deal of success.
2 code implementations • 19 Dec 2019 • Wei Huang, Richard Yi Da Xu
Our work is primarily inspired by the Gaussian Process Latent Variable Model (GPLVM), which is a non-linear dimensionality reduction method.
1 code implementation • 25 Jul 2022 • Sen Pei, Jiaxi Sun, Richard Yi Da Xu, Shiming Xiang, Gaofeng Meng
PoER first helps the neural network capture label-related features, which also carry domain information, in the shallow layers, and then progressively distills out the label-discriminative representations, making the network aware of the characteristics of objects and background that are vital to generating domain-invariant features.