no code implementations • 11 Mar 2024 • Mohammad Alkhalefi, Georgios Leontidis, Mingjun Zhong
Contrastive instance discrimination outperforms supervised learning in downstream tasks like image classification and object detection.
no code implementations • 7 Mar 2024 • Miles Everett, Mingjun Zhong, Georgios Leontidis
We propose Masked Capsule Autoencoders (MCAE), the first Capsule Network that utilises pretraining in a self-supervised manner.
no code implementations • 19 Jul 2023 • Miles Everett, Mingjun Zhong, Georgios Leontidis
Our findings underscore the potential of our proposed methodology in enhancing the operational efficiency and performance of Capsule Networks, paving the way for their application in increasingly complex computational scenarios.
no code implementations • 28 Jun 2023 • Mohammad Alkhalefi, Georgios Leontidis, Mingjun Zhong
Self-supervised learning (SSL) algorithms based on instance discrimination have shown promising results, performing competitively with or even outperforming supervised learning counterparts in some downstream tasks.
no code implementations • 13 May 2023 • Miles Everett, Mingjun Zhong, Georgios Leontidis
This paper extends the investigation to a range of leading Capsule Network architectures, demonstrating that these issues are not confined to the original design.
no code implementations • 9 Oct 2022 • Shuyi Chen, Bochao Zhao, Mingjun Zhong, Wenpeng Luan, Yixin Yu
Based on the NILM results in various cases, SSL generally outperforms zero-shot learning in improving load disaggregation performance without any sub-metering data from the target data sets.
1 code implementation • 26 Oct 2021 • Zhenyu Lu, Yurong Cheng, Mingjun Zhong, George Stoian, Ye Yuan, Guoren Wang
A typical approach is to formulate causal inference as a supervised learning problem, so that counterfactual outcomes can be predicted.
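One common instance of this formulation is a "T-learner": fit one supervised model per treatment arm, then query the model for the arm a unit did not receive. This is a minimal illustrative sketch, not necessarily the estimator used in the paper; the synthetic data and linear models are assumptions for demonstration.

```python
import random

# Hedged sketch of counterfactual prediction via supervised learning
# (a T-learner): fit one regression per treatment arm on observed data,
# then predict the unobserved outcome. Data-generating process is synthetic.

random.seed(0)

def fit_linear(xs, ys):
    # Ordinary least squares for y = a + b * x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Synthetic observational data: treatment shifts the outcome by +2.
data = []
for _ in range(500):
    x = random.uniform(0, 1)
    t = 1 if random.random() < 0.5 else 0
    y = 1.0 + 3.0 * x + 2.0 * t + random.gauss(0, 0.1)
    data.append((x, t, y))

treated = [(x, y) for x, t, y in data if t == 1]
control = [(x, y) for x, t, y in data if t == 0]
a1, b1 = fit_linear(*zip(*treated))
a0, b0 = fit_linear(*zip(*control))

# Predicted factual and counterfactual outcomes for a unit with x = 0.5;
# their difference estimates the treatment effect (close to the true +2).
y1 = a1 + b1 * 0.5
y0 = a0 + b0 * 0.5
```

The per-arm models here are deliberately simple; in practice any supervised regressor can fill that role.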
no code implementations • 15 Oct 2019 • Shouyong Jiang, Hongru Li, Jinglei Guo, Mingjun Zhong, Shengxiang Yang, Marcus Kaiser, Natalio Krasnogor
The proposed framework is combined with new strategies, such as reference adaptation and adaptive local mating, to solve different types of problems.
no code implementations • 10 Jul 2019 • Oleg Arenz, Mingjun Zhong, Gerhard Neumann
For efficient improvement of the GMM approximation, we derive a lower bound on the corresponding optimization objective enabling us to update the components independently.
1 code implementation • 23 Feb 2019 • Michele D'Incecco, Stefano Squartini, Mingjun Zhong
It is not clear whether the method can be generalised or transferred to different domains, e.g., when the test data are drawn from a different country than the training data.
no code implementations • 17 Oct 2018 • Hong Tang, Huaming Chen, Ting Li, Mingjun Zhong
The proposed framework for this challenge has four steps: preprocessing, feature extraction, training and validation.
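The four steps named above can be sketched as a pipeline skeleton. The function bodies below are placeholders chosen for illustration (simple normalisation, summary-statistic features, a majority-class baseline), not the authors' implementation.

```python
# Hedged skeleton of the four-step framework: preprocessing, feature
# extraction, training, and validation. Bodies are illustrative stand-ins.

def preprocess(raw):
    # Placeholder preprocessing: centre the signal at zero mean.
    m = sum(raw) / len(raw)
    return [x - m for x in raw]

def extract_features(signal):
    # Placeholder features: simple summary statistics of the signal.
    return [max(signal), min(signal), sum(abs(x) for x in signal) / len(signal)]

def train(features, labels):
    # Placeholder "model": predict the majority class seen in training.
    return max(set(labels), key=labels.count)

def validate(model, labels):
    # Fraction of held-out labels matched by the constant prediction.
    return sum(1 for y in labels if y == model) / len(labels)

signals = [preprocess([1.0, 2.0, 3.0]), preprocess([2.0, 2.0, 2.0])]
feats = [extract_features(s) for s in signals]
model = train(feats, ["normal", "normal"])
score = validate(model, ["normal", "abnormal"])  # 0.5
```

Each stage can be swapped out independently, which is the point of structuring the framework this way.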
1 code implementation • ICML 2018 • Oleg Arenz, Gerhard Neumann, Mingjun Zhong
Inference from complex distributions is a common problem in machine learning needed for many Bayesian methods.
no code implementations • 1 Jun 2018 • Ruosi Wan, Mingjun Zhong, Haoyi Xiong, Zhanxing Zhu
In statistics and machine learning, approximation of an intractable integral is often achieved using an unbiased Monte Carlo estimator, but the variance of the estimator is generally high in many applications.
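The variance problem can be illustrated with a control variate, one standard variance-reduction technique for unbiased Monte Carlo estimators (shown here as a generic example, not necessarily the paper's specific construction). Estimating E[e^U] for U uniform on [0, 1], we subtract a correlated quantity with known mean:

```python
import random
import math
import statistics

# Hedged sketch: control variates reduce Monte Carlo variance while
# keeping the estimator unbiased. Target: E[e^U] = e - 1 for U ~ Uniform(0,1).
# Control variate: g(U) = U with known mean 1/2. Coefficient 1.0 is a
# fixed illustrative choice, not the variance-optimal one.

random.seed(0)
n = 10000
plain, cv = [], []
for _ in range(n):
    u = random.random()
    f = math.exp(u)                 # plain unbiased estimate of e - 1
    plain.append(f)
    cv.append(f - 1.0 * (u - 0.5))  # same mean, smaller spread

m_plain = statistics.mean(plain)
m_cv = statistics.mean(cv)
v_plain = statistics.variance(plain)
v_cv = statistics.variance(cv)
# Both means are close to e - 1 ≈ 1.718; v_cv is noticeably smaller.
```

Choosing the coefficient as Cov(f, g)/Var(g) would reduce the variance further; the fixed value 1.0 keeps the sketch minimal.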
8 code implementations • 29 Dec 2016 • Chaoyun Zhang, Mingjun Zhong, Zongzuo Wang, Nigel Goddard, Charles Sutton
Interestingly, we systematically show that the convolutional neural networks can inherently learn the signatures of the target appliances, which are automatically added into the model to reduce the identifiability problem.
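The sequence-to-point idea in this line of NILM work maps a sliding window of mains readings to the appliance reading at the window's midpoint; the network then regresses window → point. The windowing itself can be sketched in a few lines (window width and readings below are illustrative):

```python
# Hedged sketch of sequence-to-point training-pair construction for NILM:
# each input is a window of aggregate mains power, each target is the
# appliance power at the window's midpoint. Data values are made up.

def seq2point_windows(mains, appliance, width=5):
    # Build (mains window, appliance midpoint value) pairs.
    half = width // 2
    pairs = []
    for i in range(half, len(mains) - half):
        pairs.append((mains[i - half:i + half + 1], appliance[i]))
    return pairs

mains = [3, 3, 10, 11, 3, 3, 3]    # aggregate readings (illustrative)
fridge = [0, 0, 7, 8, 0, 0, 0]     # target appliance (illustrative)
pairs = seq2point_windows(mains, fridge)
# pairs[0] == ([3, 3, 10, 11, 3], 7)
```

A convolutional network trained on such pairs sees the full window of context around each predicted point, which is what lets it pick up appliance signatures.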
1 code implementation • NeurIPS 2015 • Mingjun Zhong, Nigel Goddard, Charles Sutton
In many statistical problems, a more coarse-grained model may be suitable for population-level behaviour, whereas a more detailed model is appropriate for accurate modelling of individual behaviour.
no code implementations • NeurIPS 2014 • Mingjun Zhong, Nigel Goddard, Charles Sutton
Blind source separation problems are difficult because they are inherently unidentifiable, yet the entire goal is to identify meaningful sources.
no code implementations • 12 Oct 2012 • Mingjun Zhong, Rong Liu, Bo Liu
Compared to point-estimate algorithms, which provide only single estimates of those parameters, the Bayesian methods are more informative, providing credible intervals that account for the uncertainty in the inferred miRNA and mRNA interactions.