1 code implementation • 30 Jan 2025 • Zhe Wang, Yuhua Ru, Fabian Bauer, Aladine Chetouani, Fang Chen, Liping Zhang, Didier Hans, Rachid Jennane, Mohamed Jarraya, Yung Hsin Chen
Specifically, the student model refines the 7T SR task step by step, leveraging feature maps from the inference phase of the teacher model as guidance, so that the student model progressively approaches 7T SR performance with a smaller, deployable model size.
no code implementations • 21 Jan 2025 • Jin Li, Shoujin Wang, Qi Zhang, Shui Yu, Fang Chen
However, two significant gaps persist: 1) the difficulty in accurately generating missing data due to the limited ability to capture modality distributions; and 2) the critical but overlooked visibility bias, where items with missing modalities are more likely to be disregarded due to the prioritization of items' multimodal data over user preference alignment.
no code implementations • 27 Dec 2024 • Zhiyu Zhu, Jiayu Zhang, Zhibo Jin, Huaming Chen, Jianlong Zhou, Fang Chen
The interpretability of deep neural networks is crucial for understanding model decisions in various applications, including computer vision.
1 code implementation • 16 Dec 2024 • Zhuhao Wang, Yihua Sun, Zihan Li, Xuan Yang, Fang Chen, Hongen Liao
To bridge the gap between current RRG models and the clinical demands in practice, we first develop a data generation pipeline to create a new MIMIC-RG4 dataset, which considers four common radiology report drafting scenarios and has perfectly corresponding inputs and outputs.
no code implementations • 15 Dec 2024 • Yixuan Zhang, Zhidong Li, Yang Wang, Fang Chen, Xuhui Fan, Feng Zhou
Machine learning algorithms often struggle to eliminate inherent data biases, particularly those arising from unreliable labels, which poses a significant challenge in ensuring fairness.
1 code implementation • 22 Nov 2024 • Shuming Liang, Yu Ding, Zhidong Li, Bin Liang, Siqi Zhang, Yang Wang, Fang Chen
This paper explores the ability of Graph Neural Networks (GNNs) in learning various forms of information for link prediction, alongside a brief review of existing link prediction methods.
Ranked #1 on Link Property Prediction on ogbl-citation2
no code implementations • 8 Nov 2024 • Xingyu Ai, Bin Huang, Fang Chen, Liu Shi, Binxuan Li, Shaoyu Wang, Qiegen Liu
From the perspective of the diffusion mechanism, RED uses the residual between sinograms to replace Gaussian noise in the diffusion process, setting the low-dose and full-dose sinograms as the starting point and endpoint of reconstruction, respectively.
no code implementations • 9 Oct 2024 • Zhe Wang, Yung Hsin Chen, Aladine Chetouani, Fabian Bauer, Yuhua Ru, Fang Chen, Liping Zhang, Rachid Jennane, Mohamed Jarraya
Through ablation studies, we further validate that integrating supplementary patient-specific information, beyond what X-rays alone can provide, enhances the accuracy and clinical relevance of the generated MRI, underscoring the potential of leveraging external patient-specific information to improve MRI generation.
no code implementations • 19 Sep 2024 • Hassan Gharoun, Mohammad Sadegh Khorshidi, Fang Chen, Amir H. Gandomi
This study presents an uncertainty-aware stacked neural networks model for the reliable classification of COVID-19 from radiological images.
no code implementations • 2 Sep 2024 • Fangcen Liu, Chenqiang Gao, Fang Chen, Pengcheng Li, Junjie Guo, Deyu Meng
Besides, an attention-guided fusion module is proposed to fuse the two modalities effectively by exploring their complementary information.
no code implementations • 22 Aug 2024 • Zhibo Jin, Jiayu Zhang, Zhiyu Zhu, Yuchen Zhang, Jiahao Huang, Jianlong Zhou, Fang Chen
GE-AdvGAN, a recent method for transferable adversarial attacks, is based on this principle.
no code implementations • 14 Aug 2024 • Zhibo Jin, Jiayu Zhang, Zhiyu Zhu, Chenyu Zhang, Jiahao Huang, Jianlong Zhou, Fang Chen
Given the essence of adversarial attacks is to impair model integrity with minimal noise on original samples, exploring avenues to maximize the utility of such perturbations is imperative.
no code implementations • 23 Jul 2024 • Zizhuo Meng, Boyu Li, Xuhui Fan, Zhidong Li, Yang Wang, Fang Chen, Feng Zhou
The classical temporal point process (TPP) constructs an intensity function by taking the occurrence times into account.
no code implementations • 6 Jun 2024 • Fang Chen, Gourav Datta, Mujahid Al Rafi, Hyeran Jeon, Meng Tang
Reducing peak memory, which is the maximum memory consumed during the execution of a neural network, is critical to deploy neural networks on edge devices with limited memory budget.
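The notion of peak memory above can be made concrete with a toy estimator. Under a simplified liveness model for a sequential network (only the current layer's input and output tensors are resident; the shapes and float32 sizing below are hypothetical, not from the paper), peak activation memory is a maximum over layers:

```python
def activation_bytes(shape, dtype_bytes=4):
    """Bytes needed to store one activation tensor (float32 by default)."""
    n = 1
    for d in shape:
        n *= d
    return n * dtype_bytes

def peak_activation_memory(layer_shapes, dtype_bytes=4):
    """Estimate peak activation memory of a sequential network.

    Simplified liveness model: while a layer executes, its input and
    output tensors must both be resident; earlier activations are freed.
    """
    peak = 0
    for inp, out in zip(layer_shapes, layer_shapes[1:]):
        live = activation_bytes(inp, dtype_bytes) + activation_bytes(out, dtype_bytes)
        peak = max(peak, live)
    return peak

# Hypothetical 224x224 RGB input, aggressively down-sampled early on
shapes = [(3, 224, 224), (32, 56, 56), (64, 28, 28), (128, 14, 14)]
peak_activation_memory(shapes)  # → 1003520 bytes, set by the first layer
```

Under this model the early, high-resolution layers dominate the peak, which is why the entry above targets the initial layers for down-sampling.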
no code implementations • 27 May 2024 • Boyuan Zheng, Jianlong Zhou, Fang Chen
Natural language, moreover, serves as the primary medium through which humans acquire new knowledge, presenting a potentially intuitive bridge for translating concepts understandable by humans into formats that can be learned by machines.
1 code implementation • 23 May 2024 • Jia Guo, Shuai Lu, Weihang Zhang, Fang Chen, Hongen Liao, Huiqi Li
Recent studies highlighted a practical setting of unsupervised anomaly detection (UAD) that builds a unified model for multi-class images.
Ranked #2 on Anomaly Detection on VisA
no code implementations • 13 Feb 2024 • Jin Li, Shoujin Wang, Qi Zhang, Longbing Cao, Fang Chen, Xiuzhen Zhang, Dietmar Jannach, Charu C. Aggarwal
However, emerging vulnerabilities in RS have catalyzed a paradigm shift towards Trustworthy RS (TRS).
no code implementations • 11 Jan 2024 • Mohammad Sadegh Khorshidi, Navid Yazdanjue, Hassan Gharoun, Danial Yazdani, Mohammad Reza Nikoo, Fang Chen, Amir H. Gandomi
Addressing these challenges, multi-view ensemble learning (MEL) has emerged as a transformative approach, with feature partitioning (FP) playing a pivotal role in constructing artificial views for MEL.
no code implementations • 19 Dec 2023 • Sharath Nittur Sridhar, Maciej Szankin, Fang Chen, Sairam Sundaresan, Anthony Sarah
In this paper, we demonstrate that by using multi-objective search algorithms paired with lightly trained predictors, we can efficiently search for both the sub-network architecture and the corresponding quantization policy and outperform their respective baselines across different performance objectives such as accuracy, model size, and latency.
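A core ingredient of such multi-objective search is retaining only non-dominated candidates. A minimal Pareto-front filter (illustrative only; the objective pairs below are hypothetical, with every objective minimized, e.g. model size and latency) looks like:

```python
def pareto_front(points):
    """Return the points not dominated by any other point, where q
    dominates p if q <= p in every (minimized) objective and q != p."""
    return [p for p in points
            if not any(q != p and all(qi <= pi for qi, pi in zip(q, p))
                       for q in points)]

# Hypothetical (model-size, latency) pairs for four candidate sub-networks
candidates = [(1, 2), (2, 1), (2, 2), (3, 3)]
pareto_front(candidates)  # → [(1, 2), (2, 1)]
```

The search loop would evaluate the lightly trained predictors on each candidate and keep only this front, trading accuracy against size and latency instead of optimizing a single scalar.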
no code implementations • 27 Oct 2023 • Daniela Elia, Fang Chen, Didar Zowghi, Marian-Andrei Rizoiu
The fast adoption of new technologies forces companies to continuously adapt their operations, making it harder to predict workforce requirements.
no code implementations • 29 Sep 2023 • Yiqiao Li, Jianlong Zhou, Yifei Dong, Niusha Shafiabady, Fang Chen
Graph neural networks (GNNs) have proven their efficacy in a variety of real-world applications, but their underlying mechanisms remain a mystery.
no code implementations • 13 Jul 2023 • Michael James Horry, Subrata Chakraborty, Biswajeet Pradhan, Manoranjan Paul, Jing Zhu, Prabal Datta Barua, U. Rajendra Acharya, Fang Chen, Jianlong Zhou
The proposed algorithm achieved excellent generalization on an external dataset, with a sensitivity of 77% at a false positive rate of 7.6.
no code implementations • 18 May 2023 • Jianlong Zhou, Heimo Müller, Andreas Holzinger, Fang Chen
Large language models, e.g., ChatGPT, are currently contributing enormously to making artificial intelligence even more popular, especially among the general population.
no code implementations • 23 Mar 2023 • Zhe Wang, Aladine Chetouani, Yung Hsin Chen, Yuhua Ru, Fang Chen, Mohamed Jarraya, Fabian Bauer, Liping Zhang, Didier Hans, Rachid Jennane
Knee Osteoarthritis (KOA) is a prevalent musculoskeletal disorder that severely impacts mobility and quality of life, particularly among older adults.
no code implementations • 13 Mar 2023 • Hassan Gharoun, Fereshteh Momenifar, Fang Chen, Amir H. Gandomi
Despite its astounding success in learning deeper multi-dimensional data, the performance of deep learning declines on new unseen tasks mainly due to its focus on same-distribution prediction.
1 code implementation • 26 Feb 2023 • Zhe Wang, Aladine Chetouani, Mohamed Jarraya, Yung Hsin Chen, Yuhua Ru, Fang Chen, Fabian Bauer, Liping Zhang, Didier Hans, Rachid Jennane
These findings highlight the potential of KECAE as a robust tool for augmenting medical datasets in early KOA detection.
no code implementations • 3 Jan 2023 • Boyuan Zheng, Jianlong Zhou, Fang Chen
Imitation learning demonstrates remarkable performance in various domains.
no code implementations • 3 Jan 2023 • Boyuan Zheng, Jianlong Zhou, Chunjie Liu, Yiqiao Li, Fang Chen
As one of the prevalent methods to achieve automation systems, Imitation Learning (IL) presents a promising performance in a wide range of domains.
no code implementations • 30 Dec 2022 • Yiqiao Li, Jianlong Zhou, Boyuan Zheng, Fang Chen
With the rapid deployment of graph neural network (GNN) based techniques into a wide range of applications such as link prediction, node classification, and graph classification, the explainability of GNNs has become an indispensable component for predictive and trustworthy decision-making.
no code implementations • 19 Dec 2022 • Fang Chen, Heiko Balzter, Feixiang Zhou, Peng Ren, Huiyu Zhou
In this paper, we develop an effective segmentation framework named DGNet, which performs oil spill segmentation by incorporating the intrinsic distribution of backscatter values in SAR images.
1 code implementation • Advances in Neural Information Processing Systems 2022 • Yu Ding, Lei Wang, Bin Liang, Shuming Liang, Yang Wang, Fang Chen
With the images output by the encoder-decoder network, another classifier is designed to learn the domain-invariant features to conduct image classification.
Ranked #24 on Domain Generalization on PACS
no code implementations • 16 Sep 2022 • Fang Chen, Gourav Datta, Souvik Kundu, Peter Beerel
With the aggressive down-sampling of the activation maps in the initial layers (providing up to 22x reduction in memory consumption), our approach achieves 1.43% higher test accuracy compared to SOTA techniques with iso-memory footprints.
no code implementations • 12 Aug 2022 • Zhongyan Zhang, Lei Wang, Yang Wang, Luping Zhou, Jianjia Zhang, Peng Wang, Fang Chen
Although achieving promising results, this approach is restricted by two issues: 1) the domain gap between benchmark datasets and the dataset of a given retrieval task; 2) the required auxiliary dataset cannot be readily obtained.
1 code implementation • 7 Aug 2022 • Feixiang Zhou, Xinyu Yang, Fang Chen, Long Chen, Zheheng Jiang, Hui Zhu, Reiko Heckel, Haikuan Wang, Minrui Fei, Huiyu Zhou
Furthermore, we design a novel Interaction-Aware Transformer (IAT) to dynamically learn the graph-level representation of social behaviours and update the node-level representation, guided by our proposed interaction-aware self-attention mechanism.
no code implementations • 1 Aug 2022 • Yixuan Zhang, Feng Zhou, Zhidong Li, Yang Wang, Fang Chen
In other words, the fair pre-processing methods ignore the discrimination encoded in the labels either during the learning procedure or the evaluation stage.
no code implementations • 26 Jul 2022 • Yiqiao Li, Jianlong Zhou, Sunny Verma, Fang Chen
Graph neural networks (GNNs) have demonstrated a significant boost in prediction performance on graph data.
no code implementations • 15 Jun 2022 • Fang Chen, Ravi Samtaney
Identification of different types of MHD waves is an important and challenging task in such complex wave patterns.
1 code implementation • 10 May 2022 • Artur Grigorev, Adriana-Simona Mihaita, Seunghyeon Lee, Fang Chen
Predicting the duration of traffic incidents is a challenging task due to the stochastic nature of events.
no code implementations • 29 Sep 2021 • Fangcen Liu, Chenqiang Gao, Fang Chen, Deyu Meng, WangMeng Zuo, Xinbo Gao
We adopt the self-attention mechanism of the transformer to learn the interaction information of image features in a larger range.
no code implementations • 14 Sep 2021 • Fang Chen, Te Wu, Long Wang
Here, we use a newly proposed state-clustering method to theoretically analyze the evolutionary dynamics of two representative ZD strategies: generous ZD strategies and extortionate ZD strategies.
no code implementations • 24 Aug 2021 • Fang Chen, Te Wu, Guocheng Wang, Long Wang
In this paper, we propose a new method, namely, the state-clustering method to calculate the long-term payoffs in repeated games.
1 code implementation • 13 Aug 2021 • Fang Chen, Chenqiang Gao, Fangcen Liu, Yue Zhao, Yuxi Zhou, Deyu Meng, WangMeng Zuo
A local patch network (LPNet) with global attention is proposed in this paper to detect small targets by jointly considering the global and local properties of infrared small target images.
no code implementations • 7 Jul 2021 • Yixuan Zhang, Feng Zhou, Zhidong Li, Yang Wang, Fang Chen
Therefore, we propose a Bias-Tolerant Fair Regularized Loss (B-FARL), which tries to regain the benefits using data affected by label bias and selection bias.
no code implementations • 23 Jun 2021 • Boyuan Zheng, Sunny Verma, Jianlong Zhou, Ivor Tsang, Fang Chen
Imitation learning aims to extract knowledge from human experts' demonstrations or artificially created agents in order to replicate their behaviors.
no code implementations • 15 Apr 2021 • Jianlong Zhou, Weidong Huang, Fang Chen
The dependence of ML models with dynamic number of features is encoded into the structure of visualisation, where ML models and their dependent features are directly revealed from related line connections.
no code implementations • 15 Apr 2021 • Shuiqiao Yang, Sunny Verma, Borui Cai, Jiaojiao Jiang, Kun Yu, Fang Chen, Shui Yu
Recent works for attributed network clustering utilize graph convolution to obtain node embeddings and simultaneously perform clustering assignments on the embedding space.
no code implementations • 11 Mar 2021 • Tuo Mao, Adriana-Simona Mihaita, Fang Chen, Hai L. Vu
Lastly, we propose a new algorithm, BGA-ML, combining the GA algorithm and the extreme-gradient decision tree, the best-performing regressor, in a single optimization framework.
no code implementations • 22 Dec 2020 • Ben-Yang Li, Fang Chen, Heng-Na Xiong, Ling Tang, Ju-Xiang Shao, Ze-Jin Yang
We conducted extensive research on the typical nanolaminate Mn+1AXn (n = 1, 2, 3) ceramics, focusing on structural stability. The phase transition pressure of Ti2GaN (160 GPa) is far higher than that of Zr2GaN (92 GPa), indicating a strong dependence on the M element within the same group, whereas Zr2AlN (98 GPa) has a value similar to that of Zr2GaN, indicating a weak dependence on the A element.
Materials Science
1 code implementation • 1 Dec 2020 • Feixiang Zhou, Zheheng Jiang, Zhihua Liu, Fang Chen, Long Chen, Lei Tong, Zhile Yang, Haikuan Wang, Minrui Fei, Ling Li, Huiyu Zhou
However, quantifying mouse behaviours from videos or images remains a challenging problem, where pose estimation plays an important role in describing mouse behaviours.
1 code implementation • 16 Oct 2020 • Sunny Verma, Jiwei Wang, Zhefeng Ge, Rujia Shen, Fan Jin, Yang Wang, Fang Chen, Wei Liu
In this research, we first propose a common network to discover both intra-modal and inter-modal dynamics by utilizing basic LSTMs and tensor-based convolution networks.
no code implementations • ECAI 2020 • Sixing Wu, Fang Chen, Fangzhao Wu, Yongfeng Huang and Xing Li
In this paper, we propose a multi-task neural network to perform emotion-cause pair extraction in a unified model.
no code implementations • 22 Jun 2020 • Jianlong Zhou, Shuiqiao Yang, Chun Xiao, Fang Chen
In this paper, we exploit the massive text data posted by Twitter users to analyse the sentiment dynamics of people living in the state of New South Wales (NSW) in Australia during the pandemic period.
no code implementations • 5 Jun 2020 • Dilusha Weeraddana, Bin Liang, Zhidong Li, Yang Wang, Fang Chen, Livia Bonazzi, Dean Phillips, Nitin Saxena
Data61 and Western Water worked collaboratively to apply engineering expertise and Machine Learning tools to find a cost-effective solution to the pipe failure problem in the region west of Melbourne, where on average 400 water main failures occur per year.
no code implementations • 8 Jan 2020 • Thanh Tung Khuat, Fang Chen, Bogdan Gabrys
This paper proposes an improved version of the current online learning algorithm for a general fuzzy min-max neural network (GFMM) to tackle existing issues concerning expansion and contraction steps as well as the way of dealing with unseen data located on decision boundaries.
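For readers unfamiliar with min-max networks: each neuron is a hyperbox [v, w] with a membership function equal to 1 inside the box and decaying with distance outside it. The sketch below is a simplified toy version of such a membership function, not the exact GFMM formulation from the paper:

```python
def hyperbox_membership(x, v, w, gamma=1.0):
    """Toy hyperbox membership: 1.0 for points inside the box [v, w],
    decreasing linearly with per-dimension distance outside it
    (gamma controls the decay rate). Simplified stand-in for GFMM."""
    m = 1.0
    for xi, vi, wi in zip(x, v, w):
        outside = max(0.0, vi - xi, xi - wi)  # distance beyond the box edge
        m = min(m, max(0.0, 1.0 - gamma * outside))
    return m

hyperbox_membership([0.5, 0.5], [0.2, 0.2], [0.8, 0.8])  # → 1.0 (inside)
```

The expansion and contraction steps the entry refers to grow a hyperbox toward new samples and shrink overlapping boxes, with unseen boundary points scored by exactly this kind of membership.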
no code implementations • 29 Oct 2019 • Feng Zhou, Zhidong Li, Xuhui Fan, Yang Wang, Arcot Sowmya, Fang Chen
In this paper, we consider the sigmoid Gaussian Hawkes process model: the baseline intensity and triggering kernel of Hawkes process are both modeled as the sigmoid transformation of random trajectories drawn from Gaussian processes (GP).
no code implementations • 12 Sep 2019 • Fang Chen, Hong Wan, Hua Cai, Guang Cheng
Machine learning and blockchain are two of the most noticeable technologies in recent years.
no code implementations • 29 May 2019 • Feng Zhou, Zhidong Li, Xuhui Fan, Yang Wang, Arcot Sowmya, Fang Chen
In classical Hawkes process, the baseline intensity and triggering kernel are assumed to be a constant and parametric function respectively, which limits the model flexibility.
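For context, the classical Hawkes intensity with a constant baseline and a parametric (here exponential) triggering kernel is commonly written as follows; the symbols are the standard textbook ones, not notation taken from the paper:

```latex
\lambda(t) \;=\; \mu \;+\; \sum_{t_i < t} \phi(t - t_i),
\qquad \phi(\tau) \;=\; \alpha\,\beta\, e^{-\beta \tau},\quad \tau > 0,
```

where $\mu > 0$ is the constant baseline and $(\alpha, \beta)$ parameterize the exponential kernel. Fixing these parametric forms is precisely the flexibility limitation the entry above sets out to remove.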
1 code implementation • 29 May 2019 • Thanh Tung Khuat, Fang Chen, Bogdan Gabrys
Motivated by the practical demands for simplification of data towards being consistent with human thinking and problem solving as well as tolerance of uncertainty, information granules are becoming important entities in data processing at different levels of data abstraction.
no code implementations • 14 May 2019 • Weida Li, Mingxia Liu, Fang Chen, Daoqiang Zhang
Aggregating multi-subject functional magnetic resonance imaging (fMRI) data is indispensable for generating valid and general inferences from patterns distributed across human brains.
no code implementations • CONLL 2018 • Mohammad Ebrahimi, Elaheh ShafieiBavani, Raymond Wong, Fang Chen
Locations of social media users are important to many applications such as rapid disaster response, targeted advertisement, and news recommendation.
no code implementations • EMNLP 2018 • Elaheh ShafieiBavani, Mohammad Ebrahimi, Raymond Wong, Fang Chen
ROUGE is one of the first and most widely used evaluation metrics for text summarization.
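For reference, ROUGE-1 is the unigram-overlap member of this metric family. A minimal clipped-count implementation (illustrative only, without stemming, stopword handling, or multi-reference support) is:

```python
from collections import Counter

def rouge1(candidate, reference):
    """ROUGE-1 recall, precision, and F1 from clipped unigram overlap."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Clip each candidate count at the reference count before summing
    overlap = sum(min(cand[w], ref[w]) for w in cand)
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    f1 = 0.0 if recall + precision == 0 else \
        2 * recall * precision / (recall + precision)
    return recall, precision, f1

r, p, f = rouge1("the cat sat on the mat", "the cat is on the mat")
# 5 of 6 reference unigrams matched → recall = precision = f1 = 5/6
```

The clipping step is what keeps a candidate from gaming recall by repeating a reference word more often than it appears.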
no code implementations • COLING 2018 • Elaheh ShafieiBavani, Mohammad Ebrahimi, Raymond Wong, Fang Chen
We present a new summary evaluation approach that does not require human model summaries.
no code implementations • 20 Oct 2017 • Elaheh ShafieiBavani, Mohammad Ebrahimi, Raymond Wong, Fang Chen
ROUGE is one of the first and most widely used evaluation metrics for text summarization.
2 code implementations • CVPR 2017 • Kan Chen, Trung Bui, Fang Chen, Zhaowen Wang, Ram Nevatia
According to the intent of query, attention mechanism can be introduced to adaptively balance the importance of different modalities.
no code implementations • COLING 2016 • Elaheh ShafieiBavani, Mohammad Ebrahimi, Raymond Wong, Fang Chen
When making clinical decisions, practitioners need to rely on the most relevant evidence available.
no code implementations • NeurIPS 2016 • Matt Zhang, Peng Lin, Ting Guo, Yang Wang, Fang Chen
The proposed approach can simultaneously model both the observations and arrival times of temporal events, and determine the number of latent states from data.
no code implementations • 23 May 2016 • Xuhui Fan, Bin Li, Yi Wang, Yang Wang, Fang Chen
Due to constraints of partition strategy, existing models may cause unnecessary dissections in sparse regions when fitting data in dense regions.
no code implementations • 7 May 2016 • Elaheh ShafieiBavani, Mohammad Ebrahimi, Raymond Wong, Fang Chen
In this paper, we propose an effective approach to enhance the word graph-based MSC and tackle the issue that most of the state-of-the-art MSC approaches are confronted with: i.e., improving both informativity and grammaticality at the same time.
no code implementations • CVPR 2014 • Bang Zhang, Yi Wang, Yang Wang, Fang Chen
Many prevalent multi-class classification approaches can be unified and generalized by the output coding framework which usually consists of three phases: (1) coding, (2) learning binary classifiers, and (3) decoding.
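The three-phase output coding framework mentioned here can be illustrated with a minimal decoder sketch; the coding matrix and Hamming decoding below are the textbook one-vs-all instance, not the paper's generalized scheme:

```python
def ecoc_decode(code_matrix, binary_outputs):
    """Phase (3) of output coding: pick the class whose codeword is
    closest in Hamming distance to the binary classifiers' outputs."""
    dists = [sum(bit != out for bit, out in zip(codeword, binary_outputs))
             for codeword in code_matrix]
    return dists.index(min(dists))

# Phase (1): a one-vs-all coding matrix for 3 classes (rows = codewords);
# phase (2) would train one binary classifier per column.
M = [[+1, -1, -1],
     [-1, +1, -1],
     [-1, -1, +1]]

ecoc_decode(M, [-1, +1, -1])  # → class index 1
```

Richer codes (e.g. error-correcting ones with larger pairwise Hamming distance between rows) let the decoder recover from a few binary-classifier mistakes, which is the unification the framework above exploits.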