no code implementations • 9 May 2024 • Chen Chen, Kai Qiao, Jie Yang, Jian Chen, Bin Yan
In this work, the teacher-guided MIM pretraining model is introduced into PCB CT image element segmentation for the first time, and a multi-scale local visual field extraction (MVE) module is proposed to reduce redundancy by focusing on local visual fields.
no code implementations • 5 Jul 2023 • Shuhao Shi, Kai Qiao, Zhengyan Wang, Jie Yang, Baojie Song, Jian Chen, Bin Yan
Recently, an increasing number of GNN-based methods have been proposed for bot detection.
1 code implementation • 14 Apr 2023 • Shuhao Shi, Kai Qiao, Jie Yang, Baojie Song, Jian Chen, Bin Yan
This paper proposes a Random Forest boosted Graph Neural Network for social bot detection, called RF-GNN, which employs graph neural networks (GNNs) as the base classifiers to construct a random forest, effectively combining the advantages of ensemble learning and GNNs to improve the accuracy and robustness of the model.
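The following is a minimal sketch of the general idea of combining bagging-style ensembling with GNN base classifiers, assuming bootstrap sampling of the labelled nodes and random feature subsampling with soft voting; it is an illustration only, not the released RF-GNN implementation.

```python
# Sketch: a random-forest-style ensemble of GNN base classifiers (illustration only).
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    """Two-layer GCN used as a base classifier."""
    def __init__(self, in_dim, hidden, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, out_dim)

    def forward(self, x, edge_index):
        return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

data = Planetoid(root="data", name="Cora")[0]
num_classes = int(data.y.max()) + 1
train_idx = data.train_mask.nonzero(as_tuple=True)[0]

probs = []
for _ in range(5):  # number of base classifiers
    # Bagging: bootstrap-sample the labelled nodes for this base classifier.
    boot = train_idx[torch.randint(len(train_idx), (len(train_idx),))]
    # Feature subsampling: keep a random subset of input features, zero the rest.
    keep = (torch.rand(data.num_features) < 0.7).float()
    x = data.x * keep

    model = GCN(data.num_features, 64, num_classes)
    opt = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
    for _ in range(100):
        opt.zero_grad()
        out = model(x, data.edge_index)
        F.cross_entropy(out[boot], data.y[boot]).backward()
        opt.step()
    with torch.no_grad():
        probs.append(F.softmax(model(x, data.edge_index), dim=1))

# Soft voting over the base classifiers plays the role of the forest's majority vote.
pred = torch.stack(probs).mean(0).argmax(1)
acc = (pred[data.test_mask] == data.y[data.test_mask]).float().mean()
print(f"ensemble test accuracy: {acc:.3f}")
```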
1 code implementation • 14 Feb 2023 • Shuhao Shi, Kai Qiao, Jie Yang, Baojie Song, Jian Chen, Bin Yan
The proposed framework is evaluated using three real-world bot detection benchmark datasets, and it consistently exhibits superiority over the baselines.
1 code implementation • 3 Jan 2023 • Shuhao Shi, Kai Qiao, Jian Chen, Shuai Yang, Jie Yang, Baojie Song, Linyuan Wang, Bin Yan
However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research.
Ranked #1 on Stance Detection on MGTAB
no code implementations • 8 May 2022 • Shuhao Shi, Jian Chen, Kai Qiao, Shuai Yang, Linyuan Wang, Bin Yan
The Graph Convolutional Networks (GCNs) have achieved excellent results in node classification tasks, but the model's performance at low label rates is still unsatisfactory.
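As a concrete picture of the low-label-rate setting, the hypothetical helper below keeps only k labelled training nodes per class; it illustrates the problem setup only, not the method proposed in the paper.

```python
# Hypothetical helper: build a sparse training mask with at most k labels per class.
import torch
from torch_geometric.datasets import Planetoid

def low_label_mask(y, train_mask, k=3, seed=0):
    """Return a mask keeping at most k labelled training nodes per class."""
    g = torch.Generator().manual_seed(seed)
    mask = torch.zeros_like(train_mask)
    for c in y.unique():
        idx = ((y == c) & train_mask).nonzero(as_tuple=True)[0]
        idx = idx[torch.randperm(len(idx), generator=g)[:k]]
        mask[idx] = True
    return mask

data = Planetoid(root="data", name="Cora")[0]
sparse_mask = low_label_mask(data.y, data.train_mask, k=3)
print(int(sparse_mask.sum()), "labelled nodes instead of", int(data.train_mask.sum()))
```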
no code implementations • 29 Sep 2021 • Shuhao Shi, Pengfei Xie, Xu Luo, Kai Qiao, Linyuan Wang, Jian Chen, Bin Yan
AMC-GNN generates two graph views by data augmentation and compares the output embeddings of different layers of Graph Neural Network encoders to obtain feature representations, which can be used for downstream tasks.
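A rough sketch of layer-wise graph contrastive learning in this spirit is shown below; the augmentations and the InfoNCE-style loss are simple placeholders, and the adaptive multi-layer weighting of AMC-GNN is not reproduced.

```python
# Sketch: two augmented views, per-layer embeddings compared with an InfoNCE-style loss.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class Encoder(torch.nn.Module):
    def __init__(self, in_dim, hidden, num_layers=2):
        super().__init__()
        dims = [in_dim] + [hidden] * num_layers
        self.convs = torch.nn.ModuleList(
            [GCNConv(dims[i], dims[i + 1]) for i in range(num_layers)])

    def forward(self, x, edge_index):
        outs = []
        for conv in self.convs:
            x = F.relu(conv(x, edge_index))
            outs.append(x)
        return outs  # embeddings from every layer

def augment(x, edge_index, feat_p=0.2, edge_p=0.2):
    """Random feature masking and edge dropping (placeholder augmentations)."""
    x = x * (torch.rand_like(x) > feat_p).float()
    keep = torch.rand(edge_index.size(1)) > edge_p
    return x, edge_index[:, keep]

def nt_xent(z1, z2, tau=0.5):
    """InfoNCE loss: the same node in the two views is the positive pair."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

def contrastive_loss(encoder, x, edge_index):
    v1 = encoder(*augment(x, edge_index))
    v2 = encoder(*augment(x, edge_index))
    # Compare each layer's output across the two views and average.
    return sum(nt_xent(a, b) for a, b in zip(v1, v2)) / len(v1)

# usage: loss = contrastive_loss(Encoder(data.num_features, 64), data.x, data.edge_index)
```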
no code implementations • 26 Jun 2021 • Shuai Yang, Kai Qiao
In this paper, we propose a novel encoder, called ShapeEditor, for high-resolution, realistic and high-fidelity face exchange.
no code implementations • 3 Jun 2021 • Pengfei Xie, Linyuan Wang, Ruoxi Qin, Kai Qiao, Shuhao Shi, Guoen Hu, Bin Yan
In this paper, we propose a new gradient iteration framework, which redefines the relationship between the above three.
no code implementations • 25 May 2021 • S. Shi, Kai Qiao, Shuai Yang, L. Wang, J. Chen, Bin Yan
Traditional methods for dealing with imbalanced datasets, such as resampling, reweighting, and synthetic sampling, are no longer applicable to GNNs.
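For reference, the snippet below sketches the "reweighting" baseline mentioned above as it would be applied to a GNN node classifier; the abstract argues that such traditional remedies do not carry over well to GNNs.

```python
# Sketch of the classical reweighting baseline for an imbalanced node classifier.
import torch
import torch.nn.functional as F

def reweighted_loss(logits, y, train_mask):
    """Cross-entropy with inverse-frequency class weights over the labelled nodes."""
    counts = torch.bincount(y[train_mask], minlength=logits.size(1)).float()
    weights = counts.sum() / (counts + 1e-8)        # rare classes get larger weight
    weights = weights / weights.sum() * logits.size(1)
    return F.cross_entropy(logits[train_mask], y[train_mask], weight=weights)
```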
no code implementations • 22 Oct 2020 • Zifei Zhang, Kai Qiao, Jian Chen, Ningning Liang
Experimentally, we show that the attack success rate (ASR) of our adversarial attack reaches 58.38% on average, outperforming the state-of-the-art method by 12.1% on normally trained models and by 11.13% on adversarially trained models.
no code implementations • 26 Mar 2020 • Kai Qiao, Chi Zhang, Jian Chen, Linyuan Wang, Li Tong, Bin Yan
Besides the deep network structure, the task and the corresponding large dataset are also important for deep network models, but have been neglected by previous studies.
no code implementations • 13 Mar 2020 • Kai Qiao, Jian Chen, Linyuan Wang, Chi Zhang, Li Tong, Bin Yan
In this study, we propose a new GAN-based Bayesian visual reconstruction method (GAN-BVRM) that includes a classifier to decode categories from fMRI data, a pre-trained conditional generator to generate natural images of the specified categories, and a set of encoding models and an evaluator to evaluate the generated images.
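A schematic sketch of such a decode-generate-evaluate loop is given below; the classifier, conditional generator, and voxel-wise encoding model are assumed to be supplied as callables with hypothetical interfaces, so this is an outline of the pipeline rather than the authors' implementation.

```python
# Sketch: pick the generated image whose predicted brain response best matches the fMRI.
import torch

def reconstruct(fmri, classifier, generator, encoding_model,
                n_candidates=100, latent_dim=128):
    # 1. Decode the stimulus category from the measured fMRI pattern.
    category = classifier(fmri).argmax(dim=-1)

    best_img, best_score = None, -float("inf")
    for _ in range(n_candidates):
        # 2. Sample a natural image of that category from the conditional generator.
        z = torch.randn(1, latent_dim)
        img = generator(z, category)
        # 3. Predict the brain response with the encoding model and score the
        #    candidate by its correlation with the measured response.
        pred = encoding_model(img)
        score = torch.corrcoef(torch.stack([pred.flatten(), fmri.flatten()]))[0, 1]
        if score > best_score:
            best_img, best_score = img, score
    return best_img, best_score
```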
no code implementations • 1 Feb 2020 • Zifei Zhang, Kai Qiao, Lingyun Jiang, Linyuan Wang, Bin Yan
To alleviate the tradeoff between the attack success rate and image fidelity, we propose a method named AdvJND, which adds visual model coefficients, i.e., just-noticeable-difference (JND) coefficients, to the constraint of the distortion function when generating adversarial examples.
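The sketch below illustrates the general idea of modulating an adversarial perturbation by a just-noticeable-difference map; the JND model here is a crude local-contrast placeholder, not the coefficients used in AdvJND.

```python
# Sketch: FGSM-style step whose per-pixel budget is weighted by a placeholder JND map.
import torch
import torch.nn.functional as F

def simple_jnd(x):
    """Placeholder JND map in [0, 1]: higher local contrast -> larger tolerated change."""
    gray = x.mean(dim=1, keepdim=True)
    local_mean = F.avg_pool2d(gray, 3, stride=1, padding=1)
    contrast = (gray - local_mean).abs()
    return (contrast / (contrast.amax() + 1e-8)).expand_as(x)

def jnd_weighted_fgsm(model, x, y, eps=8 / 255):
    """Single gradient-sign step, scaled elementwise by the JND map."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    x_adv = x + eps * simple_jnd(x).detach() * grad.sign()
    return x_adv.clamp(0, 1).detach()
```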
no code implementations • 17 Sep 2019 • Wanting Yu, Hongyi Yu, Lingyun Jiang, Mengli Zhang, Kai Qiao
The proposed model, comprising a texture transfer network (TTN) and an auxiliary defense generative adversarial network (GAN), is called Human-perception Auxiliary Defense GAN (HAD-GAN).
1 code implementation • 27 Jul 2019 • Kai Qiao, Chi Zhang, Jian Chen, Linyuan Wang, Li Tong, Bin Yan
Recently, visual encoding based on functional magnetic resonance imaging (fMRI) has achieved many advances with the rapid development of deep network computation.
no code implementations • 12 Apr 2019 • Lingyun Jiang, Kai Qiao, Ruoxi Qin, Linyuan Wang, Jian Chen, Haibing Bu, Bin Yan
In deep-learning-based image classification, adversarial examples, i.e., inputs with intentionally added small-magnitude perturbations, can mislead deep neural networks (DNNs) into incorrect predictions, which shows that DNNs are vulnerable to such inputs.
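A minimal FGSM-style illustration of this phenomenon is given below, using an untrained placeholder model and a random input so the snippet is self-contained; it demonstrates the definition of an adversarial example rather than the attack studied in this paper.

```python
# Sketch: a perturbation bounded by a small epsilon can flip the model's prediction.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(num_classes=10).eval()   # untrained placeholder classifier
x = torch.rand(1, 3, 32, 32)              # stand-in image in [0, 1]
y = model(x).argmax(dim=1)                # the model's current prediction

x.requires_grad_(True)
loss = F.cross_entropy(model(x), y)
grad, = torch.autograd.grad(loss, x)
x_adv = (x + 8 / 255 * grad.sign()).clamp(0, 1)   # small L_inf perturbation

print("prediction changed:", bool(model(x_adv).argmax(dim=1) != y))
```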
no code implementations • 19 Mar 2019 • Kai Qiao, Jian Chen, Linyuan Wang, Chi Zhang, Lei Zeng, Li Tong, Bin Yan
Despite the hierarchically similar representations of deep networks and human vision, in the human visual system visual information flows from primary visual cortices to higher visual cortices and vice versa, in bottom-up and top-down manners, respectively.
Neurons and Cognition
no code implementations • 23 Feb 2019 • Chi Zhang, Kai Qiao, Linyuan Wang, Li Tong, Guoen Hu, Ruyuan Zhang, Bin Yan
In this framework, we employ the transfer learning technique to incorporate a pre-trained DNN (i.e., AlexNet) and train a nonlinear mapping from visual features to brain activity.
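A minimal sketch of this encoding-model setup might look as follows, with a frozen pre-trained AlexNet providing visual features and a small MLP standing in for the nonlinear mapping; the stimulus and voxel tensors are random placeholders.

```python
# Sketch: map frozen AlexNet features to voxel responses with a small MLP.
import torch
import torch.nn as nn
from torchvision.models import alexnet

backbone = alexnet(weights="IMAGENET1K_V1").features.eval()
for p in backbone.parameters():
    p.requires_grad_(False)                      # transfer learning: freeze AlexNet

def features(imgs):
    return backbone(imgs).flatten(1)             # conv features as visual features

n_voxels = 500
imgs = torch.rand(64, 3, 224, 224)               # placeholder stimuli
voxels = torch.randn(64, n_voxels)               # placeholder fMRI responses

feat_dim = features(imgs[:1]).shape[1]
mapper = nn.Sequential(nn.Linear(feat_dim, 1024), nn.ReLU(), nn.Linear(1024, n_voxels))
opt = torch.optim.Adam(mapper.parameters(), lr=1e-3)

for epoch in range(10):
    pred = mapper(features(imgs))
    loss = nn.functional.mse_loss(pred, voxels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```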
no code implementations • 16 Jan 2018 • Chi Zhang, Kai Qiao, Linyuan Wang, Li Tong, Ying Zeng, Bin Yan
Without semantic prior information, we present a novel method to reconstruct natural images from fMRI signals of the human visual cortex based on the computational model of the convolutional neural network (CNN).
no code implementations • 2 Jan 2018 • Kai Qiao, Chi Zhang, Linyuan Wang, Bin Yan, Jian Chen, Lei Zeng, Li Tong
We first employed the CapsNet to train the nonlinear mapping from image stimuli to high-level capsule features, and from high-level capsule features back to image stimuli, in an end-to-end manner.
no code implementations • 29 Jul 2016 • Hanming Zhang, Liang Li, Kai Qiao, Linyuan Wang, Bin Yan, Lei LI, Guoen Hu
Qualitative and quantitative evaluations of the experimental results indicate that the proposed method shows stable and promising performance on artifact reduction and detail recovery for limited-angle tomography.