no code implementations • 2 Jun 2024 • Fan Xu, Nan Wang, Hao Wu, Xuezhi Wen, Dalin Zhang, Siyang Lu, Binyong Li, Wei Gong, Hai Wan, Xibin Zhao
However, current methods are constrained by their receptive fields and struggle to learn global features within graphs.
no code implementations • 11 Dec 2023 • Fan Xu, Nan Wang, Hao Wu, Xuezhi Wen, Xibin Zhao, Hai Wan
This detector includes a hybrid filtering module and a local environmental constraint module, which address the heterophily and label-utilization problems, respectively.
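The abstract does not specify how the hybrid filtering module is built; as a hedged illustration only, the sketch below shows one common way to combine a low-pass and a high-pass graph filter for heterophilous graphs. The function name `hybrid_filter` and the mixing weight `alpha` are assumptions, not the paper's design.

```python
# A minimal sketch (not the paper's exact module): mixing a low-pass and a
# high-pass graph filter so that features useful under heterophily are kept.
import torch

def hybrid_filter(x, adj, alpha=0.5):
    """x: [N, F] node features, adj: [N, N] dense adjacency without self-loops."""
    deg = adj.sum(dim=1).clamp(min=1.0)
    d_inv_sqrt = deg.pow(-0.5)
    a_norm = d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)  # D^-1/2 A D^-1/2
    low = a_norm @ x             # low-pass: smooths features over neighbors
    high = x - a_norm @ x        # high-pass: emphasizes differences from neighbors
    return alpha * low + (1.0 - alpha) * high

# Tiny usage example on a random symmetric graph.
x = torch.randn(6, 8)
adj = (torch.rand(6, 6) > 0.5).float()
adj = ((adj + adj.t()) > 0).float().fill_diagonal_(0)
print(hybrid_filter(x, adj).shape)  # torch.Size([6, 8])
```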
no code implementations • 10 Dec 2023 • Bingjun Luo, Haowen Wang, Jinpeng Wang, Junjie Zhu, Xibin Zhao, Yue Gao
With strong robustness to illumination variations, near-infrared (NIR) imaging can be an effective and essential complement to visible (VIS) facial expression recognition in low-light or completely dark conditions.
Facial Expression Recognition (FER)
no code implementations • 10 Dec 2023 • Bingjun Luo, Zewen Wang, Jinpeng Wang, Junjie Zhu, Xibin Zhao, Yue Gao
Illumination variation has been a long-standing challenge in real-world facial expression recognition (FER).
Facial Expression Recognition (FER) +1
no code implementations • 17 Nov 2023 • Fan Xu, Nan Wang, Xuezhi Wen, Meiqi Gao, Chaoqun Guo, Xibin Zhao
Graph anomaly detection plays a crucial role in identifying exceptional instances in graph data that deviate significantly from the majority.
no code implementations • 3 Jun 2023 • Fan Xu, Nan Wang, Xibin Zhao
To address this problem, we propose GALDetector, an anomaly detection method that combines global and local information derived from observed normal samples.
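As a rough illustration of combining global and local information learned from observed normal samples (not GALDetector's actual scoring), the sketch below mixes a distance-to-normal-centroid score with a neighborhood-deviation score; the function name, the weighting scheme, and the choice of scores are hypothetical.

```python
# A minimal sketch: global score = distance to the centroid of observed normal
# samples; local score = deviation from each node's neighborhood mean.
import torch

def combined_anomaly_scores(x, adj, normal_idx, w=0.5):
    """x: [N, F] features, adj: [N, N] adjacency, normal_idx: indices of observed normals."""
    centroid = x[normal_idx].mean(dim=0)
    global_score = (x - centroid).norm(dim=1)          # far from the normal centroid
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    neigh_mean = (adj @ x) / deg
    local_score = (x - neigh_mean).norm(dim=1)         # deviates from its neighborhood
    return w * global_score + (1 - w) * local_score
```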
no code implementations • 6 Apr 2023 • Nan Wang, Xuezhi Wen, Dalin Zhang, Xibin Zhao, Jiahui Ma, Mengxia Luo, Sen Nie, Shi Wu, Jiqiang Liu
Advanced Persistent Threats (APTs) are difficult to detect due to their long-term latency and covert, slow, multistage attack patterns.
no code implementations • 13 Mar 2023 • Zhiwei Xu, Min Zhou, Xibin Zhao, Yang Chen, Xi Cheng, Hongyu Zhang
The proposed xASTNN has three advantages.
no code implementations • 9 Oct 2022 • Xinwei Zhang, Jianwen Jiang, Yutong Feng, Zhi-Fan Wu, Xibin Zhao, Hai Wan, Mingqian Tang, Rong Jin, Yue Gao
Although a number of studies are devoted to novel category discovery, most of them assume a static setting where both labeled and unlabeled data are given at once for finding new categories.
no code implementations • CVPR 2021 • Xuancheng Zhang, Yutong Feng, Siqi Li, Changqing Zou, Hai Wan, Xibin Zhao, Yandong Guo, Yue Gao
This paper presents a view-guided solution for the task of point cloud completion.
Ranked #3 on Point Cloud Completion on ShapeNet-ViPC
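Purely as an illustrative sketch of the view-guided idea described above (a global image feature conditioning a point-cloud decoder that produces the missing geometry), and not the paper's ViPC network, the module below uses hypothetical layer sizes and names.

```python
# A minimal sketch, assuming a PointNet-style point encoder and a precomputed
# 512-d image feature from some view encoder; all dimensions are illustrative.
import torch
import torch.nn as nn

class ViewGuidedCompleter(nn.Module):
    def __init__(self, pc_dim=256, view_dim=256, n_out=1024):
        super().__init__()
        self.pc_encoder = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, pc_dim))
        self.view_proj = nn.Linear(512, view_dim)        # project the image feature
        self.decoder = nn.Sequential(
            nn.Linear(pc_dim + view_dim, 512), nn.ReLU(), nn.Linear(512, n_out * 3))
        self.n_out = n_out

    def forward(self, partial_pc, view_feat):
        # partial_pc: [B, N, 3]; view_feat: [B, 512]
        per_point = self.pc_encoder(partial_pc)           # [B, N, pc_dim]
        global_pc = per_point.max(dim=1).values           # permutation-invariant pooling
        fused = torch.cat([global_pc, self.view_proj(view_feat)], dim=-1)
        return self.decoder(fused).view(-1, self.n_out, 3)  # completed point set
```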
no code implementations • SIGKDD 2020 • Shuyi Ji, Yifan Feng, Rongrong Ji, Xibin Zhao, Wanwan Tang, Yue Gao
Second, the hypergraph structure is employed for modeling users and items with explicit hybrid high-order correlations.
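For context on hypergraph modeling, the snippet below shows a standard hypergraph convolution over an incidence matrix, which is one generic way high-order correlations among users and items can be propagated; it is a sketch of that standard operator, not this paper's model.

```python
# A minimal hypergraph convolution: X' = ReLU(D_v^{-1/2} H D_e^{-1} H^T D_v^{-1/2} X Theta),
# where H is the node-by-hyperedge incidence matrix.
import torch

def hypergraph_conv(x, H, theta):
    """x: [N, F] node features, H: [N, E] incidence matrix, theta: [F, F'] weights."""
    dv = H.sum(dim=1).clamp(min=1.0)       # node degrees
    de = H.sum(dim=0).clamp(min=1.0)       # hyperedge degrees
    Hn = dv.pow(-0.5).unsqueeze(1) * H     # D_v^{-1/2} H
    agg = Hn @ torch.diag(1.0 / de) @ Hn.t()
    return torch.relu(agg @ x @ theta)
```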
no code implementations • 31 Mar 2020 • Siqi Li, Changqing Zou, Yipeng Li, Xibin Zhao, Yue Gao
This paper presents an end-to-end 3D convolutional network named attention-based multi-modal fusion network (AMFNet) for the semantic scene completion (SSC) task of inferring the occupancy and semantic labels of a volumetric 3D scene from single-view RGB-D images.
Ranked #14 on 3D Semantic Scene Completion on NYUv2
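As a hedged sketch of attention-based multi-modal fusion over aligned 3D feature volumes (e.g., an RGB branch and a depth branch), not the actual AMFNet architecture, the gate below decides per voxel how much of each modality to keep; layer names and sizes are assumptions.

```python
# A minimal sketch: a 1x1x1 3D-conv gate produces per-voxel attention weights
# that blend two aligned feature volumes.
import torch
import torch.nn as nn

class AttentionFusion3D(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv3d(2 * channels, channels, kernel_size=1), nn.ReLU(),
            nn.Conv3d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, feat_rgb, feat_depth):
        # feat_*: [B, C, D, H, W]; the gate is broadcast over channels
        a = self.gate(torch.cat([feat_rgb, feat_depth], dim=1))
        return a * feat_rgb + (1 - a) * feat_depth
```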
no code implementations • 2 Dec 2018 • Haoxuan You, Yifan Feng, Xibin Zhao, Changqing Zou, Rongrong Ji, Yue Gao
More specifically, based on the relation score module, the point-single-view fusion feature is first extracted by fusing the point cloud feature with each single-view feature through the point-single-view relation; the point-multi-view fusion feature is then extracted by fusing the point cloud feature with the features of different numbers of views through the point-multi-view relation.
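A minimal sketch of relation-score fusion between a point-cloud feature and a set of view features is given below; it only illustrates the weighting-and-fusing pattern described above, and the module name and layer shapes are assumptions rather than the paper's exact network.

```python
# A minimal sketch: each view gets a learned relevance score against the point
# feature, the weighted view features are pooled, and the result is fused back.
import torch
import torch.nn as nn

class RelationFusion(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, pc_feat, view_feats):
        # pc_feat: [B, D]; view_feats: [B, V, D]
        pc_rep = pc_feat.unsqueeze(1).expand_as(view_feats)
        rel = torch.softmax(self.score(torch.cat([pc_rep, view_feats], -1)), dim=1)
        multi_view = (rel * view_feats).sum(dim=1)     # relation-weighted view feature
        return self.fuse(torch.cat([pc_feat, multi_view], -1))
```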
2 code implementations • 28 Nov 2018 • Yutong Feng, Yifan Feng, Haoxuan You, Xibin Zhao, Yue Gao
However, there has been little effort to use mesh data in recent years, due to its complexity and irregularity.
no code implementations • CVPR 2018 • Yifan Feng, Zizhao Zhang, Xibin Zhao, Rongrong Ji, Yue Gao
The proposed GVCNN framework is composed of a hierarchical view-group-shape architecture, i.e., the view level, the group level, and the shape level, which are organized using a grouping strategy.
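As an illustration of the view-group-shape idea only (grouping views by a discrimination score, pooling within groups, then weighting the group descriptors), not GVCNN's exact grouping strategy, the sketch below uses hypothetical binning and weighting details.

```python
# A minimal sketch: views are binned into groups by their scores, max-pooled
# within each group, and the group descriptors are averaged with score-based weights.
import torch

def group_views(view_feats, scores, n_groups=4):
    """view_feats: [V, D] per-view features; scores: [V] in [0, 1]."""
    bins = torch.clamp((scores * n_groups).long(), max=n_groups - 1)  # group index per view
    shape_desc = torch.zeros(view_feats.size(1))
    total_w = 0.0
    for g in range(n_groups):
        mask = bins == g
        if mask.any():
            group_desc = view_feats[mask].max(dim=0).values   # intra-group view pooling
            w = scores[mask].mean()                           # group weight from its scores
            shape_desc = shape_desc + w * group_desc
            total_w = total_w + w
    return shape_desc / total_w                               # shape-level descriptor
```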