no code implementations • 17 Feb 2025 • Xuan Tong, Yang Chang, Qing Zhao, Jiawen Yu, Boyang Wang, Junxiong Lin, Yuxuan Lin, Xinji Mai, Haoran Wang, Zeng Tao, Yan Wang, Wenqiang Zhang
Experiments on the MVTecLOCO dataset confirm the efficacy of ComGEN, achieving the best AUROC score of 91.2%.
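As context for the reported metric: image-level AUROC is typically computed from per-image anomaly scores and binary ground-truth labels. A minimal sketch using scikit-learn is shown below; the scores and labels are hypothetical placeholders, not the paper's data or pipeline.

```python
# Minimal sketch (not the paper's code): image-level AUROC for an anomaly detector,
# assuming per-image anomaly scores and binary labels (1 = anomalous).
import numpy as np
from sklearn.metrics import roc_auc_score

scores = np.array([0.12, 0.87, 0.43, 0.95, 0.08])  # hypothetical anomaly scores
labels = np.array([0, 1, 0, 1, 0])                  # hypothetical ground truth

auroc = roc_auc_score(labels, scores)
print(f"AUROC: {auroc * 100:.1f}%")
```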
no code implementations • 9 Oct 2024 • Jingyang Deng, Zhengyang Shen, Boyang Wang, Lixin Su, Suqi Cheng, Ying Nie, Junfeng Wang, Dawei Yin, Jinwen Ma
The development of Long-Context Large Language Models (LLMs) has markedly advanced natural language processing by facilitating the processing of textual data across long documents and multiple corpora.
no code implementations • 22 Jul 2024 • Xinji Mai, Junxiong Lin, Haoran Wang, Zeng Tao, Yan Wang, Shaoqi Yan, Xuan Tong, Jiawen Yu, Boyang Wang, Ziheng Zhou, Qing Zhao, Shuyong Gao, Wenqiang Zhang
In the field of affective computing, fully leveraging information from a variety of sensory modalities is essential for the comprehensive understanding and processing of human emotions.
Dynamic Facial Expression Recognition
Emotion Classification
no code implementations • 8 Jul 2024 • Boyang Wang, Nikhil Sridhar, Chao Feng, Mark Van der Merwe, Adam Fishman, Nima Fazeli, Jeong Joon Park
We propose a robot learning method for communicating, planning, and executing a wide range of tasks, dubbed This&That.
no code implementations • 24 Jun 2024 • Haoran Wang, Xinji Mai, Zeng Tao, Xuan Tong, Junxiong Lin, Yan Wang, Jiawen Yu, Boyang Wang, Shaoqi Yan, Qing Zhao, Ziheng Zhou, Shuyong Gao, Wenqiang Zhang
Contemporary state-of-the-art Dynamic Facial Expression Recognition (DFER) technology achieves remarkable progress by deriving emotional mappings of facial expressions from video content, underpinned by training on voluminous datasets.
Dynamic Facial Expression Recognition
Facial Expression Recognition
no code implementations • 24 Jun 2024 • Junxiong Lin, Zeng Tao, Xuan Tong, Xinji Mai, Haoran Wang, Boyang Wang, Yan Wang, Qing Zhao, Jiawen Yu, Yuxuan Lin, Shaoqi Yan, Shuyong Gao, Wenqiang Zhang
To extract an Uncertainty-based Degradation Representation from LR images, AUDE utilizes a Self-supervised Uncertainty Contrast module with an Uncertainty Suppression Loss to suppress the inherent model uncertainty of the Degradation Extractor.
1 code implementation • 27 Apr 2024 • Mingyu Yang, Bowen Liu, Boyang Wang, Hun-Seok Kim
In the following diffusion step, DiffJSCC uses the derived multimodal features, together with channel state information such as the signal-to-noise ratio (SNR), as conditions to guide the denoising diffusion process, which converts the initial random noise to the final reconstruction.
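As a rough illustration of this conditioning scheme (not the released DiffJSCC code), the sketch below shows a denoiser that takes a noisy latent together with multimodal features and the channel SNR; all module names and dimensions are assumptions.

```python
# Minimal sketch, not the DiffJSCC implementation: one denoising step of a
# conditional diffusion model whose condition combines multimodal features
# with the channel SNR. Layer sizes are placeholders.
import torch
import torch.nn as nn

class ConditionalDenoiser(nn.Module):
    def __init__(self, latent_dim=256, feat_dim=512):
        super().__init__()
        # Condition = multimodal features + scalar SNR, projected to one vector.
        self.cond_proj = nn.Linear(feat_dim + 1, latent_dim)
        self.net = nn.Sequential(
            nn.Linear(latent_dim * 2, 512), nn.SiLU(), nn.Linear(512, latent_dim)
        )

    def forward(self, noisy_latent, multimodal_feat, snr_db):
        cond = self.cond_proj(torch.cat([multimodal_feat, snr_db], dim=-1))
        return self.net(torch.cat([noisy_latent, cond], dim=-1))  # predicted noise

model = ConditionalDenoiser()
x_t = torch.randn(4, 256)          # noisy latent at step t
feats = torch.randn(4, 512)        # multimodal features from the received signal
snr = torch.full((4, 1), 10.0)     # channel SNR in dB
eps_hat = model(x_t, feats, snr)   # noise estimate used for one reverse step
```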
no code implementations • 9 Mar 2024 • Junxiong Lin, Yan Wang, Zeng Tao, Boyang Wang, Qing Zhao, Haoran Wang, Xuan Tong, Xinji Mai, Yuxuan Lin, Wei Song, Jiawen Yu, Shaoqi Yan, Wenqiang Zhang
Harnessing this a priori knowledge in the context of image super-resolution presents a compelling avenue.
1 code implementation • CVPR 2024 • Boyang Wang, Fengyu Yang, Xihang Yu, Chao Zhang, Hanbin Zhao
In addition, we identify two anime-specific challenges of distorted and faint hand-drawn lines and unwanted color artifacts.
1 code implementation • 9 Jan 2024 • Hongcheng Guo, Jian Yang, Jiaheng Liu, Jiaqi Bai, Boyang Wang, Zhoujun Li, Tieqiao Zheng, Bo Zhang, Junran Peng, Qi Tian
Log anomaly detection is a key component in the field of artificial intelligence for IT operations (AIOps).
no code implementations • 25 Nov 2023 • Boyang Wang, Weihao Zheng, Ying Wang, Zhe Zhang, Yuchen Sheng, Minmin Wang
The morphological fingerprint in the brain is capable of identifying the uniqueness of an individual.
1 code implementation • 2 Nov 2023 • Boyang Wang, Bowen Liu, Shiyu Liu, Fengyu Yang
In this work, we present, for the first time, a video compression-based degradation model to synthesize low-resolution image data in the blind SISR task.
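The general idea of compression-based degradation can be approximated by round-tripping high-resolution frames through a lossy video codec. The sketch below uses ffmpeg with H.264; the paths, CRF value, and scale factor are placeholders, and this is not the authors' pipeline.

```python
# Minimal sketch of the general idea (not the paper's degradation model):
# synthesize degraded low-resolution frames by encoding HR frames with a lossy
# video codec and decoding them back. Requires ffmpeg on the system path.
import subprocess

def compress_degrade(hr_frames_glob="hr_%04d.png", out_glob="lr_%04d.png",
                     crf=35, scale=4, tmp_video="tmp_degraded.mp4"):
    # Encode the HR frames with H.264 at a high CRF (strong compression),
    # downscaling by the target super-resolution factor.
    subprocess.run(["ffmpeg", "-y", "-i", hr_frames_glob,
                    "-vf", f"scale=iw/{scale}:ih/{scale}",
                    "-c:v", "libx264", "-crf", str(crf), tmp_video], check=True)
    # Decode back to PNG frames: these now carry realistic compression artifacts.
    subprocess.run(["ffmpeg", "-y", "-i", tmp_video, out_glob], check=True)

if __name__ == "__main__":
    compress_degrade()  # assumes hr_0001.png, hr_0002.png, ... exist on disk
```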
1 code implementation • 26 Oct 2023 • Hongcheng Guo, Boyang Wang, Jiaqi Bai, Jiaheng Liu, Jian Yang, Zhoujun Li
In other words, the Multimodal Manga Complement (M2C) task has not been investigated, which aims to handle the aforementioned issues by providing a shared semantic space for vision and language understanding.
no code implementations • 6 May 2023 • Boyang Wang, DingFan Zhang, Tingyu Zhang, Chayanis Sutcharitchan, Jianlin Hua, Dongfang Hua, Bo Zhang, Shao Li
Network target analysis was performed to explore the potential mechanisms of YQTQP in the treatment of AR, and the mechanisms were classified into different modules according to their biological functions.
no code implementations • 10 Feb 2023 • Boyang Wang, Pan Chen, Peng Zhang, Shao Li
We collected 25 formulae (with traditional effects related to Cold/Hot ZHENG) for CG and the corresponding 89 Cold/Hot herbs (including Warm/Cool herbs) to discover features and construct target networks of Cold/Hot herbs on the basis of network target and enrichment analysis.
no code implementations • 22 Aug 2020 • Yi Zhou, Boyang Wang, Lei Huang, Shanshan Cui, Ling Shao
This dataset has 1,842 images with pixel-level DR-related lesion annotations, and 1,000 images with image-level labels graded by six board-certified ophthalmologists with intra-rater consistency.
no code implementations • 10 Dec 2019 • Yi Zhou, Boyang Wang, Xiaodong He, Shanshan Cui, Ling Shao
In this paper, we propose a diabetic retinopathy generative adversarial network (DR-GAN) to synthesize high-resolution fundus images which can be manipulated with arbitrary grading and lesion information.
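For intuition only, the sketch below shows a toy conditional generator that consumes a noise vector, a DR grade, and a lesion mask, illustrating how grading and lesion information can steer synthesis; it is not the DR-GAN architecture, and all layer sizes are assumptions.

```python
# Minimal sketch, not the DR-GAN implementation: a generator conditioned on a
# DR grade (embedded) and a lesion mask (extra channel). Sizes are placeholders.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, z_dim=128, num_grades=5):
        super().__init__()
        self.grade_embed = nn.Embedding(num_grades, 32)
        self.fc = nn.Linear(z_dim + 32, 64 * 8 * 8)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(64 + 1, 32, 4, 2, 1), nn.ReLU(),  # +1: lesion mask channel
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z, grade, lesion_mask):
        h = self.fc(torch.cat([z, self.grade_embed(grade)], dim=-1)).view(-1, 64, 8, 8)
        mask = nn.functional.interpolate(lesion_mask, size=(8, 8))
        return self.up(torch.cat([h, mask], dim=1))

gen = ConditionalGenerator()
img = gen(torch.randn(2, 128), torch.tensor([0, 3]), torch.rand(2, 1, 32, 32))
print(img.shape)  # torch.Size([2, 3, 32, 32])
```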
4 code implementations • 10 Jul 2019 • Huazhu Fu, Boyang Wang, Jianbing Shen, Shanshan Cui, Yanwu Xu, Jiang Liu, Ling Shao
Retinal image quality assessment (RIQA) is essential for controlling the quality of retinal imaging and guaranteeing the reliability of diagnoses by ophthalmologists or automated analysis systems.
no code implementations • 2 Jun 2018 • Boyang Wang, Zirui Li, Jianwei Gong, Yidi Liu, Huiyan Chen, Chao Lu
Therefore, the goal of this paper is to generate predictions of lateral commands with confidence regions, according to the reference and based on the learned motion primitives.
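As one possible illustration of producing predictions with confidence regions (not the paper's motion-primitive method), the sketch below fits a Gaussian process to hypothetical steering demonstrations and reports a mean with a 2-sigma band.

```python
# Illustrative sketch only: predicting a lateral (steering) command along a
# reference with a confidence region, via a Gaussian process on placeholder data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical demonstrations: arc length along the reference -> steering command.
s_demo = np.linspace(0, 50, 40).reshape(-1, 1)
steer_demo = 0.1 * np.sin(0.2 * s_demo).ravel() + 0.01 * np.random.randn(40)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=5.0), alpha=1e-3)
gp.fit(s_demo, steer_demo)

s_query = np.linspace(0, 50, 200).reshape(-1, 1)
mean, std = gp.predict(s_query, return_std=True)
lower, upper = mean - 2 * std, mean + 2 * std  # ~95% confidence region
```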