no code implementations • 29 Jan 2025 • Jiang Li, Yuan-Ting Li
In the A2RR stage, we employ a graph attention-based residue reconstruction method to explore the internal relationships and features of proteins.
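The graph-attention update over residues can be sketched as follows. This is an illustrative NumPy sketch under assumed shapes (residue features `h`, contact adjacency `adj`, projection `W`, attention vector `a`), not the authors' A2RR implementation:

```python
import numpy as np

def graph_attention(h, adj, W, a):
    """One graph-attention layer over protein residues (sketch).

    h:   (N, F) residue feature matrix
    adj: (N, N) binary residue-contact adjacency (with self-loops)
    W:   (F, F') learnable projection
    a:   (2*F',) attention vector
    """
    z = h @ W                                  # project residue features
    N = z.shape[0]
    # pairwise attention logits e_ij = LeakyReLU(a . [z_i || z_j])
    e = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            e[i, j] = np.concatenate([z[i], z[j]]) @ a
    e = np.where(e > 0, e, 0.2 * e)            # LeakyReLU
    e = np.where(adj > 0, e, -1e9)             # mask non-neighbours
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)  # softmax over neighbours
    return alpha @ z                           # attention-weighted aggregation
```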
no code implementations • 11 Nov 2024 • Steven Goldenberg, Kawser Ahammed, Adam Carpenter, Jiang Li, Riad Suleiman, Chris Tennant
Field emission can cause significant problems in superconducting radio-frequency linear accelerators (linacs).
no code implementations • 2 Oct 2024 • Siyi Liu, Yang Li, Jiang Li, Shan Yang, Yunshi Lan
Specifically, our framework employs a three-stage diversity approach to prompt LLMs, generating multiple synthetic samples that encapsulate specific relations from scratch.
1 code implementation • 31 Jul 2024 • Jiang Li, XiaoPing Wang, Zhigang Zeng
Furthermore, GraphSmile is effortlessly applied to multimodal sentiment analysis in conversation (MSAC), forging a unified multimodal affective model capable of executing MERC and MSAC tasks.
Emotion Recognition in Conversation
Multimodal Emotion Recognition
1 code implementation • 14 Apr 2024 • Jiang Li, Xiangdong Su, Yeyun Gong, Guanglai Gao
Recent studies have highlighted the effectiveness of tensor decomposition methods in the Temporal Knowledge Graph Embedding (TKGE) task.
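Tensor decomposition methods of this kind score a temporal quadruple (head, relation, tail, timestamp) by a multilinear product of the four embeddings. A minimal CP-style sketch (the exact decomposition used in the paper may differ):

```python
import numpy as np

def tkge_score(h, r, t, tau):
    """CP-style score for a temporal quadruple (h, r, t, tau):
    sum_k h_k * r_k * tau_k * t_k  (higher = more plausible)."""
    return float(np.sum(h * r * tau * t))
```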
1 code implementation • 15 Dec 2023 • Yilin Liu, Yunkui Pang, Jiang Li, Yong Chen, Pew-Thian Yap
Their success is widely attributed to implicit regularization due to the spectral bias of suitable network architectures.
no code implementations • 18 Sep 2023 • Shanglin Lei, XiaoPing Wang, Guanting Dong, Jiang Li, Yingjian Liu
Our model achieves state-of-the-art performance on three datasets, demonstrating the superiority of our work.
no code implementations • 12 Aug 2023 • Jiang Li, XiaoPing Wang, Yingjian Liu, Zhigang Zeng
We utilize TE and SE to combine the strengths of previous methods in a straightforward manner, efficiently capturing the temporal and spatial contextual information in the conversation.
1 code implementation • 28 Jul 2023 • Jiang Li, XiaoPing Wang, Yingjian Liu, Zhigang Zeng
RUME is applied to extract conversation-level contextual emotional cues while pulling together the data distributions of different modalities; ACME is utilized to perform multimodal interaction centered on the textual modality; LESM is used to model and capture emotion shifts, thereby guiding the learning of the main task.
Ranked #12 on Emotion Recognition in Conversation on IEMOCAP
Emotion Recognition in Conversation
Multimodal Emotion Recognition
no code implementations • 2 Jul 2023 • Jiang Li, XiaoPing Wang, Zhigang Zeng
How to model the context in a conversation is a central aspect and a major challenge of ERC tasks.
2 code implementations • 26 Jun 2023 • Jiang Li, Xiangdong Su, Fujun Zhang, Guanglai Gao
This paper presents a translation-based knowledge graph embedding method via efficient relation rotation (TransERR), a straightforward yet effective alternative to traditional translation-based knowledge graph embedding models.
Ranked #16 on Link Property Prediction on ogbl-wikikg2
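A translation-with-rotation score of this kind rotates the head and tail embeddings before measuring the translational distance. A simplified sketch using per-dimension complex (2-D) rotations; the paper's actual model and parameterization may differ:

```python
import numpy as np

def transerr_score(h, r, t, theta_h, theta_t):
    """Simplified TransERR-style score: rotate head and tail embeddings
    by unit complex numbers (one 2-D rotation per dimension), then
    measure the translational distance  || h*u_h + r - t*u_t ||.
    Higher (closer to 0) means a more plausible triple."""
    u_h = np.exp(1j * theta_h)   # unit rotations for the head
    u_t = np.exp(1j * theta_t)   # unit rotations for the tail
    return -np.linalg.norm(h * u_h + r - t * u_t)
```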
1 code implementation • 3 Jun 2023 • Fusheng Yu, Jiang Li, XiaoPing Wang, Shaojin Wu, Junjie Zhang, Zhigang Zeng
In this study, we construct a large, complex, and realistic safety clothing and helmet detection (SFCHD) dataset.
Ranked #1 on Object Detection on SFCHD
no code implementations • 25 May 2023 • Zhengyang Lou, Huan Xu, Fangzhou Mu, Yanli Liu, XiaoYu Zhang, Liang Shang, Jiang Li, Bochen Guan, Yin Li, Yu Hen Hu
Using a modern game engine, our approach renders crisp clean images and their precise depth maps, based on which high-quality hazy images can be synthesized for training dehazing models.
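Haze synthesis from a clean image and its depth map commonly follows the atmospheric scattering model, I = J·t + A·(1 − t) with transmission t = exp(−β·d). A minimal sketch (the scattering coefficient `beta` and airlight `A` below are illustrative values, not the paper's settings):

```python
import numpy as np

def synthesize_haze(clean, depth, beta=1.0, A=0.9):
    """Atmospheric scattering model for haze synthesis.

    clean: (H, W, 3) clean image in [0, 1]
    depth: (H, W) per-pixel scene depth
    beta:  scattering coefficient; A: global airlight
    """
    t = np.exp(-beta * depth)[..., None]   # per-pixel transmission
    return clean * t + A * (1.0 - t)
```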
2 code implementations • ICCV 2023 • Yilin Liu, Jiang Li, Yunkui Pang, Dong Nie, Pew-Thian Yap
Existing methods mostly handcraft the architecture or search for it in a large design space, owing to a limited understanding of how the architectural choice corresponds to the image.
1 code implementation • 20 Mar 2023 • Yingjian Liu, Jiang Li, XiaoPing Wang, Zhigang Zeng
Emotion Recognition in Conversation (ERC) has attracted growing attention in recent years as a result of the advancement and implementation of human-computer interface technologies.
Ranked #6 on Emotion Recognition in Conversation on EmoryNLP
no code implementations • 13 Dec 2022 • Guoqing Lv, Jiang Li, XiaoPing Wang, Zhigang Zeng
We separately encode the last utterance and fuse it with the entire dialogue through the multi-head attention based intention fusion module to capture the speaker's intention.
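Multi-head attention with the last utterance as the query and the full dialogue as keys/values can be sketched as follows; shapes, head count, and function names are assumptions for illustration, not the paper's exact fusion module:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def intention_fusion(last_utt, dialogue, n_heads=4):
    """Fuse the last utterance (query) with the entire dialogue
    (keys/values) via multi-head attention (sketch).

    last_utt: (1, d) encoding of the last utterance
    dialogue: (T, d) encodings of all T utterances
    """
    d = last_utt.shape[-1]
    assert d % n_heads == 0
    dh = d // n_heads
    out = []
    for i in range(n_heads):
        q = last_utt[..., i * dh:(i + 1) * dh]   # (1, dh) query slice
        k = dialogue[..., i * dh:(i + 1) * dh]   # (T, dh) key slice
        v = k                                     # values share the slice
        attn = softmax(q @ k.T / np.sqrt(dh))     # (1, T) attention weights
        out.append(attn @ v)                      # (1, dh) fused head
    return np.concatenate(out, axis=-1)           # (1, d) fused intention
```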
1 code implementation • 6 Jul 2022 • Jiang Li, XiaoPing Wang, Guoqing Lv, Zhigang Zeng
In multimodal ERC, GNNs are capable of extracting both long-distance contextual information and inter-modal interactive information.
Ranked #24 on Emotion Recognition in Conversation on IEMOCAP
Emotion Classification
Emotion Recognition in Conversation
no code implementations • 30 Apr 2022 • Md Reshad Ul Hoque, Jiang Li, Jian Wu
To the best of our knowledge, this is the first dataset of its kind.
1 code implementation • 3 Jun 2021 • Hao liu, Qian Gao, Jiang Li, Xiaochao Liao, Hao Xiong, Guangxing Chen, Wenlin Wang, Guobao Yang, Zhiwei Zha, daxiang dong, Dejing Dou, Haoyi Xiong
In this work, we present JIZHI, a Model-as-a-Service system that handles hundreds of millions of online inference requests per second to huge deep models with trillions of sparse parameters, serving over twenty real-time recommendation services at Baidu, Inc.
1 code implementation • 21 Apr 2021 • Ren Yang, Radu Timofte, Jing Liu, Yi Xu, Xinjian Zhang, Minyi Zhao, Shuigeng Zhou, Kelvin C. K. Chan, Shangchen Zhou, Xiangyu Xu, Chen Change Loy, Xin Li, Fanglong Liu, He Zheng, Lielin Jiang, Qi Zhang, Dongliang He, Fu Li, Qingqing Dang, Yibin Huang, Matteo Maggioni, Zhongqian Fu, Shuai Xiao, Cheng Li, Thomas Tanay, Fenglong Song, Wentao Chao, Qiang Guo, Yan Liu, Jiang Li, Xiaochao Qu, Dewang Hou, Jiayu Yang, Lyn Jiang, Di You, Zhenyu Zhang, Chong Mou, Iaroslav Koshelev, Pavel Ostyakov, Andrey Somov, Jia Hao, Xueyi Zou, Shijie Zhao, Xiaopeng Sun, Yiting Liao, Yuanzhi Zhang, Qing Wang, Gen Zhan, Mengxi Guo, Junlin Li, Ming Lu, Zhan Ma, Pablo Navarrete Michelini, Hai Wang, Yiyun Chen, Jingyu Guo, Liliang Zhang, Wenming Yang, Sijung Kim, Syehoon Oh, Yucong Wang, Minjie Cai, Wei Hao, Kangdi Shi, Liangyan Li, Jun Chen, Wei Gao, Wang Liu, XiaoYu Zhang, Linjie Zhou, Sixin Lin, Ru Wang
This paper reviews the first NTIRE challenge on quality enhancement of compressed video, with a focus on the proposed methods and results.
no code implementations • 8 Jul 2020 • Liu Fangxin, Zhao Wenbo, Wang Yanzhi, Dai Changzhi, Jiang Li
Theoretical analysis (see Appendix A) and accuracy evaluations on DNN models across various tasks demonstrate the effectiveness and generalizability of AUSN.
no code implementations • 21 Oct 2016 • Feng Li, Guangfan Zhang, Wei Wang, Roger Xu, Tom Schnell, Jonathan Wen, Frederic McKenzie, Jiang Li
We compared performances of the new data representations with the original EEG features for engagement assessment.