1 code implementation • ACL 2022 • Zichu Fei, Qi Zhang, Tao Gui, Di Liang, Sirui Wang, Wei Wu, Xuanjing Huang
CQG employs a simple method to generate multi-hop questions that contain key entities in multi-hop reasoning chains, which ensures the complexity and quality of the questions.
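The constraint described above can be pictured as a filter over generated candidates. The sketch below is a hypothetical illustration (function names and sample data are not from the paper): it keeps only questions that mention every key entity on the reasoning chain.

```python
# Illustrative sketch: keep only generated multi-hop questions that
# mention every key entity on the reasoning chain, as CQG's description
# implies. Function and data names are hypothetical.

def contains_all_entities(question: str, chain_entities: list[str]) -> bool:
    """True if every key entity of the reasoning chain appears in the question."""
    q = question.lower()
    return all(entity.lower() in q for entity in chain_entities)

def filter_candidates(candidates: list[str], chain_entities: list[str]) -> list[str]:
    """Discard candidates that drop any hop of the reasoning chain."""
    return [q for q in candidates if contains_all_entities(q, chain_entities)]

if __name__ == "__main__":
    chain = ["Christopher Nolan", "Inception"]
    candidates = [
        "Who directed Inception?",  # misses the 'Christopher Nolan' hop
        "Which film directed by Christopher Nolan shares actors with Inception?",
    ]
    print(filter_candidates(candidates, chain))  # keeps only the multi-hop question
```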
no code implementations • 10 Dec 2024 • Bo Li, Di Liang, Zixin Zhang
To this end, in this work, we propose a novel semantic sentence matching model, the Combined Attention Network based on the Transformer model (Comateformer).
1 code implementation • 9 Oct 2024 • Di Liang, Xiaofei Li
This work proposes a frame-wise online/streaming end-to-end neural diarization (EEND) method, which detects speaker activities in a frame-in-frame-out fashion.
no code implementations • 17 Aug 2024 • Xianjie Wu, Jian Yang, Linzheng Chai, Ge Zhang, Jiaheng Liu, Xinrun Du, Di Liang, Daixin Shu, Xianfu Cheng, Tianzhen Sun, Guanglin Niu, Tongliang Li, Zhoujun Li
Recent advancements in Large Language Models (LLMs) have markedly enhanced the interpretation and processing of tabular data, introducing previously unimaginable capabilities.
no code implementations • 21 May 2024 • Yonghao Liu, Mengyu Li, Di Liang, Ximing Li, Fausto Giunchiglia, Lan Huang, Xiaoyue Feng, Renchu Guan
By incorporating relevant visual information and leveraging linguistic knowledge, our approach bridges the gap between language and vision, leading to improved understanding and inference capabilities in NLI tasks.
no code implementations • 20 Feb 2024 • Chao Xue, Di Liang, Pengfei Wang, Jing Zhang
In the real world, many facts contained in KGs are time-constrained; thus, temporal KGQA has received increasing attention.
Ranked #2 on Question Answering on CronQuestions
no code implementations • 30 Jan 2024 • Chen Bai, Zeman Shao, Guoxiang Zhang, Di Liang, Jie Yang, Zhuorui Zhang, Yujian Guo, Chengzhang Zhong, Yiqiao Qiu, Zhendong Wang, Yichen Guan, Xiaoyin Zheng, Tao Wang, Cheng Lu
Our proposed general framework encompasses three key processes: 1) integrating a realistic object into a given scene video with proper placement to ensure geometric realism; 2) estimating the sky and environmental lighting distribution and simulating realistic shadows to enhance the light realism; 3) employing a style transfer network that refines the final video output to maximize photorealism.
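The three processes compose naturally as a staged pipeline. The skeleton below is a hypothetical illustration of that structure; every stage function is a placeholder, not the paper's implementation.

```python
# Hypothetical skeleton of the three-stage pipeline described above:
# placement -> lighting/shadow simulation -> style transfer.
from dataclasses import dataclass

@dataclass
class Scene:
    frames: list          # input video frames
    object_mesh: object   # 3D asset to insert

def place_object(scene: Scene) -> Scene:
    """Stage 1: choose a geometrically plausible placement per frame."""
    ...  # e.g., estimate ground plane and camera pose, then composite
    return scene

def relight_and_shadow(scene: Scene) -> Scene:
    """Stage 2: estimate sky/environment lighting and render shadows."""
    ...  # e.g., fit an environment map, cast shadows onto the ground plane
    return scene

def style_transfer(scene: Scene) -> Scene:
    """Stage 3: refine the composite with a style-transfer network."""
    ...  # harmonize color, noise, and blur with the background footage
    return scene

def insert_object(scene: Scene) -> Scene:
    for stage in (place_object, relight_and_shadow, style_transfer):
        scene = stage(scene)
    return scene
```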
1 code implementation • 25 Sep 2023 • Di Liang, Nian Shao, Xiaofei Li
This work proposes a frame-wise online/streaming end-to-end neural diarization (FS-EEND) method in a frame-in-frame-out fashion.
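As a rough illustration of the frame-in-frame-out interface (a minimal sketch of the streaming contract, not the FS-EEND architecture), a streaming diarizer can consume one feature frame at a time and emit per-speaker activity probabilities while carrying a recurrent state:

```python
# Minimal PyTorch sketch of frame-in-frame-out diarization: one feature
# frame goes in, per-speaker activity probabilities come out, with a
# recurrent state carrying streaming context. Dimensions are assumptions.
import torch
import torch.nn as nn

class StreamingDiarizer(nn.Module):
    def __init__(self, feat_dim=80, hidden=256, max_speakers=4):
        super().__init__()
        self.rnn = nn.GRUCell(feat_dim, hidden)       # carries online context
        self.head = nn.Linear(hidden, max_speakers)   # per-speaker activity logits

    def forward(self, frame, state):
        state = self.rnn(frame, state)
        return torch.sigmoid(self.head(state)), state  # activities in [0, 1]

model = StreamingDiarizer()
state = torch.zeros(1, 256)
for frame in torch.randn(100, 1, 80):      # 100 incoming feature frames
    activity, state = model(frame, state)  # results emitted frame by frame
```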
no code implementations • 15 Sep 2023 • Xianjie Wu, Jian Yang, Tongliang Li, Di Liang, Shiwei Zhang, Yiyang Du, Zhoujun Li
To fully Unleash the potential of evidence, we propose a framework to effectively incorporate Evidence in knowledge-Intensive Dialogue Generation (u-EIDG).
no code implementations • 10 Mar 2023 • Bassem Tossoun, Di Liang, Stanley Cheung, Zhuoran Fang, Xia Sheng, John Paul Strachan, Raymond G. Beausoleil
Recently, interest in programmable photonic integrated circuits has grown as a potential hardware framework for deep neural networks, quantum computing, and field-programmable gate arrays (FPGAs).
no code implementations • 24 Feb 2023 • Yonghao Liu, Di Liang, Fang Fang, Sirui Wang, Wei Wu, Rui Jiang
For each given question, TMA first extracts the relevant concepts from the KG, and then feeds them into a multiway adaptive module to produce a temporal-specific representation of the question.
Ranked #13 on Question Answering on TimeQuestions
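The data flow described for TMA — retrieve relevant KG concepts, then fuse them with the question encoding — might be sketched as follows; the dimensions, gating scheme, and module names are assumptions, not the paper's code.

```python
# Hedged sketch: attend from the question to its retrieved KG concepts,
# then gate question and concept context into one temporal-specific vector.
import torch
import torch.nn as nn

class MultiwayAdaptiveFusion(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, question_vec, concept_vecs):
        # Cross-attention from the question to its retrieved concepts.
        ctx, _ = self.attn(question_vec.unsqueeze(1), concept_vecs, concept_vecs)
        ctx = ctx.squeeze(1)
        # Learned gate blends the question with its concept context.
        g = torch.sigmoid(self.gate(torch.cat([question_vec, ctx], dim=-1)))
        return g * question_vec + (1 - g) * ctx

fusion = MultiwayAdaptiveFusion()
q = torch.randn(2, 128)             # encoded questions
concepts = torch.randn(2, 5, 128)   # 5 retrieved KG concepts each
print(fusion(q, concepts).shape)    # torch.Size([2, 128])
```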
no code implementations • 24 Feb 2023 • Chao Xue, Di Liang, Sirui Wang, Wei Wu, Jing Zhang
To alleviate this problem, we propose a novel Dual Path Modeling Framework to enhance the model's ability to perceive subtle differences in sentence pairs by separately modeling affinity and difference semantics.
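A minimal sketch of the two-path idea, assuming an element-wise product for the affinity path and an absolute difference for the difference path (illustrative choices, not the paper's implementation):

```python
# Illustrative dual-path block: one path models affinity (element-wise
# product), the other models difference (element-wise subtraction), and
# the two are fused into a single matching representation.
import torch
import torch.nn as nn

class DualPathBlock(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.affinity = nn.Linear(dim, dim)    # path for shared semantics
        self.difference = nn.Linear(dim, dim)  # path for subtle differences
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, a, b):
        aff = torch.relu(self.affinity(a * b))
        dif = torch.relu(self.difference(torch.abs(a - b)))
        return self.fuse(torch.cat([aff, dif], dim=-1))

block = DualPathBlock()
s1, s2 = torch.randn(4, 128), torch.randn(4, 128)  # encoded sentence pair
print(block(s1, s2).shape)  # torch.Size([4, 128])
```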
2 code implementations • ACL 2022 • Rui Zheng, Rong Bao, Yuhao Zhou, Di Liang, Sirui Wang, Wei Wu, Tao Gui, Qi Zhang, Xuanjing Huang
Recent works on the Lottery Ticket Hypothesis have shown that pre-trained language models (PLMs) contain smaller matching subnetworks (winning tickets) which are capable of reaching accuracy comparable to the original models.
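As a reminder of the generic step underlying such work, a candidate subnetwork can be exposed by global magnitude pruning: keep the largest-magnitude weights and zero out the rest. The sketch below illustrates that generic procedure only, not this paper's method.

```python
# Minimal sketch of global magnitude pruning: build 0/1 masks that keep
# the top (1 - sparsity) fraction of weights, then zero out the rest.
import torch
import torch.nn as nn

def magnitude_masks(model: nn.Module, sparsity: float = 0.9) -> dict:
    """Return a {name: 0/1 mask} keeping the largest-magnitude weights."""
    weights = torch.cat([p.detach().abs().flatten()
                         for n, p in model.named_parameters() if "weight" in n])
    threshold = torch.quantile(weights, sparsity)
    return {n: (p.detach().abs() > threshold).float()
            for n, p in model.named_parameters() if "weight" in n}

def apply_masks(model: nn.Module, masks: dict) -> None:
    """Zero out pruned weights in place, leaving a sparse subnetwork."""
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in masks:
                p.mul_(masks[n])

model = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 2))
masks = magnitude_masks(model, sparsity=0.9)
apply_masks(model, masks)
```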
no code implementations • 16 Oct 2022 • Jian Song, Di Liang, Rumei Li, Yuntao Li, Sirui Wang, Minlong Peng, Wei Wu, Yongxin Yu
Transformer-based pre-trained models like BERT have achieved great progress on Semantic Sentence Matching.
no code implementations • COLING 2022 • Sirui Wang, Di Liang, Jian Song, Yuntao Li, Wei Wu
To alleviate this problem, we propose a novel Dual Attention Enhanced BERT (DABERT) to enhance the ability of BERT to capture fine-grained differences in sentence pairs.
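One way to picture a dual attention channel (a loose sketch under assumed shapes and score functions, not DABERT's actual module): alongside standard affinity attention, a second channel attends to where the sentence pair disagrees, keeping fine-grained mismatches visible.

```python
# Loose sketch of dual attention over a sentence pair: an affinity
# channel attends to matching tokens, a difference channel attends to
# mismatching ones (via negated scores), and the two are projected back.
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.scale = dim ** -0.5
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, x, y):
        scores = x @ y.transpose(-2, -1) * self.scale
        aff = torch.softmax(scores, dim=-1) @ y    # attend to agreements
        diff = torch.softmax(-scores, dim=-1) @ y  # attend to disagreements
        return self.proj(torch.cat([aff, diff], dim=-1))

attn = DualAttention()
x, y = torch.randn(2, 10, 64), torch.randn(2, 12, 64)  # token embeddings
print(attn(x, y).shape)  # torch.Size([2, 10, 64])
```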
1 code implementation • 7 Jun 2022 • Ruotian Ma, Yiding Tan, Xin Zhou, Xuanting Chen, Di Liang, Sirui Wang, Wei Wu, Tao Gui, Qi Zhang
Input distribution shift is one of the vital problems in unsupervised domain adaptation (UDA).
no code implementations • IJCNLP 2019 • Di Liang, Fubao Zhang, Qi Zhang, Xuanjing Huang
However, in the reasoning process, the two sentences play clearly different roles, and sentence pairs for NLI form asymmetrical corpora.
no code implementations • EMNLP 2018 • Tao Gui, Qi Zhang, Jingjing Gong, Minlong Peng, Di Liang, Keyu Ding, Xuanjing Huang
However, from a linguistic perspective, Twitter users not only tend to mimic the formal expressions of traditional media, like news, but they also appear to be developing linguistically informal styles.
Ranked #2 on Part-Of-Speech Tagging on Ritter