no code implementations • ACL 2022 • Shuang Liu, Dong Wang, Xiaoguang Li, Minghui Huang, Meizhen Ding
Open-domain question answering is a challenging task with a wide variety of practical applications.
no code implementations • 9 Nov 2024 • Yi Zeng, Mingguang Han, Xiaoguang Li, Tiejun Li
Channel estimation and extrapolation are fundamental issues in MIMO communication systems.
no code implementations • 1 Sep 2024 • Mingguang Han, Yi Zeng, Xiaoguang Li, Tiejun Li
To reduce computational complexity and improve frequency estimation accuracy, a two-stage strategy was further introduced to dynamically adjust the number of optimized degrees of freedom.
no code implementations • 16 Aug 2024 • Xiongtao Sun, Gan Liu, Zhipeng He, Hui Li, Xiaoguang Li
Prompts serve as a crucial link in interacting with large language models (LLMs), widely impacting the accuracy and interpretability of model outputs.
no code implementations • 31 Jul 2024 • Ziyu Zhao, Xiaoguang Li, Pingping Cai, Canyu Zhang, Song Wang
To address these limitations, we propose a novel approach that leverages the newly proposed Adaptive Implicit Representation Mapping (AIRM) for ultra-high-resolution Image Segmentation.
no code implementations • 23 Jul 2024 • Liang Zhao, Qing Guo, Xiaoguang Li, Song Wang
In this work, we identify the visual-text inpainting task to achieve high-quality scene text image restoration and text completion: Given a scene text image with unknown missing regions and the corresponding text with unknown missing characters, we aim to complete the missing information in both images and text by leveraging their complementary information.
no code implementations • 14 Jun 2024 • Mohammad Dehghan, Mohammad Ali Alomrani, Sunyam Bagga, David Alfonso-Hermelo, Khalil Bibi, Abbas Ghaddar, Yingxue Zhang, Xiaoguang Li, Jianye Hao, Qun Liu, Jimmy Lin, Boxing Chen, Prasanna Parthasarathi, Mahdi Biparva, Mehdi Rezagholizadeh
To mitigate these issues, we propose our enhanced web and efficient knowledge graph (KG) retrieval solution (EWEK-QA) to enrich the content of the extracted knowledge fed to the system.
no code implementations • 29 May 2024 • Hao Zhang, Yuyang Zhang, Xiaoguang Li, Wenxuan Shi, Haonan Xu, Huanshuo Liu, Yasheng Wang, Lifeng Shang, Qun Liu, Yong liu, Ruiming Tang
Integrating external knowledge into large language models (LLMs) presents a promising solution to overcome the limitations imposed by their antiquated and static parametric memory.
no code implementations • 28 Feb 2024 • Jiebin Zhang, Eugene J. Yu, Qinyu Chen, Chenhao Xiong, Dawei Zhu, Han Qian, Mingbo Song, Xiaoguang Li, Qun Liu, Sujian Li
In today's fast-paced world, the growing demand to quickly generate comprehensive and accurate Wikipedia documents for emerging events is both crucial and challenging.
no code implementations • 25 Feb 2024 • Xuming Hu, Xiaochuan Li, Junzhe Chen, Yinghui Li, Yangning Li, Xiaoguang Li, Yasheng Wang, Qun Liu, Lijie Wen, Philip S. Yu, Zhijiang Guo
To this end, we propose evaluating the robustness of generative search engines in the realistic and high-risk setting, where adversaries have only black-box system access and seek to deceive the model into returning incorrect responses.
no code implementations • 22 Feb 2024 • Xinshuo Hu, Baotian Hu, Dongfang Li, Xiaoguang Li, Lifeng Shang
The present study introduces the knowledge-augmented generator, which is specifically designed to produce information that remains grounded in contextual knowledge, regardless of alterations in the context.
1 code implementation • 26 Jan 2024 • Haochen Tan, Zhijiang Guo, Zhan Shi, Lu Xu, Zhili Liu, Yunlong Feng, Xiaoguang Li, Yasheng Wang, Lifeng Shang, Qun Liu, Linqi Song
Large Language Models (LLMs) have succeeded remarkably in understanding long-form content.
1 code implementation • 18 Dec 2023 • Nandan Thakur, Luiz Bonifacio, Xinyu Zhang, Odunayo Ogundepo, Ehsan Kamalloo, David Alfonso-Hermelo, Xiaoguang Li, Qun Liu, Boxing Chen, Mehdi Rezagholizadeh, Jimmy Lin
NoMIRACL includes both a non-relevant and a relevant subset.
no code implementations • 13 Oct 2023 • Canyu Zhang, Xiaoguang Li, Qing Guo, Song Wang
To this end, we propose a framework with two modules: (1) building a semantic implicit representation (SIR) for a corrupted image whose large regions are missing.
1 code implementation • ICCV 2023 • Rabab Abdelfattah, Qing Guo, Xiaoguang Li, XiaoFeng Wang, Song Wang
Using the aggregated similarity scores as the initial pseudo labels at the training stage, we propose an optimization framework to train the parameters of the classification network and refine pseudo labels for unobserved labels.
no code implementations • 26 Jul 2023 • Canyu Zhang, Qing Guo, Xiaoguang Li, Renjie Wan, Hongkai Yu, Ivor Tsang, Song Wang
Given the coordinates of a pixel we want to reconstruct, we first collect its neighboring pixels in the input image and extract their detail-enhanced semantic embeddings, unmask-attentional semantic embeddings, importance values, and spatial distances to the desired pixel.
no code implementations • 21 May 2023 • Xiaoguang Li
To make HSD focus only on the features of hard samples of dendrite cores, we destroy the structure of the easy dendrite samples detected by ESD, forcing HSD to learn the features of hard samples.
no code implementations • 18 May 2023 • Xiaoguang Li, Qing Guo, Pingping Cai, Wei Feng, Ivor Tsang, Song Wang
State-of-the-art shadow removal methods train deep neural networks on collected shadow and shadow-free image pairs, which requires completing two distinct tasks via shared weights, i.e., data restoration for shadow regions and identity mapping for non-shadow regions.
1 code implementation • AAAI 2023 • Pingping Cai, Deja Scott, Xiaoguang Li, Song Wang
Point cloud shape completion, which aims to reconstruct the missing regions of the incomplete point clouds with plausible shapes, is an ill-posed and challenging task that benefits many downstream 3D applications.
Ranked #1 on Point Cloud Completion on ShapeNet
1 code implementation • ICCV 2023 • Xiaoguang Li, Qing Guo, Rabab Abdelfattah, Di Lin, Wei Feng, Ivor Tsang, Song Wang
In this work, we find that pretraining shadow removal networks on the image inpainting dataset can reduce the shadow remnants significantly: a naive encoder-decoder network gets competitive restoration quality w.r.t.
no code implementations • 15 Dec 2022 • Jiawei Zhou, Xiaoguang Li, Lifeng Shang, Xin Jiang, Qun Liu, Lei Chen
In light of this, we present Vocabulary Disentangled Retrieval (VDR), a retrieval-based framework that harnesses natural language as proxies of the underlying data variation to drive disentangled representation learning.
no code implementations • 20 Oct 2022 • Shaobo Li, Xiaoguang Li, Lifeng Shang, Chengjie Sun, Bingquan Liu, Zhenzhou Ji, Xin Jiang, Qun Liu
Further experiments on question-answering datasets show that trying to learn a deterministic relationship with the proposed methods can also help other knowledge-intensive tasks.
1 code implementation • 18 Oct 2022 • Xinyu Zhang, Nandan Thakur, Odunayo Ogundepo, Ehsan Kamalloo, David Alfonso-Hermelo, Xiaoguang Li, Qun Liu, Mehdi Rezagholizadeh, Jimmy Lin
MIRACL (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual dataset we have built for the WSDM 2023 Cup challenge that focuses on ad hoc retrieval across 18 different languages, which collectively encompass over three billion native speakers around the world.
no code implementations • Findings (ACL) 2022 • Shaobo Li, Xiaoguang Li, Lifeng Shang, Zhenhua Dong, Chengjie Sun, Bingquan Liu, Zhenzhou Ji, Xin Jiang, Qun Liu
We examine words that have three typical associations with the missing words: knowledge-dependent, positionally close, and highly co-occurring.
1 code implementation • ACL 2022 • Jiawei Zhou, Xiaoguang Li, Lifeng Shang, Lan Luo, Ke Zhan, Enrui Hu, Xinyu Zhang, Hao Jiang, Zhao Cao, Fan Yu, Xin Jiang, Qun Liu, Lei Chen
To alleviate the data scarcity problem in training question answering systems, recent works propose additional intermediate pre-training for dense passage retrieval (DPR).
1 code implementation • CVPR 2022 • Xiaoguang Li, Qing Guo, Di Lin, Ping Li, Wei Feng, Song Wang
As a result, the final method takes advantage of effective semantic- and image-level filling for high-fidelity inpainting.
no code implementations • Findings (ACL) 2022 • Dan Su, Xiaoguang Li, Jindi Zhang, Lifeng Shang, Xin Jiang, Qun Liu, Pascale Fung
Long-form question answering (LFQA) aims to generate a paragraph-length answer for a given question.
Ranked #1 on Question Answering on KILT: ELI5
no code implementations • 31 Aug 2021 • Pengfei Zhu, Xiaoguang Li, Jian Li, Hai Zhao
Open-domain Question Answering (ODQA) has achieved significant results under the supervised learning setting.
Machine Reading Comprehension • Open-Domain Question Answering
1 code implementation • 9 Jul 2021 • Qing Guo, Xiaoguang Li, Felix Juefei-Xu, Hongkai Yu, Yang Liu, Song Wang
In this paper, for the first time, we formulate image inpainting as a mix of two problems, predictive filtering and deep generation.
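The predictive-filtering half of this formulation can be illustrated with a toy sketch: each missing pixel is reconstructed as a weighted sum of its valid neighbours. Here the per-pixel kernel is fixed to uniform averaging for simplicity — in the paper's method a network predicts these kernels per pixel, so this is an assumption-laden simplification, not the authors' implementation.

```python
import numpy as np

def filter_fill(image, mask, ksize=3):
    """Fill masked pixels (mask == False) with a weighted sum of valid
    neighbours inside a ksize x ksize window. Uniform weights stand in
    for the per-pixel kernels a predictive-filtering network would emit."""
    h, w = image.shape
    r = ksize // 2
    out = image.copy()
    for y in range(h):
        for x in range(w):
            if mask[y, x]:
                continue  # pixel is known, keep it as-is
            vals, wts = [], []
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx]:
                        vals.append(image[ny, nx])
                        wts.append(1.0)  # uniform kernel weight
            if wts:
                out[y, x] = np.dot(vals, wts) / sum(wts)
    return out

# A flat image with one missing pixel is recovered exactly.
img = np.full((5, 5), 7.0)
msk = np.ones((5, 5), dtype=bool)
msk[2, 2] = False
img[2, 2] = 0.0
print(filter_fill(img, msk)[2, 2])  # → 7.0
```

Filtering alone can only propagate local information, which is why the paper pairs it with a deep generative component for large missing regions.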
no code implementations • 5 Mar 2021 • Chang Liu, Xiaoguang Li, Guohao Cai, Zhenhua Dong, Hong Zhu, Lifeng Shang
It is still an open question to leverage various types of information under the BERT framework.
no code implementations • 31 Dec 2020 • Shaobo Li, Xiaoguang Li, Lifeng Shang, Xin Jiang, Qun Liu, Chengjie Sun, Zhenzhou Ji, Bingquan Liu
In this paper, we propose a new retrieval target, hop, to collect the hidden reasoning evidence from Wikipedia for complex question answering.
Ranked #6 on Question Answering on HotpotQA
no code implementations • 1 Nov 2020 • Haonan Yan, Xiaoguang Li, Hui Li, Jiamin Li, Wenhai Sun, Fenghua Li
In MDP, we first propose a novel real-time model extraction status assessment scheme called Monitor to evaluate the situation of the model.
no code implementations • 2 Oct 2020 • Yang Bai, Xiaoguang Li, Gang Wang, Chaoliang Zhang, Lifeng Shang, Jun Xu, Zhaowei Wang, Fangshan Wang, Qun Liu
Term-based sparse representations dominate first-stage text retrieval in industrial applications, due to their advantages in efficiency, interpretability, and exact term matching.
no code implementations • 19 Aug 2020 • Xiaoguang Li, Feifan Yang, Kin Man Lam, Li Zhuo, Jiafeng Li
Our method can adaptively select the weights of the extracted features according to the spatially varying blur features, and dynamically restore the images.
no code implementations • 28 Jun 2020 • Xiaoguang Li, Peng Fu, Hongxia Yin, ZhenChang Wang, Li Zhuo, Hui Zhang
Computed Tomography (CT) of the temporal bone has become an important method for diagnosing ear diseases.
no code implementations • 25 May 2020 • Laichuan Shen, Xiaoguang Li, Jing Xia, Lei Qiu, Xichao Zhang, Oleg A. Tretiakov, Motohiko Ezawa, Yan Zhou
Numerical simulations demonstrate that two bimerons with opposite signs of topological numbers can be created simultaneously in a ferromagnetic thin film via current-induced spin torques.
Mesoscale and Nanoscale Physics
no code implementations • 6 Feb 2020 • Xiaoguang Li, Hui Li, Haonan Yan, Zelei Cheng, Wenhai Sun, Hui Zhu
Public intelligent services enabled by machine learning algorithms are vulnerable to model extraction attacks that can steal confidential information of the learning models through public queries.
3 code implementations • 26 Jan 2020 • Pengfei Zhu, Hai Zhao, Xiaoguang Li
Multi-choice Machine Reading Comprehension (MRC) requires a model to decide the correct answer from a set of answer options when given a passage and a question.
Ranked #3 on Reading Comprehension on RACE
no code implementations • 1 Jan 2020 • Pengfei Zhu, Hai Zhao, Xiaoguang Li
Multi-choice Machine Reading Comprehension (MRC) requires a model to decide the correct answer from a set of answer options when given a passage and a question.
no code implementations • 23 Sep 2019 • Wei Cai, Xiaoguang Li, Lizuo Liu
In this paper, we propose a phase shift deep neural network (PhaseDNN), which provides a uniform wideband convergence in approximating high frequency functions and solutions of wave equations.
10 code implementations • 31 Aug 2019 • Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen, Qun Liu
Pre-trained language models have achieved great success in various natural language understanding (NLU) tasks due to their capacity to capture deep contextualized information in text by pre-training on large-scale corpora.
no code implementations • 3 May 2019 • Wei Cai, Xiaoguang Li, Lizuo Liu
Due to the phase shift, each DNN achieves a convergence speed comparable to that in the low-frequency range.
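The phase-shift idea underlying PhaseDNN can be sketched numerically: a signal concentrated near a high frequency w0 is demodulated to baseband by multiplying with exp(-i·w0·x), leaving a low-frequency function that is easy for a network to fit (and can be shifted back afterwards). The snippet below only verifies the frequency arithmetic with NumPy; no actual network is involved, and the envelope and w0 are illustrative choices.

```python
import numpy as np

w0 = 50.0                                       # illustrative carrier frequency (rad)
x = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
envelope = np.exp(-((x - np.pi) ** 2))          # slowly varying amplitude
signal = envelope * np.exp(1j * w0 * x)         # high-frequency signal

shifted = signal * np.exp(-1j * w0 * x)         # demodulate to baseband
spectrum = np.fft.fft(shifted)
freqs = np.fft.fftfreq(x.size, d=x[1] - x[0]) * 2 * np.pi
peak = abs(freqs[np.argmax(np.abs(spectrum))])

print(peak)  # dominant frequency after the shift sits near 0, not near w0
```

After a DNN fits the baseband function, multiplying its output by exp(+i·w0·x) restores the original frequency content; summing several such shifted fits covers a wide band.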