7 code implementations • 8 Apr 2019 • Tao Kong, Fuchun Sun, Huaping Liu, Yuning Jiang, Lei LI, Jianbo Shi
In FoveaBox, an instance is assigned to adjacent feature levels to make the model more accurate. We demonstrate its effectiveness on standard benchmarks and report extensive experimental analysis.
Ranked #82 on Object Detection on COCO test-dev (APM metric)
24 code implementations • ECCV 2020 • Xinlong Wang, Tao Kong, Chunhua Shen, Yuning Jiang, Lei LI
We present a new, embarrassingly simple approach to instance segmentation in images.
Ranked #67 on Instance Segmentation on COCO test-dev
18 code implementations • NeurIPS 2020 • Xinlong Wang, Rufeng Zhang, Tao Kong, Lei LI, Chunhua Shen
Importantly, we take one step further by dynamically learning the mask head of the object segmenter such that the mask head is conditioned on the location.
Ranked #10 on Real-time Instance Segmentation on MSCOCO
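The location-conditioned mask head described above can be sketched as dynamic convolution: the network predicts a separate kernel for each candidate location and applies it to a shared mask feature map. A minimal illustrative sketch, not the paper's implementation — all shapes and names here are hypothetical:

```python
import numpy as np

def dynamic_mask_head(feature_map, kernel_params):
    """Apply a 1x1 convolution whose weights are predicted per location.

    feature_map:   (C, H, W) mask features shared across all locations
    kernel_params: (C,) kernel weights predicted at one grid location
    Returns an (H, W) mask logit map for the instance at that location.
    """
    return np.einsum("c,chw->hw", kernel_params, feature_map)

rng = np.random.default_rng(0)
features = rng.standard_normal((8, 4, 4))
# One predicted kernel per candidate location; here, two locations.
kernels = rng.standard_normal((2, 8))
masks = np.stack([dynamic_mask_head(features, k) for k in kernels])
print(masks.shape)  # (2, 4, 4): one mask per location-conditioned kernel
```

In the real model the kernels come from a learned branch conditioned on spatial position, so different locations produce different instance masks from the same features.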
6 code implementations • CVPR 2021 • Peize Sun, Rufeng Zhang, Yi Jiang, Tao Kong, Chenfeng Xu, Wei Zhan, Masayoshi Tomizuka, Lei LI, Zehuan Yuan, Changhu Wang, Ping Luo
In our method, however, a fixed sparse set of learned object proposals, of total length $N$, is provided to the object recognition head to perform classification and localization.
Ranked #5 on 2D Object Detection on CeyMo
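The fixed sparse proposal set can be illustrated by treating the $N$ proposal boxes as learnable parameters of the model, rather than outputs of a dense proposal generator. A hedged sketch with assumed shapes and names (the real detector also learns proposal features and refines boxes iteratively):

```python
import numpy as np

N = 100  # fixed number of learned proposals (the $N$ above)

# Proposals are parameters initialized once and updated by training,
# here stored as normalized (cx, cy, w, h) boxes in [0, 1].
rng = np.random.default_rng(0)
learned_proposals = rng.uniform(size=(N, 4))

def to_absolute(boxes, img_w, img_h):
    """Scale normalized (cx, cy, w, h) boxes to pixel coordinates."""
    scale = np.array([img_w, img_h, img_w, img_h], dtype=float)
    return boxes * scale

abs_boxes = to_absolute(learned_proposals, 800, 600)
print(abs_boxes.shape)  # (100, 4)
```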
1 code implementation • NAACL 2021 • Xiaohui Wang, Ying Xiong, Yang Wei, Mingxuan Wang, Lei LI
Transformer, BERT and their variants have achieved great success in natural language processing.
1 code implementation • 12 Oct 2021 • Xiaohui Wang, Yang Wei, Ying Xiong, Guyue Huang, Xian Qian, Yufei Ding, Mingxuan Wang, Lei LI
In this paper, we present LightSeq2, a system to accelerate training for a general family of Transformer models on GPUs.
6 code implementations • CVPR 2021 • Xinlong Wang, Rufeng Zhang, Chunhua Shen, Tao Kong, Lei LI
Compared to the baseline method MoCo-v2, our method introduces negligible computation overhead (only <1% slower), but demonstrates consistently superior performance when transferring to downstream dense prediction tasks including object detection, semantic segmentation and instance segmentation; and outperforms the state-of-the-art methods by a large margin.
1 code implementation • 10 Jan 2022 • Ningyu Zhang, Xin Xu, Liankuan Tao, Haiyang Yu, Hongbin Ye, Shuofei Qiao, Xin Xie, Xiang Chen, Zhoubo Li, Lei LI, Xiaozhuan Liang, Yunzhi Yao, Shumin Deng, Peng Wang, Wen Zhang, Zhenru Zhang, Chuanqi Tan, Qiang Chen, Feiyu Xiong, Fei Huang, Guozhou Zheng, Huajun Chen
We present an open-source and extensible knowledge extraction toolkit DeepKE, supporting complicated low-resource, document-level and multimodal scenarios in the knowledge base population.
2 code implementations • 14 Nov 2022 • Lei LI, Xiang Chen, Shuofei Qiao, Feiyu Xiong, Huajun Chen, Ningyu Zhang
Multimodal relation extraction is an essential task for knowledge graph construction.
2 code implementations • 25 Jan 2023 • Xiang Chen, Lei LI, Shuofei Qiao, Ningyu Zhang, Chuanqi Tan, Yong Jiang, Fei Huang, Huajun Chen
Previous typical solutions mainly obtain an NER model from pre-trained language models (PLMs) with data from a rich-resource domain and adapt it to the target domain.
1 code implementation • COLING 2022 • Xiang Chen, Lei LI, Shumin Deng, Chuanqi Tan, Changliang Xu, Fei Huang, Luo Si, Huajun Chen, Ningyu Zhang
Most NER methods rely on extensive labeled data for model training and therefore struggle in low-resource scenarios with limited training data.
1 code implementation • 30 Apr 2022 • Chengyu Wang, Minghui Qiu, Chen Shi, Taolin Zhang, Tingting Liu, Lei LI, Jianing Wang, Ming Wang, Jun Huang, Wei Lin
The success of Pre-Trained Models (PTMs) has reshaped the development of Natural Language Processing (NLP).
3 code implementations • 5 Feb 2024 • Yixin Ou, Ningyu Zhang, Honghao Gui, Ziwen Xu, Shuofei Qiao, Yida Xue, Runnan Fang, Kangwei Liu, Lei LI, Zhen Bi, Guozhou Zheng, Huajun Chen
In recent years, instruction tuning has gained increasing attention and emerged as a crucial technique to enhance the capabilities of Large Language Models (LLMs).
1 code implementation • 31 Dec 2022 • Qingxiu Dong, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, Lei LI, Zhifang Sui
With the increasing ability of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), where LLMs make predictions only based on contexts augmented with a few examples.
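The ICL paradigm described here — prediction from a context augmented with a few labeled examples, with no parameter updates — amounts to prompt assembly. A toy illustration (the sentiment task and template are made up for this sketch):

```python
def build_icl_prompt(demonstrations, query):
    """Assemble a few-shot in-context learning prompt: the model sees
    labeled examples in its context and is asked to complete the label
    for the query, without any gradient update."""
    lines = [f"Review: {text}\nSentiment: {label}"
             for text, label in demonstrations]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

demos = [("Great movie!", "positive"), ("Terrible plot.", "negative")]
prompt = build_icl_prompt(demos, "I loved every minute.")
print(prompt)
```

The LLM's continuation of the final `Sentiment:` line is taken as its prediction; demonstration selection and ordering are exactly the design choices the survey above analyzes.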
2 code implementations • ACL 2022 • Ningyu Zhang, Mosha Chen, Zhen Bi, Xiaozhuan Liang, Lei LI, Xin Shang, Kangping Yin, Chuanqi Tan, Jian Xu, Fei Huang, Luo Si, Yuan Ni, Guotong Xie, Zhifang Sui, Baobao Chang, Hui Zong, Zheng Yuan, Linfeng Li, Jun Yan, Hongying Zan, Kunli Zhang, Buzhou Tang, Qingcai Chen
Artificial Intelligence (AI), along with the recent progress in biomedical language understanding, is gradually changing medical practice.
Ranked #1 on Semantic Similarity on CHIP-STS
4 code implementations • NeurIPS 2018 • Wei Cao, Dong Wang, Jian Li, Hao Zhou, Lei LI, Yitan Li
Missing values are ubiquitous in time series data.
3 code implementations • EMNLP 2020 • Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, Lei LI
Pre-trained contextual representations like BERT have achieved great success in natural language processing.
Ranked #16 on Semantic Textual Similarity on STS16
1 code implementation • 4 May 2022 • Xiang Chen, Lei LI, Ningyu Zhang, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen
Note that the previous parametric learning paradigm can be viewed as memorization: the training data is a book, and inference is a closed-book test.
2 code implementations • 29 May 2022 • Xiang Chen, Lei LI, Ningyu Zhang, Xiaozhuan Liang, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen
Specifically, vanilla prompt learning may struggle to utilize atypical instances by rote during fully-supervised training or overfit shallow patterns with low-shot data.
3 code implementations • 23 Aug 2022 • Ren Yang, Radu Timofte, Qi Zhang, Lin Zhang, Fanglong Liu, Dongliang He, Fu Li, He Zheng, Weihang Yuan, Pavel Ostyakov, Dmitry Vyal, Magauiya Zhussip, Xueyi Zou, Youliang Yan, Lei LI, Jingzhu Tang, Ming Chen, Shijie Zhao, Yu Zhu, Xiaoran Qin, Chenghua Li, Cong Leng, Jian Cheng, Claudio Rota, Marco Buzzelli, Simone Bianco, Raimondo Schettini, Dafeng Zhang, Feiyu Huang, Shizhuo Liu, Xiaobing Wang, Zhezhu Jin, Bingchen Li, Xin Li, Mingxi Li, Ding Liu, Wenbin Zou, Peijie Dong, Tian Ye, Yunchen Zhang, Ming Tan, Xin Niu, Mustafa Ayazoglu, Marcos Conde, Ui-Jin Choi, Zhuang Jia, Tianyu Xu, Yijian Zhang, Mao Ye, Dengyan Luo, Xiaofeng Pan, Liuhan Peng
The homepage of this challenge is at https://github.com/RenYang-home/AIM22_CompressSR.
4 code implementations • 30 Jun 2023 • Xuandong Zhao, Prabhanjan Ananth, Lei LI, Yu-Xiang Wang
We propose a robust and high-quality watermark method, Unigram-Watermark, by extending an existing approach with a simplified fixed grouping strategy.
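The "simplified fixed grouping strategy" means the green/red vocabulary split is global — keyed but context-independent — rather than re-derived from the preceding tokens at each step. A rough sketch of the split and a naive detection statistic; the names are hypothetical, and the actual method additionally biases logits during generation and uses a z-test for detection:

```python
import hashlib

def green_list(vocab, key, fraction=0.5):
    """Fixed (context-independent) split: whether a token is 'green'
    depends only on a keyed hash of the token itself."""
    def is_green(token):
        digest = hashlib.sha256(f"{key}:{token}".encode()).digest()
        return digest[0] < 256 * fraction
    return {t for t in vocab if is_green(t)}

def green_fraction(tokens, green):
    """Naive detection statistic: fraction of green tokens. Natural text
    should hover near `fraction`; watermarked generations (whose logits
    favored green tokens) sit well above it."""
    return sum(t in green for t in tokens) / len(tokens)

vocab = [f"tok{i}" for i in range(1000)]
green = green_list(vocab, key="secret")
sample = list(green)[:20]             # a fully 'green' token sequence
print(green_fraction(sample, green))  # 1.0
```

Because the grouping is fixed, detection needs only the key and the token sequence, which is what makes the scheme simple to analyze for robustness.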
1 code implementation • ACL 2021 • Jingjing Xu, Hao Zhou, Chun Gan, Zaixiang Zheng, Lei LI
The choice of token vocabulary affects the performance of machine translation.
2 code implementations • 15 Aug 2019 • Jiacheng Yang, Mingxuan Wang, Hao Zhou, Chengqi Zhao, Yong Yu, Wei-Nan Zhang, Lei LI
Our experiments in machine translation show that CTNMT gains up to 3 BLEU score on the WMT14 English-German language pair, even surpassing the previous state-of-the-art pre-training-aided NMT by 1.4 BLEU score.
1 code implementation • ACL 2021 • Chengqi Zhao, Mingxuan Wang, Qianqian Dong, Rong Ye, Lei LI
NeurST is an open-source toolkit for neural speech translation.
Ranked #1 on Speech-to-Text Translation on libri-trans
2 code implementations • 19 Dec 2020 • Jianze Liang, Chengqi Zhao, Mingxuan Wang, Xipeng Qiu, Lei LI
Neural machine translation often adopts the fine-tuning approach to adapt to specific domains.
1 code implementation • ACL (IWSLT) 2021 • Chengqi Zhao, Zhicheng Liu, Jian Tong, Tao Wang, Mingxuan Wang, Rong Ye, Qianqian Dong, Jun Cao, Lei LI
For offline speech translation, our best end-to-end model achieves an 8.1 BLEU improvement over the benchmark on the MuST-C test set and even approaches the results of a strong cascade solution.
2 code implementations • ACL 2021 • Runxin Xu, Tianyu Liu, Lei LI, Baobao Chang
Existing methods are not effective due to two challenges of this task: a) the target event arguments are scattered across sentences; b) the correlation among events in a document is non-trivial to model.
Ranked #2 on Document-level Event Extraction on ChFinAnn
1 code implementation • 7 Feb 2024 • Ziyang Wang, Jian-Qing Zheng, Yichi Zhang, Ge Cui, Lei LI
Mamba-UNet adopts a pure Visual Mamba (VMamba)-based encoder-decoder structure, infused with skip connections to preserve spatial information across different scales of the network.
1 code implementation • CVPR 2021 • Xuyang Bai, Zixin Luo, Lei Zhou, Hongkai Chen, Lei LI, Zeyu Hu, Hongbo Fu, Chiew-Lan Tai
Removing outlier correspondences is one of the critical steps for successful feature-based point cloud registration.
1 code implementation • CVPR 2021 • Yukang Chen, Yanwei Li, Tao Kong, Lu Qi, Ruihang Chu, Lei LI, Jiaya Jia
We propose Scale-aware AutoAug to learn data augmentation policies for object detection.
1 code implementation • ACL 2019 • Yunxuan Xiao, Yanru Qu, Lin Qiu, Hao Zhou, Lei LI, Wei-Nan Zhang, Yong Yu
However, many difficult questions require multiple supporting evidence from scattered text among two or more documents.
Ranked #33 on Question Answering on HotpotQA
1 code implementation • 7 Oct 2022 • Jiangtao Feng, Yi Zhou, Jun Zhang, Xian Qian, Liwei Wu, Zhexi Zhang, Yanming Liu, Mingxuan Wang, Lei LI, Hao Zhou
PARAGEN is a PyTorch-based NLP toolkit for further development on parallel generation.
1 code implementation • EMNLP 2020 • Zehui Lin, Xiao Pan, Mingxuan Wang, Xipeng Qiu, Jiangtao Feng, Hao Zhou, Lei LI
We investigate the following question for machine translation (MT): can we develop a single universal MT model to serve as the common seed and obtain derivative and improved models on arbitrary language pairs?
Ranked #3 on Machine Translation on WMT2014 English-French (using extra training data)
3 code implementations • ACL 2021 • Xiao Pan, Mingxuan Wang, Liwei Wu, Lei LI
Existing multilingual machine translation approaches mainly focus on English-centric directions, while the non-English directions still lag behind.
1 code implementation • 14 Nov 2018 • Ning Miao, Hao Zhou, Lili Mou, Rui Yan, Lei LI
In real-world applications of natural language generation, there are often constraints on the target sentences in addition to fluency and naturalness requirements.
1 code implementation • 4 May 2022 • Xiang Chen, Ningyu Zhang, Lei LI, Shumin Deng, Chuanqi Tan, Changliang Xu, Fei Huang, Luo Si, Huajun Chen
Since most MKGs are far from complete, extensive knowledge graph completion studies have been proposed focusing on the multimodal entity, relation extraction and link prediction.
2 code implementations • 1 Oct 2022 • Ningyu Zhang, Lei LI, Xiang Chen, Xiaozhuan Liang, Shumin Deng, Huajun Chen
Analogical reasoning is fundamental to human cognition and holds an important place in various fields.
2 code implementations • EMNLP 2020 • Shuang Zeng, Runxin Xu, Baobao Chang, Lei LI
Document-level relation extraction aims to extract relations among entities within a document.
Ranked #12 on Relation Extraction on DocRED
1 code implementation • ACL 2021 • Lihua Qian, Hao Zhou, Yu Bao, Mingxuan Wang, Lin Qiu, Wei-Nan Zhang, Yong Yu, Lei LI
With GLM, we develop Glancing Transformer (GLAT) for machine translation.
Ranked #69 on Machine Translation on WMT2014 English-German
1 code implementation • 2 Jun 2023 • Xuandong Zhao, Kexun Zhang, Zihao Su, Saastha Vasan, Ilya Grishchenko, Christopher Kruegel, Giovanni Vigna, Yu-Xiang Wang, Lei LI
However, if we do not require the watermarked image to look the same as the original one, watermarks that keep the image semantically similar can be an alternative defense against our attack.
1 code implementation • EMNLP 2021 • Shuhuai Ren, Jinchao Zhang, Lei LI, Xu Sun, Jie Zhou
Data augmentation aims to enrich training samples for alleviating the overfitting issue in low-resource or class-imbalanced situations.
1 code implementation • 29 May 2023 • Peiyi Wang, Lei LI, Liang Chen, Zefan Cai, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, Zhifang Sui
In this paper, we uncover a systematic bias in the evaluation paradigm of adopting large language models (LLMs), e.g., GPT-4, as a referee to score and compare the quality of responses generated by candidate models.
1 code implementation • ACL 2021 • Yijun Wang, Changzhi Sun, Yuanbin Wu, Hao Zhou, Lei LI, Junchi Yan
Entities and relations are represented by squares and rectangles in the table.
1 code implementation • 15 Feb 2022 • Lei LI, Yongfeng Zhang, Li Chen
In the latter case, ID vectors are randomly initialized but the model is trained in advance on large corpora, so they are actually in different learning stages.
1 code implementation • 23 May 2023 • Lean Wang, Lei LI, Damai Dai, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou, Xu Sun
In-context learning (ICL) emerges as a promising capability of large language models (LLMs) by providing them with demonstration examples to perform diverse tasks.
1 code implementation • 7 May 2022 • Xiang Chen, Ningyu Zhang, Lei LI, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen
To deal with these issues, we propose a novel Hierarchical Visual Prefix fusion NeTwork (HVPNeT) for visual-enhanced entity and relation extraction, aiming to achieve more effective and robust performance.
1 code implementation • Findings (NAACL) 2022 • Xiang Chen, Ningyu Zhang, Lei LI, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen
Multimodal named entity recognition and relation extraction (MNER and MRE) form a fundamental and crucial branch of information extraction.
1 code implementation • ACL 2021 • Lei LI, Yongfeng Zhang, Li Chen
Transformer, though demonstrated to have strong language modeling capability, is not personalized and fails to make use of user and item IDs, since the ID tokens are not even in the same semantic space as words.
1 code implementation • 28 Apr 2023 • Yuhao Huang, Xin Yang, Lian Liu, Han Zhou, Ao Chang, Xinrui Zhou, Rusi Chen, Junxuan Yu, Jiongquan Chen, Chaoyu Chen, Sijing Liu, Haozhe Chi, Xindi Hu, Kejuan Yue, Lei LI, Vicente Grau, Deng-Ping Fan, Fajin Dong, Dong Ni
To fully validate SAM's performance on medical data, we collected and sorted 53 open-source datasets and built a large medical segmentation dataset with 18 modalities, 84 objects, 125 object-modality paired targets, 1050K 2D images, and 6033K masks.
1 code implementation • EMNLP 2018 • Haoyue Shi, Hao Zhou, Jiaze Chen, Lei LI
To study the effectiveness of different tree structures, we replace the parsing trees with trivial trees (i.e., binary balanced tree, left-branching tree and right-branching tree) in the encoders.
Ranked #9 on Sentiment Analysis on Amazon Review Full
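The three trivial tree shapes mentioned above are easy to construct explicitly. A small illustrative sketch using nested tuples (purely for clarity; the paper plugs such trees into tree-structured encoders):

```python
def left_branching(tokens):
    """((w1 w2) w3) w4 ... : always combine from the left."""
    tree = tokens[0]
    for tok in tokens[1:]:
        tree = (tree, tok)
    return tree

def right_branching(tokens):
    """w1 (w2 (w3 w4)) ... : always combine from the right."""
    tree = tokens[-1]
    for tok in reversed(tokens[:-1]):
        tree = (tok, tree)
    return tree

def balanced(tokens):
    """Binary balanced tree: split the span in half recursively."""
    if len(tokens) == 1:
        return tokens[0]
    mid = len(tokens) // 2
    return (balanced(tokens[:mid]), balanced(tokens[mid:]))

words = ["the", "cat", "sat", "down"]
print(left_branching(words))   # ((('the', 'cat'), 'sat'), 'down')
print(right_branching(words))  # ('the', ('cat', ('sat', 'down')))
print(balanced(words))         # (('the', 'cat'), ('sat', 'down'))
```

None of these trees uses any syntactic information, which is exactly why they serve as controls against parser-produced trees.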
1 code implementation • 17 Sep 2019 • Xinlong Wang, Wei Yin, Tao Kong, Yuning Jiang, Lei LI, Chunhua Shen
In this paper, we first analyse the data distributions and interaction of foreground and background, then propose the foreground-background separated monocular depth estimation (ForeSeE) method, to estimate the foreground depth and background depth using separate optimization objectives and depth decoders.
1 code implementation • CVPR 2020 • Lei Li, Siyu Zhu, Hongbo Fu, Ping Tan, Chiew-Lan Tai
In this work, we propose an end-to-end framework to learn local multi-view descriptors for 3D point clouds.
Ranked #5 on Point Cloud Registration on 3DMatch Benchmark
1 code implementation • NAACL 2022 • Rong Ye, Mingxuan Wang, Lei LI
Learning similar representations for semantically similar speech and text is important for speech translation.
1 code implementation • 23 May 2023 • Wenda Xu, Danqing Wang, Liangming Pan, Zhenqiao Song, Markus Freitag, William Yang Wang, Lei LI
By harnessing both explicit human instruction and the implicit knowledge of GPT-4, we fine-tune a text evaluation metric based on LLaMA, producing both a score for the generated text and a human-readable diagnostic report.
1 code implementation • 21 Mar 2024 • Wei Chen, Yuxuan Liang, Yuanshao Zhu, Yanchuan Chang, Kang Luo, Haomin Wen, Lei LI, Yanwei Yu, Qingsong Wen, Chao Chen, Kai Zheng, Yunjun Gao, Xiaofang Zhou, Yu Zheng
In this paper, we present a comprehensive review of the development and recent advances in deep learning for trajectory computing (DL4Traj).
1 code implementation • 30 Oct 2021 • Jiasong Wu, Qingchun Li, Guanyu Yang, Lei LI, Lotfi Senhadji, Huazhong Shu
The first module adopts a random audio sub-sampler on each noisy audio to generate training pairs.
2 code implementations • ICCV 2019 • Xin Wang, Jiawei Wu, Junkun Chen, Lei LI, Yuan-Fang Wang, William Yang Wang
We also introduce two tasks for video-and-language research based on VATEX: (1) Multilingual Video Captioning, aimed at describing a video in various languages with a compact unified captioning model, and (2) Video-guided Machine Translation, to translate a source language description into the target language using the video information as additional spatiotemporal context.
1 code implementation • 28 Feb 2022 • Tianyun Yang, Ziyao Huang, Juan Cao, Lei LI, Xirong Li
With the rapid progress of generation technology, it has become necessary to attribute the origin of fake images.
1 code implementation • 25 Dec 2020 • Jiangjie Chen, Qiaoben Bao, Changzhi Sun, Xinbo Zhang, Jiaze Chen, Hao Zhou, Yanghua Xiao, Lei LI
The final claim verification is based on all latent variables.
2 code implementations • Findings (ACL) 2021 • Chi Han, Mingxuan Wang, Heng Ji, Lei LI
By projecting audio and text features to a common semantic representation, Chimera unifies MT and ST tasks and boosts the performance on ST benchmarks, MuST-C and Augmented Librispeech, to a new state-of-the-art.
1 code implementation • EMNLP 2021 • Dongyu Ru, Changzhi Sun, Jiangtao Feng, Lin Qiu, Hao Zhou, Weinan Zhang, Yong Yu, Lei LI
LogiRE treats logic rules as latent variables and consists of two modules: a rule generator and a relation extractor.
Ranked #21 on Relation Extraction on DocRED
1 code implementation • 30 Jan 2024 • Xuandong Zhao, Xianjun Yang, Tianyu Pang, Chao Du, Lei LI, Yu-Xiang Wang, William Yang Wang
In this paper, we propose the weak-to-strong jailbreaking attack, an efficient method to attack aligned LLMs to produce harmful text.
2 code implementations • 14 Oct 2021 • Chenyang Huang, Hao Zhou, Osmar R. Zaïane, Lili Mou, Lei LI
How do we perform efficient inference while retaining high translation quality?
1 code implementation • ACL 2019 • Yu Bao, Hao Zhou, Shu-Jian Huang, Lei LI, Lili Mou, Olga Vechtomova, Xin-yu Dai, Jia-Jun Chen
In this paper, we propose to generate sentences from disentangled syntactic and semantic spaces.
1 code implementation • IJCNLP 2019 • Fuli Luo, Shunyao Li, Pengcheng Yang, Lei LI, Baobao Chang, Zhifang Sui, Xu Sun
It consists of a generator to produce pun sentences, and a discriminator to distinguish between the generated pun sentences and the real sentences with specific word senses.
1 code implementation • NeurIPS 2023 • Yuanxin Liu, Lei LI, Shuhuai Ren, Rundong Gao, Shicheng Li, Sishuo Chen, Xu Sun, Lu Hou
The multi-aspect categorization of FETV enables fine-grained analysis of the metrics' reliability in different scenarios.
1 code implementation • ACL 2019 • Wenhuan Zeng, Abulikemu Abuduweili, Lei LI, Pengcheng Yang
Comments on social media are very diverse in terms of content, style, and vocabulary, which makes generating comments much more challenging than other existing natural language generation (NLG) tasks.
1 code implementation • EMNLP 2021 • Lei LI, Yankai Lin, Shuhuai Ren, Peng Li, Jie Zhou, Xu Sun
Knowledge distillation (KD) has proven effective for compressing large-scale pre-trained language models.
1 code implementation • 21 Sep 2020 • Qianqian Dong, Mingxuan Wang, Hao Zhou, Shuang Xu, Bo Xu, Lei LI
The key idea is to generate source transcript and target translation text with a single decoder.
1 code implementation • 21 Sep 2020 • Qianqian Dong, Rong Ye, Mingxuan Wang, Hao Zhou, Shuang Xu, Bo Xu, Lei LI
Can we build a system to fully utilize signals in a parallel ST corpus?
1 code implementation • NAACL 2021 • Wenkai Yang, Lei LI, Zhiyuan Zhang, Xuancheng Ren, Xu Sun, Bin He
However, in this paper, we find that it is possible to hack the model in a data-free way by modifying one single word embedding vector, with almost no accuracy sacrificed on clean samples.
1 code implementation • ACL 2022 • Qingkai Fang, Rong Ye, Lei LI, Yang Feng, Mingxuan Wang
How to learn a better speech representation for end-to-end speech-to-text translation (ST) with limited labeled data?
2 code implementations • 15 Sep 2022 • Gongping Chen, Lei LI, Jianxun Zhang, Yu Dai
However, variable tumor morphology, blurred boundaries, and similar intensity distributions pose challenges for accurate segmentation of breast tumors.
1 code implementation • ACL 2022 • Zhiyi Fu, Wangchunshu Zhou, Jingjing Xu, Hao Zhou, Lei LI
How do masked language models (MLMs) such as BERT learn contextual representations?
1 code implementation • CVPR 2021 • Lei LI, Ke Gao, Juan Cao, Ziyao Huang, Yepeng Weng, Xiaoyue Mi, Zhengze Yu, Xiaoya Li, Boyang Xia
A series of strategies are introduced to guarantee the safety and effectiveness of the expanded domains.
1 code implementation • 13 Oct 2021 • Guangxiang Zhao, Wenkai Yang, Xuancheng Ren, Lei LI, Yunfang Wu, Xu Sun
The conventional wisdom behind learning deep classification models is to focus on badly classified examples and ignore well-classified examples that are far from the decision boundary.
2 code implementations • 22 May 2023 • Ce Zheng, Lei LI, Qingxiu Dong, Yuxuan Fan, Zhiyong Wu, Jingjing Xu, Baobao Chang
Inspired by in-context learning (ICL), a new paradigm based on demonstration contexts without parameter updating, we explore whether ICL can edit factual knowledge.
2 code implementations • 1 Feb 2021 • Lei LI, Yongfeng Zhang, Li Chen
Explaining to users why some items are recommended is critical, as it can help users to make better decisions, increase their satisfaction, and gain their trust in recommender systems (RS).
1 code implementation • 20 Feb 2021 • Lei LI, Yongfeng Zhang, Li Chen
To achieve a standard way of evaluating recommendation explanations, we provide three benchmark datasets for EXplanaTion RAnking (denoted as EXTRA), on which explainability can be measured by ranking-oriented metrics.
1 code implementation • 5 Apr 2022 • Yu Bao, Hao Zhou, ShuJian Huang, Dongqi Wang, Lihua Qian, Xinyu Dai, Jiajun Chen, Lei LI
Recently, parallel text generation has received widespread attention due to its success in generation efficiency.
1 code implementation • ACL 2021 • Zehui Lin, Liwei Wu, Mingxuan Wang, Lei LI
These jointly trained models often suffer from performance degradation on rich-resource language pairs.
1 code implementation • 7 Oct 2022 • Qingxiu Dong, Damai Dai, YiFan Song, Jingjing Xu, Zhifang Sui, Lei LI
However, we find that facts stored in the PLMs are not always correct.
1 code implementation • ICLR 2020 • Rong Ye, Wenxian Shi, Hao Zhou, Zhongyu Wei, Lei LI
We propose the variational template machine (VTM), a novel method to generate text descriptions from data tables.
1 code implementation • 1 Mar 2024 • Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, Lei LI, Sishuo Chen, Xu Sun, Lu Hou
Motivated by these two problems, we propose the TempCompass benchmark, which introduces a diversity of temporal aspects and task formats.
2 code implementations • EMNLP 2021 • Qingnan Jiang, Mingxuan Wang, Jun Cao, Shanbo Cheng, ShuJian Huang, Lei LI
How to effectively adapt neural machine translation (NMT) models according to emerging cases without retraining?
1 code implementation • 7 Jul 2023 • Zhongyu Jiang, Zhuoran Zhou, Lei LI, Wenhao Chai, Cheng-Yen Yang, Jenq-Neng Hwang
Learning-based methods have dominated the 3D human pose estimation (HPE) tasks with significantly better performance in most benchmarks than traditional optimization-based methods.
Ranked #11 on 3D Human Pose Estimation on 3DPW (PA-MPJPE metric)
1 code implementation • 10 Oct 2023 • Kexun Zhang, Hongqiao Chen, Lei LI, William Wang
Large language models (LLMs) have shown promising capabilities in using external tools to solve complex problems.
1 code implementation • 17 Nov 2023 • Zhuoran Zhou, Zhongyu Jiang, Wenhao Chai, Cheng-Yen Yang, Lei LI, Jenq-Neng Hwang
We further apply a guided diffusion model to domain adapt 3D adult pose to infant pose to supplement small datasets.
1 code implementation • NeurIPS 2023 • Kexun Zhang, Danqing Wang, Jingtao Xia, William Yang Wang, Lei LI
To address these challenges, we propose ALGO, a framework that synthesizes Algorithmic programs with LLM-Generated Oracles to guide the generation and verify their correctness.
1 code implementation • 16 Jun 2019 • Wenxian Shi, Hao Zhou, Ning Miao, Lei LI
To enhance the controllability and interpretability, one can replace the Gaussian prior with a mixture of Gaussian distributions (GM-VAE), whose mixture components could be related to hidden semantic aspects of data.
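Sampling from the mixture prior described above works in two steps: first pick a mixture component (which may correspond to a hidden semantic aspect of the data), then sample the latent vector from that component's Gaussian. A minimal sketch with assumed dimensions and names:

```python
import random

def sample_gm_prior(means, sigma=1.0, weights=None):
    """Sample a latent z from a mixture-of-Gaussians prior:
    1) choose a component k (optionally with mixture weights),
    2) sample each latent dimension around that component's mean."""
    k = random.choices(range(len(means)), weights=weights)[0]
    z = [random.gauss(m, sigma) for m in means[k]]
    return k, z

random.seed(0)
means = [[-2.0, -2.0], [0.0, 0.0], [2.0, 2.0]]  # 3 components, 2-D latent
component, z = sample_gm_prior(means, sigma=0.1)
print(component, z)
```

In a GM-VAE the means (and possibly the weights) are learned jointly with the encoder and decoder, so the components can drift toward distinct semantic aspects of the data.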
1 code implementation • 17 Apr 2024 • Xin Li, Kun Yuan, Yajing Pei, Yiting Lu, Ming Sun, Chao Zhou, Zhibo Chen, Radu Timofte, Wei Sun, HaoNing Wu, ZiCheng Zhang, Jun Jia, Zhichao Zhang, Linhan Cao, Qiubo Chen, Xiongkuo Min, Weisi Lin, Guangtao Zhai, Jianhui Sun, Tianyi Wang, Lei LI, Han Kong, Wenxuan Wang, Bing Li, Cheng Luo, Haiqiang Wang, Xiangguang Chen, Wenhui Meng, Xiang Pan, Huiying Shi, Han Zhu, Xiaozhong Xu, Lei Sun, Zhenzhong Chen, Shan Liu, Fangyuan Kong, Haotian Fan, Yifang Xu, Haoran Xu, Mengduo Yang, Jie Zhou, Jiaze Li, Shijie Wen, Mai Xu, Da Li, Shunyu Yao, Jiazhi Du, WangMeng Zuo, Zhibo Li, Shuai He, Anlong Ming, Huiyuan Fu, Huadong Ma, Yong Wu, Fie Xue, Guozhi Zhao, Lina Du, Jie Guo, Yu Zhang, Huimin Zheng, JunHao Chen, Yue Liu, Dulan Zhou, Kele Xu, Qisheng Xu, Tao Sun, Zhixiang Ding, Yuhang Hu
This paper reviews the NTIRE 2024 Challenge on Short-form UGC Video Quality Assessment (S-UGC VQA), where various excellent solutions were submitted and evaluated on the collected dataset KVQ from a popular short-form video platform, i.e., the Kuaishou/Kwai Platform.
1 code implementation • ACL 2022 • Qianqian Dong, Yaoming Zhu, Mingxuan Wang, Lei LI
Given a usually long speech sequence, we develop an efficient monotonic segmentation module inside an encoder-decoder model to accumulate acoustic information incrementally and detect proper speech unit boundaries for the input in the speech translation task.
1 code implementation • 6 Jan 2021 • Jieyu Zhang, Xiangchen Song, Ying Zeng, Jiaze Chen, Jiaming Shen, Yuning Mao, Lei LI
Previous approaches focus on taxonomy expansion, i.e., finding an appropriate hypernym concept from the taxonomy for a new query concept.
1 code implementation • Findings (EMNLP) 2021 • Zewei Sun, Mingxuan Wang, Lei LI
Can pre-trained BERT for one language and GPT for another be glued together to translate texts?
1 code implementation • 21 Nov 2020 • Lei LI, Suping Wu
Then, we use a separate side-branch network to process the extracted data to better capture edge geometry and corner feature information.
1 code implementation • 2 Oct 2023 • Lei LI, Yekun Chai, Shuohuan Wang, Yu Sun, Hao Tian, Ningyu Zhang, Hua Wu
We validate our approach across a wide range of domains, incorporating seven distinct external tools.
1 code implementation • 4 Jun 2022 • Shuhuai Ren, Lei LI, Xuancheng Ren, Guangxiang Zhao, Xu Sun
However, evaluating the openness of CLIP-like models is challenging, as the models are open to arbitrary vocabulary in theory, but their accuracy varies in practice.
2 code implementations • 10 Apr 2023 • Wenhao Zhu, Hongyi Liu, Qingxiu Dong, Jingjing Xu, ShuJian Huang, Lingpeng Kong, Jiajun Chen, Lei LI
Large language models (LLMs) have demonstrated remarkable potential in handling multilingual machine translation (MMT).
2 code implementations • 9 Aug 2023 • Wenhao Zhu, Yunzhe Lv, Qingxiu Dong, Fei Yuan, Jingjing Xu, ShuJian Huang, Lingpeng Kong, Jiajun Chen, Lei LI
We start by targeting individual languages, performing cross-lingual instruction-tuning (CoIT) on LLaMA, i.e., tuning it with translation task data and cross-lingual general task data to obtain cross-lingual models (x-LLaMAs), and formulate underlying scaling laws to investigate the advantages of using scalable translation data.
1 code implementation • 24 Aug 2020 • Ke Mei, Lei LI, Jinchang Xu, Yanhua Cheng, Yugeng Lin
Image retrieval is a fundamental problem in computer vision.
1 code implementation • Findings (ACL) 2022 • Zewei Sun, Mingxuan Wang, Hao Zhou, Chengqi Zhao, ShuJian Huang, Jiajun Chen, Lei LI
This paper does not aim at introducing a novel model for document-level neural machine translation.
1 code implementation • ACL 2019 • Pengcheng Yang, Zhihan Zhang, Fuli Luo, Lei LI, Chengyang Huang, Xu Sun
Automatic commenting of online articles can provide additional opinions and facts to the reader, which improves user experience and engagement on social media platforms.
1 code implementation • Findings (ACL) 2021 • Huanqin Wu, Wei Liu, Lei LI, Dan Nie, Tao Chen, Feng Zhang, Di Wang
Keyphrase Prediction (KP) task aims at predicting several keyphrases that can summarize the main idea of the given document.
1 code implementation • ICLR 2022 • Huiyun Yang, Huadong Chen, Hao Zhou, Lei LI
Based on large-scale pre-trained multilingual representations, recent cross-lingual transfer methods have achieved impressive transfer performances.
1 code implementation • 7 Sep 2022 • Lei LI, Zhizheng Liu, Weining Ren, Liudi Yang, Fangjinhua Wang, Marc Pollefeys, Songyou Peng
3D textured shape recovery from partial scans is crucial for many real-world applications.
1 code implementation • 11 Oct 2022 • Lei LI, Yankai Lin, Xuancheng Ren, Guangxiang Zhao, Peng Li, Jie Zhou, Xu Sun
We then design a Model Uncertainty-aware Knowledge Integration (MUKI) framework to recover the golden supervision for the student.
1 code implementation • 23 May 2023 • Lei LI, Jingjing Xu, Qingxiu Dong, Ce Zheng, Qi Liu, Lingpeng Kong, Xu Sun
Language models (LMs) are gradually becoming general-purpose interfaces in the interactive and embodied world, where the understanding of physical concepts is an essential prerequisite.
1 code implementation • 21 Apr 2021 • Rong Ye, Mingxuan Wang, Lei LI
XSTNet takes both speech and text as input and outputs both transcription and translation text.
1 code implementation • NeurIPS 2019 • Ning Miao, Hao Zhou, Chengqi Zhao, Wenxian Shi, Lei LI
Neural models for text generation require a softmax layer with proper token embeddings during the decoding phase.
1 code implementation • Findings (EMNLP) 2021 • Yaoming Zhu, Jiangtao Feng, Chengqi Zhao, Mingxuan Wang, Lei LI
Developing a unified multilingual model has long been a pursuit for machine translation.
1 code implementation • 5 Aug 2021 • Lei LI, Hongbo Fu, Maks Ovsjanikov
Instead of using a predefined fixed-size local support in voxelization, we propose to learn the optimal support in a data-driven manner.
1 code implementation • 4 Feb 2022 • Wangbin Ding, Lei LI, Xiahai Zhuang, Liqin Huang
For the label fusion, we design a similarity estimation network (SimNet), which estimates the fusion weight of each atlas by measuring its similarity to the target image.
1 code implementation • 11 Aug 2020 • Lei Li, Veronika A. Zimmer, Julia A. Schnabel, Xiahai Zhuang
In this work, we develop a new framework, namely AtrialJSQnet, where LA segmentation, scar projection onto the LA surface, and scar quantification are performed simultaneously in an end-to-end style.
1 code implementation • 18 Jun 2021 • Lei LI, Veronika A. Zimmer, Julia A. Schnabel, Xiahai Zhuang
Late gadolinium enhancement magnetic resonance imaging (LGE MRI) is commonly used to visualize and quantify left atrial (LA) scars.
2 code implementations • 4 Mar 2022 • Bing Zou, Yubo Zheng, Mu Shen, Yingying Luo, Lei LI, Lin Zhang
The hardware and software of commonly used EEG acquisition systems are usually closed-source.
1 code implementation • WS 2019 • Yao Fu, Hao Zhou, Jiaze Chen, Lei LI
We apply this framework to existing datasets and models and show that: (1) the pivot words are strong features for the classification of sentence attributes; (2) to change the attribute of a sentence, many datasets only require changing certain pivot words; (3) consequently, many transfer models perform only lexical-level modification, while leaving higher-level sentence structures unchanged.
1 code implementation • NeurIPS 2021 • Zaixiang Zheng, Hao Zhou, ShuJian Huang, Jiajun Chen, Jingjing Xu, Lei LI
Thus REDER enables reversible machine translation by simply flipping the input and output ends.
1 code implementation • 10 Oct 2022 • Wenda Xu, YiLin Tuan, Yujie Lu, Michael Saxon, Lei LI, William Yang Wang
Is it possible to build a general and automatic natural language generation (NLG) evaluation metric?
1 code implementation • 10 Jun 2022 • Zheyao Gao, Lei LI, Fuping Wu, Sihan Wang, Xiahai Zhuang
In this work, we propose a new framework of distributed learning that bridges the gap between two groups, and improves the performance for both generic and local data.
1 code implementation • 12 Oct 2022 • Lei LI, Nicolas Donati, Maks Ovsjanikov
Our approach is not only accurate with near-isometric input, for which a high spectral resolution is typically preferred, but also robust and able to produce reasonable matching even in the presence of significant non-isometric distortion, which poses great challenges to existing methods.
1 code implementation • 19 Dec 2022 • Wenda Xu, Xian Qian, Mingxuan Wang, Lei LI, William Yang Wang
In this paper, we propose SESCORE2, a self-supervised approach for training a model-based metric for text generation evaluation.
1 code implementation • 2 Nov 2023 • Fengyi Wu, Tianfang Zhang, Lei LI, Yian Huang, Zhenming Peng
Deep learning (DL) networks have achieved remarkable performance in infrared small target detection (ISTD).
1 code implementation • 29 Nov 2023 • Shicheng Li, Lei LI, Shuhuai Ren, Yuanxin Liu, Yi Liu, Rundong Gao, Xu sun, Lu Hou
The ability to perceive how objects change over time is a crucial ingredient in human intelligence.
1 code implementation • 10 Jun 2021 • Mingxuan Jing, Wenbing Huang, Fuchun Sun, Xiaojian Ma, Tao Kong, Chuang Gan, Lei LI
In particular, we propose an Expectation-Maximization (EM)-style algorithm: an E-step that samples the options of the expert conditioned on the current learned policy, and an M-step that updates the low- and high-level policies of the agent simultaneously to minimize the newly proposed option-occupancy measurement between the expert and the agent.
1 code implementation • 10 Apr 2022 • Xinhang Li, Zihao Li, Nan Yang, Zheng Yuan, Qinwen Wang, Yiying Yang, Yupeng Huang, Xuri Song, Lei LI, Lin Zhang
The expansion of renewable energy could help realize the goals of peaking carbon dioxide emissions and achieving carbon neutrality.
1 code implementation • 12 Apr 2022 • Yunfei Li, Tao Kong, Lei LI, Yi Wu
Can a robot autonomously learn to design and construct a bridge from varying-sized blocks without a blueprint?
1 code implementation • 5 Feb 2023 • Kexun Zhang, Xianjun Yang, William Yang Wang, Lei LI
Diffusion models show promising generation capability for a variety of data.
1 code implementation • 5 Mar 2024 • Xijia Tao, Shuai Zhong, Lei LI, Qi Liu, Lingpeng Kong
In this paper, we propose a novel jailbreaking attack against VLMs, aiming to bypass their safety barrier when a user inputs harmful instructions.
1 code implementation • 18 Jun 2019 • Anfeng Cheng, Chuan Zhou, Hong Yang, Jia Wu, Lei LI, Jianlong Tan, Li Guo
Due to the expensive costs of labeling anchor users for training prediction models, we consider in this paper the problem of minimizing the number of user pairs across multiple networks selected for labeling, so as to improve the accuracy of the prediction.
2 code implementations • 24 Oct 2019 • An Yan, Xin Eric Wang, Jiangtao Feng, Lei LI, William Yang Wang
Commanding a robot to navigate with natural language instructions is a long-term goal for grounded language understanding and robotics.
1 code implementation • Findings (EMNLP) 2021 • Lei LI, Yankai Lin, Deli Chen, Shuhuai Ren, Peng Li, Jie zhou, Xu sun
On the other hand, the exiting decisions made by internal classifiers are unreliable, leading to wrongly emitted early predictions.
1 code implementation • 14 Jan 2022 • Kai-Ni Wang, Xin Yang, Juzheng Miao, Lei LI, Jing Yao, Ping Zhou, Wufeng Xue, Guang-Quan Zhou, Xiahai Zhuang, Dong Ni
Extensive experimental results on a publicly available dataset from the Myocardial Pathology Segmentation combining multi-sequence CMR challenge (MyoPS 2020) demonstrate that our method achieves promising performance compared with other state-of-the-art methods.
1 code implementation • 16 Sep 2022 • Lei LI, Souhaib Attaiki, Maks Ovsjanikov
In this work, we present a novel learning-based framework that combines the local accuracy of contrastive learning with the global consistency of geometric approaches, for robust non-rigid matching.
2 code implementations • 6 Feb 2023 • Xuandong Zhao, Yu-Xiang Wang, Lei LI
We can then detect the secret message by probing a suspect model to tell if it is distilled from the protected one.
1 code implementation • 21 Jun 2023 • Wentao Liu, Tong Tian, Lemeng Wang, Weijin Xu, Lei LI, Haoyuan Li, Wenyi Zhao, Siyu Tian, Xipeng Pan, Huihua Yang, Feng Gao, Yiming Deng, Ruisheng Su
In this paper, we introduce DIAS, a dataset specifically developed for IA segmentation in DSA sequences.
1 code implementation • 10 Dec 2021 • Jiangjie Chen, Chun Gan, Sijie Cheng, Hao Zhou, Yanghua Xiao, Lei LI
We also propose a new metric to alleviate the shortcomings of current automatic metrics and better evaluate the trade-off.
1 code implementation • 4 Aug 2022 • Lei LI, Zhiyuan Zhang, Ruihan Bao, Keiko Harimoto, Xu sun
Traditional knowledge distillation in classification problems transfers the knowledge via class correlations in the soft label produced by teacher models, which are not available in regression problems like stock trading volume prediction.
1 code implementation • 7 Oct 2022 • Xuandong Zhao, Lei LI, Yu-Xiang Wang
We prove that a protected model still retains the original accuracy within a certain bound.
1 code implementation • 18 Apr 2023 • Lei LI, Jing Chen, Bozhong Tian, Ningyu Zhang
Pre-trained Language Models (PLMs), as parametric-based eager learners, have become the de-facto choice for current paradigms of Natural Language Processing (NLP).
1 code implementation • 24 May 2023 • Heming Xia, Qingxiu Dong, Lei LI, Jingjing Xu, Tianyu Liu, Ziwei Qin, Zhifang Sui
Recently, Large Language Models (LLMs) have been serving as general-purpose interfaces, posing a significant demand for comprehensive visual knowledge.
1 code implementation • 11 Apr 2019 • Hao Wu, Jiayuan Mao, Yufeng Zhang, Yuning Jiang, Lei LI, Weiwei Sun, Wei-Ying Ma
We propose Unified Visual-Semantic Embeddings (UniVSE) for learning a joint space of visual and textual concepts.
1 code implementation • 21 May 2023 • Yi Liu, Xiaohan Bi, Lei LI, Sishuo Chen, Wenkai Yang, Xu sun
However, as pre-trained language models (PLMs) continue to increase in size, the communication cost for transmitting parameters during synchronization has become a training speed bottleneck.
1 code implementation • IJCAI 2019 2019 • Pengcheng Yang, Fuli Luo, Peng Chen, Lei LI, Zhiyi Yin, Xiaodong He, Xu sun
The visual storytelling (VST) task aims at generating a reasonable and coherent paragraph-level story with the image stream as input.
Ranked #21 on Visual Storytelling on VIST
1 code implementation • EACL 2021 • Yijun Wang, Changzhi Sun, Yuanbin Wu, Hao Zhou, Lei LI, Junchi Yan
Current state-of-the-art systems for joint entity relation extraction (Luan et al., 2019; Wadden et al., 2019) usually adopt the multi-task learning framework.
1 code implementation • COLING 2022 • Dugang Liu, Weihao Du, Lei LI, Weike Pan, Zhong Ming
Existing legal judgment prediction methods usually only consider one single case fact description as input, which may not fully utilize the information in the data such as case relations and frequency.
1 code implementation • 22 Nov 2022 • Jiangjie Chen, Rui Xu, Wenxuan Zeng, Changzhi Sun, Lei LI, Yanghua Xiao
Given a possibly false claim sentence, how can we automatically correct it with minimal editing?
1 code implementation • 10 May 2023 • Jiangjie Chen, Wei Shi, Ziquan Fu, Sijie Cheng, Lei LI, Yanghua Xiao
Large language models (LLMs) have been widely studied for their ability to store and utilize positive knowledge.
1 code implementation • 6 Oct 2023 • Zhenqiao Song, Yunlong Zhao, Wenxian Shi, Yang Yang, Lei LI
In this paper, we propose NAEPro, a model to jointly design protein sequence and structure based on automatically detected functional sites.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Maosen Zhang, Nan Jiang, Lei LI, Yexiang Xue
Generating natural language under complex constraints is a principled formulation towards controllable text generation.
1 code implementation • Findings (EMNLP) 2021 • Hua Zheng, Lei LI, Damai Dai, Deli Chen, Tianyu Liu, Xu sun, Yang Liu
In this paper, we propose to leverage word-formation knowledge to enhance Chinese WSD.
1 code implementation • 20 Dec 2022 • Fei Yuan, Yinquan Lu, Wenhao Zhu, Lingpeng Kong, Lei LI, Yu Qiao, Jingjing Xu
To address the needs of learning representations for all languages in a unified space, we propose a novel efficient training recipe, upon which we build an effective detachable model, Lego-MT.
1 code implementation • ACL 2020 • Ning Miao, Yuxuan Song, Hao Zhou, Lei LI
It has been a common approach to pre-train a language model on a large corpus and fine-tune it on task-specific data.
1 code implementation • 5 Sep 2021 • Lei LI, Wangbin Ding, Liqun Huang, Xiahai Zhuang
In this work, we propose an automatic RV segmentation framework, where the information from long-axis (LA) views is utilized to assist the segmentation of short-axis (SA) views via information transition.
1 code implementation • 1 Mar 2022 • Zheng Yuan, Tianhao Wu, Qinwen Wang, Yiying Yang, Lei LI, Lin Zhang
Although there have been some achievements in the field of MVP in open-space environments, urban areas pose the challenges of complicated road structures and restricted moving spaces to the resolution of MVP games.
1 code implementation • Findings (ACL) 2022 • Xuandong Zhao, Zhiguo Yu, Ming Wu, Lei LI
How can we learn highly compact yet effective sentence representations?
2 code implementations • 14 Dec 2022 • Xuandong Zhao, Siqi Ouyang, Zhiguo Yu, Ming Wu, Lei LI
How can we extend a pre-trained model to many language understanding tasks, without labeled or additional unlabeled data?
1 code implementation • 28 Nov 2022 • Danqing Wang, Zeyu Wen, Fei Ye, Lei LI, Hao Zhou
By sampling in the latent space, LSSAMP can simultaneously generate peptides with ideal sequence attributes and secondary structures.
1 code implementation • 23 May 2023 • Danqing Wang, Lei LI
In this paper, we propose Study Assistant for Large LAnguage Model (SALAM), a novel framework with an auxiliary agent to assist the main LLM in learning from mistakes through interactive cooperation.
1 code implementation • 8 Feb 2024 • Xuandong Zhao, Lei LI, Yu-Xiang Wang
In this paper, we propose a new decoding method called Permute-and-Flip (PF) decoder.
1 code implementation • 6 Oct 2022 • Yiyang Li, Lei LI, Marina Litvak, Natalia Vanetik, Dingxin Hu, Yuze Li, Yanquan Zhou
The issue of factual consistency in abstractive summarization has received extensive attention in recent years, and the evaluation of factual consistency between summary and document has become an important and urgent task.
1 code implementation • CVPR 2023 • Souhaib Attaiki, Lei LI, Maks Ovsjanikov
We observe that with proper training, learned features can be useful in such tasks, but, crucially, only with an appropriate choice of the receptive field size.
1 code implementation • 30 Apr 2023 • Zhenqiao Song, Lei LI
How can we efficiently generate diverse and novel protein sequences with high fitness?
1 code implementation • 8 Jun 2023 • Xinhang Li, Yiying Yang, Zheng Yuan, Zhe Wang, Qinwen Wang, Chen Xu, Lei LI, Jianhua He, Lin Zhang
For the more challenging problem of pursuing multiple evading vehicles, these algorithms typically select a fixed target evading vehicle for the pursuing vehicles without considering the dynamic traffic situation, which significantly reduces the pursuit success rate.
1 code implementation • 12 Jan 2024 • Lei LI, Jianxun Lian, Xiao Zhou, Xing Xie
However, most existing retrieval models employ a single-round inference paradigm, which may not adequately capture the dynamic nature of user preferences and may get stuck in one area of the item space.
1 code implementation • CONLL 2019 • Lei Li, Wei Liu, Marina Litvak, Natalia Vanetik, Zuying Huang
Various Seq2Seq learning models designed for machine translation have recently been applied to the abstractive summarization task.
1 code implementation • Findings (ACL) 2021 • Changzhi Sun, Xinbo Zhang, Jiangjie Chen, Chun Gan, Yuanbin Wu, Jiaze Chen, Hao Zhou, Lei LI
In this paper, we propose PRobr, a novel approach for joint answer prediction and proof generation.
1 code implementation • Findings (NAACL) 2022 • Yiran Chen, Zhenqiao Song, Xianze Wu, Danqing Wang, Jingjing Xu, Jiaze Chen, Hao Zhou, Lei LI
We introduce MTG, a new benchmark suite for training and evaluating multilingual text generation.
1 code implementation • 2 Nov 2022 • Lean Wang, Lei LI, Xu sun
Knowledge distillation (KD) is an effective framework to transfer knowledge from a large-scale teacher to a compact yet well-performing student.
1 code implementation • ICLR 2021 • Yutong Xie, Chence Shi, Hao Zhou, Yuwei Yang, Weinan Zhang, Yong Yu, Lei LI
Searching for novel molecules with desired chemical properties is crucial in drug discovery.
1 code implementation • 16 May 2021 • Wangbin Ding, Lei LI, Xiahai Zhuang, Liqin Huang
However, it is still challenging to develop a multi-modality registration network due to the lack of robust criteria for network training.
1 code implementation • 19 Dec 2022 • Siqi Ouyang, Rong Ye, Lei LI
In this paper, we propose Word-Aligned COntrastive learning (WACO), a simple and effective method for extremely low-resource speech-to-text translation.
1 code implementation • 12 Jun 2020 • Xunpeng Huang, Runxin Xu, Hao Zhou, Zhe Wang, Zhengyang Liu, Lei LI
Due to its simplicity and outstanding ability to generalize, stochastic gradient descent (SGD) is still the most widely used optimization method despite its slow convergence.
1 code implementation • 18 Jun 2021 • Lei LI, Wei Liu, Marina Litvak, Natalia Vanetik, Jiacheng Pei, Yinan Liu, Siya Qi
Due to the subjectivity of summarization, it is good practice to have more than one gold summary for each training document.
1 code implementation • NAACL 2022 • Xuandong Zhao, Lei LI, Yu-Xiang Wang
Large language models are shown to memorize privacy information such as social security numbers in training data.
1 code implementation • IWSLT (ACL) 2022 • Siqi Ouyang, Rong Ye, Lei LI
Training speech translation (ST) models requires large and high-quality datasets.
1 code implementation • 24 May 2023 • Siqi Ouyang, Lei LI
However, LLMs frequently fail in complex decision-making tasks due to the misalignment between the pre-trained knowledge in LLMs and the actual rules in the environment.
1 code implementation • 30 Mar 2021 • Tao Wang, Chengqi Zhao, Mingxuan Wang, Lei LI, Deyi Xiong
Automatic translation of dialogue texts is in high demand in many real-life scenarios.
1 code implementation • 13 Feb 2024 • Yiyang Li, Lei LI, Dingxin Hu, Xueyi Hao, Marina Litvak, Natalia Vanetik, Yanquan Zhou
Improving factual consistency in abstractive summarization has been a focus of current research.
1 code implementation • 15 Feb 2024 • André V. Duarte, Xuandong Zhao, Arlindo L. Oliveira, Lei LI
We are motivated by the premise that a language model is likely to identify verbatim excerpts from its training text.
1 code implementation • 31 Mar 2024 • Jingzhe Shi, Jialuo Li, Qinwei Ma, Zaiwen Yang, Huan Ma, Lei LI
We have conducted extensive experiments to validate the performance of our proposed CHOPS architecture using the CPHOS-dataset, with the aim of demonstrating how LLMs can enhance or serve as alternatives to human customer service.
no code implementations • NAACL 2018 • Jiawei Wu, Lei LI, William Yang Wang
However, the selection of samples in existing co-training methods is based on a predetermined policy, which ignores the sampling bias between the unlabeled and the labeled subsets, and fails to explore the data space.
no code implementations • 22 May 2017 • Wenqing Hu, Chris Junchi Li, Lei LI, Jian-Guo Liu
In addition, we discuss the effects of batch size for the deep neural networks, and we find that small batch size is helpful for SGD algorithms to escape unstable stationary points and sharp minimizers.
no code implementations • 1 Jul 2017 • Wenbo Hu, Lifeng Hua, Lei LI, Hang Su, Tian Wang, Ning Chen, Bo Zhang
This paper presents a Semantic Attribute Modulation (SAM) for language modeling and style variation.
no code implementations • 26 May 2017 • Guang Yang, Xiahai Zhuang, Habib Khan, Shouvik Haldar, Eva Nyktari, Lei LI, Rick Wage, Xujiong Ye, Greg Slabaugh, Raad Mohiaddin, Tom Wong, Jennifer Keegan, David Firmin
In this study, we proposed a novel fully automatic pipeline to achieve accurate and objective atrial scarring segmentation and assessment of LGE MRI scans for AF patients.
no code implementations • 7 Aug 2016 • Yanan Guo, Lei LI, Weifeng Liu, Jun Cheng, Dapeng Tao
Since human actions can be characterized by multiple feature representations extracted from Kinect and inertial sensors, multiview features must be encoded into a unified space optimal for human action recognition.
no code implementations • 29 Jul 2016 • Hanming Zhang, Liang Li, Kai Qiao, Linyuan Wang, Bin Yan, Lei LI, Guoen Hu
The qualitative and quantitative evaluations of the experimental results indicate that the proposed method shows stable and promising performance on artifact reduction and detail recovery for limited-angle tomography.
no code implementations • ACL 2016 • Zihang Dai, Lei LI, Wei Xu
We propose CFO, a Conditional Focused neural-network-based approach to answering factoid questions with knowledge bases.
no code implementations • 30 Jun 2016 • Yi Wu, Lei LI, Stuart Russell, Rastislav Bodik
A probabilistic program defines a probability measure over its semantic structures.
no code implementations • 29 Mar 2016 • Yusuf Bugra Erol, Yi Wu, Lei LI, Stuart Russell
Joint state and parameter estimation is a core problem for dynamic Bayesian networks.
no code implementations • 30 May 2015 • Liquan Qiu, Lianwen Jin, Ruifen Dai, Yuxiang Zhang, Lei LI
This paper presents an open source tool for testing the recognition accuracy of Chinese handwriting input methods.
no code implementations • 8 May 2013 • Yusuf Erol, Lei LI, Bharath Ramsundar, Stuart J. Russell
Drawing on an analogy to the extended Kalman filter, we develop and analyze, both theoretically and experimentally, a Taylor approximation to the parameter posterior that allows Storvik's method to be applied to a broader class of models.
no code implementations • 22 Oct 2018 • Lei Li, Fuping Wu, Guang Yang, Tom Wong, Raad Mohiaddin, David Firmin, Jenny Keegan, Lingchao Xu, Xiahai Zhuang
Late Gadolinium Enhancement Magnetic Resonance Imaging (LGE MRI) has emerged as a routine scan for patients with atrial fibrillation (AF).
no code implementations • 22 Oct 2018 • Fuping Wu, Lei LI, Guang Yang, Tom Wong, Raad Mohiaddin, David Firmin, Jennifer Keegan, Lingchao Xu, Xiahai Zhuang
We present a fully-automated segmentation and quantification of left atrial (LA) fibrosis and scars combining two cardiac MRIs: one is the target late gadolinium-enhanced (LGE) image, and the other is an anatomical MRI from the same acquisition session.
no code implementations • 31 Oct 2018 • Lei Li, Zhaoqiang Xia, Xiaoyue Jiang, Fabio Roli, Xiaoyi Feng
Face presentation attack detection (PAD) has become a thorny problem for biometric systems and numerous countermeasures have been proposed to address it.
no code implementations • IJCNLP 2019 • Mingxuan Wang, Jun Xie, Zhixing Tan, Jinsong Su, Deyi Xiong, Lei LI
In this study, we first investigate a novel capsule network with dynamic routing for linear-time Neural Machine Translation (NMT), referred to as CapsNMT.
no code implementations • 20 Nov 2018 • Lei Li, Changqing Zou, Youyi Zheng, Qingkun Su, Hongbo Fu, Chiew-Lan Tai
To bridge the gap between these two spaces in neural networks, we propose a neural line rasterization module to convert the vector sketch along with the attention estimated by RNN into a bitmap image, which is subsequently consumed by CNN.
no code implementations • WS 2017 • Lei Li, Liyuan Mao, Moye Chen
In this paper, multiple grammatical and semantic features are adopted for content linking and argument/sentiment labeling in online forums.
no code implementations • WS 2017 • Danchen Zhang, Daqing He, Sanqiang Zhao, Lei LI
Frequent diseases often have more training data, which helps their classification perform better than that of infrequent diseases.