no code implementations • 11 Nov 2024 • Mianqiu Huang, Xiaoran Liu, Shaojun Zhou, Mozhi Zhang, Chenkun Tan, Pengyu Wang, Qipeng Guo, Zhe Xu, Linyang Li, Zhikai Lei, Linlin Li, Qun Liu, Yaqian Zhou, Xipeng Qiu, Xuanjing Huang
With the development of large language models (LLMs), their sequence lengths continue to increase, drawing significant attention to long-context language models.
no code implementations • 20 Sep 2024 • Linlin Li, Steven X. Ding, Liutao Zhou, Maiying Zhong, Kaixiang Peng
Considering the limited capacity of standard observer-based detection and feedback control schemes in detecting and handling cyber-attacks, a modified configuration for cyber-physical systems is developed that transmits combinations of the input and output residuals instead of the input and output signals, which makes it easier to deal with both process faults and cyber-attacks.
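For intuition, a minimal sketch of the observer-based residual generation that such schemes build on; the plant matrices, observer gain, and alarm threshold below are all invented for illustration, not taken from the paper:

```python
import numpy as np

# Minimal Luenberger-observer residual generator (hypothetical 2-state plant).
# The residual r = y - C @ x_hat stays near zero under nominal operation and
# deviates under faults or attacks on the transmitted signals.
A = np.array([[0.9, 0.1], [0.0, 0.8]])   # state transition (assumed)
B = np.array([[0.0], [1.0]])             # input matrix (assumed)
C = np.array([[1.0, 0.0]])               # output matrix (assumed)
L = np.array([[0.5], [0.2]])             # observer gain (assumed stabilizing)

def residual_step(x_hat, u, y):
    """One observer update; returns the new state estimate and the residual."""
    r = y - C @ x_hat
    x_hat_next = A @ x_hat + B @ u + L @ r
    return x_hat_next, r

x_hat = np.zeros((2, 1))
for y, u in [(np.array([[0.1]]), np.array([[0.0]]))]:  # measurement stream
    x_hat, r = residual_step(x_hat, u, y)
    if np.linalg.norm(r) > 0.3:          # threshold chosen for illustration
        print("alarm: possible fault or cyber-attack")
```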
no code implementations • 4 Sep 2024 • Zhe Xu, Jiasheng Ye, Xiangyang Liu, Tianxiang Sun, Xiaoran Liu, Qipeng Guo, Linlin Li, Qun Liu, Xuanjing Huang, Xipeng Qiu
DetectiveQA focuses on evaluating the long-context reasoning ability of LLMs, which requires not only a full understanding of the context but also the extraction of important evidence from it and reasoning over that evidence to answer the given questions.
no code implementations • 21 Jul 2024 • Xiaoran Liu, Ruixiao Li, Qipeng Guo, Zhigeng Liu, Yuerong Song, Kai Lv, Hang Yan, Linlin Li, Qun Liu, Xipeng Qiu
The long-context capability of large language models (LLMs) has seen significant breakthroughs, but the maximum supported context length remains a critical bottleneck limiting their practical applications.
no code implementations • 6 Sep 2023 • Steven X. Ding, Linlin Li
It is demonstrated that the projection onto the manifold of uncertainty data, together with the correspondingly defined Bregman divergence, is also capable of fault detection.
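For reference, the Bregman divergence induced by a strictly convex, differentiable generator $\phi$ is

```latex
D_{\phi}(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y),\, x - y \rangle ,
```

so that choosing $\phi(x) = \|x\|^2$ recovers the squared Euclidean distance $\|x - y\|^2$; the paper's specific projection onto the manifold of uncertainty data is not reproduced here.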
no code implementations • 4 Apr 2023 • Wuwei Ren, Siyuan Shen, Linlin Li, Shengyu Gao, Yuehan Wang, Liangtao Gu, Shiying Li, Xingjun Zhu, Jiahua Jiang, Jingyi Yu
Light scattering poses a major obstacle to imaging objects seated deep in turbid media, such as biological tissues and foggy air.
no code implementations • 18 Dec 2022 • Ning Wang, Jiangrong Xie, Hang Luo, Qinglin Cheng, Jihao Wu, Mingbo Jia, Linlin Li
On the other hand, we transfer the image-text retrieval design of CLIP to image captioning scenarios by devising a novel visual concept extractor and a cross-modal modulator.
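As background, a minimal sketch of the off-the-shelf CLIP image-text scoring that such a concept extractor can build on; the checkpoint name is the public Hugging Face release, the image path and concept list are hypothetical, and this is not the paper's extractor or modulator:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Score candidate visual concepts against an image with off-the-shelf CLIP.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")             # hypothetical input image
concepts = ["a dog", "a beach", "a red car"]  # hypothetical concept candidates

inputs = processor(text=concepts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape: (1, num_concepts)
print(dict(zip(concepts, logits.softmax(dim=-1)[0].tolist())))
```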
no code implementations • 4 Dec 2022 • Ning Wang, Jiahao Xie, Jihao Wu, Mingbo Jia, Linlin Li
Despite the remarkable progress of image captioning, existing captioners typically lack the ability to control the captions they generate, e.g., to describe the image in a rough or detailed manner, or from a factual or emotional point of view.
no code implementations • 2 Aug 2022 • Linlin Li, Steven X. Ding, Ketian Liang, Zhiwen Chen, Ting Xue
The main effort is devoted to developing a control-theoretic solution to the optimal fault detection problem, in which a concept analogous to the minimal sufficient statistic, the so-called lossless information compression, is introduced and proven for dynamic systems and fault detection specifications.
no code implementations • 16 Feb 2022 • Steven X. Ding, Linlin Li, Tianyu Liu
In this paper, we propose a new paradigm of fault diagnosis in dynamic systems as an alternative to the well-established observer-based framework.
1 code implementation • 22 Dec 2021 • Xiao Xu, Libo Qin, Kaiji Chen, Guoxing Wu, Linlin Li, Wanxiang Che
Current research on spoken language understanding (SLU) is heavily limited to a simple setting: plain text-based SLU, which takes the user utterance as input and generates its corresponding semantic frames (e.g., intent and slots; a minimal frame example follows below).
Ranked #1 on Semantic Frame Parsing on ProSLU
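As a toy illustration of a semantic frame; the utterance, intent, and slot names are invented and are not taken from ProSLU:

```python
# A semantic frame pairs an intent with slot values; all names here are
# hypothetical, not from the ProSLU dataset.
frame = {
    "utterance": "play some jazz in the living room",
    "intent": "PlayMusic",
    "slots": {"genre": "jazz", "room": "living room"},
}
```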
1 code implementation • EMNLP 2021 • Baojun Wang, Zhao Zhang, Kun Xu, Guang-Yuan Hao, Yuyang Zhang, Lifeng Shang, Linlin Li, Xiao Chen, Xin Jiang, Qun Liu
Incorporating lexical knowledge into deep learning models has proven very effective for sequence labeling tasks.
no code implementations • 11 Mar 2021 • Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, Qun Liu
Multilingual pre-trained language models (e.g., mBERT, XLM, and XLM-R) have shown impressive performance on cross-lingual natural language understanding tasks.
no code implementations • 27 Feb 2021 • Steven X. Ding, Linlin Li, Dong Zhao, Chris Louen, Tianyu Liu
It is demonstrated, within the unified framework of control and detection, that all kernel attacks can be structurally detected when not only the observer-based residual but also control-signal-based residuals are generated and used for detection.
no code implementations • 11 Dec 2020 • Xiaoqi Jiao, Huating Chang, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, Qun Liu
Comprehensive experiments on the evaluation benchmarks demonstrate that 1) the layer mapping strategy has a significant effect on task-agnostic BERT distillation, and different layer mappings can result in quite different performances; 2) the optimal layer mapping strategy found by the proposed search process consistently outperforms the heuristic ones; 3) with the optimal layer mapping, our student model achieves state-of-the-art performance on the GLUE tasks.
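For intuition, a sketch of the uniform layer-mapping heuristic that such a search is typically compared against; the stride convention here is an assumption, not the paper's exact baseline:

```python
def uniform_layer_mapping(num_student_layers: int, num_teacher_layers: int):
    """Map each student layer to a teacher layer at a fixed stride,
    e.g. 4 student vs. 12 teacher layers -> teacher layers [3, 6, 9, 12]."""
    stride = num_teacher_layers // num_student_layers
    return [(i + 1) * stride for i in range(num_student_layers)]
```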
11 code implementations • Findings of the Association for Computational Linguistics 2020 • Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, Qun Liu
To accelerate inference and reduce model size while maintaining accuracy, we first propose a novel Transformer distillation method that is specially designed for knowledge distillation (KD) of Transformer-based models (a minimal soft-target KD sketch follows below).
Ranked #1 on Natural Language Inference on MultiNLI Dev
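For intuition, a minimal sketch of the classic soft-target distillation loss; this is a simplification, since the paper's Transformer distillation additionally matches embeddings, hidden states, and attention matrices:

```python
import torch
import torch.nn.functional as F

def soft_target_kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Classic soft-target KD loss; the paper's method additionally distills
    attention matrices and hidden states, which this sketch omits."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * t * t

loss = soft_target_kd_loss(torch.randn(8, 3), torch.randn(8, 3))  # toy check
```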
1 code implementation • ACL 2019 • Ruixue Ding, Pengjun Xie, Xiaoyan Zhang, Wei Lu, Linlin Li, Luo Si
Gazetteers were shown to be useful resources for named entity recognition (NER).
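To make the resource concrete, a naive hard-match gazetteer lookup; the cited paper integrates gazetteer evidence into the NER model itself rather than string-matching like this:

```python
def gazetteer_matches(tokens, gazetteer, max_span=5):
    """Greedy longest-match lookup of token spans against a gazetteer set."""
    matches, i = [], 0
    while i < len(tokens):
        for j in range(min(len(tokens), i + max_span), i, -1):
            span = " ".join(tokens[i:j])
            if span in gazetteer:       # prefer the longest matching span
                matches.append((i, j, span))
                i = j
                break
        else:
            i += 1                      # no match starting at position i
    return matches

print(gazetteer_matches("he visited New York City".split(),
                        {"New York", "New York City"}))
# -> [(2, 5, 'New York City')]
```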
no code implementations • SEMEVAL 2019 • Xiaobin Wang, Chunping Ma, Huafei Zheng, Chu Liu, Pengjun Xie, Linlin Li, Luo Si
This paper describes DM-NLP's system for the toponym resolution task at SemEval-2019.
no code implementations • NAACL 2019 • Hao Li, Wei Lu, Pengjun Xie, Linlin Li
This paper introduces a new task, Chinese address parsing: mapping Chinese addresses into semantically meaningful chunks.
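For example, a Chinese address decomposes into chunks such as province, city, district, road, and house number; the segmentation and label names below are illustrative, not the paper's exact tag set:

```python
# Hypothetical chunking of a Chinese address (labels are illustrative).
address = "浙江省杭州市西湖区文一西路969号"
chunks = [("浙江省", "province"), ("杭州市", "city"), ("西湖区", "district"),
          ("文一西路", "road"), ("969号", "house_number")]
```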
no code implementations • NAACL 2019 • Zhanming Jie, Pengjun Xie, Wei Lu, Ruixue Ding, Linlin Li
Supervised approaches to named entity recognition (NER) are largely developed based on the assumption that the training data is fully annotated with named entity information.
1 code implementation • EMNLP 2018 • Zuchao Li, Shexia He, Jiaxun Cai, Zhuosheng Zhang, Hai Zhao, Gongshen Liu, Linlin Li, Luo Si
Semantic role labeling (SRL) aims to recognize the predicate-argument structure of a sentence.
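For example, a PropBank-style analysis of a toy sentence; the sentence, spans, and labels are illustrative:

```python
# Predicate-argument structure for a toy sentence (PropBank-style labels).
sentence = "The cat chased the mouse"
srl = {
    "predicate": "chased",
    "arguments": {"ARG0": "The cat",     # agent
                  "ARG1": "the mouse"},  # patient
}
```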
no code implementations • WS 2018 • Chen Li, Junpei Zhou, Zuyi Bao, Hengyou Liu, Guangwei Xu, Linlin Li
In the correction stage, candidates were generated by the three GEC models and then merged to produce the final corrections for the M and S error types.
no code implementations • SEMEVAL 2018 • Chunping Ma, Huafei Zheng, Pengjun Xie, Chen Li, Linlin Li, Luo Si
This paper describes our submissions for SemEval-2018 Task 8: Semantic Extraction from CybersecUrity REports using NLP.
no code implementations • SEMEVAL 2018 • Wei Qiu, Mosha Chen, Linlin Li, Luo Si
Hypernym discovery aims to find the hypernym word sets for a given hyponym word from an appropriate corpus (a pattern-based baseline is sketched below).
Ranked #3 on Hypernym Discovery on General
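As a point of contrast, a classic Hearst-pattern heuristic for harvesting hypernym candidates; this is a generic baseline sketch, not the system described in the paper:

```python
import re

# "X such as Y" weakly suggests X is a hypernym of Y (Hearst, 1992).
PATTERN = re.compile(r"(\w+) such as (\w+)")

def hearst_candidates(corpus_lines):
    """Harvest (hyponym, hypernym) candidate pairs from raw text lines."""
    return [(hypo, hyper)
            for line in corpus_lines
            for hyper, hypo in PATTERN.findall(line)]

print(hearst_candidates(["animals such as dogs are loyal"]))
# -> [('dogs', 'animals')]
```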
no code implementations • IJCNLP 2017 • Yi Yang, Pengjun Xie, Jun Tao, Guangwei Xu, Linlin Li, Luo Si
This paper introduces the Alibaba NLP team's system for IJCNLP 2017 shared task No.