1 code implementation • ACL 2022 • Yikang Shen, Shawn Tan, Alessandro Sordoni, Peng Li, Jie Zhou, Aaron Courville
We introduce a new model, the Unsupervised Dependency Graph Network (UDGN), that can induce dependency structures from raw corpora and the masked language modeling task.
no code implementations • EMNLP 2020 • Xiuyi Chen, Fandong Meng, Peng Li, Feilong Chen, Shuang Xu, Bo Xu, Jie Zhou
Here, we deal with these issues on two aspects: (1) We enhance the prior selection module with the necessary posterior information obtained from the specially designed Posterior Information Prediction Module (PIPM); (2) We propose a Knowledge Distillation Based Training Strategy (KDBTS) to train the decoder with the knowledge selected from the prior distribution, removing the exposure bias of knowledge selection.
1 code implementation • EMNLP 2021 • Yuan Yao, Jiaju Du, Yankai Lin, Peng Li, Zhiyuan Liu, Jie Zhou, Maosong Sun
Existing relation extraction (RE) methods typically focus on extracting relational facts between entity pairs within single sentences or documents.
1 code implementation • COLING 2022 • Jiaxin Mi, Po Hu, Peng Li
To this end, we propose a simple yet effective model named DualGAT (Dual Relational Graph Attention Networks), which exploits the complementary nature of syntactic and semantic relations to alleviate the problem.
1 code implementation • Findings (ACL) 2022 • Xin Lv, Yankai Lin, Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Peng Li, Jie Zhou
In recent years, pre-trained language models (PLMs) have been shown to capture factual knowledge from massive texts, which encourages the proposal of PLM-based knowledge graph completion (KGC) models.
no code implementations • 26 Mar 2024 • Haoran Liu, Mingzhe Liu, Peng Li, Jiahui Wu, Xin Jiang, Zhuo Zuo, Bingqi Liu
This process randomly closes some neural connections in the RCNN model, realized by the random inactivation weight matrix of link input.
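The "random inactivation weight matrix" described above resembles dropout applied directly to a layer's connection weights. The sketch below is a minimal illustration of that idea using NumPy; the function name, drop probability, and rescaling are my assumptions, not the paper's actual implementation.

```python
import numpy as np

def random_inactivation(weights, drop_prob=0.2, rng=None):
    """Randomly zero out a fraction of connection weights (illustrative sketch).

    Each weight survives with probability 1 - drop_prob; surviving weights
    are rescaled so the expected layer output stays unchanged.
    """
    rng = rng or np.random.default_rng()
    mask = rng.random(weights.shape) >= drop_prob  # True = connection kept
    return weights * mask / (1.0 - drop_prob)

# Example: a 4x3 weight matrix with ~20% of connections closed per call
w = np.ones((4, 3))
w_dropped = random_inactivation(w, drop_prob=0.2, rng=np.random.default_rng(0))
```

Each call draws a fresh mask, so different connections are closed on different forward passes, which is what gives the regularizing effect.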
no code implementations • 21 Mar 2024 • Zonghan Yang, Peng Li, Ming Yan, Ji Zhang, Fei Huang, Yang Liu
In WebShop, the 1-shot performance of the A$^3$T agent matches the human average, and 4 rounds of iterative refinement bring its performance close to that of human experts.
2 code implementations • 12 Mar 2024 • Zhicheng Guo, Sijie Cheng, Hao Wang, Shihao Liang, Yujia Qin, Peng Li, Zhiyuan Liu, Maosong Sun, Yang Liu
The virtual API server contains a caching system and API simulators which are complementary to alleviate the change in API status.
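The caching-plus-simulator design mentioned above can be sketched as a tiny wrapper: serve a stored response when one exists, otherwise fall back to a simulator. All class and function names here are hypothetical, chosen only to illustrate the pattern.

```python
class CachedAPI:
    """Toy cache in the spirit of a virtual API server: return a cached
    response for a repeated call, otherwise query an API simulator."""

    def __init__(self, simulator):
        self.cache = {}
        self.simulator = simulator

    def call(self, endpoint, **params):
        # Normalize parameters into a hashable cache key
        key = (endpoint, tuple(sorted(params.items())))
        if key not in self.cache:
            self.cache[key] = self.simulator(endpoint, **params)
        return self.cache[key]

api = CachedAPI(lambda ep, **p: f"simulated:{ep}")
first = api.call("weather", city="Paris")    # goes to the simulator
second = api.call("weather", city="Paris")   # served from the cache
```

The point of the combination is stability: cached responses keep evaluation reproducible even when the simulated API's behavior would otherwise drift.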
no code implementations • 11 Mar 2024 • Yuanhang Zheng, Peng Li, Wei Liu, Yang Liu, Jian Luan, Bin Wang
Specifically, our proposed ToolRerank includes Adaptive Truncation, which truncates the retrieval results related to seen and unseen tools at different positions, and Hierarchy-Aware Reranking, which makes retrieval results more concentrated for single-tool queries and more diverse for multi-tool queries.
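The adaptive-truncation idea can be illustrated with a toy cutoff rule: keep fewer candidates when the top hit is a tool seen during training (retrieval is presumably reliable) and more when it is unseen. The function name, cutoffs, and decision rule below are my simplifications, not the paper's actual settings.

```python
def adaptive_truncate(results, seen_tools, k_seen=3, k_unseen=8):
    """Illustrative sketch of adaptive truncation: truncate the ranked
    retrieval list at a shallower depth for seen tools than for unseen ones."""
    cutoff = k_seen if results and results[0] in seen_tools else k_unseen
    return results[:cutoff]

ranked = ["search", "translate", "weather", "stocks", "maps"]
short_list = adaptive_truncate(ranked, seen_tools={"search"})  # seen: cut at 3
long_list = adaptive_truncate(ranked, seen_tools={"ocr"})      # unseen: keep more
```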
1 code implementation • 27 Feb 2024 • Wenqi Zhang, Ke Tang, Hai Wu, Mengna Wang, Yongliang Shen, Guiyang Hou, Zeqi Tan, Peng Li, Yueting Zhuang, Weiming Lu
Large Language Models exhibit robust problem-solving capabilities for diverse tasks.
no code implementations • 27 Feb 2024 • Xiaolong Wang, Yile Wang, Yuanchi Zhang, Fuwen Luo, Peng Li, Maosong Sun, Yang Liu
Based on the characteristics of the tasks and the strong dialogue-generation capabilities of LLMs, we propose RiC (Reasoning in Conversation), a method that focuses on solving subjective tasks through dialogue simulation.
1 code implementation • 25 Feb 2024 • Yuanhang Zheng, Peng Li, Ming Yan, Ji Zhang, Fei Huang, Yang Liu
Despite intensive efforts devoted to tool learning, the problem of budget-constrained tool learning, which focuses on resolving user queries within a specific budget constraint, has been widely overlooked.
no code implementations • 23 Feb 2024 • Xiaolong Wang, Yile Wang, Sijie Cheng, Peng Li, Yang Liu
Recent work has made a preliminary attempt to use large language models (LLMs) to solve the stance detection task, showing promising results.
no code implementations • 21 Feb 2024 • Fuwen Luo, Chi Chen, Zihao Wan, Zhaolu Kang, Qidong Yan, Yingjie Li, Xiaolong Wang, Siyu Wang, Ziyue Wang, Xiaoyue Mi, Peng Li, Ning Ma, Maosong Sun, Yang Liu
Multimodal large language models (MLLMs) have demonstrated promising results in a variety of tasks that combine vision and language.
no code implementations • 20 Feb 2024 • An Liu, Zonghan Yang, Zhenhe Zhang, Qingyuan Hu, Peng Li, Ming Yan, Ji Zhang, Fei Huang, Yang Liu
While Large language models (LLMs) have demonstrated considerable capabilities across various natural language tasks, they often fall short of the performance achieved by domain-specific state-of-the-art models.
no code implementations • 20 Feb 2024 • Chi Chen, Yiyang Du, Zheng Fang, Ziyue Wang, Fuwen Luo, Peng Li, Ming Yan, Ji Zhang, Fei Huang, Maosong Sun, Yang Liu
In this paper, we propose a new paradigm through the model composition of existing MLLMs to create a new model that retains the modal understanding capabilities of each original model.
1 code implementation • 19 Feb 2024 • Zijun Liu, Boqun Kou, Peng Li, Ming Yan, Ji Zhang, Fei Huang, Yang Liu
Although Large Language Models (LLMs) have demonstrated strong performance on a wide range of tasks, they still face reliability challenges such as hallucination.
1 code implementation • 19 Feb 2024 • Xuanyu Lei, Zonghan Yang, Xinrui Chen, Peng Li, Yang Liu
State-of-the-art Large Multi-Modal Models (LMMs) have demonstrated exceptional capabilities in vision-language tasks.
1 code implementation • 19 Feb 2024 • Ziyue Wang, Chi Chen, Yiqi Zhu, Fuwen Luo, Peng Li, Ming Yan, Ji Zhang, Fei Huang, Maosong Sun, Yang Liu
With the bloom of Large Language Models (LLMs), Multimodal Large Language Models (MLLMs) that incorporate LLMs with pre-trained vision models have recently demonstrated impressive performance across diverse vision-language tasks.
1 code implementation • 19 Feb 2024 • Yuanchi Zhang, Yile Wang, Zijun Liu, Shuo Wang, Xiaolong Wang, Peng Li, Maosong Sun, Yang Liu
While large language models (LLMs) have been pre-trained on multilingual corpora, their performance still lags behind in most languages compared to a few resource-rich languages.
no code implementations • 12 Feb 2024 • Zonghan Yang, An Liu, Zijun Liu, Kaiming Liu, Fangzhou Xiong, Yile Wang, Zeyuan Yang, Qingyuan Hu, Xinrui Chen, Zhenhe Zhang, Fuwen Luo, Zhicheng Guo, Peng Li, Yang Liu
We also conduct proof-of-concept studies by introducing realistic features to WebShop, including user profiles to demonstrate intentions, personalized reranking for complex environmental dynamics, and runtime cost statistics to reflect self-constraints.
1 code implementation • 22 Jan 2024 • Yile Wang, Sijie Cheng, Zixin Sun, Peng Li, Yang Liu
We propose symbol-to-language (S2L), a tuning-free method that enables large language models to solve symbol-related problems with information expressed in natural language.
no code implementations • 19 Jan 2024 • Zewen Chen, Juan Wang, Bing Li, Chunfeng Yuan, Weiming Hu, Junxian Liu, Peng Li, Yan Wang, Youqun Zhang, Congxuan Zhang
Due to the subjective nature of image quality assessment (IQA), assessing which image has better quality among a sequence of images is more reliable than assigning an absolute mean opinion score for an image.
2 code implementations • 10 Jan 2024 • Yuanchun Li, Hao Wen, Weijun Wang, Xiangyu Li, Yizhen Yuan, Guohong Liu, Jiacheng Liu, Wenxing Xu, Xiang Wang, Yi Sun, Rui Kong, Yile Wang, Hanfei Geng, Jian Luan, Xuefeng Jin, Zilong Ye, Guanjing Xiong, Fan Zhang, Xiang Li, Mengwei Xu, Zhijun Li, Peng Li, Yang Liu, Ya-Qin Zhang, Yunxin Liu
Next, we discuss several key challenges to achieve intelligent, efficient and secure Personal LLM Agents, followed by a comprehensive survey of representative solutions to address these challenges.
no code implementations • 25 Dec 2023 • Yifan Lu, Ziqi Zhang, Chunfeng Yuan, Peng Li, Yan Wang, Bing Li, Weiming Hu
Each caption in the set is attached to a concept combination indicating the primary semantic content of the caption and facilitating element alignment in set prediction.
no code implementations • 21 Dec 2023 • Zhongyang Guo, Guanran Jiang, Zhongdan Zhang, Peng Li, Zhefeng Wang, Yinchun Wang
This paper introduces "Shai", a 10B-level large language model specifically designed for the asset management industry, built upon an open-source foundational model.
no code implementations • 8 Dec 2023 • Xin Li, Peng Li, Zeyong Wei, Zhe Zhu, Mingqiang Wei, Junhui Hou, Liangliang Nan, Jing Qin, Haoran Xie, Fu Lee Wang
By performing cross-modal interaction, Cross-BERT can smoothly reconstruct the masked tokens during pretraining, leading to notable performance enhancements for downstream tasks.
no code implementations • 29 Nov 2023 • Xingqun Qi, Jiahao Pan, Peng Li, Ruibin Yuan, Xiaowei Chi, Mengfei Li, Wenhan Luo, Wei Xue, Shanghang Zhang, Qifeng Liu, Yike Guo
In addition, the lack of large-scale available datasets with emotional transition speech and corresponding 3D human gestures also limits the addressing of this task.
no code implementations • 29 Nov 2023 • Xiaoyue Mi, Fan Tang, Yepeng Weng, Danding Wang, Juan Cao, Sheng Tang, Peng Li, Yang Liu
Despite its effectiveness in improving the robustness of neural networks, adversarial training suffers from a natural accuracy degradation problem, i.e., accuracy on natural samples is significantly reduced.
no code implementations • 29 Nov 2023 • Xiaoyue Mi, Fan Tang, Zonghan Yang, Danding Wang, Juan Cao, Peng Li, Yang Liu
Despite the remarkable advances that have been made in continual learning, the adversarial vulnerability of such methods has not been fully discussed.
1 code implementation • 27 Nov 2023 • Sijie Cheng, Zhicheng Guo, Jingwen Wu, Kechen Fang, Peng Li, Huaping Liu, Yang Liu
However, the capability of VLMs to "think" from a first-person perspective, a crucial attribute for advancing autonomous agents and robotics, remains largely unexplored.
1 code implementation • 20 Nov 2023 • Ziyue Wang, Chi Chen, Peng Li, Yang Liu
Large Language Models (LLMs) demonstrate impressive reasoning ability and the maintenance of world knowledge not only in natural language tasks, but also in some vision-language tasks such as open-domain knowledge-based visual question answering (OK-VQA).
no code implementations • 15 Nov 2023 • Boxun Xu, Hejia Geng, Yuxuan Yin, Peng Li
We introduce DISTA, a Denoising Spiking Transformer with Intrinsic Plasticity and SpatioTemporal Attention, designed to maximize the spatiotemporal computational prowess of spiking neurons, particularly for vision applications.
1 code implementation • 24 Oct 2023 • Zeyuan Yang, Peng Li, Yang Liu
Large Language Models (LLMs) have showcased impressive performance.
no code implementations • 19 Oct 2023 • Yu Wang, Yuxuan Yin, Karthik Somayaji Nanjangud Suryanarayana, Jan Drgona, Malachi Schram, Mahantesh Halappanavar, Frank Liu, Peng Li
Modeling dynamical systems is crucial for a wide range of tasks, but it remains challenging due to complex nonlinear dynamics, limited observations, or lack of prior knowledge.
no code implementations • 13 Oct 2023 • Peng Li, Yeye He, Dror Yashar, Weiwei Cui, Song Ge, Haidong Zhang, Danielle Rifinski Fainman, Dongmei Zhang, Surajit Chaudhuri
Language models, such as GPT-3.5 and ChatGPT, demonstrate remarkable abilities to follow diverse human instructions and perform a wide range of tasks.
no code implementations • 9 Oct 2023 • Peng Li, Yuping Ji, Yue Hu
To fill this gap, we propose a novel MRF reconstruction framework based on manifold structured data priors.
no code implementations • 8 Oct 2023 • Yile Wang, Peng Li, Maosong Sun, Yang Liu
Large language models (LLMs) have shown superior performance without task-specific fine-tuning.
1 code implementation • 3 Oct 2023 • Zijun Liu, Yanzhe Zhang, Peng Li, Yang Liu, Diyi Yang
We further design an automatic agent team optimization algorithm based on an unsupervised metric termed $\textit{Agent Importance Score}$, enabling the selection of best agents based on the contribution each agent makes.
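Once each agent has an importance score, team selection reduces to picking the top contributors. The snippet below is a minimal sketch of that final step only; the scoring itself is the paper's unsupervised $\textit{Agent Importance Score}$ and is not reproduced here, and all names and values are made up.

```python
def select_agents(scores, k=2):
    """Pick the k agents with the highest importance score (toy sketch)."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Hypothetical per-agent contribution scores
scores = {"solver": 0.9, "critic": 0.7, "searcher": 0.4}
team = select_agents(scores)
```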
no code implementations • 30 Sep 2023 • Hejia Geng, Boxun Xu, Peng Li
Large Language Models (LLMs) have demonstrated impressive inferential capabilities, with numerous research endeavors devoted to enhancing this capacity through prompting.
no code implementations • 9 Sep 2023 • Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence.
no code implementations • 7 Sep 2023 • Fahao Chen, Peng Li, Celimuge Wu
Although DGNNs have recently received considerable attention from the AI community and various DGNN models have been proposed, building a distributed system for efficient DGNN training remains challenging.
1 code implementation • 25 Aug 2023 • Chi Chen, Ruoyu Qin, Fuwen Luo, Xiaoyue Mi, Peng Li, Maosong Sun, Yang Liu
However, existing visual instruction tuning methods only utilize image-language instruction data to align the language and image modalities, lacking a more fine-grained cross-modal alignment.
no code implementations • 24 Aug 2023 • Karthik Somayaji NS, Yu Wang, Malachi Schram, Jan Drgona, Mahantesh Halappanavar, Frank Liu, Peng Li
Our work proposes to enhance the resilience of RL agents when faced with very rare and risky events by focusing on refining the predictions of the extreme values predicted by the state-action value function distribution.
no code implementations • 20 Aug 2023 • Hejia Geng, Peng Li
Spiking neural networks (SNNs) offer promise for efficient and powerful neurally inspired computation.
1 code implementation • 20 Aug 2023 • Peng Li, Zhiyi Chen, Xu Chu, Kexin Rong
Data preprocessing is a crucial step in the machine learning process that transforms raw data into a more usable format for downstream ML models.
1 code implementation • 18 Aug 2023 • Yixuan Li, Huaping Liu, Qiang Jin, Miaomiao Cai, Peng Li
Optical Music Recognition (OMR) is an important technology in music and has been researched for a long time.
1 code implementation • 27 Jul 2023 • Peng Li, Yeye He, Cong Yan, Yue Wang, Surajit Chaudhuri
Relational tables, where each row corresponds to an entity and each column corresponds to an attribute, have been the standard for tables in relational databases.
1 code implementation • 12 Jul 2023 • Yuzhuang Xu, Shuo Wang, Peng Li, Xuebo Liu, Xiaolong Wang, Weidong Liu, Yang Liu
Although neural machine translation (NMT) models perform well in the general domain, it remains rather challenging to control their generation behavior to satisfy the requirement of different users.
no code implementations • 30 Jun 2023 • Takuma Yoneda, Jiading Fang, Peng Li, Huanyu Zhang, Tianchong Jiang, Shengjie Lin, Ben Picker, David Yunis, Hongyuan Mei, Matthew R. Walter
In this paper, we explore a new dimension in which large language models may benefit robotics planning.
1 code implementation • 15 Jun 2023 • Qinhong Zhou, Zonghan Yang, Peng Li, Yang Liu
By combining the theoretical and empirical estimations of the decision distributions together, the estimation of logits can be successfully reduced to a simple root-finding problem.
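"A simple root-finding problem" of the kind the abstract mentions can be solved with textbook one-dimensional methods. The bisection sketch below is a generic stand-in; the actual objective solved in the paper is not reproduced here.

```python
def bisect_root(f, lo, hi, tol=1e-8):
    """Plain bisection: find x with f(x) ~ 0 on [lo, hi], assuming f changes
    sign on the interval. Halves the bracket until it is narrower than tol."""
    assert f(lo) * f(hi) <= 0, "root must be bracketed"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid   # root lies in the left half
        else:
            lo = mid   # root lies in the right half
    return (lo + hi) / 2

# Example: solve x^2 - 2 = 0 on [0, 2]
root = bisect_root(lambda x: x**2 - 2, 0.0, 2.0)
```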
1 code implementation • 5 Jun 2023 • Fengran Mo, Jian-Yun Nie, Kaiyu Huang, Kelong Mao, Yutao Zhu, Peng Li, Yang Liu
An effective way to improve retrieval effectiveness is to expand the current query with historical queries.
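A naive form of expanding the current query with historical queries is simple concatenation of recent turns, shown below as a toy stand-in for the selection strategy actually studied in the paper; the function name and history window are my assumptions.

```python
def expand_query(current, history, max_history=3):
    """Naive conversational query expansion: prepend the most recent
    historical queries so the retriever sees the dialogue context."""
    context = history[-max_history:]
    return " ".join(context + [current])

history = ["who wrote hamlet", "when was it written"]
expanded = expand_query("where did he live", history)
```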
1 code implementation • 2 Jun 2023 • Zonghan Yang, Peng Li, Tianyu Pang, Yang Liu
To this end, we interpret DEQs through the lens of neural dynamics and find that AT under-regulates intermediate states.
1 code implementation • 28 May 2023 • Zhicheng Guo, Sijie Cheng, Yile Wang, Peng Li, Yang Liu
There are two main challenges to leveraging retrieval-augmented methods for NKI tasks: 1) the demand for diverse relevance score functions and 2) the dilemma between training cost and task performance.
1 code implementation • 28 May 2023 • Zhengyan Zhang, Zhiyuan Zeng, Yankai Lin, Huadong Wang, Deming Ye, Chaojun Xiao, Xu Han, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou
Experimental results on three knowledge-driven NLP tasks show that existing injection methods are not suitable for the new paradigm, while map-tuning effectively improves the performance of downstream models.
1 code implementation • 26 May 2023 • Haoran Liu, Peng Li, Ming-Zhe Liu, Kai-Ming Wang, Zhuo Zuo, Bing-Qi Liu
This study introduces the Tempotron, a powerful classifier based on a third-generation neural network model, for pulse shape discrimination.
no code implementations • 24 May 2023 • Runxi Liu, Peng Li, Haoran Liu
In addition to the pulse signals, this dataset includes the source code for all the aforementioned pulse shape discrimination methods.
no code implementations • 24 May 2023 • Chi Chen, Peng Li, Maosong Sun, Yang Liu
Weakly supervised vision-and-language pre-training (WVLP), which learns cross-modal representations with limited cross-modal supervision, has been shown to effectively reduce the data cost of pre-training while maintaining decent performance on downstream tasks.
1 code implementation • 9 May 2023 • Peng Li, Tianxiang Sun, Qiong Tang, Hang Yan, Yuanbin Wu, Xuanjing Huang, Xipeng Qiu
A common practice is to recast the task into a text-to-text format such that generative LLMs of natural language (NL-LLMs) like GPT-3 can be prompted to solve it.
no code implementations • 4 May 2023 • Yuxuan Yin, Yu Wang, Peng Li
$\texttt{TSBO}$ incorporates a teacher model, an unlabeled data sampler, and a student model.
no code implementations • 4 May 2023 • Yuanhang Zheng, Zhixing Tan, Peng Li, Yang Liu
Black-box prompt tuning uses derivative-free optimization algorithms to learn prompts in low-dimensional subspaces instead of back-propagating through the network of Large Language Models (LLMs).
no code implementations • CVPR 2023 • Ruichen Zheng, Peng Li, Haoqian Wang, Tao Yu
Detailed 3D reconstruction and photo-realistic relighting of digital humans are essential for various applications.
no code implementations • 21 Apr 2023 • Chengyu Zheng, Peng Li, Xiao-Ping Zhang, Xuequan Lu, Mingqiang Wei
The IS is designed to simulate the detection procedure of human recognition for identifying transparent glass by global context and edge information.
no code implementations • 28 Mar 2023 • Yunfeng Hou, Ching-Yen Weng, Peng Li
However, new challenges arise for sensor activations in networked discrete-event systems, where observation delays and control delays exist between the sensor systems and the agent.
no code implementations • 16 Mar 2023 • Ning Qi, Peng Li, Lin Cheng, Ziyi Zhang, Wenrui Huang, Weiwei Yang
Energy storage (ES) and virtual energy storage (VES) are key components to realizing power system decarbonization.
no code implementations • 7 Mar 2023 • Zhiqiang Zhou, Chaoli Zhang, Lingna Ma, Jing Gu, Huajie Qian, Qingsong Wen, Liang Sun, Peng Li, Zhimin Tang
This paper discusses horizontal POD resources management in Alibaba Cloud Container Services with a newly deployed AI algorithm framework named AHPA -- the adaptive horizontal pod auto-scaling system.
no code implementations • 3 Feb 2023 • Zihu Wang, Yu Wang, Hanbin Hu, Peng Li
Contrastive learning demonstrates great promise for representation learning.
no code implementations • 28 Jan 2023 • Zeyuan Yang, Zonghan Yang, Peng Li, Yang Liu
The basic idea is to adopt a restricted orthogonal constraint allowing parameters optimized in the direction oblique to the whole frozen space to facilitate forward knowledge transfer while consolidating previous knowledge.
no code implementations • 25 Jan 2023 • Wenkai Yang, Yankai Lin, Guangxiang Zhao, Peng Li, Jie Zhou, Xu Sun
Federated Learning has become a widely-used framework which allows learning a global model on decentralized local datasets under the condition of protecting local data privacy.
1 code implementation • 19 Dec 2022 • Xuancheng Huang, Zijun Liu, Peng Li, Tao Li, Maosong Sun, Yang Liu
Recently, multi-aspect controllable text generation that controls the generated text in multiple aspects (e.g., sentiment, topic, and keywords) has attracted increasing attention.
1 code implementation • 18 Dec 2022 • Yuanchi Zhang, Peng Li, Maosong Sun, Yang Liu
While many parallel corpora are not publicly accessible for data copyright, data privacy and competitive differentiation reasons, trained translation models are increasingly available on open platforms.
no code implementations • 1 Dec 2022 • Yukun Yang, Peng Li
Gradient-based first-order adaptive optimization methods such as the Adam optimizer are prevalent in training artificial neural networks, achieving state-of-the-art results.
1 code implementation • 14 Nov 2022 • Xiaozhi Wang, Yulin Chen, Ning Ding, Hao Peng, Zimu Wang, Yankai Lin, Xu Han, Lei Hou, Juanzi Li, Zhiyuan Liu, Peng Li, Jie Zhou
It contains 103,193 event coreference chains, 1,216,217 temporal relations, 57,992 causal relations, and 15,841 subevent relations, which is larger than existing datasets of all the ERE tasks by at least an order of magnitude.
no code implementations • 29 Oct 2022 • Zhiheng Hu, Yongzhen Wang, Peng Li, Jie Qin, Haoran Xie, Mingqiang Wei
First, to maintain small targets in deep layers, we develop a multi-scale nested interaction module to explore a wide range of context information.
no code implementations • 28 Oct 2022 • Zhaowei Chen, Peng Li, Zeyong Wei, Honghua Chen, Haoran Xie, Mingqiang Wei, Fu Lee Wang
We propose GeoGCN, a novel geometric dual-domain graph convolution network for point cloud denoising (PCD).
1 code implementation • 18 Oct 2022 • Lan Jiang, Hao Zhou, Yankai Lin, Peng Li, Jie Zhou, Rui Jiang
Even though large-scale language models have achieved excellent performance, they suffer from various adversarial attacks.
1 code implementation • 11 Oct 2022 • Lei Li, Yankai Lin, Xuancheng Ren, Guangxiang Zhao, Peng Li, Jie Zhou, Xu Sun
We then design a Model Uncertainty--aware Knowledge Integration (MUKI) framework to recover the golden supervision for the student.
no code implementations • 10 Oct 2022 • Zonghan Yang, Xiaoyuan Yi, Peng Li, Yang Liu, Xing Xie
Warning: this paper contains model outputs exhibiting offensiveness and biases.
3 code implementations • 12 Sep 2022 • Ze Wang, Kailun Yang, Hao Shi, Peng Li, Fei Gao, Jian Bai, Kaiwei Wang
As loop closure on wide-FoV panoramic data further comes with a large number of outliers, traditional outlier rejection methods are not directly applicable.
no code implementations • 16 Aug 2022 • Ryuichi Takanobu, Hao Zhou, Yankai Lin, Peng Li, Jie Zhou, Minlie Huang
Modeling these subtasks is consistent with the human agent's behavior patterns.
no code implementations • 3 Jun 2022 • Qiqi Ding, Peng Li, Xuefeng Yan, Ding Shi, Luming Liang, Weiming Wang, Haoran Xie, Jonathan Li, Mingqiang Wei
To our knowledge, RSOD is the first quantitatively evaluated and graded snowy OD dataset.
no code implementations • 2 Jun 2022 • Yinghao Zhang, Peng Li, Yue Hu
While low-rank matrix prior has been exploited in dynamic MR image reconstruction and has obtained satisfying performance, tensor low-rank models have recently emerged as powerful alternative representations for three-dimensional dynamic MR datasets.
1 code implementation • 23 May 2022 • Shuo Wang, Peng Li, Zhixing Tan, Zhaopeng Tu, Maosong Sun, Yang Liu
In this work, we propose a template-based method that yields results with high translation quality and match accuracy, while the inference speed of our method is comparable to that of unconstrained NMT models.
no code implementations • 15 May 2022 • Yukun Yang, Peng Li
We employ the Hebbian rule operating in local compartments to update synaptic weights and achieve supervised learning in a biologically plausible manner.
1 code implementation • ACL 2022 • Pei Ke, Hao Zhou, Yankai Lin, Peng Li, Jie Zhou, Xiaoyan Zhu, Minlie Huang
Existing reference-free metrics have obvious limitations for evaluating controlled text generation models.
no code implementations • 26 Mar 2022 • Sha Yuan, Hanyu Zhao, Shuai Zhao, Jiahong Leng, Yangxiao Liang, Xiaozhi Wang, Jifan Yu, Xin Lv, Zhou Shao, Jiaao He, Yankai Lin, Xu Han, Zhenghao Liu, Ning Ding, Yongming Rao, Yizhao Gao, Liang Zhang, Ming Ding, Cong Fang, Yisen Wang, Mingsheng Long, Jing Zhang, Yinpeng Dong, Tianyu Pang, Peng Cui, Lingxiao Huang, Zheng Liang, Huawei Shen, Hui Zhang, Quanshi Zhang, Qingxiu Dong, Zhixing Tan, Mingxuan Wang, Shuo Wang, Long Zhou, Haoran Li, Junwei Bao, Yingwei Pan, Weinan Zhang, Zhou Yu, Rui Yan, Chence Shi, Minghao Xu, Zuobai Zhang, Guoqiang Wang, Xiang Pan, Mengjie Li, Xiaoyu Chu, Zijun Yao, Fangwei Zhu, Shulin Cao, Weicheng Xue, Zixuan Ma, Zhengyan Zhang, Shengding Hu, Yujia Qin, Chaojun Xiao, Zheni Zeng, Ganqu Cui, Weize Chen, Weilin Zhao, Yuan Yao, Peng Li, Wenzhao Zheng, Wenliang Zhao, Ziyi Wang, Borui Zhang, Nanyi Fei, Anwen Hu, Zenan Ling, Haoyang Li, Boxi Cao, Xianpei Han, Weidong Zhan, Baobao Chang, Hao Sun, Jiawen Deng, Chujie Zheng, Juanzi Li, Lei Hou, Xigang Cao, Jidong Zhai, Zhiyuan Liu, Maosong Sun, Jiwen Lu, Zhiwu Lu, Qin Jin, Ruihua Song, Ji-Rong Wen, Zhouchen Lin, Liwei Wang, Hang Su, Jun Zhu, Zhifang Sui, Jiajun Zhang, Yang Liu, Xiaodong He, Minlie Huang, Jian Tang, Jie Tang
With the rapid development of deep learning, training Big Models (BMs) for multiple downstream tasks becomes a popular paradigm.
1 code implementation • Findings (ACL) 2022 • Yujia Qin, Jiajie Zhang, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou
We evaluate ELLE with streaming data from 5 domains on BERT and GPT.
no code implementations • 4 Mar 2022 • Peng Li, Jiayin Zhao, Jingyao Wu, Chao Deng, Haoqian Wang, Tao Yu
Light field disparity estimation is an essential task in computer vision with various applications.
1 code implementation • 27 Feb 2022 • Hao Shi, Yifan Zhou, Kailun Yang, Xiaoting Yin, Ze Wang, Yaozu Ye, Zhe Yin, Shi Meng, Peng Li, Kaiwei Wang
PanoFlow achieves state-of-the-art performance on the public OmniFlowNet and the established FlowScape benchmarks.
1 code implementation • ACL 2022 • Deming Ye, Yankai Lin, Peng Li, Maosong Sun, Zhiyuan Liu
Pre-trained language models (PLMs) cannot reliably recall the rich factual knowledge of entities exhibited in large-scale corpora, especially rare entities.
1 code implementation • 25 Feb 2022 • Ze Wang, Kailun Yang, Hao Shi, Peng Li, Fei Gao, Kaiwei Wang
To tackle this issue, we propose LF-VIO, a real-time VIO framework for cameras with extremely large FoV.
no code implementations • 8 Feb 2022 • Guhong Nie, Lirui Xiao, Menglong Zhu, Dongliang Chu, Yue Shen, Peng Li, Kang Yang, Li Du, Bo Chen
For binary neural networks (BNNs) to become the mainstream on-device computer vision algorithm, they must achieve a superior speed-vs-accuracy tradeoff than 8-bit quantization and establish a similar degree of general applicability in vision tasks.
no code implementations • 26 Jan 2022 • Peng Li, Arim Park, Soohyun Cho, Yao Zhao
In this paper, we study the effect of compensated reviews on non-compensated reviews by utilizing online reviews on 1,240 auto shipping companies over a ten-year period from a transportation website.
no code implementations • 14 Dec 2021 • Lei Li, Yankai Lin, Xuancheng Ren, Guangxiang Zhao, Peng Li, Jie Zhou, Xu Sun
As many fine-tuned pre-trained language models (PLMs) with promising performance are generously released, investigating better ways to reuse these models is vital, as it can greatly reduce the retraining computational cost and the potential environmental side-effects.
no code implementations • 14 Nov 2021 • Yukun Yang, Peng Li
Our experiments show that the proposed framework demonstrates learning accuracy comparable to BP-based rules and may provide new insights on how learning is orchestrated in biological systems.
1 code implementation • NAACL 2022 • Yusheng Su, Xiaozhi Wang, Yujia Qin, Chi-Min Chan, Yankai Lin, Huadong Wang, Kaiyue Wen, Zhiyuan Liu, Peng Li, Juanzi Li, Lei Hou, Maosong Sun, Jie Zhou
To explore whether we can improve PT via prompt transfer, we empirically investigate the transferability of soft prompts across different downstream tasks and PLMs in this work.
no code implementations • 2 Nov 2021 • Fahao Chen, Peng Li, Toshiaki Miyazaki, Celimuge Wu
In this paper, we propose FedGraph for federated graph learning among multiple computing clients, each of which holds a subgraph.
no code implementations • 29 Oct 2021 • Guanglin Niu, Yang Li, Chengguang Tang, Zhongkai Hu, Shibin Yang, Peng Li, Chengyu Wang, Hao Wang, Jian Sun
The multi-relational Knowledge Base Question Answering (KBQA) system performs multi-hop reasoning over the knowledge graph (KG) to achieve the answer.
no code implementations • 23 Oct 2021 • Pudong Ge, Peng Li, Boli Chen, Fei Teng
The robust distributed state estimation for a class of continuous-time linear time-invariant systems is achieved by a novel kernel-based distributed observer, which, for the first time, ensures fixed-time convergence properties.
1 code implementation • 15 Oct 2021 • Yujia Qin, Xiaozhi Wang, Yusheng Su, Yankai Lin, Ning Ding, Jing Yi, Weize Chen, Zhiyuan Liu, Juanzi Li, Lei Hou, Peng Li, Maosong Sun, Jie Zhou
In the experiments, we study diverse few-shot NLP tasks and surprisingly find that in a 250-dimensional subspace found with 100 tasks, by only tuning 250 free parameters, we can recover 97% and 83% of the full prompt tuning performance for 100 seen tasks (using different training data) and 20 unseen tasks, respectively, showing great generalization ability of the found intrinsic task subspace.
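The reparameterization behind tuning only 250 free parameters can be sketched as follows: instead of optimizing a full soft-prompt matrix, a low-dimensional vector is tuned and mapped up through a fixed projection. The dimensions and the random projection below are illustrative assumptions, not the paper's learned subspace.

```python
import numpy as np

rng = np.random.default_rng(0)
prompt_len, hidden, d = 16, 768, 250   # d: intrinsic subspace dimension

# Fixed (frozen) projection from the d-dim subspace to the full prompt space
A = rng.standard_normal((prompt_len * hidden, d))

# The only free parameters: a d-dim vector, initialized at the origin
z = np.zeros(d)

# Decode the soft prompt actually fed to the model
prompt = (A @ z).reshape(prompt_len, hidden)
```

Tuning `z` (250 parameters) instead of the full 16 x 768 prompt matrix (12,288 parameters) is what makes the "intrinsic subspace" claim testable: if performance survives the compression, the tasks share a low-dimensional structure.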
1 code implementation • EMNLP 2021 • Wenkai Yang, Yankai Lin, Peng Li, Jie Zhou, Xu Sun
Motivated by this observation, we construct a word-based robustness-aware perturbation to distinguish poisoned samples from clean samples to defend against the backdoor attacks on natural language processing (NLP) models.
1 code implementation • NeurIPS 2021 • Deli Chen, Yankai Lin, Guangxiang Zhao, Xuancheng Ren, Peng Li, Jie Zhou, Xu Sun
The class imbalance problem, as an important issue in learning node representations, has drawn increasing attention from the community.
1 code implementation • Findings (ACL) 2022 • Zhengyan Zhang, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou
In this work, we study the computational patterns of FFNs and observe that most inputs only activate a tiny ratio of neurons of FFNs.
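The sparsity observation above ("most inputs only activate a tiny ratio of neurons") is easy to probe on a toy ReLU feed-forward layer. The probe below uses made-up dimensions and weights; it only illustrates how one would measure the activated-neuron ratio, not the paper's actual analysis.

```python
import numpy as np

def activated_ratio(x, W1, b1):
    """Fraction of FFN hidden neurons whose pre-ReLU activation is positive
    for input x, i.e. the neurons that actually contribute to the output."""
    h = x @ W1 + b1
    return float((h > 0).mean())

rng = np.random.default_rng(0)
x = rng.standard_normal(768)                 # one token representation
W1 = rng.standard_normal((768, 3072)) * 0.02 # up-projection of the FFN
b1 = -1.0 * np.ones(3072)                    # negative bias => sparse firing

ratio = activated_ratio(x, W1, b1)
```

In this toy setup the negative bias forces sparsity; in trained transformers the sparsity pattern emerges from training, which is what motivates skipping inactive neurons at inference time.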
no code implementations • 29 Sep 2021 • Yu Wang, Jan Drgona, Jiaxin Zhang, Karthik Somayaji NS, Frank Y Liu, Malachi Schram, Peng Li
Although various flow models based on different transformations have been proposed, there still lacks a quantitative analysis of performance-cost trade-offs between different flows as well as a systematic way of constructing the best flow architecture.
no code implementations • 29 Sep 2021 • Yukun Yang, Peng Li
There exists a marked divide between biologically plausible approaches and practical backpropagation-based approaches to training a deep spiking neural network (DSNN) with better performance.
1 code implementation • EMNLP 2021 • Lei Li, Yankai Lin, Shuhuai Ren, Peng Li, Jie Zhou, Xu Sun
Knowledge distillation (KD) has proven effective for compressing large-scale pre-trained language models.
no code implementations • Findings (ACL) 2021 • Feilong Chen, Xiuyi Chen, Fandong Meng, Peng Li, Jie Zhou
Specifically, GoG consists of three sequential graphs: 1) H-Graph, which aims to capture coreference relations among dialog history; 2) History-aware Q-Graph, which aims to fully understand the question through capturing dependency relations between words based on coreference resolution on the dialog history; and 3) Question-aware I-Graph, which aims to capture the relations between objects in an image based on fully question representation.
1 code implementation • Findings (ACL) 2021 • Feilong Chen, Fandong Meng, Xiuyi Chen, Peng Li, Jie Zhou
Visual dialogue is a challenging task since it needs to answer a series of coherent questions on the basis of understanding the visual environment.
2 code implementations • ACL 2022 • Deming Ye, Yankai Lin, Peng Li, Maosong Sun
In particular, we propose a neighborhood-oriented packing strategy, which considers the neighbor spans integrally to better model the entity boundary information.
Ranked #1 on Named Entity Recognition (NER) on Few-NERD (SUP)
no code implementations • WMT (EMNLP) 2021 • Xianfeng Zeng, Yijin Liu, Ernan Li, Qiu Ran, Fandong Meng, Peng Li, Jinan Xu, Jie Zhou
This paper introduces WeChat AI's participation in WMT 2021 shared news translation task on English->Chinese, English->Japanese, Japanese->English and English->German.
no code implementations • 4 Aug 2021 • Wenrui Zhang, Hejia Geng, Peng Li
The small size of the motifs and the sparse inter-motif connectivity lead to an RSNN architecture scalable to large network sizes.
1 code implementation • ACL 2021 • Wenkai Yang, Yankai Lin, Peng Li, Jie Zhou, Xu Sun
In this work, we point out a potential problem in current backdoor attacking research: its evaluation ignores the stealthiness of backdoor attacks, and most existing backdoor attacking methods are not stealthy to either system deployers or system users.
no code implementations • 25 Jul 2021 • Ling Liang, Zheng Qu, Zhaodong Chen, Fengbin Tu, Yujie Wu, Lei Deng, Guoqi Li, Peng Li, Yuan Xie
Although spiking neural networks (SNNs) take benefits from the bio-plausible neural modeling, the low accuracy under the common local synaptic plasticity learning rules limits their application in many practical tasks.
no code implementations • 22 Jun 2021 • Yukun Yang, Wenrui Zhang, Peng Li
While backpropagation (BP) has been applied to spiking neural networks (SNNs) achieving encouraging results, a key challenge involved is to backpropagate a continuous-valued loss over layers of spiking neurons exhibiting discontinuous all-or-none firing activities.
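The core difficulty named here, a loss that must flow through all-or-none firing with no useful derivative, is commonly worked around with a surrogate gradient: the hard spike is used in the forward pass and a smooth stand-in derivative in the backward pass. The sigmoid-derivative surrogate below is one common generic choice and is an assumption for illustration, not necessarily the method of the paper above:

```python
import math

def spike(v, threshold=1.0):
    """All-or-none firing: 1 if the membrane potential crosses threshold, else 0."""
    return 1.0 if v >= threshold else 0.0

def surrogate_grad(v, threshold=1.0, beta=4.0):
    """Smooth stand-in for the non-existent derivative of the step function:
    the derivative of a sigmoid centered at the threshold."""
    s = 1.0 / (1.0 + math.exp(-beta * (v - threshold)))
    return beta * s * (1.0 - s)

# Forward pass uses the hard spike; backward pass substitutes the surrogate.
v = 0.9
print(spike(v), round(surrogate_grad(v), 4))
```

The surrogate is largest near the threshold, so gradients concentrate on neurons that are close to firing.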
no code implementations • 21 Jun 2021 • Renzhi Wu, Prem Sakala, Peng Li, Xu Chu, Yeye He
Panda's IDE includes many novel features purpose-built for EM, such as smart data sampling, a built-in library of EM utility functions, automatically generated LFs, visual debugging of LFs, and finally, an EM-specific labeling model.
no code implementations • 10 Jun 2021 • Runhuan Feng, Peng Li
The nesting of such stochastic modeling can be computationally challenging.
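The computational burden of such nesting comes from running an inner simulation inside every outer scenario, so the total cost is the product of the two sample sizes. A minimal sketch of a nested Monte Carlo estimator; the Gaussian model and the `max(·, 0)` payoff are illustrative assumptions, not the paper's setting:

```python
import random

def nested_mc_estimate(n_outer=200, n_inner=200, seed=0):
    """Estimate E[max(E[X | Y], 0)] by nesting: for each outer scenario Y,
    an inner simulation approximates the conditional expectation.
    Total work is n_outer * n_inner samples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_outer):
        y = rng.gauss(0.0, 1.0)                                   # outer scenario
        inner = sum(rng.gauss(y, 1.0) for _ in range(n_inner)) / n_inner
        total += max(inner, 0.0)                                  # payoff on the inner estimate
    return total / n_outer

estimate = nested_mc_estimate()
print(estimate)
```

Halving the error of both stages roughly quadruples each sample size, i.e. a 16x increase in total work, which is why nested simulation becomes expensive quickly.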
no code implementations • NAACL 2021 • Yingxue Zhang, Fandong Meng, Peng Li, Ping Jian, Jie Zhou
Implicit discourse relation recognition (IDRR) aims to identify logical relations between two adjacent sentences in the discourse.
1 code implementation • ACL 2022 • Weize Chen, Xu Han, Yankai Lin, Hexu Zhao, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou
Hyperbolic neural networks have shown great potential for modeling complex data.
1 code implementation • ACL 2021 • Ziqi Wang, Xiaozhi Wang, Xu Han, Yankai Lin, Lei Hou, Zhiyuan Liu, Peng Li, Juanzi Li, Jie Zhou
Event extraction (EE) has considerably benefited from pre-trained language models (PLMs) by fine-tuning.
2 code implementations • NAACL 2022 • Yujia Qin, Yankai Lin, Jing Yi, Jiajie Zhang, Xu Han, Zhengyan Zhang, Yusheng Su, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou
Specifically, we introduce a pre-training framework named "knowledge inheritance" (KI) and explore how knowledge distillation could serve as auxiliary supervision during pre-training to efficiently learn larger PLMs.
1 code implementation • Findings (ACL) 2021 • Tianyu Gao, Xu Han, Keyue Qiu, Yuzhuo Bai, Zhiyu Xie, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou
Distantly supervised (DS) relation extraction (RE) has attracted much attention in the past few years as it can utilize large-scale auto-labeled data.
1 code implementation • 7 Feb 2021 • Yusheng Su, Xu Han, Yankai Lin, Zhengyan Zhang, Zhiyuan Liu, Peng Li, Jie Zhou, Maosong Sun
We then perform contrastive semi-supervised learning on both the retrieved unlabeled and original labeled instances to help PLMs capture crucial task-related semantic features.
1 code implementation • 22 Jan 2021 • Xiaowei Hu, Peng Li
In the era of a growing population, systemic changes to the world, and the rising risk of crises, humanity has been facing an unprecedented challenge of resource scarcity.
no code implementations • 21 Jan 2021 • Yuan Fang, Ding Wang, Peng Li, Hang Su, Tian Le, Yi Wu, Guo-Wei Yang, Hua-Li Zhang, Zhi-Guang Xiao, Yan-Qiu Sun, Si-Yuan Hong, Yan-Wu Xie, Huan-Hua Wang, Chao Cao, Xin Lu, Hui-Qiu Yuan, Yang Liu
We report growth, electronic structure and superconductivity of ultrathin epitaxial CoSi2 films on Si(111).
Mesoscale and Nanoscale Physics
1 code implementation • ACL 2021 • Yujia Qin, Yankai Lin, Ryuichi Takanobu, Zhiyuan Liu, Peng Li, Heng Ji, Minlie Huang, Maosong Sun, Jie Zhou
Pre-trained Language Models (PLMs) have shown superior performance on various downstream Natural Language Processing (NLP) tasks.
1 code implementation • Findings (EMNLP) 2021 • Lei Li, Yankai Lin, Deli Chen, Shuhuai Ren, Peng Li, Jie Zhou, Xu Sun
On the other hand, the exiting decisions made by internal classifiers are unreliable, leading to wrongly emitted early predictions.
no code implementations • 16 Dec 2020 • Peng Li, Jinjun Ding, Steven S. -L. Zhang, James Kally, Timothy Pillsbury, Olle G. Heinonen, Gaurab Rimal, Chong Bi, August DeMann, Stuart B. Field, Weigang Wang, Jinke Tang, J. S. Jiang, Axel Hoffmann, Nitin Samarth, Mingzhong Wu
A topological insulator (TI) interfaced with a magnetic insulator (MI) may host an anomalous Hall effect (AHE), a quantum AHE, and a topological Hall effect (THE).
Materials Science • Mesoscale and Nanoscale Physics • Applied Physics
no code implementations • 14 Dec 2020 • Deli Chen, Yankai Lin, Lei Li, Xuancheng Ren, Peng Li, Jie Zhou, Xu Sun
Graph Contrastive Learning (GCL) has proven highly effective in promoting the performance of Semi-Supervised Node Classification (SSNC).
1 code implementation • Asian Chapter of the Association for Computational Linguistics 2020 • Xiaozhi Wang, Shengyu Jia, Xu Han, Zhiyuan Liu, Juanzi Li, Peng Li, Jie Zhou
Existing EAE methods extract each event argument role either independently or sequentially, which cannot adequately model the joint probability distribution among event arguments and their roles.
no code implementations • 23 Nov 2020 • Peng Li, Baijiang Lv, Yuan Fang, Wei Guo, Zhongzheng Wu, Yi Wu, Cheng-Maw Cheng, Dawei Shen, Yuefeng Nie, Luca Petaccia, Chao Cao, Zhu-An Xu, Yang Liu
Using angle-resolved photoemission spectroscopy (ARPES) and low-energy electron diffraction (LEED), together with density-functional theory (DFT) calculation, we report the formation of charge density wave (CDW) and its interplay with the Kondo effect and topological states in CeSbTe.
Strongly Correlated Electrons • Materials Science
2 code implementations • 18 Nov 2020 • Minghui Qiu, Peng Li, Chengyu Wang, Hanjie Pan, Ang Wang, Cen Chen, Xianyan Jia, Yaliang Li, Jun Huang, Deng Cai, Wei Lin
The literature has witnessed the success of applying Pre-trained Language Models (PLMs) and Transfer Learning (TL) algorithms to a wide range of Natural Language Processing (NLP) applications, yet it is not easy to build an easy-to-use and scalable TL toolkit for this purpose.
no code implementations • 16 Nov 2020 • Uwe Aickelin, Jenna Marie Reps, Peer-Olaf Siebers, Peng Li
In this paper, we present a case study demonstrating how dynamic and uncertain criteria can be incorporated into a multicriteria analysis with the help of discrete event simulation.
no code implementations • 28 Oct 2020 • Xiaoyu Kou, Yankai Lin, Yuntao Li, Jiahao Xu, Peng Li, Jie Zhou, Yan Zhang
Knowledge graph embedding (KGE), aiming to embed entities and relations into low-dimensional vectors, has attracted wide attention recently.
no code implementations • 23 Oct 2020 • Wenrui Zhang, Peng Li
Moreover, we propose a new backpropagation (BP) method called backpropagated intrinsic plasticity (BIP) to further boost the performance of ScSr-SNNs by training intrinsic model parameters.
no code implementations • 10 Oct 2020 • Yingxue Zhang, Fandong Meng, Peng Li, Ping Jian, Jie zhou
As conventional answer selection (AS) methods generally match the question with each candidate answer independently, they suffer from the lack of matching information between the question and the candidate.
1 code implementation • EMNLP 2020 • Xiaoyu Kou, Yankai Lin, Shaobo Liu, Peng Li, Jie Zhou, Yan Zhang
Graph embedding (GE) methods embed nodes (and/or edges) in graph into a low-dimensional semantic space, and have shown its effectiveness in modeling multi-relational data.
1 code implementation • EMNLP 2020 • Hao Peng, Tianyu Gao, Xu Han, Yankai Lin, Peng Li, Zhiyuan Liu, Maosong Sun, Jie Zhou
We find that (i) while context is the main source to support the predictions, RE models also heavily rely on the information from entity mentions, most of which is type information, and (ii) existing datasets may leak shallow heuristics via entity mentions and thus contribute to the high performance on RE benchmarks.
Ranked #23 on Relation Extraction on TACRED
no code implementations • WMT (EMNLP) 2020 • Fandong Meng, Jianhao Yan, Yijin Liu, Yuan Gao, Xianfeng Zeng, Qinsong Zeng, Peng Li, Ming Chen, Jie Zhou, Sifan Liu, Hao Zhou
We participate in the WMT 2020 shared news translation task on Chinese to English.
1 code implementation • 29 Sep 2020 • Yusheng Su, Xu Han, Zhengyan Zhang, Peng Li, Zhiyuan Liu, Yankai Lin, Jie Zhou, Maosong Sun
In this paper, we propose a novel framework named Coke to dynamically select contextual knowledge and embed knowledge context according to textual context for PLMs, which can avoid the effect of redundant and ambiguous knowledge in KGs that cannot match the input text.
1 code implementation • 4 Aug 2020 • Siddharth Maddali, Marc Allain, Peng Li, Virginie Chamard, Stephan O. Hruszkewycz
This paper addresses three-dimensional signal distortion and image reconstruction issues in x-ray Bragg coherent diffraction imaging (BCDI) in the event of a non-trivial, non-orthogonal orientation of the area detector with respect to the diffracted beam.
Instrumentation and Detectors • Image and Video Processing
no code implementations • ACL 2020 • Xu Han, Yi Dai, Tianyu Gao, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou
Continual relation learning aims to continually train a model on new data to learn incessantly emerging novel relations while avoiding catastrophically forgetting old relations.
1 code implementation • ACL 2020 • Qiu Ran, Yankai Lin, Peng Li, Jie Zhou
By dynamically determining segment length and deleting repetitive segments, RecoverSAT is capable of recovering from repetitive and missing token errors.
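The repetitive-error case can be pictured with a plain post-hoc heuristic: scan the decoded tokens and drop any short segment that merely repeats the segment just emitted. This is only a hand-written approximation of the idea; in the model described above, deletion is a learned action during decoding, not a rule applied afterwards:

```python
def delete_repeated_segments(tokens, max_seg_len=3):
    """Greedily drop a segment that exactly repeats the segment just emitted.
    A generic illustration of 'deleting repetitive segments', not the
    model's learned DEL operation."""
    out = []
    i = 0
    while i < len(tokens):
        dropped = False
        for seg in range(max_seg_len, 0, -1):
            if len(out) >= seg and tokens[i:i + seg] == out[-seg:]:
                i += seg          # skip the repeated segment
                dropped = True
                break
        if not dropped:
            out.append(tokens[i])
            i += 1
    return out

print(delete_repeated_segments("the cat sat sat on on the the mat".split()))
```

Longer segment lengths are tried first so that a repeated phrase is removed as a unit rather than word by word.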
3 code implementations • 20 May 2020 • Dehong Gao, Linbo Jin, Ben Chen, Minghui Qiu, Peng Li, Yi Wei, Yi Hu, Hao Wang
In this paper, we address the text and image matching in cross-modal retrieval of the fashion industry.
no code implementations • 12 May 2020 • Fei Gao, Jingjie Zhu, Zeyuan Yu, Peng Li, Tao Wang
The whole portrait drawing robotic system is named AiSketcher.
1 code implementation • 11 May 2020 • Bojan Karlaš, Peng Li, Renzhi Wu, Nezihe Merve Gürel, Xu Chu, Wentao Wu, Ce Zhang
Machine learning (ML) applications have been thriving recently, largely attributed to the increasing availability of data.
1 code implementation • EMNLP 2020 • Xiaozhi Wang, Ziqi Wang, Xu Han, Wangyi Jiang, Rong Han, Zhiyuan Liu, Juanzi Li, Peng Li, Yankai Lin, Jie Zhou
Most existing datasets exhibit the following issues that limit further development of ED: (1) Data scarcity.
2 code implementations • EMNLP 2020 • Deming Ye, Yankai Lin, Jiaju Du, Zheng-Hao Liu, Peng Li, Maosong Sun, Zhiyuan Liu
Language representation models such as BERT could effectively capture contextual semantic information from plain text, and have been proved to achieve promising results in lots of downstream NLP tasks with appropriate fine-tuning.
Ranked #31 on Relation Extraction on DocRED
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Xu Han, Tianyu Gao, Yankai Lin, Hao Peng, Yaoliang Yang, Chaojun Xiao, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou
Relational facts are an important component of human knowledge, which are hidden in vast amounts of text.
1 code implementation • NeurIPS 2020 • Wenrui Zhang, Peng Li
Spiking neural networks (SNNs) are well suited for spatio-temporal learning and implementations on energy-efficient event-driven neuromorphic processors.
no code implementations • 1 Jan 2020 • Ling Liang, Xing Hu, Lei Deng, Yujie Wu, Guoqi Li, Yufei Ding, Peng Li, Yuan Xie
Recently, learning algorithms inspired by backpropagation through time have been widely introduced into SNNs to improve performance, which brings the possibility of attacking the models accurately given spatio-temporal gradient maps.
1 code implementation • 18 Dec 2019 • Feilong Chen, Fandong Meng, Jiaming Xu, Peng Li, Bo Xu, Jie Zhou
Visual Dialog is a vision-language task that requires an AI agent to engage in a conversation with humans grounded in an image.
no code implementations • 27 Nov 2019 • Xiao-Yu Zhang, Changsheng Li, Haichao Shi, Xiaobin Zhu, Peng Li, Jing Dong
The point process is a solid framework to model sequential data, such as videos, by exploring the underlying relevance.
no code implementations • 20 Nov 2019 • Xuepeng Fan, Peng Li, Yulong Zeng, Xiaoping Zhou
We study the liquid democracy problem, where each voter can either vote directly for a candidate or delegate their voting power to a proxy.
Cryptography and Security
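The delegation mechanism in the entry above can be made concrete with a small resolver: each ballot either names a candidate or names a proxy, and voting power flows along the delegation chain until it reaches a direct vote. The ballot encoding and the cycle-handling rule below are my assumptions for illustration, not the paper's model:

```python
def tally(ballots):
    """Resolve liquid-democracy ballots. Each ballot is either a direct vote
    ("C", candidate) or a delegation ("D", proxy_voter). Power follows the
    delegation chain; a delegation cycle leaves that vote uncounted."""
    votes = {}
    for voter in ballots:
        seen = set()
        cur = voter
        while cur is not None and ballots[cur][0] == "D":
            if cur in seen:        # delegation cycle: vote is lost
                cur = None
                break
            seen.add(cur)
            cur = ballots[cur][1]
        if cur is not None:
            candidate = ballots[cur][1]
            votes[candidate] = votes.get(candidate, 0) + 1
    return votes

ballots = {
    "alice": ("C", "X"),      # direct vote for candidate X
    "bob":   ("D", "alice"),  # delegates to alice
    "carol": ("D", "bob"),    # chain: carol -> bob -> alice -> X
    "dave":  ("C", "Y"),
}
print(tally(ballots))  # X receives 3 votes, Y receives 1
```

Real treatments of the problem differ mainly in how they weight or constrain these chains; the resolver above only shows the basic power flow.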
no code implementations • 10 Nov 2019 • Deli Chen, Xiaoqian Liu, Yankai Lin, Peng Li, Jie Zhou, Qi Su, Xu Sun
To address this issue, we propose to model long-distance node relations by simply relying on shallow GNN architectures with two solutions: (1) Implicitly modelling by learning to predict node pair relations (2) Explicitly modelling by adding edges between nodes that potentially have the same label.
no code implementations • 6 Nov 2019 • Qiu Ran, Yankai Lin, Peng Li, Jie Zhou
Non-autoregressive neural machine translation (NAT) generates each target word in parallel and has achieved promising inference acceleration.
1 code implementation • 3 Nov 2019 • Lei Deng, Yujie Wu, Yifan Hu, Ling Liang, Guoqi Li, Xing Hu, Yufei Ding, Peng Li, Yuan Xie
As well known, the huge memory and compute costs of both artificial neural networks (ANNs) and spiking neural networks (SNNs) greatly hinder their deployment on edge devices with high efficiency.
1 code implementation • IJCNLP 2019 • Xiaozhi Wang, Ziqi Wang, Xu Han, Zhiyuan Liu, Juanzi Li, Peng Li, Maosong Sun, Jie Zhou, Xiang Ren
Existing event extraction methods classify each argument role independently, ignoring the conceptual correlations between different argument roles.
1 code implementation • IJCNLP 2019 • Tianyu Gao, Xu Han, Hao Zhu, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou
We present FewRel 2.0, a more challenging task to investigate two aspects of few-shot relation classification models: (1) Can they adapt to a new domain with only a handful of instances?
2 code implementations • IJCNLP 2019 • Qiu Ran, Yankai Lin, Peng Li, Jie Zhou, Zhiyuan Liu
Numerical reasoning, such as addition, subtraction, sorting, and counting, is a critical skill in human reading comprehension that has not been well considered in existing machine reading comprehension (MRC) systems.
Ranked #10 on Question Answering on DROP Test
no code implementations • 10 Sep 2019 • Changqing Xu, Wenrui Zhang, Yu Liu, Peng Li
Using spiking speech and image recognition datasets, we demonstrate the feasibility of supporting large time compression ratios of up to 16x, delivering up to 15.93x, 13.88x, and 86.21x improvements in throughput, energy dissipation, and the tradeoff between hardware area, runtime, energy, and classification accuracy, respectively, based on different spike codes on a Xilinx Zynq-7000 FPGA.
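Time compression of a spike train can be pictured as merging fixed windows of time steps into single steps, a simplified stand-in for the compressed spike codes referenced above; the OR-within-window rule here is an illustrative assumption:

```python
def compress_spike_train(spikes, ratio):
    """Merge every `ratio` consecutive time steps into one step that fires
    if any step in the window fired (a lossy, illustrative compression)."""
    return [1 if any(spikes[i:i + ratio]) else 0
            for i in range(0, len(spikes), ratio)]

train = [0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1]
print(compress_spike_train(train, 4))  # 12 time steps reduced to 3
```

Fewer time steps means fewer hardware cycles per inference, which is where the throughput and energy gains come from, at the cost of discarding fine spike timing.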
no code implementations • 7 Sep 2019 • Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, Xu Sun
Graph Neural Networks (GNNs) have achieved promising performance on a wide range of graph-based tasks.
Ranked #52 on Node Classification on Cora
no code implementations • 3 Sep 2019 • Dafydd Gibbon, Peng Li
Consequently, only the LF LTS of the absolute speech signal is used in the empirical analysis.
1 code implementation • NeurIPS 2019 • Wenrui Zhang, Peng Li
However, the practical application of RSNNs is severely limited by challenges in training.
no code implementations • 15 Aug 2019 • Jiabin Zhang, Zheng Zhu, Wei Zou, Peng Li, Yanwei Li, Hu Su, Guan Huang
Given the results of MTN, we adopt an occlusion-aware Re-ID feature strategy in the pose tracking module, where pose information is utilized to infer the occlusion state to make better use of Re-ID feature.
1 code implementation • 15 Aug 2019 • Peng Li, Siddharth Maddali, Anastasios Pateras, Irene Calvo-Almazan, Stephan O. Hruszkewycz, Virginie Chamard, Marc Allain
To deal with this, the currently favored approach (detailed in Part I) is to perform the entire inversion in conjugate non-orthogonal real and Fourier space frames, and to transform the 3D sample image into an orthogonal frame as a post-processing step for result analysis.
Instrumentation and Detectors Signal Processing
1 code implementation • 15 Aug 2019 • Siddharth Maddali, Peng Li, Anastasios Pateras, Daniel Timbie, Nazar Delegan, Alex Crook, Hope Lee, Irene Calvo-Almazan, Dina Sheyfer, Wonsuk Cha, F. Joseph Heremans, David D. Awschalom, Virginie Chamard, Marc Allain, Stephan O. Hruszkewycz
Part II builds upon the geometric theory developed in Part I with the formalism to correct the shear distortions directly on an orthogonal grid within the phase retrieval algorithm itself, allowing more physically realistic constraints to be applied.
Instrumentation and Detectors
1 code implementation • ACL 2019 • Shuming Ma, Pengcheng Yang, Tianyu Liu, Peng Li, Jie Zhou, Xu Sun
We propose a novel model to separate the generation into two stages: key fact prediction and surface realization.
1 code implementation • ACL 2019 • Fuli Luo, Peng Li, Pengcheng Yang, Jie Zhou, Yutong Tan, Baobao Chang, Zhifang Sui, Xu Sun
In this paper, we focus on the task of fine-grained text sentiment transfer (FGST).
no code implementations • 19 Jun 2019 • Hanbin Hu, Mit Shah, Jianhua Z. Huang, Peng Li
It has been shown that deep neural networks (DNNs) may be vulnerable to adversarial attacks, raising the concern on their robustness particularly for safety-critical applications.
no code implementations • 18 Jun 2019 • Yao-Hui Chen, Peng Li, Jun Xu, Shengjian Guo, Rundong Zhou, Yulong Zhang, Tao Wei, Long Lu
Unlike the existing hybrid testing tools, SAVIOR prioritizes the concolic execution of the seeds that are likely to uncover more vulnerabilities.
Software Engineering
4 code implementations • ACL 2019 • Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zheng-Hao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, Maosong Sun
Multiple entities in a document generally exhibit complex inter-sentence relations, and cannot be well handled by existing relation extraction (RE) methods that typically focus on extracting intra-sentence relations for single entity pairs.
Ranked #59 on Relation Extraction on DocRED
no code implementations • 4 Jun 2019 • Rui Zhang, Zheng Zhu, Peng Li, Rui Wu, Chaoxu Guo, Guan Huang, Hailun Xia
Human pose estimation has witnessed a significant advance thanks to the development of deep learning.
no code implementations • 4 Jun 2019 • Peng Li, Jiabin Zhang, Zheng Zhu, Yanwei Li, Lu Jiang, Guan Huang
Multi-target Multi-camera Tracking (MTMCT) aims to extract the trajectories from videos captured by a set of cameras.
1 code implementation • NAACL 2019 • Xiaozhi Wang, Xu Han, Zhiyuan Liu, Maosong Sun, Peng Li
Modern weakly supervised methods for event detection (ED) avoid time-consuming human annotation and achieve promising results by learning from auto-labeled data.
2 code implementations • 24 May 2019 • Fuli Luo, Peng Li, Jie Zhou, Pengcheng Yang, Baobao Chang, Zhifang Sui, Xu Sun
Therefore, in this paper, we propose a dual reinforcement learning framework to directly transfer the style of the text via a one-step mapping model, without any separation of content and style.
Ranked #1 on Unsupervised Text Style Transfer on GYAFC
no code implementations • 20 Apr 2019 • Peng Li, Xi Rao, Jennifer Blase, Yue Zhang, Xu Chu, Ce Zhang
Data quality affects machine learning (ML) model performance, and data scientists spend a considerable amount of time on data cleaning before model training.
no code implementations • 7 Mar 2019 • Qiu Ran, Peng Li, Weiwei Hu, Jie Zhou
However, humans typically compare the options at multiple granularity levels before reading the article in detail to make reasoning more efficient.
Ranked #2 on Question Answering on RACE
no code implementations • 29 Jan 2019 • Myung Seok Shim, Chenye Zhao, Yang Li, Xuchong Zhang, Wenrui Zhang, Peng Li
Sensor fusion has wide applications in many domains including health care and autonomous systems.
no code implementations • 23 Jan 2019 • Bo Liu, Wenhao Chi, Xinran Li, Peng Li, Wenhua Liang, Haiping Liu, Wei Wang, Jianxing He
Lung cancer is the commonest cause of cancer deaths worldwide, and its mortality can be reduced significantly by performing early diagnosis and screening.
no code implementations • ICLR 2019 • Ting-Jui Chang, Yukun He, Peng Li
However, the computational cost of the adversarial training with PGD and other multi-step adversarial examples is much higher than that of the adversarial training with other simpler attack techniques.
no code implementations • ICLR 2019 • Myung Seok Shim, Peng Li
Sensor fusion is a key technology that integrates various sensory inputs to allow for robust decision making in many applications such as autonomous driving and robot control.
1 code implementation • EMNLP 2018 • Xu Han, Pengfei Yu, Zhiyuan Liu, Maosong Sun, Peng Li
In this paper, we aim to incorporate the hierarchical information of relations for distantly supervised relation extraction and propose a novel hierarchical attention scheme.
no code implementations • 13 Sep 2018 • Haichao Shi, Peng Li, Bo Wang, Zhenyu Wang
In this paper, we propose a novel architecture for image captioning with deep reinforcement learning to optimize image captioning tasks.
1 code implementation • NeurIPS 2018 • Yingyezhe Jin, Wenrui Zhang, Peng Li
We evaluate the proposed HM2-BP algorithm by training deep fully connected and convolutional SNNs based on the static MNIST [14] and dynamic neuromorphic N-MNIST [26].
no code implementations • COLING 2016 • Xiaotian Jiang, Quan Wang, Peng Li, Bin Wang
In this paper, we propose a multi-instance multi-label convolutional neural network for distantly supervised RE.
3 code implementations • 21 Jul 2016 • Peng Li, Wei Li, Zhengyan He, Xuguang Wang, Ying Cao, Jie Zhou, Wei Xu
While question answering (QA) with neural networks, i.e. neural QA, has achieved promising results in recent years, the lack of large-scale real-world QA datasets is still a challenge for developing and evaluating neural QA systems.
1 code implementation • TACL 2016 • Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu
On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points.
Ranked #37 on Machine Translation on WMT2014 English-French
no code implementations • 30 Mar 2016 • Peng Li, Heng Huang
Neural network based approaches for sentence relation modeling automatically generate hidden matching features from raw sentence pairs.
no code implementations • 30 Mar 2016 • Peng Li, Heng Huang
We report an implementation of a clinical information extraction tool that leverages a deep neural network to annotate event spans and their attributes from raw clinical notes and pathology reports.