Search Results for author: Peng Li

Found 137 papers, 60 papers with code

Unsupervised Dependency Graph Network

1 code implementation ACL 2022 Yikang Shen, Shawn Tan, Alessandro Sordoni, Peng Li, Jie Zhou, Aaron Courville

We introduce a new model, the Unsupervised Dependency Graph Network (UDGN), that can induce dependency structures from raw corpora and the masked language modeling task.

Language Modelling Masked Language Modeling +2

Event Detection with Dual Relational Graph Attention Networks

1 code implementation COLING 2022 Jiaxin Mi, Po Hu, Peng Li

To this end, we propose a simple yet effective model named DualGAT (Dual Relational Graph Attention Networks), which exploits the complementary nature of syntactic and semantic relations to alleviate the problem.

Event Detection Graph Attention

CodRED: A Cross-Document Relation Extraction Dataset for Acquiring Knowledge in the Wild

1 code implementation EMNLP 2021 Yuan Yao, Jiaju Du, Yankai Lin, Peng Li, Zhiyuan Liu, Jie Zhou, Maosong Sun

Existing relation extraction (RE) methods typically focus on extracting relational facts between entity pairs within single sentences or documents.

Relation Extraction

Bridging the Gap between Prior and Posterior Knowledge Selection for Knowledge-Grounded Dialogue Generation

no code implementations EMNLP 2020 Xiuyi Chen, Fandong Meng, Peng Li, Feilong Chen, Shuang Xu, Bo Xu, Jie Zhou

Here, we address these issues from two aspects: (1) we enhance the prior selection module with the necessary posterior information obtained from the specially designed Posterior Information Prediction Module (PIPM); and (2) we propose a Knowledge Distillation Based Training Strategy (KDBTS) to train the decoder with the knowledge selected from the prior distribution, removing the exposure bias of knowledge selection.

Dialogue Generation Knowledge Distillation

Do Pre-trained Models Benefit Knowledge Graph Completion? A Reliable Evaluation and a Reasonable Approach

1 code implementation Findings (ACL) 2022 Xin Lv, Yankai Lin, Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Peng Li, Jie Zhou

In recent years, pre-trained language models (PLMs) have been shown to capture factual knowledge from massive texts, which encourages the proposal of PLM-based knowledge graph completion (KGC) models.

Knowledge Graph Completion Link Prediction

AHPA: Adaptive Horizontal Pod Autoscaling Systems on Alibaba Cloud Container Service for Kubernetes

no code implementations 7 Mar 2023 Zhiqiang Zhou, Chaoli Zhang, Lingna Ma, Jing Gu, Huajie Qian, Qingsong Wen, Liang Sun, Peng Li, Zhimin Tang

This paper discusses horizontal Pod resource management in Alibaba Cloud Container Service with a newly deployed AI algorithm framework named AHPA, the adaptive horizontal Pod autoscaling system.


Restricted Orthogonal Gradient Projection for Continual Learning

no code implementations 28 Jan 2023 Zeyuan Yang, Zonghan Yang, Peng Li, Yang Liu

The basic idea is to adopt a restricted orthogonal constraint allowing parameters optimized in the direction oblique to the whole frozen space to facilitate forward knowledge transfer while consolidating previous knowledge.

Continual Learning Transfer Learning
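The gradient-projection idea behind this abstract can be sketched numerically. The code below is a minimal NumPy illustration, not the paper's implementation: strict orthogonal projection removes the gradient component lying in the frozen subspace of previous tasks, and a `lam` relaxation knob (a hypothetical stand-in for the paper's restricted orthogonal constraint) lets the update tilt obliquely into that space to aid forward transfer.

```python
import numpy as np

def project_gradient(g, basis, lam=0.0):
    """Project gradient g away from a frozen subspace.

    basis: (d, k) matrix with orthonormal columns spanning the frozen
    gradient space of previous tasks. lam=0 gives strict orthogonal
    projection; 0 < lam < 1 keeps a fraction of the frozen-space
    component (an illustrative relaxation, not the paper's exact rule).
    """
    comp = basis @ (basis.T @ g)   # component inside the frozen subspace
    return g - (1.0 - lam) * comp

rng = np.random.default_rng(0)
# Orthonormal basis of a 3-dim frozen subspace in a 10-dim parameter space
basis, _ = np.linalg.qr(rng.standard_normal((10, 3)))
g = rng.standard_normal(10)

g_orth = project_gradient(g, basis)          # strictly orthogonal update
g_oblique = project_gradient(g, basis, 0.3)  # restricted (oblique) update

print(np.allclose(basis.T @ g_orth, 0.0))    # True: no frozen-space component
```

With `lam=0` this reduces to the classic gradient-projection recipe for continual learning; the oblique variant deliberately retains part of the frozen-space component.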

When to Trust Aggregated Gradients: Addressing Negative Client Sampling in Federated Learning

no code implementations 25 Jan 2023 Wenkai Yang, Yankai Lin, Guangxiang Zhao, Peng Li, Jie Zhou, Xu Sun

Federated Learning has become a widely used framework that allows learning a global model on decentralized local datasets while protecting local data privacy.

Federated Learning text-classification +1

Prompt Gating: A Parameter Efficient Tuning Method for Zero-Shot Multi-Source Translation

no code implementations 19 Dec 2022 Xuancheng Huang, Zijun Liu, Peng Li, Maosong Sun, Yang Liu

Multi-source translation (MST), which typically receives multiple source sentences of the same meaning in different languages, has been shown superior to single-source translation.


Continually Learning from Existing Models: Knowledge Accumulation for Neural Machine Translation

no code implementations 18 Dec 2022 Yuanchi Zhang, Peng Li, Maosong Sun, Yang Liu

Although continually extending an existing NMT model to new domains or languages has attracted intensive interest in recent years, the equally valuable problem of continually improving a given NMT model in its domain by leveraging knowledge from an unlimited number of existing NMT models has not yet been explored.

Machine Translation NMT +1

Synaptic Dynamics Realize First-order Adaptive Learning and Weight Symmetry

no code implementations 1 Dec 2022 Yukun Yang, Peng Li

Gradient-based first-order adaptive optimization methods such as the Adam optimizer are prevalent in training artificial neural networks, achieving state-of-the-art results.

MAVEN-ERE: A Unified Large-scale Dataset for Event Coreference, Temporal, Causal, and Subevent Relation Extraction

1 code implementation 14 Nov 2022 Xiaozhi Wang, Yulin Chen, Ning Ding, Hao Peng, Zimu Wang, Yankai Lin, Xu Han, Lei Hou, Juanzi Li, Zhiyuan Liu, Peng Li, Jie Zhou

It contains 103,193 event coreference chains, 1,216,217 temporal relations, 57,992 causal relations, and 15,841 subevent relations, which is larger than existing datasets of all the ERE tasks by at least an order of magnitude.

Relation Extraction

iSmallNet: Densely Nested Network with Label Decoupling for Infrared Small Target Detection

no code implementations 29 Oct 2022 Zhiheng Hu, Yongzhen Wang, Peng Li, Jie Qin, Haoran Xie, Mingqiang Wei

First, to maintain small targets in deep layers, we develop a multi-scale nested interaction module to explore a wide range of context information.

object-detection Small Object Detection

GeoGCN: Geometric Dual-domain Graph Convolution Network for Point Cloud Denoising

no code implementations 28 Oct 2022 Zhaowei Chen, Peng Li, Zeyong Wei, Honghua Chen, Haoran Xie, Mingqiang Wei, Fu Lee Wang

We propose GeoGCN, a novel geometric dual-domain graph convolution network for point cloud denoising (PCD).


ROSE: Robust Selective Fine-tuning for Pre-trained Language Models

1 code implementation 18 Oct 2022 Lan Jiang, Hao Zhou, Yankai Lin, Peng Li, Jie Zhou, Rui Jiang

Even though large-scale language models have achieved excellent performance, they suffer from various adversarial attacks.

Adversarial Robustness

From Mimicking to Integrating: Knowledge Integration for Pre-Trained Language Models

1 code implementation 11 Oct 2022 Lei Li, Yankai Lin, Xuancheng Ren, Guangxiang Zhao, Peng Li, Jie Zhou, Xu Sun

We then design a Model Uncertainty-aware Knowledge Integration (MUKI) framework to recover the golden supervision for the student.

LF-SLAM: A SLAM Framework for Large Field-of-View Cameras with Negative Imaging Plane on Mobile Agents

2 code implementations 12 Sep 2022 Ze Wang, Kailun Yang, Hao Shi, Peng Li, Fei Gao, Jian Bai, Kaiwei Wang

We collect the PALVIO dataset using a Panoramic Annular Lens (PAL) system with an entire FoV of 360° × (40°-120°) and an IMU sensor to address the lack of panoramic SLAM datasets.

Autonomous Driving Simultaneous Localization and Mapping

Dynamic MRI using Learned Transform-based Tensor Low-Rank Network (LT$^2$LR-Net)

no code implementations 2 Jun 2022 Yinghao Zhang, Peng Li, Yue Hu

While low-rank matrix prior has been exploited in dynamic MR image reconstruction and has obtained satisfying performance, tensor low-rank models have recently emerged as powerful alternative representations for three-dimensional dynamic MR datasets.

MRI Reconstruction Tensor Decomposition

A Template-based Method for Constrained Neural Machine Translation

1 code implementation 23 May 2022 Shuo Wang, Peng Li, Zhixing Tan, Zhaopeng Tu, Maosong Sun, Yang Liu

In this work, we propose a template-based method that can yield results with high translation quality and match accuracy, and the inference speed of our method is comparable with that of unconstrained NMT models.

Machine Translation NMT +1

A Computational Framework of Cortical Microcircuits Approximates Sign-concordant Random Backpropagation

no code implementations 15 May 2022 Yukun Yang, Peng Li

We employ the Hebbian rule operating in local compartments to update synaptic weights and achieve supervised learning in a biologically plausible manner.

A Simple but Effective Pluggable Entity Lookup Table for Pre-trained Language Models

1 code implementation ACL 2022 Deming Ye, Yankai Lin, Peng Li, Maosong Sun, Zhiyuan Liu

Pre-trained language models (PLMs) cannot well recall rich factual knowledge of entities exhibited in large-scale corpora, especially those rare entities.

Domain Adaptation

Binary Neural Networks as a general-purpose compute paradigm for on-device computer vision

no code implementations 8 Feb 2022 Guhong Nie, Lirui Xiao, Menglong Zhu, Dongliang Chu, Yue Shen, Peng Li, Kang Yang, Li Du, Bo Chen

For binary neural networks (BNNs) to become the mainstream on-device computer vision algorithm, they must achieve a superior speed-vs-accuracy tradeoff than 8-bit quantization and establish a similar degree of general applicability in vision tasks.

Quantization Super-Resolution

Toward a More Populous Online Platform: The Economic Impacts of Compensated Reviews

no code implementations 26 Jan 2022 Peng Li, Arim Park, Soohyun Cho, Yao Zhao

In this paper, we study the effect of compensated reviews on non-compensated reviews by utilizing online reviews on 1,240 auto shipping companies over a ten-year period from a transportation website.

text-classification Text Classification

Model Uncertainty-Aware Knowledge Amalgamation for Pre-Trained Language Models

no code implementations 14 Dec 2021 Lei Li, Yankai Lin, Xuancheng Ren, Guangxiang Zhao, Peng Li, Jie Zhou, Xu Sun

As many fine-tuned pre-trained language models (PLMs) with promising performance are generously released, investigating better ways to reuse these models is vital as it can greatly reduce the retraining computational cost and the potential environmental side-effects.

BioLeaF: A Bio-plausible Learning Framework for Training of Spiking Neural Networks

no code implementations 14 Nov 2021 Yukun Yang, Peng Li

Our experiments show that the proposed framework demonstrates learning accuracy comparable to BP-based rules and may provide new insights on how learning is orchestrated in biological systems.

On Transferability of Prompt Tuning for Natural Language Processing

1 code implementation NAACL 2022 Yusheng Su, Xiaozhi Wang, Yujia Qin, Chi-Min Chan, Yankai Lin, Huadong Wang, Kaiyue Wen, Zhiyuan Liu, Peng Li, Juanzi Li, Lei Hou, Maosong Sun, Jie Zhou

To explore whether we can improve PT via prompt transfer, we empirically investigate the transferability of soft prompts across different downstream tasks and PLMs in this work.

Natural Language Understanding Transfer Learning

FedGraph: Federated Graph Learning with Intelligent Sampling

no code implementations 2 Nov 2021 Fahao Chen, Peng Li, Toshiaki Miyazaki, Celimuge Wu

In this paper, we propose FedGraph for federated graph learning among multiple computing clients, each of which holds a subgraph.

Federated Learning Graph Sampling

Path-Enhanced Multi-Relational Question Answering with Knowledge Graph Embeddings

no code implementations 29 Oct 2021 Guanglin Niu, Yang Li, Chengguang Tang, Zhongkai Hu, Shibin Yang, Peng Li, Chengyu Wang, Hao Wang, Jian Sun

The multi-relational Knowledge Base Question Answering (KBQA) system performs multi-hop reasoning over the knowledge graph (KG) to arrive at the answer.

Knowledge Base Question Answering Knowledge Graph Embedding +1

Fixed-Time Convergent Distributed Observer Design of Linear Systems: A Kernel-Based Approach

no code implementations 23 Oct 2021 Pudong Ge, Peng Li, Boli Chen, Fei Teng

The robust distributed state estimation for a class of continuous-time linear time-invariant systems is achieved by a novel kernel-based distributed observer, which, for the first time, ensures fixed-time convergence properties.

Exploring Universal Intrinsic Task Subspace via Prompt Tuning

1 code implementation 15 Oct 2021 Yujia Qin, Xiaozhi Wang, Yusheng Su, Yankai Lin, Ning Ding, Jing Yi, Weize Chen, Zhiyuan Liu, Juanzi Li, Lei Hou, Peng Li, Maosong Sun, Jie Zhou

In the experiments, we study diverse few-shot NLP tasks and surprisingly find that in a 250-dimensional subspace found with 100 tasks, by only tuning 250 free parameters, we can recover 97% and 83% of the full prompt tuning performance for 100 seen tasks (using different training data) and 20 unseen tasks, respectively, showing great generalization ability of the found intrinsic task subspace.
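The reparameterization behind this result can be sketched in a few lines: instead of tuning the full soft prompt, only a low-dimensional intrinsic vector is trained and decoded into the prompt through a fixed projection. The random projection below is an illustrative stand-in for the subspace found by multi-task training in the paper; the dimensions are assumptions chosen to mirror the reported 250-dimensional setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Full soft prompt: 20 prompt tokens x 768-dim embeddings = 15,360 parameters.
n_tokens, emb_dim, intrinsic_dim = 20, 768, 250

# Fixed (frozen) projection from the low-dimensional intrinsic subspace
# into the full prompt space; only z is trained per task.
proj = rng.standard_normal((n_tokens * emb_dim, intrinsic_dim))
z = np.zeros(intrinsic_dim)  # the only free parameters for a new task

def soft_prompt(z):
    """Decode intrinsic vector z into an (n_tokens, emb_dim) soft prompt."""
    return (proj @ z).reshape(n_tokens, emb_dim)

prompt = soft_prompt(z)
print(prompt.shape, z.size)  # (20, 768) 250
```

Training then optimizes the 250 entries of `z` with the task loss while `proj` stays frozen, which is what makes recovering most of full prompt-tuning performance from so few parameters notable.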

RAP: Robustness-Aware Perturbations for Defending against Backdoor Attacks on NLP Models

1 code implementation EMNLP 2021 Wenkai Yang, Yankai Lin, Peng Li, Jie Zhou, Xu Sun

Motivated by this observation, we construct a word-based robustness-aware perturbation to distinguish poisoned samples from clean samples to defend against the backdoor attacks on natural language processing (NLP) models.

Sentiment Analysis

Topology-Imbalance Learning for Semi-Supervised Node Classification

1 code implementation NeurIPS 2021 Deli Chen, Yankai Lin, Guangxiang Zhao, Xuancheng Ren, Peng Li, Jie Zhou, Xu Sun

The class imbalance problem, as an important issue in learning node representations, has drawn increasing attention from the community.

Classification Node Classification

Training Deep Spiking Neural Networks with Bio-plausible Learning Rules

no code implementations 29 Sep 2021 Yukun Yang, Peng Li

There exists a marked divide between biologically plausible approaches and practical backpropagation-based approaches to training a deep spiking neural network (DSNN) with better performance.

AutoNF: Automated Architecture Optimization of Normalizing Flows Using a Mixture Distribution Formulation

no code implementations 29 Sep 2021 Yu Wang, Jan Drgona, Jiaxin Zhang, Karthik Somayaji NS, Frank Y Liu, Malachi Schram, Peng Li

Although various flow models based on different transformations have been proposed, there is still no quantitative analysis of the performance-cost trade-offs between different flows, nor a systematic way of constructing the best flow architecture.

Dynamic Knowledge Distillation for Pre-trained Language Models

1 code implementation EMNLP 2021 Lei Li, Yankai Lin, Shuhuai Ren, Peng Li, Jie Zhou, Xu Sun

Knowledge distillation (KD) has proven effective for compressing large-scale pre-trained language models.

Knowledge Distillation

GoG: Relation-aware Graph-over-Graph Network for Visual Dialog

no code implementations Findings (ACL) 2021 Feilong Chen, Xiuyi Chen, Fandong Meng, Peng Li, Jie Zhou

Specifically, GoG consists of three sequential graphs: 1) H-Graph, which aims to capture coreference relations among dialog history; 2) History-aware Q-Graph, which aims to fully understand the question through capturing dependency relations between words based on coreference resolution on the dialog history; and 3) Question-aware I-Graph, which aims to capture the relations between objects in an image based on fully question representation.

coreference-resolution Coreference Resolution +2

Multimodal Incremental Transformer with Visual Grounding for Visual Dialogue Generation

1 code implementation Findings (ACL) 2021 Feilong Chen, Fandong Meng, Xiuyi Chen, Peng Li, Jie Zhou

Visual dialogue is a challenging task since it needs to answer a series of coherent questions on the basis of understanding the visual environment.

Dialogue Generation Visual Grounding

Packed Levitated Marker for Entity and Relation Extraction

1 code implementation ACL 2022 Deming Ye, Yankai Lin, Peng Li, Maosong Sun

In particular, we propose a neighborhood-oriented packing strategy, which considers the neighbor spans integrally to better model the entity boundary information.

Joint Entity and Relation Extraction

WeChat Neural Machine Translation Systems for WMT21

no code implementations WMT (EMNLP) 2021 Xianfeng Zeng, Yijin Liu, Ernan Li, Qiu Ran, Fandong Meng, Peng Li, Jinan Xu, Jie Zhou

This paper introduces WeChat AI's participation in WMT 2021 shared news translation task on English->Chinese, English->Japanese, Japanese->English and English->German.

Knowledge Distillation Machine Translation +3

Composing Recurrent Spiking Neural Networks using Locally-Recurrent Motifs and Risk-Mitigating Architectural Optimization

no code implementations 4 Aug 2021 Wenrui Zhang, Peng Li

The small size of the motifs and sparse inter-motif connectivity leads to an RSNN architecture scalable to large network sizes.

Rethinking Stealthiness of Backdoor Attack against NLP Models

1 code implementation ACL 2021 Wenkai Yang, Yankai Lin, Peng Li, Jie Zhou, Xu Sun

In this work, we point out a potential problem of current backdoor attacking research: its evaluation ignores the stealthiness of backdoor attacks, and most existing backdoor attacking methods are not stealthy either to system deployers or to system users.

Backdoor Attack Data Augmentation +2

H2Learn: High-Efficiency Learning Accelerator for High-Accuracy Spiking Neural Networks

no code implementations 25 Jul 2021 Ling Liang, Zheng Qu, Zhaodong Chen, Fengbin Tu, Yujie Wu, Lei Deng, Guoqi Li, Peng Li, Yuan Xie

Although spiking neural networks (SNNs) take benefits from the bio-plausible neural modeling, the low accuracy under the common local synaptic plasticity learning rules limits their application in many practical tasks.

Backpropagated Neighborhood Aggregation for Accurate Training of Spiking Neural Networks

no code implementations 22 Jun 2021 Yukun Yang, Wenrui Zhang, Peng Li

While backpropagation (BP) has been applied to spiking neural networks (SNNs) achieving encouraging results, a key challenge involved is to backpropagate a continuous-valued loss over layers of spiking neurons exhibiting discontinuous all-or-none firing activities.

Demonstration of Panda: A Weakly Supervised Entity Matching System

no code implementations 21 Jun 2021 Renzhi Wu, Prem Sakala, Peng Li, Xu Chu, Yeye He

Panda's IDE includes many novel features purpose-built for EM, such as smart data sampling, a built-in library of EM utility functions, automatically generated LFs, visual debugging of LFs, and finally, an EM-specific labeling model.
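To make the labeling-function (LF) idea concrete for entity matching, here is a minimal sketch in the weak-supervision style such systems are built around. The function and record fields below are hypothetical illustrations, not Panda's actual API: an LF inspects a candidate record pair and votes match, non-match, or abstain.

```python
# Vote labels in the usual weak-supervision convention.
MATCH, NON_MATCH, ABSTAIN = 1, 0, -1

def lf_same_phone(left: dict, right: dict) -> int:
    """Vote MATCH when normalized phone numbers agree, else abstain."""
    def norm(p):
        # Keep only digits so formatting differences don't matter.
        return "".join(ch for ch in p if ch.isdigit())
    lp, rp = norm(left.get("phone", "")), norm(right.get("phone", ""))
    if lp and rp:
        return MATCH if lp == rp else NON_MATCH
    return ABSTAIN

a = {"name": "Joe's Pizza", "phone": "(212) 555-0147"}
b = {"name": "Joes Pizza NYC", "phone": "212-555-0147"}
print(lf_same_phone(a, b))  # 1 (MATCH)
```

A labeling model then aggregates the votes of many such noisy LFs into probabilistic match labels, which is what makes debugging and auto-generating LFs in the IDE worthwhile.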


Context Tracking Network: Graph-based Context Modeling for Implicit Discourse Relation Recognition

no code implementations NAACL 2021 Yingxue Zhang, Fandong Meng, Peng Li, Ping Jian, Jie Zhou

Implicit discourse relation recognition (IDRR) aims to identify logical relations between two adjacent sentences in the discourse.


Fully Hyperbolic Neural Networks

1 code implementation ACL 2022 Weize Chen, Xu Han, Yankai Lin, Hexu Zhao, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou

Hyperbolic neural networks have shown great potential for modeling complex data.

Knowledge Inheritance for Pre-trained Language Models

2 code implementations NAACL 2022 Yujia Qin, Yankai Lin, Jing Yi, Jiajie Zhang, Xu Han, Zhengyan Zhang, Yusheng Su, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou

Specifically, we introduce a pre-training framework named "knowledge inheritance" (KI) and explore how knowledge distillation could serve as auxiliary supervision during pre-training to efficiently learn larger PLMs.

Domain Adaptation Knowledge Distillation +2

Manual Evaluation Matters: Reviewing Test Protocols of Distantly Supervised Relation Extraction

1 code implementation Findings (ACL) 2021 Tianyu Gao, Xu Han, Keyue Qiu, Yuzhuo Bai, Zhiyu Xie, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou

Distantly supervised (DS) relation extraction (RE) has attracted much attention in the past few years as it can utilize large-scale auto-labeled data.

Relation Extraction

CSS-LM: A Contrastive Framework for Semi-supervised Fine-tuning of Pre-trained Language Models

1 code implementation 7 Feb 2021 Yusheng Su, Xu Han, Yankai Lin, Zhengyan Zhang, Zhiyuan Liu, Peng Li, Jie Zhou, Maosong Sun

We then perform contrastive semi-supervised learning on both the retrieved unlabeled and original labeled instances to help PLMs capture crucial task-related semantic features.

Relief and Stimulus in A Cross-sector Multi-product Scarce Resource Supply Chain Network

1 code implementation 22 Jan 2021 Xiaowei Hu, Peng Li

In the era of a growing population, systemic changes to the world, and the rising risk of crises, humanity has been facing an unprecedented challenge of resource scarcity.

Growth, Electronic Structure and Superconductivity of Ultrathin Epitaxial CoSi2 Films

no code implementations 21 Jan 2021 Yuan Fang, Ding Wang, Peng Li, Hang Su, Tian Le, Yi Wu, Guo-Wei Yang, Hua-Li Zhang, Zhi-Guang Xiao, Yan-Qiu Sun, Si-Yuan Hong, Yan-Wu Xie, Huan-Hua Wang, Chao Cao, Xin Lu, Hui-Qiu Yuan, Yang Liu

We report growth, electronic structure and superconductivity of ultrathin epitaxial CoSi2 films on Si(111).

Mesoscale and Nanoscale Physics

Topological Hall Effect in a Topological Insulator Interfaced with a Magnetic Insulator

no code implementations 16 Dec 2020 Peng Li, Jinjun Ding, Steven S.-L. Zhang, James Kally, Timothy Pillsbury, Olle G. Heinonen, Gaurab Rimal, Chong Bi, August DeMann, Stuart B. Field, Weigang Wang, Jinke Tang, J. S. Jiang, Axel Hoffmann, Nitin Samarth, Mingzhong Wu

A topological insulator (TI) interfaced with a magnetic insulator (MI) may host an anomalous Hall effect (AHE), a quantum AHE, and a topological Hall effect (THE).

Materials Science Mesoscale and Nanoscale Physics Applied Physics

Rethinking the Promotion Brought by Contrastive Learning to Semi-Supervised Node Classification

no code implementations 14 Dec 2020 Deli Chen, Yankai Lin, Lei Li, Xuancheng Ren, Peng Li, Jie Zhou, Xu Sun

Graph Contrastive Learning (GCL) has proven highly effective in promoting the performance of Semi-Supervised Node Classification (SSNC).

Contrastive Learning Graph Learning +1

Neural Gibbs Sampling for Joint Event Argument Extraction

1 code implementation Asian Chapter of the Association for Computational Linguistics 2020 Xiaozhi Wang, Shengyu Jia, Xu Han, Zhiyuan Liu, Juanzi Li, Peng Li, Jie Zhou

Existing EAE methods either extract each event argument roles independently or sequentially, which cannot adequately model the joint probability distribution among event arguments and their roles.

Event Argument Extraction Event Extraction

Charge density wave and weak Kondo effect in a Dirac semimetal CeSbTe

no code implementations 23 Nov 2020 Peng Li, Baijiang Lv, Yuan Fang, Wei Guo, Zhongzheng Wu, Yi Wu, Cheng-Maw Cheng, Dawei Shen, Yuefeng Nie, Luca Petaccia, Chao Cao, Zhu-An Xu, Yang Liu

Using angle-resolved photoemission spectroscopy (ARPES) and low-energy electron diffraction (LEED), together with density-functional theory (DFT) calculation, we report the formation of charge density wave (CDW) and its interplay with the Kondo effect and topological states in CeSbTe.

Strongly Correlated Electrons Materials Science

EasyTransfer -- A Simple and Scalable Deep Transfer Learning Platform for NLP Applications

2 code implementations 18 Nov 2020 Minghui Qiu, Peng Li, Chengyu Wang, Hanjie Pan, Ang Wang, Cen Chen, Xianyan Jia, Yaliang Li, Jun Huang, Deng Cai, Wei Lin

The literature has witnessed the success of leveraging Pre-trained Language Models (PLMs) and Transfer Learning (TL) algorithms to a wide range of Natural Language Processing (NLP) applications, yet it is not easy to build an easy-to-use and scalable TL toolkit for this purpose.

Compiler Optimization Conversational Question Answering +1

Using simulation to incorporate dynamic criteria into multiple criteria decision-making

no code implementations 16 Nov 2020 Uwe Aickelin, Jenna Marie Reps, Peer-Olaf Siebers, Peng Li

In this paper, we present a case study demonstrating how dynamic and uncertain criteria can be incorporated into a multicriteria analysis with the help of discrete event simulation.

Decision Making

DisenE: Disentangling Knowledge Graph Embeddings

no code implementations 28 Oct 2020 Xiaoyu Kou, Yankai Lin, Yuntao Li, Jiahao Xu, Peng Li, Jie Zhou, Yan Zhang

Knowledge graph embedding (KGE), aiming to embed entities and relations into low-dimensional vectors, has attracted wide attention recently.

Entity Embeddings Knowledge Graph Embedding +2

Skip-Connected Self-Recurrent Spiking Neural Networks with Joint Intrinsic Parameter and Synaptic Weight Training

no code implementations 23 Oct 2020 Wenrui Zhang, Peng Li

Moreover, we propose a new backpropagation (BP) method called backpropagated intrinsic plasticity (BIP) to further boost the performance of ScSr-SNNs by training intrinsic model parameters.

MS-Ranker: Accumulating Evidence from Potentially Correct Candidates for Answer Selection

no code implementations 10 Oct 2020 Yingxue Zhang, Fandong Meng, Peng Li, Ping Jian, Jie Zhou

As conventional answer selection (AS) methods generally match the question with each candidate answer independently, they suffer from the lack of matching information between the question and the candidate.

Answer Selection Reinforcement Learning (RL)

Disentangle-based Continual Graph Representation Learning

1 code implementation EMNLP 2020 Xiaoyu Kou, Yankai Lin, Shaobo Liu, Peng Li, Jie Zhou, Yan Zhang

Graph embedding (GE) methods embed nodes (and/or edges) of a graph into a low-dimensional semantic space, and have shown their effectiveness in modeling multi-relational data.

Continual Learning Graph Embedding +1

Learning from Context or Names? An Empirical Study on Neural Relation Extraction

1 code implementation EMNLP 2020 Hao Peng, Tianyu Gao, Xu Han, Yankai Lin, Peng Li, Zhiyuan Liu, Maosong Sun, Jie Zhou

We find that (i) while context is the main source to support the predictions, RE models also heavily rely on the information from entity mentions, most of which is type information, and (ii) existing datasets may leak shallow heuristics via entity mentions and thus contribute to the high performance on RE benchmarks.

Memorization Relation Extraction

CokeBERT: Contextual Knowledge Selection and Embedding towards Enhanced Pre-Trained Language Models

1 code implementation 29 Sep 2020 Yusheng Su, Xu Han, Zhengyan Zhang, Peng Li, Zhiyuan Liu, Yankai Lin, Jie Zhou, Maosong Sun

In this paper, we propose a novel framework named Coke to dynamically select contextual knowledge and embed knowledge context according to textual context for PLMs, which can avoid the effect of redundant and ambiguous knowledge in KGs that cannot match the input text.

Knowledge Graphs

Detector tilt considerations in high-energy Bragg coherent diffraction imaging: a simulation study

1 code implementation 4 Aug 2020 Siddharth Maddali, Marc Allain, Peng Li, Virginie Chamard, Stephan O. Hruszkewycz

This paper addresses three-dimensional signal distortion and image reconstruction issues in x-ray Bragg coherent diffraction imaging (BCDI) in the event of a non-trivial, non-orthogonal orientation of the area detector with respect to the diffracted beam.

Instrumentation and Detectors Image and Video Processing

Continual Relation Learning via Episodic Memory Activation and Reconsolidation

no code implementations ACL 2020 Xu Han, Yi Dai, Tianyu Gao, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou

Continual relation learning aims to continually train a model on new data to learn incessantly emerging novel relations while avoiding catastrophically forgetting old relations.

Continual Learning

Learning to Recover from Multi-Modality Errors for Non-Autoregressive Neural Machine Translation

1 code implementation ACL 2020 Qiu Ran, Yankai Lin, Peng Li, Jie Zhou

By dynamically determining segment length and deleting repetitive segments, RecoverSAT is capable of recovering from repetitive and missing token errors.

Machine Translation Translation

Nearest Neighbor Classifiers over Incomplete Information: From Certain Answers to Certain Predictions

1 code implementation 11 May 2020 Bojan Karlaš, Peng Li, Renzhi Wu, Nezihe Merve Gürel, Xu Chu, Wentao Wu, Ce Zhang

Machine learning (ML) applications have been thriving recently, largely attributed to the increasing availability of data.

BIG-bench Machine Learning

Coreferential Reasoning Learning for Language Representation

2 code implementations EMNLP 2020 Deming Ye, Yankai Lin, Jiaju Du, Zheng-Hao Liu, Peng Li, Maosong Sun, Zhiyuan Liu

Language representation models such as BERT could effectively capture contextual semantic information from plain text, and have been shown to achieve promising results in lots of downstream NLP tasks with appropriate fine-tuning.

Relation Extraction

Temporal Spike Sequence Learning via Backpropagation for Deep Spiking Neural Networks

1 code implementation NeurIPS 2020 Wenrui Zhang, Peng Li

Spiking neural networks (SNNs) are well suited for spatio-temporal learning and implementations on energy-efficient event-driven neuromorphic processors.

Image Classification

Exploring Adversarial Attack in Spiking Neural Networks with Spike-Compatible Gradient

no code implementations 1 Jan 2020 Ling Liang, Xing Hu, Lei Deng, Yujie Wu, Guoqi Li, Yufei Ding, Peng Li, Yuan Xie

Recently, learning algorithms inspired by backpropagation through time have been widely introduced into SNNs to improve the performance, which brings the possibility to attack the models accurately given spatio-temporal gradient maps.

Adversarial Attack

DMRM: A Dual-channel Multi-hop Reasoning Model for Visual Dialog

1 code implementation 18 Dec 2019 Feilong Chen, Fandong Meng, Jiaming Xu, Peng Li, Bo Xu, Jie Zhou

Visual Dialog is a vision-language task that requires an AI agent to engage in a conversation with humans grounded in an image.

Visual Dialog

Implement Liquid Democracy on Ethereum: A Fast Algorithm for Realtime Self-tally Voting System

no code implementations 20 Nov 2019 Xuepeng Fan, Peng Li, Yulong Zeng, Xiaoping Zhou

We study the liquid democracy problem, where each voter can either vote directly for a candidate or delegate their voting power to a proxy.

Cryptography and Security

HighwayGraph: Modelling Long-distance Node Relations for Improving General Graph Neural Network

no code implementations 10 Nov 2019 Deli Chen, Xiaoqian Liu, Yankai Lin, Peng Li, Jie Zhou, Qi Su, Xu Sun

To address this issue, we propose to model long-distance node relations by simply relying on shallow GNN architectures with two solutions: (1) implicit modelling, by learning to predict node-pair relations; and (2) explicit modelling, by adding edges between nodes that potentially have the same label.

General Classification Node Classification

Guiding Non-Autoregressive Neural Machine Translation Decoding with Reordering Information

no code implementations 6 Nov 2019 Qiu Ran, Yankai Lin, Peng Li, Jie Zhou

Non-autoregressive neural machine translation (NAT) generates each target word in parallel and has achieved promising inference acceleration.

Machine Translation Translation

Comprehensive SNN Compression Using ADMM Optimization and Activity Regularization

no code implementations 3 Nov 2019 Lei Deng, Yujie Wu, Yifan Hu, Ling Liang, Guoqi Li, Xing Hu, Yufei Ding, Peng Li, Yuan Xie

As is well known, the huge memory and compute costs of both artificial neural networks (ANNs) and spiking neural networks (SNNs) greatly hinder their deployment on edge devices with high efficiency.

Model Compression Quantization

HMEAE: Hierarchical Modular Event Argument Extraction

1 code implementation IJCNLP 2019 Xiaozhi Wang, Ziqi Wang, Xu Han, Zhiyuan Liu, Juanzi Li, Peng Li, Maosong Sun, Jie Zhou, Xiang Ren

Existing event extraction methods classify each argument role independently, ignoring the conceptual correlations between different argument roles.

Event Argument Extraction Event Extraction +1

FewRel 2.0: Towards More Challenging Few-Shot Relation Classification

1 code implementation IJCNLP 2019 Tianyu Gao, Xu Han, Hao Zhu, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou

We present FewRel 2.0, a more challenging task to investigate two aspects of few-shot relation classification models: (1) Can they adapt to a new domain with only a handful of instances?

Classification Domain Adaptation +2

NumNet: Machine Reading Comprehension with Numerical Reasoning

2 code implementations IJCNLP 2019 Qiu Ran, Yankai Lin, Peng Li, Jie Zhou, Zhiyuan Liu

Numerical reasoning, such as addition, subtraction, sorting and counting, is a critical skill in human reading comprehension, which has not been well considered in existing machine reading comprehension (MRC) systems.

Machine Reading Comprehension Question Answering

Boosting Throughput and Efficiency of Hardware Spiking Neural Accelerators using Time Compression Supporting Multiple Spike Codes

no code implementations10 Sep 2019 Changqing Xu, Wenrui Zhang, Yu Liu, Peng Li

Using spiking speech and image recognition datasets, we demonstrate the feasibility of supporting large time compression ratios of up to 16x, delivering up to 15.93x, 13.88x, and 86.21x improvements in throughput, energy dissipation, and the tradeoffs between hardware area, runtime, energy, and classification accuracy, respectively, based on different spike codes on a Xilinx Zynq-7000 FPGA.

Quantifying and Correlating Rhythm Formants in Speech

no code implementations3 Sep 2019 Dafydd Gibbon, Peng Li

Consequently, only the LF LTS of the absolute speech signal is used in the empirical analysis.

Spike-Train Level Backpropagation for Training Deep Recurrent Spiking Neural Networks

1 code implementation NeurIPS 2019 Wenrui Zhang, Peng Li

However, the practical application of RSNNs is severely limited by challenges in training.

FastPose: Towards Real-time Pose Estimation and Tracking via Scale-normalized Multi-task Networks

no code implementations15 Aug 2019 Jiabin Zhang, Zheng Zhu, Wei Zou, Peng Li, Yanwei Li, Hu Su, Guan Huang

Given the results of MTN, we adopt an occlusion-aware Re-ID feature strategy in the pose tracking module, where pose information is utilized to infer the occlusion state to make better use of Re-ID feature.

Human Detection Multi-Person Pose Estimation +3

General approaches for shear-correcting coordinate transformations in Bragg coherent diffraction imaging: Part 2

1 code implementation15 Aug 2019 Peng Li, Siddharth Maddali, Anastasios Pateras, Irene Calvo-Almazan, Stephan O. Hruszkewycz, Virginie Chamard, Marc Allain

To deal with this, the currently favored approach (detailed in Part I) is to perform the entire inversion in conjugate non-orthogonal real and Fourier space frames, and to transform the 3D sample image into an orthogonal frame as a post-processing step for result analysis.

Instrumentation and Detectors Signal Processing

General approaches for shear-correcting coordinate transformations in Bragg coherent diffraction imaging: Part 1

1 code implementation15 Aug 2019 Siddharth Maddali, Peng Li, Anastasios Pateras, Daniel Timbie, Nazar Delegan, Alex Crook, Hope Lee, Irene Calvo-Almazan, Dina Sheyfer, Wonsuk Cha, F. Joseph Heremans, David D. Awschalom, Virginie Chamard, Marc Allain, Stephan O. Hruszkewycz

Part II builds upon the geometric theory developed in Part I with the formalism to correct the shear distortions directly on an orthogonal grid within the phase retrieval algorithm itself, allowing more physically realistic constraints to be applied.

Instrumentation and Detectors

Global Adversarial Attacks for Assessing Deep Learning Robustness

no code implementations19 Jun 2019 Hanbin Hu, Mit Shah, Jianhua Z. Huang, Peng Li

It has been shown that deep neural networks (DNNs) may be vulnerable to adversarial attacks, raising the concern on their robustness particularly for safety-critical applications.

SAVIOR: Towards Bug-Driven Hybrid Testing

no code implementations18 Jun 2019 Yao-Hui Chen, Peng Li, Jun Xu, Shengjian Guo, Rundong Zhou, Yulong Zhang, Tao Wei, Long Lu

Unlike the existing hybrid testing tools, SAVIOR prioritizes the concolic execution of the seeds that are likely to uncover more vulnerabilities.

Software Engineering

DocRED: A Large-Scale Document-Level Relation Extraction Dataset

4 code implementations ACL 2019 Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zheng-Hao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, Maosong Sun

Multiple entities in a document generally exhibit complex inter-sentence relations, and cannot be well handled by existing relation extraction (RE) methods that typically focus on extracting intra-sentence relations for single entity pairs.

Document-level Relation Extraction

State-aware Re-identification Feature for Multi-target Multi-camera Tracking

no code implementations4 Jun 2019 Peng Li, Jiabin Zhang, Zheng Zhu, Yanwei Li, Lu Jiang, Guan Huang

Multi-target Multi-camera Tracking (MTMCT) aims to extract the trajectories from videos captured by a set of cameras.


Adversarial Training for Weakly Supervised Event Detection

1 code implementation NAACL 2019 Xiaozhi Wang, Xu Han, Zhiyuan Liu, Maosong Sun, Peng Li

Modern weakly supervised methods for event detection (ED) avoid time-consuming human annotation and achieve promising results by learning from auto-labeled data.

Event Detection

A Dual Reinforcement Learning Framework for Unsupervised Text Style Transfer

2 code implementations24 May 2019 Fuli Luo, Peng Li, Jie Zhou, Pengcheng Yang, Baobao Chang, Zhifang Sui, Xu Sun

Therefore, in this paper, we propose a dual reinforcement learning framework to directly transfer the style of the text via a one-step mapping model, without any separation of content and style.

reinforcement-learning Reinforcement Learning (RL) +2

CleanML: A Study for Evaluating the Impact of Data Cleaning on ML Classification Tasks

no code implementations20 Apr 2019 Peng Li, Xi Rao, Jennifer Blase, Yue Zhang, Xu Chu, Ce Zhang

Data quality affects machine learning (ML) model performance, and data scientists spend a considerable amount of time on data cleaning before model training.

General Classification Two-sample testing

Option Comparison Network for Multiple-choice Reading Comprehension

no code implementations7 Mar 2019 Qiu Ran, Peng Li, Weiwei Hu, Jie Zhou

However, humans typically compare the options at multiple-granularity level before reading the article in detail to make reasoning more efficient.

Multiple-choice Question Answering +1

Robust Deep Multi-Modal Sensor Fusion using Fusion Weight Regularization and Target Learning

no code implementations29 Jan 2019 Myung Seok Shim, Chenye Zhao, Yang Li, Xuchong Zhang, Wenrui Zhang, Peng Li

Sensor fusion has wide applications in many domains including health care and autonomous systems.

Evolving the pulmonary nodules diagnosis from classical approaches to deep learning aided decision support: three decades development course and future prospect

no code implementations23 Jan 2019 Bo Liu, Wenhao Chi, Xinran Li, Peng Li, Wenhua Liang, Haiping Liu, Wei Wang, Jianxing He

Lung cancer is the commonest cause of cancer deaths worldwide, and its mortality can be reduced significantly by performing early diagnosis and screening.

Efficient Two-Step Adversarial Defense for Deep Neural Networks

no code implementations ICLR 2019 Ting-Jui Chang, Yukun He, Peng Li

However, the computational cost of the adversarial training with PGD and other multi-step adversarial examples is much higher than that of the adversarial training with other simpler attack techniques.

Adversarial Defense

Optimized Gated Deep Learning Architectures for Sensor Fusion

no code implementations ICLR 2019 Myung Seok Shim, Peng Li

Sensor fusion is a key technology that integrates various sensory inputs to allow for robust decision making in many applications such as autonomous driving and robot control.

Autonomous Driving Decision Making +1

Hierarchical Relation Extraction with Coarse-to-Fine Grained Attention

1 code implementation EMNLP 2018 Xu Han, Pengfei Yu, Zhiyuan Liu, Maosong Sun, Peng Li

In this paper, we aim to incorporate the hierarchical information of relations for distantly supervised relation extraction and propose a novel hierarchical attention scheme.

Knowledge Graphs Relation Extraction

Image Captioning based on Deep Reinforcement Learning

no code implementations13 Sep 2018 Haichao Shi, Peng Li, Bo Wang, Zhenyu Wang

In this paper, we propose a novel architecture that applies deep reinforcement learning to optimize image captioning tasks.

Image Captioning Policy Gradient Methods +2

Hybrid Macro/Micro Level Backpropagation for Training Deep Spiking Neural Networks

1 code implementation NeurIPS 2018 Yingyezhe Jin, Wenrui Zhang, Peng Li

We evaluate the proposed HM2-BP algorithm by training deep fully connected and convolutional SNNs based on the static MNIST [14] and dynamic neuromorphic N-MNIST [26].

speech-recognition Speech Recognition

Dataset and Neural Recurrent Sequence Labeling Model for Open-Domain Factoid Question Answering

3 code implementations21 Jul 2016 Peng Li, Wei Li, Zhengyan He, Xuguang Wang, Ying Cao, Jie Zhou, Wei Xu

While question answering (QA) with neural networks, i.e., neural QA, has achieved promising results in recent years, the lack of large-scale real-world QA datasets is still a challenge for developing and evaluating neural QA systems.

Answer Generation Question Answering

Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation

1 code implementation TACL 2016 Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, Wei Xu

On the WMT'14 English-to-French task, we achieve BLEU=37.7 with a single attention model, which outperforms the corresponding single shallow model by 6.2 BLEU points.

Machine Translation NMT +1

Clinical Information Extraction via Convolutional Neural Network

no code implementations30 Mar 2016 Peng Li, Heng Huang

We report an implementation of a clinical information extraction tool that leverages deep neural network to annotate event spans and their attributes from raw clinical notes and pathology reports.

Enhancing Sentence Relation Modeling with Auxiliary Character-level Embedding

no code implementations30 Mar 2016 Peng Li, Heng Huang

Neural network based approaches for sentence relation modeling automatically generate hidden matching features from raw sentence pairs.
