Search Results for author: Ning Ding

Found 80 papers, 50 papers with code

Free Process Rewards without Process Labels

1 code implementation • 2 Dec 2024 • Lifan Yuan, Wendi Li, Huayu Chen, Ganqu Cui, Ning Ding, Kaiyan Zhang, BoWen Zhou, Zhiyuan Liu, Hao Peng

The only assumption is to parameterize the outcome reward as the log-likelihood ratios of the policy and reference models, which can be optimized regardless of the specific choice of loss objectives.

Math
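To ground that parameterization, here is a minimal sketch in PyTorch (function and variable names are ours, not the paper's): with the outcome reward taken as beta times the log-likelihood ratio of the policy to the reference model, a process reward for each step falls out as the difference between consecutive prefix ratios.

import torch

def implicit_process_rewards(logp_policy, logp_ref, beta=1.0):
    # logp_policy, logp_ref: (seq_len,) log-probs of the sampled tokens
    # under the policy and reference models.
    # Prefix log-likelihood ratio r_t = beta * log(pi(y_<=t) / pi_ref(y_<=t)).
    prefix_ratio = beta * torch.cumsum(logp_policy - logp_ref, dim=0)
    # Reward credited to step t is r_t - r_{t-1} (with r_0 = 0).
    prev = torch.cat([prefix_ratio.new_zeros(1), prefix_ratio[:-1]])
    return prefix_ratio - prev

# Toy usage with made-up token probabilities:
lp_policy = torch.log(torch.tensor([0.9, 0.6, 0.8]))
lp_ref = torch.log(torch.tensor([0.7, 0.6, 0.5]))
print(implicit_process_rewards(lp_policy, lp_ref))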

MemoryFormer: Minimize Transformer Computation by Removing Fully-Connected Layers

no code implementations • 20 Nov 2024 • Ning Ding, Yehui Tang, Haochen Qin, Zhenli Zhou, Chao Xu, Lin Li, Kai Han, Heng Liao, Yunhe Wang

This is made possible by utilizing an alternative method for feature transformation to replace the linear projection of fully-connected layers.
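The excerpt does not specify the alternative transformation; purely as a hedged illustration of the general idea of trading a matrix multiply for memory lookups (the chunking and hashing scheme below is our assumption, not the paper's exact design):

import torch
import torch.nn as nn

class HashedMemoryProjection(nn.Module):
    # Illustrative stand-in for a fully-connected layer: instead of
    # computing W @ x, split x into chunks, hash each chunk to a bucket,
    # and sum the embeddings stored at those buckets.
    def __init__(self, in_dim, out_dim, num_chunks=8, buckets=256):
        super().__init__()
        assert in_dim % num_chunks == 0
        chunk = in_dim // num_chunks
        self.num_chunks, self.buckets = num_chunks, buckets
        # Random signed projections give a cheap locality-sensitive hash.
        self.register_buffer("hash_planes", torch.randn(num_chunks, chunk, 8))
        self.tables = nn.Parameter(torch.randn(num_chunks, buckets, out_dim) * 0.02)

    def forward(self, x):  # x: (batch, in_dim)
        chunks = x.view(x.size(0), self.num_chunks, -1)
        bits = (torch.einsum("bnc,nch->bnh", chunks, self.hash_planes) > 0).long()
        weights = 2 ** torch.arange(8, device=x.device)
        idx = (bits * weights).sum(-1) % self.buckets        # (batch, num_chunks)
        rows = self.tables[torch.arange(self.num_chunks), idx]  # (batch, num_chunks, out_dim)
        return rows.sum(1)

y = HashedMemoryProjection(64, 32)(torch.randn(4, 64))
print(y.shape)  # torch.Size([4, 32])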

Automating Exploratory Proteomics Research via Language Models

no code implementations • 6 Nov 2024 • Ning Ding, Shang Qu, Linhai Xie, Yifei Li, Zaoqu Liu, Kaiyan Zhang, Yibai Xiong, Yuxin Zuo, Zhangren Chen, Ermo Hua, Xingtai Lv, Youbang Sun, Yang Li, Dong Li, Fuchu He, BoWen Zhou

By automating complex proteomics analysis workflows and hypothesis generation, PROTEUS has the potential to considerably accelerate the pace of scientific discovery in proteomics research, enabling researchers to efficiently explore large-scale datasets and uncover biological insights.

scientific discovery

Scalable Efficient Training of Large Language Models with Low-dimensional Projected Attention

1 code implementation • 4 Nov 2024 • Xingtai Lv, Ning Ding, Kaiyan Zhang, Ermo Hua, Ganqu Cui, BoWen Zhou

Improving the effectiveness and efficiency of large language models (LLMs) simultaneously is a critical yet challenging research goal.

CALF: Benchmarking Evaluation of LFQA Using Chinese Examinations

no code implementations • 2 Oct 2024 • Yuchen Fan, Xin Zhong, Heng Zhou, Yuchen Zhang, Mingyu Liang, Chengxing Xie, Ermo Hua, Ning Ding, BoWen Zhou

To address this gap, we make the first attempt by proposing a well-constructed, reference-based benchmark named Chinese exAmination for LFQA Evaluation (CALF), aiming to rigorously assess the performance of automatic evaluation metrics for LFQA.

Benchmarking, Long Form Question Answering

Space evaluation based on pitch control using drone video in Ultimate

1 code implementation • 3 Sep 2024 • Shunsuke Iwashita, Atom Scott, Rikuhei Umemoto, Ning Ding, Keisuke Fujii

A distinctive aspect of Ultimate is that the player holding the disc is unable to move, underscoring the significance of creating space to receive passes.

Pitch control

Enhancing Neural Radiance Fields with Depth and Normal Completion Priors from Sparse Views

no code implementations • 8 Jul 2024 • Jiawei Guo, HungChyun Chou, Ning Ding

Based on the sparse depth maps and a normal estimator, we generate sparse normal maps for training a normal completion prior with precise standard deviations.

Patch Matching

EVA-Score: Evaluating Abstractive Long-form Summarization on Informativeness through Extraction and Validation

no code implementations • 6 Jul 2024 • Yuchen Fan, Xin Zhong, Yazhe Wan, Chengsi Wang, Haonan Cheng, Gaoche Wu, Ning Ding, BoWen Zhou

Current evaluation metrics either use traditional metrics like ROUGE and BERTScore, which rely on surface-level similarity and fail to consider informativeness, or simple LLM-based metrics, which are not robust and are easily overwhelmed by long contexts.

Document-level Relation Extraction, Informativeness

Fast and Slow Generating: An Empirical Study on Large and Small Language Models Collaborative Decoding

1 code implementation • 18 Jun 2024 • Kaiyan Zhang, Jianyu Wang, Ning Ding, Biqing Qi, Ermo Hua, Xingtai Lv, BoWen Zhou

Our research underscores that the fundamental distinction between System 1 and System 2 lies in the uncertainty of next token predictions, where interventions by System 2 are crucial to support System 1.

Hallucination
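A minimal sketch of such uncertainty-gated collaboration (the entropy threshold and the stand-in models are hypothetical): the small model drafts each token, and the large model is consulted only when the small model's next-token entropy is high.

import torch

def collaborative_decode(small_logits_fn, large_logits_fn, prompt_ids,
                         max_new_tokens=32, entropy_threshold=2.0):
    # Greedy decoding where System 1 (small model) drafts each token and
    # System 2 (large model) intervenes when System 1 is uncertain.
    # The *_logits_fn callables map a 1-D id tensor to next-token logits.
    ids = prompt_ids.clone()
    for _ in range(max_new_tokens):
        logits = small_logits_fn(ids)
        probs = torch.softmax(logits, dim=-1)
        entropy = -(probs * torch.log(probs + 1e-9)).sum()
        if entropy > entropy_threshold:   # uncertain -> ask the large model
            logits = large_logits_fn(ids)
        ids = torch.cat([ids, logits.argmax().view(1)])
    return ids

# Toy usage with random "models" over a 100-token vocabulary:
vocab = 100
small = lambda ids: torch.randn(vocab)
large = lambda ids: torch.randn(vocab)
print(collaborative_decode(small, large, torch.tensor([1, 2, 3])).shape)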

Zero-Shot Generalization during Instruction Tuning: Insights from Similarity and Granularity

no code implementations • 17 Jun 2024 • Bingxiang He, Ning Ding, Cheng Qian, Jia Deng, Ganqu Cui, Lifan Yuan, Huan-ang Gao, Huimin Chen, Zhiyuan Liu, Maosong Sun

For the first time, we show that zero-shot generalization during instruction tuning is a form of similarity-based generalization between training and test data at the instance level.

Continual Learning, Zero-shot Generalization

UltraMedical: Building Specialized Generalists in Biomedicine

1 code implementation • 6 Jun 2024 • Kaiyan Zhang, Sihang Zeng, Ermo Hua, Ning Ding, Zhang-Ren Chen, Zhiyuan Ma, Haoxin Li, Ganqu Cui, Biqing Qi, Xuekai Zhu, Xingtai Lv, Hu Jinfang, Zhiyuan Liu, BoWen Zhou

Large Language Models (LLMs) have demonstrated remarkable capabilities across various domains and are moving towards more specialized areas.

Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process

1 code implementation • 20 May 2024 • Ermo Hua, Biqing Qi, Kaiyan Zhang, Yue Yu, Ning Ding, Xingtai Lv, Kai Tian, BoWen Zhou

To obtain a unified understanding, we interpret SFT and PO with two sub-processes -- Preference Estimation and Transition Optimization -- defined at token level within the Markov Decision Process (MDP) framework.

Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning

1 code implementation • 9 May 2024 • Shibo Jie, Yehui Tang, Ning Ding, Zhi-Hong Deng, Kai Han, Yunhe Wang

Current solutions for efficiently constructing large vision-language (VL) models follow a two-step paradigm: projecting the output of pre-trained vision encoders to the input space of pre-trained language models as visual prompts; and then transferring the models to downstream VL tasks via end-to-end parameter-efficient fine-tuning (PEFT).

parameter-efficient fine-tuning, Visual Prompting
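As a sketch of the first step of that two-step paradigm (dimensions and names are illustrative): a learned linear layer projects vision-encoder features into the language model's input embedding space, where they are prepended as visual prompts.

import torch
import torch.nn as nn

class VisualPromptProjector(nn.Module):
    # Projects vision-encoder patch features into the LM's input
    # embedding space so they can be prepended as visual prompts.
    def __init__(self, vision_dim=1024, lm_dim=4096):
        super().__init__()
        self.proj = nn.Linear(vision_dim, lm_dim)

    def forward(self, patch_feats, token_embeds):
        # patch_feats: (batch, num_patches, vision_dim)
        # token_embeds: (batch, seq_len, lm_dim)
        visual_prompts = self.proj(patch_feats)
        return torch.cat([visual_prompts, token_embeds], dim=1)

proj = VisualPromptProjector()
x = proj(torch.randn(2, 16, 1024), torch.randn(2, 8, 4096))
print(x.shape)  # torch.Size([2, 24, 4096])

The paper's own method, as its title suggests, moves such prompts out of the input sequence; the sketch covers only the baseline paradigm described in the excerpt.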

TeamTrack: A Dataset for Multi-Sport Multi-Object Tracking in Full-pitch Videos

no code implementations • 22 Apr 2024 • Atom Scott, Ikuma Uchida, Ning Ding, Rikuhei Umemoto, Rory Bunker, Ren Kobayashi, Takeshi Koyama, Masaki Onishi, Yoshinari Kameda, Keisuke Fujii

Multi-object tracking (MOT) is a critical and challenging task in computer vision, particularly in situations involving objects with similar appearances but diverse movements, as seen in team sports.

Benchmarking, Multi-Object Tracking +2

Mastering Text, Code and Math Simultaneously via Fusing Highly Specialized Language Models

no code implementations • 13 Mar 2024 • Ning Ding, Yulin Chen, Ganqu Cui, Xingtai Lv, Weilin Zhao, Ruobing Xie, BoWen Zhou, Zhiyuan Liu, Maosong Sun

Underlying data distributions of natural language, programming code, and mathematical symbols vary vastly, presenting a complex challenge for large language models (LLMs) that strive to achieve high performance across all three domains simultaneously.

Math

CoGenesis: A Framework Collaborating Large and Small Language Models for Secure Context-Aware Instruction Following

no code implementations • 5 Mar 2024 • Kaiyan Zhang, Jianyu Wang, Ermo Hua, Biqing Qi, Ning Ding, BoWen Zhou

With the advancement of language models (LMs), their exposure to private data is increasingly inevitable, and their deployment (especially for smaller ones) on personal devices, such as PCs and smartphones, has become a prevailing trend.

Instruction Following

Controllable Preference Optimization: Toward Controllable Multi-Objective Alignment

1 code implementation • 29 Feb 2024 • Yiju Guo, Ganqu Cui, Lifan Yuan, Ning Ding, Zexu Sun, Bowen Sun, Huimin Chen, Ruobing Xie, Jie Zhou, Yankai Lin, Zhiyuan Liu, Maosong Sun

In practice, the multifaceted nature of human preferences inadvertently introduces what is known as the "alignment tax": a compromise where enhancements in alignment within one objective (e.g., harmlessness) can diminish performance in others (e.g., helpfulness).

Navigate

UltraLink: An Open-Source Knowledge-Enhanced Multilingual Supervised Fine-tuning Dataset

1 code implementation • 7 Feb 2024 • Haoyu Wang, Shuo Wang, Yukun Yan, Xujia Wang, Zhiyu Yang, Yuzhuang Xu, Zhenghao Liu, Liner Yang, Ning Ding, Xu Han, Zhiyuan Liu, Maosong Sun

Different from previous works that simply translate English instructions, we consider both the language-specific and language-agnostic abilities of LLMs.

Cross-Lingual Transfer, Data Augmentation

Sparse Low-rank Adaptation of Pre-trained Language Models

1 code implementation • 20 Nov 2023 • Ning Ding, Xingtai Lv, Qiaosen Wang, Yulin Chen, BoWen Zhou, Zhiyuan Liu, Maosong Sun

Recognizing the need for more flexible adaptation, we extend the methodology of LoRA to an innovative approach we call sparse low-rank adaptation (SoRA) that enables dynamic adjustments to the intrinsic rank during the adaptation process.

Memorization
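A hedged sketch of that mechanism (gate handling simplified; the actual method sparsifies the gate with a proximal step during training): a gate vector sits between the LoRA down- and up-projections, and zeroed gate entries prune rank-1 components, lowering the effective rank.

import torch
import torch.nn as nn

class SoRALayer(nn.Module):
    # Simplified sparse low-rank adapter: h = W x + B (g * (A x)).
    # Entries of the gate g driven to zero prune the corresponding
    # rank-1 components, letting the effective rank adapt in training.
    def __init__(self, in_dim, out_dim, max_rank=16):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)  # frozen W
        self.weight.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(max_rank, in_dim) * 0.02)
        self.B = nn.Parameter(torch.zeros(out_dim, max_rank))
        self.gate = nn.Parameter(torch.ones(max_rank))

    def forward(self, x):
        delta = (self.gate * (x @ self.A.T)) @ self.B.T
        return self.weight(x) + delta

layer = SoRALayer(64, 64)
print(layer(torch.randn(4, 64)).shape)  # torch.Size([4, 64])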

INTERVENOR: Prompting the Coding Ability of Large Language Models with the Interactive Chain of Repair

1 code implementation • 16 Nov 2023 • Hanbin Wang, Zhenghao Liu, Shuo Wang, Ganqu Cui, Ning Ding, Zhiyuan Liu, Ge Yu

INTERVENOR prompts Large Language Models (LLMs) to play distinct roles during the code repair process, functioning as both a Code Learner and a Code Teacher.

Code Repair, Code Translation

CRaSh: Clustering, Removing, and Sharing Enhance Fine-tuning without Full Large Language Model

1 code implementation • 24 Oct 2023 • Kaiyan Zhang, Ning Ding, Biqing Qi, Xuekai Zhu, Xinwei Long, BoWen Zhou

Instruction tuning has recently been recognized as an effective way of aligning Large Language Models (LLMs) to enhance their generalization ability across various tasks.

Clustering, Language Modelling +1

UltraFeedback: Boosting Language Models with Scaled AI Feedback

4 code implementations • 2 Oct 2023 • Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Bingxiang He, Wei Zhu, Yuan Ni, Guotong Xie, Ruobing Xie, Yankai Lin, Zhiyuan Liu, Maosong Sun

Our work validates the effectiveness of scaled AI feedback data in constructing strong open-source chat language models, serving as a solid foundation for future feedback learning research.

Language Modelling

Empowering Private Tutoring by Chaining Large Language Models

no code implementations • 15 Sep 2023 • Yulin Chen, Ning Ding, Hai-Tao Zheng, Zhiyuan Liu, Maosong Sun, BoWen Zhou

Artificial intelligence has been applied in various aspects of online education to facilitate teaching and learning.

The Impact of Different Backbone Architecture on Autonomous Vehicle Dataset

no code implementations • 15 Sep 2023 • Ning Ding, Azim Eskandarian

Object detection is a crucial component of autonomous driving, and many detection applications have been developed to address this task.

Autonomous Driving, Object +2

OpenDelta: A Plug-and-play Library for Parameter-efficient Adaptation of Pre-trained Models

1 code implementation • 5 Jul 2023 • Shengding Hu, Ning Ding, Weilin Zhao, Xingtai Lv, Zhen Zhang, Zhiyuan Liu, Maosong Sun

The scale of large pre-trained models (PTMs) poses significant challenges in adapting to downstream tasks due to the high optimization overhead and storage costs associated with full-parameter fine-tuning.

Exploring the Impact of Model Scaling on Parameter-Efficient Tuning

1 code implementation • 4 Jun 2023 • Yusheng Su, Chi-Min Chan, Jiali Cheng, Yujia Qin, Yankai Lin, Shengding Hu, Zonghan Yang, Ning Ding, Xingzhi Sun, Guotong Xie, Zhiyuan Liu, Maosong Sun

Our investigations reveal that model scaling (1) mitigates the effects of the positions of tunable parameters on performance, and (2) enables tuning methods to achieve performance comparable to full-parameter fine-tuning by optimizing fewer tunable parameters.

GPT4Image: Can Large Pre-trained Models Help Vision Models on Perception Tasks?

1 code implementation • 1 Jun 2023 • Ning Ding, Yehui Tang, Zhongqian Fu, Chao Xu, Kai Han, Yunhe Wang

We present a new learning paradigm in which the knowledge extracted from large pre-trained models is utilized to help models like CNNs and ViTs learn enhanced representations and achieve better performance.

Descriptive, Image Classification

Exploring Lottery Prompts for Pre-trained Language Models

no code implementations • 31 May 2023 • Yulin Chen, Ning Ding, Xiaobin Wang, Shengding Hu, Hai-Tao Zheng, Zhiyuan Liu, Pengjun Xie

Consistently scaling pre-trained language models (PLMs) imposes substantial burdens on model adaptation, necessitating more efficient alternatives to conventional fine-tuning.

Enhancing Chat Language Models by Scaling High-quality Instructional Conversations

1 code implementation • 23 May 2023 • Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, BoWen Zhou

Fine-tuning on instruction data has been widely validated as an effective practice for implementing chat language models like ChatGPT.

Diversity

SalienDet: A Saliency-based Feature Enhancement Algorithm for Object Detection for Autonomous Driving

1 code implementation • 11 May 2023 • Ning Ding, Ce Zhang, Azim Eskandarian

On the other hand, unknown objects, which have not been seen in the training sample set, are one of the reasons that hinder autonomous vehicles from driving beyond the operational domain.

Autonomous Driving, Incremental Learning +3

Estimation of control area in badminton doubles with pose information from top and back view drone videos

1 code implementation • 7 May 2023 • Ning Ding, Kazuya Takeda, Wenhui Jin, Yingjiu Bei, Keisuke Fujii

In this work, we present the first annotated drone dataset from top and back views in badminton doubles and propose a framework to estimate the control area probability map, which can be used to evaluate teamwork performance.

Visual Tracking

Enhancing Depth Completion with Multi-View Monitored Distillation

no code implementations • 28 Mar 2023 • Jia-Wei Guo, Cong Li, Sen-Hua Zhu, Chang-Zheng Zhang, Ming Ouyang, Ning Ding, Hung-Chyun Chou

Our approach builds upon the state-of-the-art ensemble distillation method, in which we introduce a stereo-based model as a teacher model to improve the accuracy of the student model for depth completion.

Depth Completion

CHMATCH: Contrastive Hierarchical Matching and Robust Adaptive Threshold Boosted Semi-Supervised Learning

1 code implementation • CVPR 2023 • Jianlong Wu, Haozhe Yang, Tian Gan, Ning Ding, Feijun Jiang, Liqiang Nie

In the meantime, we make full use of the structured information in the hierarchical labels to learn an accurate affinity graph for contrastive learning.

Contrastive Learning

Network Expansion for Practical Training Acceleration

1 code implementation • CVPR 2023 • Ning Ding, Yehui Tang, Kai Han, Chao Xu, Yunhe Wang

Recently, the sizes of deep neural networks and training datasets have both increased drastically in the pursuit of better performance in a practical sense.

Decoder Tuning: Efficient Language Understanding as Decoding

3 code implementations • 16 Dec 2022 • Ganqu Cui, Wentao Li, Ning Ding, Longtao Huang, Zhiyuan Liu, Maosong Sun

With the ever-growing sizes of pre-trained models (PTMs), it has been an emerging practice to provide only the inference APIs for users, namely the model-as-a-service (MaaS) setting.

Decoder, Natural Language Understanding

MAVEN-ERE: A Unified Large-scale Dataset for Event Coreference, Temporal, Causal, and Subevent Relation Extraction

1 code implementation • 14 Nov 2022 • Xiaozhi Wang, Yulin Chen, Ning Ding, Hao Peng, Zimu Wang, Yankai Lin, Xu Han, Lei Hou, Juanzi Li, Zhiyuan Liu, Peng Li, Jie Zhou

It contains 103,193 event coreference chains, 1,216,217 temporal relations, 57,992 causal relations, and 15,841 subevent relations, which is larger than existing datasets for all of the ERE tasks by at least an order of magnitude.

Event Relation Extraction, Relation +1

Few-shot Classification with Hypersphere Modeling of Prototypes

no code implementations • 10 Nov 2022 • Ning Ding, Yulin Chen, Ganqu Cui, Xiaobin Wang, Hai-Tao Zheng, Zhiyuan Liu, Pengjun Xie

Moreover, it is more convenient to perform metric-based classification with hypersphere prototypes than statistical modeling, as we only need to calculate the distance from a data point to the surface of the hypersphere.

Classification, Few-Shot Learning +1
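A minimal sketch of that metric (names are ours): with each class prototype stored as a center c and a radius r, the distance from a query x to the hypersphere's surface is ||x - c|| - r, and classification picks the class whose surface is nearest.

import torch

def hypersphere_distances(x, centers, radii):
    # Distance from query x to the surface of each class hypersphere.
    # x: (dim,), centers: (num_classes, dim), radii: (num_classes,).
    # Negative values mean x lies inside that hypersphere.
    return torch.linalg.norm(centers - x, dim=1) - radii

# Toy usage: 3 classes in a 5-dimensional embedding space; here we take
# the smallest absolute surface distance as the prediction (one possible
# convention, chosen for illustration).
centers = torch.randn(3, 5)
radii = torch.tensor([1.0, 0.5, 2.0])
x = torch.randn(5)
print(hypersphere_distances(x, centers, radii).abs().argmin())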

Sparse Structure Search for Delta Tuning

1 code implementation • NeurIPS 2022 • Shengding Hu, Zhen Zhang, Ning Ding, Yadao Wang, Yasheng Wang, Zhiyuan Liu, Maosong Sun

Generally, DT methods carefully design delta modules (DT modules), which can be applied to arbitrary fine-grained positions inside PTMs.

Different Tunes Played with Equal Skill: Exploring a Unified Optimization Subspace for Delta Tuning

1 code implementation • 24 Oct 2022 • Jing Yi, Weize Chen, Yujia Qin, Yankai Lin, Ning Ding, Xu Han, Zhiyuan Liu, Maosong Sun, Jie Zhou

To fathom the mystery, we hypothesize that the adaptations of different DETs could all be reparameterized as low-dimensional optimizations in a unified optimization subspace, which could be found by jointly decomposing independent solutions of different DETs.

Improving Task Generalization via Unified Schema Prompt

no code implementations • 5 Aug 2022 • Wanjun Zhong, Yifan Gao, Ning Ding, Zhiyuan Liu, Ming Zhou, Jiahai Wang, Jian Yin, Nan Duan

Task generalization has been a long-standing challenge in Natural Language Processing (NLP).

Sparse Structure Search for Parameter-Efficient Tuning

no code implementations • 15 Jun 2022 • Shengding Hu, Zhen Zhang, Ning Ding, Yadao Wang, Yasheng Wang, Zhiyuan Liu, Maosong Sun

The searched structures preserve more than 99% of fine-tuning performance with 0.01% trainable parameters.

A Survey on Video Action Recognition in Sports: Datasets, Methods and Applications

1 code implementation • 2 Jun 2022 • Fei Wu, Qingzhong Wang, Jian Bian, Haoyi Xiong, Ning Ding, Feixiang Lu, Jun Cheng, Dejing Dou

Finally, we discuss the challenges and unsolved problems in this area and, to facilitate sports analytics, we develop a toolbox using PaddlePaddle that supports football, basketball, table tennis and figure skating action recognition.

Action Recognition, Sports Analytics +1

ProQA: Structural Prompt-based Pre-training for Unified Question Answering

1 code implementation • NAACL 2022 • Wanjun Zhong, Yifan Gao, Ning Ding, Yujia Qin, Zhiyuan Liu, Ming Zhou, Jiahai Wang, Jian Yin, Nan Duan

Furthermore, ProQA exhibits strong ability in both continual learning and transfer learning by taking advantage of the structural prompt.

Continual Learning, Few-Shot Learning +2

Source-Free Domain Adaptation via Distribution Estimation

1 code implementation • CVPR 2022 • Ning Ding, Yixing Xu, Yehui Tang, Chao Xu, Yunhe Wang, DaCheng Tao

Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.

Privacy Preserving, Source-Free Domain Adaptation

Prototypical Verbalizer for Prompt-based Few-shot Tuning

1 code implementation • ACL 2022 • Ganqu Cui, Shengding Hu, Ning Ding, Longtao Huang, Zhiyuan Liu

However, manual verbalizers heavily depend on domain-specific prior knowledge and human effort, while finding appropriate label words automatically still remains challenging. In this work, we propose the prototypical verbalizer (ProtoVerb), which is built directly from training data.

Contrastive Learning, Entity Typing +2

Upright-Net: Learning Upright Orientation for 3D Point Cloud

no code implementations • CVPR 2022 • Xufang Pang, Feng Li, Ning Ding, Xiaopin Zhong

Extensive experiments show that the pose of the input 3D models exerts a tremendous influence on automatic 3D shape analysis.

OpenPrompt: An Open-source Framework for Prompt-learning

2 code implementations • ACL 2022 • Ning Ding, Shengding Hu, Weilin Zhao, Yulin Chen, Zhiyuan Liu, Hai-Tao Zheng, Maosong Sun

Prompt-learning has become a new paradigm in modern natural language processing, which directly adapts pre-trained language models (PLMs) to cloze-style prediction, autoregressive modeling, or sequence-to-sequence generation, resulting in promising performance on various tasks.
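To make the cloze-style adaptation concrete, here is a generic sketch using Hugging Face Transformers rather than OpenPrompt's own API (the template and label words are made up): the input is wrapped in a template with a mask slot, and a verbalizer maps classes to label words scored at that slot.

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Wrap the input in a cloze template and map classes to label words.
text = "The movie was a waste of two hours."
prompt = f"{text} It was {tokenizer.mask_token}."
label_words = {"positive": "great", "negative": "terrible"}

inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]

# The verbalizer: compare logits of the label words at the mask slot.
scores = {c: logits[tokenizer.convert_tokens_to_ids(w)].item()
          for c, w in label_words.items()}
print(max(scores, key=scores.get))  # expected: "negative"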

Exploring Universal Intrinsic Task Subspace via Prompt Tuning

1 code implementation • 15 Oct 2021 • Yujia Qin, Xiaozhi Wang, Yusheng Su, Yankai Lin, Ning Ding, Jing Yi, Weize Chen, Zhiyuan Liu, Juanzi Li, Lei Hou, Peng Li, Maosong Sun, Jie Zhou

In the experiments, we study diverse few-shot NLP tasks and, surprisingly, find that in a 250-dimensional subspace found with 100 tasks, tuning only 250 free parameters recovers 97% of full prompt tuning performance on the 100 seen tasks (using different training data) and 83% on 20 unseen tasks, showing the strong generalization ability of the found intrinsic task subspace.
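A hedged sketch of the reparameterization (the projection here is random for illustration; in the paper it is found by jointly decomposing the solutions of the 100 training tasks): soft-prompt parameters are generated from a low-dimensional intrinsic vector through a frozen projection, so per-task tuning touches only that vector.

import torch
import torch.nn as nn

class SubspacePrompt(nn.Module):
    # Reparameterizes a soft prompt as a frozen projection of a small
    # intrinsic vector z, so per-task tuning touches only dim(z) parameters.
    def __init__(self, intrinsic_dim=250, prompt_len=100, embed_dim=768):
        super().__init__()
        self.z = nn.Parameter(torch.zeros(intrinsic_dim))   # tuned per task
        proj = torch.randn(intrinsic_dim, prompt_len * embed_dim) * 0.02
        self.register_buffer("proj", proj)                  # found once, then frozen
        self.prompt_len, self.embed_dim = prompt_len, embed_dim

    def forward(self):
        # Returns a (prompt_len, embed_dim) soft prompt, to be prepended
        # to the input embeddings of a frozen PLM.
        return (self.z @ self.proj).view(self.prompt_len, self.embed_dim)

prompt = SubspacePrompt()
print(prompt().shape, sum(p.numel() for p in prompt.parameters()))
# torch.Size([100, 768]) 250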

Few-shot Learning with Big Prototypes

no code implementations • 29 Sep 2021 • Ning Ding, Yulin Chen, Xiaobin Wang, Hai-Tao Zheng, Zhiyuan Liu, Pengjun Xie

A big prototype can be effectively modeled by two sets of learnable parameters: one is the center of the hypersphere, an embedding with the same dimensionality as the training examples, and the other is its radius.

Few-Shot Learning

Prompt-Learning for Fine-Grained Entity Typing

no code implementations • 24 Aug 2021 • Ning Ding, Yulin Chen, Xu Han, Guangwei Xu, Pengjun Xie, Hai-Tao Zheng, Zhiyuan Liu, Juanzi Li, Hong-Gee Kim

In this work, we investigate the application of prompt-learning on fine-grained entity typing in fully supervised, few-shot and zero-shot scenarios.

Entity Typing, Knowledge Probing +5

Discriminative-Generative Representation Learning for One-Class Anomaly Detection

no code implementations • 27 Jul 2021 • Xuan Xia, Xizhou Pan, Xing He, Jingfei Zhang, Ning Ding, Lin Ma

As a kind of generative self-supervised learning method, generative adversarial nets have been widely studied in the field of anomaly detection.

Anomaly Detection, Representation Learning +1

CLINE: Contrastive Learning with Semantic Negative Examples for Natural Language Understanding

1 code implementation • ACL 2021 • Dong Wang, Ning Ding, Piji Li, Hai-Tao Zheng

Recent works aiming to improve the robustness of pre-trained models mainly focus on adversarial training from perturbed examples with similar semantics, neglecting the utilization of different or even opposite semantics.

Contrastive Learning, Natural Language Understanding +3

PTR: Prompt Tuning with Rules for Text Classification

1 code implementation • 24 May 2021 • Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, Maosong Sun

This indicates that PTR is a promising approach to take advantage of both human prior knowledge and PLMs for those complicated classification tasks.

Natural Language Inference, Relation Classification +4

Few-NERD: A Few-Shot Named Entity Recognition Dataset

7 code implementations • ACL 2021 • Ning Ding, Guangwei Xu, Yulin Chen, Xiaobin Wang, Xu Han, Pengjun Xie, Hai-Tao Zheng, Zhiyuan Liu

In this paper, we present Few-NERD, a large-scale human-annotated few-shot NER dataset with a hierarchy of 8 coarse-grained and 66 fine-grained entity types.

Few-shot NER, Named Entity Recognition

A Hybrid Task-Oriented Dialog System with Domain and Task Adaptive Pretraining

no code implementations • 8 Feb 2021 • Boliang Zhang, Ying Lyu, Ning Ding, Tianhao Shen, Zhaoyang Jia, Kun Han, Kevin Knight

This paper describes our submission for the End-to-end Multi-domain Task Completion Dialog shared task at the 9th Dialog System Technology Challenge (DSTC-9).

dialog state tracking, Natural Language Understanding +1

TP-LSD: Tri-Points Based Line Segment Detector

2 code implementations • ECCV 2020 • Siyu Huang, Fangbo Qin, Pengfei Xiong, Ning Ding, Yijia He, Xiao Liu

To realize one-step detection with a faster and more compact model, we introduce the tri-points representation, converting the line segment detection to the end-to-end prediction of a root-point and two endpoints for each line segment.

Line Segment Detection
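A minimal sketch of decoding the tri-points representation (array names are ours): each detected root point carries two displacement vectors, and the segment's endpoints are recovered by adding them to the root.

import numpy as np

def decode_tri_points(roots, disp1, disp2):
    # Recover line segments from the tri-points representation:
    # a root point plus two displacement vectors to the endpoints.
    # roots, disp1, disp2: (N, 2) arrays of (x, y) values.
    p1 = roots + disp1
    p2 = roots + disp2
    return np.stack([p1, p2], axis=1)  # (N, 2, 2): two endpoints per segment

roots = np.array([[50.0, 50.0]])
segments = decode_tri_points(roots, np.array([[-10.0, 0.0]]), np.array([[10.0, 5.0]]))
print(segments)  # [[[40. 50.] [60. 55.]]]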

Length-Controllable Image Captioning

1 code implementation • ECCV 2020 • Chaorui Deng, Ning Ding, Mingkui Tan, Qi Wu

We verify the merit of the proposed length level embedding on three models: two state-of-the-art (SOTA) autoregressive models with different types of decoder, as well as our proposed non-autoregressive model, to show its generalization ability.

controllable image captioning, Decoder +1
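A hedged sketch of a length level embedding (the bucketing into discrete levels is our assumption): the target caption length is discretized into a level, and that level's embedding is added to every token embedding so decoding is conditioned on the desired length range.

import torch
import torch.nn as nn

class LengthLevelEmbedding(nn.Module):
    # Adds an embedding of the desired caption-length level to every
    # token embedding, conditioning generation on a target length range.
    def __init__(self, num_levels=4, embed_dim=512):
        super().__init__()
        self.level_embed = nn.Embedding(num_levels, embed_dim)

    def forward(self, token_embeds, level):
        # token_embeds: (batch, seq_len, embed_dim); level: (batch,) ints
        return token_embeds + self.level_embed(level).unsqueeze(1)

emb = LengthLevelEmbedding()
out = emb(torch.randn(2, 12, 512), torch.tensor([0, 3]))
print(out.shape)  # torch.Size([2, 12, 512])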

Coupling Distant Annotation and Adversarial Training for Cross-Domain Chinese Word Segmentation

1 code implementation • ACL 2020 • Ning Ding, Dingkun Long, Guangwei Xu, Muhua Zhu, Pengjun Xie, Xiaobin Wang, Hai-Tao Zheng

In order to simultaneously alleviate these two issues, this paper proposes to couple distant annotation and adversarial training for cross-domain CWS.

Chinese Word Segmentation, Sentence

Event Detection with Trigger-Aware Lattice Neural Network

1 code implementation • IJCNLP 2019 • Ning Ding, Ziran Li, Zhiyuan Liu, Hai-Tao Zheng, Zibo Lin

To address the two issues simultaneously, we propose the Trigger-aware Lattice Neural Network (TLNN).

Event Detection

Chinese Relation Extraction with Multi-Grained Information and External Linguistic Knowledge

1 code implementation • ACL 2019 • Ziran Li, Ning Ding, Zhiyuan Liu, Hai-Tao Zheng, Ying Shen

Chinese relation extraction is conducted using neural networks with either character-based or word-based inputs, and most existing methods typically suffer from segmentation errors and ambiguity of polysemy.

Relation, Relation Extraction +1
