Search Results for author: Ning Ding

Found 66 papers, 40 papers with code

TeamTrack: A Dataset for Multi-Sport Multi-Object Tracking in Full-pitch Videos

no code implementations22 Apr 2024 Atom Scott, Ikuma Uchida, Ning Ding, Rikuhei Umemoto, Rory Bunker, Ren Kobayashi, Takeshi Koyama, Masaki Onishi, Yoshinari Kameda, Keisuke Fujii

Multi-object tracking (MOT) is a critical and challenging task in computer vision, particularly in situations involving objects with similar appearances but diverse movements, as seen in team sports.

Benchmarking Multi-Object Tracking +2

Mastering Text, Code and Math Simultaneously via Fusing Highly Specialized Language Models

no code implementations13 Mar 2024 Ning Ding, Yulin Chen, Ganqu Cui, Xingtai Lv, Weilin Zhao, Ruobing Xie, BoWen Zhou, Zhiyuan Liu, Maosong Sun

Underlying data distributions of natural language, programming code, and mathematical symbols vary vastly, presenting a complex challenge for large language models (LLMs) that strive to achieve high performance across all three domains simultaneously.

Math

CoGenesis: A Framework Collaborating Large and Small Language Models for Secure Context-Aware Instruction Following

no code implementations5 Mar 2024 Kaiyan Zhang, Jianyu Wang, Ermo Hua, Biqing Qi, Ning Ding, BoWen Zhou

With the advancement of language models (LMs), their exposure to private data is increasingly inevitable, and their deployment (especially for smaller ones) on personal devices, such as PCs and smartphones, has become a prevailing trend.

Instruction Following

Controllable Preference Optimization: Toward Controllable Multi-Objective Alignment

no code implementations29 Feb 2024 Yiju Guo, Ganqu Cui, Lifan Yuan, Ning Ding, Jiexin Wang, Huimin Chen, Bowen Sun, Ruobing Xie, Jie zhou, Yankai Lin, Zhiyuan Liu, Maosong Sun

In practice, the multifaceted nature of human preferences inadvertently introduces what is known as the "alignment tax": a compromise where enhancements in alignment within one objective (e.g., harmlessness) can diminish performance in others (e.g., helpfulness).

Navigate

UltraLink: An Open-Source Knowledge-Enhanced Multilingual Supervised Fine-tuning Dataset

1 code implementation7 Feb 2024 Haoyu Wang, Shuo Wang, Yukun Yan, Xujia Wang, Zhiyu Yang, Yuzhuang Xu, Zhenghao Liu, Liner Yang, Ning Ding, Xu Han, Zhiyuan Liu, Maosong Sun

Different from previous works that simply translate English instructions, we consider both the language-specific and language-agnostic abilities of LLMs.

Cross-Lingual Transfer Data Augmentation

Sparse Low-rank Adaptation of Pre-trained Language Models

1 code implementation20 Nov 2023 Ning Ding, Xingtai Lv, Qiaosen Wang, Yulin Chen, BoWen Zhou, Zhiyuan Liu, Maosong Sun

Recognizing the need for more flexible adaptation, we extend the methodology of LoRA to an innovative approach we call sparse low-rank adaptation (SoRA) that enables dynamic adjustments to the intrinsic rank during the adaptation process.
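The dynamic-rank idea described above can be illustrated with a minimal sketch: a LoRA-style low-rank update whose components are scaled by a sparsity-inducing gate, so that components whose gate reaches zero are effectively pruned and the rank shrinks during adaptation. The class name, initialization, and proximal-style shrinkage step below are illustrative assumptions, not the released SoRA implementation.

```python
import torch
import torch.nn as nn

class GatedLoRALinear(nn.Module):
    """Linear layer with a low-rank update whose effective rank can shrink in training.

    A per-component gate scales the low-rank update; components whose gate is
    driven to exactly zero are pruned. Illustrative sketch, not the official SoRA code.
    """
    def __init__(self, in_features, out_features, max_rank=8):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():
            p.requires_grad_(False)                      # frozen pre-trained weights
        self.A = nn.Parameter(torch.randn(max_rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, max_rank))
        self.gate = nn.Parameter(torch.ones(max_rank))   # learnable per-rank-component gate

    def forward(self, x):
        delta = (self.B * self.gate) @ self.A            # (out, in) low-rank update
        return self.base(x) + x @ delta.T

    @torch.no_grad()
    def prox_shrink(self, lam=1e-3):
        """Soft-threshold the gate so that small components collapse to exactly zero."""
        self.gate.copy_(torch.sign(self.gate) * torch.clamp(self.gate.abs() - lam, min=0.0))

layer = GatedLoRALinear(16, 16, max_rank=4)
y = layer(torch.randn(2, 16))
layer.prox_shrink()
print(y.shape, int((layer.gate != 0).sum()), "active rank components")
```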

Memorization

INTERVENOR: Prompting the Coding Ability of Large Language Models with the Interactive Chain of Repair

1 code implementation16 Nov 2023 Hanbin Wang, Zhenghao Liu, Shuo Wang, Ganqu Cui, Ning Ding, Zhiyuan Liu, Ge Yu

INTERVENOR prompts Large Language Models (LLMs) to play distinct roles during the code repair process, functioning as both a Code Learner and a Code Teacher.
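A rough sketch of the two-role repair loop this describes, where the Code Teacher turns test failures into a repair instruction and the Code Learner rewrites the code accordingly; the prompts, helper names, and stopping rule below are assumptions for illustration, not INTERVENOR's actual templates.

```python
def chain_of_repair(task, llm, run_tests, max_rounds=3):
    """Interactive chain-of-repair loop with two LLM roles (illustrative sketch)."""
    code = llm(f"[Code Learner] Write code for: {task}")
    for _ in range(max_rounds):
        ok, error_report = run_tests(code)
        if ok:
            break
        instruction = llm(
            f"[Code Teacher] The code failed with:\n{error_report}\n"
            "Explain the bug and give a concise repair instruction."
        )
        code = llm(
            f"[Code Learner] Revise the code following this instruction: {instruction}\n"
            f"Original code:\n{code}"
        )
    return code

# Toy stand-ins so the sketch runs end to end; a real setup would call an LLM API
# and execute the generated code against unit tests.
fake_llm = lambda prompt: "def add(a, b):\n    return a + b"
fake_tests = lambda code: (True, "")
print(chain_of_repair("add two integers", fake_llm, fake_tests))
```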

Code Repair Code Translation

CRaSh: Clustering, Removing, and Sharing Enhance Fine-tuning without Full Large Language Model

no code implementations24 Oct 2023 Kaiyan Zhang, Ning Ding, Biqing Qi, Xuekai Zhu, Xinwei Long, BoWen Zhou

Instruction tuning has recently been recognized as an effective way of aligning Large Language Models (LLMs) to enhance their generalization ability across various tasks.

Clustering Language Modelling +1

UltraFeedback: Boosting Language Models with High-quality Feedback

2 code implementations2 Oct 2023 Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Wei Zhu, Yuan Ni, Guotong Xie, Zhiyuan Liu, Maosong Sun

However, the scarcity of diverse, naturalistic datasets of human preferences on LLM outputs at scale poses a great challenge to RLHF as well as feedback learning research within the open-source community.

Language Modelling

The Impact of Different Backbone Architecture on Autonomous Vehicle Dataset

no code implementations15 Sep 2023 Ning Ding, Azim Eskandarian

Object detection is a crucial component of autonomous driving, and many detection applications have been developed to address this task.

Autonomous Driving Object +2

OpenDelta: A Plug-and-play Library for Parameter-efficient Adaptation of Pre-trained Models

1 code implementation5 Jul 2023 Shengding Hu, Ning Ding, Weilin Zhao, Xingtai Lv, Zhen Zhang, Zhiyuan Liu, Maosong Sun

The scale of large pre-trained models (PTMs) poses significant challenges in adapting to downstream tasks due to the high optimization overhead and storage costs associated with full-parameter fine-tuning.

Exploring the Impact of Model Scaling on Parameter-Efficient Tuning

1 code implementation4 Jun 2023 Yusheng Su, Chi-Min Chan, Jiali Cheng, Yujia Qin, Yankai Lin, Shengding Hu, Zonghan Yang, Ning Ding, Xingzhi Sun, Guotong Xie, Zhiyuan Liu, Maosong Sun

Our investigations reveal that model scaling (1) mitigates the effects of the positions of tunable parameters on performance, and (2) enables tuning methods to achieve performance comparable to full-parameter fine-tuning by optimizing fewer tunable parameters.

GPT4Image: Can Large Pre-trained Models Help Vision Models on Perception Tasks?

1 code implementation1 Jun 2023 Ning Ding, Yehui Tang, Zhongqian Fu, Chao Xu, Kai Han, Yunhe Wang

We present a new learning paradigm in which the knowledge extracted from large pre-trained models is utilized to help models like CNNs and ViTs learn enhanced representations and achieve better performance.

Descriptive Image Classification

Exploring Lottery Prompts for Pre-trained Language Models

no code implementations31 May 2023 Yulin Chen, Ning Ding, Xiaobin Wang, Shengding Hu, Hai-Tao Zheng, Zhiyuan Liu, Pengjun Xie

Consistently scaling pre-trained language models (PLMs) imposes substantial burdens on model adaptation, necessitating more efficient alternatives to conventional fine-tuning.

Enhancing Chat Language Models by Scaling High-quality Instructional Conversations

1 code implementation23 May 2023 Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, BoWen Zhou

Fine-tuning on instruction data has been widely validated as an effective practice for implementing chat language models like ChatGPT.

SalienDet: A Saliency-based Feature Enhancement Algorithm for Object Detection for Autonomous Driving

1 code implementation11 May 2023 Ning Ding, Ce Zhang, Azim Eskandarian

On the other hand, unknown objects, which have not been seen in the training sample set, are one of the reasons that hinder autonomous vehicles from driving beyond the operational domain.

Autonomous Driving Incremental Learning +3

Estimation of control area in badminton doubles with pose information from top and back view drone videos

1 code implementation7 May 2023 Ning Ding, Kazuya Takeda, Wenhui Jin, Yingjiu Bei, Keisuke Fujii

In this work, we present the first annotated drone dataset from top and back views in badminton doubles and propose a framework to estimate the control area probability map, which can be used to evaluate teamwork performance.

Visual Tracking

Enhancing Depth Completion with Multi-View Monitored Distillation

no code implementations28 Mar 2023 Jia-Wei Guo, Cong Li, Sen-Hua Zhu, Chang-Zheng Zhang, Ming Ouyang, Ning Ding, Hung-Chyun Chou

Our approach builds upon the state-of-the-art ensemble distillation method, in which we introduce a stereo-based model as a teacher model to improve the accuracy of the student model for depth completion.

Depth Completion

CHMATCH: Contrastive Hierarchical Matching and Robust Adaptive Threshold Boosted Semi-Supervised Learning

1 code implementation CVPR 2023 Jianlong Wu, Haozhe Yang, Tian Gan, Ning Ding, Feijun Jiang, Liqiang Nie

In the meantime, we make full use of the structured information in the hierarchical labels to learn an accurate affinity graph for contrastive learning.

Contrastive Learning

Network Expansion for Practical Training Acceleration

1 code implementation CVPR 2023 Ning Ding, Yehui Tang, Kai Han, Chao Xu, Yunhe Wang

Recently, the sizes of deep neural networks and training datasets have both increased drastically in pursuit of better performance in practical settings.

Decoder Tuning: Efficient Language Understanding as Decoding

2 code implementations16 Dec 2022 Ganqu Cui, Wentao Li, Ning Ding, Longtao Huang, Zhiyuan Liu, Maosong Sun

With the ever-growing sizes of pre-trained models (PTMs), it has become an emerging practice to provide only inference APIs for users, namely the model-as-a-service (MaaS) setting.

Natural Language Understanding

MAVEN-ERE: A Unified Large-scale Dataset for Event Coreference, Temporal, Causal, and Subevent Relation Extraction

1 code implementation14 Nov 2022 Xiaozhi Wang, Yulin Chen, Ning Ding, Hao Peng, Zimu Wang, Yankai Lin, Xu Han, Lei Hou, Juanzi Li, Zhiyuan Liu, Peng Li, Jie zhou

It contains 103,193 event coreference chains, 1,216,217 temporal relations, 57,992 causal relations, and 15,841 subevent relations, which is larger than existing datasets of all the ERE tasks by at least an order of magnitude.

Event Relation Extraction Relation +1

Few-shot Classification with Hypersphere Modeling of Prototypes

no code implementations10 Nov 2022 Ning Ding, Yulin Chen, Ganqu Cui, Xiaobin Wang, Hai-Tao Zheng, Zhiyuan Liu, Pengjun Xie

Moreover, it is more convenient to perform metric-based classification with hypersphere prototypes than statistical modeling, as we only need to calculate the distance from a data point to the surface of the hypersphere.
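To make the metric concrete: if a class is modeled as a hypersphere with center c and radius r, the distance from a query embedding x to the sphere's surface is |‖x - c‖ - r|, and classification picks the class with the nearest surface. A minimal sketch under that assumption (shapes and names are illustrative):

```python
import torch

def surface_distance(x, centers, radii):
    """Distance from query embeddings to each class hypersphere's surface.

    x:       (n, d) query embeddings
    centers: (k, d) hypersphere centers
    radii:   (k,)   hypersphere radii
    returns: (n, k) distances | ||x - c|| - r |
    """
    dist_to_center = torch.cdist(x, centers)               # (n, k)
    return (dist_to_center - radii.unsqueeze(0)).abs()

x = torch.randn(5, 64)
centers = torch.randn(3, 64)
radii = torch.rand(3) + 0.5
pred = surface_distance(x, centers, radii).argmin(dim=1)   # nearest-surface classification
print(pred)
```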

Classification Few-Shot Learning +1

Sparse Structure Search for Delta Tuning

1 code implementation NeurIPS 2022 Shengding Hu, Zhen Zhang, Ning Ding, Yadao Wang, Yasheng Wang, Zhiyuan Liu, Maosong Sun

Generally, DT methods carefully design delta modules (DT modules), which can be applied to arbitrary fine-grained positions inside PTMs.

Different Tunes Played with Equal Skill: Exploring a Unified Optimization Subspace for Delta Tuning

1 code implementation24 Oct 2022 Jing Yi, Weize Chen, Yujia Qin, Yankai Lin, Ning Ding, Xu Han, Zhiyuan Liu, Maosong Sun, Jie zhou

To fathom the mystery, we hypothesize that the adaptations of different DETs could all be reparameterized as low-dimensional optimizations in a unified optimization subspace, which could be found by jointly decomposing independent solutions of different DETs.

Improving Task Generalization via Unified Schema Prompt

no code implementations5 Aug 2022 Wanjun Zhong, Yifan Gao, Ning Ding, Zhiyuan Liu, Ming Zhou, Jiahai Wang, Jian Yin, Nan Duan

Task generalization has been a long-standing challenge in Natural Language Processing (NLP).

Sparse Structure Search for Parameter-Efficient Tuning

no code implementations15 Jun 2022 Shengding Hu, Zhen Zhang, Ning Ding, Yadao Wang, Yasheng Wang, Zhiyuan Liu, Maosong Sun

The searched structures preserve more than 99% of the fine-tuning performance with 0.01% trainable parameters.

A Survey on Video Action Recognition in Sports: Datasets, Methods and Applications

1 code implementation2 Jun 2022 Fei Wu, Qingzhong Wang, Jian Bian, Haoyi Xiong, Ning Ding, Feixiang Lu, Jun Cheng, Dejing Dou

Finally, we discuss the challenges and unsolved problems in this area, and, to facilitate sports analytics, we develop a toolbox using PaddlePaddle that supports football, basketball, table tennis, and figure skating action recognition.

Action Recognition Sports Analytics +1

ProQA: Structural Prompt-based Pre-training for Unified Question Answering

1 code implementation NAACL 2022 Wanjun Zhong, Yifan Gao, Ning Ding, Yujia Qin, Zhiyuan Liu, Ming Zhou, Jiahai Wang, Jian Yin, Nan Duan

Furthermore, ProQA exhibits strong ability in both continual learning and transfer learning by taking advantage of the structural prompt.

Continual Learning Few-Shot Learning +2

Source-Free Domain Adaptation via Distribution Estimation

no code implementations CVPR 2022 Ning Ding, Yixing Xu, Yehui Tang, Chao Xu, Yunhe Wang, DaCheng Tao

Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.

Privacy Preserving Source-Free Domain Adaptation

Prototypical Verbalizer for Prompt-based Few-shot Tuning

1 code implementation ACL 2022 Ganqu Cui, Shengding Hu, Ning Ding, Longtao Huang, Zhiyuan Liu

However, manual verbalizers heavily depend on domain-specific prior knowledge and human effort, while finding appropriate label words automatically remains challenging. In this work, we propose the prototypical verbalizer (ProtoVerb), which is built directly from training data.
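A rough sketch of building a verbalizer directly from training data as described above: pool the [MASK]-position hidden states of labeled examples into one prototype per class, then score new examples by similarity to the prototypes. The mean-pooling and cosine scoring below are simplifying assumptions (ProtoVerb itself learns prototypes with a contrastive objective), and random tensors stand in for real encoder outputs.

```python
import torch
import torch.nn.functional as F

def build_prototypes(mask_hidden, labels, num_classes):
    """Average the [MASK]-position hidden states of training examples per class."""
    protos = torch.zeros(num_classes, mask_hidden.size(-1))
    for c in range(num_classes):
        protos[c] = mask_hidden[labels == c].mean(dim=0)
    return protos

def verbalize(query_hidden, protos):
    """Score each class by cosine similarity between query [MASK] states and prototypes."""
    q = F.normalize(query_hidden, dim=-1)
    p = F.normalize(protos, dim=-1)
    return q @ p.T                                     # (n, num_classes) similarity logits

# Toy example with random "hidden states" standing in for PLM outputs
hidden = torch.randn(21, 32)
labels = torch.arange(21) % 3                          # three classes, balanced
protos = build_prototypes(hidden, labels, num_classes=3)
print(verbalize(torch.randn(4, 32), protos).argmax(dim=1))
```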

Contrastive Learning Entity Typing +2

Upright-Net: Learning Upright Orientation for 3D Point Cloud

no code implementations CVPR 2022 Xufang Pang, Feng Li, Ning Ding, Xiaopin Zhong

Extensive experiments show that the pose of the input 3D models exerts a tremendous influence on automatic 3D shape analysis.

OpenPrompt: An Open-source Framework for Prompt-learning

2 code implementations ACL 2022 Ning Ding, Shengding Hu, Weilin Zhao, Yulin Chen, Zhiyuan Liu, Hai-Tao Zheng, Maosong Sun

Prompt-learning has become a new paradigm in modern natural language processing, which directly adapts pre-trained language models (PLMs) to cloze-style prediction, autoregressive modeling, or sequence-to-sequence generation, resulting in promising performance on various tasks.

Exploring Universal Intrinsic Task Subspace via Prompt Tuning

1 code implementation15 Oct 2021 Yujia Qin, Xiaozhi Wang, Yusheng Su, Yankai Lin, Ning Ding, Jing Yi, Weize Chen, Zhiyuan Liu, Juanzi Li, Lei Hou, Peng Li, Maosong Sun, Jie zhou

In the experiments, we study diverse few-shot NLP tasks and surprisingly find that in a 250-dimensional subspace found with 100 tasks, by only tuning 250 free parameters, we can recover 97% and 83% of the full prompt tuning performance for 100 seen tasks (using different training data) and 20 unseen tasks, respectively, showing great generalization ability of the found intrinsic task subspace.
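The reparameterization described above can be sketched as follows: instead of tuning full soft-prompt embeddings per task, only a low-dimensional task vector is tuned, and a projection shared across tasks maps it back into prompt-embedding space. The dimensions and the simple linear decoder below are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn

class SubspacePrompt(nn.Module):
    """Soft prompt parameterized by a low-dimensional intrinsic task vector.

    Only `z` (e.g. 250 free parameters) is tuned for a new task; the projection
    back into prompt space is shared across tasks and kept frozen here.
    Illustrative sketch only.
    """
    def __init__(self, intrinsic_dim=250, prompt_len=100, embed_dim=768):
        super().__init__()
        self.z = nn.Parameter(torch.zeros(intrinsic_dim))             # per-task, tuned
        self.proj = nn.Linear(intrinsic_dim, prompt_len * embed_dim)  # shared across tasks
        self.proj.requires_grad_(False)
        self.prompt_len, self.embed_dim = prompt_len, embed_dim

    def forward(self):
        return self.proj(self.z).view(self.prompt_len, self.embed_dim)

prompt = SubspacePrompt()
soft_prompt = prompt()   # (100, 768) prompt embeddings to prepend to the model input
trainable = sum(p.numel() for p in prompt.parameters() if p.requires_grad)
print(soft_prompt.shape, trainable, "tunable parameters")
```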

Few-shot Learning with Big Prototypes

no code implementations29 Sep 2021 Ning Ding, Yulin Chen, Xiaobin Wang, Hai-Tao Zheng, Zhiyuan Liu, Pengjun Xie

A big prototype could be effectively modeled by two sets of learnable parameters; one is the center of the hypersphere, an embedding with the same dimension as the training examples.

Few-Shot Learning

Prompt-Learning for Fine-Grained Entity Typing

no code implementations24 Aug 2021 Ning Ding, Yulin Chen, Xu Han, Guangwei Xu, Pengjun Xie, Hai-Tao Zheng, Zhiyuan Liu, Juanzi Li, Hong-Gee Kim

In this work, we investigate the application of prompt-learning on fine-grained entity typing in fully supervised, few-shot and zero-shot scenarios.

Entity Typing Knowledge Probing +5

Discriminative-Generative Representation Learning for One-Class Anomaly Detection

no code implementations27 Jul 2021 Xuan Xia, Xizhou Pan, Xing He, Jingfei Zhang, Ning Ding, Lin Ma

As a kind of generative self-supervised learning method, generative adversarial networks have been widely studied in the field of anomaly detection.

Anomaly Detection Representation Learning +1

CLINE: Contrastive Learning with Semantic Negative Examples for Natural Language Understanding

1 code implementation ACL 2021 Dong Wang, Ning Ding, Piji Li, Hai-Tao Zheng

Recent works aimed to improve the robustness of pre-trained models mainly focus on adversarial training from perturbed examples with similar semantics, neglecting the utilization of different or even opposite semantics.

Contrastive Learning Natural Language Understanding +3

PTR: Prompt Tuning with Rules for Text Classification

1 code implementation24 May 2021 Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, Maosong Sun

This indicates that PTR is a promising approach for taking advantage of both human prior knowledge and PLMs for complicated classification tasks.

Natural Language Inference Relation Classification +4

Few-NERD: A Few-Shot Named Entity Recognition Dataset

7 code implementations ACL 2021 Ning Ding, Guangwei Xu, Yulin Chen, Xiaobin Wang, Xu Han, Pengjun Xie, Hai-Tao Zheng, Zhiyuan Liu

In this paper, we present Few-NERD, a large-scale human-annotated few-shot NER dataset with a hierarchy of 8 coarse-grained and 66 fine-grained entity types.

Few-shot NER Named Entity Recognition

A Hybrid Task-Oriented Dialog System with Domain and Task Adaptive Pretraining

no code implementations8 Feb 2021 Boliang Zhang, Ying Lyu, Ning Ding, Tianhao Shen, Zhaoyang Jia, Kun Han, Kevin Knight

This paper describes our submission for the End-to-end Multi-domain Task Completion Dialog shared task at the 9th Dialog System Technology Challenge (DSTC-9).

dialog state tracking Natural Language Understanding +1

TP-LSD: Tri-Points Based Line Segment Detector

2 code implementations ECCV 2020 Siyu Huang, Fangbo Qin, Pengfei Xiong, Ning Ding, Yijia He, Xiao Liu

To realize one-step detection with a faster and more compact model, we introduce the tri-points representation, converting the line segment detection to the end-to-end prediction of a root-point and two endpoints for each line segment.
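As a rough illustration of the tri-points representation: each line segment is recovered from a predicted root point plus two displacement vectors pointing to its endpoints, so decoding a segment from the network outputs is a purely local operation. The output-map layout and names below are assumptions made for illustration, not the released model's interface.

```python
import numpy as np

def decode_tri_points(root_heatmap, displacement, threshold=0.5):
    """Decode line segments from a root-point heatmap and a 4-channel displacement map.

    root_heatmap: (H, W) probability that a pixel is a segment's root point
    displacement: (4, H, W) offsets (dx1, dy1, dx2, dy2) from the root to the two endpoints
    returns: list of segments as (x1, y1, x2, y2) tuples
    """
    segments = []
    ys, xs = np.where(root_heatmap > threshold)
    for y, x in zip(ys, xs):
        dx1, dy1, dx2, dy2 = displacement[:, y, x]
        segments.append((x + dx1, y + dy1, x + dx2, y + dy2))
    return segments

# Toy example: one root pixel whose offsets describe a horizontal segment
heat = np.zeros((8, 8)); heat[4, 4] = 0.9
disp = np.zeros((4, 8, 8)); disp[:, 4, 4] = (-3, 0, 3, 0)
print(decode_tri_points(heat, disp))
```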

Line Segment Detection

Length-Controllable Image Captioning

1 code implementation ECCV 2020 Chaorui Deng, Ning Ding, Mingkui Tan, Qi Wu

We verify the merit of the proposed length level embedding on three models: two state-of-the-art (SOTA) autoregressive models with different types of decoder, as well as our proposed non-autoregressive model, to show its generalization ability.
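A minimal sketch of the length level embedding mentioned above, under the assumption that the target caption length is bucketed into discrete levels and a learned level embedding is added to the decoder's token embeddings; the bucket boundaries and dimensions are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LengthLevelEmbedding(nn.Module):
    """Inject a desired-length signal into decoder inputs via a learned level embedding."""
    def __init__(self, embed_dim=512, level_bounds=(9, 14, 19, 25)):
        super().__init__()
        self.level_bounds = level_bounds
        self.level_embed = nn.Embedding(len(level_bounds) + 1, embed_dim)

    def level_of(self, length):
        """Bucket a target caption length into a discrete length level."""
        for lvl, bound in enumerate(self.level_bounds):
            if length <= bound:
                return lvl
        return len(self.level_bounds)

    def forward(self, token_embeds, target_length):
        lvl = torch.tensor([self.level_of(target_length)])
        return token_embeds + self.level_embed(lvl)        # broadcast over (batch, seq, dim)

lle = LengthLevelEmbedding()
out = lle(torch.randn(2, 12, 512), target_length=16)       # ask for a roughly 16-token caption
print(out.shape)
```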

controllable image captioning

Coupling Distant Annotation and Adversarial Training for Cross-Domain Chinese Word Segmentation

1 code implementation ACL 2020 Ning Ding, Dingkun Long, Guangwei Xu, Muhua Zhu, Pengjun Xie, Xiaobin Wang, Hai-Tao Zheng

In order to simultaneously alleviate these two issues, this paper proposes to couple distant annotation and adversarial training for cross-domain CWS.

Chinese Word Segmentation Sentence

Event Detection with Trigger-Aware Lattice Neural Network

1 code implementation IJCNLP 2019 Ning Ding, Ziran Li, Zhiyuan Liu, Hai-Tao Zheng, Zibo Lin

To address the two issues simultaneously, we propose the Trigger-aware Lattice Neural Network (TLNN).

Event Detection

Chinese Relation Extraction with Multi-Grained Information and External Linguistic Knowledge

1 code implementation ACL 2019 Ziran Li, Ning Ding, Zhiyuan Liu, Hai-Tao Zheng, Ying Shen

Chinese relation extraction is conducted using neural networks with either character-based or word-based inputs, and most existing methods typically suffer from segmentation errors and ambiguity of polysemy.

Relation Relation Extraction +1
