Search Results for author: Yuyang Ding

Found 6 papers, 5 papers with code

Rethinking Negative Instances for Generative Named Entity Recognition

1 code implementation • 26 Feb 2024 • Yuyang Ding, Juntao Li, Pinzheng Wang, Zecheng Tang, Bowen Yan, Min Zhang

In the Named Entity Recognition (NER) task, recent advances have brought remarkable improvements to LLMs across a broad range of entity domains via instruction tuning that adopts an entity-centric schema.

Named Entity Recognition +2
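
As a rough illustration of the entity-centric, instruction-style schema the abstract mentions, and of how the negative (entity-absent) instances in the paper's title could be represented under it, here is a minimal hypothetical sketch. The prompt template and the `build_ner_instruction` helper are illustrative assumptions, not the paper's actual data format.

```python
# Hypothetical sketch of an entity-centric instruction schema for
# generative NER. The prompt wording and the idea of emitting "None"
# targets for negative (entity-absent) instances are illustrative
# assumptions, not the paper's actual format.

def build_ner_instruction(sentence: str, entity_type: str, entities: list[str]) -> dict:
    """Build one instruction-tuning example for a single entity type."""
    prompt = (
        f"Extract all '{entity_type}' entities from the text below. "
        f"If none are present, answer 'None'.\n"
        f"Text: {sentence}"
    )
    # Negative instance: an entity type with no mention in the sentence
    # still produces a training example, whose target is "None".
    target = ", ".join(entities) if entities else "None"
    return {"prompt": prompt, "target": target}

# Positive instance: the sentence contains a person entity.
print(build_ner_instruction("Ada Lovelace wrote the first program.", "person", ["Ada Lovelace"]))
# Negative instance: no location appears, so the target is "None".
print(build_ner_instruction("Ada Lovelace wrote the first program.", "location", []))
```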

Mathematical Language Models: A Survey

no code implementations • 12 Dec 2023 • Wentao Liu, Hanglei Hu, Jie Zhou, Yuyang Ding, Junsong Li, Jiayi Zeng, Mengliang He, Qin Chen, Bo Jiang, Aimin Zhou, Liang He

In recent years, there has been remarkable progress in leveraging Language Models (LMs), encompassing Pre-trained Language Models (PLMs) and Large-scale Language Models (LLMs), within the domain of mathematics.

OpenBA: An Open-sourced 15B Bilingual Asymmetric seq2seq Model Pre-trained from Scratch

1 code implementation • 19 Sep 2023 • Juntao Li, Zecheng Tang, Yuyang Ding, Pinzheng Wang, Pei Guo, Wangjie You, Dan Qiao, Wenliang Chen, Guohong Fu, Qiaoming Zhu, Guodong Zhou, Min Zhang

This report provides the main details needed to pre-train an analogous model, including pre-training data processing, Bilingual Flan data collection, the empirical observations that inspired our model architecture design, the training objectives of different stages, and other enhancement techniques.
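
For readers unfamiliar with the term, an "asymmetric" seq2seq model is an encoder-decoder whose two halves have different depths. Below is a minimal sketch of such a configuration using Hugging Face's T5Config; the specific layer counts and dimensions are illustrative assumptions, not OpenBA's published hyperparameters.

```python
# A minimal sketch of an *asymmetric* encoder-decoder configuration
# using Hugging Face Transformers. The depths and widths below are
# illustrative assumptions, NOT OpenBA's published hyperparameters.
from transformers import T5Config, T5ForConditionalGeneration

config = T5Config(
    num_layers=12,          # encoder depth
    num_decoder_layers=36,  # decoder depth differs, hence "asymmetric"
    d_model=1024,
    num_heads=16,
)
model = T5ForConditionalGeneration(config)  # randomly initialized
print(sum(p.numel() for p in model.parameters()) / 1e9, "B parameters")
```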

CMD: a framework for Context-aware Model self-Detoxification

2 code implementations • 16 Aug 2023 • Zecheng Tang, Keyan Zhou, Juntao Li, Yuyang Ding, Pinzheng Wang, Bowen Yan, Min Zhang

In view of this, we introduce a Context-aware Model self-Detoxification (CMD) framework that attends to both the context and the detoxification process, i.e., it first detoxifies the context and then has the language model generate along the safe context.

Language Modelling
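
The two-stage flow the abstract describes (detoxify the context, then generate along the safe context) can be sketched schematically as follows. The helper functions are hypothetical placeholders, not the paper's implementation.

```python
# A schematic sketch of the "detoxify the context, then generate along
# the safe context" flow. Both helpers are hypothetical placeholders,
# not the CMD paper's actual code.

def detoxify(context: str) -> str:
    """Placeholder: rewrite the context to remove toxic spans.
    A real system would use a learned detoxification model here."""
    banned = {"awful"}  # toy word list standing in for a learned detector
    return " ".join(w for w in context.split() if w.lower() not in banned)

def generate(model, safe_context: str) -> str:
    """Placeholder: continue generation conditioned on the safe context."""
    return model(safe_context)

def cmd_generate(model, context: str) -> str:
    safe_context = detoxify(context)       # stage 1: detoxify the context
    return generate(model, safe_context)   # stage 2: generate along it

# Toy usage with a stub "model" that just echoes and extends its input.
print(cmd_generate(lambda s: s + " ...", "an awful example context"))
```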

SelfMix: Robust Learning Against Textual Label Noise with Self-Mixup Training

1 code implementation • COLING 2022 • Dan Qiao, Chenchen Dai, Yuyang Ding, Juntao Li, Qiang Chen, Wenliang Chen, Min Zhang

The conventional success of text classification relies on annotated data, and the new paradigm of pre-trained language models (PLMs) still requires a small amount of labeled data for downstream tasks.

Text Classification
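
The "self-mixup training" in the title builds on the generic mixup idea of convexly interpolating representations and labels. The sketch below shows only that generic interpolation step, not SelfMix's full noisy-label-handling procedure; the function, mixing coefficient trick, and tensor shapes are illustrative assumptions.

```python
# Generic mixup interpolation on sentence representations, shown only
# to illustrate the general "mixup" idea named in the title. This is
# NOT SelfMix's full training procedure, which also selects and
# handles noisily labeled samples.
import torch

def mixup(h_a: torch.Tensor, h_b: torch.Tensor,
          y_a: torch.Tensor, y_b: torch.Tensor, alpha: float = 0.75):
    """Convexly combine two hidden states and their (one-hot) labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    lam = max(lam, 1 - lam)  # common trick: keep the mix closer to sample a
    h_mix = lam * h_a + (1 - lam) * h_b
    y_mix = lam * y_a + (1 - lam) * y_b
    return h_mix, y_mix

# Toy usage: mix two 768-dim sentence embeddings and their one-hot labels.
h_mix, y_mix = mixup(torch.randn(768), torch.randn(768),
                     torch.tensor([1.0, 0.0]), torch.tensor([0.0, 1.0]))
```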
