Search Results for author: Ruixue Ding

Found 8 papers, 6 papers with code

Let LLMs Take on the Latest Challenges! A Chinese Dynamic Question Answering Benchmark

1 code implementation • 29 Feb 2024 • Zhikun Xu, Yinghui Li, Ruixue Ding, Xinyu Wang, Boli Chen, Yong Jiang, Hai-Tao Zheng, Wenlian Lu, Pengjun Xie, Fei Huang

To improve Chinese LLMs' ability to answer dynamic questions, we introduce CDQA, a Chinese Dynamic QA benchmark containing question-answer pairs related to the latest news on the Chinese Internet.

Question Answering

Geo-Encoder: A Chunk-Argument Bi-Encoder Framework for Chinese Geographic Re-Ranking

1 code implementation • 4 Sep 2023 • Yong Cao, Ruixue Ding, Boli Chen, Xianzhi Li, Min Chen, Daniel Hershcovich, Pengjun Xie, Fei Huang

The Chinese geographic re-ranking task aims to find the most relevant addresses among retrieved candidates, which is crucial for location-related services such as navigation maps.

Chunking • Multi-Task Learning • +1

GeoGLUE: A GeoGraphic Language Understanding Evaluation Benchmark

no code implementations • 11 May 2023 • Dongyang Li, Ruixue Ding, Qiang Zhang, Zheng Li, Boli Chen, Pengjun Xie, Yao Xu, Xin Li, Ning Guo, Fei Huang, Xiaofeng He

With the fast pace of development in geographic applications, automated and intelligent models must be designed to handle the large volume of information.

Entity Alignment • Natural Language Understanding

MGeo: Multi-Modal Geographic Pre-Training Method

1 code implementation • 11 Jan 2023 • Ruixue Ding, Boli Chen, Pengjun Xie, Fei Huang, Xin Li, Qiang Zhang, Yao Xu

Single-modal PTMs can barely make use of the important geographic context (GC) and therefore have limited performance.

Language Modelling

Forging Multiple Training Objectives for Pre-trained Language Models via Meta-Learning

2 code implementations • 19 Oct 2022 • Hongqiu Wu, Ruixue Ding, Hai Zhao, Boli Chen, Pengjun Xie, Fei Huang, Min Zhang

Multiple pre-training objectives fill the gap in the understanding capability of single-objective language modeling, serving the ultimate purpose of pre-trained language models (PrLMs): generalizing well across a wide range of scenarios.

Language Modelling • Meta-Learning

Adversarial Self-Attention for Language Understanding

1 code implementation • 25 Jun 2022 • Hongqiu Wu, Ruixue Ding, Hai Zhao, Pengjun Xie, Fei Huang, Min Zhang

Deep neural models (e.g., the Transformer) naturally learn spurious features, which create a "shortcut" between the labels and inputs, thus impairing generalization and robustness.

Machine Reading Comprehension • Named Entity Recognition (NER) • +4

Better Modeling of Incomplete Annotations for Named Entity Recognition

no code implementations • NAACL 2019 • Zhanming Jie, Pengjun Xie, Wei Lu, Ruixue Ding, Linlin Li

Supervised approaches to named entity recognition (NER) are largely developed based on the assumption that the training data is fully annotated with named entity information.

Named Entity Recognition • +1