Search Results for author: Jiasheng Ye

Found 7 papers, 5 papers with code

DetectiveQA: Evaluating Long-Context Reasoning on Detective Novels

no code implementations • 4 Sep 2024 • Zhe Xu, Jiasheng Ye, Xiangyang Liu, Tianxiang Sun, Xiaoran Liu, Qipeng Guo, Linlin Li, Qun Liu, Xuanjing Huang, Xipeng Qiu

DetectiveQA focuses on evaluating the long-context reasoning ability of LLMs, which requires not only a full understanding of the context but also extracting important evidence from it and reasoning over that evidence to answer the given questions.

Data Mixing Laws: Optimizing Data Mixtures by Predicting Language Modeling Performance

1 code implementation • 25 Mar 2024 • Jiasheng Ye, Peiju Liu, Tianxiang Sun, Yunhua Zhou, Jun Zhan, Xipeng Qiu

Pretraining data of large language models comprises multiple domains (e.g., web text, academic papers, code), whose mixture proportions crucially impact the competence of the resulting models.

Language Modelling
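
A minimal sketch of the general idea behind this entry: fit a simple parametric "mixing law" that predicts validation loss from domain mixture proportions using a few small-scale runs, then pick the mixture with the lowest predicted loss. The exponential functional form, the loss numbers, and all variable names below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical small-scale runs: each row is (web, papers, code) proportions.
mixtures = np.array([
    [0.6, 0.2, 0.2],
    [0.4, 0.4, 0.2],
    [0.2, 0.2, 0.6],
    [0.5, 0.3, 0.2],
    [0.3, 0.5, 0.2],
])
val_loss = np.array([2.31, 2.28, 2.40, 2.29, 2.27])  # made-up numbers

def mixing_law(r, c, k, t1, t2, t3):
    """Assumed form: loss = c + k * exp(t . r), with r the mixture proportions."""
    return c + k * np.exp(r @ np.array([t1, t2, t3]))

params, _ = curve_fit(mixing_law, mixtures, val_loss,
                      p0=[2.0, 0.1, 0.0, 0.0, 0.0], maxfev=10000)

# Grid-search candidate mixtures on the simplex and pick the predicted optimum.
candidates = np.array([(a, b, 1 - a - b)
                       for a in np.linspace(0, 1, 21)
                       for b in np.linspace(0, 1, 21) if a + b <= 1])
pred = mixing_law(candidates, *params)
print("predicted best mixture (web, papers, code):", candidates[np.argmin(pred)])
```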

AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling

1 code implementation • 19 Feb 2024 • Jun Zhan, Junqi Dai, Jiasheng Ye, Yunhua Zhou, Dong Zhang, Zhigeng Liu, Xin Zhang, Ruibin Yuan, Ge Zhang, Linyang Li, Hang Yan, Jie Fu, Tao Gui, Tianxiang Sun, Yugang Jiang, Xipeng Qiu

We introduce AnyGPT, an any-to-any multimodal language model that utilizes discrete representations for the unified processing of various modalities, including speech, text, images, and music.

Language Modelling • Large Language Model
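
To make the "discrete representations for unified processing" idea concrete, here is a minimal sketch of folding tokens from several modality tokenizers into one shared vocabulary by giving each modality a disjoint ID range, so a single autoregressive LM can model mixed-modality sequences. The vocabulary sizes and offsets are made up for illustration and are not AnyGPT's actual configuration.

```python
from dataclasses import dataclass

@dataclass
class ModalityRange:
    name: str
    offset: int   # first ID reserved for this modality in the shared vocabulary
    size: int     # number of discrete codes the modality tokenizer produces

TEXT   = ModalityRange("text",   offset=0,     size=32000)
IMAGE  = ModalityRange("image",  offset=32000, size=8192)
SPEECH = ModalityRange("speech", offset=40192, size=1024)
MUSIC  = ModalityRange("music",  offset=41216, size=4096)

def to_shared_ids(local_ids, modality):
    """Map modality-local discrete codes into the shared LM vocabulary."""
    assert all(0 <= i < modality.size for i in local_ids)
    return [modality.offset + i for i in local_ids]

def from_shared_ids(shared_ids, modality):
    """Map shared-vocabulary IDs back to modality-local codes."""
    return [i - modality.offset for i in shared_ids
            if modality.offset <= i < modality.offset + modality.size]

# A mixed-modality training sequence: text tokens followed by image codes.
sequence = to_shared_ids([17, 42, 7], TEXT) + to_shared_ids([5, 900, 13], IMAGE)
print(sequence)  # all IDs live in one vocabulary, so one LM can model them
```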

LLM can Achieve Self-Regulation via Hyperparameter Aware Generation

no code implementations • 17 Feb 2024 • Siyin Wang, ShiMin Li, Tianxiang Sun, Jinlan Fu, Qinyuan Cheng, Jiasheng Ye, Junjie Ye, Xipeng Qiu, Xuanjing Huang

HAG extends the current text generation paradigm, highlighting the feasibility of endowing LLMs with self-regulated decoding strategies.

Text Generation

Diffusion Language Models Can Perform Many Tasks with Scaling and Instruction-Finetuning

1 code implementation • 23 Aug 2023 • Jiasheng Ye, Zaixiang Zheng, Yu Bao, Lihua Qian, Quanquan Gu

We then reprogram pretrained masked language models into diffusion language models via diffusive adaptation, wherein task-specific finetuning and instruction finetuning are explored to unlock their versatility in solving general language tasks.

In-Context Learning • Language Modelling • +1
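
As a rough illustration of how a pretrained masked LM can be pushed toward diffusion-style denoising, the sketch below runs one training step of a common recipe: sample a random masking ratio per example (the analogue of a noise level), mask that fraction of tokens, and train the model to reconstruct them. The model name, masking schedule, and loss setup are assumptions for illustration, not the paper's exact diffusive adaptation procedure.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

batch = tokenizer(["a toy sentence for the sketch", "another toy sentence"],
                  return_tensors="pt", padding=True)
input_ids = batch["input_ids"]

# Sample a masking ratio per example (the diffusion "timestep" analogue),
# clamped away from 0 so each example keeps some masked positions to predict.
ratios = torch.rand(input_ids.size(0), 1).clamp(0.15, 0.95)
maskable = (batch["attention_mask"].bool()
            & (input_ids != tokenizer.cls_token_id)
            & (input_ids != tokenizer.sep_token_id))
mask = (torch.rand_like(input_ids, dtype=torch.float) < ratios) & maskable

labels = input_ids.clone()
labels[~mask] = -100                       # only score the masked positions
corrupted = input_ids.clone()
corrupted[mask] = tokenizer.mask_token_id  # replace masked tokens with <mask>

outputs = model(input_ids=corrupted,
                attention_mask=batch["attention_mask"],
                labels=labels)
outputs.loss.backward()
optimizer.step()
```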

DINOISER: Diffused Conditional Sequence Learning by Manipulating Noises

1 code implementation • 20 Feb 2023 • Jiasheng Ye, Zaixiang Zheng, Yu Bao, Lihua Qian, Mingxuan Wang

In this paper, we introduce DINOISER to facilitate diffusion models for sequence generation by manipulating noises.

Energy-based Unknown Intent Detection with Data Manipulation

2 code implementations • Findings (ACL) 2021 • Yawen Ouyang, Jiasheng Ye, Yu Chen, Xinyu Dai, ShuJian Huang, Jiajun Chen

Unknown intent detection aims to identify out-of-distribution (OOD) utterances whose intents have never appeared in the training set.

Intent Detection
