Search Results for author: Ruyi Gan

Found 14 papers, 8 papers with code

Taiyi-Diffusion-XL: Advancing Bilingual Text-to-Image Generation with Large Vision-Language Model Support

1 code implementation · 26 Jan 2024 · XiaoJun Wu, Dixiang Zhang, Ruyi Gan, Junyu Lu, Ziwei Wu, Renliang Sun, Jiaxing Zhang, Pingjian Zhang, Yan Song

Recent advancements in text-to-image models have significantly enhanced image generation capabilities, yet a notable gap of open-source models persists in bilingual or Chinese language support.

Language Modelling · Text-to-Image Generation

iDesigner: A High-Resolution and Complex-Prompt Following Text-to-Image Diffusion Model for Interior Design

no code implementations · 7 Dec 2023 · Ruyi Gan, XiaoJun Wu, Junyu Lu, Yuanhe Tian, Dixiang Zhang, Ziwei Wu, Renliang Sun, Chang Liu, Jiaxing Zhang, Pingjian Zhang, Yan Song

However, there are few specialized models in certain domains, such as interior design, which is attributed to the complex textual descriptions and detailed visual elements inherent in design, alongside the necessity for adaptable resolution.

Image Generation

Ziya2: Data-centric Learning is All LLMs Need

no code implementations · 6 Nov 2023 · Ruyi Gan, Ziwei Wu, Renliang Sun, Junyu Lu, XiaoJun Wu, Dixiang Zhang, Kunhao Pan, Junqing He, Yuanhe Tian, Ping Yang, Qi Yang, Hao Wang, Jiaxing Zhang, Yan Song

Although many such issues are addressed along the line of research on LLMs, an important yet practical limitation is that many studies overly pursue enlarging model sizes without comprehensively analyzing and optimizing the use of pre-training data in their learning process, as well as appropriate organization and leveraging of such data in training LLMs under cost-effective settings.

Ziya-Visual: Bilingual Large Vision-Language Model via Multi-Task Instruction Tuning

no code implementations · 12 Oct 2023 · Junyu Lu, Dixiang Zhang, XiaoJun Wu, Xinyu Gao, Ruyi Gan, Jiaxing Zhang, Yan Song, Pingjian Zhang

Recent advancements enlarge the capabilities of large language models (LLMs) in zero-shot image-to-text generation and understanding by integrating multi-modal inputs.

Image Captioning · Image-text Retrieval · +7

UniEX: An Effective and Efficient Framework for Unified Information Extraction via a Span-extractive Perspective

no code implementations · 17 May 2023 · Ping Yang, Junyu Lu, Ruyi Gan, Junjie Wang, Yuxiang Zhang, Jiaxing Zhang, Pingjian Zhang

We propose a new paradigm for universal information extraction (IE) that is compatible with any schema format and applicable to a list of IE tasks, such as named entity recognition, relation extraction, event extraction and sentiment analysis.

Event Extraction · named-entity-recognition · +3
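The span-extractive perspective that UniEX describes can be illustrated with a minimal sketch: tasks like NER and relation extraction all reduce to predicting labeled (start, end) spans over the input. All names below are hypothetical stand-ins, not the paper's actual model.

```python
# Illustrative reduction of IE tasks to span extraction: enumerate
# candidate spans and keep those a scorer rates above a threshold.
def extract_spans(tokens, span_scorer, labels, threshold=0.5):
    """Return (start, end, label) triples the scorer accepts."""
    spans = []
    for start in range(len(tokens)):
        for end in range(start, len(tokens)):
            for label in labels:
                if span_scorer(tokens, start, end, label) > threshold:
                    spans.append((start, end, label))
    return spans

# Toy scorer: mark capitalized single tokens as PER entities.
def toy_scorer(tokens, start, end, label):
    if label == "PER" and start == end and tokens[start][0].isupper():
        return 1.0
    return 0.0

tokens = "Alice met bob".split()
print(extract_spans(tokens, toy_scorer, ["PER"]))  # [(0, 0, 'PER')]
```

In a real span-extractive model, the scorer would be a learned function over contextual token representations rather than a hand-written rule.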

TCBERT: A Technical Report for Chinese Topic Classification BERT

no code implementations · 21 Nov 2022 · Ting Han, Kunhao Pan, Xinyu Chen, Dingjie Song, Yuchen Fan, Xinyu Gao, Ruyi Gan, Jiaxing Zhang

Bidirectional Encoder Representations from Transformers, or BERT (Devlin et al., 2019), has been one of the base models for various NLP tasks due to its remarkable performance.

Classification · Contrastive Learning · +1

Solving Math Word Problems via Cooperative Reasoning induced Language Models

1 code implementation · 28 Oct 2022 · Xinyu Zhu, Junjie Wang, Lin Zhang, Yuxiang Zhang, Ruyi Gan, Jiaxing Zhang, Yujiu Yang

This inspires us to develop a cooperative reasoning-induced PLM for solving MWPs, called Cooperative Reasoning (CoRe), resulting in a human-like reasoning architecture with system 1 as the generator and system 2 as the verifier.

Arithmetic Reasoning · Math
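The generator-verifier architecture CoRe describes ("system 1" proposes, "system 2" checks) can be sketched in a few lines. Everything below is an illustrative simplification with hypothetical names, not CoRe's actual interface.

```python
# Minimal cooperative-reasoning loop: sample candidate reasoning paths
# from a generator, then return the one the verifier scores highest.
def solve_mwp(problem, generate, verify, num_candidates=8):
    """Return the candidate reasoning path the verifier scores highest."""
    candidates = [generate(problem) for _ in range(num_candidates)]
    return max(candidates, key=lambda path: verify(problem, path))

# Toy verifier: checks arithmetic consistency of an "a + b = c" path.
def toy_verify(problem, path):
    lhs, rhs = path.split(" = ")
    a, _, b = lhs.split()
    return 1.0 if int(a) + int(b) == int(rhs) else 0.0

# Toy generator: enumerates two fixed guesses, one wrong and one right.
guesses = iter(["2 + 3 = 6", "2 + 3 = 5"])
print(solve_mwp("What is 2 + 3?", lambda p: next(guesses), toy_verify,
                num_candidates=2))  # 2 + 3 = 5
```

In the paper's setting, both roles are played by pre-trained language models; the toy functions here only mimic the propose-then-verify control flow.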

Zero-Shot Learners for Natural Language Understanding via a Unified Multiple Choice Perspective

1 code implementation · 16 Oct 2022 · Ping Yang, Junjie Wang, Ruyi Gan, Xinyu Zhu, Lin Zhang, Ziwei Wu, Xinyu Gao, Jiaxing Zhang, Tetsuya Sakai

We propose a new paradigm for zero-shot learners that is format agnostic, i.e., it is compatible with any format and applicable to a list of language tasks, such as text classification, commonsense reasoning, coreference resolution, and sentiment analysis.

Multiple-choice · Natural Language Inference · +4
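The unified multiple-choice perspective can be sketched as follows: any labeled task is rendered as a question with candidate options, and the model picks the option it scores highest. The rendering and scoring functions below are hypothetical stand-ins for the paper's pre-trained model.

```python
# Cast a heterogeneous NLU task into one multiple-choice format.
def to_multiple_choice(text, options):
    """Render an input as one string per candidate option."""
    return [f"{text} [OPTION] {opt}" for opt in options]

def predict(text, options, score):
    """Return the option whose rendered string scores highest."""
    rendered = to_multiple_choice(text, options)
    scores = [score(r) for r in rendered]
    return options[scores.index(max(scores))]

# Toy scorer: prefer the option whose words overlap the input most.
def overlap_score(rendered):
    text, opt = rendered.split(" [OPTION] ")
    return len(set(text.lower().split()) & set(opt.lower().split()))

print(predict("This movie was great fun", ["great", "terrible"],
              overlap_score))  # great
```

Because every task shares this one format, a single scoring model can serve sentiment analysis ("great"/"terrible"), NLI ("entailment"/"contradiction"/"neutral"), and similar label sets without task-specific heads.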

Towards No.1 in CLUE Semantic Matching Challenge: Pre-trained Language Model Erlangshen with Propensity-Corrected Loss

1 code implementation · 5 Aug 2022 · Junjie Wang, Yuxiang Zhang, Ping Yang, Ruyi Gan

This report describes Erlangshen, a pre-trained language model with propensity-corrected loss that ranked No. 1 in the CLUE Semantic Matching Challenge.

Language Modelling · Masked Language Modeling

Unified BERT for Few-shot Natural Language Understanding

no code implementations · 24 Jun 2022 · Junyu Lu, Ping Yang, Ruyi Gan, Jing Yang, Jiaxing Zhang

Even as pre-trained language models share a semantic encoder, natural language understanding suffers from a diversity of output schemas.

Diversity · Natural Language Understanding

BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model

1 code implementation · BioNLP (ACL) 2022 · Hongyi Yuan, Zheng Yuan, Ruyi Gan, Jiaxing Zhang, Yutao Xie, Sheng Yu

Furthermore, we conduct ablation studies on the pretraining tasks for BioBART and find that sentence permutation has negative effects on downstream tasks.

Entity Linking · Language Modelling · +6
