Search Results for author: Junyang Lin

Found 65 papers, 41 papers with code

P-MMEval: A Parallel Multilingual Multitask Benchmark for Consistent Evaluation of LLMs

no code implementations 14 Nov 2024 Yidan Zhang, Boyi Deng, Yu Wan, Baosong Yang, Haoran Wei, Bowen Yu, Junyang Lin, Fei Huang, Jingren Zhou

Recent advancements in large language models (LLMs) showcase varied multilingual capabilities across tasks like translation, code generation, and reasoning.

Code Generation

Aligning Large Language Models via Self-Steering Optimization

1 code implementation 22 Oct 2024 Hao Xiang, Bowen Yu, Hongyu Lin, Keming Lu, Yaojie Lu, Xianpei Han, Le Sun, Jingren Zhou, Junyang Lin

The key to automated alignment lies in providing learnable and accurate preference signals for preference learning without human annotation.

Rethinking Data Selection at Scale: Random Selection is Almost All You Need

1 code implementation 12 Oct 2024 Tingyu Xia, Bowen Yu, Kai Dang, An Yang, Yuan Wu, Yuan Tian, Yi Chang, Junyang Lin

Supervised fine-tuning (SFT) is crucial for aligning Large Language Models (LLMs) with human instructions.

A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegrained Image Generation

1 code implementation 2 Oct 2024 Liang Chen, Sinan Tan, Zefan Cai, Weichu Xie, Haozhe Zhao, Yichi Zhang, Junyang Lin, Jinze Bai, Tianyu Liu, Baobao Chang

This work tackles the information loss bottleneck of vector-quantization (VQ) autoregressive image generation by introducing a novel model architecture called the 2-Dimensional Autoregression (DnD) Transformer.

Image Generation Quantization

Rotated Runtime Smooth: Training-Free Activation Smoother for accurate INT4 inference

no code implementations 30 Sep 2024 Ke Yi, Zengke Liu, Jianwei Zhang, Chengyuan Li, Tong Zhang, Junyang Lin, Jingren Zhou

Based on observations of activations from large language models, outliers can be classified into channel-wise and spike outliers.

Quantization

Analyzing and Mitigating Inconsistency in Discrete Audio Tokens for Neural Codec Language Models

no code implementations 28 Sep 2024 Wenrui Liu, Zhifang Guo, Jin Xu, YuanJun Lv, Yunfei Chu, Zhou Zhao, Junyang Lin

This inconsistency can lead to a single audio segment being represented by multiple divergent sequences, which creates confusion in neural codec language models and results in omissions and repetitions during speech generation.

Audio Generation Language Modelling

Synthesizing Text-to-SQL Data from Weak and Strong LLMs

no code implementations 6 Aug 2024 Jiaxi Yang, Binyuan Hui, Min Yang, Jian Yang, Junyang Lin, Chang Zhou

The capability gap between open-source and closed-source large language models (LLMs) remains a challenge in text-to-SQL tasks.

Domain Generalization Text-To-SQL

OpenHands: An Open Platform for AI Software Developers as Generalist Agents

2 code implementations 23 Jul 2024 Xingyao Wang, Boxuan Li, Yufan Song, Frank F. Xu, Xiangru Tang, Mingchen Zhuge, Jiayi Pan, Yueqi Song, Bowen Li, Jaskirat Singh, Hoang H. Tran, Fuqiang Li, Ren Ma, Mingzhang Zheng, Bill Qian, Yanjun Shao, Niklas Muennighoff, Yizhe Zhang, Binyuan Hui, Junyang Lin, Robert Brennan, Hao Peng, Heng Ji, Graham Neubig

We introduce OpenHands (formerly OpenDevin), a platform for the development of powerful and flexible AI agents that interact with the world in similar ways to those of a human developer: by writing code, interacting with a command line, and browsing the web.

Qwen2-Audio Technical Report

2 code implementations 15 Jul 2024 Yunfei Chu, Jin Xu, Qian Yang, Haojie Wei, Xipin Wei, Zhifang Guo, Yichong Leng, YuanJun Lv, Jinzheng He, Junyang Lin, Chang Zhou, Jingren Zhou

We introduce the latest progress of Qwen-Audio, a large-scale audio-language model called Qwen2-Audio, which is capable of accepting various audio signal inputs and performing audio analysis or direct textual responses with regard to speech instructions.

Instruction Following Language Modelling

Can Large Language Models Always Solve Easy Problems if They Can Solve Harder Ones?

1 code implementation 18 Jun 2024 Zhe Yang, Yichang Zhang, Tianyu Liu, Jian Yang, Junyang Lin, Chang Zhou, Zhifang Sui

Furthermore, we introduce a consistency score to quantitatively measure this inconsistency, and analyze the potential for improvement via a relative consistency score.

In-Context Learning

An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models

1 code implementation 11 Mar 2024 Liang Chen, Haozhe Zhao, Tianyu Liu, Shuai Bai, Junyang Lin, Chang Zhou, Baobao Chang

To this end, we introduce FastV, a versatile plug-and-play method designed to optimize computational efficiency by learning adaptive attention patterns in early layers and pruning visual tokens in subsequent ones.

Computational Efficiency Video Understanding

Routing to the Expert: Efficient Reward-guided Ensemble of Large Language Models

no code implementations 15 Nov 2023 Keming Lu, Hongyi Yuan, Runji Lin, Junyang Lin, Zheng Yuan, Chang Zhou, Jingren Zhou

Zooter is computationally efficient at inference, as it introduces only the minor overhead of a routing function compared with reward-model ranking methods.

TAG

Self-Evolved Diverse Data Sampling for Efficient Instruction Tuning

1 code implementation 14 Nov 2023 Shengguang Wu, Keming Lu, Benfeng Xu, Junyang Lin, Qi Su, Chang Zhou

The key to our data sampling technique lies in the enhancement of diversity in the chosen subsets, as the model selects new data points most distinct from any existing ones according to its current embedding space.

Diversity Instruction Following

TouchStone: Evaluating Vision-Language Models by Language Models

1 code implementation 31 Aug 2023 Shuai Bai, Shusheng Yang, Jinze Bai, Peng Wang, Xingxuan Zhang, Junyang Lin, Xinggang Wang, Chang Zhou, Jingren Zhou

Large vision-language models (LVLMs) have recently witnessed rapid advancements, exhibiting a remarkable capacity for perceiving, understanding, and processing visual information by connecting visual receptors with large language models (LLMs).

Visual Storytelling

#InsTag: Instruction Tagging for Analyzing Supervised Fine-tuning of Large Language Models

1 code implementation 14 Aug 2023 Keming Lu, Hongyi Yuan, Zheng Yuan, Runji Lin, Junyang Lin, Chuanqi Tan, Chang Zhou, Jingren Zhou

Based on this observation, we propose a data selector based on InsTag to select 6K diverse and complex samples from open-source datasets and fine-tune models on InsTag-selected data.

Diversity Instruction Following +1

ExpertPrompting: Instructing Large Language Models to be Distinguished Experts

2 code implementations 24 May 2023 Benfeng Xu, An Yang, Junyang Lin, Quan Wang, Chang Zhou, Yongdong Zhang, Zhendong Mao

The answering quality of an aligned large language model (LLM) can be drastically improved if treated with proper crafting of prompts.

In-Context Learning Instruction Following +2

ONE-PEACE: Exploring One General Representation Model Toward Unlimited Modalities

2 code implementations 18 May 2023 Peng Wang, Shijie Wang, Junyang Lin, Shuai Bai, Xiaohuan Zhou, Jingren Zhou, Xinggang Wang, Chang Zhou

In this work, we explore a scalable way for building a general representation model toward unlimited modalities.

Ranked #1 on Semantic Segmentation on ADE20K (using extra training data)

Action Classification AudioCaps +17

OFASys: A Multi-Modal Multi-Task Learning System for Building Generalist Models

1 code implementation 8 Dec 2022 Jinze Bai, Rui Men, Hao Yang, Xuancheng Ren, Kai Dang, Yichang Zhang, Xiaohuan Zhou, Peng Wang, Sinan Tan, An Yang, Zeyu Cui, Yu Han, Shuai Bai, Wenbin Ge, Jianxin Ma, Junyang Lin, Jingren Zhou, Chang Zhou

As a starting point, we provide presets of 7 different modalities and 23 highly-diverse example tasks in OFASys, with which we also develop a first-of-its-kind, single model, OFA+, that can handle text, image, speech, video, and motion data.

Multi-Task Learning

Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese

1 code implementation 2 Nov 2022 An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou

The tremendous success of CLIP (Radford et al., 2021) has promoted the research and application of contrastive learning for vision-language pretraining.

Contrastive Learning Image Classification +8

Prompt Tuning for Generative Multimodal Pretrained Models

1 code implementation 4 Aug 2022 Hao Yang, Junyang Lin, An Yang, Peng Wang, Chang Zhou, Hongxia Yang

Prompt tuning has become a new paradigm for model tuning and it has demonstrated success in natural language pretraining and even vision pretraining.

Image Captioning Visual Entailment +1

Instance-wise Prompt Tuning for Pretrained Language Models

no code implementations 4 Jun 2022 Yuezihan Jiang, Hao Yang, Junyang Lin, Hanyu Zhao, An Yang, Chang Zhou, Hongxia Yang, Zhi Yang, Bin Cui

Prompt Learning has recently gained great popularity in bridging the gap between pretraining tasks and various downstream tasks.

Modality Competition: What Makes Joint Training of Multi-modal Network Fail in Deep Learning? (Provably)

no code implementations 23 Mar 2022 Yu Huang, Junyang Lin, Chang Zhou, Hongxia Yang, Longbo Huang

Recently, it has been observed that the best uni-modal network outperforms the jointly trained multi-modal network, which is counter-intuitive since multiple signals generally bring more information.

KNAS: Green Neural Architecture Search

1 code implementation 26 Nov 2021 Jingjing Xu, Liang Zhao, Junyang Lin, Rundong Gao, Xu Sun, Hongxia Yang

Many existing neural architecture search (NAS) solutions rely on downstream training for architecture evaluation, which takes enormous computations.

Image Classification Neural Architecture Search +2

M6-10T: A Sharing-Delinking Paradigm for Efficient Multi-Trillion Parameter Pretraining

no code implementations 8 Oct 2021 Junyang Lin, An Yang, Jinze Bai, Chang Zhou, Le Jiang, Xianyan Jia, Ang Wang, Jie Zhang, Yong Li, Wei Lin, Jingren Zhou, Hongxia Yang

Recent expeditious developments in deep learning algorithms, distributed training, and even hardware design for large models have enabled the training of extreme-scale models such as GPT-3 and Switch Transformer, which possess hundreds of billions or even trillions of parameters.

Sketch and Refine: Towards Faithful and Informative Table-to-Text Generation

no code implementations Findings (ACL) 2021 Peng Wang, Junyang Lin, An Yang, Chang Zhou, Yichang Zhang, Jingren Zhou, Hongxia Yang

Experimental results demonstrate that our method outperforms the previous state-of-the-art methods in both automatic and human evaluation, especially on coverage and faithfulness.

Descriptive Table-to-Text Generation

M6-T: Exploring Sparse Expert Models and Beyond

no code implementations 31 May 2021 An Yang, Junyang Lin, Rui Men, Chang Zhou, Le Jiang, Xianyan Jia, Ang Wang, Jie Zhang, Jiamang Wang, Yong Li, Di Zhang, Wei Lin, Lin Qu, Jingren Zhou, Hongxia Yang

Mixture-of-Experts (MoE) models can achieve promising results with an outrageously large number of parameters but constant computation cost, and thus they have become a trend in model scaling.

Playing the Game of 2048

Connecting Language and Vision for Natural Language-Based Vehicle Retrieval

1 code implementation 31 May 2021 Shuai Bai, Zhedong Zheng, Xiaohan Wang, Junyang Lin, Zhu Zhang, Chang Zhou, Yi Yang, Hongxia Yang

In this paper, we apply one new modality, i.e., the language description, to search for the vehicle of interest and explore the potential of this task in the real-world scenario.

Language Modelling Management +2

Learning Relation Alignment for Calibrated Cross-modal Retrieval

1 code implementation ACL 2021 Shuhuai Ren, Junyang Lin, Guangxiang Zhao, Rui Men, An Yang, Jingren Zhou, Xu Sun, Hongxia Yang

To bridge the semantic gap between the two modalities, previous studies mainly focus on word-region alignment at the object level, lacking the matching between the linguistic relation among the words and the visual relation among the regions.

Cross-Modal Retrieval Image-text Retrieval +4

CogView: Mastering Text-to-Image Generation via Transformers

4 code implementations NeurIPS 2021 Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, Jie Tang

Text-to-Image generation in the general domain has long been an open problem, which requires both a powerful generative model and cross-modal understanding.

Ranked #53 on Text-to-Image Generation on MS COCO (using extra training data)

Super-Resolution Zero-Shot Text-to-Image Generation

M6: A Chinese Multimodal Pretrainer

no code implementations 1 Mar 2021 Junyang Lin, Rui Men, An Yang, Chang Zhou, Ming Ding, Yichang Zhang, Peng Wang, Ang Wang, Le Jiang, Xianyan Jia, Jie Zhang, Jianwei Zhang, Xu Zou, Zhikang Li, Xiaodong Deng, Jie Liu, Jinbao Xue, Huiling Zhou, Jianxin Ma, Jin Yu, Yong Li, Wei Lin, Jingren Zhou, Jie Tang, Hongxia Yang

In this work, we construct the largest dataset for multimodal pretraining in Chinese, which consists of over 1.9TB images and 292GB texts that cover a wide range of domains.

Image Generation

A Gradient-based Kernel Approach for Efficient Network Architecture Search

no code implementations 1 Jan 2021 Jingjing Xu, Liang Zhao, Junyang Lin, Xu Sun, Hongxia Yang

Inspired by our new finding, we explore a simple yet effective network architecture search (NAS) approach that leverages gradient correlation and gradient values to find well-performing architectures.

Image Classification text-classification +1

Graph-based Multi-hop Reasoning for Long Text Generation

no code implementations 28 Sep 2020 Liang Zhao, Jingjing Xu, Junyang Lin, Yichang Zhang, Hongxia Yang, Xu Sun

The reasoning module is responsible for searching skeleton paths from a knowledge graph to imitate the imagination process in human writing for semantic transfer.

Review Generation Sentence +1

InterBERT: Vision-and-Language Interaction for Multi-modal Pretraining

no code implementations30 Mar 2020 Junyang Lin, An Yang, Yichang Zhang, Jie Liu, Jingren Zhou, Hongxia Yang

We pretrain the model with three pretraining tasks, including masked segment modeling (MSM), masked region modeling (MRM), and image-text matching (ITM), and finetune the model on a series of vision-and-language downstream tasks.

Image Retrieval Image-text matching +3

Explicit Sparse Transformer: Concentrated Attention Through Explicit Selection

2 code implementations 25 Dec 2019 Guangxiang Zhao, Junyang Lin, Zhiyuan Zhang, Xuancheng Ren, Qi Su, Xu Sun

Self-attention based Transformer has demonstrated the state-of-the-art performances in a number of natural language processing tasks.

Image Captioning Language Modelling +2

Understanding and Improving Layer Normalization

2 code implementations NeurIPS 2019 Jingjing Xu, Xu Sun, Zhiyuan Zhang, Guangxiang Zhao, Junyang Lin

Unlike them, we find that the derivatives of the mean and variance are more important than forward normalization by re-centering and re-scaling backward gradients.

Machine Translation Translation

Specificity-Driven Cascading Approach for Unsupervised Sentiment Modification

no code implementations IJCNLP 2019 Pengcheng Yang, Junyang Lin, Jingjing Xu, Jun Xie, Qi Su, Xu Sun

The task of unsupervised sentiment modification aims to reverse the sentiment polarity of the input text while preserving its semantic content without any parallel data.

Specificity

Sparse Transformer: Concentrated Attention Through Explicit Selection

no code implementations 25 Sep 2019 Guangxiang Zhao, Junyang Lin, Zhiyuan Zhang, Xuancheng Ren, Xu Sun

Extensive experimental results on a series of natural language processing tasks, including neural machine translation, image captioning, and language modeling, all demonstrate the advantages of Sparse Transformer in model performance.

Image Captioning Language Modelling +2

Towards Knowledge-Based Recommender Dialog System

1 code implementation IJCNLP 2019 Qibin Chen, Junyang Lin, Yichang Zhang, Ming Ding, Yukuo Cen, Hongxia Yang, Jie Tang

In this paper, we propose a novel end-to-end framework called KBRD, which stands for Knowledge-Based Recommender Dialog System.

Recommendation Systems Text Generation

A Deep Reinforced Sequence-to-Set Model for Multi-Label Classification

1 code implementation ACL 2019 Pengcheng Yang, Fuli Luo, Shuming Ma, Junyang Lin, Xu Sun

In this way, we can reduce the dependence of the model on the label order, as well as capture high-order correlations between labels.

General Classification Multi-Label Classification +1

Towards Knowledge-Based Personalized Product Description Generation in E-commerce

4 code implementations 29 Mar 2019 Qibin Chen, Junyang Lin, Yichang Zhang, Hongxia Yang, Jingren Zhou, Jie Tang

In order to make the description both informative and personalized, KOBE considers a variety of important factors during text generation, including product aspects, user categories, and knowledge base, etc.

Text Generation

An Auto-Encoder Matching Model for Learning Utterance-Level Semantic Dependency in Dialogue Generation

1 code implementation EMNLP 2018 Liangchen Luo, Jingjing Xu, Junyang Lin, Qi Zeng, Xu Sun

Different from conventional text generation tasks, the mapping between inputs and responses in conversations is more complicated, which highly demands the understanding of utterance-level semantic dependency, a relation between the whole meanings of inputs and outputs.

Dialogue Generation

Learning When to Concentrate or Divert Attention: Self-Adaptive Attention Temperature for Neural Machine Translation

1 code implementation EMNLP 2018 Junyang Lin, Xu Sun, Xuancheng Ren, Muyu Li, Qi Su

Most of the Neural Machine Translation (NMT) models are based on the sequence-to-sequence (Seq2Seq) model with an encoder-decoder framework equipped with the attention mechanism.

Decoder Machine Translation +2

Deconvolution-Based Global Decoding for Neural Machine Translation

1 code implementation COLING 2018 Junyang Lin, Xu Sun, Xuancheng Ren, Shuming Ma, Jinsong Su, Qi Su

A great proportion of sequence-to-sequence (Seq2Seq) models for Neural Machine Translation (NMT) adopt Recurrent Neural Network (RNN) to generate translation word by word following a sequential order.

Machine Translation NMT +1

Bag-of-Words as Target for Neural Machine Translation

1 code implementation ACL 2018 Shuming Ma, Xu Sun, Yizhong Wang, Junyang Lin

However, most existing neural machine translation models use only one of the correct translations as the target, and the other correct sentences are penalized as incorrect during training.

Machine Translation Sentence +1

Global Encoding for Abstractive Summarization

4 code implementations ACL 2018 Junyang Lin, Xu Sun, Shuming Ma, Qi Su

To tackle the problem, we propose a global encoding framework, which controls the information flow from the encoder to the decoder based on the global information of the source context.

Abstractive Text Summarization Decoder

Decoding-History-Based Adaptive Control of Attention for Neural Machine Translation

no code implementations 6 Feb 2018 Junyang Lin, Shuming Ma, Qi Su, Xu Sun

ACA learns to control the attention by keeping track of the decoding history and the current information with a memory vector, so that the model can take the translated contents and the current information into consideration.

Decoder Machine Translation +2
