Search Results for author: Linxin Song

Found 13 papers, 7 papers with code

Template Matters: Understanding the Role of Instruction Templates in Multimodal Language Model Evaluation and Training

1 code implementation • 11 Dec 2024 • Shijian Wang, Linxin Song, Jieyu Zhang, Ryotaro Shimizu, Ao Luo, Li Yao, Cunjian Chen, Julian McAuley, Hanqian Wu

Models tuned on our augmented dataset achieve the best overall performance compared with same-scale MLMs tuned on datasets up to 75 times larger, highlighting the importance of instruction templates in MLM training.

Language Modeling

Disentangling Likes and Dislikes in Personalized Generative Explainable Recommendation

no code implementations • 17 Oct 2024 • Ryotaro Shimizu, Takashi Wada, Yu Wang, Johannes Kruse, Sean O'Brien, Sai HtaungKham, Linxin Song, Yuya Yoshikawa, Yuki Saito, Fugee Tsung, Masayuki Goto, Julian McAuley

Specifically, we construct the datasets by explicitly extracting users' positive and negative opinions from their post-purchase reviews using an LLM, and propose to evaluate systems based on whether the generated explanations 1) align well with the users' sentiments, and 2) accurately identify both positive and negative opinions of users on the target items.

Explainable Recommendation, Text Generation
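The evaluation idea above — checking whether a generated explanation identifies both the positive and negative opinions extracted from a user's reviews — can be sketched minimally. The token-overlap matcher below is a hypothetical stand-in for the paper's LLM-based extraction and judging; function names and the threshold are illustrative assumptions, not the paper's protocol.

```python
def covers(opinion, explanation, threshold=0.5):
    # Rough proxy: fraction of the opinion's tokens that appear in the explanation.
    # (A stand-in for LLM-based matching; purely illustrative.)
    op = set(opinion.lower().split())
    ex = set(explanation.lower().split())
    return len(op & ex) / max(len(op), 1) >= threshold

def score_explanation(explanation, positives, negatives):
    # How many of the extracted positive and negative opinions does the
    # explanation recover? Reported separately, as the paper evaluates both sides.
    pos_recall = sum(covers(o, explanation) for o in positives) / max(len(positives), 1)
    neg_recall = sum(covers(o, explanation) for o in negatives) / max(len(negatives), 1)
    return {"pos_recall": pos_recall, "neg_recall": neg_recall}
```

Scoring the two sides separately makes it visible when a system only parrots likes and omits dislikes.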

Explaining Length Bias in LLM-Based Preference Evaluations

no code implementations • 1 Jul 2024 • Zhengyu Hu, Linxin Song, Jieyu Zhang, Zheyuan Xiao, Tianfu Wang, Zhengyu Chen, Nicholas Jing Yuan, Jianxun Lian, Kaize Ding, Hui Xiong

The use of large language models (LLMs) as judges, particularly in preference comparisons, has become widespread, but this approach exhibits a notable bias toward longer responses, undermining the reliability of such evaluations.

Language Modelling, Large Language Model

Adaptive In-conversation Team Building for Language Model Agents

no code implementations • 29 May 2024 • Linxin Song, Jiale Liu, Jieyu Zhang, Shaokun Zhang, Ao Luo, Shijian Wang, Qingyun Wu, Chi Wang

Leveraging multiple large language model (LLM) agents has been shown to be a promising approach for tackling complex tasks, while the effective design of multiple agents for a particular application remains an art.

Diversity, Language Modeling, +3

Offline Training of Language Model Agents with Functions as Learnable Weights

1 code implementation • 17 Feb 2024 • Shaokun Zhang, Jieyu Zhang, Jiale Liu, Linxin Song, Chi Wang, Ranjay Krishna, Qingyun Wu

Researchers and practitioners have recently reframed powerful Large Language Models (LLMs) as agents, enabling them to automate complex tasks largely via the use of specialized functions.

Language Modeling

Better Explain Transformers by Illuminating Important Information

1 code implementation • 18 Jan 2024 • Linxin Song, Yan Cui, Ao Luo, Freddy Lecue, Irene Li

Transformer-based models excel in various natural language processing (NLP) tasks, attracting countless efforts to explain their inner workings.

Question Answering

NLPBench: Evaluating Large Language Models on Solving NLP Problems

1 code implementation • 27 Sep 2023 • Linxin Song, Jieyu Zhang, Lechao Cheng, Pengyuan Zhou, Tianyi Zhou, Irene Li

Recent developments in large language models (LLMs) have shown promise in enhancing the capabilities of natural language processing (NLP).

Benchmarking, Math

SCP: Spherical-Coordinate-based Learned Point Cloud Compression

no code implementations • 24 Aug 2023 • Ao Luo, Linxin Song, Keisuke Nonaka, Kyohei Unno, Heming Sun, Masayuki Goto, Jiro Katto

In recent years, the task of learned point cloud compression has gained prominence.

Taming Small-sample Bias in Low-budget Active Learning

no code implementations • 19 Jun 2023 • Linxin Song, Jieyu Zhang, Xiaotian Lu, Tianyi Zhou

Instead of tuning the coefficient for each query round, which is sensitive and time-consuming, we propose curriculum Firth bias reduction (CHAIN), which automatically adapts the coefficient to the training process.

Active Learning
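The abstract describes adapting the Firth bias-reduction coefficient over training rather than tuning it per query round. The actual CHAIN schedule is defined in the paper; as a generic illustration of the idea, a curriculum that starts with a strong penalty (when labeled data is scarce) and anneals it might look like this. The cosine form and parameter names here are assumptions, not the paper's method.

```python
import math

def firth_coefficient(step, total_steps, lam_max=1.0):
    # Hypothetical curriculum: a strong Firth penalty early in training to
    # tame small-sample bias, annealed toward zero as training progresses.
    # (Illustrative only; CHAIN's actual schedule is defined in the paper.)
    progress = min(step / max(total_steps, 1), 1.0)
    return lam_max * 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay
```

In use, the returned coefficient would scale a Firth-style penalty term added to the training loss at each step.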

Leveraging Instance Features for Label Aggregation in Programmatic Weak Supervision

2 code implementations • 6 Oct 2022 • Jieyu Zhang, Linxin Song, Alexander Ratner

In particular, it is built on a mixture of Bayesian label models, each corresponding to a global pattern of correlation, and the coefficients of the mixture components are predicted by a Gaussian Process classifier based on instance features.

Variational Inference
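The mixture described above — several label models, each capturing a global correlation pattern, combined with per-instance coefficients predicted from instance features — can be sketched in a few lines. Below, a linear-plus-softmax gate is a simplified stand-in for the paper's Gaussian Process classifier; shapes and names are illustrative assumptions.

```python
import numpy as np

def aggregate(label_posteriors, instance_features, gate_weights):
    # label_posteriors: (K, N, C) class posteriors from K Bayesian label models.
    # instance_features: (N, D); gate_weights: (D, K).
    # A linear + softmax gate stands in for the paper's Gaussian Process
    # classifier that predicts the mixture coefficients from instance features.
    logits = instance_features @ gate_weights            # (N, K)
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    coefs = np.exp(logits)
    coefs /= coefs.sum(axis=1, keepdims=True)            # per-instance mixture weights
    # Convex combination of the K label models' posteriors per instance.
    return np.einsum("nk,knc->nc", coefs, label_posteriors)
```

Because the coefficients depend on instance features, different regions of the input space can trust different correlation patterns among the labeling sources.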

Adaptive Ranking-based Sample Selection for Weakly Supervised Class-imbalanced Text Classification

2 code implementations • 6 Oct 2022 • Linxin Song, Jieyu Zhang, Tianxiang Yang, Masayuki Goto

To obtain large amounts of training labels inexpensively, researchers have recently adopted the weak supervision (WS) paradigm, which leverages labeling rules to synthesize training labels rather than relying on individual annotations, achieving competitive results on natural language processing (NLP) tasks.

Text Classification
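The weak-supervision setup above synthesizes training labels from labeling rules rather than manual annotation. The simplest aggregation baseline in this paradigm is majority vote over the rules' (possibly abstaining) outputs; the sketch below shows that baseline only — the paper's sample-selection method goes well beyond it.

```python
from collections import Counter

ABSTAIN = -1  # conventional marker for a rule that does not fire on an instance

def majority_vote(label_matrix):
    # label_matrix: one row per instance, one column per labeling rule.
    # Returns a synthesized label per instance, or ABSTAIN if no rule fired.
    labels = []
    for row in label_matrix:
        votes = Counter(v for v in row if v != ABSTAIN)
        labels.append(votes.most_common(1)[0][0] if votes else ABSTAIN)
    return labels
```

Instances where all rules abstain receive no synthesized label, which is one source of the class imbalance the paper addresses.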
