Search Results for author: Yuwei Yin

Found 20 papers, 13 papers with code

SWI: Speaking with Intent in Large Language Models

1 code implementation · 27 Mar 2025 · Yuwei Yin, EunJeong Hwang, Giuseppe Carenini

Intent, typically clearly formulated and planned, functions as a cognitive framework for reasoning and problem-solving.

Mathematical Reasoning · Question Answering +1

ARR: Question Answering with Large Language Models via Analyzing, Retrieving, and Reasoning

1 code implementation · 7 Feb 2025 · Yuwei Yin, Giuseppe Carenini

This paper introduces ARR, an intuitive and effective zero-shot prompting method that explicitly incorporates three key steps in QA solving: analyzing the intent of the question, retrieving relevant information, and reasoning step by step.

Multiple-choice Question Answering
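Based on the abstract, ARR amounts to appending a zero-shot trigger phrase that walks the model through the three steps. A minimal sketch for multiple-choice QA follows; the trigger wording paraphrases the steps named in the abstract and is illustrative, not the paper's verbatim phrase:

```python
def build_arr_prompt(question: str, options: list[str]) -> str:
    """Assemble a zero-shot ARR-style prompt that asks the model to
    analyze the question's intent, retrieve relevant information,
    and reason step by step before answering."""
    # Label options (A), (B), (C), ... one per line.
    option_lines = "\n".join(
        f"({chr(ord('A') + i)}) {opt}" for i, opt in enumerate(options)
    )
    # Illustrative trigger covering the three ARR steps.
    trigger = (
        "Answer: Let's analyze the intent of the question, "
        "find relevant information, and answer the question "
        "with step-by-step reasoning."
    )
    return f"Question: {question}\n{option_lines}\n{trigger}"

prompt = build_arr_prompt(
    "Which planet is closest to the Sun?",
    ["Venus", "Mercury", "Earth", "Mars"],
)
```

The resulting string would be sent as-is to an instruction-tuned LLM; no few-shot examples are needed, which is what makes the method zero-shot.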

FuzzCoder: Byte-level Fuzzing Test via Large Language Model

1 code implementation · 3 Sep 2024 · Liqun Yang, Jian Yang, Chaoren Wei, Guanglin Niu, Ge Zhang, Yunli Wang, Linzheng Chai, Wanxu Xia, Hongcheng Guo, Shun Zhang, Jiaheng Liu, Yuwei Yin, Junran Peng, Jiaxin Ma, Liang Sun, Zhoujun Li

In this work, we propose to adopt fine-tuned large language models (FuzzCoder) to learn patterns in the input files from successful attacks to guide future fuzzing explorations.

Language Modeling · Language Modelling +2

UniCoder: Scaling Code Large Language Model via Universal Code

no code implementations · 24 Jun 2024 · Tao Sun, Linzheng Chai, Jian Yang, Yuwei Yin, Hongcheng Guo, Jiaheng Liu, Bing Wang, Liqun Yang, Zhoujun Li

When applying LLMs for code generation, recent works mainly focus on directing the models to articulate intermediate natural-language reasoning steps, as in chain-of-thought (CoT) prompting, and then output code with the natural language or other structured intermediate steps.

Code Translation · Language Modeling +3

Red Teaming Visual Language Models

no code implementations · 23 Jan 2024 · Mukai Li, Lei Li, Yuwei Yin, Masood Ahmed, Zhenguang Liu, Qi Liu

Additionally, we simply apply red-teaming alignment to LLaVA-v1.5 with Supervised Fine-tuning (SFT) using RTVLM, which improves the model's performance by 10% on the RTVLM test set and by 13% on MM-Hal, with no noticeable decline on MM-Bench, surpassing other LLaVA-based models trained on regular alignment data.

Fairness · Red Teaming

FinPT: Financial Risk Prediction with Profile Tuning on Pretrained Foundation Models

1 code implementation · 22 Jul 2023 · Yuwei Yin, Yazheng Yang, Jian Yang, Qi Liu

To tackle these issues, we propose FinPT and FinBench: the former is a novel approach to financial risk prediction that conducts Profile Tuning on large pretrained foundation models, and the latter is a set of high-quality datasets on financial risks such as default, fraud, and churn.

M$^3$IT: A Large-Scale Dataset towards Multi-Modal Multilingual Instruction Tuning

no code implementations · 7 Jun 2023 · Lei Li, Yuwei Yin, Shicheng Li, Liang Chen, Peiyi Wang, Shuhuai Ren, Mukai Li, Yazheng Yang, Jingjing Xu, Xu Sun, Lingpeng Kong, Qi Liu

To tackle this challenge and promote research in the vision-language field, we introduce the Multi-Modal, Multilingual Instruction Tuning (M$^3$IT) dataset, designed to optimize VLM alignment with human instructions.

World Knowledge

GanLM: Encoder-Decoder Pre-training with an Auxiliary Discriminator

1 code implementation · 20 Dec 2022 · Jian Yang, Shuming Ma, Li Dong, Shaohan Huang, Haoyang Huang, Yuwei Yin, Dongdong Zhang, Liqun Yang, Furu Wei, Zhoujun Li

Inspired by the idea of Generative Adversarial Networks (GANs), we propose a GAN-style model for encoder-decoder pre-training by introducing an auxiliary discriminator, unifying the ability of language understanding and generation in a single model.

Decoder · Denoising +2

GTrans: Grouping and Fusing Transformer Layers for Neural Machine Translation

1 code implementation · 29 Jul 2022 · Jian Yang, Yuwei Yin, Liqun Yang, Shuming Ma, Haoyang Huang, Dongdong Zhang, Furu Wei, Zhoujun Li

The Transformer architecture, built by stacking sequences of encoder and decoder layers, has driven significant progress in neural machine translation.

Decoder · Machine Translation +1

HLT-MT: High-resource Language-specific Training for Multilingual Neural Machine Translation

1 code implementation · 11 Jul 2022 · Jian Yang, Yuwei Yin, Shuming Ma, Dongdong Zhang, Zhoujun Li, Furu Wei

Nonetheless, multilingual training is plagued by interference in shared parameters, caused by negative transfer among different translation directions, especially for high-resource languages.

Decoder · Machine Translation +1

Exploring Entity Interactions for Few-Shot Relation Learning (Student Abstract)

no code implementations · 4 May 2022 · Yi Liang, Shuai Zhao, Bo Cheng, Yuwei Yin, Hao Yang

Few-shot relation learning refers to inferring facts about relations from only a limited number of observed triples.

Metric Learning · Relation

Multilingual Agreement for Multilingual Neural Machine Translation

no code implementations · ACL 2021 · Jian Yang, Yuwei Yin, Shuming Ma, Haoyang Huang, Dongdong Zhang, Zhoujun Li, Furu Wei

Although multilingual neural machine translation (MNMT) enables translation across multiple languages, the training process is based on independent multilingual objectives.

Machine Translation · Translation

Toward Tweet Entity Linking with Heterogeneous Information Networks

1 code implementation · IEEE Transactions on Knowledge and Data Engineering 2021 · Wei Shen, Yuwei Yin, Yang Yang, Jiawei Han, Jianyong Wang, Xiaojie Yuan

The task of linking an entity mention in a tweet to its corresponding entity in a heterogeneous information network is of great importance for enriching such networks with the abundant and fresh knowledge embedded in tweets.

Entity Linking · Metric Learning
