Search Results for author: Wenxiang Jiao

Found 41 papers, 31 papers with code

VisFactor: Benchmarking Fundamental Visual Cognition in Multimodal Large Language Models

1 code implementation • 23 Feb 2025 • Jen-tse Huang, Dasen Dai, Jen-Yuan Huang, Youliang Yuan, Xiaoyuan Liu, Wenxuan Wang, Wenxiang Jiao, Pinjia He, Zhaopeng Tu

Multimodal Large Language Models (MLLMs) have demonstrated remarkable advancements in multimodal understanding; however, their fundamental visual cognitive abilities remain largely underexplored.

Benchmarking · Spatial Reasoning +1

Findings of the WMT 2024 Shared Task on Discourse-Level Literary Translation

1 code implementation • 16 Dec 2024 • Longyue Wang, Siyou Liu, Chenyang Lyu, Wenxiang Jiao, Xing Wang, Jiahao Xu, Zhaopeng Tu, Yan Gu, WeiYu Chen, Minghao Wu, Liting Zhou, Philipp Koehn, Andy Way, Yulin Yuan

Following last year's event, we have continued to host the WMT literary translation shared task this year, the second edition of the Discourse-Level Literary Translation task.

Translation

DRPruning: Efficient Large Language Model Pruning through Distributionally Robust Optimization

1 code implementation • 21 Nov 2024 • Hexuan Deng, Wenxiang Jiao, Xuebo Liu, Min Zhang, Zhaopeng Tu

To address this, we propose DRPruning, which incorporates distributionally robust optimization to restore balanced performance across domains, along with further improvements to enhance robustness.

Language Modeling · Language Modelling +1
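
The core idea above, distributionally robust reweighting of training domains, can be illustrated with a small sketch. This is a generic exponentiated-gradient DRO update under assumed loss inputs, not the released DRPruning code:

```python
import numpy as np

def dro_reweight(weights, domain_losses, ref_losses, eta=0.1):
    """One DRO-style step: upweight domains whose loss exceeds a reference."""
    excess = np.asarray(domain_losses) - np.asarray(ref_losses)
    weights = weights * np.exp(eta * excess)  # exponentiated-gradient update
    return weights / weights.sum()            # keep a valid sampling distribution

# Hypothetical usage with three data domains, starting from uniform weights.
w = np.ones(3) / 3
w = dro_reweight(w, domain_losses=[2.1, 3.4, 2.8], ref_losses=[2.0, 2.5, 2.9])
print(w)  # the lagging second domain gains sampling mass
```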

On the Shortcut Learning in Multilingual Neural Machine Translation

no code implementations • 15 Nov 2024 • Wenxuan Wang, Wenxiang Jiao, Jen-tse Huang, Zhaopeng Tu, Michael R. Lyu

By carefully designing experiments on different MNMT scenarios and models, we attribute the off-target issue to the overfitting of the shortcuts of (non-centric, centric) language mappings.

Attribute · Machine Translation +1

NewTerm: Benchmarking Real-Time New Terms for Large Language Models with Annual Updates

1 code implementation • 28 Oct 2024 • Hexuan Deng, Wenxiang Jiao, Xuebo Liu, Min Zhang, Zhaopeng Tu

Despite their remarkable abilities in various tasks, large language models (LLMs) still struggle with real-time information (e.g., new facts and terms) due to the knowledge cutoff in their development process.

Benchmarking

Chain-of-Jailbreak Attack for Image Generation Models via Editing Step by Step

no code implementations • 4 Oct 2024 • Wenxuan Wang, Kuiyi Gao, Zihan Jia, Youliang Yuan, Jen-tse Huang, Qiuzhi Liu, Shuai Wang, Wenxiang Jiao, Zhaopeng Tu

To assess the safety of existing models, we introduce a novel jailbreaking method called Chain-of-Jailbreak (CoJ) attack, which compromises image generation models through a step-by-step editing process.

Image Generation

Learning to Ask: When LLM Agents Meet Unclear Instruction

no code implementations • 31 Aug 2024 • Wenxuan Wang, Juluan Shi, Zixuan Ling, Yuk-Kit Chan, Chaozheng Wang, Cheryl Lee, Youliang Yuan, Jen-tse Huang, Wenxiang Jiao, Michael R. Lyu

Equipped with the capability to call functions, modern large language models (LLMs) can leverage external tools for addressing a range of tasks unattainable through language skills alone.

Refuse Whenever You Feel Unsafe: Improving Safety in LLMs via Decoupled Refusal Training

2 code implementations • 12 Jul 2024 • Youliang Yuan, Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Jiahao Xu, Tian Liang, Pinjia He, Zhaopeng Tu

DeRTa incorporates two novel components: (1) Maximum Likelihood Estimation (MLE) with Harmful Response Prefix, which trains models to recognize and avoid unsafe content by prepending a segment of a harmful response to a safe response, and (2) Reinforced Transition Optimization (RTO), which equips models with the ability to transition from potential harm to safety refusal consistently throughout the harmful response sequence.

Position
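
As a rough illustration of the two components described above, the sketch below assembles DeRTa-style training examples; the splitting rule and field names are assumptions for exposition, not the paper's exact recipe:

```python
import random

def build_derta_examples(query, harmful_response, safe_response):
    harmful_tokens = harmful_response.split()
    # (1) MLE with a harmful-response prefix: a random chunk of the harmful
    # answer is placed before the safe response, teaching the model to
    # produce a safe continuation even after unsafe content has started.
    k = random.randint(1, len(harmful_tokens))
    mle_example = {
        "prompt": query,
        "target": " ".join(harmful_tokens[:k] + [safe_response]),
    }
    # (2) RTO intuition: supervise a transition to refusal at every position
    # of the harmful response, so refusal is learned throughout the sequence.
    rto_examples = [
        {"prompt": query, "target": " ".join(harmful_tokens[:i] + [safe_response])}
        for i in range(len(harmful_tokens) + 1)
    ]
    return mle_example, rto_examples
```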

CoAct: A Global-Local Hierarchy for Autonomous Agent Collaboration

2 code implementations • 19 Jun 2024 • Xinming Hou, Mingming Yang, Wenxiang Jiao, Xing Wang, Zhaopeng Tu, Wayne Xin Zhao

Existing LLMs exhibit remarkable performance on various NLP tasks, but still struggle with complex real-world tasks, even when equipped with advanced strategies like CoT and ReAct.

Improving Gloss-free Sign Language Translation by Reducing Representation Density

1 code implementation • 23 May 2024 • Jinhui Ye, Xing Wang, Wenxiang Jiao, Junwei Liang, Hui Xiong

In this paper, we identify a representation density problem that could be a bottleneck in restricting the performance of gloss-free SLT.

Contrastive Learning · Gloss-free Sign Language Translation +2

Unsupervised Sign Language Translation and Generation

no code implementations • 12 Feb 2024 • Zhengsheng Guo, Zhiwei He, Wenxiang Jiao, Xing Wang, Rui Wang, Kehai Chen, Zhaopeng Tu, Yong Xu, Min Zhang

Motivated by the success of unsupervised neural machine translation (UNMT), we introduce an unsupervised sign language translation and generation network (USLNet), which learns from abundant single-modality (text and video) data without parallel sign language data.

Machine Translation · Sign Language Translation +1

Improving Machine Translation with Human Feedback: An Exploration of Quality Estimation as a Reward Model

1 code implementation • 23 Jan 2024 • Zhiwei He, Xing Wang, Wenxiang Jiao, Zhuosheng Zhang, Rui Wang, Shuming Shi, Zhaopeng Tu

In this work, we investigate the potential of employing the QE model as the reward model to predict human preferences for feedback training.

Machine Translation · Translation
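
A minimal sketch of that idea, using a reference-free QE score as the reward in a REINFORCE-style update; the model.sample_translation and qe_score helpers are hypothetical placeholders, not the authors' code:

```python
def feedback_step(model, optimizer, src_batch, qe_score, baseline=0.0):
    """One REINFORCE-style update driven by a QE model's reward."""
    loss = 0.0
    for src in src_batch:
        hyp, logprob = model.sample_translation(src)  # sampled hypothesis + its log-prob
        reward = qe_score(src, hyp)                   # reference-free quality estimate
        loss = loss - (reward - baseline) * logprob   # higher QE => reinforce the sample
    loss = loss / len(src_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```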

The Earth is Flat? Unveiling Factual Errors in Large Language Models

no code implementations • 1 Jan 2024 • Wenxuan Wang, Juluan Shi, Zhaopeng Tu, Youliang Yuan, Jen-tse Huang, Wenxiang Jiao, Michael R. Lyu

Current methods for evaluating LLMs' veracity are limited by test data leakage or the need for extensive human labor, hindering efficient and accurate error detection.

In-Context Learning · Multiple-choice

LogicAsker: Evaluating and Improving the Logical Reasoning Ability of Large Language Models

1 code implementation • 1 Jan 2024 • Yuxuan Wan, Wenxuan Wang, Yiliu Yang, Youliang Yuan, Jen-tse Huang, Pinjia He, Wenxiang Jiao, Michael R. Lyu

We introduce LogicAsker, a novel approach for evaluating and enhancing the logical reasoning capabilities of large language models (LLMs) such as ChatGPT and GPT-4.

Code Generation · In-Context Learning +2

Leveraging Word Guessing Games to Assess the Intelligence of Large Language Models

1 code implementation • 31 Oct 2023 • Tian Liang, Zhiwei He, Jen-tse Huang, Wenxuan Wang, Wenxiang Jiao, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi, Xing Wang

Ideally, an advanced agent should possess the ability to accurately describe a given word using an aggressive description while concurrently maximizing confusion in the conservative description, enhancing its participation in the game.

Not All Countries Celebrate Thanksgiving: On the Cultural Dominance in Large Language Models

no code implementations • 19 Oct 2023 • Wenxuan Wang, Wenxiang Jiao, Jingyuan Huang, Ruyi Dai, Jen-tse Huang, Zhaopeng Tu, Michael R. Lyu

This paper identifies a cultural dominance issue within large language models (LLMs) due to the predominant use of English data in model training (e.g., ChatGPT).


All Languages Matter: On the Multilingual Safety of Large Language Models

1 code implementation • 2 Oct 2023 • Wenxuan Wang, Zhaopeng Tu, Chang Chen, Youliang Yuan, Jen-tse Huang, Wenxiang Jiao, Michael R. Lyu

In this work, we build the first multilingual safety benchmark for LLMs, XSafety, in response to the global deployment of LLMs in practice.

Safety Alignment

Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench

1 code implementation • 2 Oct 2023 • Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu

Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education.

Benchmarking · Safety Alignment

GPT-4 Is Too Smart To Be Safe: Stealthy Chat with LLMs via Cipher

1 code implementation • 12 Aug 2023 • Youliang Yuan, Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Pinjia He, Shuming Shi, Zhaopeng Tu

We propose a novel framework CipherChat to systematically examine the generalizability of safety alignment to non-natural languages -- ciphers.

Ethics · Red Teaming +1

Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench

1 code implementation • 7 Aug 2023 • Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu

Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse.

Revisiting the Reliability of Psychological Scales on Large Language Models

1 code implementation • 31 May 2023 • Jen-tse Huang, Wenxiang Jiao, Man Ho Lam, Eric John Li, Wenxuan Wang, Michael R. Lyu

Recent research has focused on examining Large Language Models' (LLMs) characteristics from a psychological standpoint, acknowledging the necessity of understanding their behavioral characteristics.

Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate

1 code implementation • 30 May 2023 • Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Shuming Shi, Zhaopeng Tu

To address the DoT problem, we propose a Multi-Agent Debate (MAD) framework, in which multiple agents express their arguments in the state of "tit for tat" and a judge manages the debate process to obtain a final solution.

Arithmetic Reasoning · Machine Translation
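
The debate loop can be sketched as follows; the llm() helper and all prompts are placeholders rather than the released implementation:

```python
def multi_agent_debate(question, llm, max_rounds=3):
    history = []
    for _ in range(max_rounds):
        pro = llm(f"Affirmative debater: argue your answer to '{question}'. "
                  f"Debate so far: {history}")
        con = llm(f"Negative debater: rebut the affirmative tit for tat: {pro}. "
                  f"Debate so far: {history}")
        history.append((pro, con))
        verdict = llm(f"Judge: given the debate {history}, output the final "
                      f"answer to '{question}', or CONTINUE if unresolved.")
        if "CONTINUE" not in verdict:
            return verdict  # the judge extracted a final solution
    return llm(f"Judge: summarize the best final answer from {history}.")
```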

Cross-modality Data Augmentation for End-to-End Sign Language Translation

1 code implementation • 18 May 2023 • Jinhui Ye, Wenxiang Jiao, Xing Wang, Zhaopeng Tu, Hui Xiong

To tackle these challenges, we propose a novel Cross-modality Data Augmentation (XmDA) framework to transfer the powerful gloss-to-text translation capabilities to end-to-end sign language translation (i.e., video-to-text) by exploiting pseudo gloss-text pairs from the sign gloss translation model.

Data Augmentation · Knowledge Distillation +3
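
One ingredient above, pseudo gloss-text pairs from a sign gloss translation model, can be sketched in a few lines; gloss2text() stands in for a hypothetical trained gloss-to-text model:

```python
def make_pseudo_pairs(video_gloss_corpus, gloss2text):
    """Pair sign videos with machine-translated text from their glosses."""
    pairs = []
    for video, gloss in video_gloss_corpus:
        pseudo_text = gloss2text(gloss)     # distill gloss-to-text knowledge
        pairs.append((video, pseudo_text))  # extra supervision for video-to-text SLT
    return pairs
```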

Exploring Human-Like Translation Strategy with Large Language Models

2 code implementations • 6 May 2023 • Zhiwei He, Tian Liang, Wenxiang Jiao, Zhuosheng Zhang, Yujiu Yang, Rui Wang, Zhaopeng Tu, Shuming Shi, Xing Wang

Compared to typical machine translation that focuses solely on source-to-target mapping, LLM-based translation can potentially mimic the human translation process which might take preparatory steps to ensure high-quality translation.

Hallucination · Machine Translation +2

ParroT: Translating during Chat using Large Language Models tuned with Human Translation and Feedback

1 code implementation • 5 Apr 2023 • Wenxiang Jiao, Jen-tse Huang, Wenxuan Wang, Zhiwei He, Tian Liang, Xing Wang, Shuming Shi, Zhaopeng Tu

Therefore, we propose ParroT, a framework to enhance and regulate the translation abilities during chat based on open-source LLMs (e.g., LLaMA), human-written translation and feedback data.

Instruction Following · Machine Translation +1

ChatGPT or Grammarly? Evaluating ChatGPT on Grammatical Error Correction Benchmark

no code implementations • 15 Mar 2023 • Haoran Wu, Wenxuan Wang, Yuxuan Wan, Wenxiang Jiao, Michael Lyu

ChatGPT is a cutting-edge artificial intelligence language model developed by OpenAI, which has attracted a lot of attention due to its surprisingly strong ability in answering follow-up questions.

Grammatical Error Correction · Language Modeling +2

Is ChatGPT A Good Translator? Yes With GPT-4 As The Engine

1 code implementation • 20 Jan 2023 • Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Xing Wang, Shuming Shi, Zhaopeng Tu

By evaluating on a number of benchmark test sets, we find that ChatGPT performs competitively with commercial translation products (e.g., Google Translate) on high-resource European languages but lags behind significantly on low-resource or distant languages.

Machine Translation · Sentence +1

Adapters for Enhanced Modeling of Multilingual Knowledge and Text

1 code implementation • 24 Oct 2022 • Yifan Hou, Wenxiang Jiao, Meizhen Liu, Carl Allen, Zhaopeng Tu, Mrinmaya Sachan

Specifically, we introduce a lightweight adapter set to enhance MLLMs with cross-lingual entity alignment and facts from MLKGs for many languages.

Entity Alignment

Tencent's Multilingual Machine Translation System for WMT22 Large-Scale African Languages

1 code implementation • 18 Oct 2022 • Wenxiang Jiao, Zhaopeng Tu, Jiarui Li, Wenxuan Wang, Jen-tse Huang, Shuming Shi

This paper describes Tencent's multilingual machine translation systems for the WMT22 shared task on Large-Scale Machine Translation Evaluation for African Languages.

Data Augmentation · Machine Translation +1

Scaling Back-Translation with Domain Text Generation for Sign Language Gloss Translation

1 code implementation • 13 Oct 2022 • Jinhui Ye, Wenxiang Jiao, Xing Wang, Zhaopeng Tu

In this paper, to overcome this limitation, we propose a Prompt-based domain text Generation (PGEN) approach to produce large-scale in-domain spoken language text data.

Language Modelling · Text Generation +1
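
A toy sketch of prompt-based in-domain text generation in the spirit of PGEN; the prompt template and the lm_generate() helper are assumptions, not the paper's implementation:

```python
import random

def pgen_corpus(seed_sentences, lm_generate, n_samples=1000, k=3):
    """Grow an in-domain spoken-language corpus by prompting a language model."""
    corpus = []
    for _ in range(n_samples):
        prompt = ("Write another sentence in the same spoken style:\n"
                  + "\n".join(random.sample(seed_sentences, k)) + "\n")
        corpus.append(lm_generate(prompt))  # sampled in-domain sentence
    return corpus
```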

Understanding and Mitigating the Uncertainty in Zero-Shot Translation

no code implementations • 20 May 2022 • Wenxuan Wang, Wenxiang Jiao, Shuo Wang, Zhaopeng Tu, Michael R. Lyu

Zero-shot translation is a promising direction for building a comprehensive multilingual neural machine translation (MNMT) system.

Machine Translation Translation

Understanding and Improving Sequence-to-Sequence Pretraining for Neural Machine Translation

no code implementations • ACL 2022 • Wenxuan Wang, Wenxiang Jiao, Yongchang Hao, Xing Wang, Shuming Shi, Zhaopeng Tu, Michael Lyu

In this paper, we present a substantial step in better understanding the SOTA sequence-to-sequence (Seq2Seq) pretraining for neural machine translation (NMT).

Decoder · Machine Translation +2

Self-Training Sampling with Monolingual Data Uncertainty for Neural Machine Translation

1 code implementation • ACL 2021 • Wenxiang Jiao, Xing Wang, Zhaopeng Tu, Shuming Shi, Michael R. Lyu, Irwin King

In this work, we propose to improve the sampling procedure by selecting the most informative monolingual sentences to complement the parallel data.

Machine Translation · NMT +1
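
The selection step can be sketched as ranking monolingual sentences by model uncertainty; token_entropies() is a hypothetical helper, and the paper's actual informativeness measure may differ:

```python
def select_informative(sentences, model, budget):
    """Keep the monolingual sentences the current NMT model is least sure about."""
    scored = []
    for sent in sentences:
        ents = model.token_entropies(sent)            # per-token predictive entropy
        scored.append((sum(ents) / len(ents), sent))  # mean entropy as uncertainty
    scored.sort(key=lambda x: x[0], reverse=True)     # most uncertain first
    return [sent for _, sent in scored[:budget]]
```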

Multi-Task Learning with Shared Encoder for Non-Autoregressive Machine Translation

1 code implementation • NAACL 2021 • Yongchang Hao, Shilin He, Wenxiang Jiao, Zhaopeng Tu, Michael Lyu, Xing Wang

In addition, experimental results demonstrate that our Multi-Task NAT is complementary to knowledge distillation, the standard knowledge transfer method for NAT.

Knowledge Distillation · Machine Translation +2

Data Rejuvenation: Exploiting Inactive Training Examples for Neural Machine Translation

1 code implementation • EMNLP 2020 • Wenxiang Jiao, Xing Wang, Shilin He, Irwin King, Michael R. Lyu, Zhaopeng Tu

First, we train an identification model on the original training data, and use it to distinguish inactive examples and active examples by their sentence-level output probabilities.

Machine Translation · NMT +2
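
The identification step above could look like the following sketch, where seq_logprob() is a hypothetical sentence-level scorer and the threshold would be tuned on held-out data:

```python
def split_by_activity(parallel_data, ident_model, threshold):
    """Separate active from inactive examples by sentence-level output probability."""
    active, inactive = [], []
    for src, tgt in parallel_data:
        # length-normalized log-probability of the target given the source
        score = ident_model.seq_logprob(src, tgt) / max(1, len(tgt.split()))
        (active if score >= threshold else inactive).append((src, tgt))
    return active, inactive
```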

Real-Time Emotion Recognition via Attention Gated Hierarchical Memory Network

1 code implementation • 20 Nov 2019 • Wenxiang Jiao, Michael R. Lyu, Irwin King

We propose an Attention Gated Hierarchical Memory Network (AGHMN) to address the problems of prior work: (1) Commonly used convolutional neural networks (CNNs) for utterance feature extraction are less compatible in the memory modules; (2) Unidirectional gated recurrent units (GRUs) only allow each historical utterance to have context before it, preventing information propagation in the opposite direction; (3) The Soft Attention for summarizing loses the positional and ordering information of memories, regardless of how the memory bank is built.

Emotion Recognition in Conversation
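
A toy PyTorch sketch of an attention-gated memory readout in the spirit of AGHMN; the dimensions and the gated fusion rule are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class AttnGatedMemory(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.mem_rnn = nn.GRU(d, d, bidirectional=True, batch_first=True)  # BiGRU memory bank
        self.proj = nn.Linear(2 * d, d)
        self.gate = nn.Linear(2 * d, d)

    def forward(self, query, history):  # query: (B, d); history: (B, T, d)
        mem, _ = self.mem_rnn(history)               # bidirectional contextual memories, (B, T, 2d)
        mem = self.proj(mem)                         # project back to (B, T, d)
        attn = torch.softmax(mem @ query.unsqueeze(-1), dim=1)  # attention over memories, (B, T, 1)
        summary = (attn * mem).sum(dim=1)            # attended memory summary, (B, d)
        g = torch.sigmoid(self.gate(torch.cat([query, summary], dim=-1)))
        return g * summary + (1 - g) * query         # gated fusion of memory and query
```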

Improving Word Representations: A Sub-sampled Unigram Distribution for Negative Sampling

no code implementations • 21 Oct 2019 • Wenxiang Jiao, Irwin King, Michael R. Lyu

Word2Vec is the most popular model for word representation and has been widely investigated in the literature.

Sentence · Sentence Completion
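
The interaction between word2vec's sub-sampling and its unigram negative-sampling table can be sketched numerically; the exponent 0.75 and threshold 1e-4 are the standard word2vec defaults, which may differ from the paper's final choice:

```python
import numpy as np

def subsampled_unigram(counts, t=1e-4, power=0.75):
    """Fold the sub-sampling keep-probability into the unigram noise distribution."""
    counts = np.asarray(counts, dtype=float)
    freq = counts / counts.sum()
    keep = np.minimum(1.0, np.sqrt(t / freq))  # frequent words survive sub-sampling less
    dist = (freq * keep) ** power              # reshaped negative-sampling weights
    return dist / dist.sum()

print(subsampled_unigram([5000, 300, 40, 7]))  # rare words gain relative probability mass
```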

PT-CoDE: Pre-trained Context-Dependent Encoder for Utterance-level Emotion Recognition

1 code implementation • 20 Oct 2019 • Wenxiang Jiao, Michael R. Lyu, Irwin King

Witnessing the success of transfer learning in natural language processing (NLP), we propose to pre-train a context-dependent encoder (CoDE) for ULER by learning from unlabeled conversation data.

Emotion Recognition · Sentence +3

HiGRU: Hierarchical Gated Recurrent Units for Utterance-level Emotion Recognition

1 code implementation • NAACL 2019 • Wenxiang Jiao, Haiqin Yang, Irwin King, Michael R. Lyu

In this paper, we address three challenges in utterance-level emotion recognition in dialogue systems: (1) the same word can deliver different emotions in different contexts; (2) some emotions are rarely seen in general dialogues; (3) long-range contextual information is hard to be effectively captured.

Emotion Recognition
