Search Results for author: Kai-Wei Chang

Found 224 papers, 126 papers with code

Generating Natural Language Adversarial Examples

5 code implementations EMNLP 2018 Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, Kai-Wei Chang

Deep neural networks (DNNs) are vulnerable to adversarial examples, perturbations to correctly classified examples which can cause the model to misclassify.

Natural Language Inference Sentiment Analysis
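
The paper above searches for such perturbations with a black-box, population-based genetic algorithm over synonym substitutions. The sketch below illustrates only the underlying idea with a much simpler greedy search; `classify` and `SYNONYMS` are toy stand-ins, not the paper's attack.

```python
# Minimal greedy synonym-substitution sketch (not the paper's genetic algorithm).
SYNONYMS = {"good": ["decent", "fine"], "terrible": ["poor", "bad"]}

def classify(tokens):
    # Stand-in for a sentiment model: predicts positive (1) iff "good" appears.
    return 1 if "good" in tokens else 0

def greedy_attack(tokens, target_label):
    tokens = list(tokens)
    for i, tok in enumerate(tokens):
        for sub in SYNONYMS.get(tok, []):
            candidate = tokens[:i] + [sub] + tokens[i + 1:]
            if classify(candidate) == target_label:
                return candidate  # prediction flipped with a single substitution
    return None

print(greedy_attack(["the", "movie", "was", "good"], target_label=0))
```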

Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning

2 code implementations29 Sep 2022 Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, Ashwin Kalyan

However, it is unknown if the models can handle more complex problems that involve math reasoning over heterogeneous information, such as tabular data.

Logical Reasoning Math +1
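
The title refers to learning, via policy gradient, which in-context examples to place in the prompt. A minimal REINFORCE-style sketch of that general idea follows; the candidate pool, reward, and `solve_with_example` stub are hypothetical and far simpler than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
num_candidates = 5
scores = np.zeros(num_candidates)  # policy logits over candidate in-context examples
lr = 0.5

def solve_with_example(example_id):
    # Stub for "prompt the model with this example and check the answer";
    # here example 3 is secretly the most helpful one.
    return 1.0 if example_id == 3 else float(rng.random() < 0.2)

for _ in range(200):
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    a = rng.choice(num_candidates, p=probs)  # sample an example to include
    reward = solve_with_example(a)           # 1.0 if the downstream answer was correct
    grad = -probs.copy()
    grad[a] += 1.0                           # gradient of log pi(a) w.r.t. the logits
    scores += lr * reward * grad             # REINFORCE update

probs = np.exp(scores - scores.max()); probs /= probs.sum()
print("learned selection probabilities:", np.round(probs, 2))
```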

Grounded Language-Image Pre-training

2 code implementations CVPR 2022 Liunian Harold Li, Pengchuan Zhang, Haotian Zhang, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, Kai-Wei Chang, Jianfeng Gao

The unification brings two benefits: 1) it allows GLIP to learn from both detection and grounding data to improve both tasks and bootstrap a good grounding model; 2) GLIP can leverage massive image-text pairs by generating grounding boxes in a self-training fashion, making the learned representation semantic-rich.

Few-Shot Object Detection Zero-Shot Object Detection

Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models

1 code implementation NeurIPS 2023 Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Jianfeng Gao

At the heart of Chameleon is an LLM-based planner that assembles a sequence of tools to execute to generate the final response.

Logical Reasoning
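
A toy version of the plug-and-play pattern described above: a planner (here a stubbed `call_llm`) proposes a sequence of tools, which are executed in order to produce the final response. The tool names and the stub are illustrative assumptions, not the paper's actual modules.

```python
# Toy compositional-reasoning loop in the spirit of an LLM-based tool planner.
TOOLS = {
    "search": lambda query, ctx: ctx + ["search results for: " + query],
    "calculator": lambda query, ctx: ctx + ["42"],
    "answer": lambda query, ctx: ctx + ["final answer: " + ctx[-1]],
}

def call_llm(prompt):
    # Stand-in planner: a real system would query an LLM for a tool program.
    return ["search", "calculator", "answer"]

def run(query):
    plan = call_llm(f"Plan tools for: {query}")  # 1) planner proposes a tool chain
    context = []
    for tool in plan:                            # 2) execute the tools in order
        context = TOOLS[tool](query, context)
    return context[-1]

print(run("What is 6 times 7?"))
```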

Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering

1 code implementation20 Sep 2022 Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, Ashwin Kalyan

We further design language models to learn to generate lectures and explanations as the chain of thought (CoT) to mimic the multi-hop reasoning process when answering ScienceQA questions.

Multimodal Deep Learning Multimodal Reasoning +5
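
As a small illustration of asking a model for a lecture and explanation before its answer (the chain-of-thought setup described above), the helper below assembles such a prompt; the template wording is hypothetical, not the official ScienceQA format.

```python
def build_cot_prompt(question, choices, lecture_first=True):
    """Assemble a prompt that requests a lecture and an explanation (the chain
    of thought) before the final answer. Wording is illustrative only."""
    opts = " ".join(f"({chr(65 + i)}) {c}" for i, c in enumerate(choices))
    request = ("First give a short lecture on the relevant concept, then an "
               "explanation, then the answer." if lecture_first
               else "Give the answer, then explain it.")
    return f"Question: {question}\nOptions: {opts}\n{request}"

print(build_cot_prompt("Which material is an electrical conductor?", ["rubber", "copper"]))
```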

GPT-GNN: Generative Pre-Training of Graph Neural Networks

2 code implementations27 Jun 2020 Ziniu Hu, Yuxiao Dong, Kuansan Wang, Kai-Wei Chang, Yizhou Sun

Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data.

Attribute Graph Generation

How Much Can CLIP Benefit Vision-and-Language Tasks?

4 code implementations13 Jul 2021 Sheng Shen, Liunian Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Zhewei Yao, Kurt Keutzer

Most existing Vision-and-Language (V&L) models rely on pre-trained visual encoders, using a relatively small set of manually-annotated data (as compared to web-crawled data), to perceive the visual world.

Ranked #4 on Vision and Language Navigation on RxR (using extra training data)

Question Answering Vision and Language Navigation +2

A Survey of Deep Learning for Mathematical Reasoning

1 code implementation20 Dec 2022 Pan Lu, Liang Qiu, Wenhao Yu, Sean Welleck, Kai-Wei Chang

Mathematical reasoning is a fundamental aspect of human intelligence and is applicable in various fields, including science, engineering, finance, and everyday life.

Math Mathematical Reasoning

Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond

5 code implementations NeurIPS 2020 Kaidi Xu, Zhouxing Shi, Huan Zhang, Yihan Wang, Kai-Wei Chang, Minlie Huang, Bhavya Kailkhura, Xue Lin, Cho-Jui Hsieh

Linear relaxation based perturbation analysis (LiRPA) for neural networks, which computes provable linear bounds of output neurons given a certain amount of input perturbation, has become a core component in robustness verification and certified defense.

Quantization
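
For intuition only, the snippet below propagates interval bounds through a single linear+ReLU layer, conveying the idea of certified output ranges under bounded input perturbation; LiRPA-style linear relaxations are substantially tighter and more general than this crude interval arithmetic.

```python
import numpy as np

def interval_bounds(W, b, lo, hi):
    """Propagate elementwise input intervals [lo, hi] through x -> relu(W @ x + b)."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    return np.maximum(out_lo, 0.0), np.maximum(out_hi, 0.0)  # ReLU is monotone

W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.1, -0.2])
x = np.array([1.0, 1.0])
eps = 0.1  # L-infinity perturbation radius
lo, hi = interval_bounds(W, b, x - eps, x + eps)
print("certified output range per neuron:", list(zip(lo.round(3), hi.round(3))))
```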

Agent Lumos: Unified and Modular Training for Open-Source Language Agents

1 code implementation9 Nov 2023 Da Yin, Faeze Brahman, Abhilasha Ravichander, Khyathi Chandu, Kai-Wei Chang, Yejin Choi, Bill Yuchen Lin

To foster generalizable agent learning, we collect large-scale, unified, and high-quality training annotations derived from diverse ground-truth reasoning rationales across various complex interactive tasks.

Math Question Answering

Unified Pre-training for Program Understanding and Generation

1 code implementation NAACL 2021 Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang

Experiments on code summarization in the English language, code generation, and code translation in seven programming languages show that PLBART outperforms or rivals state-of-the-art models.

Clone Detection Code Summarization +6

Codec-SUPERB: An In-Depth Analysis of Sound Codec Models

1 code implementation20 Feb 2024 Haibin Wu, Ho-Lam Chung, Yi-Cheng Lin, Yuan-Kuei Wu, Xuanjun Chen, Yu-Chi Pai, Hsiu-Hsuan Wang, Kai-Wei Chang, Alexander H. Liu, Hung-Yi Lee

The sound codec's dual roles in minimizing data transmission latency and serving as tokenizers underscore its critical importance.

Multi-Task Learning for Document Ranking and Query Suggestion

1 code implementation ICLR 2018 Wasi Uddin Ahmad, Kai-Wei Chang, Hongning Wang

We propose a multi-task learning framework to jointly learn document ranking and query suggestion for web search.

Document Ranking Multi-Task Learning

Context Attentive Document Ranking and Query Suggestion

5 code implementations5 Jun 2019 Wasi Uddin Ahmad, Kai-Wei Chang, Hongning Wang

We present a context-aware neural ranking model to exploit users' on-task search activities and enhance retrieval performance.

Document Ranking Retrieval

SpeechPrompt: An Exploration of Prompt Tuning on Generative Spoken Language Model for Speech Processing Tasks

1 code implementation31 Mar 2022 Kai-Wei Chang, Wei-Cheng Tseng, Shang-Wen Li, Hung-Yi Lee

In this paper, we report the first exploration of the prompt tuning paradigm for speech processing tasks based on the Generative Spoken Language Model (GSLM).

Language Modelling Self-Supervised Learning
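
The general prompt-tuning recipe keeps the pre-trained model frozen and learns only a short sequence of prompt vectors prepended to the input. The sketch below applies that recipe to a tiny randomly initialized GRU standing in for a pre-trained speech LM; it is a schematic assumption, not the paper's GSLM pipeline.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
embed_dim, prompt_len, num_classes = 16, 5, 3

# Frozen "backbone" and head standing in for a pre-trained generative speech LM.
backbone = nn.GRU(embed_dim, embed_dim, batch_first=True)
head = nn.Linear(embed_dim, num_classes)
for p in list(backbone.parameters()) + list(head.parameters()):
    p.requires_grad_(False)

# The only trainable parameters: a short sequence of prompt vectors.
prompt = nn.Parameter(torch.randn(1, prompt_len, embed_dim) * 0.02)
opt = torch.optim.Adam([prompt], lr=1e-2)

x = torch.randn(8, 20, embed_dim)        # fake input features (batch, time, dim)
y = torch.randint(0, num_classes, (8,))  # fake task labels

for _ in range(10):
    inp = torch.cat([prompt.expand(x.size(0), -1, -1), x], dim=1)
    out, _ = backbone(inp)
    loss = nn.functional.cross_entropy(head(out[:, -1]), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final loss:", round(loss.item(), 4))
```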

Mitigating Gender Bias Amplification in Distribution by Posterior Regularization

1 code implementation ACL 2020 Shengyu Jia, Tao Meng, Jieyu Zhao, Kai-Wei Chang

With little performance loss, our method can almost remove the bias amplification in the distribution.

Dynamic-SUPERB: Towards A Dynamic, Collaborative, and Comprehensive Instruction-Tuning Benchmark for Speech

1 code implementation18 Sep 2023 Chien-yu Huang, Ke-Han Lu, Shih-Heng Wang, Chi-Yuan Hsiao, Chun-Yi Kuan, Haibin Wu, Siddhant Arora, Kai-Wei Chang, Jiatong Shi, Yifan Peng, Roshan Sharma, Shinji Watanabe, Bhiksha Ramakrishnan, Shady Shehata, Hung-Yi Lee

To achieve comprehensive coverage of diverse speech tasks and harness instruction tuning, we invite the community to collaborate and contribute, facilitating the dynamic growth of the benchmark.

DEGREE: A Data-Efficient Generation-Based Event Extraction Model

2 code implementations NAACL 2022 I-Hung Hsu, Kuan-Hao Huang, Elizabeth Boschee, Scott Miller, Prem Natarajan, Kai-Wei Chang, Nanyun Peng

Given a passage and a manually designed prompt, DEGREE learns to summarize the events mentioned in the passage into a natural sentence that follows a predefined pattern.

Event Extraction Sentence +2

The Woman Worked as a Babysitter: On Biases in Language Generation

1 code implementation IJCNLP 2019 Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng

We present a systematic study of biases in natural language generation (NLG) by analyzing text generated from prompts that contain mentions of different demographic groups.

Language Modelling Text Generation +1

Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation

1 code implementation23 May 2023 Da Yin, Xiao Liu, Fan Yin, Ming Zhong, Hritik Bansal, Jiawei Han, Kai-Wei Chang

Instruction tuning has emerged to enhance the capabilities of large language models (LLMs) to comprehend instructions and generate appropriate responses.

Continual Learning

Few-Shot Representation Learning for Out-Of-Vocabulary Words

1 code implementation ACL 2019 Ziniu Hu, Ting Chen, Kai-Wei Chang, Yizhou Sun

Existing approaches for learning word embeddings often assume there are sufficient occurrences for each word in the corpus, such that the representation of words can be accurately estimated from their contexts.

Learning Word Embeddings Meta-Learning +1

AVATAR: A Parallel Corpus for Java-Python Program Translation

1 code implementation26 Aug 2021 Wasi Uddin Ahmad, Md Golam Rahman Tushar, Saikat Chakraborty, Kai-Wei Chang

Automating program translation is of paramount importance in software migration, and recently researchers explored unsupervised approaches due to the unavailability of parallel corpora.

Translation

GATE: Graph Attention Transformer Encoder for Cross-lingual Relation and Event Extraction

1 code implementation6 Oct 2020 Wasi Uddin Ahmad, Nanyun Peng, Kai-Wei Chang

Recent progress in cross-lingual relation and event extraction use graph convolutional networks (GCNs) with universal dependency parses to learn language-agnostic sentence representations such that models trained on one language can be applied to other languages.

Event Extraction Graph Attention +2

VideoCon: Robust Video-Language Alignment via Contrast Captions

1 code implementation15 Nov 2023 Hritik Bansal, Yonatan Bitton, Idan Szpektor, Kai-Wei Chang, Aditya Grover

Despite being (pre)trained on a massive amount of data, state-of-the-art video-language alignment models are not robust to semantically-plausible contrastive changes in the video captions.

Language Modelling Large Language Model +5

Learning Gender-Neutral Word Embeddings

1 code implementation EMNLP 2018 Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, Kai-Wei Chang

Word embedding models have become a fundamental component in a wide range of Natural Language Processing (NLP) applications.

Word Embeddings

Generating Syntactically Controlled Paraphrases without Using Annotated Parallel Pairs

1 code implementation EACL 2021 Kuan-Hao Huang, Kai-Wei Chang

We also demonstrate that the performance of SynPG is competitive or even better than supervised models when the unannotated data is large.

Data Augmentation Disentanglement +2

Retrieval Augmented Code Generation and Summarization

1 code implementation Findings (EMNLP) 2021 Md Rizwan Parvez, Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang

To mimic developers' code or summary generation behavior, we propose a retrieval augmented framework, REDCODER, that retrieves relevant code or summaries from a retrieval database and provides them as a supplement to code generation or summarization models.

 Ranked #1 on Code Generation on CodeXGLUE - CodeSearchNet (using extra training data)

Code Generation Code Summarization +1
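
A minimal sketch of the retrieve-then-generate idea, assuming a toy snippet database, TF-IDF retrieval, and a stubbed generator: retrieved code is simply prepended to the generation prompt. The actual framework uses trained retrieval and generation models over much larger databases.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy retrieval database of code snippets.
database = [
    "def add(a, b): return a + b",
    "def read_json(path): import json; return json.load(open(path))",
    "def reverse(s): return s[::-1]",
]

def retrieve(query, k=1):
    vec = TfidfVectorizer().fit(database + [query])
    sims = cosine_similarity(vec.transform([query]), vec.transform(database))[0]
    return [database[i] for i in sims.argsort()[::-1][:k]]

def generate(prompt):
    return "# (stub) a real generator would complete: " + prompt.splitlines()[-1]

query = "reverse a string"
prompt = "\n".join(retrieve(query)) + "\n# Task: " + query
print(generate(prompt))
```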

Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations

2 code implementations ICCV 2019 Tianlu Wang, Jieyu Zhao, Mark Yatskar, Kai-Wei Chang, Vicente Ordonez

In this work, we present a framework to measure and mitigate intrinsic biases with respect to protected variables --such as gender-- in visual recognition tasks.

Temporal Action Localization

What's "up" with vision-language models? Investigating their struggle with spatial reasoning

1 code implementation30 Oct 2023 Amita Kamath, Jack Hessel, Kai-Wei Chang

Recent vision-language (VL) models are powerful, but can they reliably distinguish "right" from "left"?

IdealGPT: Iteratively Decomposing Vision and Language Reasoning via Large Language Models

1 code implementation24 May 2023 Haoxuan You, Rui Sun, Zhecan Wang, Long Chen, Gengyu Wang, Hammad A. Ayyubi, Kai-Wei Chang, Shih-Fu Chang

Specifically, IdealGPT utilizes an LLM to generate sub-questions, a VLM to provide corresponding sub-answers, and another LLM to reason to achieve the final answer.

Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution

1 code implementation EMNLP 2021 Zongyi Li, Jianhan Xu, Jiehang Zeng, Linyang Li, Xiaoqing Zheng, Qi Zhang, Kai-Wei Chang, Cho-Jui Hsieh

Recent studies have shown that deep neural networks are vulnerable to intentionally crafted adversarial examples, and various methods have been proposed to defend against adversarial word-substitution attacks for neural NLP models.

Benchmarking

TextEE: Benchmark, Reevaluation, Reflections, and Future Challenges in Event Extraction

1 code implementation16 Nov 2023 Kuan-Hao Huang, I-Hung Hsu, Tanmay Parekh, Zhiyu Xie, Zixuan Zhang, Premkumar Natarajan, Kai-Wei Chang, Nanyun Peng, Heng Ji

In this work, we identify and address evaluation challenges, including inconsistency due to varying data assumptions or preprocessing steps, the insufficiency of current evaluation frameworks that may introduce dataset or data split bias, and the low reproducibility of some previous approaches.

Benchmarking Event Extraction

Robustness Verification for Transformers

1 code implementation ICLR 2020 Zhouxing Shi, Huan Zhang, Kai-Wei Chang, Minlie Huang, Cho-Jui Hsieh

Robustness verification that aims to formally certify the prediction behavior of neural networks has become an important tool for understanding model behavior and obtaining safety guarantees.

Position Sentiment Analysis

Disentangling Semantics and Syntax in Sentence Embeddings with Pre-trained Language Models

1 code implementation NAACL 2021 James Y. Huang, Kuan-Hao Huang, Kai-Wei Chang

In this work, we present ParaBART, a semantic sentence embedding model that learns to disentangle semantics and syntax in sentence embeddings obtained by pre-trained language models.

Semantic Similarity Semantic Textual Similarity +3

Semantic Probabilistic Layers for Neuro-Symbolic Learning

1 code implementation1 Jun 2022 Kareem Ahmed, Stefano Teso, Kai-Wei Chang, Guy Van Den Broeck, Antonio Vergari

We design a predictive layer for structured-output prediction (SOP) that can be plugged into any neural network guaranteeing its predictions are consistent with a set of predefined symbolic constraints.

Hierarchical Multi-label Classification Logical Reasoning

CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning

1 code implementation ICCV 2023 Hritik Bansal, Nishad Singhi, Yu Yang, Fan Yin, Aditya Grover, Kai-Wei Chang

Multimodal contrastive pretraining has been used to train multimodal representation models, such as CLIP, on large amounts of paired image-text data.

Backdoor Attack Contrastive Learning +1
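
For reference, CLIP-style multimodal contrastive pretraining (the setting studied above) optimizes a symmetric InfoNCE objective over paired image and text embeddings. The sketch below uses random features in place of real encoder outputs.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
batch, dim, temperature = 4, 32, 0.07

# Stand-ins for image-encoder and text-encoder outputs of paired examples.
img = F.normalize(torch.randn(batch, dim), dim=-1)
txt = F.normalize(torch.randn(batch, dim), dim=-1)

logits = img @ txt.t() / temperature        # pairwise cosine similarities
targets = torch.arange(batch)               # the i-th image matches the i-th caption
loss = (F.cross_entropy(logits, targets) +  # image -> text direction
        F.cross_entropy(logits.t(), targets)) / 2  # text -> image direction
print("symmetric contrastive loss:", round(loss.item(), 4))
```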

Model Editing Can Hurt General Abilities of Large Language Models

1 code implementation9 Jan 2024 Jia-Chen Gu, Hao-Xiang Xu, Jun-Yu Ma, Pan Lu, Zhen-Hua Ling, Kai-Wei Chang, Nanyun Peng

One critical challenge that has emerged is the presence of hallucinations in the output of large language models (LLMs) due to false or outdated knowledge.

Model Editing Question Answering

On Prompt-Driven Safeguarding for Large Language Models

1 code implementation 31 Jan 2024 Chujie Zheng, Fan Yin, Hao Zhou, Fandong Meng, Jie Zhou, Kai-Wei Chang, Minlie Huang, Nanyun Peng

Prepending model inputs with safety prompts is a common practice for safeguarding large language models (LLMs) from complying with queries that contain harmful intents.

Are LLMs Capable of Data-based Statistical and Causal Reasoning? Benchmarking Advanced Quantitative Reasoning with Data

1 code implementation27 Feb 2024 Xiao Liu, Zirui Wu, Xueqing Wu, Pan Lu, Kai-Wei Chang, Yansong Feng

To address this gap, we introduce the Quantitative Reasoning with Data (QRData) benchmark, aiming to evaluate Large Language Models' capability in statistical and causal reasoning with real-world data.

Benchmarking

Pre-trained Language Models for Keyphrase Generation: A Thorough Empirical Study

1 code implementation 20 Dec 2022 Di Wu, Wasi Uddin Ahmad, Kai-Wei Chang

However, there is no systematic study of how the two types of approaches compare and how different design choices can affect the performance of PLM-based models.

Keyphrase Extraction Keyphrase Generation

On Leveraging Encoder-only Pre-trained Language Models for Effective Keyphrase Generation

1 code implementation 21 Feb 2024 Di Wu, Wasi Uddin Ahmad, Kai-Wei Chang

This study addresses the application of encoder-only Pre-trained Language Models (PLMs) in keyphrase generation (KPG) amidst the broader availability of domain-tailored encoder-only models compared to encoder-decoder models.

Keyphrase Generation

Syntax-augmented Multilingual BERT for Cross-lingual Transfer

1 code implementation ACL 2021 Wasi Uddin Ahmad, Haoran Li, Kai-Wei Chang, Yashar Mehdad

In recent years, we have seen a colossal effort in pre-training multilingual text encoders using large-scale corpora in many languages to facilitate cross-lingual transfer learning.

Cross-Lingual Transfer named-entity-recognition +7

Controllable Text Generation with Neurally-Decomposed Oracle

1 code implementation27 May 2022 Tao Meng, Sidi Lu, Nanyun Peng, Kai-Wei Chang

We propose a general and efficient framework to control auto-regressive generation models with NeurAlly-Decomposed Oracle (NADO).

Language Modelling Machine Translation +1
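
The generic idea behind steering an autoregressive model with a token-level auxiliary signal can be illustrated by reweighting the base model's next-token distribution with an attribute score and renormalizing, as below. This is a simplified stand-in, not NADO's actual decomposition or training procedure, and the oracle scores are invented.

```python
import numpy as np

vocab = ["the", "cat", "stocks", "soared", "purred"]
base_p = np.array([0.4, 0.2, 0.2, 0.1, 0.1])  # base LM next-token probabilities

def oracle(token):
    # Auxiliary score: how likely the continuation satisfies a "finance" constraint.
    return {"stocks": 0.9, "soared": 0.8}.get(token, 0.1)

weights = np.array([oracle(t) for t in vocab])
controlled = base_p * weights
controlled /= controlled.sum()                # renormalized next-token distribution
for tok, p0, p1 in zip(vocab, base_p, controlled):
    print(f"{tok:8s} base={p0:.2f} controlled={p1:.2f}")
```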

Learning to Discriminate Perturbations for Blocking Adversarial Attacks in Text Classification

1 code implementation IJCNLP 2019 Yichao Zhou, Jyun-Yu Jiang, Kai-Wei Chang, Wei Wang

To identify adversarial attacks, a perturbation discriminator validates how likely a token in the text is perturbed and provides a set of potential perturbations.

Blocking General Classification +3

Defense against Adversarial Attacks in NLP via Dirichlet Neighborhood Ensemble

1 code implementation20 Jun 2020 Yi Zhou, Xiaoqing Zheng, Cho-Jui Hsieh, Kai-Wei Chang, Xuanjing Huang

Although neural networks have achieved prominent performance on many natural language processing (NLP) tasks, they are vulnerable to adversarial examples.

Sentence

Defense against Synonym Substitution-based Adversarial Attacks via Dirichlet Neighborhood Ensemble

1 code implementation ACL 2021 Yi Zhou, Xiaoqing Zheng, Cho-Jui Hsieh, Kai-Wei Chang, Xuanjing Huang

Although deep neural networks have achieved prominent performance on many NLP tasks, they are vulnerable to adversarial examples.

Sentence

Societal Biases in Language Generation: Progress and Challenges

1 code implementation ACL 2021 Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng

Technology for language generation has advanced rapidly, spurred by advancements in pre-training large models on massive amounts of data and the need for intelligent agents to communicate in a natural manner.

Fairness Text Generation

Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt Sensitive Tasks

1 code implementation 1 Nov 2023 Po-Nien Kung, Fan Yin, Di Wu, Kai-Wei Chang, Nanyun Peng

Instruction tuning (IT) achieves impressive zero-shot generalization results by training large language models (LLMs) on a massive amount of diverse tasks with instructions.

Informativeness Out-of-Distribution Generalization +1

How well can Text-to-Image Generative Models understand Ethical Natural Language Interventions?

1 code implementation27 Oct 2022 Hritik Bansal, Da Yin, Masoud Monajatipoor, Kai-Wei Chang

To this end, we introduce an Ethical NaTural Language Interventions in Text-to-Image GENeration (ENTIGEN) benchmark dataset to evaluate the change in image generations conditional on ethical interventions across three social axes -- gender, skin color, and culture.

Cultural Vocal Bursts Intensity Prediction Text-to-Image Generation

Text encoders bottleneck compositionality in contrastive vision-language models

1 code implementation24 May 2023 Amita Kamath, Jack Hessel, Kai-Wei Chang

We first curate CompPrompts, a set of increasingly compositional image captions that VL models should be able to capture (e.g., single object, to object+property, to multiple interacting objects).

Attribute Image Captioning +1

Gender Bias in Contextualized Word Embeddings

2 code implementations NAACL 2019 Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, Kai-Wei Chang

In this paper, we quantify, analyze and mitigate gender bias exhibited in ELMo's contextualized word vectors.

Word Embeddings
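
One common way to quantify gender bias in word representations is to project words onto a he-she direction. The toy vectors below are invented purely for illustration; the paper's analysis of ELMo's contextualized vectors is considerably more involved.

```python
import numpy as np

# Toy word vectors; in practice these would come from a trained embedding model.
vecs = {
    "he":     np.array([1.0, 0.1, 0.0]),
    "she":    np.array([-1.0, 0.1, 0.0]),
    "nurse":  np.array([-0.6, 0.8, 0.2]),
    "doctor": np.array([0.5, 0.9, 0.1]),
}

gender_dir = vecs["he"] - vecs["she"]
gender_dir /= np.linalg.norm(gender_dir)

for word in ["nurse", "doctor"]:
    v = vecs[word] / np.linalg.norm(vecs[word])
    print(word, "projection onto gender direction:", round(float(v @ gender_dir), 3))
```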

Representation Learning for Resource-Constrained Keyphrase Generation

1 code implementation 15 Mar 2022 Di Wu, Wasi Uddin Ahmad, Sunipa Dev, Kai-Wei Chang

State-of-the-art keyphrase generation methods generally depend on large annotated datasets, limiting their performance in domains with limited annotated data.

Denoising Domain Adaptation +4

GeoMLAMA: Geo-Diverse Commonsense Probing on Multilingual Pre-Trained Language Models

1 code implementation24 May 2022 Da Yin, Hritik Bansal, Masoud Monajatipoor, Liunian Harold Li, Kai-Wei Chang

In this paper, we introduce a benchmark dataset, Geo-Diverse Commonsense Multilingual Language Models Analysis (GeoMLAMA), for probing the diversity of the relational knowledge in multilingual PLMs.

Language Modelling

On the Robustness of Language Encoders against Grammatical Errors

1 code implementation ACL 2020 Fan Yin, Quanyu Long, Tao Meng, Kai-Wei Chang

We conduct a thorough study to diagnose the behaviors of pre-trained language encoders (ELMo, BERT, and RoBERTa) when confronted with natural grammatical errors.

Cloze Test Linguistic Acceptability +1

Intent Classification and Slot Filling for Privacy Policies

1 code implementation ACL 2021 Wasi Uddin Ahmad, Jianfeng Chi, Tu Le, Thomas Norton, Yuan Tian, Kai-Wei Chang

We refer to predicting the privacy practice explained in a sentence as intent classification and identifying the text spans sharing specific information as slot filling.

General Classification intent-classification +3

PLUE: Language Understanding Evaluation Benchmark for Privacy Policies in English

1 code implementation20 Dec 2022 Jianfeng Chi, Wasi Uddin Ahmad, Yuan Tian, Kai-Wei Chang

Privacy policies provide individuals with information about their rights and how their personal information is handled.

Language Modelling Natural Language Understanding

Planning as In-Painting: A Diffusion-Based Embodied Task Planning Framework for Environments under Uncertainty

1 code implementation2 Dec 2023 Cheng-Fu Yang, Haoyang Xu, Te-Lin Wu, Xiaofeng Gao, Kai-Wei Chang, Feng Gao

In this paper, we aim to tackle this problem with a unified framework consisting of an end-to-end trainable method and a planning algorithm.

Denoising Vision-Language Navigation

PolicyQA: A Reading Comprehension Dataset for Privacy Policies

1 code implementation Findings of the Association for Computational Linguistics 2020 Wasi Uddin Ahmad, Jianfeng Chi, Yuan Tian, Kai-Wei Chang

Prior studies in this domain frame the QA task as retrieving the most relevant text segment or a list of sentences from the policy document given a question.

Question Answering Reading Comprehension

"Nice Try, Kiddo": Investigating Ad Hominems in Dialogue Responses

1 code implementation24 Oct 2020 Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng

Ad hominem attacks are those that target some feature of a person's character instead of the position the person is maintaining.

Abusive Language

Red Teaming Language Model Detectors with Language Models

2 code implementations31 May 2023 Zhouxing Shi, Yihan Wang, Fan Yin, Xiangning Chen, Kai-Wei Chang, Cho-Jui Hsieh

The prevalence and strong capability of large language models (LLMs) present significant safety and ethical risks if exploited by malicious users.

Adversarial Robustness Language Modelling +2

UniFine: A Unified and Fine-grained Approach for Zero-shot Vision-Language Understanding

1 code implementation3 Jul 2023 Rui Sun, Zhecan Wang, Haoxuan You, Noel Codella, Kai-Wei Chang, Shih-Fu Chang

However, we find that visual and textual fine-grained information, e.g., keywords in the sentence and objects in the image, can be fairly informative for semantic understanding.

Image-text matching Sentence +2

BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation

1 code implementation27 Jan 2021 Jwala Dhamala, Tony Sun, Varun Kumar, Satyapriya Krishna, Yada Pruksachatkun, Kai-Wei Chang, Rahul Gupta

To systematically study and benchmark social biases in open-ended language generation, we introduce the Bias in Open-Ended Language Generation Dataset (BOLD), a large-scale dataset that consists of 23,679 English text generation prompts for bias benchmarking across five domains: profession, gender, race, religion, and political ideology.

Benchmarking Text Generation
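
The evaluation recipe such a dataset supports looks roughly like the sketch below: generate continuations for prompts grouped by domain or demographic and compare an automatic score across groups. The prompts, generator stub, and sentiment scorer here are all placeholders, not BOLD data or its official metrics.

```python
from statistics import mean

prompts = {
    "group_A": ["An engineer said that", "The engineer worked on"],
    "group_B": ["A nurse said that", "The nurse worked on"],
}

def generate(prompt):
    return prompt + " ... (stub continuation)"   # stand-in for a language model

def sentiment(text):
    return 0.6 if "engineer" in text else 0.5    # stand-in for a real classifier

scores = {g: mean(sentiment(generate(p)) for p in ps) for g, ps in prompts.items()}
print("mean sentiment per group:", scores)       # large gaps hint at possible bias
```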

Generating Sports News from Live Commentary: A Chinese Dataset for Sports Game Summarization

1 code implementation Asian Chapter of the Association for Computational Linguistics 2020 Kuan-Hao Huang, Chen Li, Kai-Wei Chang

To deeply study this task, we present SportsSum, a Chinese sports game summarization dataset which contains 5,428 soccer games of live commentaries and the corresponding news articles.

LACMA: Language-Aligning Contrastive Learning with Meta-Actions for Embodied Instruction Following

1 code implementation18 Oct 2023 Cheng-Fu Yang, Yen-Chun Chen, Jianwei Yang, Xiyang Dai, Lu Yuan, Yu-Chiang Frank Wang, Kai-Wei Chang

Additional analysis shows that the contrastive objective and meta-actions are complementary in achieving the best results, and the resulting agent better aligns its states with corresponding instructions, making it more suitable for real-world embodied agents.

Contrastive Learning Instruction Following

Cross-lingual Dependency Parsing with Unlabeled Auxiliary Languages

1 code implementation CONLL 2019 Wasi Uddin Ahmad, Zhisong Zhang, Xuezhe Ma, Kai-Wei Chang, Nanyun Peng

We conduct experiments on cross-lingual dependency parsing where we train a dependency parser on a source language and transfer it to a wide range of target languages.

Cross-Lingual Transfer Dependency Parsing +2

On the Sensitivity and Stability of Model Interpretations in NLP

1 code implementation ACL 2022 Fan Yin, Zhouxing Shi, Cho-Jui Hsieh, Kai-Wei Chang

We propose two new criteria, sensitivity and stability, that provide complementary notions of faithfulness to the existed removal-based criteria.

Adversarial Robustness Dependency Parsing +2

GENEVA: Benchmarking Generalizability for Event Argument Extraction with Hundreds of Event Types and Argument Roles

1 code implementation25 May 2022 Tanmay Parekh, I-Hung Hsu, Kuan-Hao Huang, Kai-Wei Chang, Nanyun Peng

We utilize this ontology to further introduce GENEVA, a diverse generalizability benchmarking dataset comprising four test suites, aimed at evaluating models' ability to handle limited data and unseen event type generalization.

Benchmarking Event Argument Extraction +1

Improving the Adversarial Robustness of NLP Models by Information Bottleneck

1 code implementation Findings (ACL) 2022 Cenyuan Zhang, Xiang Zhou, Yixin Wan, Xiaoqing Zheng, Kai-Wei Chang, Cho-Jui Hsieh

Existing studies have demonstrated that adversarial examples can be directly attributed to the presence of non-robust features, which are highly predictive, but can be easily manipulated by adversaries to fool NLP models.

Adversarial Robustness SST-2

"Kelly is a Warm Person, Joseph is a Role Model": Gender Biases in LLM-Generated Reference Letters

1 code implementation13 Oct 2023 Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng

Through benchmarking evaluation on two popular LLMs, ChatGPT and Alpaca, we reveal significant gender biases in LLM-generated recommendation letters.

Benchmarking Fairness +1

KPEval: Towards Fine-Grained Semantic-Based Keyphrase Evaluation

1 code implementation 27 Mar 2023 Di Wu, Da Yin, Kai-Wei Chang

Despite the significant advancements in keyphrase extraction and keyphrase generation methods, the predominant approach for evaluation mainly relies on exact matching with human references.

Keyphrase Extraction Keyphrase Generation

DeepEdit: Knowledge Editing as Decoding with Constraints

1 code implementation19 Jan 2024 Yiwei Wang, Muhao Chen, Nanyun Peng, Kai-Wei Chang

We propose DeepEdit (Depth-first Search based Progressive Decoding for Knowledge Editing), a neuro-symbolic method that improves knowledge editing with better coherence of reasoning, relevance to the question, and awareness of updated knowledge.

Informativeness knowledge editing +2

Target Language-Aware Constrained Inference for Cross-lingual Dependency Parsing

1 code implementation IJCNLP 2019 Tao Meng, Nanyun Peng, Kai-Wei Chang

Experiments show that the Lagrangian relaxation and posterior regularization inference improve the performances on 15 and 17 out of 19 target languages, respectively.

Dependency Parsing

Improving Zero-Shot Cross-Lingual Transfer Learning via Robust Training

1 code implementation EMNLP 2021 Kuan-Hao Huang, Wasi Uddin Ahmad, Nanyun Peng, Kai-Wei Chang

Pre-trained multilingual language encoders, such as multilingual BERT and XLM-R, show great potential for zero-shot cross-lingual transfer.

Sentence text-classification +4

Ethical-Advice Taker: Do Language Models Understand Natural Language Interventions?

1 code implementation Findings (ACL) 2021 Jieyu Zhao, Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Kai-Wei Chang

We investigate the effectiveness of natural language interventions for reading-comprehension systems, studying this in the context of social stereotypes.

Ethics Few-Shot Learning +2

TAGPRIME: A Unified Framework for Relational Structure Extraction

1 code implementation25 May 2022 I-Hung Hsu, Kuan-Hao Huang, Shuning Zhang, Wenxin Cheng, Premkumar Natarajan, Kai-Wei Chang, Nanyun Peng

In this work, we propose to take a unified view of all these tasks and introduce TAGPRIME to address relational structure extraction problems.

Event Argument Extraction Language Modelling +2

Visualizing Trends of Key Roles in News Articles

1 code implementation IJCNLP 2019 Chen Xia, Haoxiang Zhang, Jacob Moghtader, Allen Wu, Kai-Wei Chang

A large number of news articles are generated every day, reflecting the activities of key roles such as people, organizations, and political parties.

"The Boating Store Had Its Best Sail Ever": Pronunciation-attentive Contextualized Pun Recognition

1 code implementation29 Apr 2020 Yichao Zhou, Jyun-Yu Jiang, Jieyu Zhao, Kai-Wei Chang, Wei Wang

In this paper, we propose Pronunciation-attentive Contextualized Pun Recognition (PCPR) to perceive human humor, detect if a sentence contains puns and locate them in the sentence.

Sentence

CASA: Causality-driven Argument Sufficiency Assessment

1 code implementation10 Jan 2024 Xiao Liu, Yansong Feng, Kai-Wei Chang

Motivated by the probability of sufficiency (PS) definition in the causal literature, we propose CASA, a zero-shot causality-driven argument sufficiency assessment framework.

Logical Fallacy Detection

Learning Word Embeddings for Low-resource Languages by PU Learning

1 code implementation9 May 2018 Chao Jiang, Hsiang-Fu Yu, Cho-Jui Hsieh, Kai-Wei Chang

In such a situation, the co-occurrence matrix is sparse as the co-occurrences of many word pairs are unobserved.

An Integer Linear Programming Framework for Mining Constraints from Data

1 code implementation18 Jun 2020 Tao Meng, Kai-Wei Chang

This raises a question: can we mine constraints and rules from data based on a learning algorithm?

Multi-class Classification Multi-Label Classification

Revealing Persona Biases in Dialogue Systems

1 code implementation18 Apr 2021 Emily Sheng, Josh Arnold, Zhou Yu, Kai-Wei Chang, Nanyun Peng

Dialogue systems in the form of chatbots and personal assistants are being increasingly integrated into people's lives.

Evaluating the Values of Sources in Transfer Learning

1 code implementation NAACL 2021 Md Rizwan Parvez, Kai-Wei Chang

Transfer learning that adapts a model trained on data-rich sources to low-resource targets has been widely applied in natural language processing (NLP).

Cross-Lingual POS Tagging Transfer Learning

MiniSUPERB: Lightweight Benchmark for Self-supervised Speech Models

1 code implementation30 May 2023 Yu-Hsiang Wang, Huang-Yu Chen, Kai-Wei Chang, Winston Hsu, Hung-Yi Lee

In this paper, we introduce MiniSUPERB, a lightweight benchmark that efficiently evaluates SSL speech models, achieving results comparable to SUPERB at significantly lower computational cost.

Self-Supervised Learning

DACO: Towards Application-Driven and Comprehensive Data Analysis via Code Generation

1 code implementation4 Mar 2024 Xueqing Wu, Rui Zheng, Jingzhen Sha, Te-Lin Wu, Hanyu Zhou, Mohan Tang, Kai-Wei Chang, Nanyun Peng, Haoran Huang

We construct the DACO dataset, containing (1) 440 databases (of tabular data) collected from real-world scenarios, (2) ~2k query-answer pairs that can serve as weak supervision for model training, and (3) a concentrated but high-quality test set with human refined annotations that serves as our main evaluation benchmark.

Code Generation

Robust Text Classifier on Test-Time Budgets

1 code implementation IJCNLP 2019 Md. Rizwan Parvez, Tolga Bolukbasi, Kai-Wei Chang, Venkatesh Saligrama

We propose a generic and interpretable learning framework for building robust text classification models that achieve accuracy comparable to full models under test-time budget constraints.

General Classification text-classification +1

Towards Understanding Gender Bias in Relation Extraction

1 code implementation ACL 2020 Andrew Gaut, Tony Sun, Shirlyn Tang, Yuxin Huang, Jing Qian, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, William Yang Wang

We use WikiGenderBias to evaluate systems for bias, find that NRE systems exhibit gender-biased predictions, and lay the groundwork for future evaluation of bias in NRE.

counterfactual Data Augmentation +3

Conditional Supervised Contrastive Learning for Fair Text Classification

1 code implementation23 May 2022 Jianfeng Chi, William Shand, Yaodong Yu, Kai-Wei Chang, Han Zhao, Yuan Tian

Contrastive representation learning has gained much attention due to its superior performance in learning representations from both image and sequential data.

Contrastive Learning Fairness +3

ADDMU: Detection of Far-Boundary Adversarial Examples with Data and Model Uncertainty Estimation

1 code implementation22 Oct 2022 Fan Yin, Yao Li, Cho-Jui Hsieh, Kai-Wei Chang

Finally, our analysis shows that the two types of uncertainty provided by ADDMU can be leveraged to characterize adversarial examples and identify the ones that contribute most to the model's robustness in adversarial training.

Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems

1 code implementation8 Oct 2023 Yixin Wan, Jieyu Zhao, Aman Chadha, Nanyun Peng, Kai-Wei Chang

Recent advancements in Large Language Models empower them to follow freeform instructions, including imitating generic or specific demographic personas in conversations.

Benchmarking

Co-training Embeddings of Knowledge Graphs and Entity Descriptions for Cross-lingual Entity Alignment

no code implementations18 Jun 2018 Muhao Chen, Yingtao Tian, Kai-Wei Chang, Steven Skiena, Carlo Zaniolo

Since many multilingual KGs also provide literal descriptions of entities, in this paper, we introduce an embedding-based approach which leverages a weakly aligned multilingual KG for semi-supervised cross-lingual learning using entity descriptions.

Entity Alignment Knowledge Graphs

Counterexamples for Robotic Planning Explained in Structured Language

no code implementations23 Mar 2018 Lu Feng, Mahsa Ghasemi, Kai-Wei Chang, Ufuk Topcu

Automated techniques such as model checking have been used to verify models of robotic mission plans based on Markov decision processes (MDPs) and generate counterexamples that may help diagnose requirement violations.

Beyond Bilingual: Multi-sense Word Embeddings using Multilingual Context

no code implementations WS 2017 Shyam Upadhyay, Kai-Wei Chang, Matt Taddy, Adam Kalai, James Zou

We present a multi-view Bayesian non-parametric algorithm which improves multi-sense word embeddings by (a) using multilingual (i.e., more than two languages) corpora to significantly improve sense embeddings beyond what one achieves with bilingual information, and (b) using a principled approach to learn a variable number of senses per word in a data-driven manner.

Representation Learning Word Embeddings

Quantifying and Reducing Stereotypes in Word Embeddings

no code implementations20 Jun 2016 Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, Adam Kalai

Machine learning algorithms are optimized to model statistical properties of the training data.

Word Embeddings

A Credit Assignment Compiler for Joint Prediction

no code implementations NeurIPS 2016 Kai-Wei Chang, He He, Hal Daumé III, John Langford, Stephane Ross

Many machine learning applications involve jointly predicting multiple mutually dependent output variables.

Learning to Search Better Than Your Teacher

no code implementations8 Feb 2015 Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daumé III, John Langford

Methods for learning to search for structured prediction typically imitate a reference policy, with existing theoretical guarantees demonstrating low regret compared to that reference.

Multi-Armed Bandits Structured Prediction

Learning to Search for Dependencies

no code implementations18 Mar 2015 Kai-Wei Chang, He He, Hal Daumé III, John Langford

We demonstrate that a dependency parser can be built using a credit assignment compiler which removes the burden of worrying about low-level machine learning details from the parser implementation.

BIG-bench Machine Learning

Quantification and Analysis of Scientific Language Variation Across Research Fields

no code implementations4 Dec 2018 Pei Zhou, Muhao Chen, Kai-Wei Chang, Carlo Zaniolo

Quantifying differences in terminologies from various academic domains has been a longstanding problem yet to be solved.

Language Modelling

Efficient Contextual Representation Learning Without Softmax Layer

no code implementations28 Feb 2019 Liunian Harold Li, Patrick H. Chen, Cho-Jui Hsieh, Kai-Wei Chang

Our framework reduces the time spent on the output layer to a negligible level, eliminates almost all the trainable parameters of the softmax layer and performs language modeling without truncating the vocabulary.

Dimensionality Reduction Language Modelling +2

Dynamically Expanded CNN Array for Video Coding

no code implementations10 May 2019 Everett Fall, Kai-Wei Chang, Liang-Gee Chen

Marked progress has been made in video quality, compression, and computational efficiency.

Computational Efficiency

Pre-Training Graph Neural Networks for Generic Structural Feature Extraction

no code implementations31 May 2019 Ziniu Hu, Changjun Fan, Ting Chen, Kai-Wei Chang, Yizhou Sun

With the proposed pre-training procedure, the generic structural information is learned and preserved; thus, the pre-trained GNN requires less labeled data and fewer domain-specific features to achieve high performance on different downstream tasks.

Denoising

Learning Bilingual Word Embeddings Using Lexical Definitions

no code implementations WS 2019 Weijia Shi, Muhao Chen, Yingtao Tian, Kai-Wei Chang

Bilingual word embeddings, which represent lexicons of different languages in a shared embedding space, are essential for supporting semantic and knowledge transfers in a variety of cross-lingual NLP tasks.

Translation Word Alignment +1

BOSH: An Efficient Meta Algorithm for Decision-based Attacks

no code implementations10 Sep 2019 Zhenxin Xiao, Puyudi Yang, Yuchen Jiang, Kai-Wei Chang, Cho-Jui Hsieh

Adversarial example generation becomes a viable method for evaluating the robustness of a machine learning model.

Adversarial Attack Bayesian Optimization

Retrofitting Contextualized Word Embeddings with Paraphrases

no code implementations IJCNLP 2019 Weijia Shi, Muhao Chen, Pei Zhou, Kai-Wei Chang

Contextualized word embedding models, such as ELMo, generate meaningful representations of words and their context.

Sentence Sentence Classification +1

"The Boating Store Had Its Best Sail Ever": Pronunciation-attentive Contextualized Pun Recognition

no code implementations ACL 2020 Yichao Zhou, Jyun-Yu Jiang, Jieyu Zhao, Kai-Wei Chang, Wei Wang

In this paper, we propose Pronunciation-attentive Contextualized Pun Recognition (PCPR) to perceive human humor, detect if a sentence contains puns and locate them in the sentence.

Sentence

What Does BERT with Vision Look At?

no code implementations ACL 2020 Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang

Pre-trained visually grounded language models such as ViLBERT, LXMERT, and UNITER have achieved significant performance improvement on vision-and-language tasks but what they learn during pre-training remains unclear.

Language Modelling

On the Transferability of Adversarial Attacks against Neural Text Classifier

no code implementations17 Nov 2020 Liping Yuan, Xiaoqing Zheng, Yi Zhou, Cho-Jui Hsieh, Kai-Wei Chang

Based on these studies, we propose a genetic algorithm to find an ensemble of models that can be used to induce adversarial examples to fool almost all existing models.

text-classification Text Classification

CREATe: Clinical Report Extraction and Annotation Technology

no code implementations28 Feb 2021 Yichao Zhou, Wei-Ting Chen, BoWen Zhang, David Lee, J. Harry Caufield, Kai-Wei Chang, Yizhou Sun, Peipei Ping, Wei Wang

Clinical case reports are written descriptions of the unique aspects of a particular clinical case, playing an essential role in sharing clinical experiences about atypical disease phenotypes and new therapies.

"Nice Try, Kiddo": Investigating Ad Hominems in Dialogue Responses

no code implementations NAACL 2021 Emily Sheng, Kai-Wei Chang, Prem Natarajan, Nanyun Peng

Ad hominem attacks are those that target some feature of a person's character instead of the position the person is maintaining.

Abusive Language

Cross-Lingual Dependency Parsing by POS-Guided Word Reordering

no code implementations Findings of the Association for Computational Linguistics 2020 Lu Liu, Yi Zhou, Jianhan Xu, Xiaoqing Zheng, Kai-Wei Chang, Xuanjing Huang

The words in each sentence of a source language corpus are rearranged to meet the word order in a target language under the guidance of a part-of-speech based language model (LM).

Dependency Parsing Language Modelling +2

Does Robustness Improve Fairness? Approaching Fairness with Word Substitution Robustness Methods for Text Classification

no code implementations Findings (ACL) 2021 Yada Pruksachatkun, Satyapriya Krishna, Jwala Dhamala, Rahul Gupta, Kai-Wei Chang

Existing bias mitigation methods to reduce disparities in model outcomes across cohorts have focused on data augmentation, debiasing model embeddings, or adding fairness-based optimization objectives during training.

Data Augmentation Fairness +2

Clinical Named Entity Recognition using Contextualized Token Representations

no code implementations23 Jun 2021 Yichao Zhou, Chelsea Ju, J. Harry Caufield, Kevin Shih, Calvin Chen, Yizhou Sun, Kai-Wei Chang, Peipei Ping, Wei Wang

To facilitate various downstream applications using clinical case reports (CCRs), we pre-train two deep contextualized language models, Clinical Embeddings from Language Model (C-ELMo) and Clinical Contextual String Embeddings (C-Flair), using a clinical corpus from PubMed Central.

Language Modelling named-entity-recognition +3

On Measures of Biases and Harms in NLP

no code implementations7 Aug 2021 Sunipa Dev, Emily Sheng, Jieyu Zhao, Aubrie Amstutz, Jiao Sun, Yu Hou, Mattie Sanseverino, Jiin Kim, Akihiro Nishi, Nanyun Peng, Kai-Wei Chang

Recent studies show that Natural Language Processing (NLP) technologies propagate societal biases about demographic groups associated with attributes such as gender, race, and nationality.

Toward Degradation-Robust Voice Conversion

no code implementations14 Oct 2021 Chien-yu Huang, Kai-Wei Chang, Hung-Yi Lee

However, in real-world scenarios, it is difficult to collect clean utterances of a speaker, and they are usually degraded by noises or reverberations.

Denoising Speech Enhancement +1

On the Transferability of Adversarial Attacks against Neural Text Classifier

no code implementations EMNLP 2021 Liping Yuan, Xiaoqing Zheng, Yi Zhou, Cho-Jui Hsieh, Kai-Wei Chang

Based on these studies, we propose a genetic algorithm to find an ensemble of models that can be used to induce adversarial examples to fool almost all existing models.

text-classification Text Classification

Robustness and Adversarial Examples in Natural Language Processing

no code implementations EMNLP (ACL) 2021 Kai-Wei Chang, He He, Robin Jia, Sameer Singh

In particular, we will review recent studies on analyzing the weakness of NLP systems when facing adversarial inputs and data with a distribution shift.

SGEITL: Scene Graph Enhanced Image-Text Learning for Visual Commonsense Reasoning

no code implementations16 Dec 2021 Zhecan Wang, Haoxuan You, Liunian Harold Li, Alireza Zareian, Suji Park, Yiqing Liang, Kai-Wei Chang, Shih-Fu Chang

As for pre-training, a scene-graph-aware pre-training method is proposed to leverage structure knowledge extracted in the visual scene graph.

Visual Commonsense Reasoning

Neuro-Symbolic Entropy Regularization

no code implementations25 Jan 2022 Kareem Ahmed, Eric Wang, Kai-Wei Chang, Guy Van Den Broeck

We propose a loss, neuro-symbolic entropy regularization, that encourages the model to confidently predict a valid object.

Structured Prediction valid

A Survey of Knowledge-Intensive NLP with Pre-Trained Language Models

no code implementations17 Feb 2022 Da Yin, Li Dong, Hao Cheng, Xiaodong Liu, Kai-Wei Chang, Furu Wei, Jianfeng Gao

With the increasing model capacity brought by pre-trained language models, there is a growing need for more knowledgeable natural language processing (NLP) models with advanced functionalities, including providing and making flexible use of encyclopedic and commonsense knowledge.

Language Modelling

Measuring Fairness of Text Classifiers via Prediction Sensitivity

no code implementations ACL 2022 Satyapriya Krishna, Rahul Gupta, Apurv Verma, Jwala Dhamala, Yada Pruksachatkun, Kai-Wei Chang

With the rapid growth in language processing applications, fairness has emerged as an important consideration in data-driven solutions.

Attribute counterfactual +3

Retrieval Enhanced Data Augmentation for Question Answering on Privacy Policies

no code implementations19 Apr 2022 Md Rizwan Parvez, Jianfeng Chi, Wasi Uddin Ahmad, Yuan Tian, Kai-Wei Chang

Prior studies in privacy policies frame the question answering (QA) task as identifying the most relevant text segment or a list of sentences from a policy document given a user query.

Data Augmentation Question Answering +1

Multimodal Adaptive Distillation for Leveraging Unimodal Encoders for Vision-Language Tasks

no code implementations22 Apr 2022 Zhecan Wang, Noel Codella, Yen-Chun Chen, Luowei Zhou, Xiyang Dai, Bin Xiao, Jianwei Yang, Haoxuan You, Kai-Wei Chang, Shih-Fu Chang, Lu Yuan

Experiments demonstrate that MAD leads to consistent gains in the low-shot, domain-shifted, and fully-supervised conditions on VCR, SNLI-VE, and VQA, achieving SOTA performance on VCR compared to other single models pretrained with image-text data.

Question Answering Visual Commonsense Reasoning +2

Towards Adversarially Robust Text Classifiers by Learning to Reweight Clean Examples

no code implementations Findings (ACL) 2022 Jianhan Xu, Cenyuan Zhang, Xiaoqing Zheng, Linyang Li, Cho-Jui Hsieh, Kai-Wei Chang, Xuanjing Huang

Most of the existing defense methods improve the adversarial robustness by making the models adapt to the training set augmented with some adversarial examples.

Adversarial Robustness

Using Item Response Theory to Measure Gender and Racial Bias of a BERT-based Automated English Speech Assessment System

no code implementations NAACL (BEA) 2022 Alexander Kwako, Yixin Wan, Jieyu Zhao, Kai-Wei Chang, Li Cai, Mark Hansen

This study addresses the need to examine potential biases of transformer-based models in the context of automated English speech assessment.

An Analysis of the Effects of Decoding Algorithms on Fairness in Open-Ended Language Generation

no code implementations7 Oct 2022 Jwala Dhamala, Varun Kumar, Rahul Gupta, Kai-Wei Chang, Aram Galstyan

We present a systematic analysis of the impact of decoding algorithms on LM fairness, and analyze the trade-off between fairness, diversity and quality.

Fairness Text Generation

Watermarking Pre-trained Language Models with Backdooring

no code implementations14 Oct 2022 Chenxi Gu, Chengsong Huang, Xiaoqing Zheng, Kai-Wei Chang, Cho-Jui Hsieh

Large pre-trained language models (PLMs) have proven to be a crucial component of modern natural language processing systems.

Multi-Task Learning

The Tail Wagging the Dog: Dataset Construction Biases of Social Bias Benchmarks

1 code implementation18 Oct 2022 Nikil Roashan Selvam, Sunipa Dev, Daniel Khashabi, Tushar Khot, Kai-Wei Chang

How reliably can we trust the scores obtained from social bias benchmarks as faithful indicators of problematic social biases in a given language model?

Language Modelling

Investigating Ensemble Methods for Model Robustness Improvement of Text Classifiers

no code implementations28 Oct 2022 Jieyu Zhao, Xuezhi Wang, Yao Qin, Jilin Chen, Kai-Wei Chang

Large pre-trained language models have shown remarkable performance over the past few years.

Unsupervised Syntactically Controlled Paraphrase Generation with Abstract Meaning Representations

no code implementations2 Nov 2022 Kuan-Hao Huang, Varun Iyer, Anoop Kumar, Sriram Venkatapathy, Kai-Wei Chang, Aram Galstyan

In this paper, we demonstrate that leveraging Abstract Meaning Representations (AMR) can greatly improve the performance of unsupervised syntactically controlled paraphrase generation.

Data Augmentation Paraphrase Generation +1

Understanding ME? Multimodal Evaluation for Fine-grained Visual Commonsense

no code implementations10 Nov 2022 Zhecan Wang, Haoxuan You, Yicheng He, Wenhao Li, Kai-Wei Chang, Shih-Fu Chang

Visual commonsense understanding requires Vision Language (VL) models to not only understand image and text but also cross-reference in-between to fully integrate and achieve comprehension of the visual scene described.

Auditing Algorithmic Fairness in Machine Learning for Health with Severity-Based LOGAN

no code implementations16 Nov 2022 Anaelia Ovalle, Sunipa Dev, Jieyu Zhao, Majid Sarrafzadeh, Kai-Wei Chang

Therefore, ML auditing tools must be (1) better aligned with ML4H auditing principles and (2) able to illuminate and characterize communities vulnerable to the most harm.

Bias Detection Clustering +1

Find Someone Who: Visual Commonsense Understanding in Human-Centric Grounding

no code implementations14 Dec 2022 Haoxuan You, Rui Sun, Zhecan Wang, Kai-Wei Chang, Shih-Fu Chang

We present a new commonsense task, Human-centric Commonsense Grounding, that tests the models' ability to ground individuals given the context descriptions about what happened before, and their mental/physical states or intentions.

GIVL: Improving Geographical Inclusivity of Vision-Language Models with Pre-Training Methods

no code implementations CVPR 2023 Da Yin, Feng Gao, Govind Thattai, Michael Johnston, Kai-Wei Chang

A key goal for the advancement of AI is to develop technologies that serve the needs not just of one group but of all communities regardless of their geographical region.

Ensemble knowledge distillation of self-supervised speech models

no code implementations24 Feb 2023 Kuan-Po Huang, Tzu-hsun Feng, Yu-Kuan Fu, Tsu-Yuan Hsu, Po-Chieh Yen, Wei-Cheng Tseng, Kai-Wei Chang, Hung-Yi Lee

We applied two different aggregation techniques, layerwise-average and layerwise-concatenation, to the representations of different teacher models and found that the former was more effective.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +4
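
The two aggregation schemes mentioned above can be written down directly: average the teachers' hidden states layerwise, or concatenate them along the feature dimension, and train the student to regress the result. The random tensors below merely stand in for teacher representations.

```python
import torch

torch.manual_seed(0)
num_teachers, batch, frames, dim = 3, 2, 50, 8

# Hidden states from each teacher model at one layer (stand-ins for real features).
teacher_feats = [torch.randn(batch, frames, dim) for _ in range(num_teachers)]

layerwise_average = torch.stack(teacher_feats).mean(dim=0)  # shape (B, T, D)
layerwise_concat = torch.cat(teacher_feats, dim=-1)         # shape (B, T, 3 * D)

print("average target shape:", tuple(layerwise_average.shape))
print("concat target shape: ", tuple(layerwise_concat.shape))
# A student model would then be trained to regress these targets (loss omitted).
```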

Semantic Strengthening of Neuro-Symbolic Learning

no code implementations28 Feb 2023 Kareem Ahmed, Kai-Wei Chang, Guy Van Den Broeck

Numerous neuro-symbolic approaches have recently been proposed typically with the goal of adding symbolic knowledge to the output layer of a neural network.

SpeechPrompt v2: Prompt Tuning for Speech Classification Tasks

no code implementations1 Mar 2023 Kai-Wei Chang, Yu-Kai Wang, Hua Shen, Iu-thing Kang, Wei-Cheng Tseng, Shang-Wen Li, Hung-Yi Lee

For speech processing, SpeechPrompt shows its high parameter efficiency and competitive performance on a few speech classification tasks.

Ranked #17 on Spoken Language Understanding on Fluent Speech Commands (using extra training data)

Classification Language Modelling +1

Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness

no code implementations16 Mar 2023 Anaelia Ovalle, Arjun Subramonian, Vagrant Gautam, Gilbert Gee, Kai-Wei Chang

Through a critical review of how intersectionality is discussed in 30 papers from the AI fairness literature, we deductively and inductively: 1) map how intersectionality tenets operate within the AI fairness paradigm and 2) uncover gaps between the conceptualization and operationalization of intersectionality.

Fairness

Understanding and Mitigating Spurious Correlations in Text Classification with Neighborhood Analysis

1 code implementation23 May 2023 Oscar Chew, Hsuan-Tien Lin, Kai-Wei Chang, Kuan-Hao Huang

Recent research has revealed that machine learning models have a tendency to leverage spurious correlations that exist in the training set but may not hold true in general circumstances.

text-classification Text Classification
