Search Results for author: James Y. Huang

Found 8 papers, 6 papers with code

Offset Unlearning for Large Language Models

no code implementations • 17 Apr 2024 • James Y. Huang, Wenxuan Zhou, Fei Wang, Fred Morstatter, Sheng Zhang, Hoifung Poon, Muhao Chen

Despite the strong capabilities of Large Language Models (LLMs) to acquire knowledge from their training corpora, the memorization of sensitive information in these corpora, such as copyrighted, harmful, and private content, has led to ethical and legal concerns.

Memorization

Contrastive Instruction Tuning

1 code implementation • 17 Feb 2024 • Tianyi Yan, Fei Wang, James Y. Huang, Wenxuan Zhou, Fan Yin, Aram Galstyan, Wenpeng Yin, Muhao Chen

Instruction tuning has been used as a promising approach to improve the performance of large language models (LLMs) on unseen tasks.

Sentence

DeAL: Decoding-time Alignment for Large Language Models

no code implementations • 5 Feb 2024 • James Y. Huang, Sailik Sengupta, Daniele Bonadiman, Yi-An Lai, Arshit Gupta, Nikolaos Pappas, Saab Mansour, Katrin Kirchhoff, Dan Roth

Current work focuses on alignment at model training time, through techniques such as Reinforcement Learning with Human Feedback (RLHF).

Robust Natural Language Understanding with Residual Attention Debiasing

1 code implementation • 28 May 2023 • Fei Wang, James Y. Huang, Tianyi Yan, Wenxuan Zhou, Muhao Chen

However, previous ensemble-based debiasing methods typically apply debiasing on top-level logits without directly addressing biased attention patterns.
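As a point of reference for the logit-level ensemble debiasing the excerpt mentions, here is a minimal sketch of a product-of-experts style loss, where a frozen bias-only model's logits are combined with the main model's logits so that examples the bias model already handles contribute less to training. This illustrates the prior approach being contrasted, not the paper's attention-level method; all names and shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def poe_debiased_loss(main_logits: torch.Tensor,
                      bias_logits: torch.Tensor,
                      labels: torch.Tensor) -> torch.Tensor:
    """Product-of-experts debiasing applied to top-level logits.

    main_logits: (batch, num_classes) from the model being trained
    bias_logits: (batch, num_classes) from a frozen, bias-only model
    labels:      (batch,) gold class indices
    """
    # Combine the two predictions in log space (log p_main + log p_bias),
    # then renormalize and compute the usual negative log-likelihood.
    combined = F.log_softmax(main_logits, dim=-1) + F.log_softmax(bias_logits, dim=-1)
    return F.nll_loss(F.log_softmax(combined, dim=-1), labels)
```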

Natural Language Understanding

Parameter-Efficient Tuning with Special Token Adaptation

1 code implementation • 10 Oct 2022 • Xiaocong Yang, James Y. Huang, Wenxuan Zhou, Muhao Chen

Parameter-efficient tuning aims at updating only a small subset of parameters when adapting a pretrained model to downstream tasks.
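A minimal sketch of the general recipe described here: freeze every pretrained parameter and train only a small number of new ones. In the spirit of the paper's "special token adaptation" title, the new parameters below are a learned offset added to the [CLS] token representation plus a small classifier head; the model name, layer choice, and class names are illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn as nn
from transformers import AutoModel  # assumes the Hugging Face transformers library

class SpecialTokenAdapter(nn.Module):
    def __init__(self, model_name: str = "bert-base-uncased", num_labels: int = 2):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(model_name)
        hidden = self.backbone.config.hidden_size
        # Freeze all pretrained weights; they receive no gradient updates.
        for p in self.backbone.parameters():
            p.requires_grad = False
        # The only trainable parameters: an offset for the special token
        # representation and a lightweight classification head.
        self.cls_offset = nn.Parameter(torch.zeros(hidden))
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.backbone(input_ids=input_ids, attention_mask=attention_mask)
        cls_state = out.last_hidden_state[:, 0] + self.cls_offset
        return self.classifier(cls_state)
```

In use, only `cls_offset` and `classifier` would be passed to the optimizer, which is what keeps the tuned parameter count small.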

Natural Language Understanding, NER, +2

Unified Semantic Typing with Meaningful Label Inference

1 code implementation • NAACL 2022 • James Y. Huang, Bangzheng Li, Jiashu Xu, Muhao Chen

Semantic typing aims at classifying tokens or spans of interest in a textual context into semantic categories such as relations, entity types, and event types.
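To make the task definition concrete, here is a small zero-shot sketch that scores a span in context against the names of candidate semantic categories, loosely in the spirit of the title's "meaningful label inference". This is an illustration of the task, not the paper's UniST model; the encoder, pooling, labels, and example sentence are all assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModel  # assumes Hugging Face transformers

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(text: str) -> torch.Tensor:
    # Mean-pool the token embeddings into a single sentence/label vector.
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state
    return hidden.mean(dim=1).squeeze(0)

sentence = "[Barack Obama] was elected president in 2008."
labels = ["person", "location", "organization", "event"]

span_vec = embed(sentence)
scores = torch.stack(
    [torch.cosine_similarity(span_vec, embed(label), dim=0) for label in labels]
)
# Prints whichever label name lies closest to the span's context in embedding space.
print(labels[int(scores.argmax())])
```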

Entity Typing, Relation Classification

Disentangling Semantics and Syntax in Sentence Embeddings with Pre-trained Language Models

1 code implementation • NAACL 2021 • James Y. Huang, Kuan-Hao Huang, Kai-Wei Chang

In this work, we present ParaBART, a semantic sentence embedding model that learns to disentangle semantics and syntax in sentence embeddings obtained by pre-trained language models.

Semantic Similarity, Semantic Textual Similarity, +3
