Search Results for author: Wenxuan Zhou

Found 40 papers, 26 papers with code

Offset Unlearning for Large Language Models

no code implementations · 17 Apr 2024 · James Y. Huang, Wenxuan Zhou, Fei Wang, Fred Morstatter, Sheng Zhang, Hoifung Poon, Muhao Chen

Despite the strong capabilities of Large Language Models (LLMs) to acquire knowledge from their training corpora, the memorization of sensitive information in those corpora, such as copyrighted, harmful, and private content, has led to ethical and legal concerns.

Memorization

Contrastive Instruction Tuning

1 code implementation · 17 Feb 2024 · Tianyi Yan, Fei Wang, James Y. Huang, Wenxuan Zhou, Fan Yin, Aram Galstyan, Wenpeng Yin, Muhao Chen

Instruction tuning has emerged as a promising approach for improving the performance of large language models (LLMs) on unseen tasks.

Sentence

GeoLM: Empowering Language Models for Geospatially Grounded Language Understanding

1 code implementation · 23 Oct 2023 · Zekun Li, Wenxuan Zhou, Yao-Yi Chiang, Muhao Chen

This paper introduces GeoLM, a geospatially grounded language model that enhances the understanding of geo-entities in natural language.

Contrastive Learning Entity Typing +4

Reinforcement Learning in a Safety-Embedded MDP with Trajectory Optimization

no code implementations · 10 Oct 2023 · Fan Yang, Wenxuan Zhou, Zuxin Liu, Ding Zhao, David Held

This work introduces a novel approach that combines RL with trajectory optimization to manage this trade-off effectively.

reinforcement-learning Reinforcement Learning (RL) +1

Robust Natural Language Understanding with Residual Attention Debiasing

1 code implementation · 28 May 2023 · Fei Wang, James Y. Huang, Tianyi Yan, Wenxuan Zhou, Muhao Chen

However, previous ensemble-based debiasing methods typically apply debiasing on top-level logits without directly addressing biased attention patterns.

Natural Language Understanding

A Causal View of Entity Bias in (Large) Language Models

1 code implementation · 24 May 2023 · Fei Wang, Wenjie Mo, Yiwei Wang, Wenxuan Zhou, Muhao Chen

Building upon this SCM, we propose causal intervention techniques to mitigate entity bias for both white-box and black-box settings.

Machine Reading Comprehension Memorization +1

Getting Sick After Seeing a Doctor? Diagnosing and Mitigating Knowledge Conflicts in Event Temporal Reasoning

no code implementations · 24 May 2023 · Tianqing Fang, Zhaowei Wang, Wenxuan Zhou, Hongming Zhang, Yangqiu Song, Muhao Chen

However, knowledge conflicts arise when there is a mismatch between the actual temporal relations of events in the context and the prior knowledge or biases learned by the model.

counterfactual Data Augmentation +2

How Fragile is Relation Extraction under Entity Replacements?

1 code implementation · 22 May 2023 · Yiwei Wang, Bryan Hooi, Fei Wang, Yujun Cai, Yuxuan Liang, Wenxuan Zhou, Jing Tang, Manjuan Duan, Muhao Chen

In principle, textual context determines the ground-truth relation and the RE models should be able to correctly identify the relations reflected by the textual context.

Benchmarking Causal Inference +2

HACMan: Learning Hybrid Actor-Critic Maps for 6D Non-Prehensile Manipulation

no code implementations · 6 May 2023 · Wenxuan Zhou, Bowen Jiang, Fan Yang, Chris Paxton, David Held

In this work, we introduce Hybrid Actor-Critic Maps for Manipulation (HACMan), a reinforcement learning approach for 6D non-prehensile manipulation of objects using point cloud observations.

Object

Context-faithful Prompting for Large Language Models

1 code implementation · 20 Mar 2023 · Wenxuan Zhou, Sheng Zhang, Hoifung Poon, Muhao Chen

However, their reliance on parametric knowledge may cause them to overlook contextual cues, leading to incorrect predictions in context-sensitive NLP tasks (e.g., knowledge acquisition tasks).

counterfactual Machine Reading Comprehension +1

Multi-hop Evidence Retrieval for Cross-document Relation Extraction

1 code implementation · 21 Dec 2022 · Keming Lu, I-Hung Hsu, Wenxuan Zhou, Mingyu Derek Ma, Muhao Chen

Relation Extraction (RE) has been extended to cross-document scenarios because many relations are not simply described in a single document.

Relation Relation Extraction +1

Continual Contrastive Finetuning Improves Low-Resource Relation Extraction

no code implementations · 21 Dec 2022 · Wenxuan Zhou, Sheng Zhang, Tristan Naumann, Muhao Chen, Hoifung Poon

In this paper, we aim to bridge this gap and propose to pretrain and finetune the RE model using consistent contrastive learning objectives.

Contrastive Learning Relation +3
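The consistent pretraining and finetuning objective described above is contrastive. As a rough illustration of the kind of objective involved (a generic InfoNCE loss with in-batch negatives, not this paper's exact formulation), here is a minimal NumPy sketch over hypothetical embedding matrices:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Generic InfoNCE contrastive objective: each anchor embedding
    should score highest against its own positive, with every other
    positive in the batch serving as an in-batch negative."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    sim = a @ p.T / temperature                  # pairwise cosine similarities
    sim -= sim.max(axis=1, keepdims=True)        # stabilize the softmax
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # matched pairs lie on the diagonal
```

Well-aligned anchor/positive pairs drive the loss toward zero, while mismatched pairs inflate it, which is the property a contrastive pretrain/finetune pipeline exploits.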

Learning to Grasp the Ungraspable with Emergent Extrinsic Dexterity

no code implementations · 2 Nov 2022 · Wenxuan Zhou, David Held

Previous work in extrinsic dexterity usually makes careful assumptions about contacts, which impose restrictions on robot design, robot motions, and the variation of physical parameters.

Friction Object +1

Parameter-Efficient Tuning with Special Token Adaptation

1 code implementation · 10 Oct 2022 · Xiaocong Yang, James Y. Huang, Wenxuan Zhou, Muhao Chen

Parameter-efficient tuning aims at updating only a small subset of parameters when adapting a pretrained model to downstream tasks.

Natural Language Understanding NER +2

Summarization as Indirect Supervision for Relation Extraction

1 code implementation · 19 May 2022 · Keming Lu, I-Hung Hsu, Wenxuan Zhou, Mingyu Derek Ma, Muhao Chen

Considering that summarization tasks aim at acquiring concise expressions of synoptical information from the longer context, these tasks naturally align with the objective of RE, i.e., extracting a kind of synoptical information that describes the relation of entity mentions.

Relation Relation Extraction +1

Should We Rely on Entity Mentions for Relation Extraction? Debiasing Relation Extraction with Counterfactual Analysis

1 code implementation · NAACL 2022 · Yiwei Wang, Muhao Chen, Wenxuan Zhou, Yujun Cai, Yuxuan Liang, Dayiheng Liu, Baosong Yang, Juncheng Liu, Bryan Hooi

In this paper, we propose the CORE (Counterfactual Analysis based Relation Extraction) debiasing method that guides the RE models to focus on the main effects of textual context without losing the entity information.

counterfactual Relation +2

GRAPHCACHE: Message Passing as Caching for Sentence-Level Relation Extraction

no code implementations · Findings (NAACL) 2022 · Yiwei Wang, Muhao Chen, Wenxuan Zhou, Yujun Cai, Yuxuan Liang, Bryan Hooi

GRAPHCACHE aggregates the features from sentences in the whole dataset to learn global representations of properties and uses them to augment the local features within individual sentences.

Relation Relation Extraction +1

Answer Consolidation: Formulation and Benchmarking

1 code implementation · NAACL 2022 · Wenxuan Zhou, Qiang Ning, Heba Elfardy, Kevin Small, Muhao Chen

Current question answering (QA) systems primarily consider the single-answer scenario, where each question is assumed to be paired with one correct answer.

Benchmarking Question Answering

Forgetting and Imbalance in Robot Lifelong Learning with Off-policy Data

no code implementations · 12 Apr 2022 · Wenxuan Zhou, Steven Bohez, Jan Humplik, Abbas Abdolmaleki, Dushyant Rao, Markus Wulfmeier, Tuomas Haarnoja, Nicolas Heess

We find that training with imbalanced off-policy data from multiple environments across the lifetime creates a significant performance drop. We propose the Offline Distillation Pipeline to break this trade-off by separating the training procedure into an online interaction phase and an offline distillation phase.

Reinforcement Learning (RL)

Sharpness-Aware Minimization with Dynamic Reweighting

no code implementations · 16 Dec 2021 · Wenxuan Zhou, Fangyu Liu, Huan Zhang, Muhao Chen

Deep neural networks are often overparameterized and may not easily achieve good generalization.

Natural Language Understanding

Lyapunov Barrier Policy Optimization

1 code implementation · 16 Mar 2021 · Harshit Sikchi, Wenxuan Zhou, David Held

Current RL agents explore the environment without considering these constraints, which can lead to damage to the hardware or even other agents in the environment.

Reinforcement Learning (RL)

An Improved Baseline for Sentence-level Relation Extraction

1 code implementation · 2 Feb 2021 · Wenxuan Zhou, Muhao Chen

Sentence-level relation extraction (RE) aims at identifying the relationship between two entities in a sentence.

Relation Relation Extraction +1

PLAS: Latent Action Space for Offline Reinforcement Learning

2 code implementations · 14 Nov 2020 · Wenxuan Zhou, Sujay Bajracharya, David Held

The goal of offline reinforcement learning is to learn a policy from a fixed dataset, without further interactions with the environment.

Continuous Control Deformable Object Manipulation +2

Document-Level Relation Extraction with Adaptive Thresholding and Localized Context Pooling

1 code implementation · 21 Oct 2020 · Wenxuan Zhou, Kevin Huang, Tengyu Ma, Jing Huang

In this paper, we propose two novel techniques, adaptive thresholding and localized context pooling, to solve the multi-label and multi-entity problems.

Document-level Relation Extraction Multi-Label Classification +2
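Of the two techniques, adaptive thresholding is easy to illustrate: instead of a single global decision threshold for the multi-label relation classifier, a dedicated threshold (TH) class is scored for each entity pair, and only classes whose logits beat the TH logit are predicted. Below is a minimal NumPy sketch of the inference rule only (the paper trains the TH class with an adaptive-thresholding loss, omitted here), over hypothetical logits:

```python
import numpy as np

def adaptive_threshold_predict(logits, th_index=0):
    """Adaptive thresholding for multi-label prediction: the logit of a
    dedicated TH class acts as a per-example threshold, so each entity
    pair effectively gets its own learned decision boundary."""
    logits = np.asarray(logits, dtype=float)
    th = logits[..., th_index]                 # per-pair threshold logit
    mask = logits > th[..., None]              # classes scoring above TH
    mask[..., th_index] = False                # TH itself is never emitted
    return [np.flatnonzero(row).tolist() for row in np.atleast_2d(mask)]
```

For example, with TH at index 0 and logits `[0.0, 1.2, -0.5, 0.3]`, classes 1 and 3 clear the threshold; if no class beats TH, the pair is predicted to have no relation, which handles the multi-label and no-relation cases uniformly.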

Learning Off-Policy with Online Planning

1 code implementation · 23 Aug 2020 · Harshit Sikchi, Wenxuan Zhou, David Held

In this work, we investigate a novel instantiation of H-step lookahead with a learned model and a terminal value function learned by a model-free off-policy algorithm, named Learning Off-Policy with Online Planning (LOOP).

Continuous Control reinforcement-learning +1
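The core idea, H-step lookahead with a terminal value function, can be pictured as model-predictive control: roll each candidate action sequence through the learned dynamics model for H steps, then bootstrap the remainder of the return with the learned value function. The toy sketch below uses an exhaustive search over a discrete action set purely for illustration; LOOP itself plans with learned neural models and a more sophisticated optimizer:

```python
import itertools

def h_step_lookahead(state, dynamics, reward, value, actions,
                     horizon=3, gamma=0.9):
    """Exhaustive H-step lookahead: score every action sequence by its
    modelled H-step discounted return plus a discounted terminal-value
    estimate, and return the first action of the best sequence."""
    best_score, best_first = float("-inf"), None
    for plan in itertools.product(actions, repeat=horizon):
        s, ret, disc = state, 0.0, 1.0
        for a in plan:
            ret += disc * reward(s, a)   # model-predicted reward
            s = dynamics(s, a)           # rollout through the model
            disc *= gamma
        ret += disc * value(s)           # value function covers beyond H
        if ret > best_score:
            best_score, best_first = ret, plan[0]
    return best_first, best_score
```

On a toy 1-D problem where `dynamics(s, a) = s + a` and the cost grows with `s**2`, the planner picks the action that drives the state toward zero, trading off short-horizon model rollouts against the value function's long-horizon estimate.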

A Variational Approach to Unsupervised Sentiment Analysis

no code implementations · 21 Aug 2020 · Ziqian Zeng, Wenxuan Zhou, Xin Liu, Zizheng Lin, Yangqiu Song, Michael David Kuo, Wan Hang Keith Chiu

Our objective function is to predict an opinion word given a target word, while our ultimate goal is to learn a sentiment classifier.

Sentiment Analysis

IsoBN: Fine-Tuning BERT with Isotropic Batch Normalization

1 code implementation · 2 May 2020 · Wenxuan Zhou, Bill Yuchen Lin, Xiang Ren

Fine-tuning pre-trained language models (PTLMs), such as BERT and its better variant RoBERTa, has been a common practice for advancing performance in natural language understanding (NLU) tasks.

Natural Language Understanding Representation Learning

Improving BERT Fine-tuning with Embedding Normalization

no code implementations · 10 Nov 2019 · Wenxuan Zhou, Junyi Du, Xiang Ren

Large pre-trained sentence encoders like BERT start a new chapter in natural language processing.

General Classification Sentence +2

Learning from Explanations with Neural Execution Tree

1 code implementation · ICLR 2020 · Ziqi Wang, Yujia Qin, Wenxuan Zhou, Jun Yan, Qinyuan Ye, Leonardo Neves, Zhiyuan Liu, Xiang Ren

While deep neural networks have achieved impressive performance on a range of NLP tasks, these data-hungry models heavily rely on labeled data, which restricts their applications in scenarios where data annotation is expensive.

Data Augmentation Multi-hop Question Answering +6

NERO: A Neural Rule Grounding Framework for Label-Efficient Relation Extraction

2 code implementations · 5 Sep 2019 · Wenxuan Zhou, Hongtao Lin, Bill Yuchen Lin, Ziqi Wang, Junyi Du, Leonardo Neves, Xiang Ren

The soft matching module learns to match rules with semantically similar sentences so that raw corpora can be automatically labeled and leveraged by the RE module (with much better coverage) as augmented supervision, in addition to the exactly matched sentences.

Relation Relation Extraction +1
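The soft matching idea can be pictured as nearest-rule labeling in a shared embedding space: a sentence inherits a rule's relation label when it is sufficiently similar to that rule. The sketch below uses hypothetical pre-computed embeddings, made-up relation labels, and a fixed cosine threshold; in NERO the matcher is a learned module trained jointly with the RE model rather than a fixed similarity cutoff:

```python
import numpy as np

def soft_match_label(sentence_vecs, rule_vecs, rule_labels, threshold=0.8):
    """Soft rule matching sketch: assign each sentence the label of its
    most similar rule if the cosine similarity clears a threshold,
    otherwise leave it unlabeled (None)."""
    s = sentence_vecs / np.linalg.norm(sentence_vecs, axis=1, keepdims=True)
    r = rule_vecs / np.linalg.norm(rule_vecs, axis=1, keepdims=True)
    sims = s @ r.T                      # cosine similarity sentence x rule
    best = sims.argmax(axis=1)          # closest rule per sentence
    return [rule_labels[j] if sims[i, j] >= threshold else None
            for i, j in enumerate(best)]
```

Sentences near a rule in embedding space are auto-labeled as augmented supervision, while ambiguous sentences (similar to no rule) are left unlabeled rather than mislabeled.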

Environment Probing Interaction Policies

1 code implementation · ICLR 2019 · Wenxuan Zhou, Lerrel Pinto, Abhinav Gupta

A key challenge in reinforcement learning (RL) is environment generalization: a policy trained to solve a task in one environment often fails to solve the same task in a slightly different test environment.

Reinforcement Learning (RL)

Self-regulation: Employing a Generative Adversarial Network to Improve Event Detection

1 code implementation · ACL 2018 · Yu Hong, Wenxuan Zhou, Jingli Zhang, Guodong Zhou, Qiaoming Zhu

Due to their ability to encode and map semantic information into a high-dimensional latent feature space, neural networks have been used successfully, to a certain extent, for detecting events.

Event Detection Feature Engineering +1

SWEET: Serving the Web by Exploiting Email Tunnels

1 code implementation · 14 Nov 2012 · Amir Houmansadr, Wenxuan Zhou, Matthew Caesar, Nikita Borisov

As the operation of SWEET is not bound to specific email providers, we argue that a censor would need to block all email communications to disrupt SWEET, which is infeasible as email constitutes an important part of today's Internet.

Cryptography and Security
