Search Results for author: Xudong Han

Found 10 papers, 6 papers with code

Towards Equal Opportunity Fairness through Adversarial Learning

1 code implementation • 12 Mar 2022 • Xudong Han, Timothy Baldwin, Trevor Cohn

Adversarial training is a common approach for bias mitigation in natural language processing.

Fairness, Natural Language Processing
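The snippet below is a minimal sketch of this kind of adversarial bias mitigation, not the paper's exact method: a task classifier and an adversary that predicts a protected attribute share an encoder, and a gradient reversal layer discourages the encoder from encoding that attribute. All dimensions, the toy data, and the lambda_adv weight are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses and scales gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class AdversarialDebiaser(nn.Module):
    def __init__(self, input_dim=768, hidden_dim=256, num_labels=2, num_protected=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.task_head = nn.Linear(hidden_dim, num_labels)    # main task classifier
        self.adv_head = nn.Linear(hidden_dim, num_protected)  # protected-attribute adversary

    def forward(self, x, lambda_adv=1.0):
        h = self.encoder(x)
        task_logits = self.task_head(h)
        adv_logits = self.adv_head(GradReverse.apply(h, lambda_adv))
        return task_logits, adv_logits

# Joint objective: task loss plus the adversary's loss; the reversed gradients
# push the encoder to hide the protected attribute from the adversary.
model = AdversarialDebiaser()
x = torch.randn(8, 768)                 # e.g. pooled sentence embeddings (toy data)
y_task = torch.randint(0, 2, (8,))      # main-task labels
y_prot = torch.randint(0, 2, (8,))      # protected-attribute labels
task_logits, adv_logits = model(x)
loss = nn.functional.cross_entropy(task_logits, y_task) \
     + nn.functional.cross_entropy(adv_logits, y_prot)
loss.backward()
```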

Contrastive Learning for Fair Representations

no code implementations • 22 Sep 2021 • Aili Shen, Xudong Han, Trevor Cohn, Timothy Baldwin, Lea Frermann

Trained classification models can unintentionally lead to biased representations and predictions, which can reinforce societal preconceptions and stereotypes.

Contrastive Learning
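As a rough illustration of contrastive learning applied to fair representations (a sketch of the general technique under stated assumptions, not this paper's exact objective), the snippet below computes a supervised contrastive loss over task labels; a fairness term could reuse the same computation on protected-attribute labels with the opposite sign. The temperature and batch setup are assumptions.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z, labels, temperature=0.1):
    """z: (batch, dim) representations; labels: (batch,) integer task labels."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                    # pairwise cosine similarities
    self_mask = torch.eye(len(z), dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float('-inf'))  # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    positives = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = positives.sum(1).clamp(min=1)
    # Average log-probability of each anchor's same-label positives.
    pos_log_prob = log_prob.masked_fill(~positives, 0.0).sum(1)
    return -(pos_log_prob / pos_counts).mean()

z = torch.randn(16, 128, requires_grad=True)   # stand-in encoder outputs
y = torch.randint(0, 2, (16,))                 # task labels
loss = supervised_contrastive_loss(z, y)
# A fairness variant could apply the analogous term to protected-attribute
# labels with reversed sign, discouraging clustering by demographic group.
loss.backward()
```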

Balancing out Bias: Achieving Fairness Through Balanced Training

no code implementations • 16 Sep 2021 • Xudong Han, Timothy Baldwin, Trevor Cohn

Group bias in natural language processing tasks manifests as disparities in system error rates across texts authored by different demographic groups, typically disadvantaging minority groups.

Fairness, Natural Language Processing
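A minimal sketch of balanced training in this spirit: weight (or resample) training instances by the inverse frequency of their (task label, demographic group) cell so every combination is seen roughly equally often. The toy labels and the use of WeightedRandomSampler are illustrative assumptions, not the paper's exact procedure.

```python
from collections import Counter
import torch
from torch.utils.data import WeightedRandomSampler

y_task = torch.tensor([0, 0, 0, 0, 1, 1, 1, 0])    # task labels (toy data)
y_group = torch.tensor([0, 0, 0, 1, 0, 1, 1, 0])   # demographic group labels (toy data)

# Count each (label, group) cell and weight every instance by its inverse
# frequency, so rare cells (e.g. minority-group positives) are upsampled.
cells = list(zip(y_task.tolist(), y_group.tolist()))
counts = Counter(cells)
weights = torch.tensor([1.0 / counts[c] for c in cells])

sampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)
# Passing `sampler=sampler` to a DataLoader yields batches that are roughly
# balanced across (label, group) combinations.
```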

Learning-based Optoelectronically Innervated Tactile Finger for Rigid-Soft Interactive Grasping

no code implementations • 29 Jan 2021 • Linhan Yang, Xudong Han, Weijie Guo, Fang Wan, Jia Pan, Chaoyang Song

This paper presents a novel design of a soft tactile finger with omni-directional adaptation using multi-channel optical fibers for rigid-soft interactive grasping.

Robotics

Diverse Adversaries for Mitigating Bias in Training

1 code implementation • EACL 2021 • Xudong Han, Timothy Baldwin, Trevor Cohn

Adversarial learning can learn fairer and less biased models of language than standard methods.
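The sketch below illustrates the idea of using several diverse adversaries under stated assumptions: multiple discriminators all try to predict the protected attribute from a shared representation, while a pairwise difference penalty on their weights pushes them to capture different aspects of the bias. Layer sizes, the number of adversaries, and the exact penalty form are assumptions rather than the paper's formulation.

```python
import torch
import torch.nn as nn

class DiverseAdversaries(nn.Module):
    def __init__(self, hidden_dim=256, num_protected=2, num_adversaries=3):
        super().__init__()
        self.adversaries = nn.ModuleList(
            nn.Linear(hidden_dim, num_protected) for _ in range(num_adversaries)
        )

    def forward(self, h):
        # Each adversary predicts the protected attribute from the shared representation.
        return [adv(h) for adv in self.adversaries]

    def difference_penalty(self):
        # Penalise pairwise similarity between adversary weight matrices so
        # that each adversary learns a distinct view of the representation.
        penalty = 0.0
        ws = [adv.weight for adv in self.adversaries]
        for i in range(len(ws)):
            for j in range(i + 1, len(ws)):
                penalty = penalty + (ws[i] * ws[j]).sum().abs()
        return penalty

advs = DiverseAdversaries()
h = torch.randn(8, 256)                 # shared hidden representations (toy data)
y_prot = torch.randint(0, 2, (8,))      # protected-attribute labels
adv_loss = sum(nn.functional.cross_entropy(logits, y_prot) for logits in advs(h))
total_adv_objective = adv_loss + 0.1 * advs.difference_penalty()
```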
