Search Results for author: Andrew O. Arnold

Found 9 papers, 6 papers with code

Learning Dialogue Representations from Consecutive Utterances

1 code implementation • NAACL 2022 • Zhihan Zhou, Dejiao Zhang, Wei Xiao, Nicholas Dingwall, Xiaofei Ma, Andrew O. Arnold, Bing Xiang

In this paper, we introduce Dialogue Sentence Embedding (DSE), a self-supervised contrastive learning method that learns effective dialogue representations suitable for a wide range of dialogue tasks.

Tasks: Contrastive Learning, Conversational Question Answering, +14
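The core idea behind DSE — treating consecutive utterances from the same dialogue as positive pairs for contrastive learning — can be sketched as an in-batch InfoNCE objective: each utterance embedding should be most similar to the embedding of the utterance that follows it, with the other pairs in the batch serving as negatives. This is an illustrative simplification, not the paper's exact loss; the function name and temperature value are assumptions.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.05):
    """In-batch contrastive loss: row i of z1 (an utterance embedding)
    should match row i of z2 (the embedding of the next utterance);
    all other rows in the batch act as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature               # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # cross-entropy, diagonal targets
```

When the two views agree (each embedding matches its own partner), the loss approaches zero; mismatched pairings drive it up, which is exactly the pressure that pulls consecutive-utterance representations together.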

Self-Supervised Speaker Verification with Simple Siamese Network and Self-Supervised Regularization

no code implementations • 8 Dec 2021 • Mufan Sang, Haoqi Li, Fang Liu, Andrew O. Arnold, Li Wan

Combined with our strong online data augmentation strategy, the proposed SSReg shows the potential of self-supervised learning without negative pairs: it significantly improves self-supervised speaker representation learning with a simple Siamese network architecture.

Tasks: Contrastive Learning, Data Augmentation, +3
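The negative-free Siamese setup the abstract describes is in the spirit of SimSiam: each view's predictor output is pulled toward the other view's projection, which is treated as a constant (stop-gradient), and no negative pairs are used. Below is a minimal sketch of that symmetric objective; the names are assumptions, and since NumPy has no autograd the stop-gradient is only noted in comments.

```python
import numpy as np

def negative_cosine(p, z):
    """Negative mean cosine similarity between predictor outputs p and
    projections z; in the original formulation z is detached
    (stop-gradient), so no gradient flows through it."""
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    return -np.mean(np.sum(p * z, axis=1))

def simsiam_loss(p1, p2, z1, z2):
    """Symmetric negative-free objective: each view's predictor output
    p_i tries to match the other view's (detached) projection z_j."""
    return 0.5 * negative_cosine(p1, z2) + 0.5 * negative_cosine(p2, z1)
```

The stop-gradient is what prevents the trivial collapse to a constant embedding; the self-supervised regularization in SSReg adds a further term on top of this kind of backbone, which the sketch does not attempt to reproduce.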

Virtual Augmentation Supported Contrastive Learning of Sentence Representations

1 code implementation • Findings (ACL) 2022 • Dejiao Zhang, Wei Xiao, Henghui Zhu, Xiaofei Ma, Andrew O. Arnold

We then define an instance discrimination task regarding this neighborhood and generate the virtual augmentation in an adversarial training manner.

Tasks: Contrastive Learning, Data Augmentation, +2
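The "adversarial training manner" of generating virtual augmentations can be illustrated with an FGSM-style perturbation in embedding space: nudge each embedding a small step in the direction that most increases the loss, yielding a hard "virtual" view without any textual augmentation. This finite-difference sketch is a stand-in for the paper's actual construction, which differs in detail; all names and constants here are illustrative.

```python
import numpy as np

def virtual_adversarial_augmentation(z, loss_fn, epsilon=0.1, h=1e-4):
    """Perturb an embedding z in the direction that most increases
    loss_fn, approximating the gradient by central finite differences
    (a framework with autograd would compute it directly)."""
    grad = np.zeros_like(z)
    for i in range(z.size):
        e = np.zeros_like(z)
        e.flat[i] = h
        grad.flat[i] = (loss_fn(z + e) - loss_fn(z - e)) / (2 * h)
    # FGSM-style step: move epsilon along the sign of the gradient
    return z + epsilon * np.sign(grad)
```

Training against such worst-case neighbors is what makes the augmentation "adversarial": the encoder must keep the perturbed embedding inside the same instance-discrimination neighborhood as the original.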

Pairwise Supervised Contrastive Learning of Sentence Representations

1 code implementation • EMNLP 2021 • Dejiao Zhang, Shang-Wen Li, Wei Xiao, Henghui Zhu, Ramesh Nallapati, Andrew O. Arnold, Bing Xiang

Many recent successes in sentence representation learning have been achieved by simply fine-tuning on the Natural Language Inference (NLI) datasets with triplet loss or Siamese loss.

Tasks: Contrastive Learning, Natural Language Inference, +4
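The triplet-loss fine-tuning the abstract refers to exploits NLI structure: the premise serves as the anchor, its entailed hypothesis as the positive, and its contradicted hypothesis as the negative, with a margin separating the two distances. A minimal sketch of that standard objective follows (the margin value is illustrative, and this is the baseline recipe the paper builds on, not its proposed pairwise method).

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Margin-based triplet loss over batches of embeddings: pull the
    entailed hypothesis toward the premise, push the contradicted
    hypothesis at least `margin` farther away."""
    d_pos = np.linalg.norm(anchor - positive, axis=1)
    d_neg = np.linalg.norm(anchor - negative, axis=1)
    return np.mean(np.maximum(0.0, d_pos - d_neg + margin))
```

The loss is zero once every negative is at least `margin` farther from the anchor than its positive, so only violating triplets contribute gradient.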

Contrastive Fine-tuning Improves Robustness for Neural Rankers

no code implementations • Findings (ACL) 2021 • Xiaofei Ma, Cicero Nogueira dos Santos, Andrew O. Arnold

The performance of state-of-the-art neural rankers can deteriorate substantially when exposed to noisy inputs or applied to a new domain.

Tasks: Data Augmentation, Passage Ranking

Faithful Embeddings for Knowledge Base Queries

1 code implementation • NeurIPS 2020 • Haitian Sun, Andrew O. Arnold, Tania Bedrax-Weiss, Fernando Pereira, William W. Cohen

We address this problem with a novel QE method that is more faithful to deductive reasoning, and show that this leads to better performance on complex queries to incomplete KBs.

Tasks: Question Answering
