In this work, we explore using contrastive learning to improve the text representations produced by the BERT model for relation extraction.
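As a concrete illustration, below is a minimal sketch of contrastive learning over BERT representations using a generic supervised contrastive (InfoNCE-style) loss with Hugging Face transformers. The [CLS] pooling, loss form, and hyperparameters are assumptions for illustration, not the paper's exact objective.

```python
# Sketch: supervised contrastive loss over BERT sentence representations
# (assumed setup; relation labels define the positive pairs).
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def encode(sentences):
    """Return the [CLS] representation for each sentence in the batch."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    return encoder(**batch).last_hidden_state[:, 0]          # (B, H)

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Pull together representations of examples that share a relation label."""
    z = F.normalize(features, dim=-1)
    sim = z @ z.t() / temperature                             # (B, B) similarities
    mask = labels.unsqueeze(0) == labels.unsqueeze(1)         # positives share a label
    mask.fill_diagonal_(False)
    logits = sim - torch.eye(len(z), device=z.device) * 1e9   # exclude self-similarity
    log_prob = F.log_softmax(logits, dim=1)
    pos_counts = mask.sum(1).clamp(min=1)
    return -(log_prob * mask).sum(1).div(pos_counts).mean()

# Example usage (hypothetical relation labels):
# loss = supervised_contrastive_loss(encode(sentences), torch.tensor(relation_ids))
```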
To better understand this issue, we study the problem of continual domain adaptation, where the model is presented with a labelled source domain and a sequence of unlabelled target domains.
In this paper, we investigate utilizing every layer of the BERT model during fine-tuning.
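One common way to use every layer during fine-tuning is a learned scalar mix (an ELMo-style weighted sum) over all hidden states; the sketch below shows this interpretation. The class name, layer-mixing scheme, and classification head are assumptions and may differ from the paper's scheme.

```python
# Sketch: learned scalar mix over all BERT hidden layers (assumed interpretation).
import torch
import torch.nn as nn
from transformers import AutoModel

class AllLayerClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_labels=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name, output_hidden_states=True)
        num_layers = self.bert.config.num_hidden_layers + 1   # embeddings + encoder layers
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        hidden = torch.stack(outputs.hidden_states)            # (L, B, T, H): all layers
        weights = torch.softmax(self.layer_weights, dim=0).view(-1, 1, 1, 1)
        mixed = (weights * hidden).sum(0)                       # weighted sum of layers
        return self.classifier(mixed[:, 0])                     # classify from [CLS]
```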
We evaluate CVLP on several downstream tasks, including VQA, GQA, and NLVR2, to validate the superiority of contrastive learning for multi-modal representation learning.
At the core of our method, gradient regularization plays two key roles: (1) it enforces that the gradient of the contrastive loss does not increase the supervised training loss on the source domain, which maintains the discriminative power of the learned features; (2) it regularizes the gradient update on the new domain so that it does not increase the classification loss on the old target domains, which enables the model to adapt to an incoming target domain while preserving performance on previously observed domains.
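A minimal sketch of this kind of gradient constraint is given below, using a GEM-style projection: a proposed gradient is corrected whenever it conflicts with the gradient of a reference loss, so that a first-order descent step does not increase that loss. The function names and the sequential handling of the two constraints are illustrative assumptions; the actual method may solve the constraints jointly.

```python
# Sketch: projecting a gradient so it does not increase a reference loss
# (GEM-style single-constraint projection; names are illustrative).
import torch

def project_gradient(g, g_ref):
    """If the descent step -lr*g would increase the reference loss
    (i.e. <g, g_ref> < 0), remove the conflicting component from g."""
    dot = torch.dot(g, g_ref)
    if dot < 0:
        g = g - (dot / g_ref.dot(g_ref).clamp(min=1e-12)) * g_ref
    return g

def flat_grad(loss, params):
    """Flatten the gradient of `loss` w.r.t. `params` into one vector."""
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

# Assumed usage inside a training step (model, losses, params not shown):
# g_con = flat_grad(contrastive_loss, params)       # adaptation gradient
# g_src = flat_grad(source_sup_loss, params)        # source supervised loss gradient
# g_old = flat_grad(old_domain_cls_loss, params)    # old target domains' loss gradient
# g = project_gradient(project_gradient(g_con, g_src), g_old)
# ...write g back into the parameters' .grad fields, then optimizer.step()
```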
Adversarial training is a technique that improves model performance by including adversarial examples in the training process.
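For illustration, the sketch below shows one common instantiation of this idea, FGSM-based adversarial training in PyTorch; the choice of attack, the clean/adversarial mixing ratio, and epsilon are assumptions, not a description of any specific paper's setup.

```python
# Sketch: one adversarial training step with an FGSM perturbation (assumed setup).
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    # 1. Craft adversarial examples with one gradient-sign step on the input.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_clean = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss_clean, x_adv)[0]
    x_adv = (x_adv + epsilon * grad.sign()).detach()

    # 2. Train on a mix of clean and adversarial examples.
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```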
This domain vector is then used to encode the features from another domain through conditional normalization, so that features from different domains carry the same domain attributes.
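The sketch below illustrates one way such conditional normalization can work: the domain vector is mapped to per-channel scale and shift parameters that re-style normalized features from the other domain. The module structure and names are assumptions for illustration.

```python
# Sketch: conditional normalization driven by a domain vector (assumed design).
import torch
import torch.nn as nn

class DomainConditionalNorm(nn.Module):
    def __init__(self, num_channels, domain_dim):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)
        self.to_gamma = nn.Linear(domain_dim, num_channels)
        self.to_beta = nn.Linear(domain_dim, num_channels)

    def forward(self, features, domain_vector):
        # Normalize away the features' own statistics, then re-inject the
        # attributes encoded in the domain vector via scale and shift.
        gamma = self.to_gamma(domain_vector).unsqueeze(-1).unsqueeze(-1)
        beta = self.to_beta(domain_vector).unsqueeze(-1).unsqueeze(-1)
        return gamma * self.norm(features) + beta

# Example: re-style target-domain features with a source domain vector so both
# domains' features carry the same domain attributes (shapes are hypothetical):
# cond_norm = DomainConditionalNorm(num_channels=256, domain_dim=64)
# aligned = cond_norm(target_features, source_domain_vector)
```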
Existing methods for arterial blood pressure (BP) estimation directly map the input physiological signals to output BP values without explicitly modeling the underlying temporal dependencies in BP dynamics.
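To make the contrast concrete, the sketch below shows what explicitly modeling temporal dependencies could look like: a recurrent network over consecutive windows of physiological signals (e.g., PPG/ECG) rather than a direct per-window mapping. The architecture, channel counts, and signal shapes are assumptions, not the paper's model.

```python
# Sketch: BP estimation with explicit temporal modeling over signal windows
# (assumed architecture for illustration).
import torch
import torch.nn as nn

class TemporalBPEstimator(nn.Module):
    def __init__(self, in_channels=2, hidden_size=64):
        super().__init__()
        # Per-window feature extractor over the raw waveform samples.
        self.feature = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Recurrence over consecutive windows captures how BP evolves over time.
        self.rnn = nn.GRU(32, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)   # systolic and diastolic BP

    def forward(self, x):
        # x: (batch, num_windows, channels, samples_per_window)
        b, t, c, s = x.shape
        feats = self.feature(x.reshape(b * t, c, s)).squeeze(-1).reshape(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out)                    # one (SBP, DBP) prediction per window

# estimator = TemporalBPEstimator()
# preds = estimator(torch.randn(4, 10, 2, 625))  # -> (4, 10, 2)
```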