no code implementations • 12 Sep 2023 • Arpita Vats, Zhe Liu, Peng Su, Debjyoti Paul, Yingyi Ma, Yutong Pang, Zeeshan Ahmed, Ozlem Kalinli
To effectively perform adaptation, users' textual data is typically stored on servers or on their local devices, where downstream natural language processing (NLP) models can be trained directly on such in-domain data.
no code implementations • 20 Jan 2023 • Szu-Jui Chen, Debjyoti Paul, Yutong Pang, Peng Su, Xuedong Zhang
With the emergence of automatic speech recognition (ASR) models, converting spoken-form text (from ASR output) to written form is urgently needed.
Automatic Speech Recognition (ASR)
1 code implementation • NAACL (BioNLP) 2021 • Peng Su, Yifan Peng, K. Vijay-Shanker
In this work, we explore employing contrastive learning to improve the text representations from the BERT model for relation extraction.
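The exact objective is not shown in this listing; as a rough illustration of contrastive learning over BERT text representations, a minimal NT-Xent-style sketch might look like the following (the checkpoint name, dropout-as-augmentation trick, and [CLS] pooling are assumptions, not the paper's recipe):

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

def nt_xent_loss(z1, z2, temperature=0.1):
    """NT-Xent loss: row i of z1 and row i of z2 form a positive pair;
    every other row in the batch serves as a negative."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (B, B) cosine similarities
    targets = torch.arange(z1.size(0))          # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.train()  # keep dropout active so the two passes differ

sentences = ["aspirin inhibits COX-1", "ibuprofen binds COX-2"]
batch = tokenizer(sentences, padding=True, return_tensors="pt")
z1 = encoder(**batch).last_hidden_state[:, 0]   # [CLS] embeddings, view 1
z2 = encoder(**batch).last_hidden_state[:, 0]   # [CLS] embeddings, view 2
loss = nt_xent_loss(z1, z2)
```

In practice the positive pairs would come from the paper's own augmentation or pair-construction scheme rather than from two dropout passes.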
no code implementations • 23 Mar 2021 • Shixiang Tang, Peng Su, Dapeng Chen, Wanli Ouyang
To better understand this issue, we study the problem of continual domain adaptation, where the model is presented with a labelled source domain and a sequence of unlabelled target domains.
no code implementations • 1 Jan 2021 • Di Qiu, Zhanghan Ke, Peng Su, Lok Ming Lui
Many important problems in the real world do not have unique solutions.
no code implementations • 1 Nov 2020 • Peng Su, K. Vijay-Shanker
In this paper, we investigate utilizing the entire layer of the BERT model in the fine-tuning process.
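The listing does not spell out the mechanism; one plausible reading, pooling over every token of the final layer instead of only the [CLS] slot, can be sketched as follows (mean pooling and the checkpoint name are assumptions):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

batch = tokenizer(["PKB phosphorylates BAD"], return_tensors="pt")
hidden = model(**batch).last_hidden_state          # (B, T, H): the entire layer

# Standard fine-tuning keeps only the [CLS] slot ...
cls_vec = hidden[:, 0]                             # (B, H)
# ... whereas mean pooling uses every token of the layer.
mask = batch["attention_mask"].unsqueeze(-1)       # (B, T, 1)
mean_vec = (hidden * mask).sum(1) / mask.sum(1)    # (B, H)
```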
no code implementations • 26 Jul 2020 • Lei Shi, Kai Shuang, Shijie Geng, Peng Su, Zhengkai Jiang, Peng Gao, Zuohui Fu, Gerard de Melo, Sen Su
We evaluate CVLP on several downstream tasks, including VQA, GQA, and NLVR2, to validate the superiority of contrastive learning for multi-modal representation learning.
no code implementations • 25 Jul 2020 • Peng Su, Shixiang Tang, Peng Gao, Di Qiu, Ni Zhao, Xiaogang Wang
At the core of our method, gradient regularization plays two key roles: (1) it enforces that the gradient of the contrastive loss does not increase the supervised training loss on the source domain, which maintains the discriminative power of the learned features; (2) it regularizes the gradient update on the new domain so as not to increase the classification loss on the old target domains, which enables the model to adapt to an incoming target domain while preserving performance on previously observed domains.
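The abstract does not give the exact update rule, but a gradient-projection step in the spirit of these two constraints (project the new gradient whenever it conflicts with a reference gradient; all names and shapes below are assumptions) can be sketched as:

```python
import torch

def project_gradient(g_new, g_ref):
    """If g_new would increase the reference loss (negative dot product
    with g_ref), project out the conflicting component; otherwise keep it."""
    dot = torch.dot(g_new, g_ref)
    if dot < 0:
        g_new = g_new - dot / g_ref.pow(2).sum() * g_ref
    return g_new

# Toy usage with flattened parameter gradients (assumed shapes).
g_contrastive = torch.randn(10)   # gradient of the contrastive/adaptation loss
g_supervised = torch.randn(10)    # gradient of the source supervised loss
g_safe = project_gradient(g_contrastive, g_supervised)
assert torch.dot(g_safe, g_supervised) >= -1e-6   # conflict removed
```

The same projection could be applied twice per step: once against the source supervised gradient, and once against gradients stored for previously seen target domains.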
no code implementations • 8 May 2020 • Peng Su, K. Vijay-Shanker
Adversarial training is a technique for improving model performance by including adversarial examples in the training process.
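A minimal sketch of the general idea, assuming FGSM-style perturbation of input embeddings (the paper's exact recipe may differ, and all names and shapes here are illustrative):

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(embeddings, loss, epsilon=1e-3):
    """Fast-gradient-sign perturbation of input embeddings: step in the
    direction that most increases the loss."""
    grad, = torch.autograd.grad(loss, embeddings, retain_graph=True)
    return embeddings + epsilon * grad.sign()

# Toy classifier over sentence embeddings (assumed setup).
emb = torch.randn(4, 16, requires_grad=True)     # 4 "sentence" vectors
w = torch.randn(16, 2, requires_grad=True)       # linear classifier weights
labels = torch.tensor([0, 1, 0, 1])

loss = F.cross_entropy(emb @ w, labels)          # clean loss
adv_emb = fgsm_perturb(emb, loss)                # adversarial examples
adv_loss = F.cross_entropy(adv_emb @ w, labels)  # loss on adversarial inputs
(loss + adv_loss).backward()                     # train on clean + adversarial
```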
no code implementations • ECCV 2020 • Peng Su, Kun Wang, Xingyu Zeng, Shixiang Tang, Dapeng Chen, Di Qiu, Xiaogang Wang
This domain vector is then used to encode the features from another domain through conditional normalization, so that features from different domains carry the same domain attribute (see the sketch below).
Ranked #1 on Unsupervised Domain Adaptation on SIM10K to BDD100K
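A conditional-normalization layer along these lines, where a domain vector predicts a per-channel scale and shift, might be sketched as follows (layer sizes and names are assumptions, not the paper's exact design):

```python
import torch
import torch.nn as nn

class ConditionalNorm(nn.Module):
    """Instance-normalize features, then re-style them with a scale/shift
    predicted from a domain vector (a sketch, not the paper's exact layer)."""
    def __init__(self, num_channels, domain_dim):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)
        self.to_gamma = nn.Linear(domain_dim, num_channels)
        self.to_beta = nn.Linear(domain_dim, num_channels)

    def forward(self, feats, domain_vec):
        gamma = self.to_gamma(domain_vec).unsqueeze(-1).unsqueeze(-1)
        beta = self.to_beta(domain_vec).unsqueeze(-1).unsqueeze(-1)
        return self.norm(feats) * (1 + gamma) + beta

# Encode target-domain features with a source-domain vector.
cn = ConditionalNorm(num_channels=64, domain_dim=128)
target_feats = torch.randn(2, 64, 32, 32)
source_domain_vec = torch.randn(2, 128)
restyled = cn(target_feats, source_domain_vec)
```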
2 code implementations • 12 May 2017 • Peng Su, Xiao-Rong Ding, Yuan-Ting Zhang, Jing Liu, Fen Miao, Ni Zhao
Existing methods for arterial blood pressure (BP) estimation directly map input physiological signals to output BP values without explicitly modeling the underlying temporal dependencies in BP dynamics (see the recurrent sketch below).
Ranked #1 on Blood pressure estimation on MIMIC-III
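A minimal sketch of the contrasting approach, modeling BP estimation as a sequence problem with an LSTM (the input features, layer sizes, and two-value output are assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class BPRegressor(nn.Module):
    """LSTM mapping a physiological signal sequence (e.g. per-beat PPG/ECG
    features) to a BP estimate at every step, so the hidden state carries
    the temporal dependencies that direct per-sample mappings ignore."""
    def __init__(self, in_dim=8, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)    # [systolic, diastolic]

    def forward(self, x):
        h, _ = self.rnn(x)                  # (B, T, hidden): temporal state
        return self.head(h)                 # (B, T, 2): BP trajectory

model = BPRegressor()
signals = torch.randn(4, 100, 8)            # 4 records, 100 beats, 8 features
bp_pred = model(signals)                     # per-step BP, not a single value
```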