no code implementations • 20 Dec 2022 • Bosheng Ding, Chengwei Qin, Linlin Liu, Yew Ken Chia, Shafiq Joty, Boyang Li, Lidong Bing
In this paper, we evaluate the performance of GPT-3 as a data annotator by comparing it with traditional data annotation methods and analyzing its output on a range of tasks.
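For illustration, a minimal sketch of annotating data by prompting an LLM with a fixed label set; `query_llm` is a hypothetical stand-in for a completion API, not the paper's code:

```python
LABELS = {"positive", "negative", "neutral"}

PROMPT = ("Classify the sentiment of the sentence as positive, negative, "
          "or neutral.\nSentence: {text}\nSentiment:")

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a text-completion API call."""
    raise NotImplementedError("plug in an LLM client here")

def annotate(texts):
    labels = []
    for text in texts:
        raw = query_llm(PROMPT.format(text=text)).strip().lower()
        labels.append(raw if raw in LABELS else "neutral")  # guard off-label output
    return labels
```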
no code implementations • 20 Dec 2022 • Xingxuan Li, Yutong Li, Shafiq Joty, Linlin Liu, Fei Huang, Lin Qiu, Lidong Bing
Based on these findings, we recommend applying more systematic and comprehensive psychological metrics to further evaluate and improve the safety of LLMs.
1 code implementation • 16 Nov 2022 • Linlin Liu, Xingxuan Li, Megh Thakkar, Xin Li, Shafiq Joty, Luo Si, Lidong Bing
Due to their huge number of parameters, fine-tuning of pretrained language models (PLMs) is prone to overfitting in low-resource scenarios.
no code implementations • 10 Nov 2022 • Zewei Wang, Zhidong Tang, Yumeng Yuan, Ao Guo, Xin Luo, Renhe Chen, Chengwei Cao, Linlin Liu, Zhenghang Zhi, Weican Wu, Yingjia Guo, Yongqi Hu, Liujiang Yu, Ganbing Shang, Jing Chen, Jianshi Tang, Shaojian Hu, Shoumian Chen, Yuhang Zhao, Xufeng Kou
The data-processing capability of quantum computers relies on CMOS-based cryogenic control and storage systems.
no code implementations • 24 May 2022 • QiAn Fu, Linlin Liu, Fei Hou, Ying He
We evaluate our method on the FFHQR dataset and show that our method is effective for common portrait editing tasks, such as retouching, light editing, color transfer and expression editing.
no code implementations • 4 Apr 2022 • Linlin Liu, QiAn Fu, Fei Hou, Ying He
We develop a new method for portrait image editing, which supports fine-grained editing of geometry, colors, lighting and shadows using a single neural network model.
1 code implementation • 22 Nov 2021 • Linlin Liu, Xin Li, Ruidan He, Lidong Bing, Shafiq Joty, Luo Si
In this work, we explore methods to make better use of the multilingual annotation and language-agnostic property of KG triples, and present novel knowledge-based multilingual language models (KMLMs) trained directly on the knowledge triples.
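One plausible way to train directly on triples (a hedged sketch; the verbalization and masking scheme below are assumptions, not the paper's recipe) is to render each language-agnostic triple as text and mask part of it for MLM-style pretraining:

```python
import random

def verbalize(head: str, relation: str, tail: str) -> str:
    """Render a KG triple as a plain sentence."""
    return f"{head} {relation.replace('_', ' ')} {tail} ."

def mask_one(sentence: str, mask_token: str = "[MASK]") -> str:
    """Mask a single token so the model must recover the entity or relation."""
    tokens = sentence.split()
    tokens[random.randrange(len(tokens) - 1)] = mask_token  # keep trailing "."
    return " ".join(tokens)

# The same language-agnostic fact can be verbalized in multiple languages.
corpus = [mask_one(verbalize("Paris", "capital_of", "France")),
          mask_one(verbalize("巴黎", "capital_of", "法国"))]
```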
no code implementations • ACL 2021 • Linlin Liu, Bosheng Ding, Lidong Bing, Shafiq Joty, Luo Si, Chunyan Miao
Using both the source-language data and the translated data, we introduce a generation-based multilingual data augmentation method that further increases diversity by generating synthetic labeled data in multiple languages.
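As a rough sketch of the idea (the control codes and the `sample_from_lm` generation call are assumptions, not the paper's implementation): condition a multilingual generator on a language and label, then sample synthetic labeled examples:

```python
def sample_from_lm(prompt: str, n: int) -> list:
    """Hypothetical wrapper returning n sampled continuations."""
    raise NotImplementedError("plug in a multilingual LM here")

def augment(label: str, lang: str, n: int = 5) -> list:
    prompt = f"<{lang}> <{label}>"  # assumed control codes
    return [{"text": s, "label": label, "lang": lang}
            for s in sample_from_lm(prompt, n)]
```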
no code implementations • ACL 2021 • Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jia-Wei Low, Lidong Bing, Luo Si
It works by adding lightweight adapter modules to a pretrained language model (PrLM) and updating only the adapter parameters when learning a downstream task.
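A minimal Houlsby-style adapter sketch in PyTorch (illustrative only; the layer placement and bottleneck size are assumptions): a residual bottleneck MLP trained while the PrLM stays frozen:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Residual bottleneck MLP inserted into a frozen pretrained encoder."""
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))  # residual connection

def adapter_parameters(plm: nn.Module, adapters: nn.ModuleList):
    """Freeze the PrLM; hand only the adapter parameters to the optimizer."""
    for p in plm.parameters():
        p.requires_grad = False
    return list(adapters.parameters())
```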
1 code implementation • COLING 2022 • Linlin Liu, Thien Hai Nguyen, Shafiq Joty, Lidong Bing, Luo Si
We operationalize our framework by first proposing a novel sense-aware cross entropy loss to model word senses explicitly.
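One way to read a sense-aware cross entropy (a sketch assuming a sense-expanded vocabulary; the indexing scheme is illustrative, not the paper's exact loss): predict a (word, sense) pair rather than a bare word id:

```python
import torch
import torch.nn.functional as F

def sense_aware_ce(logits: torch.Tensor, word_ids: torch.Tensor,
                   sense_ids: torch.Tensor, senses_per_word: int) -> torch.Tensor:
    """
    logits:    (batch, vocab_size * senses_per_word) scores over (word, sense) pairs
    word_ids:  (batch,) gold word indices
    sense_ids: (batch,) gold sense indices within each word
    """
    targets = word_ids * senses_per_word + sense_ids  # flatten (word, sense)
    return F.cross_entropy(logits, targets)
```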
no code implementations • EMNLP 2020 • Bosheng Ding, Linlin Liu, Lidong Bing, Canasai Kruengkrai, Thien Hai Nguyen, Shafiq Joty, Luo Si, Chunyan Miao
Data augmentation techniques have been widely used to improve machine learning performance as they enhance the generalization capability of models.
1 code implementation • IJCNLP 2019 • Linlin Liu, Xiang Lin, Shafiq Joty, Simeng Han, Lidong Bing
Transition-based top-down parsing with pointer networks has achieved state-of-the-art results in multiple parsing tasks while maintaining linear time complexity.
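The pointing step can be sketched as dot-product attention over encoder states (illustrative, not the paper's exact architecture); the arg-max position becomes the split point of the current span:

```python
import torch
import torch.nn.functional as F

def point(decoder_state: torch.Tensor, encoder_states: torch.Tensor) -> int:
    """
    decoder_state:  (hidden,) summary of the span being split
    encoder_states: (seq_len, hidden) contextual token representations
    """
    scores = encoder_states @ decoder_state   # dot-product attention scores
    probs = F.softmax(scores, dim=-1)         # distribution over positions
    return int(torch.argmax(probs).item())    # pointed-to split position
```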