In this paper, we evaluate the performance of GPT-3 as a data annotator by comparing it with traditional data annotation methods and analyzing its output on a range of tasks.
Based on these findings, we recommend applying more systematic and comprehensive psychological metrics to further evaluate and improve the safety of LLMs.
Due to their huge number of parameters, fine-tuning of pretrained language models (PLMs) is prone to overfitting in low-resource scenarios.
no code implementations • 10 Nov 2022 • Zewei Wang, Zhidong Tang, Yumeng Yuan, Ao Guo, Xin Luo, Renhe Chen, Chengwei Cao, Linlin Liu, Zhenghang Zhi, Weican Wu, Yingjia Guo, Yongqi Hu, Liujiang Yu, Ganbing Shang, Jing Chen, Jianshi Tang, Shaojian Hu, Shoumian Chen, Yuhang Zhao, Xufeng Kou
The data-processing capability of quantum computers relies on CMOS-based cryogenic control and storage systems.
We evaluate our method on the FFHQR dataset and show that our method is effective for common portrait editing tasks, such as retouching, light editing, color transfer and expression editing.
We develop a new method for portrait image editing, which supports fine-grained editing of geometries, colors, lights and shadows using a single neural network model.
In this work, we explore methods to make better use of the multilingual annotation and language agnostic property of KG triples, and present novel knowledge based multilingual language models (KMLMs) trained directly on the knowledge triples.
With the source-language data as well as the translated data, a generation-based multilingual data augmentation method is introduced to further increase diversity by generating synthetic labeled data in multiple languages.
It works by adding lightweight adapter modules to a pretrained language model (PrLM) and updating only the parameters of the adapter modules when learning a downstream task.
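The adapter mechanism described above can be sketched numerically: a frozen pretrained layer is wrapped with a small trainable bottleneck (down-projection, nonlinearity, up-projection, residual connection). This is a minimal illustration of the general adapter idea, not the specific model in the paper; all weight names and dimensions here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_bottleneck = 16, 4

# Frozen pretrained weight, standing in for one PrLM layer.
W_frozen = rng.standard_normal((d_model, d_model))

# Trainable adapter parameters (the only ones updated downstream):
# down-projection, nonlinearity, up-projection, residual connection.
W_down = rng.standard_normal((d_model, d_bottleneck)) * 0.01
W_up = np.zeros((d_bottleneck, d_model))  # zero init: adapter starts as identity

def layer_with_adapter(x):
    h = x @ W_frozen                # frozen pretrained computation
    a = np.maximum(h @ W_down, 0)   # down-project + ReLU
    return h + a @ W_up             # up-project + residual

x = rng.standard_normal((2, d_model))
out = layer_with_adapter(x)

# With W_up initialised to zero, the adapter is a no-op at the start of
# training, so the wrapped layer reproduces the frozen layer exactly.
assert np.allclose(out, x @ W_frozen)
```

The zero-initialised up-projection is a common adapter design choice: it guarantees the pretrained model's behaviour is preserved before any downstream updates are made.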
We operationalize our framework by first proposing a novel sense-aware cross entropy loss to model word senses explicitly.
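The exact formulation of the sense-aware loss is not given in this snippet; as a hedged sketch, one common way to "model word senses explicitly" is to compute softmax cross-entropy over sense labels rather than surface words. The sense inventory and ids below are purely illustrative.

```python
import numpy as np

def cross_entropy(logits, targets):
    """Numerically stable softmax cross-entropy, averaged over tokens."""
    logits = logits - logits.max(axis=-1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

# Hypothetical gold sense id per token (e.g. "bank" -> financial sense).
sense_ids = np.array([0, 2, 1])

# Model scores over a toy 3-sense inventory, one row per token.
logits = np.array([[2.0, 0.1, 0.1],
                   [0.2, 0.1, 3.0],
                   [0.1, 2.5, 0.3]])

loss = cross_entropy(logits, sense_ids)
assert loss > 0.0
```

Because the targets index senses instead of word forms, minimising this loss pushes the model to separate the different meanings of an ambiguous word, which is the intuition behind a sense-aware objective.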
Data augmentation techniques have been widely used to improve machine learning performance as they enhance the generalization capability of models.
Transition-based top-down parsing with pointer networks has achieved state-of-the-art results on multiple parsing tasks while maintaining linear time complexity.