no code implementations • 5 Feb 2025 • Dongliang Zhou, Haijun Zhang, Kai Yang, Linlin Liu, Han Yan, Xiaofei Xu, Zhao Zhang, Shuicheng Yan
The field of fashion compatibility learning has attracted considerable attention from both academia and industry in recent years.
no code implementations • 18 Apr 2024 • Lixing Tan, Shuang Song, Kangneng Zhou, Chengbo Duan, Lanying Wang, Huayang Ren, Linlin Liu, Wei zhang, Ruoxiu Xiao
In addition, we impose a supervised process by computing the similarity between the computed real DRR images and the synthesized DRR images.
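As a rough illustration of such a similarity-based supervision term, the sketch below compares a real DRR with a synthesized DRR using an L1 term plus a global correlation term; the function name and the specific similarity measure are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch of a similarity-based supervision term between a real DRR
# and a synthesized DRR. The L1-plus-correlation combination is an assumption.
import torch
import torch.nn.functional as F

def drr_similarity_loss(real_drr: torch.Tensor, synth_drr: torch.Tensor) -> torch.Tensor:
    """Penalize dissimilarity between real and synthesized DRR images of shape (B, 1, H, W)."""
    l1 = F.l1_loss(synth_drr, real_drr)
    # Pearson correlation as a simple global similarity term (hypothetical choice).
    r = real_drr.flatten(1) - real_drr.flatten(1).mean(dim=1, keepdim=True)
    s = synth_drr.flatten(1) - synth_drr.flatten(1).mean(dim=1, keepdim=True)
    corr = (r * s).sum(dim=1) / (r.norm(dim=1) * s.norm(dim=1) + 1e-8)
    return l1 + (1.0 - corr).mean()
```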
1 code implementation • 20 Dec 2022 • Bosheng Ding, Chengwei Qin, Linlin Liu, Yew Ken Chia, Shafiq Joty, Boyang Li, Lidong Bing
In this paper, we evaluate the performance of GPT-3 as a data annotator by comparing it with traditional data annotation methods and analyzing its output on a range of tasks.
1 code implementation • 16 Nov 2022 • Linlin Liu, Xingxuan Li, Megh Thakkar, Xin Li, Shafiq Joty, Luo Si, Lidong Bing
Due to their huge number of parameters, fine-tuning pretrained language models (PLMs) is prone to overfitting in low-resource scenarios.
no code implementations • 10 Nov 2022 • Zhidong Tang, Zewei Wang, Yumeng Yuan, Chang He, Xin Luo, Ao Guo, Renhe Chen, Yongqi Hu, Longfei Yang, Chengwei Cao, Linlin Liu, Liujiang Yu, Ganbing Shang, Yongfeng Cao, Shoumian Chen, Yuhang Zhao, Shaojian Hu, Xufeng Kou
Furthermore, by incorporating the Cryo-CMOS compact model into the process design kit (PDK), a cryogenic 4 Kb SRAM, a 5-bit flash ADC and an 8-bit current-steering DAC are designed, and their performance is investigated and optimized on an EDA-compatible platform, laying a solid foundation for large-scale cryogenic IC design.
no code implementations • 24 May 2022 • QiAn Fu, Linlin Liu, Fei Hou, Ying He
We evaluate our method on the FFHQR dataset and show that our method is effective for common portrait editing tasks, such as retouching, light editing, color transfer and expression editing.
no code implementations • 4 Apr 2022 • Linlin Liu, QiAn Fu, Fei Hou, Ying He
We develop a new method for portrait image editing, which supports fine-grained editing of geometries, colors, lights and shadows using a single neural network model.
1 code implementation • 22 Nov 2021 • Linlin Liu, Xin Li, Ruidan He, Lidong Bing, Shafiq Joty, Luo Si
In this work, we explore methods to make better use of the multilingual annotation and the language-agnostic property of KG triples, and present novel knowledge-based multilingual language models (KMLMs) trained directly on the knowledge triples.
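To make the idea of training directly on knowledge triples concrete, the sketch below linearizes multilingual (subject, relation, object) triples into plain-text sequences that could feed a masked-language-modeling objective; the template and the sample triples are illustrative assumptions, not the paper's exact format.

```python
# Hypothetical linearization of multilingual KG triples into pretraining text.
from typing import Tuple

def linearize_triple(triple: Tuple[str, str, str]) -> str:
    """Turn a (subject, relation, object) triple into a pretraining sentence."""
    subj, rel, obj = triple
    return f"{subj} {rel} {obj} ."

triples = [
    ("Paris", "capital of", "France"),  # English surface forms
    ("巴黎", "首都", "法国"),             # the same triple with Chinese labels
]
corpus = [linearize_triple(t) for t in triples]
print(corpus)
```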
no code implementations • ACL 2021 • Linlin Liu, Bosheng Ding, Lidong Bing, Shafiq Joty, Luo Si, Chunyan Miao
With the source-language data as well as the translated data, a generation-based multilingual data augmentation method is introduced to further increase diversity by generating synthetic labeled data in multiple languages.
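One common way to realize generation-based augmentation for labeled data is to linearize label tags into the sentence so that a generator can produce new labeled sentences directly; the sketch below shows such a linearization step. The tag format and the example sentence are assumptions for illustration, not necessarily the paper's exact scheme.

```python
# Hedged sketch of labeled-sequence linearization for generation-based
# data augmentation: entity tags are spliced into the token stream so a
# language model can generate new labeled sentences in one pass.
def linearize_labeled_sentence(tokens, tags):
    """Interleave BIO tags with tokens, leaving 'O' tokens untouched."""
    out = []
    for tok, tag in zip(tokens, tags):
        out.append(tok if tag == "O" else f"{tag} {tok}")
    return " ".join(out)

tokens = ["John", "lives", "in", "Paris", "."]
tags = ["B-PER", "O", "O", "B-LOC", "O"]
print(linearize_labeled_sentence(tokens, tags))
# -> "B-PER John lives in B-LOC Paris ."
```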
no code implementations • ACL 2021 • Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jia-Wei Low, Lidong Bing, Luo Si
It works by adding lightweight adapter modules to a pretrained language model (PrLM) and updating only the parameters of the adapter modules when learning on a downstream task.
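A minimal sketch of a bottleneck adapter of this kind is shown below: a down-projection, a nonlinearity, an up-projection and a residual connection, with the PrLM frozen so that only the adapter parameters are trained. The hidden sizes, activation and placement are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal bottleneck-adapter sketch; sizes and placement are assumptions.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the PrLM representation intact at init.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# Typical usage (plm is any pretrained transformer, shown as a placeholder):
# for p in plm.parameters():
#     p.requires_grad = False          # freeze the PrLM
# adapter = Adapter()
# optimizer = torch.optim.AdamW(adapter.parameters(), lr=1e-4)
```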
1 code implementation • COLING 2022 • Linlin Liu, Thien Hai Nguyen, Shafiq Joty, Lidong Bing, Luo Si
We operationalize our framework by first proposing a novel sense-aware cross entropy loss to model word senses explicitly.
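A hedged sketch of what a sense-aware cross-entropy term could look like is given below: alongside the usual token prediction, the model predicts a sense label from a fixed sense inventory, and the two cross-entropy terms are combined. The joint weighting and the sense inventory are assumptions for illustration, not the paper's exact loss.

```python
# Illustrative sense-aware cross-entropy: token prediction plus sense prediction.
import torch
import torch.nn as nn

def sense_aware_loss(token_logits: torch.Tensor,   # (B, V) vocabulary logits
                     sense_logits: torch.Tensor,   # (B, S) sense-inventory logits
                     token_targets: torch.Tensor,  # (B,) gold token ids
                     sense_targets: torch.Tensor,  # (B,) gold sense ids
                     alpha: float = 0.5) -> torch.Tensor:
    ce = nn.CrossEntropyLoss()
    return ce(token_logits, token_targets) + alpha * ce(sense_logits, sense_targets)
```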
no code implementations • EMNLP 2020 • Bosheng Ding, Linlin Liu, Lidong Bing, Canasai Kruengkrai, Thien Hai Nguyen, Shafiq Joty, Luo Si, Chunyan Miao
Data augmentation techniques have been widely used to improve machine learning performance as they enhance the generalization capability of models.
1 code implementation • IJCNLP 2019 • Linlin Liu, Xiang Lin, Shafiq Joty, Simeng Han, Lidong Bing
Transition-based top-down parsing with pointer networks has achieved state-of-the-art results in multiple parsing tasks while running in linear time.
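To illustrate the pointing step at the heart of such parsers, the sketch below scores the encoder states of the current span against the decoder state and "points" to a split position; the dot-product scoring and the tensor shapes are assumptions for illustration, not the paper's exact architecture.

```python
# Hedged sketch of one pointer-network decoding step in top-down parsing.
import torch

def point_split(decoder_state: torch.Tensor, encoder_states: torch.Tensor) -> int:
    """decoder_state: (H,); encoder_states: (T, H) for tokens in the current span."""
    scores = encoder_states @ decoder_state   # (T,) dot-product attention scores
    return int(torch.argmax(scores))          # index of the predicted split point
```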