Search Results for author: Cong Liao

Found 6 papers, 4 papers with code

Unifying the Perspectives of NLP and Software Engineering: A Survey on Language Models for Code

1 code implementation • 14 Nov 2023 • Ziyin Zhang, Chaoyu Chen, Bingchang Liu, Cong Liao, Zi Gong, Hang Yu, Jianguo Li, Rui Wang

In this work we systematically review the recent advancements in code processing with language models, covering 50+ models, 30+ evaluation tasks, 170+ datasets, and 800 related works.

MFTCoder: Boosting Code LLMs with Multitask Fine-Tuning

1 code implementation • 4 Nov 2023 • Bingchang Liu, Chaoyu Chen, Cong Liao, Zi Gong, Huan Wang, Zhichao Lei, Ming Liang, Dajun Chen, Min Shen, Hailian Zhou, Hang Yu, Jianguo Li

Code LLMs have emerged as a specialized research field, with remarkable studies dedicated to enhancing models' coding capabilities through fine-tuning on pre-trained models.

Multi-Task Learning

A Meta Reinforcement Learning Approach for Predictive Autoscaling in the Cloud

1 code implementation • 31 May 2022 • Siqiao Xue, Chao Qu, Xiaoming Shi, Cong Liao, Shiyi Zhu, Xiaoyu Tan, Lintao Ma, Shiyu Wang, Shijun Wang, Yun Hu, Lei Lei, Yangfei Zheng, Jianguo Li, James Zhang

Predictive autoscaling (autoscaling with workload forecasting) is an important mechanism that supports autonomous adjustment of computing resources in accordance with fluctuating workload demands in the Cloud.

Decision Making • Management +3
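As the abstract describes, predictive autoscaling forecasts upcoming workload and adjusts computing resources ahead of demand. A minimal sketch of that idea (not the paper's meta reinforcement learning method; the function names, the naive trend forecaster, and the per-replica capacity and headroom figures are all assumptions for illustration):

```python
import math

def forecast_workload(history):
    # Naive forecast: assume the most recent trend continues.
    # A stand-in for a learned forecaster; any model could be plugged in.
    if len(history) < 2:
        return history[-1]
    trend = history[-1] - history[-2]
    return max(0.0, history[-1] + trend)

def plan_replicas(history, capacity_per_replica=100.0, headroom=1.2):
    """Choose an instance count from the forecast, with safety headroom."""
    predicted = forecast_workload(history)
    return max(1, math.ceil(predicted * headroom / capacity_per_replica))

# Workload rising 200 -> 300 -> 400 req/s: forecast 500, plan ceil(600/100) = 6
print(plan_replicas([200.0, 300.0, 400.0]))  # -> 6
```

The point of the forecasting step is that scaling decisions are made before the load arrives, rather than reacting to it after the fact.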

Pyraformer: Low-Complexity Pyramidal Attention for Long-Range Time Series Modeling and Forecasting

2 code implementations • ICLR 2022 • Shizhan Liu, Hang Yu, Cong Liao, Jianguo Li, Weiyao Lin, Alex X. Liu, Schahram Dustdar

Accurate prediction of the future given the past based on time series data is of paramount importance, since it opens the door for decision making and risk management ahead of time.

Decision Making • Management +2
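The task the abstract refers to, predicting the future given past time series data, can be illustrated with a toy one-step-ahead baseline (a generic sketch under assumed names, not Pyraformer's pyramidal-attention model):

```python
def one_step_forecast(series, window=3):
    """Predict the next value as the mean of the last `window` observations."""
    recent = series[-window:]
    return sum(recent) / len(recent)

# Past observations; the forecast feeds downstream decisions such as capacity planning.
history = [10.0, 12.0, 11.0, 13.0, 12.0]
print(one_step_forecast(history))  # mean of [11.0, 13.0, 12.0] -> 12.0
```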

Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation

no code implementations • 30 Aug 2018 • Cong Liao, Haoti Zhong, Anna Squicciarini, Sencun Zhu, David Miller

Such popularity, however, may attract attackers to exploit the vulnerabilities of the deployed deep learning models and launch attacks against security-sensitive applications.

Data Poisoning • General Classification +1
