Search Results for author: Chang Ma

Found 10 papers, 9 papers with code

A Survey of Neural Code Intelligence: Paradigms, Advances and Beyond

1 code implementation • 21 Mar 2024 • Qiushi Sun, Zhirui Chen, Fangzhi Xu, Kanzhi Cheng, Chang Ma, Zhangyue Yin, Jianing Wang, Chengcheng Han, Renyu Zhu, Shuai Yuan, Qipeng Guo, Xipeng Qiu, Pengcheng Yin, XiaoLi Li, Fei Yuan, Lingpeng Kong, Xiang Li, Zhiyong Wu

Building on our examination of the developmental trajectories, we further investigate the emerging synergies between code intelligence and broader machine intelligence, uncovering new cross-domain opportunities and illustrating the substantial influence of code intelligence across various domains.

Empowering Large Language Model Agents through Action Learning

1 code implementation • 24 Feb 2024 • Haiteng Zhao, Chang Ma, Guoyin Wang, Jing Su, Lingpeng Kong, Jingjing Xu, Zhi-Hong Deng, Hongxia Yang

Large Language Model (LLM) Agents have recently garnered increasing interest, yet they are limited in their ability to learn from trial and error, a key element of intelligent behavior.

Language Modelling • Large Language Model

KS-Lottery: Finding Certified Lottery Tickets for Multilingual Language Models

no code implementations • 5 Feb 2024 • Fei Yuan, Chang Ma, Shuai Yuan, Qiushi Sun, Lei LI

We further theoretically prove that KS-Lottery can find the certified winning tickets in the embedding layer; fine-tuning on the found parameters is guaranteed to perform as well as full fine-tuning.

Translation
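
The mechanism described above lends itself to a short illustration: identify a small set of winning-ticket rows in the embedding layer, then fine-tune only those parameters. Below is a minimal PyTorch sketch of the selective fine-tuning step, assuming a HuggingFace-style `get_input_embeddings` accessor; the selection criterion itself (presumably the Kolmogorov-Smirnov test the method is named after) is abstracted into a given `winning_rows` tensor, so this is an illustration of the pattern, not the paper's code.

```python
import torch
import torch.nn as nn

def finetune_only_winning_rows(model: nn.Module, winning_rows: torch.Tensor):
    """Freeze the whole model, then let gradients flow only into the
    selected embedding rows (the 'winning tickets')."""
    for param in model.parameters():
        param.requires_grad = False

    embedding = model.get_input_embeddings()  # HuggingFace-style accessor
    embedding.weight.requires_grad = True

    # Zero the gradient of every non-winning row, so the optimizer
    # only ever updates the selected rows.
    mask = torch.zeros_like(embedding.weight)
    mask[winning_rows] = 1.0
    embedding.weight.register_hook(lambda grad: grad * mask)
```

The rest of the training loop is unchanged from full fine-tuning; only the selected embedding rows receive nonzero gradients.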

AgentBoard: An Analytical Evaluation Board of Multi-turn LLM Agents

2 code implementations • 24 Jan 2024 • Chang Ma, Junlei Zhang, Zhihao Zhu, Cheng Yang, Yujiu Yang, Yaohui Jin, Zhenzhong Lan, Lingpeng Kong, Junxian He

Evaluating large language models (LLMs) as general-purpose agents is essential for understanding their capabilities and facilitating their integration into practical applications.

Benchmarking

A Challenging Benchmark for Low-Resource Learning

1 code implementation • 7 Mar 2023 • Yudong Wang, Chang Ma, Qingxiu Dong, Lingpeng Kong, Jingjing Xu

Experiments on a wide range of models show that neural networks, even pre-trained language models, suffer sharp performance drops on our benchmark, demonstrating its effectiveness in evaluating the weaknesses of neural networks.

Retrieved Sequence Augmentation for Protein Representation Learning

1 code implementation • 24 Feb 2023 • Chang Ma, Haiteng Zhao, Lin Zheng, Jiayi Xin, Qintong Li, Lijun Wu, Zhihong Deng, Yang Lu, Qi Liu, Lingpeng Kong

RSA links a query protein sequence to a set of database sequences with similar structures or properties and combines these sequences for downstream prediction.

Property Prediction • Representation Learning +1
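
The retrieve-then-combine pattern in the snippet above can be sketched generically. In the sketch below, dense embeddings for the query and the database are assumed to be precomputed, and cosine similarity, k=16, and mean-pooling fusion are illustrative choices rather than the paper's architecture.

```python
import torch
import torch.nn.functional as F

def retrieve_and_combine(query_emb: torch.Tensor,
                         db_embs: torch.Tensor, k: int = 16) -> torch.Tensor:
    """Retrieve the k database sequences most similar to the query and
    combine them with the query for a downstream prediction head."""
    # Cosine similarity between the query (d,) and every entry (N, d).
    sims = F.cosine_similarity(query_emb.unsqueeze(0), db_embs, dim=-1)
    neighbors = db_embs[sims.topk(k).indices]          # (k, d)

    # Combine query and neighbors; a real model would typically encode
    # this set jointly (e.g., with attention) instead of mean-pooling.
    combined = torch.cat([query_emb.unsqueeze(0), neighbors], dim=0)
    return combined.mean(dim=0)
```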

PEER: A Comprehensive and Multi-Task Benchmark for Protein Sequence Understanding

1 code implementation • 5 Jun 2022 • Minghao Xu, Zuobai Zhang, Jiarui Lu, Zhaocheng Zhu, Yangtian Zhang, Chang Ma, Runcheng Liu, Jian Tang

However, the lack of a standard benchmark for evaluating the performance of different methods hinders the progress of deep learning in this field.

Feature Engineering • Multi-Task Learning +2

Certified Robustness Against Natural Language Attacks by Causal Intervention

1 code implementation • 24 May 2022 • Haiteng Zhao, Chang Ma, Xinshuai Dong, Anh Tuan Luu, Zhi-Hong Deng, Hanwang Zhang

Deep learning models have achieved great success in many fields, yet they are vulnerable to adversarial examples.

TorchDrug: A Powerful and Flexible Machine Learning Platform for Drug Discovery

1 code implementation • 16 Feb 2022 • Zhaocheng Zhu, Chence Shi, Zuobai Zhang, Shengchao Liu, Minghao Xu, Xinyu Yuan, Yangtian Zhang, Junkun Chen, Huiyu Cai, Jiarui Lu, Chang Ma, Runcheng Liu, Louis-Pascal Xhonneux, Meng Qu, Jian Tang

However, the lack of domain knowledge (e.g., which tasks to work on), standard benchmarks, and data preprocessing pipelines is the main obstacle for machine learning researchers working in this domain.

BIG-bench Machine Learning • Drug Discovery +2
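
Since TorchDrug is a library rather than a single model, a usage sketch is more informative than pseudocode. The following mirrors the library's documented quickstart pattern for molecular property prediction; exact argument names and defaults may differ across versions, so treat it as a sketch rather than version-pinned code.

```python
import torch
from torchdrug import core, datasets, models, tasks

# Molecular property-prediction dataset (downloaded on first use).
dataset = datasets.ClinTox("~/molecule-datasets/")
train_set, valid_set, test_set = dataset.split()

# A graph neural network over molecular graphs.
model = models.GIN(input_dim=dataset.node_feature_dim,
                   hidden_dims=[256, 256, 256, 256],
                   short_cut=True, batch_norm=True)

# Wrap the model in a property-prediction task with standard metrics.
task = tasks.PropertyPrediction(model, task=dataset.tasks,
                                criterion="bce", metric=("auprc", "auroc"))

optimizer = torch.optim.Adam(task.parameters(), lr=1e-3)
solver = core.Engine(task, train_set, valid_set, test_set, optimizer,
                     batch_size=1024)
solver.train(num_epoch=10)
solver.evaluate("valid")
```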

Domain Adaptation via Maximizing Surrogate Mutual Information

1 code implementation • 23 Oct 2021 • Haiteng Zhao, Chang Ma, Qinyu Chen, Zhi-Hong Deng

In the framework, a surrogate joint distribution models the underlying joint distribution of the unlabeled target domain.

Transfer Learning • Unsupervised Domain Adaptation
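
To make the quantity in the snippet above concrete: when a classifier's predictive distribution on unlabeled target inputs plays the role of the surrogate, the mutual information between inputs and surrogate labels decomposes as I(X; Y) = H(Y) - H(Y|X), with both terms computable from classifier outputs. The sketch below shows that standard information-maximization form; it is a generic illustration, not necessarily the bound derived in the paper.

```python
import torch
import torch.nn.functional as F

def surrogate_mutual_information(logits: torch.Tensor) -> torch.Tensor:
    """I(X; Y) = H(Y) - H(Y|X), estimated on a batch of unlabeled
    target inputs from the classifier's predictive distribution."""
    probs = F.softmax(logits, dim=-1)          # q(y | x), shape (B, C)
    marginal = probs.mean(dim=0)               # batch estimate of q(y)

    h_marginal = -(marginal * torch.log(marginal + 1e-8)).sum()
    h_conditional = -(probs * torch.log(probs + 1e-8)).sum(-1).mean()
    return h_marginal - h_conditional          # maximize on target batches
```

Maximizing this objective pushes predictions to be confident on each target example while staying diverse across the batch.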
