Search Results for author: Qiushi Sun

Found 15 papers, 10 papers with code

A Survey of Neural Code Intelligence: Paradigms, Advances and Beyond

1 code implementation • 21 Mar 2024 • Qiushi Sun, Zhirui Chen, Fangzhi Xu, Kanzhi Cheng, Chang Ma, Zhangyue Yin, Jianing Wang, Chengcheng Han, Renyu Zhu, Shuai Yuan, Qipeng Guo, Xipeng Qiu, Pengcheng Yin, Xiaoli Li, Fei Yuan, Lingpeng Kong, Xiang Li, Zhiyong Wu

Building on our examination of the developmental trajectories, we further investigate the emerging synergies between code intelligence and broader machine intelligence, uncovering new cross-domain opportunities and illustrating the substantial influence of code intelligence across various domains.

KS-Lottery: Finding Certified Lottery Tickets for Multilingual Language Models

no code implementations • 5 Feb 2024 • Fei Yuan, Chang Ma, Shuai Yuan, Qiushi Sun, Lei Li

We further theoretically prove that KS-Lottery can find the certified winning tickets in the embedding layer; fine-tuning on the found parameters is guaranteed to perform as well as full fine-tuning.

Translation
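
As a rough illustration of the selection idea the abstract above describes, the sketch below ranks embedding rows by how much their value distribution shifts under fine-tuning, using a two-sample Kolmogorov-Smirnov statistic, and keeps the most-shifted rows as the candidate ticket. The `select_ticket_rows` helper, the top-k selection, and the toy data are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative sketch (not the paper's exact algorithm): rank embedding rows by
# how much their parameter distribution shifts under fine-tuning, using the
# two-sample Kolmogorov-Smirnov statistic, and keep the most-shifted rows as
# the candidate "winning ticket" to fine-tune.
import numpy as np
from scipy.stats import ks_2samp

def select_ticket_rows(emb_before: np.ndarray, emb_after: np.ndarray, top_k: int) -> np.ndarray:
    """Return indices of the top_k embedding rows with the largest KS shift."""
    stats = np.array([
        ks_2samp(emb_before[i], emb_after[i]).statistic
        for i in range(emb_before.shape[0])
    ])
    return np.argsort(stats)[::-1][:top_k]

# Toy usage: a 1000-token vocabulary with 64-dimensional embeddings, where only
# the first 20 rows actually moved during fine-tuning.
rng = np.random.default_rng(0)
before = rng.normal(size=(1000, 64))
after = before.copy()
after[:20] += rng.normal(scale=0.5, size=(20, 64))
print(sorted(select_ticket_rows(before, after, top_k=20)))
```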

SeeClick: Harnessing GUI Grounding for Advanced Visual GUI Agents

1 code implementation • 17 Jan 2024 • Kanzhi Cheng, Qiushi Sun, Yougang Chu, Fangzhi Xu, Yantao Li, Jianbing Zhang, Zhiyong Wu

In our preliminary study, we have discovered a key challenge in developing visual GUI agents: GUI grounding -- the capacity to accurately locate screen elements based on instructions.
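
To make the grounding notion concrete, here is a minimal sketch of how a predicted click point can be scored against a target element's bounding box; the normalized-coordinate format and the `click_in_box` helper are assumptions for illustration, not the SeeClick benchmark's exact protocol.

```python
# Illustrative sketch of a GUI grounding check: a prediction counts as correct
# if the predicted (x, y) click point falls inside the bounding box of the
# target screen element. Coordinates are assumed to be normalized to [0, 1].
from typing import Tuple

def click_in_box(click: Tuple[float, float],
                 box: Tuple[float, float, float, float]) -> bool:
    """box is (left, top, right, bottom); click is (x, y)."""
    x, y = click
    left, top, right, bottom = box
    return left <= x <= right and top <= y <= bottom

# Toy usage: grounding "click the search button" to a box in the top-right corner.
predicted_click = (0.91, 0.07)
target_box = (0.88, 0.03, 0.97, 0.10)
print(click_in_box(predicted_click, target_box))  # True
```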

Exchange-of-Thought: Enhancing Large Language Model Capabilities through Cross-Model Communication

1 code implementation • 4 Dec 2023 • Zhangyue Yin, Qiushi Sun, Cheng Chang, Qipeng Guo, Junqi Dai, Xuanjing Huang, Xipeng Qiu

Large Language Models (LLMs) have recently made significant strides in complex reasoning tasks through the Chain-of-Thought technique.

Language Modelling • Large Language Model
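
The sketch below is a generic illustration of the cross-model communication idea named in the title, assuming a hypothetical `query_model` helper that stands in for any chat-model API; the round structure and the final majority vote are illustrative choices, not the paper's exact protocol.

```python
# Illustrative sketch of cross-model communication: several models answer a
# question, see each other's latest answers for a few rounds, and the final
# answer is decided by majority vote. `query_model(model_name, prompt)` is a
# hypothetical helper standing in for any chat-model API.
from collections import Counter
from typing import Callable, Dict, List

def exchange_of_thought(question: str,
                        models: List[str],
                        query_model: Callable[[str, str], str],
                        rounds: int = 3) -> str:
    answers: Dict[str, str] = {m: query_model(m, question) for m in models}
    for _ in range(rounds - 1):
        for m in models:
            peer_view = "\n".join(f"{p}: {a}" for p, a in answers.items() if p != m)
            prompt = (f"{question}\n\nOther models answered:\n{peer_view}\n\n"
                      "Reconsider and give your final answer.")
            answers[m] = query_model(m, prompt)
    return Counter(answers.values()).most_common(1)[0][0]
```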

Symbol-LLM: Towards Foundational Symbol-centric Interface For Large Language Models

no code implementations • 15 Nov 2023 • Fangzhi Xu, Zhiyong Wu, Qiushi Sun, Siyu Ren, Fei Yuan, Shuai Yuan, Qika Lin, Yu Qiao, Jun Liu

Although Large Language Models (LLMs) demonstrate remarkable ability in processing and generating human-like text, they do have limitations when it comes to comprehending and expressing world knowledge that extends beyond the boundaries of natural language (e.g., chemical molecular formulas).

World Knowledge

Uncertainty-aware Parameter-Efficient Self-training for Semi-supervised Language Understanding

1 code implementation • 19 Oct 2023 • Jianing Wang, Qiushi Sun, Nuo Chen, Chengyu Wang, Jun Huang, Ming Gao, Xiang Li

The recent success of large pre-trained language models (PLMs) heavily hinges on massive labeled data; this reliance typically leads to inferior performance in low-resource scenarios.
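
As a generic illustration of uncertainty-aware self-training (not the paper's exact method), the sketch below averages class probabilities over several dropout-enabled forward passes and keeps only the pseudo-labels whose predictive entropy falls below a threshold.

```python
# Generic illustration of uncertainty-aware pseudo-label filtering: average the
# class probabilities from several dropout-enabled forward passes and keep only
# the unlabeled examples whose predictive entropy is below a threshold.
import numpy as np

def filter_pseudo_labels(prob_samples: np.ndarray, max_entropy: float = 0.3):
    """prob_samples has shape (n_dropout_runs, n_examples, n_classes)."""
    mean_probs = prob_samples.mean(axis=0)
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=-1)
    keep = np.flatnonzero(entropy < max_entropy)
    return keep, mean_probs[keep].argmax(axis=-1)

# Toy usage: 4 dropout runs over 5 examples with 3 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 5, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
kept_idx, pseudo_labels = filter_pseudo_labels(probs, max_entropy=1.0)
print(kept_idx, pseudo_labels)
```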

Corex: Pushing the Boundaries of Complex Reasoning through Multi-Model Collaboration

1 code implementation • 30 Sep 2023 • Qiushi Sun, Zhangyue Yin, Xiang Li, Zhiyong Wu, Xipeng Qiu, Lingpeng Kong

Large Language Models (LLMs) are evolving at an unprecedented pace and have exhibited considerable capability in the realm of natural language processing (NLP) with world knowledge.

World Knowledge

Exchanging-based Multimodal Fusion with Transformer

1 code implementation • 5 Sep 2023 • Renyu Zhu, Chengcheng Han, Yong Qian, Qiushi Sun, Xiang Li, Ming Gao, Xuezhi Cao, Yunsen Xian

To solve these issues, in this paper we propose MuSE, a novel exchanging-based multimodal fusion model for text-vision fusion based on the Transformer.

Image Captioning • Multimodal Sentiment Analysis • +3

Boosting Language Models Reasoning with Chain-of-Knowledge Prompting

no code implementations • 10 Jun 2023 • Jianing Wang, Qiushi Sun, Nuo Chen, Xiang Li, Ming Gao

To mitigate this brittleness, we propose a novel Chain-of-Knowledge (CoK) prompting method, which elicits LLMs to generate explicit pieces of knowledge evidence in the form of structured triples.

Arithmetic Reasoning
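
A minimal sketch of what a Chain-of-Knowledge style prompt could look like; the exemplar wording and the `build_cok_prompt` helper are illustrative assumptions rather than the paper's exact prompt.

```python
# Illustrative sketch of Chain-of-Knowledge style prompting (wording is
# illustrative, not taken from the paper): the exemplar asks the model to first
# list knowledge evidence as (subject, relation, object) triples and only then
# state the answer.
def build_cok_prompt(question: str) -> str:
    exemplar = (
        "Q: Is the capital of France on the Seine?\n"
        "Evidence triples:\n"
        "(Paris, is capital of, France)\n"
        "(Paris, is located on, Seine)\n"
        "A: Yes\n\n"
    )
    return exemplar + f"Q: {question}\nEvidence triples:\n"

print(build_cok_prompt("Did Aristotle use a laptop?"))
```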

Do Large Language Models Know What They Don't Know?

1 code implementation • 29 May 2023 • Zhangyue Yin, Qiushi Sun, Qipeng Guo, Jiawen Wu, Xipeng Qiu, Xuanjing Huang

Large language models (LLMs) have a wealth of knowledge that allows them to excel in various Natural Language Processing (NLP) tasks.

In-Context Learning
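
A minimal sketch of one way to probe the self-knowledge question posed in the title, assuming a hypothetical `ask_model` helper: count how often a model declines to answer questions with no known answer. The refusal markers below are illustrative and not taken from the paper's evaluation.

```python
# Illustrative sketch (not the paper's protocol): probe self-knowledge by
# counting how often a model declines to answer questions with no known answer.
# `ask_model` is a hypothetical helper standing in for any chat-model API.
from typing import Callable, List

REFUSAL_MARKERS = ("i don't know", "i do not know", "unknown", "cannot be determined")

def refusal_rate(unanswerable: List[str], ask_model: Callable[[str], str]) -> float:
    refusals = sum(
        any(marker in ask_model(q).lower() for marker in REFUSAL_MARKERS)
        for q in unanswerable
    )
    return refusals / len(unanswerable)
```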

TransCoder: Towards Unified Transferable Code Representation Learning Inspired by Human Skills

no code implementations • 23 May 2023 • Qiushi Sun, Nuo Chen, Jianing Wang, Xiang Li, Ming Gao

To tackle the issue, in this paper, we present TransCoder, a unified Transferable fine-tuning strategy for Code representation learning.

Clone Detection • Code Summarization • +2

HugNLP: A Unified and Comprehensive Library for Natural Language Processing

1 code implementation • 28 Feb 2023 • Jianing Wang, Nuo Chen, Qiushi Sun, Wenkang Huang, Chengyu Wang, Ming Gao

In this paper, we introduce HugNLP, a unified and comprehensive library for natural language processing (NLP) built on the prevalent backend of HuggingFace Transformers. It is designed for NLP researchers to easily utilize off-the-shelf algorithms and develop novel methods with user-defined models and tasks in real-world scenarios.
