Search Results for author: Dawei Gao

Found 23 papers, 11 papers with code

Do we Really Need Visual Instructions? Towards Visual Instruction-Free Fine-tuning for Large Vision-Language Models

no code implementations17 Feb 2025 Zikang Liu, Kun Zhou, Wayne Xin Zhao, Dawei Gao, Yaliang Li, Ji-Rong Wen

Despite this success, because visual instructions require images as input, they leave a gap in inheriting the task-solving capabilities of the backbone LLMs and make it costly to collect a large-scale dataset.

Visual Instruction Following, Visual Reasoning

SEM-CLIP: Precise Few-Shot Learning for Nanoscale Defect Detection in Scanning Electron Microscope Image

no code implementations15 Feb 2025 Qian Jin, Yuqi Jiang, Xudong Lu, Yumeng Liu, Yining Chen, Dawei Gao, Qi Sun, Cheng Zhuo

In the field of integrated circuit manufacturing, the detection and classification of nanoscale wafer defects are critical for subsequent root cause analysis and yield enhancement.

Defect Detection, Feature Engineering, +2

KIMAs: A Configurable Knowledge Integrated Multi-Agent System

no code implementations13 Feb 2025 Zitao Li, Fei Wei, Yuexiang Xie, Dawei Gao, Weirui Kuang, Zhijian Ma, Bingchen Qian, Yaliang Li, Bolin Ding

Knowledge-intensive conversations supported by large language models (LLMs) have become one of the most popular and helpful applications, assisting people across many domains.

Management, RAG, +1

GenSim: A General Social Simulation Platform with Large Language Model based Agents

1 code implementation6 Oct 2024 Jiakai Tang, Heyang Gao, Xuchen Pan, Lei Wang, Haoran Tan, Dawei Gao, Yushuo Chen, Xu Chen, Yankai Lin, Yaliang Li, Bolin Ding, Jingren Zhou, Jun Wang, Ji-Rong Wen

With the rapid advancement of large language models (LLMs), recent years have witnessed many promising studies on leveraging LLM-based agents to simulate human social behavior.

Language Modeling, Language Modelling, +1

Very Large-Scale Multi-Agent Simulation in AgentScope

1 code implementation25 Jul 2024 Xuchen Pan, Dawei Gao, Yuexiang Xie, Yushuo Chen, Zhewei Wei, Yaliang Li, Bolin Ding, Ji-Rong Wen, Jingren Zhou

Recent advances in large language models (LLMs) have opened new avenues for applying multi-agent systems in very large-scale simulations.

Less is More: High-value Data Selection for Visual Instruction Tuning

no code implementations14 Mar 2024 Zikang Liu, Kun Zhou, Wayne Xin Zhao, Dawei Gao, Yaliang Li, Ji-Rong Wen

To investigate this issue, we conduct a series of empirical studies, which reveal significant redundancy within visual instruction datasets and show that greatly reducing the number of instructions from several tasks does not even affect performance.

Data-CUBE: Data Curriculum for Instruction-based Sentence Representation Learning

no code implementations7 Jan 2024 Yingqian Min, Kun Zhou, Dawei Gao, Wayne Xin Zhao, He Hu, Yaliang Li

Recently, multi-task instruction tuning has been applied to sentence representation learning, endowing models with the capability to generate specific representations under the guidance of task instructions and exhibiting strong generalization to new tasks.

Representation Learning, Sentence, +1

Message Passing Based Block Sparse Signal Recovery for DOA Estimation Using Large Arrays

no code implementations1 Sep 2023 Yiwen Mao, Dawei Gao, Qinghua Guo, Ming Jin

This work deals with direction of arrival (DOA) estimation with a large antenna array.

FederatedScope-LLM: A Comprehensive Package for Fine-tuning Large Language Models in Federated Learning

1 code implementation1 Sep 2023 Weirui Kuang, Bingchen Qian, Zitao Li, Daoyuan Chen, Dawei Gao, Xuchen Pan, Yuexiang Xie, Yaliang Li, Bolin Ding, Jingren Zhou

When several entities have similar tasks of interest but their data cannot be shared because of privacy concerns and regulations, federated learning (FL) is a mainstream solution for leveraging the data of the different entities.

Benchmarking, Federated Learning, +2
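As a quick illustration of the federated averaging step that underlies the work above, here is a minimal sketch of the classic FedAvg aggregation. All names are hypothetical and this is not the FederatedScope-LLM API:

```python
# Minimal FedAvg sketch: each client trains locally, the server averages
# parameters weighted by client dataset size. Hypothetical helper, not
# the FederatedScope-LLM API.
from typing import Dict, List
import numpy as np

def fedavg(client_params: List[Dict[str, np.ndarray]],
           client_sizes: List[int]) -> Dict[str, np.ndarray]:
    """Weighted average of client model parameters."""
    total = sum(client_sizes)
    return {
        name: sum(p[name] * (n / total)
                  for p, n in zip(client_params, client_sizes))
        for name in client_params[0]
    }
```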

Text-to-SQL Empowered by Large Language Models: A Benchmark Evaluation

1 code implementation29 Aug 2023 Dawei Gao, Haibin Wang, Yaliang Li, Xiuyu Sun, Yichen Qian, Bolin Ding, Jingren Zhou

Our explorations highlight the potential of open-source LLMs in Text-to-SQL, as well as the advantages and disadvantages of supervised fine-tuning.

Prompt Engineering, Text-To-SQL
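To make the task above concrete, here is a minimal zero-shot Text-to-SQL prompt in the general style such benchmarks evaluate; the exact prompt formats compared in the paper differ, and this helper is purely illustrative:

```python
# Illustrative zero-shot Text-to-SQL prompt builder; not the paper's
# actual prompt templates.
def build_prompt(schema: str, question: str) -> str:
    return (
        "Given the following database schema, write a SQLite query "
        "that answers the question. Return only the SQL.\n\n"
        f"Schema:\n{schema}\n\nQuestion: {question}\nSQL:"
    )

print(build_prompt(
    "CREATE TABLE singer(id INT, name TEXT, country TEXT);",
    "How many singers are from France?",
))
```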

Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study

1 code implementation16 Jul 2023 Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen

Unlike previous studies focused on overall performance, this work aims to investigate the impact of quantization on emergent abilities, which are important characteristics that distinguish LLMs from small language models.

In-Context Learning, Instruction Following, +1
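For context on the entry above, post-training weight quantization in its simplest form rounds weights to a low-bit grid. A minimal symmetric INT8 sketch, illustrative only and not the quantization schemes studied in the paper:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor round-to-nearest quantization to int8."""
    scale = max(np.abs(w).max() / 127.0, 1e-12)  # guard against all-zero w
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale
```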

Efficient Personalized Federated Learning via Sparse Model-Adaptation

2 code implementations4 May 2023 Daoyuan Chen, Liuyi Yao, Dawei Gao, Bolin Ding, Yaliang Li

To overcome these challenges, we propose a novel approach named pFedGate for efficient personalized FL by adaptively and efficiently learning sparse local models.

Model, Personalized Federated Learning
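In the spirit of the gated sparsification described above, a toy sketch under assumed design choices; pFedGate's actual gating mechanism is more elaborate:

```python
import torch
import torch.nn as nn

class GatedLinear(nn.Module):
    """Linear layer whose outputs are masked by a learnable gate,
    yielding a client-specific sparse sub-model after thresholding."""
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # One gate logit per output unit; sigmoid relaxes the 0/1 mask
        # so the gate stays differentiable during local training.
        self.gate_logits = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mask = torch.sigmoid(self.gate_logits)
        return self.linear(x) * mask
```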

FS-Real: Towards Real-World Cross-Device Federated Learning

no code implementations23 Mar 2023 Daoyuan Chen, Dawei Gao, Yuexiang Xie, Xuchen Pan, Zitao Li, Yaliang Li, Bolin Ding, Jingren Zhou

Federated Learning (FL) aims to train high-quality models in collaboration with distributed clients without uploading their local data, and has attracted increasing attention in both academia and industry.

Federated Learning

Hyper-Parameter Auto-Tuning for Sparse Bayesian Learning

no code implementations9 Nov 2022 Dawei Gao, Qinghua Guo, Ming Jin, Guisheng Liao, Yonina C. Eldar

Choosing the values of hyper-parameters in sparse Bayesian learning (SBL) can significantly impact performance.

Signal Detection in MIMO Systems with Hardware Imperfections: Message Passing on Neural Networks

no code implementations8 Oct 2022 Dawei Gao, Qinghua Guo, Guisheng Liao, Yonina C. Eldar, Yonghui Li, Yanguang Yu, Branka Vucetic

Modelling the MIMO system with NNs enables the design of NN architectures based on the signal flow of the MIMO system, minimizing the number of NN layers and parameters, which is crucial for efficient training with limited pilot signals.

Bayesian Inference
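To illustrate the design principle above of mirroring the signal flow rather than stacking generic layers, here is a hypothetical toy receiver, not the paper's message-passing architecture:

```python
import torch
import torch.nn as nn

class SignalFlowReceiver(nn.Module):
    """Toy receiver whose structure mirrors y = f(Hx) + n: a small shared
    nonlinearity compensator per receive antenna plus one linear stage
    for channel (un)mixing, instead of a deep generic MLP."""
    def __init__(self, n_rx: int, n_tx: int):
        super().__init__()
        # Tiny compensator applied to each antenna's scalar sample.
        self.nonlin_comp = nn.Sequential(
            nn.Linear(1, 8), nn.Tanh(), nn.Linear(8, 1)
        )
        # One linear layer mirrors the (inverse) channel mixing.
        self.mixing = nn.Linear(n_rx, n_tx, bias=False)

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # y: (batch, n_rx) received samples
        z = self.nonlin_comp(y.unsqueeze(-1)).squeeze(-1)
        return self.mixing(z)
```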

pFL-Bench: A Comprehensive Benchmark for Personalized Federated Learning

1 code implementation8 Jun 2022 Daoyuan Chen, Dawei Gao, Weirui Kuang, Yaliang Li, Bolin Ding

Personalized Federated Learning (pFL), which utilizes and deploys distinct local models, has gained increasing attention in recent years due to its success in handling the statistical heterogeneity of FL clients.

Fairness, Personalized Federated Learning

A Benchmark for Federated Hetero-Task Learning

1 code implementation7 Jun 2022 Liuyi Yao, Dawei Gao, Zhen Wang, Yuexiang Xie, Weirui Kuang, Daoyuan Chen, Haohui Wang, Chenhe Dong, Bolin Ding, Yaliang Li

To investigate the heterogeneity in federated learning in real-world scenarios, we generalize the classic federated learning to federated hetero-task learning, which emphasizes the inconsistency across the participants in federated learning in terms of both data distribution and learning tasks.

Federated Learning, Meta-Learning, +2

FederatedScope: A Flexible Federated Learning Platform for Heterogeneity

1 code implementation11 Apr 2022 Yuexiang Xie, Zhen Wang, Dawei Gao, Daoyuan Chen, Liuyi Yao, Weirui Kuang, Yaliang Li, Bolin Ding, Jingren Zhou

Although remarkable progress has been made by existing federated learning (FL) platforms to provide infrastructures for development, these platforms may not well tackle the challenges brought by various types of heterogeneity, including the heterogeneity in participants' local data, resources, behaviors and learning goals.

Federated Learning, Hyperparameter Optimization

Massive MIMO As an Extreme Learning Machine

no code implementations1 Jul 2020 Dawei Gao, Qinghua Guo, Yonina C. Eldar

This work shows that a massive multiple-input multiple-output (MIMO) system with low-resolution analog-to-digital converters (ADCs) forms a natural extreme learning machine (ELM).
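For readers unfamiliar with ELMs, the core recipe is a fixed random hidden layer followed by a closed-form least-squares output layer. A minimal generic sketch, not the paper's MIMO-specific construction:

```python
import numpy as np

def elm_fit(X, Y, n_hidden=256, reg=1e-3, seed=None):
    """Train a basic ELM: random fixed hidden layer, ridge-regularized
    closed-form output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # random nonlinear feature map
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```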

Pruning-Aware Merging for Efficient Multitask Inference

no code implementations23 May 2019 Xiaoxi He, Dawei Gao, Zimu Zhou, Yongxin Tong, Lothar Thiele

Given a set of deep neural networks, each pre-trained for a single task, the goal is to execute arbitrary combinations of tasks at minimal computation cost.

Network Pruning

Extreme Learning Machine-Based Receiver for MIMO LED Communications

no code implementations27 Feb 2019 Dawei Gao, Qinghua Guo

This work concerns receiver design for light-emitting diode (LED) multiple-input multiple-output (MIMO) communications, where the LED nonlinearity can severely degrade communication performance.
