Search Results: Chao Gao

Found 56 papers, 7 papers with code

$\beta$-DQN: Improving Deep Q-Learning By Evolving the Behavior

no code implementations · 1 Jan 2025 · Hongming Zhang, Fengshuo Bai, Chenjun Xiao, Chao Gao, Bo Xu, Martin Müller

Motivated by this, we introduce $\beta$-DQN, a simple and efficient exploration method that augments the standard DQN with a behavior function $\beta$.

Deep Reinforcement Learning · Efficient Exploration · +1
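The listing gives only the one-sentence summary above, so the following is a rough sketch of the idea it names: a standard DQN augmented with a learned behavior function $\beta$ that is used to diversify action selection. The class name, the exploration bonus, and all hyperparameters are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch: a DQN whose action selection is augmented by a learned
# behavior function beta(a|s); the paper's actual training and selection rules
# may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaDQN(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.q_head = nn.Linear(hidden, n_actions)      # standard Q-values
        self.beta_head = nn.Linear(hidden, n_actions)   # behavior function beta(a|s)

    def forward(self, obs: torch.Tensor):
        h = self.trunk(obs)
        return self.q_head(h), F.softmax(self.beta_head(h), dim=-1)

def select_action(model: BetaDQN, obs: torch.Tensor, explore_bonus: float = 0.5) -> int:
    """Greedy w.r.t. Q plus a bonus for actions the behavior function rarely
    takes -- one plausible way to diversify exploration, shown for illustration."""
    with torch.no_grad():
        q, beta = model(obs.unsqueeze(0))
        score = q + explore_bonus * (1.0 - beta)  # prefer under-visited actions
        return int(score.argmax(dim=-1))
```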

MAS-Attention: Memory-Aware Stream Processing for Attention Acceleration on Resource-Constrained Edge Devices

no code implementations · 20 Nov 2024 · Mohammadali Shakerdargah, Shan Lu, Chao Gao, Di Niu

In this paper, we propose a scheme for exact attention inference acceleration on memory-constrained edge accelerators, by parallelizing the utilization of heterogeneous compute units, i.e., vector processing units and matrix processing units.

Edge-computing · Scheduling
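The scheme itself concerns hardware scheduling, which a short snippet cannot reproduce; as a generic illustration of the exact, memory-aware streaming attention that such schedulers map onto matrix and vector units, here is a tiled attention loop with an online softmax. The function name, tile size, and the matrix/vector annotations are assumptions for illustration, not the paper's scheduler.

```python
# Illustrative only: tiled exact attention with an online softmax, the kind of
# streaming computation a memory-aware edge scheduler would split across
# matrix and vector units.
import numpy as np

def tiled_attention(q, k, v, tile: int = 64):
    """Exact softmax(q @ k.T / sqrt(d)) @ v, computed tile by tile over the keys."""
    n, d = q.shape
    scale = 1.0 / np.sqrt(d)
    out = np.zeros_like(q)
    row_max = np.full(n, -np.inf)
    row_sum = np.zeros(n)
    for start in range(0, k.shape[0], tile):
        kt, vt = k[start:start + tile], v[start:start + tile]
        s = (q @ kt.T) * scale                        # matmul: matrix-unit work
        new_max = np.maximum(row_max, s.max(axis=1))  # reductions: vector-unit work
        correction = np.exp(row_max - new_max)        # rescale previous partial sums
        p = np.exp(s - new_max[:, None])
        row_sum = row_sum * correction + p.sum(axis=1)
        out = out * correction[:, None] + p @ vt
        row_max = new_max
    return out / row_sum[:, None]
```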

ResAD: A Simple Framework for Class Generalizable Anomaly Detection

1 code implementation · 26 Oct 2024 · Xincheng Yao, Zixin Chen, Chao Gao, Guangtao Zhai, Chongyang Zhang

In this work, we propose a simple but effective framework (called ResAD) that can be directly applied to detect anomalies in new classes.

Anomaly Detection

Modeling Bilingual Sentence Processing: Evaluating RNN and Transformer Architectures for Cross-Language Structural Priming

no code implementations · 15 May 2024 · Demi Zhang, Bushi Xiao, Chao Gao, Sangpil Youm, Bonnie J Dorr

This study evaluates the performance of Recurrent Neural Network (RNN) and Transformer models in replicating cross-language structural priming, a key indicator of abstract grammatical representations in human language processing.

Retrieval · Sentence

DLoRA: Distributed Parameter-Efficient Fine-Tuning Solution for Large Language Model

no code implementations · 8 Apr 2024 · Chao Gao, Sai Qian Zhang

Due to the scale of LLMs, PEFT operations are usually executed in public environments (e.g., cloud servers).

Language Modeling · Language Modelling · +2

Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey

no code implementations · 21 Mar 2024 · Zeyu Han, Chao Gao, Jinyang Liu, Jeff Zhang, Sai Qian Zhang

Parameter-Efficient Fine-Tuning (PEFT) provides a practical solution by efficiently adapting large models to various downstream tasks.

parameter-efficient fine-tuning · Survey
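One widely used PEFT method covered by surveys of this kind is LoRA, which freezes the pretrained weights and trains only a low-rank update. A minimal sketch follows; the class name and hyperparameters are illustrative choices, not taken from the survey.

```python
# A minimal LoRA layer as one concrete instance of PEFT: the pretrained weight
# stays frozen and only the low-rank factors A and B are trained.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)        # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # frozen path + trainable low-rank update
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling
```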

GIN-SD: Source Detection in Graphs with Incomplete Nodes via Positional Encoding and Attentive Fusion

no code implementations · 27 Feb 2024 · Le Cheng, Peican Zhu, Keke Tang, Chao Gao, Zhen Wang

In this paper, we address a more challenging task, rumor source detection with incomplete user data, and propose a novel framework, i.e., Source Detection in Graphs with Incomplete Nodes via Positional Encoding and Attentive Fusion (GIN-SD), to tackle this challenge.
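The excerpt names the framework's ingredients (positional encoding and attentive fusion) but not its architecture; the sketch below only illustrates that general recipe for per-node source classification. Every layer, name, and dimension here is an assumption, not GIN-SD itself.

```python
# Hedged sketch: combine a positional encoding with possibly-incomplete node
# features via learned attention weights, then classify each node as
# source / non-source. Not the paper's architecture.
import torch
import torch.nn as nn

class SourceDetector(nn.Module):
    def __init__(self, feat_dim: int, pos_dim: int, hidden: int = 64):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, hidden)
        self.pos_proj = nn.Linear(pos_dim, hidden)
        self.attn = nn.Linear(hidden, 1)           # attentive fusion weights
        self.classifier = nn.Linear(hidden, 2)     # source vs. non-source

    def forward(self, feats: torch.Tensor, pos_enc: torch.Tensor) -> torch.Tensor:
        # feats: (num_nodes, feat_dim), may contain zeros for missing entries
        # pos_enc: (num_nodes, pos_dim) positional encoding of each node
        views = torch.stack([self.feat_proj(feats), self.pos_proj(pos_enc)], dim=1)
        weights = torch.softmax(self.attn(torch.tanh(views)), dim=1)
        fused = (weights * views).sum(dim=1)       # attention-weighted fusion
        return self.classifier(fused)              # per-node source logits
```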

GAMC: An Unsupervised Method for Fake News Detection using Graph Autoencoder with Masking

1 code implementation · 10 Dec 2023 · Shu Yin, Chao Gao, Zhen Wang

With the rise of social media, the spread of fake news has become a significant concern, potentially misleading public perceptions and impacting social stability.

Contrastive Learning · Decoder · +2
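The excerpt above is motivational, and only the title names the technique (a graph autoencoder with masking); the sketch below shows a generic masked graph autoencoder with a feature-reconstruction loss, purely as an assumption-laden illustration rather than GAMC's actual encoder, masking strategy, or objectives.

```python
# Hedged sketch of a masked graph autoencoder: mask a subset of node features,
# encode with a simple graph convolution, and reconstruct the masked features.
# The paper's actual design may differ substantially.
import torch
import torch.nn as nn

class MaskedGraphAE(nn.Module):
    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, hidden)
        self.decoder = nn.Linear(hidden, feat_dim)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor, mask_ratio: float = 0.3):
        # x: (num_nodes, feat_dim) node features; adj_norm: normalized adjacency
        # assumes at least one node ends up masked
        mask = torch.rand(x.size(0)) < mask_ratio
        x_in = x.clone()
        x_in[mask] = 0.0                               # mask out selected nodes
        h = torch.relu(adj_norm @ self.encoder(x_in))  # aggregate over neighbors
        recon = self.decoder(adj_norm @ h)
        loss = ((recon[mask] - x[mask]) ** 2).mean()   # reconstruct masked nodes
        return loss, h
```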