no code implementations • 2 Dec 2024 • Chenghao Li, Yuke Zhang, Dake Chen, Jingqi Xu, Peter A. Beerel
In this paper, we address this gap by examining and mitigating the impact of the model structure, specifically the skip connections in the diffusion model's U-Net.
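A minimal sketch of what such a skip connection looks like (PyTorch; the tiny network, channel sizes, and the skip_scale knob are illustrative assumptions, not the paper's implementation):

import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    # One-level encoder-decoder; real diffusion U-Nets stack many such
    # levels plus attention blocks and timestep embeddings.
    def __init__(self, ch=64):
        super().__init__()
        self.enc = nn.Conv2d(3, ch, 3, padding=1)
        self.down = nn.Conv2d(ch, ch, 3, stride=2, padding=1)
        self.mid = nn.Conv2d(ch, ch, 3, padding=1)
        self.up = nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1)
        # The decoder consumes 2*ch channels because the encoder feature
        # map is concatenated back in: this is the skip connection.
        self.dec = nn.Conv2d(ch * 2, 3, 3, padding=1)

    def forward(self, x, skip_scale=1.0):
        h = torch.relu(self.enc(x))
        u = torch.relu(self.up(torch.relu(self.mid(self.down(h)))))
        # skip_scale attenuates the skip path, a simple way to probe how
        # much input detail it forwards directly to the output.
        return self.dec(torch.cat([u, skip_scale * h], dim=1))

x = torch.randn(1, 3, 32, 32)
print(TinyUNet()(x, skip_scale=0.5).shape)  # torch.Size([1, 3, 32, 32])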
no code implementations • 20 Feb 2024 • X. San Liang, Dake Chen, Renhe Zhang
It has been argued that causality analysis could pave a promising way toward interpretable deep learning and generalization.
no code implementations • 12 Oct 2023 • Dake Chen, Hanbin Wang, Yunhao Huo, Yuzhao Li, Haoyang Zhang
Large language model (LLM)-based agents have demonstrated their capacity to automate and expedite software development processes.
1 code implementation • 13 Sep 2023 • Chenghao Li, Dake Chen, Yuke Zhang, Peter A. Beerel
While diffusion models demonstrate a remarkable capability for generating high-quality images, their tendency to "replicate" training data raises privacy concerns.
no code implementations • 8 Jun 2023 • Dake Chen, Christine Goins, Maxwell Waugaman, Georgios D. Dimou, Peter A. Beerel
In this paper, we describe and analyze an island-based random dynamic voltage scaling (iRDVS) approach to thwart power side-channel attacks.
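The core idea can be illustrated with a toy Monte-Carlo sketch (NumPy; the island count, voltage levels, and the activity * V^2 power model are simplifying assumptions for illustration, not the paper's analysis):

import numpy as np

rng = np.random.default_rng(0)
levels = np.array([0.6, 0.8, 1.0])     # candidate supply voltages (V)
n_cycles, n_islands = 10000, 4

# Data-dependent switching activity per island per cycle: the signal a
# power side-channel attacker tries to correlate against.
activity = rng.random((n_cycles, n_islands))

# Each island independently draws a random voltage every cycle; dynamic
# power scales roughly with activity * V^2, so the random V^2 factor
# injects multiplicative noise the attacker cannot predict.
v = rng.choice(levels, size=(n_cycles, n_islands))
trace_rdvs = (activity * v**2).sum(axis=1)
trace_fixed = (activity * levels[-1]**2).sum(axis=1)

secret = activity.sum(axis=1)
print(np.corrcoef(secret, trace_fixed)[0, 1])  # 1.0: leaks directly
print(np.corrcoef(secret, trace_rdvs)[0, 1])   # ~0.8: correlation degraded

The per-island independence matters in this toy model: if all islands switched voltage in lockstep, the trace would be a single per-cycle scaling of the activity, which is easier to normalize away.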
no code implementations • 26 Apr 2023 • Souvik Kundu, Yuke Zhang, Dake Chen, Peter A. Beerel
The large number of ReLU and MAC operations in deep neural networks makes them ill-suited for latency- and compute-efficient private inference.
no code implementations • ICCV 2023 • Yuke Zhang, Dake Chen, Souvik Kundu, Chenghao Li, Peter A. Beerel
Then, given our observation that external attention (EA) offers lower private-inference (PI) latency than the widely adopted self-attention (SA) at the cost of accuracy, we present a selective attention search (SAS) method to integrate the strengths of EA and SA.
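For reference, external attention (Guo et al., 2021) attends to two small learned memories instead of computing token-to-token similarities; a minimal sketch follows (PyTorch; dimensions are illustrative, and this is the generic EA formulation, not the paper's searched architecture):

import torch
import torch.nn as nn
import torch.nn.functional as F

class ExternalAttention(nn.Module):
    # Two small learned memory matrices replace the Q @ K^T product of
    # self-attention. One reason EA can be cheaper in private inference
    # is that mk/mv are fixed plaintext weights (ordinary linear layers),
    # whereas self-attention multiplies two secret activation tensors.
    def __init__(self, d_model=64, s=32):
        super().__init__()
        self.mk = nn.Linear(d_model, s, bias=False)   # memory "keys"
        self.mv = nn.Linear(s, d_model, bias=False)   # memory "values"

    def forward(self, x):                      # x: (batch, tokens, d_model)
        attn = F.softmax(self.mk(x), dim=1)    # softmax over tokens
        attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-9)  # l1 norm
        return self.mv(attn)

x = torch.randn(2, 16, 64)
print(ExternalAttention()(x).shape)  # torch.Size([2, 16, 64])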