Search Results for author: Dake Chen

Found 6 papers, 1 paper with code

Quantitative causality, causality-guided scientific discovery, and causal machine learning

no code implementations • 20 Feb 2024 • X. San Liang, Dake Chen, Renhe Zhang

It has been argued that causality analysis should pave a promising way toward interpretable deep learning and generalization.
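X. San Liang's quantitative causality work is built on the Liang-Kleeman information flow, which has a closed-form maximum-likelihood estimator in the bivariate linear case (Liang, 2014). As a concrete anchor, here is a minimal sketch of that estimator; the toy system and its coefficients are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def liang_information_flow(x1, x2, dt=1.0):
    """Bivariate maximum-likelihood estimate of the Liang (2014)
    information flow T(2 -> 1), i.e. from series x2 to series x1,
    in nats per unit time. A value near zero indicates that x2 is
    non-causal to x1 under this (linear, stationary) model."""
    dx1 = (x1[1:] - x1[:-1]) / dt            # Euler forward difference of x1
    c = np.cov(np.vstack([x1[:-1], x2[:-1], dx1]))
    c11, c22, c12 = c[0, 0], c[1, 1], c[0, 1]
    c1d1, c2d1 = c[0, 2], c[1, 2]
    return (c11 * c12 * c2d1 - c12**2 * c1d1) / (c11**2 * c22 - c11 * c12**2)

# Toy system where x2 drives x1 but not vice versa: T(2->1) should be
# clearly nonzero while T(1->2) stays near zero.
rng = np.random.default_rng(0)
n = 20000
x1, x2 = np.zeros(n), np.zeros(n)
for t in range(1, n):
    x2[t] = 0.9 * x2[t - 1] + 0.3 * rng.standard_normal()
    x1[t] = 0.6 * x1[t - 1] + 0.4 * x2[t - 1] + 0.3 * rng.standard_normal()
print("T(2->1):", liang_information_flow(x1, x2))
print("T(1->2):", liang_information_flow(x2, x1))
```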

GameGPT: Multi-agent Collaborative Framework for Game Development

no code implementations • 12 Oct 2023 • Dake Chen, Hanbin Wang, Yunhao Huo, Yuzhao Li, Haoyang Zhang

Large language model (LLM) based agents have demonstrated their capacity to automate and expedite software development processes.

Code Generation • Hallucination • +2
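GameGPT organizes multiple role-specialized LLM agents around game-development tasks. As a rough illustration of the general pattern only (the role prompts, the `complete` stub, and the single review loop below are assumptions, not GameGPT's actual design):

```python
# Minimal sketch of a role-based multi-agent loop. `complete` stands in
# for any chat-completion API; the roles and review cycle are
# illustrative assumptions, not GameGPT's architecture.

def complete(system: str, user: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def develop(task: str, max_revisions: int = 2) -> str:
    plan = complete("You are a planner. Decompose the task into steps.", task)
    code = complete("You are a developer. Implement the plan as code.", plan)
    for _ in range(max_revisions):
        review = complete("You are a reviewer. Reply APPROVED or list defects.", code)
        if review.strip().startswith("APPROVED"):
            break
        code = complete("You are a developer. Fix the listed defects.",
                        f"{code}\n\nReview:\n{review}")
    return code
```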

Mitigate Replication and Copying in Diffusion Models with Generalized Caption and Dual Fusion Enhancement

1 code implementation • 13 Sep 2023 • Chenghao Li, Dake Chen, Yuke Zhang, Peter A. Beerel

While diffusion models demonstrate a remarkable capability for generating high-quality images, their tendency to "replicate" training data raises privacy concerns.

Language Modelling • Large Language Model
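A common way to quantify such replication (an assumption here, not necessarily the paper's own metric) is the nearest-neighbor cosine similarity between embeddings of generated and training images:

```python
import numpy as np

def max_replication_scores(gen_emb: np.ndarray, train_emb: np.ndarray) -> np.ndarray:
    """For each generated-image embedding (rows of gen_emb), return its
    highest cosine similarity over the training set; scores near 1.0
    flag likely replication. Which image encoder produces the
    embeddings is left open here."""
    g = gen_emb / np.linalg.norm(gen_emb, axis=1, keepdims=True)
    t = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
    return (g @ t.T).max(axis=1)
```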

Island-based Random Dynamic Voltage Scaling vs ML-Enhanced Power Side-Channel Attacks

no code implementations • 8 Jun 2023 • Dake Chen, Christine Goins, Maxwell Waugaman, Georgios D. Dimou, Peter A. Beerel

In this paper, we describe and analyze an island-based random dynamic voltage scaling (iRDVS) approach to thwart power side-channel attacks.
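As a toy illustration of why randomized supply scaling hinders correlation power analysis (CPA), the sketch below compares the attacker's correlation peak with and without random per-trace scaling. The Hamming-weight leakage model, baseline power, and uniform scaling range are all assumptions, and a single per-trace factor only caricatures the per-island scaling of real iRDVS hardware:

```python
import numpy as np

rng = np.random.default_rng(1)
HW = np.array([bin(v).count("1") for v in range(256)])  # Hamming-weight table

n, true_key = 5000, 0x3C
pt = rng.integers(0, 256, size=n)                # random plaintext bytes
leak = HW[pt ^ true_key].astype(float)           # data-dependent leakage
power = 20.0 + leak                              # plus a data-independent baseline
noise = 0.5 * rng.standard_normal(n)

fixed_trace = power + noise                      # constant supply voltage
scale = rng.uniform(0.6, 1.4, size=n)            # random per-trace scaling
rdvs_trace = scale * power + noise               # toy stand-in for iRDVS

def cpa_peak(trace):
    """Best absolute Pearson correlation across all 256 key guesses."""
    return max(abs(np.corrcoef(trace, HW[pt ^ k])[0, 1]) for k in range(256))

print("CPA peak, fixed voltage :", round(cpa_peak(fixed_trace), 3))
print("CPA peak, random scaling:", round(cpa_peak(rdvs_trace), 3))
```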

Making Models Shallow Again: Jointly Learning to Reduce Non-Linearity and Depth for Latency-Efficient Private Inference

no code implementations • 26 Apr 2023 • Souvik Kundu, Yuke Zhang, Dake Chen, Peter A. Beerel

The large number of ReLU and MAC operations in deep neural networks makes them ill-suited for latency- and compute-efficient private inference.

Model Optimization
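One common ingredient in ReLU-reduction methods, sketched below as a general illustration rather than the paper's exact joint depth-and-nonlinearity optimization, is a learnable per-channel gate that blends each ReLU with an identity pass-through and is driven toward linearity by a sparsity penalty:

```python
import torch
import torch.nn as nn

class GatedReLU(nn.Module):
    """Blend ReLU with identity via a learnable per-channel gate for
    (B, C, H, W) inputs. Adding sparsity_penalty() to the task loss
    pushes gates toward 0 (a cheap linear pass-through); gates that
    stay near 1 keep their ReLU. A sketch, not the paper's method."""
    def __init__(self, channels: int):
        super().__init__()
        self.logit = nn.Parameter(torch.zeros(channels))  # gate = 0.5 at init

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.logit).view(1, -1, 1, 1)
        return g * torch.relu(x) + (1 - g) * x

    def sparsity_penalty(self) -> torch.Tensor:
        return torch.sigmoid(self.logit).sum()
```

After training, gates below a chosen threshold would be hard-replaced by identity, removing those ReLUs and their cryptographic cost from the private-inference graph.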

SAL-ViT: Towards Latency Efficient Private Inference on ViT using Selective Attention Search with a Learnable Softmax Approximation

no code implementations • ICCV 2023 • Yuke Zhang, Dake Chen, Souvik Kundu, Chenghao Li, Peter A. Beerel

Given our observation that external attention (EA) presents lower PI latency than the widely adopted self-attention (SA) at the cost of accuracy, we present a selective attention search (SAS) method to integrate the strengths of EA and SA.
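For reference, external attention (Guo et al., 2021) lets tokens attend to a small learnable memory instead of to each other, so its cost scales with the memory size rather than the sequence length, which is what lowers PI latency. A minimal PyTorch sketch follows; the stock softmax here is a placeholder for the paper's learnable approximation, and the memory size is an illustrative choice:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExternalAttention(nn.Module):
    """External attention over a learnable memory of S slots.
    For x of shape (B, N, D), attention weights have shape (B, N, S),
    so the softmax never spans the N tokens themselves."""
    def __init__(self, dim: int, mem_slots: int = 64):
        super().__init__()
        self.mk = nn.Linear(dim, mem_slots, bias=False)  # memory keys
        self.mv = nn.Linear(mem_slots, dim, bias=False)  # memory values

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = F.softmax(self.mk(x), dim=1)                    # normalize over tokens
        attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-9)  # then over memory slots
        return self.mv(attn)
```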
