1 code implementation • 10 May 2024 • Bo Hui, Haolin Yuan, Neil Gong, Philippe Burlina, Yinzhi Cao
As a result, a natural attack, called prompt leaking, is to steal the system prompt from an LLM application, which compromises the developer's intellectual property.
1 code implementation • 20 May 2023 • Yuchen Yang, Bo Hui, Haolin Yuan, Neil Gong, Yinzhi Cao
Text-to-image generative models such as Stable Diffusion and DALL$\cdot$E raise many ethical concerns due to the generation of harmful images such as Not-Safe-for-Work (NSFW) ones.
1 code implementation • 26 Oct 2022 • Haolin Yuan, Bo Hui, Yuchen Yang, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao
Federated learning (FL) allows multiple clients to collaboratively train a deep learning model.
no code implementations • 28 Feb 2022 • Haolin Yuan, Armin Hadzic, William Paul, Daniella Villegas de Flores, Philip Mathew, John Aucott, Yinzhi Cao, Philippe Burlina
Skin lesions can be an early indicator of a wide range of infectious and other diseases.
1 code implementation • 5 Jan 2021 • Bo Hui, Yuchen Yang, Haolin Yuan, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao
The success of the former heavily depends on the quality of the shadow model, i. e., the transferability between the shadow and the target; the latter, given only blackbox probing access to the target model, cannot make an effective inference of unknowns, compared with MI attacks using shadow models, due to the insufficient number of qualified samples labeled with ground truth membership information.