no code implementations • CVPR 2025 • Mingcheng Li, Xiaolu Hou, Ziyang Liu, Dingkang Yang, Ziyun Qian, Jiawei Chen, Jinjie Wei, Yue Jiang, Qingyao Xu, Lihua Zhang
Diffusion models have shown excellent performance in text-to-image generation.
1 code implementation • 15 Jan 2025 • Xiaolu Hou, Mingcheng Li, Dingkang Yang, Jiawei Chen, Ziyun Qian, Xiao Zhao, Yue Jiang, Jinjie Wei, Qingyao Xu, Lihua Zhang
To this end, we propose BloomScene, a lightweight structured 3D Gaussian splatting framework for crossmodal scene generation, which creates diverse and high-quality 3D scenes from text or image inputs.
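As a rough, hedged illustration of the primitive such splatting-based scene representations build on (not code from the BloomScene release), the sketch below shows how a single 3D Gaussian is typically parameterized: a mean, an anisotropic covariance assembled from a scale vector and a rotation quaternion, an opacity, and a color. All class and function names are assumptions.

```python
import numpy as np

def quat_to_rot(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

class Gaussian3D:
    """A single splatting primitive: position, anisotropic covariance, opacity, color."""
    def __init__(self, mean, scale, quat, opacity, color):
        self.mean = np.asarray(mean, dtype=float)      # 3D center
        self.opacity = float(opacity)                  # alpha in [0, 1]
        self.color = np.asarray(color, dtype=float)    # RGB
        R = quat_to_rot(np.asarray(quat, dtype=float))
        S = np.diag(scale)
        self.cov = R @ S @ S.T @ R.T                   # Sigma = R S S^T R^T

# Example: a slightly elongated Gaussian at the origin.
g = Gaussian3D(mean=[0, 0, 0], scale=[0.1, 0.1, 0.3],
               quat=[1, 0, 0, 0], opacity=0.8, color=[0.9, 0.2, 0.2])
print(g.cov)
```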
no code implementations • 1 Jan 2025 • Leonard Puškáč, Marek Benovič, Jakub Breier, Xiaolu Hou
Neural network models implemented in embedded devices have been shown to be susceptible to side-channel attacks (SCAs), allowing recovery of proprietary model parameters, such as weights and biases.
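As a hedged, generic illustration of how a side-channel attack can recover model parameters (not the specific attack studied in this paper), the sketch below runs correlation power analysis against simulated traces to recover one 8-bit quantized weight, assuming a Hamming-weight leakage of the multiply intermediate; the leakage model and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def hw(v):
    """Hamming weight of each 8-bit value in an array."""
    return np.unpackbits(v.astype(np.uint8)[:, None], axis=1).sum(axis=1)

# Secret 8-bit quantized weight and random known inputs.
w_true = 173
inputs = rng.integers(0, 256, size=2000, dtype=np.uint16)

# Simulated traces: leakage of the (input * weight) intermediate plus noise.
traces = hw((inputs * w_true) & 0xFF) + rng.normal(0, 2.0, size=inputs.size)

# CPA: correlate the hypothetical leakage for every weight guess with the traces.
scores = []
for guess in range(256):
    hyp = hw((inputs * guess) & 0xFF)
    scores.append(abs(np.corrcoef(hyp, traces)[0, 1]))

print("recovered weight:", int(np.argmax(scores)), "true weight:", w_true)
```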
no code implementations • 5 Nov 2024 • Mingcheng Li, Dingkang Yang, Yang Liu, Shunli Wang, Jiawei Chen, Shuaibing Wang, Jinjie Wei, Yue Jiang, Qingyao Xu, Xiaolu Hou, Mingyang Sun, Ziyun Qian, Dongliang Kou, Lihua Zhang
Specifically, we propose a fine-grained representation factorization module that extracts valuable sentiment information by factorizing each modality into sentiment-relevant and modality-specific representations through crossmodal translation and sentiment semantic reconstruction.
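A hedged PyTorch sketch of the general factorization idea follows: a modality feature is split into a sentiment-relevant part and a modality-specific part, and the input is reconstructed from both as a semantic-reconstruction objective. Module names and sizes are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class FactorizedModalityEncoder(nn.Module):
    """Toy factorization: one encoder per factor, plus a decoder that
    reconstructs the input feature from the two factors combined."""
    def __init__(self, in_dim=128, factor_dim=64):
        super().__init__()
        self.sentiment_enc = nn.Sequential(nn.Linear(in_dim, factor_dim), nn.ReLU())
        self.specific_enc = nn.Sequential(nn.Linear(in_dim, factor_dim), nn.ReLU())
        self.decoder = nn.Linear(2 * factor_dim, in_dim)

    def forward(self, x):
        z_sent = self.sentiment_enc(x)       # sentiment-relevant representation
        z_spec = self.specific_enc(x)        # modality-specific representation
        recon = self.decoder(torch.cat([z_sent, z_spec], dim=-1))
        recon_loss = nn.functional.mse_loss(recon, x)  # semantic reconstruction term
        return z_sent, z_spec, recon_loss

# Example: factorize a batch of (hypothetical) text-modality features.
feats = torch.randn(8, 128)
z_sent, z_spec, loss = FactorizedModalityEncoder()(feats)
print(z_sent.shape, z_spec.shape, float(loss))
```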
no code implementations • 16 Oct 2024 • Jinjie Wei, Dingkang Yang, Yanshu Li, Qingyao Xu, Zhaoyu Chen, Mingcheng Li, Yue Jiang, Xiaolu Hou, Lihua Zhang
Large Language Model (LLM)-driven interactive systems currently show promise in healthcare domains.
no code implementations • 23 Jul 2024 • Dirmanto Jap, Jakub Breier, Zdenko Lehocký, Shivam Bhasin, Xiaolu Hou
Embedded devices with neural network accelerators offer great versatility for their users, reducing the need to use cloud-based services.
no code implementations • 14 Jun 2024 • Jiawei Chen, Dingkang Yang, Tong Wu, Yue Jiang, Xiaolu Hou, Mingcheng Li, Shunli Wang, Dongling Xiao, Ke Li, Lihua Zhang
To bridge this gap, we introduce Med-HallMark, the first benchmark specifically designed for hallucination detection and evaluation within the medical multimodal domain.
no code implementations • 22 May 2024 • Patrik Velčický, Jakub Breier, Mladen Kovačević, Xiaolu Hou
In this paper, we introduce an encoding-based protection method against bit-flip attacks on neural networks, titled DeepNcode.
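A hedged toy illustration of the encoding idea (not the paper's exact code construction): quantized 4-bit weight values are mapped to 8-bit codewords of constant Hamming weight, so any single bit flip changes the popcount and is detected before the weight is used.

```python
from itertools import combinations

# Build 8-bit codewords with Hamming weight exactly 4 (C(8,4) = 70 >= 16 needed).
codewords = [sum(1 << i for i in ones) for ones in combinations(range(8), 4)]
encode_table = codewords[:16]                 # one codeword per 4-bit weight value
decode_table = {c: v for v, c in enumerate(encode_table)}

def encode(weight_4bit):
    return encode_table[weight_4bit]

def decode(codeword):
    """Return the weight value, or raise if a bit flip corrupted the codeword."""
    if bin(codeword).count("1") != 4 or codeword not in decode_table:
        raise ValueError("bit-flip detected in encoded weight")
    return decode_table[codeword]

w = 11                      # a quantized 4-bit weight value
c = encode(w)
assert decode(c) == w
faulty = c ^ (1 << 3)       # attacker flips one bit of the stored codeword
try:
    decode(faulty)
except ValueError as e:
    print(e)                # the single-bit fault is caught before inference
```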
no code implementations • 25 Apr 2024 • Jiawei Chen, Dingkang Yang, Yue Jiang, Mingcheng Li, Jinjie Wei, Xiaolu Hou, Lihua Zhang
In the realm of Medical Visual Language Models (Med-VLMs), the quest for universal, efficient fine-tuning mechanisms remains paramount yet largely unexplored, especially since researchers in interdisciplinary fields are often extremely short of training resources (see the fine-tuning sketch below).
Medical Visual Question Answering
parameter-efficient fine-tuning
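As a hedged illustration of the parameter-efficient fine-tuning setting the paper targets, the sketch below wraps a frozen linear layer with a generic LoRA-style low-rank adapter; it is not the method proposed in the paper, and all names are assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base projection plus a trainable low-rank update (W + B A)."""
    def __init__(self, base: nn.Linear, rank=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False           # the pretrained weights stay frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)    # start as an identity-preserving update
        self.scaling = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scaling * self.lora_b(self.lora_a(x))

# Example: adapt a (hypothetical) 768-d projection with only a small
# fraction of trainable parameters.
layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(4, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(out.shape, trainable)
```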
no code implementations • 25 Mar 2023 • Jakub Breier, Dirmanto Jap, Xiaolu Hou, Shivam Bhasin
We analyze the timing properties of several activation functions and design the desynchronization so that the dependency on the input and the activation type is hidden.
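A hedged toy sketch of the hiding countermeasure described above: before each activation evaluation, a random number of dummy operations is executed so that the measured latency no longer correlates with which activation is used or what the input is. The delay range and activation set are illustrative assumptions.

```python
import math
import random
import time

ACTIVATIONS = {
    "relu": lambda x: max(0.0, x),
    "sigmoid": lambda x: 1.0 / (1.0 + math.exp(-x)),
    "tanh": math.tanh,
}

def desynchronized(fn, x, max_dummy_ops=500):
    """Evaluate an activation behind a random-length dummy loop so that
    timing measurements do not reveal the input or the function."""
    acc = 0.0
    for _ in range(random.randrange(max_dummy_ops)):
        acc += 1.0                      # dummy work inserted as a random delay
    return fn(x)

for name, fn in ACTIVATIONS.items():
    t0 = time.perf_counter()
    y = desynchronized(fn, 0.5)
    print(f"{name}: {y:.4f} in {time.perf_counter() - t0:.6f}s")
```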
1 code implementation • 23 Sep 2021 • Jakub Breier, Xiaolu Hou, Martín Ochoa, Jesus Solano
In particular, we discuss attacks against ReLU activation functions that make it possible to generate a family of malicious inputs, called fooling inputs, which are used at inference time to induce controlled misclassifications.
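A hedged toy sketch of the underlying mechanism (not the paper's attack pipeline): in a tiny two-layer numpy network, a fault that skips the ReLU clamp lets a strongly negative hidden pre-activation leak through, so a crafted input that is classified correctly on the fault-free network is misclassified under the fault. The weights and the fault model are illustrative assumptions.

```python
import numpy as np

W1 = np.array([[1.0], [-5.0]])   # hidden pre-activations: [x, -5x]
W2 = np.array([[1.0, 0.0],       # logit 0 reads hidden unit 0
               [0.0, -1.0]])     # logit 1 reads the (normally clamped) unit 1

def predict(x, relu_faulted=False):
    pre = W1 @ np.array([x])
    # Fault model: the ReLU clamp is skipped, so negative values pass through.
    hidden = pre if relu_faulted else np.maximum(pre, 0.0)
    logits = W2 @ hidden
    return int(np.argmax(logits)), logits

fooling_input = 1.0
print(predict(fooling_input, relu_faulted=False))  # -> class 0 (correct behaviour)
print(predict(fooling_input, relu_faulted=True))   # -> class 1 (controlled misclassification)
```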
no code implementations • 23 Feb 2020 • Jakub Breier, Dirmanto Jap, Xiaolu Hou, Shivam Bhasin, Yang Liu
In this paper we explore the possibility of reverse engineering neural networks using fault attacks.
no code implementations • 15 Jun 2018 • Jakub Breier, Xiaolu Hou, Dirmanto Jap, Lei Ma, Shivam Bhasin, Yang Liu
As deep learning systems are widely adopted in safety- and security-critical applications such as autonomous vehicles and banking systems, malicious faults and attacks become a tremendous concern that could potentially lead to catastrophic consequences.