Search Results for author: Xiaolu Hou

Found 13 papers, 2 papers with code

BloomScene: Lightweight Structured 3D Gaussian Splatting for Crossmodal Scene Generation

1 code implementation • 15 Jan 2025 • Xiaolu Hou, Mingcheng Li, Dingkang Yang, Jiawei Chen, Ziyun Qian, Xiao Zhao, Yue Jiang, Jinjie Wei, Qingyao Xu, Lihua Zhang

To this end, we propose BloomScene, a lightweight structured 3D Gaussian splatting framework for crossmodal scene generation, which creates diverse and high-quality 3D scenes from text or image inputs.

Point Cloud Reconstruction • Scene Generation

Make Shuffling Great Again: A Side-Channel Resistant Fisher-Yates Algorithm for Protecting Neural Networks

no code implementations • 1 Jan 2025 • Leonard Puškáč, Marek Benovič, Jakub Breier, Xiaolu Hou

Neural network models implemented in embedded devices have been shown to be susceptible to side-channel attacks (SCAs), allowing recovery of proprietary model parameters, such as weights and biases.
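The paper's side-channel-resistant variant is not shown here; as context, a minimal sketch of the classic Fisher-Yates shuffle it hardens (such shuffling is commonly used to randomize the order in which neurons are processed, as an SCA countermeasure) could look like the following. All names are my own choosing:

```python
import random

def fisher_yates_shuffle(items, rng=random):
    """Classic Fisher-Yates: uniform in-place shuffle in O(n).

    Note: this plain version leaks the sampled indices through
    timing/power side channels; the paper's contribution is a
    hardened variant of exactly this loop.
    """
    a = list(items)
    for i in range(len(a) - 1, 0, -1):
        j = rng.randrange(i + 1)  # pick j uniformly from [0, i]
        a[i], a[j] = a[j], a[i]   # swap current element with a[j]
    return a
```

The shuffle is a permutation, so it preserves the multiset of elements regardless of the random choices made.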

Toward Robust Incomplete Multimodal Sentiment Analysis via Hierarchical Representation Learning

no code implementations • 5 Nov 2024 • Mingcheng Li, Dingkang Yang, Yang Liu, Shunli Wang, Jiawei Chen, Shuaibing Wang, Jinjie Wei, Yue Jiang, Qingyao Xu, Xiaolu Hou, Mingyang Sun, Ziyun Qian, Dongliang Kou, Lihua Zhang

Specifically, we propose a fine-grained representation factorization module that sufficiently extracts valuable sentiment information by factorizing modality into sentiment-relevant and modality-specific representations through crossmodal translation and sentiment semantic reconstruction.

Multimodal Sentiment Analysis • Representation Learning

Side-Channel Analysis of OpenVINO-based Neural Network Models

no code implementations • 23 Jul 2024 • Dirmanto Jap, Jakub Breier, Zdenko Lehocký, Shivam Bhasin, Xiaolu Hou

Embedded devices with neural network accelerators offer great versatility for their users, reducing the need to use cloud-based services.

Side Channel Analysis

Detecting and Evaluating Medical Hallucinations in Large Vision Language Models

no code implementations • 14 Jun 2024 • Jiawei Chen, Dingkang Yang, Tong Wu, Yue Jiang, Xiaolu Hou, Mingcheng Li, Shunli Wang, Dongling Xiao, Ke Li, Lihua Zhang

To bridge this gap, we introduce Med-HallMark, the first benchmark specifically designed for hallucination detection and evaluation within the medical multimodal domain.

Hallucination • Medical Visual Question Answering • +2

DeepNcode: Encoding-Based Protection against Bit-Flip Attacks on Neural Networks

no code implementations • 22 May 2024 • Patrik Velčický, Jakub Breier, Mladen Kovačević, Xiaolu Hou

In this paper, we introduce an encoding-based protection method against bit-flip attacks on neural networks, titled DeepNcode.
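DeepNcode's actual code construction is not given in this snippet; purely as an illustration of the general idea (storing quantized weights with redundant bits so that bit flips are detectable), a simple parity-bit sketch, with all function names hypothetical:

```python
def encode_weight(w8: int) -> int:
    """Append an even-parity bit to an unsigned 8-bit weight (9-bit codeword)."""
    parity = bin(w8 & 0xFF).count("1") & 1
    return ((w8 & 0xFF) << 1) | parity

def check_weight(codeword: int) -> bool:
    """True iff the 9-bit codeword has even overall parity, i.e. no
    single-bit flip has occurred since encoding."""
    return bin(codeword & 0x1FF).count("1") % 2 == 0

w = encode_weight(0b10110010)
assert check_weight(w)              # stored weight is intact
assert not check_weight(w ^ 0b100)  # any single bit flip is detected
```

A single parity bit only detects odd numbers of flips; stronger codes trade more redundancy for detecting (or correcting) multi-bit faults.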

Model extraction

Efficiency in Focus: LayerNorm as a Catalyst for Fine-tuning Medical Visual Language Pre-trained Models

no code implementations • 25 Apr 2024 • Jiawei Chen, Dingkang Yang, Yue Jiang, Mingcheng Li, Jinjie Wei, Xiaolu Hou, Lihua Zhang

In the realm of Medical Visual Language Models (Med-VLMs), the quest for universal efficient fine-tuning mechanisms remains paramount yet largely unexplored, especially given that researchers in interdisciplinary fields are often extremely short of training resources.

Medical Visual Question Answering • Parameter-Efficient Fine-Tuning • +2

A Desynchronization-Based Countermeasure Against Side-Channel Analysis of Neural Networks

no code implementations • 25 Mar 2023 • Jakub Breier, Dirmanto Jap, Xiaolu Hou, Shivam Bhasin

We analyze the timing properties of several activation functions and design the desynchronization in a way that the dependency on the input and the activation type is hidden.
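The paper's concrete desynchronization design is not reproduced here; a toy sketch of the general idea (inserting a random amount of dummy work so that total execution time no longer correlates with the input or the activation type), with all names my own assumption:

```python
import random

def desynchronized_activation(x, act, rng=random, max_dummy=50):
    """Apply activation `act` to `x` after a random number of dummy
    operations, decorrelating overall timing from the input value."""
    dummy = 0.0
    for _ in range(rng.randrange(max_dummy)):
        dummy += 1e-9  # dummy work; the result is discarded
    return act(x)
```

Random delays raise the number of traces an attacker needs; hiding the dependency entirely (as the paper targets) requires the delay distribution to be independent of both input and activation type.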

Model Extraction • Side Channel Analysis

FooBaR: Fault Fooling Backdoor Attack on Neural Network Training

1 code implementation • 23 Sep 2021 • Jakub Breier, Xiaolu Hou, Martín Ochoa, Jesus Solano

In particular, we discuss attacks against ReLU activation functions that make it possible to generate a family of malicious inputs, which are called fooling inputs, to be used at inference time to induce controlled misclassifications.
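As a toy model of the kind of ReLU fault the abstract describes (the attack itself is more involved), one can picture an instruction-skip fault that disables the activation, so negative pre-activations leak into the next layer; everything below is a hypothetical illustration, not the paper's code:

```python
def relu(x):
    """Correct ReLU over a list of pre-activations."""
    return [max(0.0, v) for v in x]

def faulted_relu(x):
    """Models an instruction-skip fault: the comparison with 0 is
    skipped and the input passes through unchanged."""
    return list(x)

pre_acts = [-1.5, 0.3, -0.7]
assert relu(pre_acts) == [0.0, 0.3, 0.0]
assert faulted_relu(pre_acts) == [-1.5, 0.3, -0.7]  # negative values survive
```

Inputs crafted to exploit such a faulted activation at training time are what the paper calls fooling inputs, later triggering controlled misclassifications at inference.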

Backdoor Attack • Image Classification • +1

SNIFF: Reverse Engineering of Neural Networks with Fault Attacks

no code implementations • 23 Feb 2020 • Jakub Breier, Dirmanto Jap, Xiaolu Hou, Shivam Bhasin, Yang Liu

In this paper we explore the possibility of reverse engineering neural networks using fault attacks.

DeepLaser: Practical Fault Attack on Deep Neural Networks

no code implementations • 15 Jun 2018 • Jakub Breier, Xiaolu Hou, Dirmanto Jap, Lei Ma, Shivam Bhasin, Yang Liu

As deep learning systems are widely adopted in safety- and security-critical applications, such as autonomous vehicles and banking systems, malicious faults and attacks become a tremendous concern, as they could potentially lead to catastrophic consequences.

Autonomous Vehicles
