1 code implementation • 21 Apr 2024 • Haoyan Gong, Yuzheng Feng, Zhenrong Zhang, Xianxu Hou, Jingxin Liu, Siqi Huang, Hongbin Liu
Vehicle license plate recognition is a crucial task in intelligent traffic management systems.
no code implementations • 4 Mar 2024 • Qingyao Tian, Huai Liao, Xinyan Huang, Jian Chen, Zihui Zhang, Bingyu Yang, Sebastien Ourselin, Hongbin Liu
Specifically, the relative pose changes are fed into the registration process as the initial guess to boost its accuracy and speed.
no code implementations • 22 Feb 2024 • Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong
However, foundation models are vulnerable to backdoor attacks, and a backdoored foundation model is a single point of failure of the AI ecosystem, e.g., multiple downstream classifiers inherit its backdoor vulnerabilities simultaneously.
1 code implementation • 22 Feb 2024 • Wen Huang, Hongbin Liu, Minxin Guo, Neil Zhenqiang Gong
We find that existing MLLMs such as GPT-4V, LLaVA-1.5, and MiniGPT-v2 hallucinate for a large fraction of the instances in our benchmark.
no code implementations • 20 Feb 2024 • Qingyao Tian, Huai Liao, Xinyan Huang, Bingyu Yang, Jinlin Wu, Jian Chen, Lujie Li, Hongbin Liu
Localizing the bronchoscope in real time is essential for ensuring intervention quality.
1 code implementation • 15 Feb 2024 • Henry W. Sprueill, Carl Edwards, Khushbu Agarwal, Mariefel V. Olarte, Udishnu Sanyal, Conrad Johnston, Hongbin Liu, Heng Ji, Sutanay Choudhury
The discovery of new catalysts is essential for designing new and more efficient chemical processes for the transition to a sustainable future.
1 code implementation • 17 Jan 2024 • Mikel De Iturrate Reyzabal, Mingcong Chen, Wei Huang, Sebastien Ourselin, Hongbin Liu
In this paper, we present a new vision-haptic dataset (DaFoEs) with variable soft environments for the training of deep neural models.
no code implementations • 16 Nov 2023 • Xingjian Luo, You Pang, Zhen Chen, Jinlin Wu, Zongmin Zhang, Zhen Lei, Hongbin Liu
To address these two challenges, we propose a Surgical Phase LocAlization Network, named SurgPLAN, to facilitate more accurate and stable surgical phase recognition based on the principle of temporal detection.
1 code implementation • 16 Nov 2023 • Zhen Sun, Huan Xu, Jinlin Wu, Zhen Chen, Zhen Lei, Hongbin Liu
To address this issue, we propose a novel yet effective weakly-supervised surgical instrument instance segmentation approach, named Point-based Weakly-supervised Instance Segmentation (PWISeg).
no code implementations • 19 Aug 2023 • Zhenrong Zhang, Jianan Liu, Yuxuan Xia, Tao Huang, Qing-Long Han, Hongbin Liu
State-of-the-art approaches usually employ a tracking-by-detection method, in which data association plays a critical role.
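As a generic illustration of the data-association step in tracking-by-detection (a minimal greedy IoU matcher; this is a common baseline sketch, not the specific method of this paper):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def greedy_associate(tracks, detections, iou_thresh=0.3):
    """Greedy data association: repeatedly match the remaining
    (track, detection) pair with the highest IoU above the threshold.
    Returns a dict mapping track index -> detection index."""
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True)
    matched_t, matched_d, out = set(), set(), {}
    for score, ti, di in pairs:
        if score < iou_thresh:
            break  # remaining pairs overlap too little to match
        if ti not in matched_t and di not in matched_d:
            out[ti] = di
            matched_t.add(ti)
            matched_d.add(di)
    return out
```

In practice, trackers often replace the greedy loop with optimal assignment (e.g., the Hungarian algorithm) and augment IoU with motion or appearance cues.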
no code implementations • CVPR 2023 • Jinghuai Zhang, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong
Existing certified defenses against adversarial point clouds suffer from a key limitation: their certified robustness guarantees are probabilistic, i.e., they produce an incorrect certified robustness guarantee with some probability.
no code implementations • 6 Dec 2022 • Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong
In this work, we perform the first systematic, principled measurement study to understand whether and when a pre-trained encoder can address the limitations of secure or privacy-preserving supervised learning algorithms.
no code implementations • 15 Nov 2022 • Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
In this work, we take the first step toward analyzing the limitations of existing backdoor attacks and propose CorruptEncoder, a new DPBA against CL.
1 code implementation • 25 Jul 2022 • Xinlei He, Hongbin Liu, Neil Zhenqiang Gong, Yang Zhang
The results show that early stopping can mitigate the membership inference attack, but at the cost of degraded model utility.
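For context on the early-stopping defense discussed above, a minimal, generic early-stopping loop looks like the following (all names are hypothetical; this is a standard sketch, not the paper's implementation):

```python
def train_with_early_stopping(train_step, val_loss, max_epochs=100, patience=3):
    """Generic early stopping: stop when validation loss has not improved
    for `patience` consecutive epochs. `train_step(epoch)` runs one epoch
    of training; `val_loss(epoch)` returns the validation loss afterwards.
    Returns the epoch index that achieved the best validation loss."""
    best_loss = float("inf")
    best_epoch = -1
    bad_epochs = 0
    for epoch in range(max_epochs):
        train_step(epoch)
        loss = val_loss(epoch)
        if loss < best_loss:
            best_loss, best_epoch, bad_epochs = loss, epoch, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break  # stop early to limit overfitting (and memorization)
    return best_epoch
```

The intuition behind the defense is that stopping before the model memorizes its training set reduces the membership signal an attacker can exploit, while also capping how well the model fits the data.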
no code implementations • 13 May 2022 • Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
In this work, we propose PoisonedEncoder, a data poisoning attack to contrastive learning.
no code implementations • 15 Jan 2022 • Yupei Liu, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong
A pre-trained encoder may be deemed confidential because its training requires large amounts of data and computational resources, and because its public release may facilitate misuse of AI, e.g., for deepfake generation.
no code implementations • 28 Oct 2021 • Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong
A pre-trained foundation model is like an "operating system" of the AI ecosystem.
no code implementations • 25 Aug 2021 • Hongbin Liu, Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong
EncoderMI can be used 1) by a data owner to audit whether its (public) data was used to pre-train an image encoder without its authorization or 2) by an attacker to compromise privacy of the training data when it is private/sensitive.
no code implementations • CVPR 2021 • Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
Our first major theoretical contribution is that we show PointGuard provably predicts the same label for a 3D point cloud when the number of adversarially modified, added, and/or deleted points is bounded.
no code implementations • 19 Feb 2021 • Hongbin Liu, Guang Hao Low, Damian S. Steiger, Thomas Häner, Markus Reiher, Matthias Troyer
Molecular science is governed by the dynamics of electrons, atomic nuclei, and their interaction with electromagnetic fields.
no code implementations • ICLR 2022 • Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Hongbin Liu, Neil Zhenqiang Gong
For instance, our method can build a classifier that achieves a certified top-3 accuracy of 69.2% on ImageNet when an attacker can arbitrarily perturb 5 pixels of a testing image.
1 code implementation • 3 Oct 2020 • Kun Zhao, Yongkun Liu, Siyuan Hao, Shaoxing Lu, Hongbin Liu, Lijian Zhou
Instead of using visual features of the whole image directly as common image-level models based on convolutional neural networks (CNNs) do, the proposed framework firstly obtains the bounding boxes of buildings in street view images from a detector.
no code implementations • 22 Aug 2020 • Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
Bagging, a popular ensemble learning framework, randomly creates some subsamples of the training data, trains a base model for each subsample using a base learner, and takes a majority vote among the base models when making predictions.
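The bagging procedure described above can be sketched as follows (a minimal illustration; `nn_learner` is a toy base learner for demonstration, not what the paper studies):

```python
import random
from collections import Counter

def bagging_predict(train_data, train_labels, x, n_models=5,
                    base_learner=None, subsample_frac=0.6, seed=0):
    """Bagging sketch: train base models on random subsamples of the
    training data, then take a majority vote among their predictions
    for input x. `base_learner(X, y)` returns a predict function."""
    rng = random.Random(seed)
    n = len(train_data)
    k = max(1, int(subsample_frac * n))
    votes = []
    for _ in range(n_models):
        # Randomly create a subsample of the training data (with replacement).
        idx = [rng.randrange(n) for _ in range(k)]
        X = [train_data[i] for i in idx]
        y = [train_labels[i] for i in idx]
        predict = base_learner(X, y)  # train one base model on the subsample
        votes.append(predict(x))
    # Majority vote among the base models' predictions.
    return Counter(votes).most_common(1)[0][0]

# A trivial base learner for illustration: 1-nearest-neighbour on scalars.
def nn_learner(X, y):
    def predict(x):
        i = min(range(len(X)), key=lambda j: abs(X[j] - x))
        return y[i]
    return predict
```

The randomness in subsampling is what the line of work above exploits for certified robustness: a bounded number of poisoned training examples can only appear in a bounded fraction of subsamples, so the majority vote can be certified to be unaffected.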
no code implementations • 15 Aug 2017 • Shan Luo, Leqi Zhu, Kaspar Althoefer, Hongbin Liu
A traditional method using handcrafted features with a shallow classifier was taken as a benchmark, and the attained recognition rate was only 58.22%.