no code implementations • 21 Apr 2025 • Songping Wang, Hanqing Liu, Yueming Lyu, Xiantao Hu, Ziwen He, Wei Wang, Caifeng Shan, Liang Wang
Second, existing methods struggle with the trade-off between clean accuracy and adversarial robustness.
no code implementations • 16 Apr 2025 • Songping Wang, Yueming Lyu, Shiqi Liu, Ning Li, Tong Tong, Hao Sun, Caifeng Shan
The rise of customized diffusion models has spurred a boom in personalized visual content creation, but also poses risks of malicious misuse, severely threatening personal privacy and copyright protection.
no code implementations • 8 Mar 2025 • Songping Wang, Xinquan Yue, Yueming Lyu, Caifeng Shan
To explore this critical safety issue, we conduct an analysis and find that, because adversarial examples overfit the specific basis functions of KANs, they transfer poorly among different KANs.
no code implementations • 22 Feb 2025 • Zheling Meng, Bo Peng, Xiaochuan Jin, Yueming Lyu, Wei Wang, Jing Dong
In this paper, motivated by the notion that concept erasure on the output side, i.e., the generated images, may be more direct and effective, we propose to check concepts based on intermediately generated images and correct them in the remainder of the generation process.
1 code implementation • 2 Feb 2025 • Kim Yong Tan, Yueming Lyu, Ivor Tsang, Yew-Soon Ong
In this work, we propose a novel and simple algorithm, Fast Direct, for query-efficient online black-box target generation.
no code implementations • 11 Nov 2024 • Xingrui Yu, Zhenglin Wan, David Mark Bossens, Yueming Lyu, Qing Guo, Ivor W. Tsang
Learning diverse and high-performance behaviors from a limited set of demonstrations is a grand challenge.
no code implementations • 16 Oct 2024 • Feiyang Ye, Yueming Lyu, Xuehao Wang, Masashi Sugiyama, Yu Zhang, Ivor Tsang
To address those problems in black-box optimization, we propose a novel Sharpness-Aware Black-box Optimization (SABO) algorithm, which applies a sharpness-aware minimization strategy to improve the model generalization.
no code implementations • 8 Oct 2024 • Zhenglin Wan, Xingrui Yu, David Mark Bossens, Yueming Lyu, Qing Guo, Flint Xiaofeng Fan, Ivor Tsang
Imitation learning (IL) has shown great potential in various applications, such as robot control.
1 code implementation • 7 Jun 2024 • Feng Hong, Yueming Lyu, Jiangchao Yao, Ya Zhang, Ivor W. Tsang, Yanfeng Wang
The remarkable success of modern machine learning models on large datasets often demands extensive training time and resource consumption.
no code implementations • 2 Jun 2024 • Yueming Lyu, Kim Yong Tan, Yew Soon Ong, Ivor W. Tsang
Diffusion models have demonstrated great potential in generating high-quality content for images, natural language, protein domains, etc.
no code implementations • 8 Dec 2023 • Yue Jiang, Yueming Lyu, Tianxiang Ma, Bo Peng, Jing Dong
Extensive empirical evaluations demonstrate that the introduced model effectively corrects the racial stereotypes of the well-trained Stable Diffusion model while leaving the original model unchanged.
1 code implementation • 12 Oct 2023 • Yueming Lyu, Kang Zhao, Bo Peng, Yue Jiang, Yingya Zhang, Jing Dong
Based on DeltaSpace, we propose a novel framework called DeltaEdit, which maps the CLIP visual feature differences to the latent space directions of a generative model during the training phase, and predicts the latent space directions from the CLIP textual feature differences during the inference phase.
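As a rough sketch of this train-on-visual, infer-from-text scheme, the snippet below uses a toy linear mapper standing in for DeltaEdit's trained network; all names and dimensions here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: CLIP feature dimension and generator latent dimension.
CLIP_DIM, LATENT_DIM = 512, 512

# Stand-in linear "mapper" from CLIP-space differences to latent directions;
# DeltaEdit trains a network for this, a random matrix is used only for shape.
W = rng.standard_normal((LATENT_DIM, CLIP_DIM)) * 0.01

def predict_direction(delta_clip):
    """Map a CLIP feature difference to a latent-space editing direction."""
    return W @ delta_clip

# Training phase: CLIP *visual* feature differences of two images supervise
# the mapper toward the corresponding latent-space difference.
delta_visual = rng.standard_normal(CLIP_DIM)
latent_dir_train = predict_direction(delta_visual)

# Inference phase: the same mapper is applied to the CLIP *textual* feature
# difference of source and target prompts, relying on the assumed alignment
# of CLIP's image and text spaces.
delta_text = rng.standard_normal(CLIP_DIM)
latent_dir_infer = predict_direction(delta_text)
print(latent_dir_infer.shape)  # (512,)
```

The key design point is that the mapper never sees text at training time; text-driven editing works only because visual and textual feature differences are assumed to share a common delta space.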
no code implementations • 30 Jul 2023 • Yueming Lyu, Yue Jiang, Bo Peng, Jing Dong
InfoStyler formulates the disentanglement representation learning as an information compression problem by eliminating style statistics from the content image and removing the content structure from the style image.
no code implementations • 26 Jun 2023 • Yueming Lyu, Yue Jiang, Ziwen He, Bo Peng, Yunfan Liu, Jing Dong
The privacy and security of face data on social media face unprecedented challenges, as such data is vulnerable to unauthorized access and identification.
1 code implementation • 28 Apr 2023 • Jing Li, Yuangang Pan, Yueming Lyu, Yinghua Yao, Yulei Sui, Ivor W. Tsang
Unlike existing model tuning methods, where the target data is always ready for calculating model gradients, the model providers in EXPECTED only see some feedback, which could be as simple as scalars, such as inference accuracy or usage rate.
no code implementations • 5 Apr 2023 • Kim Yong Tan, Yueming Lyu, Yew Soon Ong, Ivor W. Tsang
This need requires the ANN search algorithm to support fast online data deletion and insertion.
no code implementations • 2 Apr 2023 • Cheng Chen, Yueming Lyu, Ivor W. Tsang
However, conventional partial-label learning (PLL) methods are still vulnerable to the high ratio of noisy partial labels, especially in a large labelling space.
1 code implementation • CVPR 2023 • Yueming Lyu, Tianwei Lin, Fu Li, Dongliang He, Jing Dong, Tieniu Tan
Our key idea is to investigate and identify a space, namely delta image and text space that has well-aligned distribution between CLIP visual feature differences of two images and CLIP textual embedding differences of source and target texts.
1 code implementation • 29 Sep 2021 • Yueming Lyu, Peibin Chen, Jingna Sun, Bo Peng, Xu Wang, Jing Dong
To evaluate the effectiveness and show the general use of our method, we conduct a set of experiments on makeup transfer and semantic image synthesis.
no code implementations • 29 Sep 2021 • Jing Li, Yuangang Pan, Yueming Lyu, Yinghua Yao, Ivor Tsang
Instead of learning from scratch, fine-tuning a pre-trained model to fit a related target dataset of interest or downstream tasks has been a handy trick to achieve the desired performance.
no code implementations • 11 Jun 2021 • Yueming Lyu, Ivor Tsang
We further establish a new generalization bound of our deep structured approximated NOK architecture.
no code implementations • 21 Apr 2021 • Yueming Lyu, Jing Dong, Bo Peng, Wei Wang, Tieniu Tan
Since human faces are symmetrical in the UV space, we can conveniently remove the undesired shadow and occlusion from the reference image by carefully designing a Flip Attention Module (FAM).
no code implementations • 1 Jan 2021 • Yueming Lyu, Xingrui Yu, Ivor Tsang
In this work, we take an initial step to designing a simple robust layer as a lightweight plug-in for vanilla deep models.
no code implementations • 1 Jan 2021 • Xingrui Yu, Yueming Lyu, Ivor Tsang
Our method learns useful planning computations with a meaningful reward function that focuses on the resulting region of an agent executing an action.
no code implementations • NeurIPS 2020 • Yueming Lyu, Yuan Yuan, Ivor W. Tsang
We theoretically prove a lower and an upper bound of the minimum pairwise distance of any non-degenerate rank-1 lattice.
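For context, a rank-1 lattice with $n$ points and generating vector $z$ places points at $x_i = \{i z / n\}$ (componentwise fractional part). A minimal sketch of computing its minimum pairwise distance on the unit torus follows; the helper names are ours, and the brute-force pairwise check is illustrative only, not the paper's bounding technique.

```python
import numpy as np
from itertools import combinations

def rank1_lattice(n, z):
    """Rank-1 lattice: x_i = frac(i * z / n) for i = 0..n-1."""
    i = np.arange(n)[:, None]
    return (i * np.asarray(z)[None, :] / n) % 1.0

def min_pairwise_distance(points):
    """Minimum wrap-around (toroidal) l2 distance over all point pairs."""
    best = np.inf
    for a, b in combinations(points, 2):
        d = np.abs(a - b)
        d = np.minimum(d, 1.0 - d)  # shortest displacement on the unit torus
        best = min(best, np.linalg.norm(d))
    return best

lat = rank1_lattice(8, [1, 3])
print(min_pairwise_distance(lat))  # sqrt(8)/8, approximately 0.3536
```

Because a rank-1 lattice is a group under addition mod 1, the minimum pairwise distance equals the minimum distance from the origin to any nonzero lattice point, which is what makes such bounds tractable to state.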
1 code implementation • ICML 2020 • Xingrui Yu, Yueming Lyu, Ivor W. Tsang
Thus, our module provides the imitation agent both the intrinsic intention of the demonstrator and a better exploration ability, which is critical for the agent to outperform the demonstrator.
no code implementations • 9 Oct 2019 • Yueming Lyu, Ivor W. Tsang
Empirically, our method with full matrix update achieves competitive performance compared with one of the state-of-the-art methods, CMA-ES, on benchmark test problems.
no code implementations • ICLR 2020 • Yueming Lyu, Ivor W. Tsang
Although the 0-1 loss has some robust properties, it is difficult to optimize.
no code implementations • 24 May 2019 • Yueming Lyu, Yuan Yuan, Ivor W. Tsang
In this work, we investigate black-box optimization from the perspective of frequentist kernel methods.
no code implementations • ICLR 2019 • Yuan Yuan, Yueming Lyu, Xi Shen, Ivor W. Tsang, Dit-yan Yeung
The MAAN employs a novel marginalized average aggregation (MAA) module and learns a set of latent discriminative probabilities in an end-to-end fashion.
Ranked #14 on Weakly Supervised Action Localization on ActivityNet-1.3 (mAP@0.5 metric)
no code implementations • ICML 2017 • Yueming Lyu
According to (Brauchart & Grabner, 2015), optimizing the discrete Riesz s-energy can generate asymptotically uniformly distributed point sets on $\mathbb{S}^{d-1}$.
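For illustration, the discrete Riesz s-energy of a point set $\{x_1, \dots, x_n\}$ is $E_s(X) = \sum_{i \ne j} \|x_i - x_j\|^{-s}$. A minimal sketch of evaluating it follows; the function name is ours, and this only computes the energy, it does not perform the optimization the snippet above refers to.

```python
import numpy as np

def riesz_s_energy(points, s=1.0):
    """Discrete Riesz s-energy: sum over ordered pairs i != j of 1/||x_i - x_j||^s."""
    n = len(points)
    diffs = points[:, None, :] - points[None, :, :]   # all pairwise differences
    dists = np.linalg.norm(diffs, axis=-1)
    mask = ~np.eye(n, dtype=bool)                     # exclude the i == j diagonal
    return np.sum(1.0 / dists[mask] ** s)

# Two antipodal points on the unit circle S^1 are distance 2 apart, so with
# s = 1 the energy over ordered pairs is 1/2 + 1/2 = 1.
pts = np.array([[1.0, 0.0], [-1.0, 0.0]])
print(riesz_s_energy(pts, s=1.0))  # 1.0
```

Minimizing this energy pushes points apart on the sphere, which is the mechanism behind the asymptotic uniformity result cited above.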