1 code implementation • 18 Feb 2025 • Feng Luo, Rui Yang, Hao Sun, Chunyuan Deng, Jiarui Yao, Jingyan Shen, Huan Zhang, Hanjie Chen
Understanding human preferences is crucial for improving foundation models and building personalized AI systems.
no code implementations • 13 Feb 2025 • Chaoyi Zhou, Xi Liu, Feng Luo, Siyu Huang
The framework consists of three stages: (1) a correspondence-aware autoencoding method that enhances the 3D consistency of 2D latent representations, (2) a latent radiance field (LRF) that lifts these 3D-aware 2D representations into 3D space, and (3) a VAE-Radiance Field (VAE-RF) alignment strategy that improves image decoding from the rendered 2D representations.
no code implementations • 16 Dec 2024 • Yuning Han, Bingyin Zhao, Rui Chu, Feng Luo, Biplab Sikdar, Yingjie Lao
In this paper, we propose UIBDiffusion, the universal imperceptible backdoor attack for diffusion models, which allows us to achieve superior attack and generation performance while evading state-of-the-art defenses.
1 code implementation • 16 Nov 2024 • Chengyuan Deng, Jie Gao, Kevin Lu, Feng Luo, Hongbin Sun, Cheng Xin
Neuc-MDS efficiently optimizes the choice of (both positive and negative) eigenvalues of the dissimilarity Gram matrix to reduce STRESS, the sum of squared pairwise errors.
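To make the idea concrete, here is a minimal sketch (not the authors' implementation) of double-centering a dissimilarity matrix, keeping eigenvalues of either sign, and measuring a STRESS-like sum of squared pairwise errors; the selection-by-magnitude rule and the use of squared dissimilarities are assumptions for illustration.

```python
import numpy as np

def neuc_mds_stress(D, k):
    """Illustrative non-Euclidean MDS step: double-center the squared
    dissimilarities, keep the k eigenvalues of largest magnitude (positive
    or negative), and report a STRESS-like sum of squared pairwise errors
    between original and reconstructed squared dissimilarities."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n             # centering matrix
    G = -0.5 * J @ (D ** 2) @ J                      # Gram matrix (may be indefinite)
    w, V = np.linalg.eigh(G)
    keep = np.argsort(-np.abs(w))[:k]                # assumption: select by magnitude
    G_hat = (V[:, keep] * w[keep]) @ V[:, keep].T    # low-rank reconstruction
    d = np.diag(G_hat)
    D_hat_sq = d[:, None] + d[None, :] - 2 * G_hat   # reconstructed squared dissimilarities
    return np.sum((D ** 2 - D_hat_sq) ** 2)

# toy usage with a Euclidean distance matrix
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(neuc_mds_stress(D, k=2))
```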
no code implementations • 5 Jul 2024 • Jiawei Xu, Rui Yang, Feng Luo, Meng Fang, Baoxiang Wang, Lei Han
These results highlight the potential of robust sequence modeling for learning from noisy or corrupted offline datasets, thereby promoting the reliable application of offline RL in real-world tasks.
no code implementations • 4 Jun 2024 • Cong Wang, Kuan Tian, Jun Zhang, Yonghang Guan, Feng Luo, Fei Shen, Zhiwei Jiang, Qing Gu, Xiao Han, Wei Yang
In our work on portrait video generation, we identified audio signals as particularly weak, often overshadowed by stronger signals such as facial pose and the reference image.
no code implementations • 31 May 2024 • Jingjing Wang, Dan Zhang, Feng Luo
Here, we propose a unified DDDM (uDDDM) framework that generates images in either one step or multiple steps for both the Variance Preserving (VP) and Variance Exploding (VE) cases.
no code implementations • 22 May 2024 • Dan Zhang, Jingjing Wang, Feng Luo
In this paper, we present the Directly Denoising Diffusion Model (DDDM): a simple and generic approach for generating realistic images with few-step sampling, while multi-step sampling is still preserved for better performance.
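For context, the sketch below shows a generic few-step sampler for a network that directly predicts the clean image from a noisy input and is then re-noised to the next (smaller) noise level; it is an assumption-laden illustration of the general pattern, not the exact DDDM update rule.

```python
import torch

@torch.no_grad()
def few_step_sample(model, shape, sigmas, device="cpu"):
    """Generic few-step direct-denoising sampler (illustrative only).
    `sigmas` is a decreasing list of noise levels, e.g. [80.0, 10.0, 1.0];
    `model(x, sigma)` is assumed to return an estimate of the clean image."""
    x = torch.randn(shape, device=device) * sigmas[0]   # start from pure noise
    for i, sigma in enumerate(sigmas):
        x0_hat = model(x, torch.full((shape[0],), sigma, device=device))
        if i + 1 < len(sigmas):
            # re-noise the current clean estimate to the next noise level
            x = x0_hat + sigmas[i + 1] * torch.randn_like(x0_hat)
        else:
            x = x0_hat
    return x
```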
no code implementations • 17 Apr 2024 • Mary Aiyetigbo, Alexander Korte, Ethan Anderson, Reda Chalhoub, Peter Kalivas, Feng Luo, Nianyi Li
In this paper, we introduce a novel unsupervised network to denoise microscopy videos consisting of image sequences captured by a fixed-location microscopy camera.
no code implementations • 1 Apr 2024 • Mingqi Li, Feng Luo
Then, we prepend soft prompts to the original pre-trained language model and only update the selected parameters together with prompt-related parameters when adapting to the downstream tasks.
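A minimal sketch of this setup is shown below: trainable prompt embeddings are prepended to the input embeddings, and only the prompt parameters plus a chosen subset of backbone parameters are left trainable. The wrapper class, the `selected_names` criterion, and the prompt length are illustrative assumptions, not the paper's exact selection scheme.

```python
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    """Prepend trainable prompt embeddings to the input embeddings of a
    pre-trained backbone that maps embeddings -> hidden states."""
    def __init__(self, backbone, embed_dim, prompt_len=20):
        super().__init__()
        self.backbone = backbone
        self.prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, input_embeds):                      # (batch, seq, dim)
        batch = input_embeds.size(0)
        prompts = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return self.backbone(torch.cat([prompts, input_embeds], dim=1))

def select_trainable(model, selected_names):
    """Freeze everything except the prompt parameters and a chosen subset
    of backbone parameters (the selection criterion is hypothetical here)."""
    for name, p in model.named_parameters():
        p.requires_grad = name.startswith("prompt") or any(s in name for s in selected_names)
```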
2 code implementations • 15 Feb 2024 • Rui Yang, Xiaoman Pan, Feng Luo, Shuang Qiu, Han Zhong, Dong Yu, Jianshu Chen
We consider the problem of multi-objective alignment of foundation models with human preferences, which is a critical step towards helpful and harmless AI systems.
no code implementations • 15 Dec 2023 • Yucong Dai, Gen Li, Feng Luo, Xiaolong Ma, Yongkai Wu
To address this, we define a fair pruning task where a sparse model is derived subject to fairness requirements.
no code implementations • 1 Nov 2023 • Jingjing Wang, Joshua Luo, Grace Yang, Allen Hong, Feng Luo
Large Language Models (LLMs), representing a significant achievement in artificial intelligence (AI) research, have demonstrated their ability in a multitude of tasks.
1 code implementation • 18 Oct 2023 • Feng Luo, Jinxi Xiang, Jun Zhang, Xiao Han, Wei Yang
To alleviate the huge computational cost required by pixel-based diffusion SR, latent-based methods utilize a feature encoder to transform the image and then implement the SR image generation in a compact latent space.
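The latent-based pipeline described above can be summarized in a short, hedged skeleton: encode the low-resolution image, generate in the compact latent space, then decode. The function and argument names are placeholders, not the paper's API.

```python
import torch

def latent_sr(lr_image, encoder, diffusion_sampler, decoder):
    """Generic latent-space super-resolution pipeline (illustrative):
    encode the LR image, run the diffusion sampler in the compact latent
    space conditioned on that encoding, then decode back to pixels."""
    with torch.no_grad():
        cond = encoder(lr_image)          # compact latent conditioning
        z_sr = diffusion_sampler(cond)    # SR generation in latent space
        return decoder(z_sr)              # back to pixel space
```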
1 code implementation • 12 Jun 2023 • Xin-Cheng Wen, Cuiyun Gao, Feng Luo, Haoyu Wang, Ge Li, Qing Liao
(2) an adaptive re-weighting module, which adjusts the learning weights for different types according to the training epoch and the number of associated samples via a novel training loss.
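One way such re-weighting can look is sketched below: rarer types get larger weights, and the strength of re-weighting is ramped up with the training epoch. The inverse-frequency form and linear schedule are assumptions for illustration; the paper's loss may differ.

```python
import numpy as np

def adaptive_weights(class_counts, epoch, total_epochs):
    """Illustrative per-type weights that interpolate from uniform toward
    inverse-frequency weighting as training progresses."""
    counts = np.asarray(class_counts, dtype=float)
    inv_freq = counts.sum() / (len(counts) * counts)   # inverse-frequency weights
    alpha = epoch / max(total_epochs, 1)               # ramp up over training
    weights = (1 - alpha) * np.ones_like(inv_freq) + alpha * inv_freq
    return weights / weights.mean()                    # normalize around 1

print(adaptive_weights([900, 80, 20], epoch=5, total_epochs=10))
```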
no code implementations • 8 Jun 2023 • Fei Ding, Dan Zhang, Yin Yang, Venkat Krovi, Feng Luo
We conduct a theoretical analysis of the proposed loss and highlight how it assigns different weights to negative samples during the process of disentangling the feature representation.
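To illustrate what per-negative weighting means in a contrastive loss, here is a minimal InfoNCE-style sketch in which each negative carries its own weight in the denominator; the weighting rule itself is not specified in the abstract and is treated as a given input, so this is not the paper's loss.

```python
import torch
import torch.nn.functional as F

def weighted_contrastive_loss(anchor, positive, negatives, neg_weights, tau=0.1):
    """InfoNCE-style loss with explicit per-negative weights (illustrative).
    anchor, positive: (batch, dim); negatives: (batch, n_neg, dim);
    neg_weights: (batch, n_neg) weights applied to each negative term."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_logit = (anchor * positive).sum(-1, keepdim=True) / tau
    neg_logits = torch.einsum("bd,bnd->bn", anchor, negatives) / tau
    denom = pos_logit.exp() + (neg_weights * neg_logits.exp()).sum(-1, keepdim=True)
    return -(pos_logit - denom.log()).mean()
```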
no code implementations • 2 Nov 2022 • Mingqi Li, Fei Ding, Dan Zhang, Long Cheng, Hongxin Hu, Feng Luo
In this paper, we propose Multi-level Multilingual Knowledge Distillation (MMKD), a novel method for improving multilingual language models.
no code implementations • 22 Dec 2021 • Nishant Vishwamitra, Hongxin Hu, Ziming Zhao, Long Cheng, Feng Luo
We then introduce a new type of multimodal adversarial attack in MUROAN, called the decoupling attack, which aims to compromise multimodal models by decoupling their fused modalities.
no code implementations • 1 Sep 2021 • Sen Yang, Feng Luo, Jun Zhang, Xiyue Wang
Mitotic count is the most important morphological feature for breast cancer grading.
no code implementations • 1 Jul 2021 • Rui Yang, Meng Fang, Lei Han, Yali Du, Feng Luo, Xiu Li
Replacing original goals with virtual goals generated from interaction with a trained dynamics model leads to a novel relabeling method, model-based relabeling (MBR).
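A hedged sketch of this relabeling idea is given below: roll a trained dynamics model forward from each stored state and treat the imagined final state as a virtual goal. The rollout horizon, the goal-equals-state assumption, and the sparse reward threshold are all illustrative choices, not details from the paper.

```python
import numpy as np

def model_based_relabel(transitions, dynamics_model, policy, horizon=5):
    """Relabel (state, action, goal, next_state) tuples with virtual goals
    obtained by imagining a short rollout under a learned dynamics model."""
    relabeled = []
    for (s, a, g, s_next) in transitions:
        state = s
        for _ in range(horizon):                       # imagined rollout
            state = dynamics_model(state, policy(state, g))
        virtual_goal = state                           # assumption: goal = achieved state
        reward = float(np.linalg.norm(s_next - virtual_goal) < 0.05)
        relabeled.append((s, a, virtual_goal, s_next, reward))
    return relabeled
```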
no code implementations • 18 Jun 2021 • Feng Luo, Bin-Bin Gao, Jiangpeng Yan, Xiu Li
Experiments also show that our proposed method achieves competitive performance compared to existing boundary-based methods with a lightweight design and a simple pipeline.
no code implementations • 25 Feb 2021 • Rui Yang, Jiafei Lyu, Yu Yang, Jiangpeng Yan, Feng Luo, Dijun Luo, Lanqing Li, Xiu Li
Two main challenges in multi-goal reinforcement learning are sparse rewards and sample inefficiency.
1 code implementation • 1 Dec 2020 • Fei Ding, Yin Yang, Hongxin Hu, Venkat Krovi, Feng Luo
Since it is important to transfer the full knowledge from teacher to student, we introduce Multi-level Knowledge Distillation (MLKD), which effectively considers both knowledge alignment and correlation.
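One common way to realize "alignment" and "correlation" terms is sketched below: the first matches each student feature to its teacher counterpart, the second matches the pairwise similarity structure across a batch. These specific cosine and Gram-matrix forms are assumptions for illustration, not necessarily the losses used in MLKD.

```python
import torch
import torch.nn.functional as F

def alignment_loss(student_feat, teacher_feat):
    """Knowledge alignment: match each student feature to the teacher's."""
    return 1 - F.cosine_similarity(student_feat, teacher_feat, dim=-1).mean()

def correlation_loss(student_feat, teacher_feat):
    """Knowledge correlation: match pairwise similarity structure
    across the batch rather than individual features."""
    s = F.normalize(student_feat, dim=-1)
    t = F.normalize(teacher_feat, dim=-1)
    return F.mse_loss(s @ s.t(), t @ t.t())

def mlkd_style_loss(student_feat, teacher_feat, lam=1.0):
    return alignment_loss(student_feat, teacher_feat) + lam * correlation_loss(student_feat, teacher_feat)
```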
no code implementations • 15 Sep 2020 • Siyuan Shen, Tianjia Shao, Kun Zhou, Chenfanfu Jiang, Feng Luo, Yin Yang
We believe our method will inspire a wide-range of new algorithms for deep learning and numerical optimization.
no code implementations • 16 Jun 2020 • Jinghua Yu, Stefan Wagner, Feng Luo
In this paper, a system-oriented approach is proposed based on System-Theoretic Process Analysis (STPA).
Cryptography and Security • Software Engineering • Systems and Control
no code implementations • 13 Nov 2019 • Fei Ding, Feng Luo, Yin Yang
We enforce the encoder and the generator of the GAN to form an encoder-generator pair in addition to the generator-encoder pair, which enables us to avoid low-diversity generation and trivial latent features.
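The two pairings can be expressed as simple reconstruction terms, sketched below; the adversarial terms of the full GAN objective are omitted, and the MSE form of each consistency term is an assumption for illustration.

```python
import torch
import torch.nn.functional as F

def pairing_losses(encoder, generator, x_real, z_prior):
    """Illustrative bidirectional consistency terms: a generator-encoder pair
    (recover the sampled latent of a generated image) and an encoder-generator
    pair (reconstruct a real image from its encoding)."""
    # generator -> encoder: recover the sampled latent code
    z_rec = encoder(generator(z_prior))
    loss_ge = F.mse_loss(z_rec, z_prior)
    # encoder -> generator: reconstruct the real image from its latent
    x_rec = generator(encoder(x_real))
    loss_eg = F.mse_loss(x_rec, x_real)
    return loss_ge, loss_eg
```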
1 code implementation • 9 Jul 2019 • Chien-Chun Ni, Yu-Yao Lin, Feng Luo, Jie Gao
Many complex networks in the real world have community structures -- groups of well-connected nodes with important functional roles.
Social and Information Networks • Physics and Society
no code implementations • ICLR 2018 • Xiang Zhang, Nishant Vishwamitra, Hongxin Hu, Feng Luo
The numbers of convolution layers and parameters increase only linearly in Crescendo blocks.
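One block design consistent with this description is sketched below: parallel branches of depth 1, 2, ..., n whose outputs are averaged, so layers and parameters grow linearly with the number of branches. The branch structure and fusion are assumptions based on the abstract, not the exact published architecture.

```python
import torch
import torch.nn as nn

class CrescendoStyleBlock(nn.Module):
    """Sketch of a block whose convolution layers grow only linearly:
    parallel branches of increasing depth, averaged at the output."""
    def __init__(self, channels, n_branches=3):
        super().__init__()
        self.branches = nn.ModuleList()
        for depth in range(1, n_branches + 1):
            layers = []
            for _ in range(depth):
                layers += [nn.Conv2d(channels, channels, 3, padding=1),
                           nn.BatchNorm2d(channels), nn.ReLU(inplace=True)]
            self.branches.append(nn.Sequential(*layers))

    def forward(self, x):
        return torch.stack([b(x) for b in self.branches], dim=0).mean(dim=0)
```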
1 code implementation • 22 Feb 2013 • Xianfeng Gu, Feng Luo, Jian Sun, S.-T. Yau
In this paper, we develop several related finite dimensional variational principles for discrete optimal transport (DOT), Minkowski type problems for convex polytopes and the discrete Monge-Ampère equation (DMAE).
Geometric Topology • Differential Geometry • Metric Geometry • MSC 52-XX • ACM I.3.5