no code implementations • 15 Apr 2025 • Cheng-Yen Hsieh, Xinyou Wang, Daiheng Zhang, Dongyu Xue, Fei Ye, ShuJian Huang, Zaixiang Zheng, Quanquan Gu
Multimodal protein language models (PLMs) integrate sequence and token-based structural information, serving as a powerful foundation for protein modeling, generation, and design.
no code implementations • 14 Apr 2025 • RunQing Wu, Fei Ye, Rongyao Hu, Guoxi Huang
Continual Learning (CL) is a training paradigm in which prior data samples remain inaccessible while new tasks are learned.
1 code implementation • 24 Jan 2025 • Guoxi Huang, Nantheera Anantrasirichai, Fei Ye, Zipeng Qi, Ruirui Lin, Qirui Yang, David Bull
In image enhancement tasks, such as low-light and underwater image enhancement, a degraded image can correspond to multiple plausible target images due to dynamic photography conditions, such as variations in illumination.
no code implementations • 21 Jan 2025 • KaiHui Huang, RunQing Wu, Fei Ye
Continual learning has emerged as a pivotal area of research, largely because it allows models to persistently acquire and retain information.
no code implementations • 15 Jan 2025 • RunQing Wu, Fei Ye, Qihe Liu, Guoxi Huang, Jinyu Guo, Rongyao Hu
Continual Learning seeks to develop a model capable of incrementally assimilating new information while retaining prior knowledge.
no code implementations • 13 Jan 2025 • RunQing Wu, KaiHui Huang, Hanyi Zhang, Qihe Liu, GuoJin Yu, JingSong Deng, Fei Ye
Furthermore, we introduce a novel information-theoretic memory optimization strategy that selectively identifies and retains diverse and informative data samples for the slow memory buffer.
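The selection rule described above can be sketched as a greedy procedure that trades off informativeness against diversity; the function name, the entropy scores, and the additive scoring rule are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

def select_memory_samples(features, entropies, k):
    """Greedy sketch: keep k samples that are both informative (high
    predictive entropy) and diverse (far from already-selected samples
    in feature space). Hypothetical, not the paper's exact rule."""
    selected = [int(np.argmax(entropies))]  # seed with most informative
    while len(selected) < k:
        # Distance of every candidate to its nearest already-selected sample.
        d = np.min(
            np.linalg.norm(features[:, None, :] - features[selected][None, :, :], axis=-1),
            axis=1,
        )
        score = entropies + d          # informativeness + diversity (illustrative)
        score[selected] = -np.inf      # never re-pick a stored sample
        selected.append(int(np.argmax(score)))
    return selected
```

Any monotone combination of the two terms would serve; the sum is just the simplest choice.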
no code implementations • 8 Nov 2024 • Yiming Ma, Fei Ye, Yi Zhou, Zaixiang Zheng, Dongyu Xue, Quanquan Gu
Comprehensive experiments demonstrate that ProteinWeaver: (1) generates high-quality, novel protein backbones through versatile domain assembly; (2) outperforms RFdiffusion, the current state-of-the-art in backbone design, by 13% and 39% for long-chain proteins; (3) shows the potential for cooperative function design through illustrative case studies.
no code implementations • 17 Oct 2024 • Xinyou Wang, Zaixiang Zheng, Fei Ye, Dongyu Xue, ShuJian Huang, Quanquan Gu
In this paper, we introduce DPLM-2, a multimodal protein foundation model that extends discrete diffusion protein language model (DPLM) to accommodate both sequences and structures.
no code implementations • 10 Sep 2024 • Fei Ye, Zaixiang Zheng, Dongyu Xue, Yuning Shen, Lihao Wang, Yiming Ma, Yan Wang, Xinyou Wang, Xiangxin Zhou, Quanquan Gu
Recent years have witnessed a surge in the development of protein foundation models, significantly improving performance in protein prediction and generative tasks ranging from 3D structure prediction and protein design to conformational dynamics.
no code implementations • 31 May 2024 • Bao Liu, Tianbao Liu, Zhongshuo Hu, Fei Ye, Lei Gao
The external archive update unit re-evaluates solutions based on non-domination and diversity to form the new population.
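The non-domination test at the heart of such an archive update can be sketched as follows (minimization assumed; the diversity criterion, e.g. crowding distance, is omitted for brevity):

```python
import numpy as np

def non_dominated(points):
    """Sketch of the non-domination test used in archive updates: keep a
    solution only if no other solution is at least as good in every
    objective and strictly better in at least one (minimization)."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep
```

The surviving indices form the non-dominated front from which the new population is drawn.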
1 code implementation • 28 Feb 2024 • Xinyou Wang, Zaixiang Zheng, Fei Ye, Dongyu Xue, ShuJian Huang, Quanquan Gu
This paper introduces diffusion protein language model (DPLM), a versatile protein language model that demonstrates strong generative and predictive capabilities for protein sequences.
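As a toy illustration of the discrete-diffusion idea behind DPLM, the forward process can be sketched as absorbing-state masking; the mask token id, the linear schedule, and the function name below are assumptions for illustration, not DPLM's actual implementation:

```python
import numpy as np

def mask_tokens(seq, t, mask_id=20, rng=None):
    """Toy absorbing-state forward corruption: at noise level t in [0, 1],
    each token is independently replaced by a mask token; a denoiser is
    trained to reverse this corruption. (mask_id and the schedule are
    illustrative choices, not DPLM's exact setup.)"""
    if rng is None:
        rng = np.random.default_rng(0)
    seq = np.asarray(seq).copy()
    seq[rng.random(seq.shape) < t] = mask_id
    return seq
```

At t=0 the sequence is untouched; at t=1 every position is absorbed into the mask state.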
1 code implementation • CVPR 2024 • Fei Ye, Adrian G. Bors
Furthermore, a novel memory pruning approach is proposed to automatically remove overlapping memory clusters through a graph relation evaluation, ensuring a fixed memory capacity while maintaining diversity among the samples stored in the memory.
1 code implementation • 30 Nov 2023 • Zipeng Qi, Guoxi Huang, Chenyang Liu, Fei Ye
To precisely control the spatial layouts of multiple visual concepts with the employment of vision guidance, we propose a universal framework, Layered Rendering Diffusion (LRDiff), which constructs an image-rendering process with multiple layers, each of which applies the vision guidance to instructively estimate the denoising direction for a single object.
1 code implementation • 27 Mar 2023 • Yiqun Wang, Yuning Shen, Shi Chen, Lihao Wang, Fei Ye, Hao Zhou
In this work, we propose a Harmonic Molecular Representation learning (HMR) framework, which represents a molecule using the Laplace-Beltrami eigenfunctions of its molecular surface.
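A discrete sketch of the idea: on a surface mesh, Laplace-Beltrami eigenfunctions can be approximated by the eigenvectors of a graph Laplacian (HMR uses a proper surface discretization; the combinatorial Laplacian here is a simplification):

```python
import numpy as np

def laplacian_eigenbasis(edges, n_vertices, n_modes=4):
    """Approximate Laplace-Beltrami eigenfunctions with eigenvectors of
    the combinatorial graph Laplacian L = D - A built on the mesh edges
    (cotangent weights omitted for brevity)."""
    A = np.zeros((n_vertices, n_vertices))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    L = np.diag(A.sum(axis=1)) - A                  # combinatorial Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)            # sorted ascending
    return eigvals[:n_modes], eigvecs[:, :n_modes]  # low-frequency "harmonic" basis
```

For a connected mesh the first eigenvalue is 0 with a constant eigenfunction, and successive modes capture increasingly fine surface detail.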
1 code implementation • 3 Feb 2023 • Zaixiang Zheng, Yifan Deng, Dongyu Xue, Yi Zhou, Fei Ye, Quanquan Gu
This paper demonstrates that language models are strong structure-based protein designers.
1 code implementation • 28 Jan 2023 • Danqing Wang, Fei Ye, Hao Zhou
Both general protein and antibody-specific pre-trained language models facilitate antibody prediction tasks.
1 code implementation • ICCV 2023 • Fei Ye, Adrian G. Bors
In this paper, we propose a novel and effective framework for TFCL, which dynamically expands the architecture of a Dynamic Expansion Model (DEM) through a self-assessment mechanism that evaluates the diversity of knowledge among existing experts to produce expansion signals.
1 code implementation • ICCV 2023 • Fei Ye, Adrian G. Bors
Despite promising achievements by the Variational Autoencoder (VAE) mixtures in continual learning, such methods ignore the redundancy among the probabilistic representations of their components when performing model expansion, leading to mixture components learning similar tasks.
1 code implementation • 28 Nov 2022 • Danqing Wang, Zeyu Wen, Fei Ye, Lei LI, Hao Zhou
By sampling in the latent space, LSSAMP can simultaneously generate peptides with ideal sequence attributes and secondary structures.
no code implementations • 12 Oct 2022 • Fei Ye, Adrian G. Bors
This paper develops a new theoretical analysis framework which provides generalization bounds based on the discrepancy distance between the visited samples and the entire information made available for training the model.
1 code implementation • 20 Jul 2022 • Fei Ye, Adrian G. Bors
Due to their inference, data representation and reconstruction properties, Variational Autoencoders (VAE) have been successfully used in continual learning classification tasks.
no code implementations • 11 Jul 2022 • Fei Ye, Adrian G. Bors
In this paper, we address a more challenging and realistic setting in CL, namely the Task-Free Continual Learning (TFCL) in which a model is trained on non-stationary data streams with no explicit task information.
1 code implementation • 25 Mar 2022 • Fei Ye, Adrian G. Bors
In this article, we provide the appendix for Lifelong Generative Modelling Using Dynamic Expansion Graph Model.
1 code implementation • 15 Dec 2021 • Fei Ye, Adrian G. Bors
In this paper we study the forgetting behaviour of VAEs using a joint GR and ENA methodology, by deriving an upper bound on the negative marginal log-likelihood.
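The bound referenced above is, in its standard form, the negative evidence lower bound; the joint GR and ENA analysis builds on this basic inequality (shown here in generic VAE notation, not the paper's exact statement):

```latex
-\log p_\theta(x)
\;\le\;
-\,\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
\;+\;
\mathrm{KL}\!\left(q_\phi(z \mid x)\,\middle\|\,p(z)\right)
```

Tracking how the right-hand side grows across tasks gives a handle on the forgetting behaviour of the VAE.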
1 code implementation • ICCV 2021 • Fei Ye, Adrian G. Bors
Recent research efforts in lifelong learning propose to grow a mixture of models to adapt to an increasing number of tasks.
no code implementations • 9 Jul 2021 • Fei Ye, Adrian G. Bors
Learning disentangled and interpretable representations is an important step towards accomplishing comprehensive data representations on the manifold.
1 code implementation • 9 Jul 2021 • Fei Ye, Adrian G. Bors
While the Student module is trained on a newly given database, the Teacher module reminds the Student of the information learnt in the past.
1 code implementation • 9 Jul 2021 • Fei Ye, Adrian G. Bors
The mixing coefficients of the mixture control the contribution of each expert to the goal representation.
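The role of the mixing coefficients can be sketched as a softmax gate over experts; the function and names below are illustrative, not the paper's architecture:

```python
import numpy as np

def mixture_output(expert_outputs, gate_logits):
    """Softmax over gate logits yields mixing coefficients that weight
    each expert's contribution to the goal representation."""
    w = np.exp(gate_logits - gate_logits.max())
    w /= w.sum()                                   # coefficients sum to 1
    return np.tensordot(w, expert_outputs, axes=1), w
```

Equal logits give each expert an equal share; raising one logit shifts the representation toward that expert.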
no code implementations • 9 Jul 2021 • Fei Ye, Adrian G. Bors
In this paper, we propose a new continuously learning generative model, called the Lifelong Twin Generative Adversarial Networks (LT-GANs).
no code implementations • 29 May 2021 • Fei Ye, Shen Zhang, Pin Wang, Ching-Yao Chan
In this survey, we systematically summarize the current literature on studies that apply reinforcement learning (RL) to the motion planning and control of autonomous vehicles.
no code implementations • 9 Dec 2020 • Fei Ye, Huangjie Zheng, Chaoqin Huang, Ya Zhang
Based on this objective function, we introduce a novel information-theoretic framework for unsupervised image anomaly detection.
Ranked #9 on Anomaly Detection on One-class CIFAR-100
no code implementations • 9 Dec 2020 • Chaoqin Huang, Fei Ye, Peisen Zhao, Ya Zhang, Yan-Feng Wang, Qi Tian
This paper explores semi-supervised anomaly detection, a more practical setting for anomaly detection where a small additional set of labeled samples are provided.
Ranked #27 on Anomaly Detection on One-class CIFAR-10 (using extra training data)
no code implementations • 28 Aug 2020 • Fei Ye, Pin Wang, Ching-Yao Chan, Jiucai Zhang
The simulation results show that the proposed method achieves an overall success rate up to 20% higher than the benchmark model when generalized to a new environment with heavy traffic density.
no code implementations • 25 Jul 2020 • Shen Zhang, Fei Ye, Bingnan Wang, Thomas G. Habetler
Most data-driven approaches applied to bearing fault diagnosis to date are trained using a large amount of fault data collected a priori.
1 code implementation • ECCV 2020 • Fei Ye, Adrian G. Bors
The proposed model supports many downstream tasks that traditional generative replay methods cannot, including interpolation and inference across different data domains.
no code implementations • 7 Feb 2020 • Fei Ye, Xuxin Cheng, Pin Wang, Ching-Yao Chan, Jiucai Zhang
The simulation results demonstrate that lane-change maneuvers can be learned and executed in a safe, smooth, and efficient manner.
no code implementations • 2 Dec 2019 • Shen Zhang, Fei Ye, Bingnan Wang, Thomas G. Habetler
Most of the data-driven approaches applied to bearing fault diagnosis up to date are established in the supervised learning paradigm, which usually requires a large set of labeled data collected a priori.
1 code implementation • 25 Nov 2019 • Chaoqin Huang, Fei Ye, Jinkun Cao, Maosen Li, Ya Zhang, Cewu Lu
We here propose to break this equivalence by erasing selected attributes from the original data and reformulate it as a restoration task, where the normal and the anomalous data are expected to be distinguishable based on restoration errors.
Ranked #23 on Anomaly Detection on One-class CIFAR-10
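The erase-and-restore scoring described above can be sketched generically; `erase_fn` and `restore_fn` below are placeholders standing in for the paper's learned attribute-erasing operation and restoration network:

```python
import numpy as np

def anomaly_score(x, erase_fn, restore_fn):
    """Erase a selected attribute, restore it with a model trained on
    normal data, and use the restoration error as the anomaly score.
    (erase_fn/restore_fn are hypothetical placeholders.)"""
    x_restored = restore_fn(erase_fn(x))
    return float(np.mean((x - x_restored) ** 2))  # per-sample MSE score
```

Normal data, which the restorer has learned to reconstruct, scores near zero; anomalous data yields large restoration errors.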
no code implementations • 29 Apr 2018 • Haiyang Wang, Yong Tang, Ziyang Jia, Fei Ye
Second, our model connects each layer to the subsequent ones in a feed-forward fashion, which enhances the capability of the model to resist performance degeneration.
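The dense feed-forward connectivity described above, with each layer consuming the concatenation of all earlier outputs (in the spirit of DenseNet), can be sketched as:

```python
import numpy as np

def dense_forward(x, layers):
    """Each layer receives the concatenation of the input and all earlier
    layer outputs, which eases gradient flow and helps resist performance
    degeneration. (layers are placeholder callables.)"""
    feats = [x]
    for layer in layers:
        feats.append(layer(np.concatenate(feats, axis=-1)))
    return feats[-1]
```

Because every layer sees all preceding features, later layers can reuse early representations directly rather than relying on a single sequential path.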