Search Results for author: Fei Ye

Found 39 papers, 18 papers with code

Elucidating the Design Space of Multimodal Protein Language Models

no code implementations15 Apr 2025 Cheng-Yen Hsieh, Xinyou Wang, Daiheng Zhang, Dongyu Xue, Fei Ye, ShuJian Huang, Zaixiang Zheng, Quanquan Gu

Multimodal protein language models (PLMs) integrate sequence and token-based structural information, serving as a powerful foundation for protein modeling, generation, and design.

Diversity Representation Learning

Self-Controlled Dynamic Expansion Model for Continual Learning

no code implementations14 Apr 2025 RunQing Wu, Fei Ye, Rongyao Hu, Guoxi Huang

Continual Learning (CL) is an advanced training paradigm in which prior data samples remain inaccessible while new tasks are learned.

Continual Learning model +3

Bayesian Neural Networks for One-to-Many Mapping in Image Enhancement

1 code implementation24 Jan 2025 Guoxi Huang, Nantheera Anantrasirichai, Fei Ye, Zipeng Qi, Ruirui Lin, Qirui Yang, David Bull

In image enhancement tasks, such as low-light and underwater image enhancement, a degraded image can correspond to multiple plausible target images due to dynamic photography conditions, such as variations in illumination.

Image Enhancement
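The one-to-many setting above can be illustrated with Monte-Carlo dropout, one standard (and much simpler) way to make a network produce a distribution of plausible outputs rather than a single point estimate; the toy two-layer network below is a hypothetical stand-in, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network with dropout kept ON at inference time
# (Monte-Carlo dropout, a common approximation to a Bayesian NN).
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 8))

def predict(x, p_drop=0.5):
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop      # fresh dropout mask per call
    return (h * mask / (1.0 - p_drop)) @ W2

x = rng.normal(size=(1, 8))                  # one degraded "image" vector
samples = np.stack([predict(x) for _ in range(50)])  # 50 plausible outputs
print(samples.std(axis=0).mean())            # nonzero spread across samples
```

Because each forward pass draws a fresh dropout mask, repeated predictions for the same degraded input disagree, giving a crude distribution over plausible enhanced outputs.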

Optimally-Weighted Maximum Mean Discrepancy Framework for Continual Learning

no code implementations21 Jan 2025 KaiHui Huang, RunQing Wu, Fei Ye

Continual learning has emerged as a pivotal area of research, primarily because it enables models to persistently acquire and retain information.

Benchmarking Continual Learning
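The optimal weighting scheme is the paper's contribution, but the underlying Maximum Mean Discrepancy statistic it builds on is standard; a minimal NumPy sketch with an RBF kernel:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel matrix between two batches of samples."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of the squared Maximum Mean Discrepancy."""
    return (gaussian_kernel(x, x, sigma).mean()
            - 2 * gaussian_kernel(x, y, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean())

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(200, 2))
b = rng.normal(0.0, 1.0, size=(200, 2))   # same distribution as a
c = rng.normal(3.0, 1.0, size=(200, 2))   # shifted distribution
print(mmd2(a, b))  # near zero: same distribution
print(mmd2(a, c))  # clearly larger: distributions differ
```

In a continual-learning context, a statistic like this can compare the distribution of stored memory samples against incoming data; the weighting of kernel terms is where the paper's method would differ from this plain estimator.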

Incrementally Learning Multiple Diverse Data Domains via Multi-Source Dynamic Expansion Model

no code implementations15 Jan 2025 RunQing Wu, Fei Ye, Qihe Liu, Guoxi Huang, Jinyu Guo, Rongyao Hu

Continual Learning seeks to develop a model capable of incrementally assimilating new information while retaining prior knowledge.

Continual Learning Transfer Learning

Information-Theoretic Dual Memory System for Continual Learning

no code implementations13 Jan 2025 RunQing Wu, KaiHui Huang, Hanyi Zhang, Qihe Liu, GuoJin Yu, JingSong Deng, Fei Ye

Furthermore, we introduce a novel information-theoretic memory optimization strategy that selectively identifies and retains diverse and informative data samples for the slow memory buffer.

Continual Learning
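The paper's exact information-theoretic criterion isn't reproduced here; as a hypothetical stand-in, greedy max-min (farthest-point) selection is one simple way to keep a fixed-size slow buffer diverse:

```python
import numpy as np

def diverse_subset(features, k):
    """Greedy max-min selection: repeatedly add the sample farthest
    from everything already chosen (a simple diversity proxy)."""
    chosen = [0]
    d = np.linalg.norm(features - features[0], axis=1)
    for _ in range(k - 1):
        nxt = int(d.argmax())                # farthest remaining sample
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(features - features[nxt], axis=1))
    return chosen

rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 16))          # hypothetical sample embeddings
buffer_idx = diverse_subset(feats, k=32)
print(len(set(buffer_idx)))                 # 32 distinct, well-spread samples
```

Any notion of "informative" (entropy, loss, mutual information with past tasks) could replace or augment the distance criterion; the sketch only shows the buffer-selection skeleton.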

ProteinWeaver: A Divide-and-Assembly Approach for Protein Backbone Design

no code implementations8 Nov 2024 Yiming Ma, Fei Ye, Yi Zhou, Zaixiang Zheng, Dongyu Xue, Quanquan Gu

Comprehensive experiments demonstrate that ProteinWeaver: (1) generates high-quality, novel protein backbones through versatile domain assembly; (2) outperforms RFdiffusion, the current state of the art in backbone design, by 13% and 39% for long-chain proteins; (3) shows the potential for cooperative function design through illustrative case studies.

Protein Design

DPLM-2: A Multimodal Diffusion Protein Language Model

no code implementations17 Oct 2024 Xinyou Wang, Zaixiang Zheng, Fei Ye, Dongyu Xue, ShuJian Huang, Quanquan Gu

In this paper, we introduce DPLM-2, a multimodal protein foundation model that extends discrete diffusion protein language model (DPLM) to accommodate both sequences and structures.

Language Modeling model +2

ProteinBench: A Holistic Evaluation of Protein Foundation Models

no code implementations10 Sep 2024 Fei Ye, Zaixiang Zheng, Dongyu Xue, Yuning Shen, Lihao Wang, Yiming Ma, Yan Wang, Xinyou Wang, Xiangxin Zhou, Quanquan Gu

Recent years have witnessed a surge in the development of protein foundation models, significantly improving performance in protein prediction and generative tasks ranging from 3D structure prediction and protein design to conformational dynamics.

Protein Design

Diffusion Language Models Are Versatile Protein Learners

1 code implementation28 Feb 2024 Xinyou Wang, Zaixiang Zheng, Fei Ye, Dongyu Xue, ShuJian Huang, Quanquan Gu

This paper introduces diffusion protein language model (DPLM), a versatile protein language model that demonstrates strong generative and predictive capabilities for protein sequences.

Language Modeling Protein Language Model

Online Task-Free Continual Generative and Discriminative Learning via Dynamic Cluster Memory

1 code implementation CVPR 2024 Fei Ye, Adrian G. Bors

Furthermore, a novel memory pruning approach is proposed to automatically remove overlapping memory clusters through a graph relation evaluation, ensuring a fixed memory capacity while maintaining diversity among the samples stored in the memory.

Continual Learning Diversity +1

Layered Rendering Diffusion Model for Controllable Zero-Shot Image Synthesis

1 code implementation30 Nov 2023 Zipeng Qi, Guoxi Huang, Chenyang Liu, Fei Ye

To precisely control the spatial layouts of multiple visual concepts using vision guidance, we propose a universal framework, Layered Rendering Diffusion (LRDiff), which constructs an image-rendering process with multiple layers, each applying the vision guidance to estimate the denoising direction for a single object.

Denoising Image Generation

Learning Harmonic Molecular Representations on Riemannian Manifold

1 code implementation27 Mar 2023 Yiqun Wang, Yuning Shen, Shi Chen, Lihao Wang, Fei Ye, Hao Zhou

In this work, we propose a Harmonic Molecular Representation learning (HMR) framework, which represents a molecule using the Laplace-Beltrami eigenfunctions of its molecular surface.

Drug Discovery molecular representation +2
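As a rough, hypothetical illustration of the idea behind HMR: the eigenvectors of a discrete Laplacian already form a smooth harmonic basis. The closed loop below is a toy stand-in for a molecular surface mesh, not the paper's surface discretization:

```python
import numpy as np

# Discrete stand-in for a surface: a closed loop of n points. Its graph
# Laplacian approximates the Laplace-Beltrami operator of a closed 1-D
# manifold, so its low eigenvectors are discrete "harmonics".
n = 64
A = np.zeros((n, n))
i = np.arange(n)
A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0   # ring adjacency
L = np.diag(A.sum(axis=1)) - A                # combinatorial Laplacian

vals, vecs = np.linalg.eigh(L)                # ascending eigenvalues
print(np.round(vals[:4], 4))  # first ~0 (constant mode), then sinusoid pairs
```

On a real triangulated surface the same recipe applies with a mesh (cotangent) Laplacian, and the leading eigenfunctions give a multi-scale, rotation-invariant description of functions on the surface.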

Structure-informed Language Models Are Protein Designers

1 code implementation3 Feb 2023 Zaixiang Zheng, Yifan Deng, Dongyu Xue, Yi Zhou, Fei Ye, Quanquan Gu

This paper demonstrates that language models are strong structure-based protein designers.

On Pre-trained Language Models for Antibody

1 code implementation28 Jan 2023 Danqing Wang, Fei Ye, Hao Zhou

The development of both general protein and antibody-specific pre-trained language models facilitates antibody prediction tasks.

Drug Discovery Language Modelling +1

Self-Evolved Dynamic Expansion Model for Task-Free Continual Learning

1 code implementation ICCV 2023 Fei Ye, Adrian G. Bors

In this paper, we propose a novel and effective framework for TFCL, which dynamically expands the architecture of a Dynamic Expansion Model (DEM) through a self-assessment mechanism that evaluates the diversity of knowledge among existing experts as expansion signals.

Continual Learning Diversity +1

Wasserstein Expansible Variational Autoencoder for Discriminative and Generative Continual Learning

1 code implementation ICCV 2023 Fei Ye, Adrian G. Bors

Despite promising achievements by the Variational Autoencoder (VAE) mixtures in continual learning, such methods ignore the redundancy among the probabilistic representations of their components when performing model expansion, leading to mixture components learning similar tasks.

Continual Learning Diversity

Accelerating Antimicrobial Peptide Discovery with Latent Structure

1 code implementation28 Nov 2022 Danqing Wang, Zeyu Wen, Fei Ye, Lei LI, Hao Zhou

By sampling in the latent space, LSSAMP can simultaneously generate peptides with ideal sequence attributes and secondary structures.

Quantization

Task-Free Continual Learning via Online Discrepancy Distance Learning

no code implementations12 Oct 2022 Fei Ye, Adrian G. Bors

This paper develops a new theoretical analysis framework which provides generalization bounds based on the discrepancy distance between the visited samples and the entire information made available for training the model.

Continual Learning Generalization Bounds

Continual Variational Autoencoder Learning via Online Cooperative Memorization

1 code implementation20 Jul 2022 Fei Ye, Adrian G. Bors

Due to their inference, data representation and reconstruction properties, Variational Autoencoders (VAE) have been successfully used in continual learning classification tasks.

Continual Learning Diversity +1

Learning an evolved mixture model for task-free continual learning

no code implementations11 Jul 2022 Fei Ye, Adrian G. Bors

In this paper, we address a more challenging and realistic setting in CL, namely the Task-Free Continual Learning (TFCL) in which a model is trained on non-stationary data streams with no explicit task information.

Continual Learning Diversity

Supplemental Material: Lifelong Generative Modelling Using Dynamic Expansion Graph Model

1 code implementation25 Mar 2022 Fei Ye, Adrian G. Bors

In this article, we provide the appendix for Lifelong Generative Modelling Using Dynamic Expansion Graph Model.

Lifelong Generative Modelling Using Dynamic Expansion Graph Model

1 code implementation15 Dec 2021 Fei Ye, Adrian G. Bors

In this paper we study the forgetting behaviour of VAEs using a joint GR and ENA methodology, by deriving an upper bound on the negative marginal log-likelihood.

Lifelong learning model
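The derived bound itself is specific to the paper, but it builds on the standard variational inequality: for any encoder $q_\phi(z|x)$ and decoder $p_\theta(x|z)$,

```latex
-\log p_\theta(x) \;\le\; \mathbb{E}_{q_\phi(z|x)}\!\left[-\log p_\theta(x|z)\right] + \mathrm{KL}\!\left(q_\phi(z|x)\,\|\,p(z)\right)
```

i.e. the negative marginal log-likelihood is upper-bounded by the negative ELBO; forgetting can then be tracked through how such a bound degrades on past tasks as new ones are learned.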

Lifelong Infinite Mixture Model Based on Knowledge-Driven Dirichlet Process

1 code implementation ICCV 2021 Fei Ye, Adrian G. Bors

Recent research efforts in lifelong learning propose to grow a mixture of models to adapt to an increasing number of tasks.

Lifelong learning

InfoVAEGAN: learning joint interpretable representations by information maximization and maximum likelihood

no code implementations9 Jul 2021 Fei Ye, Adrian G. Bors

Learning disentangled and interpretable representations is an important step towards accomplishing comprehensive data representations on the manifold.

Representation Learning

Lifelong Teacher-Student Network Learning

1 code implementation9 Jul 2021 Fei Ye, Adrian G. Bors

While the Student module is trained on a newly given database, the Teacher module reminds the Student of the information learnt in the past.

Generative Adversarial Network Lifelong learning

Lifelong Mixture of Variational Autoencoders

1 code implementation9 Jul 2021 Fei Ye, Adrian G. Bors

The mixing coefficients in the mixture control the contributions of each expert to the goal representation.

Lifelong learning Mixture-of-Experts
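How mixing coefficients control expert contributions can be sketched with a hypothetical three-expert mixture (the experts and logits below are illustrative, not the paper's learned quantities):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax: normalized mixing coefficients."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Each row is one expert's output for the same input; the mixing
# coefficients pi weight how much each expert contributes.
experts = np.array([[1.0, 1.0],
                    [2.0, 2.0],
                    [4.0, 4.0]])
pi = softmax(np.array([2.0, 0.0, 0.0]))   # first expert dominates
blended = pi @ experts                    # coefficient-weighted combination
print(np.round(pi, 3))
print(blended)
```

Raising one expert's logit shifts the blend toward its output, which is the mechanism the excerpt describes: the coefficients decide each expert's share of the final representation.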

Lifelong Twin Generative Adversarial Networks

no code implementations9 Jul 2021 Fei Ye, Adrian G. Bors

In this paper, we propose a new continuously learning generative model, called the Lifelong Twin Generative Adversarial Networks (LT-GANs).

Knowledge Distillation

A Survey of Deep Reinforcement Learning Algorithms for Motion Planning and Control of Autonomous Vehicles

no code implementations29 May 2021 Fei Ye, Shen Zhang, Pin Wang, Ching-Yao Chan

In this survey, we systematically summarize the current literature on studies that apply reinforcement learning (RL) to the motion planning and control of autonomous vehicles.

Autonomous Driving Deep Reinforcement Learning +2

Deep Unsupervised Image Anomaly Detection: An Information Theoretic Framework

no code implementations9 Dec 2020 Fei Ye, Huangjie Zheng, Chaoqin Huang, Ya Zhang

Based on this objective function, we introduce a novel information-theoretic framework for unsupervised image anomaly detection.

Anomaly Detection

ESAD: End-to-end Deep Semi-supervised Anomaly Detection

no code implementations9 Dec 2020 Chaoqin Huang, Fei Ye, Peisen Zhao, Ya Zhang, Yan-Feng Wang, Qi Tian

This paper explores semi-supervised anomaly detection, a more practical setting for anomaly detection where a small additional set of labeled samples is provided.

Ranked #27 on Anomaly Detection on One-class CIFAR-10 (using extra training data)

Decoder Medical Diagnosis +2

Meta Reinforcement Learning-Based Lane Change Strategy for Autonomous Vehicles

no code implementations28 Aug 2020 Fei Ye, Pin Wang, Ching-Yao Chan, Jiucai Zhang

The simulation results show that the proposed method achieves an overall success rate up to 20% higher than the benchmark model when generalized to a new environment with heavy traffic density.

Autonomous Vehicles Imitation Learning +4

Few-Shot Bearing Fault Diagnosis Based on Model-Agnostic Meta-Learning

no code implementations25 Jul 2020 Shen Zhang, Fei Ye, Bingnan Wang, Thomas G. Habetler

Most of the data-driven approaches applied to bearing fault diagnosis to date are trained using a large amount of fault data collected a priori.

Anomaly Detection Fault Diagnosis +1

Learning latent representations across multiple data domains using Lifelong VAEGAN

1 code implementation ECCV 2020 Fei Ye, Adrian G. Bors

The proposed model supports many downstream tasks that traditional generative replay methods cannot, including interpolation and inference across different data domains.

Lifelong learning Representation Learning

Automated Lane Change Strategy using Proximal Policy Optimization-based Deep Reinforcement Learning

no code implementations7 Feb 2020 Fei Ye, Xuxin Cheng, Pin Wang, Ching-Yao Chan, Jiucai Zhang

The simulation results demonstrate the lane change maneuvers can be efficiently learned and executed in a safe, smooth, and efficient manner.

Autonomous Driving Deep Reinforcement Learning +2

Semi-Supervised Learning of Bearing Anomaly Detection via Deep Variational Autoencoders

no code implementations2 Dec 2019 Shen Zhang, Fei Ye, Bingnan Wang, Thomas G. Habetler

Most of the data-driven approaches applied to bearing fault diagnosis to date are established in the supervised learning paradigm, which usually requires a large set of labeled data collected a priori.

Anomaly Detection Fault Diagnosis

Attribute Restoration Framework for Anomaly Detection

1 code implementation25 Nov 2019 Chaoqin Huang, Fei Ye, Jinkun Cao, Maosen Li, Ya Zhang, Cewu Lu

We here propose to break this equivalence by erasing selected attributes from the original data and reformulate it as a restoration task, where the normal and the anomalous data are expected to be distinguishable based on restoration errors.

Anomaly Detection Attribute +1

Dense Adaptive Cascade Forest: A Self Adaptive Deep Ensemble for Classification Problems

no code implementations29 Apr 2018 Haiyang Wang, Yong Tang, Ziyang Jia, Fei Ye

Second, our model connects each layer to the subsequent ones in a feed-forward fashion, which enhances the capability of the model to resist performance degeneration.

Ensemble Learning General Classification
