Search Results for author: Minkai Xu

Found 26 papers, 20 papers with code

RealCompo: Dynamic Equilibrium between Realism and Compositionality Improves Text-to-Image Diffusion Models

1 code implementation · 20 Feb 2024 · Xinchen Zhang, Ling Yang, Yaqi Cai, Zhaochen Yu, Jiake Xie, Ye Tian, Minkai Xu, Yong Tang, Yujiu Yang, Bin Cui

In this paper, we propose a new training-free and transfer-friendly text-to-image generation framework, namely RealCompo, which aims to leverage the advantages of text-to-image and layout-to-image models to enhance both the realism and compositionality of the generated images.

Denoising

Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs

1 code implementation · 22 Jan 2024 · Ling Yang, Zhaochen Yu, Chenlin Meng, Minkai Xu, Stefano Ermon, Bin Cui

In this paper, we propose a brand new training-free text-to-image generation/editing framework, namely Recaption, Plan and Generate (RPG), harnessing the powerful chain-of-thought reasoning ability of multimodal LLMs to enhance the compositionality of text-to-image diffusion models.

Diffusion Personalization Tuning Free · Large Language Model

Equivariant Graph Neural Operator for Modeling 3D Dynamics

no code implementations · 19 Jan 2024 · Minkai Xu, Jiaqi Han, Aaron Lou, Jean Kossaifi, Arvind Ramanathan, Kamyar Azizzadenesheli, Jure Leskovec, Stefano Ermon, Anima Anandkumar

Modeling the complex three-dimensional (3D) dynamics of relational systems is an important problem in the natural sciences, with applications ranging from molecular simulations to particle mechanics.

Operator learning

Equivariant Flow Matching with Hybrid Probability Transport

no code implementations · 12 Dec 2023 · Yuxuan Song, Jingjing Gong, Minkai Xu, Ziyao Cao, Yanyan Lan, Stefano Ermon, Hao Zhou, Wei-Ying Ma

The generation of 3D molecules requires simultaneously deciding the categorical features (atom types) and continuous features (atom coordinates).
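The continuous half of this hybrid transport can be illustrated with conditional flow matching in one dimension. This is a toy sketch of the general idea only, not the paper's equivariant, hybrid continuous/categorical transport: along the straight-line path x_t = (1 - t)·x0 + t·x1, the target velocity field is the constant x1 - x0, so integrating the ODE dx/dt = v carries a noise sample x0 onto the data point x1.

```python
import random

# Toy 1D conditional flow matching (illustrative assumption, not the
# paper's actual model): linear interpolation path and its constant
# ground-truth velocity, which a trained network would regress onto.
rng = random.Random(0)
x0 = rng.gauss(0.0, 1.0)   # "noise" endpoint
x1 = 3.0                   # "data" endpoint

def velocity(x, t, x0=x0, x1=x1):
    # Conditional velocity for the straight-line path x_t = (1-t)x0 + t*x1.
    return x1 - x0

# Euler integration of the probability-flow ODE from t = 0 to t = 1.
x, steps = x0, 100
for i in range(steps):
    x += velocity(x, i / steps) / steps

assert abs(x - x1) < 1e-9  # the flow transports x0 exactly onto x1
```

Because the velocity is constant along the path, Euler integration is exact here; a learned velocity field would only approximate this transport.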

VQGraph: Rethinking Graph Representation Space for Bridging GNNs and MLPs

1 code implementation · 4 Aug 2023 · Ling Yang, Ye Tian, Minkai Xu, Zhongyi Liu, Shenda Hong, Wei Qu, Wentao Zhang, Bin Cui, Muhan Zhang, Jure Leskovec

To address this issue, we propose to learn a new powerful graph representation space by directly labeling nodes' diverse local structures for GNN-to-MLP distillation.

Knowledge Distillation · Quantization · +1
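The GNN-to-MLP distillation setup above can be sketched with a standard soft-label objective. This is a minimal illustration of distillation in general; VQGraph's vector-quantized codebook of local structures is not reproduced here, and the logits below are made up for the example.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on temperature-smoothed class distributions:
    # the student MLP is trained to match the teacher GNN's soft labels.
    t = softmax([l / temperature for l in teacher_logits])
    s = softmax([l / temperature for l in student_logits])
    return sum(ti * math.log(ti / si) for ti, si in zip(t, s))

teacher = [2.0, 0.5, -1.0]                     # hypothetical GNN logits
assert kd_loss(teacher, teacher) < 1e-12       # identical outputs -> zero loss
assert kd_loss(teacher, [0.0, 0.0, 0.0]) > 0   # mismatch -> positive loss
```

Minimizing this KL term pulls the student's predictive distribution toward the teacher's, which is the backbone that structure-aware codebooks like VQGraph's build on.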

MADiff: Offline Multi-agent Learning with Diffusion Models

1 code implementation · 27 May 2023 · Zhengbang Zhu, Minghuan Liu, Liyuan Mao, Bingyi Kang, Minkai Xu, Yong Yu, Stefano Ermon, Weinan Zhang

To the best of our knowledge, MADiff is the first diffusion-based multi-agent offline RL framework, which behaves as both a decentralized policy and a centralized controller.

Offline RL · Trajectory Prediction

Geometric Latent Diffusion Models for 3D Molecule Generation

1 code implementation · 2 May 2023 · Minkai Xu, Alexander Powers, Ron Dror, Stefano Ermon, Jure Leskovec

Generative models, especially diffusion models (DMs), have achieved promising results for generating feature-rich geometries and advancing foundational science problems such as molecule design.

3D Molecule Generation · valid

MUDiff: Unified Diffusion for Complete Molecule Generation

no code implementations · 28 Apr 2023 · Chenqing Hua, Sitao Luan, Minkai Xu, Rex Ying, Jie Fu, Stefano Ermon, Doina Precup

Our model is a promising approach for designing stable and diverse molecules and can be applied to a wide range of tasks in molecular modeling.

Drug Discovery · valid

When Do Graph Neural Networks Help with Node Classification? Investigating the Impact of Homophily Principle on Node Distinguishability

1 code implementation · 25 Apr 2023 · Sitao Luan, Chenqing Hua, Minkai Xu, Qincheng Lu, Jiaqi Zhu, Xiao-Wen Chang, Jie Fu, Jure Leskovec, Doina Precup

The homophily principle, i.e., that nodes with the same labels are more likely to be connected, has been believed to be the main reason for the performance superiority of Graph Neural Networks (GNNs) over Neural Networks on node classification tasks.

Node Classification · Stochastic Block Model

FusionRetro: Molecule Representation Fusion via In-Context Learning for Retrosynthetic Planning

1 code implementation · 30 Sep 2022 · Songtao Liu, Zhengkai Tu, Minkai Xu, Zuobai Zhang, Lu Lin, Rex Ying, Jian Tang, Peilin Zhao, Dinghao Wu

Current strategies use a decoupled approach of single-step retrosynthesis models and search algorithms, taking only the product as the input to predict the reactants for each planning step and ignoring valuable context information along the synthetic route.

Drug Discovery · In-Context Learning · +3

GeoDiff: a Geometric Diffusion Model for Molecular Conformation Generation

2 code implementations · ICLR 2022 · Minkai Xu, Lantao Yu, Yang Song, Chence Shi, Stefano Ermon, Jian Tang

GeoDiff treats each atom as a particle and learns to directly reverse the diffusion process (i.e., transforming from a noise distribution to stable conformations) as a Markov chain.

Drug Discovery
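The idea of reversing a diffusion process as a Markov chain, as in GeoDiff above, can be shown in miniature with one scalar coordinate. The linear beta schedule and the toy noise "model" below are illustrative assumptions, not GeoDiff's actual equivariant network over atomic coordinates.

```python
import math
import random

# Toy DDPM-style reverse diffusion in 1D: a linear noise schedule and
# ancestral sampling x_{t-1} ~ p(x_{t-1} | x_t), run from pure noise.
T = 100
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]
alphas = [1.0 - b for b in betas]
alpha_bars, prod = [], 1.0
for a in alphas:
    prod *= a
    alpha_bars.append(prod)

def toy_eps_model(x, t):
    # Stand-in for a learned noise predictor: assuming the data mean is 0,
    # the expected noise in x_t is proportional to x_t itself.
    return x * math.sqrt(1.0 - alpha_bars[t])

def reverse_step(x, t, rng):
    # One step of the reverse Markov chain.
    eps = toy_eps_model(x, t)
    coef = betas[t] / math.sqrt(1.0 - alpha_bars[t])
    mean = (x - coef * eps) / math.sqrt(alphas[t])
    if t == 0:
        return mean
    return mean + math.sqrt(betas[t]) * rng.gauss(0.0, 1.0)

rng = random.Random(0)
x = rng.gauss(0.0, 1.0)          # start from noise, t = T-1
for t in reversed(range(T)):
    x = reverse_step(x, t, rng)  # walk the chain back toward the data
```

Each step denoises slightly and re-injects a smaller amount of noise; chaining T such transitions transforms the noise distribution toward the (here trivial, zero-centered) data distribution.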

Generative Coarse-Graining of Molecular Conformations

1 code implementation · 28 Jan 2022 · Wujie Wang, Minkai Xu, Chen Cai, Benjamin Kurt Miller, Tess Smidt, Yusu Wang, Jian Tang, Rafael Gómez-Bombarelli

Coarse-graining (CG) of molecular simulations simplifies the particle representation by grouping selected atoms into pseudo-beads and drastically accelerates simulation.

Predicting Molecular Conformation via Dynamic Graph Score Matching

no code implementations · NeurIPS 2021 · Shitong Luo, Chence Shi, Minkai Xu, Jian Tang

However, these non-bonded atoms may be proximal to each other in 3D space, and modeling their interactions is of crucial importance to accurately determine molecular conformations, especially for large molecules and multi-molecular complexes.

An End-to-End Framework for Molecular Conformation Generation via Bilevel Programming

1 code implementation · 15 May 2021 · Minkai Xu, Wujie Wang, Shitong Luo, Chence Shi, Yoshua Bengio, Rafael Gómez-Bombarelli, Jian Tang

Specifically, the molecular graph is first encoded in a latent space, and then the 3D structures are generated by solving a principled bilevel optimization program.

Bilevel Optimization

Learning Gradient Fields for Molecular Conformation Generation

6 code implementations · 9 May 2021 · Chence Shi, Shitong Luo, Minkai Xu, Jian Tang

We study a fundamental problem in computational chemistry known as molecular conformation generation, trying to predict stable 3D structures from 2D molecular graphs.

Translation
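Sampling from a learned gradient (score) field, the core of the paper above, is typically done with Langevin dynamics. In this sketch the "learned" score is replaced by the exact score of a 1D Gaussian N(2, 1), which is an assumption for illustration, not the paper's distance-based score network over conformations.

```python
import math
import random

def score(x):
    # Exact gradient of log N(x; mu=2, sigma=1); a trained network would
    # approximate such a field over atomic coordinates.
    return -(x - 2.0)

def langevin(x0, steps=1000, step_size=0.01, seed=0):
    # Unadjusted Langevin dynamics: drift along the score plus Gaussian noise.
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x += 0.5 * step_size * score(x) + math.sqrt(step_size) * rng.gauss(0, 1)
    return x

# Many independent chains, all started at 0, concentrate near the mode at 2.
samples = [langevin(0.0, seed=s) for s in range(200)]
mean = sum(samples) / len(samples)  # close to 2.0 in expectation
```

The same recipe scales to molecular conformations by replacing the scalar x with all atomic coordinates and the analytic score with the learned one.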

Learning Neural Generative Dynamics for Molecular Conformation Generation

3 code implementations · ICLR 2021 · Minkai Xu, Shitong Luo, Yoshua Bengio, Jian Peng, Jian Tang

Inspired by the recent progress in deep generative models, in this paper, we propose a novel probabilistic framework to generate valid and diverse conformations given a molecular graph.

valid

Towards Generalized Implementation of Wasserstein Distance in GANs

1 code implementation · 7 Dec 2020 · Minkai Xu, Zhiming Zhou, Guansong Lu, Jian Tang, Weinan Zhang, Yong Yu

Wasserstein GANs (WGANs), built upon the Kantorovich-Rubinstein (KR) duality of the Wasserstein distance, are among the most theoretically sound GAN models.
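The KR duality writes W1(p, q) = sup over 1-Lipschitz f of E_p[f] - E_q[f]. This toy numerical check, an illustration of the duality rather than the paper's relaxed implementation, compares the primal W1 between two 1D empirical samples with the dual value attained by the simple 1-Lipschitz "critic" f(x) = x.

```python
import random

rng = random.Random(0)
p = sorted(rng.gauss(0.0, 1.0) for _ in range(1000))  # samples from N(0, 1)
q = sorted(rng.gauss(1.0, 1.0) for _ in range(1000))  # samples from N(1, 1)

# Primal: for equal-size 1D samples, W1 is the mean gap between sorted values.
w1 = sum(abs(a - b) for a, b in zip(p, q)) / len(p)

# Dual lower bound from the critic f(x) = x (Lipschitz constant 1).
dual = sum(q) / len(q) - sum(p) / len(p)

assert dual <= w1 + 1e-9  # any 1-Lipschitz critic lower-bounds W1
```

For a pure location shift the linear critic is essentially optimal, so the two values nearly coincide here; a WGAN critic is a neural network trained toward this supremum under a Lipschitz constraint.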

Reciprocal Supervised Learning Improves Neural Machine Translation

1 code implementation · 5 Dec 2020 · Minkai Xu, Mingxuan Wang, Zhouhan Lin, Hao Zhou, Weinan Zhang, Lei Li

Despite the recent success on image classification, self-training has only achieved limited gains on structured prediction tasks such as neural machine translation (NMT).

Image Classification · Knowledge Distillation · +4

Energy-Based Imitation Learning

1 code implementation · 20 Apr 2020 · Minghuan Liu, Tairan He, Minkai Xu, Weinan Zhang

We tackle a common scenario in imitation learning (IL), where agents try to recover the optimal policy from expert demonstrations without further access to the expert or environment reward signals.

Imitation Learning · reinforcement-learning · +1

Infomax Neural Joint Source-Channel Coding via Adversarial Bit Flip

1 code implementation · 3 Apr 2020 · Yuxuan Song, Minkai Xu, Lantao Yu, Hao Zhou, Shuo Shao, Yong Yu

In this paper, motivated by the inherent connections between neural joint source-channel coding and discrete representation learning, we propose a novel regularization method called Infomax Adversarial-Bit-Flip (IABF) to improve the stability and robustness of the neural joint source-channel coding scheme.

Representation Learning
