Search Results for author: Minkai Xu

Found 39 papers, 33 papers with code

3D Interaction Geometric Pre-training for Molecular Relational Learning

1 code implementation • 4 Dec 2024 • Namkyeong Lee, Yunhak Oh, Heewoong Noh, Gyoung S. Na, Minkai Xu, Hanchen Wang, Tianfan Fu, Chanyoung Park

Molecular Relational Learning (MRL) is a rapidly growing field that focuses on understanding the interaction dynamics between molecules, which is crucial for applications ranging from catalyst engineering to drug discovery.

Contrastive Learning · Drug Discovery +1

$f$-PO: Generalizing Preference Optimization with $f$-divergence Minimization

1 code implementation • 29 Oct 2024 • Jiaqi Han, Mingjian Jiang, Yuxuan Song, Jure Leskovec, Stefano Ermon, Minkai Xu

Preference optimization has made significant progress recently, with numerous methods developed to align language models with human preferences.

Language Modeling · Language Modelling
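
To make the preference-optimization setting concrete: below is a minimal sketch of a DPO-style objective, one member of the family that $f$-PO generalizes via the choice of $f$-divergence. The function and variable names, the fixed beta, and the toy inputs are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    # Bradley-Terry preference loss on policy/reference log-ratios for
    # the chosen (w) and rejected (l) responses.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -F.logsigmoid(margin).mean()

# Toy usage with fake per-response log-probabilities.
logp_w = torch.tensor([-12.3, -9.1], requires_grad=True)
logp_l = torch.tensor([-15.0, -11.4], requires_grad=True)
ref_w, ref_l = torch.tensor([-12.0, -9.5]), torch.tensor([-14.2, -11.0])
dpo_loss(logp_w, logp_l, ref_w, ref_l).backward()
```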

Energy-Based Diffusion Language Models for Text Generation

no code implementations • 28 Oct 2024 • Minkai Xu, Tomas Geffner, Karsten Kreis, Weili Nie, Yilun Xu, Jure Leskovec, Stefano Ermon, Arash Vahdat

Unfortunately, these models still underperform their autoregressive counterparts, and the performance gap widens as the number of sampling steps is reduced.

Language Modeling · Language Modelling +1

TabDiff: a Multi-Modal Diffusion Model for Tabular Data Generation

1 code implementation • 27 Oct 2024 • Juntong Shi, Minkai Xu, Harper Hua, Hengrui Zhang, Stefano Ermon, Jure Leskovec

In this paper, we introduce TabDiff, a joint diffusion framework that models all multi-modal distributions of tabular data in one model.

Imputation · Tabular Data Generation
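
For intuition about a joint diffusion over mixed-type rows, here is a toy forward-noising process that handles numerical and categorical columns together: Gaussian perturbation for the former, uniform resampling for the latter. The fixed cosine schedule and all names are assumptions for illustration; TabDiff itself learns per-column schedules.

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_tabular(x_num, x_cat, n_classes, t, T=1000):
    alpha_bar = np.cos(0.5 * np.pi * t / T) ** 2   # toy cosine schedule
    # Numerical columns: scaled signal plus Gaussian noise.
    x_num_t = (np.sqrt(alpha_bar) * x_num
               + np.sqrt(1 - alpha_bar) * rng.standard_normal(x_num.shape))
    # Categorical columns: resample uniformly with prob. 1 - alpha_bar.
    flip = rng.random(x_cat.shape) > alpha_bar
    x_cat_t = np.where(flip, rng.integers(0, n_classes, x_cat.shape), x_cat)
    return x_num_t, x_cat_t

x_num_t, x_cat_t = noise_tabular(rng.standard_normal((8, 3)),
                                 rng.integers(0, 5, (8, 2)), 5, t=400)
```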

Geometric Trajectory Diffusion Models

1 code implementation • 16 Oct 2024 • Jiaqi Han, Minkai Xu, Aaron Lou, Haotian Ye, Stefano Ermon

In this work, we propose geometric trajectory diffusion models (GeoTDM), the first diffusion model for modeling the temporal distribution of 3D geometric trajectories.

Protein Design

SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights

1 code implementation • 11 Oct 2024 • Ling Yang, Zhaochen Yu, Tianjun Zhang, Minkai Xu, Joseph E. Gonzalez, Bin Cui, Shuicheng Yan

Large language models (LLMs) like GPT-4, PaLM, and LLaMA have shown significant improvements in various reasoning tasks.

GSM8K · Math +1

Trans4D: Realistic Geometry-Aware Transition for Compositional Text-to-4D Synthesis

1 code implementation • 9 Oct 2024 • Bohan Zeng, Ling Yang, Siyu Li, Jiaming Liu, Zixiang Zhang, Juanxi Tian, Kaixin Zhu, Yongzhen Guo, Fu-Yun Wang, Minkai Xu, Stefano Ermon, Wentao Zhang

Then we propose a geometry-aware 4D transition network to realize a complex scene-level 4D transition based on the plan, which involves expressive geometrical object deformation.

Video Generation

TFG: Unified Training-Free Guidance for Diffusion Models

1 code implementation • 24 Sep 2024 • Haotian Ye, Haowei Lin, Jiaqi Han, Minkai Xu, Sheng Liu, Yitao Liang, Jianzhu Ma, James Zou, Stefano Ermon

Given an unconditional diffusion model and a predictor for a target property of interest (e.g., a classifier), the goal of training-free guidance is to generate samples with desirable target properties without additional training.
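
As a concrete anchor for this setup, here is the basic classifier-guidance rule that training-free guidance methods build on: add the gradient of the predictor's log-probability for the target property to the unconditional score. TFG unifies a broader design space; the names and the toy usage below are illustrative assumptions.

```python
import torch

def guided_score(x, t, score_model, predictor, target, scale=1.0):
    # Unconditional score plus the gradient of the predictor's
    # log-probability for the target class (basic classifier guidance).
    x = x.detach().requires_grad_(True)
    log_p = predictor(x, t).log_softmax(dim=-1)[..., target].sum()
    grad = torch.autograd.grad(log_p, x)[0]
    return score_model(x, t) + scale * grad

# Toy usage: a standard-normal score and a random linear "classifier".
W = torch.randn(2, 3)
s = guided_score(torch.randn(4, 2), t=0,
                 score_model=lambda x, t: -x,
                 predictor=lambda x, t: x @ W,
                 target=1)
```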

The Heterophilic Graph Learning Handbook: Benchmarks, Models, Theoretical Analysis, Applications and Challenges

no code implementations • 12 Jul 2024 • Sitao Luan, Chenqing Hua, Qincheng Lu, Liheng Ma, Lirong Wu, Xinyu Wang, Minkai Xu, Xiao-Wen Chang, Doina Precup, Rex Ying, Stan Z. Li, Jian Tang, Guy Wolf, Stefanie Jegelka

In this survey, we provide a comprehensive review of the latest progress on heterophilic graph learning, including an extensive summary of benchmark datasets and evaluation of homophily metrics on synthetic graphs, a careful classification of the most recent supervised and unsupervised learning methods, a thorough treatment of the theoretical analysis of homophily/heterophily, and a broad exploration of heterophily-related applications.

Graph Learning · Graph Representation Learning
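
One of the simplest homophily metrics covered by such surveys is edge homophily, the fraction of edges joining same-label nodes; a minimal sketch follows (definitions vary across the literature, e.g. node- or class-adjusted variants):

```python
import numpy as np

def edge_homophily(edges, labels):
    # Fraction of edges whose endpoints share a label.
    src, dst = np.asarray(edges).T
    labels = np.asarray(labels)
    return float((labels[src] == labels[dst]).mean())

# Toy graph: 4 nodes, 2 classes, 3 edges.
print(edge_homophily([(0, 1), (1, 2), (2, 3)], [0, 0, 1, 1]))  # 2/3
```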

Consistency Flow Matching: Defining Straight Flows with Velocity Consistency

1 code implementation • 2 Jul 2024 • Ling Yang, Zixiang Zhang, Zhilong Zhang, Xingchao Liu, Minkai Xu, Wentao Zhang, Chenlin Meng, Stefano Ermon, Bin Cui

Additionally, we propose a multi-segment training approach for Consistency-FM to enhance expressiveness, achieving a better trade-off between sampling quality and speed.

Image Generation
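
For reference, here is plain conditional flow matching with straight (linear) interpolation paths, the starting point that Consistency-FM augments with a velocity-consistency term (omitted here). The tiny model and all names are illustrative assumptions.

```python
import torch

def cfm_loss(model, x1):
    # Straight paths x_t = (1 - t) x0 + t x1 have target velocity x1 - x0.
    x0 = torch.randn_like(x1)
    t = torch.rand(x1.size(0), 1)
    xt = (1 - t) * x0 + t * x1
    return ((model(xt, t) - (x1 - x0)) ** 2).mean()

# Toy usage: a linear velocity model on 2-D data.
net = torch.nn.Linear(3, 2)
loss = cfm_loss(lambda x, t: net(torch.cat([x, t], dim=-1)),
                torch.randn(16, 2))
loss.backward()
```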

Aligning Target-Aware Molecule Diffusion Models with Exact Energy Optimization

1 code implementation • 1 Jul 2024 • Siyi Gu, Minkai Xu, Alexander Powers, Weili Nie, Tomas Geffner, Karsten Kreis, Jure Leskovec, Arash Vahdat, Stefano Ermon

AliDiff shifts the target-conditioned chemical distribution towards regions with higher binding affinity and structural rationality, specified by user-defined reward functions, via a preference optimization approach.

Avg · Drug Design

Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models

1 code implementation • 6 Jun 2024 • Ling Yang, Zhaochen Yu, Tianjun Zhang, Shiyi Cao, Minkai Xu, Wentao Zhang, Joseph E. Gonzalez, Bin Cui

We introduce Buffer of Thoughts (BoT), a novel and versatile thought-augmented reasoning approach for enhancing accuracy, efficiency and robustness of large language models (LLMs).

Arithmetic Reasoning · Code Generation +2

Contextualized Diffusion Models for Text-Guided Image and Video Generation

1 code implementation • 26 Feb 2024 • Ling Yang, Zhilong Zhang, Zhaochen Yu, Jingwei Liu, Minkai Xu, Stefano Ermon, Bin Cui

To address this issue, we propose a novel and general contextualized diffusion model (ContextDiff) by incorporating the cross-modal context encompassing interactions and alignments between text condition and visual sample into forward and reverse processes.

Text-to-Image Generation · Text-to-Video Editing +2

RealCompo: Balancing Realism and Compositionality Improves Text-to-Image Diffusion Models

2 code implementations • 20 Feb 2024 • Xinchen Zhang, Ling Yang, Yaqi Cai, Zhaochen Yu, Kai-Ni Wang, Jiake Xie, Ye Tian, Minkai Xu, Yong Tang, Yujiu Yang, Bin Cui

In this paper, we propose RealCompo, a new training-free and transfer-friendly text-to-image generation framework, which aims to leverage the respective advantages of text-to-image models and spatial-aware image diffusion models (e.g., layout, keypoints and segmentation maps) to enhance both the realism and compositionality of the generated images.

Denoising · Text-to-Image Generation

Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs

1 code implementation • 22 Jan 2024 • Ling Yang, Zhaochen Yu, Chenlin Meng, Minkai Xu, Stefano Ermon, Bin Cui

In this paper, we propose a new training-free text-to-image generation/editing framework, namely Recaption, Plan and Generate (RPG), harnessing the powerful chain-of-thought reasoning ability of multimodal LLMs to enhance the compositionality of text-to-image diffusion models.

Diffusion Personalization Tuning Free · Large Language Model

Equivariant Graph Neural Operator for Modeling 3D Dynamics

1 code implementation • 19 Jan 2024 • Minkai Xu, Jiaqi Han, Aaron Lou, Jean Kossaifi, Arvind Ramanathan, Kamyar Azizzadenesheli, Jure Leskovec, Stefano Ermon, Anima Anandkumar

Comprehensive experiments in multiple domains, including particle simulations, human motion capture, and molecular dynamics, demonstrate the significantly superior performance of EGNO against existing methods, thanks to the equivariant temporal modeling.

Operator learning
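
The key property behind such models is rotation equivariance of the coordinate updates. Below is a toy update of that kind, moving each point along relative vectors weighted by an invariant function of pairwise distances, plus a numerical equivariance check; this is an illustrative sketch, not EGNO itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def equivariant_update(x):
    # Move each point along relative vectors, weighted by an invariant
    # function of pairwise distances.
    diff = x[:, None, :] - x[None, :, :]                 # (N, N, 3)
    dist = np.linalg.norm(diff, axis=-1, keepdims=True)  # invariant
    return x + (np.exp(-dist) * diff).sum(axis=1) / len(x)

# Check: update(x R^T) == update(x) R^T for a random orthogonal R.
x = rng.standard_normal((5, 3))
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))
assert np.allclose(equivariant_update(x @ R.T), equivariant_update(x) @ R.T)
```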

Equivariant Flow Matching with Hybrid Probability Transport

1 code implementation • 12 Dec 2023 • Yuxuan Song, Jingjing Gong, Minkai Xu, Ziyao Cao, Yanyan Lan, Stefano Ermon, Hao Zhou, Wei-Ying Ma

The generation of 3D molecules requires simultaneously deciding the categorical features (atom types) and continuous features (atom coordinates).

VQGraph: Rethinking Graph Representation Space for Bridging GNNs and MLPs

1 code implementation • 4 Aug 2023 • Ling Yang, Ye Tian, Minkai Xu, Zhongyi Liu, Shenda Hong, Wei Qu, Wentao Zhang, Bin Cui, Muhan Zhang, Jure Leskovec

To address this issue, we propose to learn a new powerful graph representation space by directly labeling nodes' diverse local structures for GNN-to-MLP distillation.

Knowledge Distillation · Quantization +1
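
A generic GNN-to-MLP distillation objective looks like the sketch below: supervised cross-entropy plus a temperature-scaled KL term against the teacher GNN's soft predictions. VQGraph additionally matches codebook assignments of local structures; that token-level target is omitted, and all names here are assumptions.

```python
import torch
import torch.nn.functional as F

def distill_loss(mlp_logits, gnn_logits, labels, tau=2.0, lam=0.5):
    # Hard-label cross-entropy for the student MLP.
    ce = F.cross_entropy(mlp_logits, labels)
    # Soft KL term against the frozen teacher GNN.
    kl = F.kl_div(F.log_softmax(mlp_logits / tau, dim=-1),
                  F.log_softmax(gnn_logits / tau, dim=-1),
                  reduction="batchmean", log_target=True) * tau ** 2
    return lam * ce + (1 - lam) * kl

loss = distill_loss(torch.randn(32, 7, requires_grad=True),
                    torch.randn(32, 7), torch.randint(0, 7, (32,)))
loss.backward()
```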

MADiff: Offline Multi-agent Learning with Diffusion Models

1 code implementation • 27 May 2023 • Zhengbang Zhu, Minghuan Liu, Liyuan Mao, Bingyi Kang, Minkai Xu, Yong Yu, Stefano Ermon, Weinan Zhang

Offline reinforcement learning (RL) aims to learn policies from pre-existing datasets without further interactions, making it a challenging task.

Offline RL · Q-Learning +2

Geometric Latent Diffusion Models for 3D Molecule Generation

2 code implementations • 2 May 2023 • Minkai Xu, Alexander Powers, Ron Dror, Stefano Ermon, Jure Leskovec

Generative models, especially diffusion models (DMs), have achieved promising results for generating feature-rich geometries and advancing foundational science problems such as molecule design.

3D Molecule Generation · valid

MUDiff: Unified Diffusion for Complete Molecule Generation

no code implementations • 28 Apr 2023 • Chenqing Hua, Sitao Luan, Minkai Xu, Rex Ying, Jie Fu, Stefano Ermon, Doina Precup

Our model is a promising approach for designing stable and diverse molecules and can be applied to a wide range of tasks in molecular modeling.

3D geometry · Drug Discovery +1

When Do Graph Neural Networks Help with Node Classification? Investigating the Impact of Homophily Principle on Node Distinguishability

1 code implementation • 25 Apr 2023 • Sitao Luan, Chenqing Hua, Minkai Xu, Qincheng Lu, Jiaqi Zhu, Xiao-Wen Chang, Jie Fu, Jure Leskovec, Doina Precup

The homophily principle, i.e., that nodes with the same labels are more likely to be connected, has been believed to be the main reason for the performance advantage of Graph Neural Networks (GNNs) over plain neural networks on node classification tasks.

Node Classification · Stochastic Block Model

FusionRetro: Molecule Representation Fusion via In-Context Learning for Retrosynthetic Planning

1 code implementation • 30 Sep 2022 • Songtao Liu, Zhengkai Tu, Minkai Xu, Zuobai Zhang, Lu Lin, Rex Ying, Jian Tang, Peilin Zhao, Dinghao Wu

Current strategies use a decoupled approach of single-step retrosynthesis models and search algorithms, taking only the product as the input to predict the reactants for each planning step and ignoring valuable context information along the synthetic route.

Drug Discovery · In-Context Learning +3

GeoDiff: a Geometric Diffusion Model for Molecular Conformation Generation

2 code implementations • ICLR 2022 • Minkai Xu, Lantao Yu, Yang Song, Chence Shi, Stefano Ermon, Jian Tang

GeoDiff treats each atom as a particle and learns to directly reverse the diffusion process (i.e., transforming from a noise distribution to stable conformations) as a Markov chain.

Drug Discovery
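
Mechanically, reversing the diffusion as a Markov chain means repeatedly applying an ancestral sampling step like the DDPM-style sketch below (shown here on raw coordinates; GeoDiff's equivariant parameterization and exact schedules follow the paper, and the names are illustrative).

```python
import torch

def reverse_step(x, t, eps_model, betas):
    # One ancestral step x_t -> x_{t-1} of a DDPM reverse Markov chain.
    beta = betas[t]
    alpha = 1.0 - beta
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t]
    mean = (x - beta / torch.sqrt(1 - alpha_bar) * eps_model(x, t)) \
           / torch.sqrt(alpha)
    noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
    return mean + torch.sqrt(beta) * noise

# Toy usage on 10 "atoms" with a dummy noise predictor.
betas = torch.linspace(1e-4, 0.02, 1000)
x = reverse_step(torch.randn(10, 3), t=999,
                 eps_model=lambda x, t: -x, betas=betas)
```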

Generative Coarse-Graining of Molecular Conformations

1 code implementation • 28 Jan 2022 • Wujie Wang, Minkai Xu, Chen Cai, Benjamin Kurt Miller, Tess Smidt, Yusu Wang, Jian Tang, Rafael Gómez-Bombarelli

Coarse-graining (CG) of molecular simulations simplifies the particle representation by grouping selected atoms into pseudo-beads and drastically accelerates simulation.
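
The forward CG map itself is simple, e.g. a mass-weighted average of each bead's atoms, as sketched below with assumed names; the paper's contribution is the generative inverse (backmapping) from beads to atomistic detail.

```python
import numpy as np

def coarse_grain(x, masses, assignment, n_beads):
    # Collapse atoms into pseudo-beads by mass-weighted averaging.
    cg = np.zeros((n_beads, 3))
    for b in range(n_beads):
        idx = assignment == b
        cg[b] = np.average(x[idx], axis=0, weights=masses[idx])
    return cg

# Toy usage: 6 atoms mapped onto 2 beads.
x = np.random.default_rng(0).standard_normal((6, 3))
beads = coarse_grain(x, np.ones(6), np.array([0, 0, 0, 1, 1, 1]), 2)
```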

Predicting Molecular Conformation via Dynamic Graph Score Matching

no code implementations • NeurIPS 2021 • Shitong Luo, Chence Shi, Minkai Xu, Jian Tang

However, these non-bonded atoms may be proximal to each other in 3D space, and modeling their interactions is of crucial importance to accurately determine molecular conformations, especially for large molecules and multi-molecular complexes.

Computational chemistry

An End-to-End Framework for Molecular Conformation Generation via Bilevel Programming

1 code implementation • 15 May 2021 • Minkai Xu, Wujie Wang, Shitong Luo, Chence Shi, Yoshua Bengio, Rafael Gómez-Bombarelli, Jian Tang

Specifically, the molecular graph is first encoded in a latent space, and then the 3D structures are generated by solving a principled bilevel optimization program.

Bilevel Optimization
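
To illustrate the bilevel structure, here is a toy program whose inner problem is solved by unrolled gradient steps and whose outer loss is differentiated through that loop. This stands in for the paper's energy-minimization inner problem and is not its actual solver; all names are assumptions.

```python
import torch

def bilevel_step(z, inner_steps=5, inner_lr=0.1):
    # Inner problem: min_x ||x - z||^2, solved by unrolled gradient steps.
    x = torch.zeros_like(z)
    for _ in range(inner_steps):
        x = x - inner_lr * 2 * (x - z)
    # Outer loss evaluated at the (approximate) inner solution.
    return ((x - 1.0) ** 2).sum()

z = torch.zeros(3, requires_grad=True)
loss = bilevel_step(z)
loss.backward()          # gradient flows through the unrolled inner loop
print(z.grad)
```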

Learning Gradient Fields for Molecular Conformation Generation

6 code implementations • 9 May 2021 • Chence Shi, Shitong Luo, Minkai Xu, Jian Tang

We study a fundamental problem in computational chemistry known as molecular conformation generation: predicting stable 3D structures from 2D molecular graphs.

Computational chemistry · Translation
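
Once a gradient field (score) is learned, samples can be drawn by Langevin-type dynamics: follow the score while injecting noise. The sketch below samples directly in coordinate space with an assumed score function; the paper estimates scores of interatomic distances and converts them to coordinate gradients.

```python
import torch

def langevin_sample(score, x, n_steps=100, step=1e-2):
    # Plain (un-annealed) Langevin dynamics driven by a score function.
    for _ in range(n_steps):
        x = x + 0.5 * step * score(x) + (step ** 0.5) * torch.randn_like(x)
    return x

# Toy usage: the score of a standard normal is -x.
samples = langevin_sample(lambda x: -x, torch.randn(100, 3))
```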

Learning Neural Generative Dynamics for Molecular Conformation Generation

3 code implementations • ICLR 2021 • Minkai Xu, Shitong Luo, Yoshua Bengio, Jian Peng, Jian Tang

Inspired by the recent progress in deep generative models, in this paper, we propose a novel probabilistic framework to generate valid and diverse conformations given a molecular graph.

valid

Towards Generalized Implementation of Wasserstein Distance in GANs

1 code implementation • 7 Dec 2020 • Minkai Xu, Zhiming Zhou, Guansong Lu, Jian Tang, Weinan Zhang, Yong Yu

Wasserstein GANs (WGANs), built upon the Kantorovich-Rubinstein (KR) duality of the Wasserstein distance, are among the most theoretically sound GAN models.
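
The KR dual form W(P, Q) = sup over 1-Lipschitz f of E_P[f] - E_Q[f] translates directly into the critic objective sketched below; enforcing the Lipschitz constraint (weight clipping, gradient penalty, or the relaxations this paper studies) is the hard part and is omitted here. Names are illustrative.

```python
import torch

def wgan_critic_loss(critic, real, fake):
    # Minimizing this pushes critic(real) up and critic(fake) down,
    # approximating the KR dual objective (Lipschitz constraint not
    # enforced in this sketch).
    return critic(fake).mean() - critic(real).mean()

critic = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.ReLU(),
                             torch.nn.Linear(16, 1))
loss = wgan_critic_loss(critic, torch.randn(8, 2), torch.randn(8, 2))
loss.backward()
```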

Reciprocal Supervised Learning Improves Neural Machine Translation

1 code implementation • 5 Dec 2020 • Minkai Xu, Mingxuan Wang, Zhouhan Lin, Hao Zhou, Weinan Zhang, Lei Li

Despite the recent success on image classification, self-training has only achieved limited gains on structured prediction tasks such as neural machine translation (NMT).

Image Classification · Knowledge Distillation +4

Energy-Based Imitation Learning

1 code implementation • 20 Apr 2020 • Minghuan Liu, Tairan He, Minkai Xu, Weinan Zhang

We tackle a common scenario in imitation learning (IL), where agents try to recover the optimal policy from expert demonstrations without further access to the expert or environment reward signals.

Imitation Learning · reinforcement-learning +2

Infomax Neural Joint Source-Channel Coding via Adversarial Bit Flip

1 code implementation • 3 Apr 2020 • Yuxuan Song, Minkai Xu, Lantao Yu, Hao Zhou, Shuo Shao, Yong Yu

In this paper, motivated by the inherent connections between neural joint source-channel coding and discrete representation learning, we propose a novel regularization method called Infomax Adversarial-Bit-Flip (IABF) to improve the stability and robustness of the neural joint source-channel coding scheme.

Decoder · Representation Learning
