no code implementations • 20 Feb 2025 • Maya Bechler-Speicher, Ben Finkelshtein, Fabrizio Frasca, Luis Müller, Jan Tönshoff, Antoine Siraudin, Viktor Zaverkin, Michael M. Bronstein, Mathias Niepert, Bryan Perozzi, Mikhail Galkin, Christopher Morris
While machine learning on graphs has demonstrated promise in drug design and molecular property prediction, significant benchmarking challenges hinder its further progress and relevance.
no code implementations • 11 Feb 2025 • Anji Liu, Xuejie Liu, Dayuan Zhao, Mathias Niepert, Yitao Liang, Guy Van Den Broeck
Recent advancements in NAR models, such as diffusion language models, have demonstrated superior performance in unconditional generation compared to AR models (e.g., GPTs) of similar sizes.
no code implementations • 27 Jan 2025 • Federico Errica, Henrik Christiansen, Viktor Zaverkin, Mathias Niepert, Francesco Alesiani
For almost 70 years, researchers have mostly relied on hyper-parameter tuning to pick the width of neural networks' layers out of many possible choices.
1 code implementation • 10 Oct 2024 • Andrei Manolache, Dragos Tantaru, Mathias Niepert
In this work, we propose a simple transformer-based baseline for multimodal molecular representation learning, integrating three distinct modalities: SMILES strings, 2D graph representations, and 3D conformers of molecules.
no code implementations • 3 Oct 2024 • Duy M. H. Nguyen, Nghiem T. Diep, Trung Q. Nguyen, Hoang-Bao Le, Tai Nguyen, Tien Nguyen, TrungTin Nguyen, Nhat Ho, Pengtao Xie, Roger Wattenhofer, James Zhou, Daniel Sonntag, Mathias Niepert
State-of-the-art medical multi-modal large language models (med-MLLM), like LLaVA-Med or BioMedGPT, leverage instruction-following data in pre-training.
no code implementations • 2 Oct 2024 • Anji Liu, Oliver Broadrick, Mathias Niepert, Guy Van Den Broeck
When we apply this approach to autoregressive copula models, the combined model outperforms both models individually in unconditional and conditional text generation.
no code implementations • 30 Sep 2024 • Laurène Vaugrante, Mathias Niepert, Thilo Hagendorff
In an era where large language models (LLMs) are increasingly integrated into a wide range of everyday applications, research into these models' behavior has surged.
2 code implementations • 2 Aug 2024 • Daniel Musekamp, Marimuthu Kalimuthu, David Holzmüller, Makoto Takamoto, Mathias Niepert
While active learning (AL) is more common in other domains, it has yet to be studied extensively for neural PDE solvers.
no code implementations • 23 Jul 2024 • Makoto Takamoto, Viktor Zaverkin, Mathias Niepert
Our approach improves the accuracy of MLIPs applied to training tasks with sparse training data sets and reduces the need for pre-training computationally demanding models with large data sets.
no code implementations • 5 Jul 2024 • Duy M. H. Nguyen, An T. Le, Trung Q. Nguyen, Nghiem T. Diep, Tai Nguyen, Duy Duong-Tran, Jan Peters, Li Shen, Mathias Niepert, Daniel Sonntag
Prompt learning methods are gaining increasing attention due to their ability to customize large vision-language models to new domains using pre-trained contextual knowledge and minimal training data.
1 code implementation • 6 Jun 2024 • Jan Hagnberger, Marimuthu Kalimuthu, Daniel Musekamp, Mathias Niepert
Transformer models are increasingly used for solving Partial Differential Equations (PDEs).
1 code implementation • 27 May 2024 • Chendi Qian, Andrei Manolache, Christopher Morris, Mathias Niepert
Message-passing graph neural networks (MPNNs) have emerged as a powerful paradigm for graph-based machine learning.
1 code implementation • 25 May 2024 • Hoai-Chau Tran, Duy M. H. Nguyen, Duy M. Nguyen, Trung-Tin Nguyen, Ngan Le, Pengtao Xie, Daniel Sonntag, James Y. Zou, Binh T. Nguyen, Mathias Niepert
Increasing the throughput of the Transformer architecture, a foundational component used in numerous state-of-the-art models for vision and language tasks (e.g., GPT, LLaVa), is an important problem in machine learning.
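A common family of throughput techniques reduces the token count between layers by merging similar tokens. The sketch below is a deliberately simple toy (greedy averaging of the most similar neighbouring pair), not the matching scheme proposed in the paper; it only illustrates the general idea that merged sequences shrink while preserving redundant content.

```python
import numpy as np

def merge_most_similar(tokens, r):
    """Toy token merging: reduce the sequence by r tokens by repeatedly
    averaging the pair of neighbouring tokens with the highest cosine
    similarity. A stand-in for learned/bipartite merging schemes."""
    tokens = [t.astype(float) for t in tokens]
    for _ in range(r):
        norms = [t / (np.linalg.norm(t) + 1e-9) for t in tokens]
        sims = [float(norms[i] @ norms[i + 1]) for i in range(len(tokens) - 1)]
        i = int(np.argmax(sims))  # most redundant neighbouring pair
        tokens[i:i + 2] = [(tokens[i] + tokens[i + 1]) / 2]
    return np.stack(tokens)

# Four tokens, two redundant pairs: merging twice leaves two tokens.
out = merge_most_similar(
    np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]), r=2)
```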
1 code implementation • 24 May 2024 • Vinh Tong, Trung-Dung Hoang, Anji Liu, Guy Van Den Broeck, Mathias Niepert
We achieve FIDs of 2.38 (10 NFE) and 2.27 (10 NFE) on unconditional CIFAR10 and AFHQv2, respectively, in 5-10 minutes of training.
1 code implementation • 23 May 2024 • Viktor Zaverkin, Francesco Alesiani, Takashi Maruyama, Federico Errica, Henrik Christiansen, Makoto Takamoto, Nicolas Weber, Mathias Niepert
By learning from high-quality data, machine-learned interatomic potentials achieve accuracy on par with ab initio and first-principles methods at a fraction of their computational cost.
1 code implementation • 3 Feb 2024 • Duy M. H. Nguyen, Nina Lukashina, Tai Nguyen, An T. Le, TrungTin Nguyen, Nhat Ho, Jan Peters, Daniel Sonntag, Viktor Zaverkin, Mathias Niepert
Inspired by recent work on using ensembles of conformers in conjunction with 2D graph representations, we propose $\mathrm{E}$(3)-invariant molecular conformer aggregation networks.
1 code implementation • 27 Dec 2023 • Federico Errica, Henrik Christiansen, Viktor Zaverkin, Takashi Maruyama, Mathias Niepert, Francesco Alesiani
Long-range interactions are essential for the correct description of complex systems in many scientific fields.
1 code implementation • 3 Dec 2023 • Viktor Zaverkin, David Holzmüller, Henrik Christiansen, Federico Errica, Francesco Alesiani, Makoto Takamoto, Mathias Niepert, Johannes Kästner
Efficiently creating a concise but comprehensive data set for training machine-learned interatomic potentials (MLIPs) is an under-explored problem.
1 code implementation • 28 Nov 2023 • Anji Liu, Mathias Niepert, Guy Van Den Broeck
Further, with the help of an image encoder and decoder, our method can readily accept semantic constraints on specific regions of the image, which opens up the potential for more controlled image generation tasks.
no code implementations • 18 Nov 2023 • Duy Minh Ho Nguyen, Tan Ngoc Pham, Nghiem Tuong Diep, Nghi Quoc Phan, Quang Pham, Vinh Tong, Binh T. Nguyen, Ngan Hoang Le, Nhat Ho, Pengtao Xie, Daniel Sonntag, Mathias Niepert
Constructing a robust model that can effectively generalize to test samples under distribution shifts remains a significant challenge in the field of medical imaging.
no code implementations • 21 Oct 2023 • Francesco Alesiani, Shujian Yu, Mathias Niepert
Invariant risk minimization (IRM) is a recent proposal for discovering environment-invariant representations.
1 code implementation • 3 Oct 2023 • Chendi Qian, Andrei Manolache, Kareem Ahmed, Zhe Zeng, Guy Van Den Broeck, Mathias Niepert, Christopher Morris
Message-passing graph neural networks (MPNNs) have emerged as powerful tools for processing graph-structured input.
no code implementations • 12 Aug 2023 • Michael Cochez, Dimitrios Alivanistos, Erik Arakelyan, Max Berrendorf, Daniel Daza, Mikhail Galkin, Pasquale Minervini, Mathias Niepert, Hongyu Ren
We will first provide an overview of the different query types that can be supported by these methods and the datasets typically used for evaluation, as well as insight into their limitations.
1 code implementation • NeurIPS 2021 • David Friede, Mathias Niepert
We analyze the behavior of more complex stochastic computation graphs with multiple sequential discrete components.
1 code implementation • 26 Jul 2023 • David Friede, Christian Reimers, Heiner Stuckenschmidt, Mathias Niepert
Recent successes in image generation, model-based reinforcement learning, and text-to-image generation have demonstrated the empirical advantages of discrete latent representations, although the reasons behind their benefits remain unclear.
1 code implementation • NeurIPS 2023 • Duy M. H. Nguyen, Hoang Nguyen, Nghiem T. Diep, Tan N. Pham, Tri Cao, Binh T. Nguyen, Paul Swoboda, Nhat Ho, Shadi Albarqouni, Pengtao Xie, Daniel Sonntag, Mathias Niepert
While pre-trained deep networks on ImageNet and vision-language foundation models trained on web-scale data are prevailing approaches, their effectiveness on medical tasks is limited due to the significant domain shift between natural and medical images.
3 code implementations • 17 May 2023 • Federico Errica, Mathias Niepert
We introduce Graph-Induced Sum-Product Networks (GSPNs), a new probabilistic framework for graph representation learning that can tractably answer probabilistic queries.
2 code implementations • 27 Apr 2023 • Makoto Takamoto, Francesco Alesiani, Mathias Niepert
The experiments also show several advantages of CAPE, such as its increased ability to generalize to unseen PDE parameters without large increases in inference time and parameter count.
no code implementations • 3 Apr 2023 • Zhao Xu, Daniel Onoro Rubio, Giuseppe Serra, Mathias Niepert
The resulting sparsity of a representation is not fixed but adapts to the observation itself under a pre-defined constraint.
no code implementations • 10 Dec 2022 • Cheng Wang, Carolin Lawrence, Mathias Niepert
We aim to address both shortcomings with a class of recurrent networks that use a stochastic state transition mechanism between cell applications.
1 code implementation • 17 Oct 2022 • Vinh Tong, Dat Quoc Nguyen, Trung Thanh Huynh, Tam Thanh Nguyen, Quoc Viet Hung Nguyen, Mathias Niepert
The proposed model combines two components that jointly accomplish KG completion and alignment.
Ranked #1 on Knowledge Graph Completion on DPB-5L (French)
5 code implementations • 13 Oct 2022 • Makoto Takamoto, Timothy Praditia, Raphael Leiteritz, Dan MacKinlay, Francesco Alesiani, Dirk Pflüger, Mathias Niepert
With these metrics, we identify tasks that are challenging for recent ML methods and propose them as future challenges for the community.
1 code implementation • 4 Oct 2022 • Kareem Ahmed, Zhe Zeng, Mathias Niepert, Guy Van Den Broeck
$k$-subset sampling is ubiquitous in machine learning, enabling regularization and interpretability through sparsity.
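The sampling step itself can be illustrated with the well-known Gumbel-top-$k$ trick: perturbing the logits with Gumbel noise and keeping the top-$k$ indices draws a $k$-subset without replacement. This is a generic sketch of $k$-subset sampling, not the gradient estimator proposed in the paper.

```python
import numpy as np

def sample_k_subset(logits, k, rng):
    """Gumbel-top-k: add i.i.d. Gumbel noise to the logits and take the
    k largest entries; returns a 0/1 mask with exactly k ones."""
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    topk = np.argsort(logits + gumbel)[-k:]
    mask = np.zeros_like(logits)
    mask[topk] = 1.0
    return mask

rng = np.random.default_rng(0)
mask = sample_k_subset(np.array([2.0, 0.5, -1.0, 1.0]), k=2, rng=rng)
```

The mask is discrete, which is precisely why downstream training needs a dedicated gradient estimator.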
1 code implementation • 28 Sep 2022 • Giuseppe Serra, Mathias Niepert
Graph Neural Networks (GNNs) are a popular class of machine learning models.
1 code implementation • 11 Sep 2022 • Pasquale Minervini, Luca Franceschi, Mathias Niepert
In this work, we present Adaptive IMLE (AIMLE), the first adaptive gradient estimator for complex discrete distributions: it adaptively identifies the target distribution for IMLE by trading off the density of gradient information with the degree of bias in the gradient estimates.
no code implementations • 22 Jun 2022 • Chendi Qian, Gaurav Rattan, Floris Geerts, Christopher Morris, Mathias Niepert
Numerous subgraph-enhanced graph neural networks (GNNs) have emerged recently, provably boosting the expressive power of standard (message-passing) GNNs.
no code implementations • ACL 2022 • Bhushan Kotnis, Kiril Gashteovski, Daniel Oñoro Rubio, Vanesa Rodriguez-Tembras, Ammar Shaker, Makoto Takamoto, Mathias Niepert, Carolin Lawrence
In contrast, we explore the hypothesis that it may be beneficial to extract triple slots iteratively: first extract the easy slots, then the difficult ones by conditioning on the easy slots, thereby achieving better overall extraction.
1 code implementation • ACL 2022 • Niklas Friedrich, Kiril Gashteovski, Mingying Yu, Bhushan Kotnis, Carolin Lawrence, Mathias Niepert, Goran Glavaš
Open Information Extraction (OIE) is the task of extracting facts from sentences in the form of relations and their corresponding arguments in a schema-free manner.
1 code implementation • ACL 2022 • Kiril Gashteovski, Mingying Yu, Bhushan Kotnis, Carolin Lawrence, Mathias Niepert, Goran Glavaš
In this work, we introduce BenchIE: a benchmark and evaluation framework for comprehensive evaluation of OIE systems for English, Chinese, and German.
Ranked #1 on Open Information Extraction on BenchIE
no code implementations • 25 Jun 2021 • Jun Cheng, Carolin Lawrence, Mathias Niepert
In contrast, we propose VEGN, which models variant effect prediction using a graph neural network (GNN) that operates on a heterogeneous graph with genes and variants.
no code implementations • AKBC 2021 • Wiem Ben Rim, Carolin Lawrence, Kiril Gashteovski, Mathias Niepert, Naoaki Okazaki
With an extensive set of experiments, we perform and analyze these tests for several KGE models.
3 code implementations • NeurIPS 2021 • Mathias Niepert, Pasquale Minervini, Luca Franceschi
We propose Implicit Maximum Likelihood Estimation (I-MLE), a framework for end-to-end learning of models combining discrete exponential family distributions and differentiable neural components.
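The core of I-MLE can be sketched for the simplest case of a categorical (argmax) discrete component: the backward pass estimates the gradient as the difference between the MAP state at the current parameters and the MAP state at a target distribution obtained by shifting the parameters along the downstream gradient. This simplified version omits the paper's perturbation-based sampling and its treatment of general discrete exponential families.

```python
import numpy as np

def mle_map(theta):
    """MAP state of a categorical component: the one-hot argmax."""
    z = np.zeros_like(theta)
    z[np.argmax(theta)] = 1.0
    return z

def imle_gradient(theta, grad_z, lam=1.0):
    """I-MLE-style estimate of dL/dtheta (sketch): difference of MAP
    states at theta and at the target theta - lam * grad_z, where
    grad_z is the downstream loss gradient w.r.t. the discrete state."""
    z = mle_map(theta)
    z_target = mle_map(theta - lam * grad_z)
    return z - z_target

g = imle_gradient(np.array([1.0, 2.0, 0.0]), np.array([0.0, 5.0, 0.0]))
```

Intuitively, the estimate pushes mass away from the currently selected state and towards the state the loss prefers.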
no code implementations • ICLR 2021 • Cheng Wang, Carolin Lawrence, Mathias Niepert
Uncertainty quantification is crucial for building reliable and trustworthy machine learning systems.
1 code implementation • 12 Oct 2020 • Carolin Lawrence, Timo Sztyler, Mathias Niepert
Moreover, we show theoretically that the difference between gradient rollback's influence approximation and the true influence on a model's behavior is smaller than known bounds on the stability of stochastic gradient descent.
1 code implementation • 6 Apr 2020 • Bhushan Kotnis, Carolin Lawrence, Mathias Niepert
Representation learning for knowledge graphs (KGs) has focused on the problem of answering simple link prediction queries.
no code implementations • IJCNLP 2019 • Kosuke Akimoto, Takuya Hiraoka, Kunihiko Sadamasa, Mathias Niepert
Most existing relation extraction approaches exclusively target binary relations, and n-ary relation extraction is relatively unexplored.
1 code implementation • IJCNLP 2019 • Carolin Lawrence, Bhushan Kotnis, Mathias Niepert
Treated as a node in a fully connected graph, a placeholder token can take past and future tokens into consideration when generating the actual output token.
2 code implementations • 28 Mar 2019 • Luca Franceschi, Mathias Niepert, Massimiliano Pontil, Xiao He
With this work, we propose to jointly learn the graph structure and the parameters of graph convolutional networks (GCNs) by approximately solving a bilevel program that learns a discrete probability distribution on the edges of the graph.
Ranked #3 on Node Classification on Cora (fixed 20 nodes per class)
no code implementations • 26 Mar 2019 • Cheng Wang, Mathias Niepert, Hui Li
Although various transfer learning methods have shown promising performance in this context, our proposed method, RecSys-DAN, focuses on alleviating cross-domain and within-domain data sparsity and imbalance, learning transferable latent representations for users, items, and their interactions.
5 code implementations • 13 Mar 2019 • Ye Liu, Hui Li, Alberto Garcia-Duran, Mathias Niepert, Daniel Onoro-Rubio, David S. Rosenblum
We present MMKG, a collection of three knowledge graphs that contain both numerical features and (links to) images for all entities as well as entity alignments between pairs of KGs.
no code implementations • 25 Jan 2019 • Cheng Wang, Mathias Niepert
We aim to address both shortcomings with a class of recurrent networks that use a stochastic state transition mechanism between cell applications.
no code implementations • 12 Nov 2018 • Brandon Malone, Alberto Garcia-Duran, Mathias Niepert
Extracting actionable insight from Electronic Health Records (EHRs) poses several challenges for traditional machine learning approaches.
no code implementations • 22 Oct 2018 • Brandon Malone, Alberto García-Durán, Mathias Niepert
The polypharmacy side effect prediction problem considers cases in which two drugs taken individually do not result in a particular side effect; however, when the two drugs are taken in combination, the side effect manifests.
no code implementations • 19 Oct 2018 • Nicolas Weber, Mathias Niepert, Felipe Huici
While the efficiency problem can be partially addressed with specialized hardware and its corresponding proprietary libraries, we believe that neural network acceleration should be transparent to the user and should support all hardware platforms and deep learning libraries.
no code implementations • 27 Sep 2018 • Cheng Wang, Mathias Niepert
We aim to address both shortcomings with a class of recurrent networks that use a stochastic state transition mechanism between cell applications.
3 code implementations • EMNLP 2018 • Alberto García-Durán, Sebastijan Dumančić, Mathias Niepert
In line with previous work on static knowledge graphs, we propose to address this problem by learning latent entity and relation type representations.
no code implementations • EMNLP 2018 • Cheng Wang, Mathias Niepert, Hui Li
More importantly, LRMM is more robust than previous methods in alleviating data sparsity and the cold-start problem.
1 code implementation • 29 Jun 2018 • Sebastijan Dumancic, Alberto Garcia-Duran, Mathias Niepert
Many real-world domains can be expressed as graphs and, more generally, as multi-relational knowledge graphs.
no code implementations • 8 Jun 2018 • Daniel Oñoro-Rubio, Mathias Niepert
These shortcut connections improve performance, and it is hypothesized that this is due to their mitigating effect on the vanishing-gradient problem and the model's ability to combine feature maps from earlier and later layers.
no code implementations • 8 May 2018 • Daniel Oñoro-Rubio, Mathias Niepert, Roberto J. López-Sastre
Standard short-cut connections are connections between layers in deep neural networks which skip at least one intermediate layer.
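The standard form of such a connection is the residual block: the block's input skips the intermediate layers and is added back to their output. A minimal sketch (plain numpy, not the gated variants studied in these papers):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Short-cut (residual) connection: x skips the two transformed
    layers and is added to their output, so when the transformation is
    zero the block reduces to the identity and gradients flow directly
    to earlier layers."""
    return x + w2 @ relu(w1 @ x)

x = np.array([1.0, -2.0])
```

With zero weights the block is exactly the identity mapping, which is the property that makes very deep stacks trainable.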
no code implementations • 4 May 2018 • Mathias Niepert, Alberto Garcia-Duran
We present our ongoing work on understanding the limitations of graph convolutional networks (GCNs) as well as our work on generalizations of graph convolutions for representing more complex node attribute dependencies.
no code implementations • 23 Apr 2018 • Nicolas Weber, Florian Schmidt, Mathias Niepert, Felipe Huici
Neural network frameworks such as PyTorch and TensorFlow are the workhorses of numerous machine learning applications ranging from object recognition to machine translation.
no code implementations • 2 Feb 2018 • Florian Schmidt, Mathias Niepert, Felipe Huici
Creating a model of a computer system that can be used for tasks such as predicting future resource usage and detecting anomalies is a challenging problem.
no code implementations • 30 Jan 2018 • Alberto Garcia-Duran, Roberto Gonzalez, Daniel Onoro-Rubio, Mathias Niepert, Hui Li
This is exploited in sentiment analysis where machine learning models are used to predict the review score from the text of the review.
no code implementations • NeurIPS 2017 • Alberto Garcia-Duran, Mathias Niepert
We propose Embedding Propagation (EP), an unsupervised learning framework for graph-structured data.
2 code implementations • 14 Sep 2017 • Alberto Garcia-Duran, Mathias Niepert
We present KBLRN, a framework for end-to-end learning of knowledge base representations from latent, relational, and numerical features.
2 code implementations • AKBC 2019 • Daniel Oñoro-Rubio, Mathias Niepert, Alberto García-Durán, Roberto González, Roberto J. López-Sastre
A visual-relational knowledge graph (KG) is a multi-relational graph whose entities are associated with images.
no code implementations • 10 May 2017 • Roberto Gonzalez, Filipe Manco, Alberto Garcia-Duran, Jose Mendes, Felipe Huici, Saverio Niccolini, Mathias Niepert
We present Net2Vec, a flexible high-performance platform that allows the execution of deep learning algorithms in the communication network.
no code implementations • NeurIPS 2016 • Mathias Niepert
We present discriminative Gaifman models, a novel family of relational machine learning models.
Ranked #28 on Link Prediction on WN18
2 code implementations • 17 May 2016 • Mathias Niepert, Mohamed Ahmed, Konstantin Kutzkov
Numerous important problems can be framed as learning from graph data.
Ranked #3 on Graph Classification on COX2
no code implementations • 1 Dec 2014 • Guy Van den Broeck, Mathias Niepert
Lifted probabilistic inference algorithms have been successfully applied to a large number of symmetric graphical models.
no code implementations • 9 Aug 2014 • Mathias Niepert
Thus, we present the first lifted MCMC algorithm for probabilistic graphical models.
no code implementations • 9 Aug 2014 • Mathias Niepert, Dirk Van Gucht, Marc Gyssens
A lattice-theoretic framework is introduced that permits the study of the conditional independence (CI) implication problem relative to the class of discrete probability measures.
no code implementations • 2 May 2014 • Mathias Niepert, Pedro Domingos
A sequence of random variables is exchangeable if its joint distribution is invariant under variable permutations.
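Stated formally, for random variables $X_1, \dots, X_n$:

```latex
p(X_1 = x_1, \dots, X_n = x_n) = p\big(X_1 = x_{\pi(1)}, \dots, X_n = x_{\pi(n)}\big)
\quad \text{for every permutation } \pi \text{ of } \{1, \dots, n\}.
```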
no code implementations • 7 Jan 2014 • Mathias Niepert, Guy Van Den Broeck
We develop a theory of finite exchangeability and its relation to tractable probabilistic inference.
no code implementations • 16 Apr 2013 • Jan Noessner, Mathias Niepert, Heiner Stuckenschmidt
RockIt is a maximum a-posteriori (MAP) query engine for statistical relational models.
no code implementations • 9 Apr 2013 • Mathias Niepert
The Rao-Blackwell theorem is utilized to analyze and improve the scalability of inference in large probabilistic models that exhibit symmetries.