1 code implementation • 4 Dec 2023 • Danqi Liao, Chen Liu, Benjamin W. Christensen, Alexander Tong, Guillaume Huguet, Guy Wolf, Maximilian Nickel, Ian Adelstein, Smita Krishnaswamy
Entropy and mutual information in neural networks provide rich information on the learning process, but they have proven difficult to compute reliably in high dimensions.
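As a rough illustration of the difficulty (a minimal sketch, not the estimators proposed in the paper): a naive histogram estimate of mutual information works for a pair of one-dimensional signals, but binned estimators need exponentially many samples as more dimensions are binned jointly, which is why reliable estimation inside high-dimensional networks is hard.

```python
import numpy as np

def binned_mi(x, y, bins=16):
    """Estimate I(X; Y) in nats for two 1-D samples via a 2-D histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = x + 0.5 * rng.normal(size=10_000)     # correlated channel
print(binned_mi(x, y))                    # noticeably > 0
```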
no code implementations • 3 Oct 2023 • Guan-Horng Liu, Yaron Lipman, Maximilian Nickel, Brian Karrer, Evangelos A. Theodorou, Ricky T. Q. Chen
Modern distribution matching algorithms for training diffusion or flow models directly prescribe the time evolution of the marginal distributions between two boundary distributions.
no code implementations • 18 Sep 2023 • Dhananjay Bhaskar, Yanlei Zhang, Charles Xu, Xingzhi Sun, Oluwadamilola Fasina, Guy Wolf, Maximilian Nickel, Michael Perlmutter, Smita Krishnaswamy
In this paper, we propose Graph Differential Equation Network (GDeNet), an approach that harnesses the expressive power of solutions to PDEs on a graph to obtain continuous node- and graph-level representations for various downstream tasks.
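To make the underlying idea concrete (a toy sketch under simplified assumptions, not GDeNet itself): the heat equation dX/dt = -LX on a graph has the closed-form solution X(t) = exp(-tL) X(0), which yields continuous, time-indexed node representations.

```python
import numpy as np
from scipy.linalg import expm

# Adjacency of a 4-node path graph (hypothetical toy input).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A           # combinatorial graph Laplacian
X0 = np.eye(4)[:, :1]                    # initial feature: unit heat on node 0

for t in (0.0, 0.5, 2.0):
    Xt = expm(-t * L) @ X0               # PDE solution at time t
    print(t, Xt.ravel().round(3))        # heat spreads along the path
```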
1 code implementation • 11 Jul 2023 • Arjun Subramonian, Adina Williams, Maximilian Nickel, Yizhou Sun, Levent Sagun
The expressive power of graph neural networks is usually measured by comparing how many pairs of graphs or nodes an architecture can possibly distinguish as non-isomorphic to those distinguishable by the $k$-dimensional Weisfeiler-Lehman ($k$-WL) test.
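For reference, a minimal sketch of the 1-WL (color refinement) test mentioned above, illustrative only: if the refined color histograms of two graphs ever differ, the graphs are certainly non-isomorphic; equal histograms are inconclusive.

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """adj: dict node -> list of neighbours. Returns the color histogram."""
    colors = {v: () for v in adj}                      # uniform start color
    for _ in range(rounds):
        # New color = (own color, sorted multiset of neighbour colors);
        # keeping the raw signature makes colors comparable across graphs.
        colors = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                  for v in adj}
    return Counter(colors.values())

path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}          # P4
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}          # K_{1,3}
print(wl_colors(path) != wl_colors(star))              # True: distinguished
```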
no code implementations • 11 Jun 2023 • Neta Shaul, Ricky T. Q. Chen, Maximilian Nickel, Matt Le, Yaron Lipman
We investigate Kinetic Optimal (KO) Gaussian paths and offer the following observations: (i) We show that the kinetic energy (KE) takes a simplified form on the space of Gaussian paths, where the data is incorporated only through a single one-dimensional scalar function, called the \emph{data separation function}.
1 code implementation • 1 Jun 2023 • Oluwadamilola Fasina, Guillaume Huguet, Alexander Tong, Yanlei Zhang, Guy Wolf, Maximilian Nickel, Ian Adelstein, Smita Krishnaswamy
Although data diffusion embeddings are ubiquitous in unsupervised learning and have proven to be a viable technique for uncovering the underlying intrinsic geometry of data, diffusion embeddings are inherently limited due to their discrete nature.
1 code implementation • ICML Workshop on TAGML 2023 • Danqi Liao*, Chen Liu*, Alexander Tong, Guillaume Huguet, Guy Wolf, Maximilian Nickel, Ian Adelstein, Smita Krishnaswamy
We also observe that DSMI with the class label increases over time.
1 code implementation • 18 Apr 2023 • Karan Desai, Maximilian Nickel, Tanmay Rajpurohit, Justin Johnson, Ramakrishna Vedantam
Visual and linguistic concepts naturally organize themselves in a hierarchy, where a textual concept "dog" entails all images that contain dogs.
no code implementations • 28 Dec 2022 • Ricky T. Q. Chen, Matthew Le, Matthew Muckley, Maximilian Nickel, Karen Ullrich
We empirically verify our approach on multiple domains involving compression of video and motion-capture sequences, showing that it can automatically achieve reductions in bit rate by learning how to discretize.
1 code implementation • 6 Oct 2022 • Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, Matt Le
These paths are more efficient than diffusion paths, provide faster training and sampling, and result in better generalization.
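As a hedged illustration of this family of objectives (a minimal PyTorch sketch of conditional flow matching with straight-line paths x_t = (1-t) x0 + t x1; the model `v` and all hyperparameters are hypothetical, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

v = nn.Sequential(nn.Linear(3, 64), nn.SiLU(), nn.Linear(64, 2))  # v(x, t)

def cfm_loss(x1):                       # x1: a batch of data points
    x0 = torch.randn_like(x1)           # noise sample from the prior
    t = torch.rand(x1.shape[0], 1)      # uniform time in [0, 1]
    xt = (1 - t) * x0 + t * x1          # point on the conditional path
    target = x1 - x0                    # velocity of the straight path
    pred = v(torch.cat([xt, t], dim=1))
    return ((pred - target) ** 2).mean()

loss = cfm_loss(torch.randn(128, 2))    # one hypothetical training step
loss.backward()
```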
no code implementations • 11 Jul 2022 • Heli Ben-Hamu, Samuel Cohen, Joey Bose, Brandon Amos, Aditya Grover, Maximilian Nickel, Ricky T. Q. Chen, Yaron Lipman
Continuous Normalizing Flows (CNFs) are a class of generative models that transform a prior distribution to a model distribution by solving an ordinary differential equation (ODE).
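To make the CNF mechanics concrete (a toy sketch with a hand-picked 1-D vector field, not a trained model): the sample and its log-density correction are integrated jointly via the instantaneous change-of-variables formula d(log p)/dt = -div f.

```python
import numpy as np
from scipy.integrate import solve_ivp

def ode(t, state):
    x, logp = state
    f = -x                       # toy vector field dx/dt = f(x)
    div_f = -1.0                 # its divergence, analytic in 1-D
    return [f, -div_f]           # d(log p)/dt = -div f

sol = solve_ivp(ode, (0.0, 1.0), [2.0, 0.0])
x1, delta_logp = sol.y[0, -1], sol.y[1, -1]
print(x1, delta_logp)            # transformed sample and log-density tally
```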
1 code implementation • 14 Mar 2022 • Ricky T. Q. Chen, Brandon Amos, Maximilian Nickel
Mapping between discrete and continuous distributions is a difficult task and many have had to resort to heuristic approaches.
no code implementations • 11 Mar 2022 • Tyler L. Hayes, Maximilian Nickel, Christopher Kanan, Ludovic Denoyer, Arthur Szlam
Using this framing, we introduce an active sampling method that asks for examples from the tail of the data distribution and show that it outperforms classical active learning methods on Visual Genome.
no code implementations • 29 Sep 2021 • Aaron Lou, Maximilian Nickel, Mustafa Mukadam, Brandon Amos
We present Deep Riemannian Manifolds, a new class of neural network parameterized Riemannian manifolds that can represent and learn complex geometric structures.
1 code implementation • NeurIPS 2021 • Noam Rozen, Aditya Grover, Maximilian Nickel, Yaron Lipman
MF also produces a CNF via a solution to the change-of-variables formula; however, unlike other CNF methods, its model (learned) density is parameterized as the source (prior) density minus the divergence of a neural network (NN).
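A minimal sketch of that central quantity (a toy reconstruction, not the authors' implementation): the divergence of a neural vector field, computed exactly with autograd, subtracted from the prior density.

```python
import torch
import torch.nn as nn

u = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))

def divergence(u, x):
    """Sum of d u_i / d x_i at each point in the batch."""
    x = x.requires_grad_(True)
    out = u(x)
    div = torch.zeros(x.shape[0])
    for i in range(x.shape[1]):
        div += torch.autograd.grad(out[:, i].sum(), x, create_graph=True)[0][:, i]
    return div

x = torch.randn(8, 2)
nu = torch.distributions.Normal(0.0, 1.0).log_prob(x).sum(-1).exp()  # prior density
mu = nu - divergence(u, x)        # MF-style (unnormalized) model density
print(mu)
```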
1 code implementation • ICLR 2021 • Ricky T. Q. Chen, Brandon Amos, Maximilian Nickel
We propose a new class of parameterizations for spatio-temporal point processes which leverage Neural ODEs as a computational method and enable flexible, high-fidelity models of discrete events that are localized in continuous time and space.
1 code implementation • ICLR 2021 • Ricky T. Q. Chen, Brandon Amos, Maximilian Nickel
The existing Neural ODE formulation relies on explicit knowledge of the termination time.
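For contrast, an implicitly defined termination time can be illustrated with SciPy's event mechanism (illustrative only; the paper's contribution is a differentiable formulation, which this sketch does not provide): integration stops when an event function crosses zero, so the end time is an output rather than an input.

```python
import numpy as np
from scipy.integrate import solve_ivp

def dynamics(t, y):                 # toy free-fall state: [height, velocity]
    return [y[1], -9.81]

def hit_ground(t, y):               # event: height crosses zero
    return y[0]
hit_ground.terminal = True          # stop the solver at the event
hit_ground.direction = -1           # only on downward crossings

sol = solve_ivp(dynamics, (0, 10), [10.0, 0.0], events=hit_ground)
print(sol.t_events[0])              # implicit termination time, ~1.43 s
```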
1 code implementation • 6 Oct 2020 • Ramakrishna Vedantam, Arthur Szlam, Maximilian Nickel, Ari Morcos, Brenden Lake
Humans can learn and reason under substantial uncertainty in a space of infinitely many concepts, including structured relational concepts ("a scene with objects that have the same color") and ad-hoc categories defined through goals ("objects that could fall on one's head").
no code implementations • NeurIPS 2020 • Emile Mathieu, Maximilian Nickel
Normalizing flows have shown great promise for modelling flexible probability distributions in a computationally tractable way.
no code implementations • 28 Feb 2020 • Maximilian Nickel, Matthew Le
Multivariate Hawkes Processes (MHPs) are an important class of temporal point processes that have enabled key advances in understanding and predicting social information systems.
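For background, a minimal sketch of the classical exponential-kernel MHP intensity (parameters here are hypothetical, not fitted): each past event in dimension j raises the intensity of dimension i by A[i, j], decaying at rate beta.

```python
import numpy as np

mu = np.array([0.2, 0.1])                      # baseline rates per dimension
A = np.array([[0.3, 0.6],                      # A[i, j]: excitation of i by j
              [0.1, 0.2]])
beta = 1.5                                     # decay rate of the kernel

events = [(0.4, 0), (1.1, 1), (2.0, 0)]        # (time, dimension) history

def intensity(t):
    lam = mu.copy()
    for tj, dj in events:
        if tj < t:
            lam += A[:, dj] * beta * np.exp(-beta * (t - tj))
    return lam

print(intensity(2.5))                          # conditional intensity at t = 2.5
```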
no code implementations • IJCNLP 2019 • Matthew Le, Y-Lan Boureau, Maximilian Nickel
Theory of mind, i.e., the ability to reason about the intents and beliefs of agents, is an important task in artificial intelligence and central to resolving ambiguous references in natural language dialogue.
1 code implementation • NeurIPS 2019 • Qi Liu, Maximilian Nickel, Douwe Kiela
Learning from graph-structured data is an important task in machine learning and artificial intelligence, for which Graph Neural Networks (GNNs) have shown great promise.
1 code implementation • ICCV 2019 • Senthil Purushwalkam, Maximilian Nickel, Abhinav Gupta, Marc'Aurelio Ranzato
When extending the evaluation to the generalized setting, which also accounts for pairs seen during training, we discover that naive baseline methods perform comparably to or better than current approaches.
no code implementations • ACL 2019 • Matt Le, Stephen Roller, Laetitia Papaxanthos, Douwe Kiela, Maximilian Nickel
Moreover, in contrast with other methods, the hierarchical nature of hyperbolic space allows us to learn highly efficient representations and to improve the taxonomic consistency of the inferred hierarchies.
3 code implementations • ICML 2018 • Maximilian Nickel, Douwe Kiela
We are concerned with the discovery of hierarchical relationships from large-scale unstructured similarity scores.
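A minimal sketch of the Lorentz-model geometry such embeddings live in (standard formulas; the training procedure is omitted): points sit on the hyperboloid with Lorentzian inner product -1, and distances are d(x, y) = arcosh(-<x, y>_L).

```python
import numpy as np

def lorentz_inner(x, y):
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def lift(v):
    """Map a Euclidean point v to the hyperboloid x0 = sqrt(1 + |v|^2)."""
    return np.concatenate([[np.sqrt(1.0 + v @ v)], v])

def lorentz_dist(x, y):
    return np.arccosh(np.clip(-lorentz_inner(x, y), 1.0, None))

x, y = lift(np.array([0.3, 0.1])), lift(np.array([-0.5, 0.8]))
print(lorentz_dist(x, y))      # hyperbolic distance between the two points
```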
2 code implementations • ACL 2018 • Stephen Roller, Douwe Kiela, Maximilian Nickel
Methods for unsupervised hypernym detection may broadly be categorized according to two paradigms: pattern-based and distributional methods.
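The pattern-based paradigm can be illustrated with a single Hearst pattern (a toy regex and sentence; real systems use many more patterns over large corpora):

```python
import re

HEARST = re.compile(r"(\w+?)s? such as ((?:\w+(?:, )?)+(?: and \w+)?)")

def extract_hypernyms(text):
    pairs = []
    for m in HEARST.finditer(text):
        hyper = m.group(1)                      # candidate hypernym
        for hypo in re.split(r",\s*|\s+and\s+", m.group(2)):
            if hypo.strip():
                pairs.append((hypo.strip(), hyper))
    return pairs

print(extract_hypernyms("We saw animals such as dogs, cats and horses."))
# [('dogs', 'animal'), ('cats', 'animal'), ('horses', 'animal')]
```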
1 code implementation • CVPR 2018 • Andreas Veit, Maximilian Nickel, Serge Belongie, Laurens van der Maaten
The variety, abundance, and structured nature of hashtags make them an interesting data source for training vision models.
1 code implementation • 30 Oct 2017 • Armand Joulin, Edouard Grave, Piotr Bojanowski, Maximilian Nickel, Tomas Mikolov
This paper shows that a simple baseline based on a Bag-of-Words (BoW) representation learns surprisingly good knowledge graph embeddings.
no code implementations • NAACL 2018 • Douwe Kiela, Alexis Conneau, Allan Jabri, Maximilian Nickel
We introduce a variety of models, trained on a supervised image captioning corpus to predict the image features for a given caption, to perform sentence representation grounding.
no code implementations • 5 Jul 2017 • Théo Trouillon, Maximilian Nickel
Embeddings of knowledge graphs have received significant attention due to their excellent performance for tasks like link prediction and entity resolution.
9 code implementations • NeurIPS 2017 • Maximilian Nickel, Douwe Kiela
Representation learning has become an invaluable approach for learning from symbolic data such as text and graphs.
Ranked #2 on Link Prediction on WordNet
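A minimal sketch of the Poincaré-ball distance underlying these embeddings (the standard formula; training is omitted): distances grow rapidly near the boundary of the unit ball, which is what makes the space well suited to trees and hierarchies.

```python
import numpy as np

def poincare_dist(u, v):
    sq = np.sum((u - v) ** 2)
    du, dv = 1.0 - u @ u, 1.0 - v @ v       # both points must lie in the unit ball
    return np.arccosh(1.0 + 2.0 * sq / (du * dv))

root = np.array([0.01, 0.0])                # near the origin: a "general" node
leaf = np.array([0.85, 0.3])                # near the boundary: a "specific" node
print(poincare_dist(root, leaf))
```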
no code implementations • 11 Sep 2016 • Volker Tresp, Maximilian Nickel
We provide a survey on relational models.
4 code implementations • 16 Oct 2015 • Maximilian Nickel, Lorenzo Rosasco, Tomaso Poggio
Learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs.
Ranked #8 on Link Prediction on FB15k
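A minimal sketch of the holographic scoring function (the standard HolE formula, with random, untrained vectors): a triple is scored by the relation embedding against the circular correlation of the subject and object embeddings, computable via the FFT.

```python
import numpy as np

def circular_correlation(a, b):
    # [a * b]_k = sum_i a_i * b_{(i + k) mod d}, via the FFT identity
    # a * b = ifft(conj(fft(a)) ⊙ fft(b)).
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

d = 16
rng = np.random.default_rng(0)
e_s, e_o, r_p = rng.normal(size=(3, d))     # subject, object, relation vectors

score = r_p @ circular_correlation(e_s, e_o)
prob = 1.0 / (1.0 + np.exp(-score))         # plausibility of the triple
print(prob)
```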
2 code implementations • 2 Mar 2015 • Maximilian Nickel, Kevin Murphy, Volker Tresp, Evgeniy Gabrilovich
In this paper, we provide a review of how such statistical models can be "trained" on large knowledge graphs, and then used to predict new facts about the world (which is equivalent to predicting new edges in the graph).
no code implementations • NeurIPS 2014 • Maximilian Nickel, Xueyan Jiang, Volker Tresp
Tensor factorizations have become popular methods for learning from multi-relational data.
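A minimal sketch of the factorization idea behind such models, in the style of RESCAL (random parameters for illustration, not a trained model): each relation's slice of the adjacency tensor is approximated as A R_k A^T, with one latent vector per entity and one interaction matrix per relation.

```python
import numpy as np

n, r = 5, 3                                  # entities, latent dimensions
rng = np.random.default_rng(0)
A = rng.normal(size=(n, r))                  # one latent vector per entity
R_k = rng.normal(size=(r, r))                # interaction matrix for relation k

scores = A @ R_k @ A.T                       # scores[i, j]: plausibility of
print(scores[0, 1])                          # relation_k holding between i and j
```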
no code implementations • 10 Jun 2013 • Maximilian Nickel, Volker Tresp
Tensor factorizations have become increasingly popular approaches for various learning tasks on structured data.