Search Results for author: Maximilian Nickel

Found 36 papers, 19 papers with code

Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy

no code implementations • 4 Dec 2023 • Danqi Liao, Chen Liu, Benjamin W. Christensen, Alexander Tong, Guillaume Huguet, Guy Wolf, Maximilian Nickel, Ian Adelstein, Smita Krishnaswamy

Entropy and mutual information in neural networks provide rich information on the learning process, but they have proven difficult to compute reliably in high dimensions.

Generalized Schrödinger Bridge Matching

1 code implementation • 3 Oct 2023 • Guan-Horng Liu, Yaron Lipman, Maximilian Nickel, Brian Karrer, Evangelos A. Theodorou, Ricky T. Q. Chen

Modern distribution matching algorithms for training diffusion or flow models directly prescribe the time evolution of the marginal distributions between two boundary distributions.

Graph topological property recovery with heat and wave dynamics-based features on graphs

no code implementations • 18 Sep 2023 • Dhananjay Bhaskar, Yanlei Zhang, Charles Xu, Xingzhi Sun, Oluwadamilola Fasina, Guy Wolf, Maximilian Nickel, Michael Perlmutter, Smita Krishnaswamy

In this paper, we propose Graph Differential Equation Network (GDeNet), an approach that harnesses the expressive power of solutions to PDEs on a graph to obtain continuous node- and graph-level representations for various downstream tasks.

Weisfeiler and Leman Go Measurement Modeling: Probing the Validity of the WL Test

1 code implementation • 11 Jul 2023 • Arjun Subramonian, Adina Williams, Maximilian Nickel, Yizhou Sun, Levent Sagun

The expressive power of graph neural networks is usually measured by comparing how many pairs of graphs or nodes an architecture can possibly distinguish as non-isomorphic to those distinguishable by the $k$-dimensional Weisfeiler-Leman ($k$-WL) test.
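As a concrete illustration of what the $k$-WL hierarchy measures, the $k=1$ case (color refinement) fits in a few lines. The graphs, hashing scheme, and round count below are illustrative choices for this sketch, not the paper's protocol:

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """1-dimensional Weisfeiler-Leman color refinement.

    adj maps each node to its neighbor list. Returns the final
    multiset of node colors as a Counter of hashable color strings.
    """
    colors = {v: "0" for v in adj}  # start from a uniform coloring
    for _ in range(rounds):
        # New color = hash of (own color, sorted multiset of neighbor colors).
        colors = {
            v: str(hash((colors[v], tuple(sorted(colors[u] for u in adj[v])))))
            for v in adj
        }
    return Counter(colors.values())

def wl_distinguishes(adj_a, adj_b, rounds=3):
    """True if 1-WL certifies the two graphs as non-isomorphic."""
    return wl_colors(adj_a, rounds) != wl_colors(adj_b, rounds)
```

1-WL separates a triangle from a 3-node path, but not a 6-cycle from two disjoint triangles (both are 2-regular), which is exactly the kind of gap that motivates measuring GNN expressivity against the higher-order $k$-WL tests.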

On Kinetic Optimal Probability Paths for Generative Models

no code implementations • 11 Jun 2023 • Neta Shaul, Ricky T. Q. Chen, Maximilian Nickel, Matt Le, Yaron Lipman

We investigate Kinetic Optimal (KO) Gaussian paths and offer the following observations: (i) we show that the kinetic energy (KE) takes a simplified form on the space of Gaussian paths, where the data is incorporated only through a single one-dimensional scalar function, called the "data separation function".

Neural FIM for learning Fisher Information Metrics from point cloud data

1 code implementation • 1 Jun 2023 • Oluwadamilola Fasina, Guillaume Huguet, Alexander Tong, Yanlei Zhang, Guy Wolf, Maximilian Nickel, Ian Adelstein, Smita Krishnaswamy

Although data diffusion embeddings are ubiquitous in unsupervised learning and have proven to be a viable technique for uncovering the underlying intrinsic geometry of data, diffusion embeddings are inherently limited due to their discrete nature.

Hyperbolic Image-Text Representations

1 code implementation • 18 Apr 2023 • Karan Desai, Maximilian Nickel, Tanmay Rajpurohit, Justin Johnson, Ramakrishna Vedantam

Visual and linguistic concepts naturally organize themselves in a hierarchy, where a textual concept "dog" entails all images that contain dogs.

Image Classification • Retrieval +1

Latent Discretization for Continuous-time Sequence Compression

no code implementations • 28 Dec 2022 • Ricky T. Q. Chen, Matthew Le, Matthew Muckley, Maximilian Nickel, Karen Ullrich

We empirically verify our approach on multiple domains involving compression of video and motion-capture sequences, showing that it can automatically achieve reductions in bit rate by learning how to discretize.

Flow Matching for Generative Modeling

1 code implementation • 6 Oct 2022 • Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, Matt Le

These paths are more efficient than diffusion paths, provide faster training and sampling, and result in better generalization.

Density Estimation

Matching Normalizing Flows and Probability Paths on Manifolds

no code implementations • 11 Jul 2022 • Heli Ben-Hamu, Samuel Cohen, Joey Bose, Brandon Amos, Aditya Grover, Maximilian Nickel, Ricky T. Q. Chen, Yaron Lipman

Continuous Normalizing Flows (CNFs) are a class of generative models that transform a prior distribution to a model distribution by solving an ordinary differential equation (ODE).
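The ODE view can be made concrete with a toy sketch: for a linear vector field f(x) = Ax standing in for a neural network, the instantaneous change of variables d log p/dt = -div f reduces to -tr(A), so a forward-Euler integration should recover the closed-form log-density change -t·tr(A). The field, step count, and integrator below are illustrative assumptions, not any paper's setup:

```python
import numpy as np

def cnf_flow(x0, A, t1=1.0, steps=1000):
    """Integrate dx/dt = A x together with the instantaneous
    change of variables d log p / dt = -div f = -trace(A),
    using forward Euler. Returns the final state and the
    accumulated change in log-density along the trajectory."""
    x, dlogp = np.asarray(x0, float), 0.0
    dt = t1 / steps
    for _ in range(steps):
        x = x + dt * (A @ x)          # Euler step of the ODE
        dlogp += -dt * np.trace(A)    # divergence of a linear field is trace(A)
    return x, dlogp
```

For a diagonal A the exact solution is x(t) = exp(At) x0 and the log-density change is exactly -t·tr(A), which makes the sketch easy to check.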

Semi-Discrete Normalizing Flows through Differentiable Tessellation

1 code implementation • 14 Mar 2022 • Ricky T. Q. Chen, Brandon Amos, Maximilian Nickel

Mapping between discrete and continuous distributions is a difficult task, and many works have had to resort to heuristic approaches.

Quantization

Can I see an Example? Active Learning the Long Tail of Attributes and Relations

no code implementations • 11 Mar 2022 • Tyler L. Hayes, Maximilian Nickel, Christopher Kanan, Ludovic Denoyer, Arthur Szlam

Using this framing, we introduce an active sampling method that asks for examples from the tail of the data distribution and show that it outperforms classical active learning methods on Visual Genome.

Active Learning

Learning Complex Geometric Structures from Data with Deep Riemannian Manifolds

no code implementations • 29 Sep 2021 • Aaron Lou, Maximilian Nickel, Mustafa Mukadam, Brandon Amos

We present Deep Riemannian Manifolds, a new class of neural network parameterized Riemannian manifolds that can represent and learn complex geometric structures.

Moser Flow: Divergence-based Generative Modeling on Manifolds

1 code implementation • NeurIPS 2021 • Noam Rozen, Aditya Grover, Maximilian Nickel, Yaron Lipman

MF also produces a CNF via a solution to the change-of-variable formula; however, unlike other CNF methods, its model (learned) density is parameterized as the source (prior) density minus the divergence of a neural network (NN).

Density Estimation
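A minimal numerical sketch of this parameterization, with a hand-picked vector field v standing in for the learned network and finite differences in place of an exact divergence (both are assumptions of this toy, not the paper's method):

```python
import numpy as np

def divergence(f, x, eps=1e-5):
    """Estimate div f(x) by central finite differences, one coordinate at a time."""
    x = np.asarray(x, float)
    div = 0.0
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        div += (f(x + e)[i] - f(x - e)[i]) / (2 * eps)
    return div

def moser_density(prior_pdf, v, x):
    """Moser Flow-style parameterization: model density at x is the
    prior density minus the divergence of the vector field v at x."""
    return prior_pdf(x) - divergence(v, x)
```

With a linear field v(x) = c·x the divergence is c times the dimension, so the sketch can be checked against the analytic value.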

Neural Spatio-Temporal Point Processes

1 code implementation • ICLR 2021 • Ricky T. Q. Chen, Brandon Amos, Maximilian Nickel

We propose a new class of parameterizations for spatio-temporal point processes which leverage Neural ODEs as a computational method and enable flexible, high-fidelity models of discrete events that are localized in continuous time and space.

Epidemiology • Point Processes

CURI: A Benchmark for Productive Concept Learning Under Uncertainty

1 code implementation • 6 Oct 2020 • Ramakrishna Vedantam, Arthur Szlam, Maximilian Nickel, Ari Morcos, Brenden Lake

Humans can learn and reason under substantial uncertainty in a space of infinitely many concepts, including structured relational concepts ("a scene with objects that have the same color") and ad-hoc categories defined through goals ("objects that could fall on one's head").

Meta-Learning • Systematic Generalization

Riemannian Continuous Normalizing Flows

no code implementations • NeurIPS 2020 • Emile Mathieu, Maximilian Nickel

Normalizing flows have shown great promise for modelling flexible probability distributions in a computationally tractable way.

Learning Multivariate Hawkes Processes at Scale

no code implementations • 28 Feb 2020 • Maximilian Nickel, Matthew Le

Multivariate Hawkes Processes (MHPs) are an important class of temporal point processes that have enabled key advances in understanding and predicting social information systems.

Point Processes
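For intuition, the conditional intensity of a univariate Hawkes process with an exponential kernel (the multivariate case adds a sum over dimensions with a matrix of excitation weights) can be sketched as follows; the parameter names are conventional, not taken from the paper:

```python
import math

def hawkes_intensity(t, events, mu, alpha, beta):
    """Conditional intensity of a univariate Hawkes process:
    lambda(t) = mu + sum over past events t_k of alpha * exp(-beta * (t - t_k)).
    Each past event excites the process, with influence decaying exponentially."""
    return mu + sum(alpha * math.exp(-beta * (t - tk)) for tk in events if tk < t)
```

With no past events the intensity is just the base rate mu, and each event adds a bump of height alpha that decays at rate beta.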

Revisiting the Evaluation of Theory of Mind through Question Answering

no code implementations • IJCNLP 2019 • Matthew Le, Y-Lan Boureau, Maximilian Nickel

Theory of mind, i.e., the ability to reason about the intents and beliefs of agents, is an important capability in artificial intelligence and is central to resolving ambiguous references in natural language dialogue.

Question Answering

Hyperbolic Graph Neural Networks

1 code implementation • NeurIPS 2019 • Qi Liu, Maximilian Nickel, Douwe Kiela

Learning from graph-structured data is an important task in machine learning and artificial intelligence, for which Graph Neural Networks (GNNs) have shown great promise.

BIG-bench Machine Learning • Representation Learning

Task-Driven Modular Networks for Zero-Shot Compositional Learning

1 code implementation • ICCV 2019 • Senthil Purushwalkam, Maximilian Nickel, Abhinav Gupta, Marc'Aurelio Ranzato

When extending the evaluation to the generalized setting which accounts also for pairs seen during training, we discover that naive baseline methods perform similarly or better than current approaches.

Attribute • Novel Concepts +1

Inferring Concept Hierarchies from Text Corpora via Hyperbolic Embeddings

no code implementations • ACL 2019 • Matt Le, Stephen Roller, Laetitia Papaxanthos, Douwe Kiela, Maximilian Nickel

Moreover, in contrast with other methods, the hierarchical nature of hyperbolic space allows us to learn highly efficient representations and to improve the taxonomic consistency of the inferred hierarchies.

Learning Continuous Hierarchies in the Lorentz Model of Hyperbolic Geometry

3 code implementations • ICML 2018 • Maximilian Nickel, Douwe Kiela

We are concerned with the discovery of hierarchical relationships from large-scale unstructured similarity scores.
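Embeddings in the Lorentz model live on a hyperboloid, where the geodesic distance between points has a simple closed form. A minimal sketch (the lift function and the example points are illustrative, not the paper's training setup):

```python
import math

def lorentz_inner(x, y):
    """Lorentzian inner product <x, y>_L = -x0*y0 + sum_i xi*yi."""
    return -x[0] * y[0] + sum(a * b for a, b in zip(x[1:], y[1:]))

def lorentz_dist(x, y):
    """Geodesic distance on the hyperboloid: arccosh(-<x, y>_L).
    The max(1, .) clamp guards against floating-point dips below 1."""
    return math.acosh(max(1.0, -lorentz_inner(x, y)))

def lift(v):
    """Lift a Euclidean point v onto the hyperboloid <x, x>_L = -1, x0 > 0."""
    return [math.sqrt(1.0 + sum(c * c for c in v))] + list(v)
```

For example, the point (cosh t, sinh t) lies at geodesic distance exactly t from the hyperboloid's apex (1, 0), which makes the formula easy to sanity-check.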

Hearst Patterns Revisited: Automatic Hypernym Detection from Large Text Corpora

2 code implementations • ACL 2018 • Stephen Roller, Douwe Kiela, Maximilian Nickel

Methods for unsupervised hypernym detection may broadly be categorized according to two paradigms: pattern-based and distributional methods.
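The pattern-based paradigm can be illustrated with a single regex for the prototypical "X such as Y" Hearst pattern. This toy is an assumption-laden sketch (the pattern, head-noun heuristic, and splitting are all simplifications), not the paper's extraction pipeline:

```python
import re

# The prototypical Hearst pattern: "NP such as NP (, NP)* (and NP)".
PATTERN = re.compile(r"(\w+(?: \w+)?) such as ((?:\w+,? )*(?:and )?\w+)")

def extract_hypernyms(text):
    """Return (hyponym, hypernym) pairs matched by the 'such as' pattern."""
    pairs = []
    for m in PATTERN.finditer(text):
        hyper = m.group(1).split()[-1]  # take the last word as a crude head noun
        for hypo in re.split(r", and |, | and ", m.group(2)):
            pairs.append((hypo.strip(), hyper))
    return pairs
```

On "He studies mammals such as cats, dogs and whales." this yields (cats, mammals), (dogs, mammals), and (whales, mammals); distributional methods instead score such pairs from corpus co-occurrence statistics.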

Separating Self-Expression and Visual Content in Hashtag Supervision

1 code implementation • CVPR 2018 • Andreas Veit, Maximilian Nickel, Serge Belongie, Laurens van der Maaten

The variety, abundance, and structured nature of hashtags make them an interesting data source for training vision models.

Retrieval

Fast Linear Model for Knowledge Graph Embeddings

1 code implementation • 30 Oct 2017 • Armand Joulin, Edouard Grave, Piotr Bojanowski, Maximilian Nickel, Tomas Mikolov

This paper shows that a simple baseline based on a Bag-of-Words (BoW) representation learns surprisingly good knowledge graph embeddings.

General Classification • Knowledge Base Completion +2

Learning Visually Grounded Sentence Representations

no code implementations • NAACL 2018 • Douwe Kiela, Alexis Conneau, Allan Jabri, Maximilian Nickel

We introduce a variety of models, trained on a supervised image captioning corpus to predict the image features for a given caption, to perform sentence representation grounding.

Language Modelling • Representation Learning +2

Complex and Holographic Embeddings of Knowledge Graphs: A Comparison

no code implementations • 5 Jul 2017 • Théo Trouillon, Maximilian Nickel

Embeddings of knowledge graphs have received significant attention due to their excellent performance for tasks like link prediction and entity resolution.

Entity Resolution • Knowledge Graph Embeddings +2

Holographic Embeddings of Knowledge Graphs

4 code implementations • 16 Oct 2015 • Maximilian Nickel, Lorenzo Rosasco, Tomaso Poggio

Learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs.

Knowledge Graphs • Link Prediction +1

A Review of Relational Machine Learning for Knowledge Graphs

2 code implementations • 2 Mar 2015 • Maximilian Nickel, Kevin Murphy, Volker Tresp, Evgeniy Gabrilovich

In this paper, we provide a review of how such statistical models can be "trained" on large knowledge graphs, and then used to predict new facts about the world (which is equivalent to predicting new edges in the graph).

BIG-bench Machine Learning • Knowledge Graphs

Logistic Tensor Factorization for Multi-Relational Data

no code implementations • 10 Jun 2013 • Maximilian Nickel, Volker Tresp

Tensor factorizations have become increasingly popular approaches for various learning tasks on structured data.

Relational Reasoning
