1 code implementation • 7 Feb 2022 • Hannes Stärk, Octavian-Eugen Ganea, Lagnajit Pattanaik, Regina Barzilay, Tommi Jaakkola
Predicting how a drug-like molecule binds to a specific protein target is a core problem in drug discovery.
Ranked #6 on Blind Docking on PDBBind
1 code implementation • ICLR 2022 • Octavian-Eugen Ganea, Xinyuan Huang, Charlotte Bunne, Yatao Bian, Regina Barzilay, Tommi Jaakkola, Andreas Krause
Protein complex formation is a central problem in biology, being involved in most of the cell's processes, and essential for applications, e.g. drug design or protein engineering.
5 code implementations • ICLR 2022 • Tian Xie, Xiang Fu, Octavian-Eugen Ganea, Regina Barzilay, Tommi Jaakkola
Generating the periodic structure of stable materials is a long-standing challenge for the material design community.
1 code implementation • NeurIPS 2021 • Octavian-Eugen Ganea, Lagnajit Pattanaik, Connor W. Coley, Regina Barzilay, Klavs F. Jensen, William H. Green, Tommi S. Jaakkola
Prediction of a molecule's 3D conformer ensemble from the molecular graph holds a key role in areas of cheminformatics and drug discovery.
1 code implementation • 24 Nov 2020 • Lagnajit Pattanaik, Octavian-Eugen Ganea, Ian Coley, Klavs F. Jensen, William H. Green, Connor W. Coley
Molecules with identical graph connectivity can exhibit different physical and biological properties if they differ in stereochemistry, a spatial structural characteristic.
2 code implementations • 8 Jun 2020 • Benson Chen, Gary Bécigneul, Octavian-Eugen Ganea, Regina Barzilay, Tommi Jaakkola
Current graph neural network (GNN) architectures naively average or sum node embeddings into an aggregated graph representation, potentially losing structural or semantic information; a minimal sketch of such a naive readout is shown below.
Ranked #1 on Graph Regression on Lipophilicity (using extra training data)
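For context, here is a minimal sketch of the kind of naive readout this entry criticizes, using hypothetical node embeddings; this is an illustration only, not code from the paper.

```python
import numpy as np

# Hypothetical node embeddings for a graph with 5 nodes and 8 features,
# e.g. the output of a few message-passing layers.
rng = np.random.default_rng(0)
node_embeddings = rng.normal(size=(5, 8))

# Naive readouts: every node contributes symmetrically, so information about
# which node produced which embedding (structure) is discarded.
graph_sum = node_embeddings.sum(axis=0)    # sum pooling
graph_mean = node_embeddings.mean(axis=0)  # mean (average) pooling

print(graph_sum.shape, graph_mean.shape)   # (8,) (8,)
```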
1 code implementation • 20 Feb 2020 • Calin Cruceru, Gary Bécigneul, Octavian-Eugen Ganea
Representing graphs as sets of node embeddings in certain curved Riemannian manifolds has recently gained momentum in machine learning due to their desirable geometric inductive biases, e.g., hierarchical structures benefit from hyperbolic geometry.
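As a hedged illustration of why hyperbolic geometry suits hierarchies, the sketch below computes the standard distance in the Poincaré-ball model of hyperbolic space; the example points are arbitrary and nothing here is taken from the paper's code.

```python
import numpy as np

def poincare_distance(x, y, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincare ball."""
    sq_diff = np.sum((x - y) ** 2)
    sq_x = np.sum(x ** 2)
    sq_y = np.sum(y ** 2)
    arg = 1.0 + 2.0 * sq_diff / ((1.0 - sq_x) * (1.0 - sq_y) + eps)
    return np.arccosh(arg)

# Points near the origin behave almost like Euclidean points, while points
# near the boundary are exponentially far apart, which is what makes
# tree-like hierarchies easy to embed with low distortion.
print(poincare_distance(np.array([0.1, 0.0]), np.array([0.2, 0.0])))
print(poincare_distance(np.array([0.95, 0.0]), np.array([0.0, 0.95])))
```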
1 code implementation • ICLR 2020 • Ondrej Skopek, Octavian-Eugen Ganea, Gary Bécigneul
Euclidean geometry has historically been the typical "workhorse" for machine learning applications due to its power and simplicity.
no code implementations • ICML 2020 • Gregor Bachmann, Gary Bécigneul, Octavian-Eugen Ganea
Interest has recently been rising in methods that represent data in non-Euclidean spaces, e.g. hyperbolic or spherical, which provide specific inductive biases useful for certain real-world data properties, e.g. scale-free, hierarchical or cyclical structure.
no code implementations • 23 Jul 2019 • Octavian-Eugen Ganea, Yashas Annadani, Gary Bécigneul
We take steps towards understanding the "posterior collapse (PC)" difficulty in variational autoencoders (VAEs), i.e.
no code implementations • 21 Feb 2019 • Octavian-Eugen Ganea, Sylvain Gelly, Gary Bécigneul, Aliaksei Severyn
The Softmax function on top of a final linear layer is the de facto method to output probability distributions in neural networks.
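A minimal sketch of that de facto pattern, with assumed layer sizes and variable names (illustrative only, not the paper's implementation): logits from a final linear layer are mapped to a probability distribution with a numerically stable softmax.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)  # shift logits for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
hidden = rng.normal(size=(4, 16))          # hypothetical penultimate features
W, b = rng.normal(size=(16, 10)), np.zeros(10)

logits = hidden @ W + b                    # final linear layer
probs = softmax(logits)                    # each row sums to 1
print(probs.sum(axis=-1))
```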
1 code implementation • 15 Oct 2018 • Alexandru Tifrea, Gary Bécigneul, Octavian-Eugen Ganea
Words are not created equal.
1 code implementation • ICLR 2019 • Gary Bécigneul, Octavian-Eugen Ganea
Several first-order stochastic optimization methods commonly used in the Euclidean domain, such as stochastic gradient descent (SGD), accelerated gradient descent or variance-reduced methods, have already been adapted to certain Riemannian settings.
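As a concrete, hedged example of what adapting gradient descent to a Riemannian setting means, the sketch below runs Riemannian gradient descent on the unit sphere: the Euclidean gradient is projected onto the tangent space and the update follows the exponential map instead of a straight line. The objective, step size, and helper names are assumptions made for illustration, not details from the paper.

```python
import numpy as np

def sphere_exp(x, v):
    """Exponential map on the unit sphere at x applied to tangent vector v."""
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:
        return x
    return np.cos(norm_v) * x + np.sin(norm_v) * (v / norm_v)

# Minimize f(x) = <a, x> over the unit sphere; the optimum is x* = -a / ||a||.
a = np.array([1.0, 2.0, 2.0])
x = np.array([1.0, 0.0, 0.0])
lr = 0.1
for _ in range(200):
    egrad = a                              # Euclidean gradient of f
    rgrad = egrad - np.dot(egrad, x) * x   # project onto the tangent space at x
    x = sphere_exp(x, -lr * rgrad)         # Riemannian gradient step

print(x, -a / np.linalg.norm(a))           # x should be close to -a / ||a||
```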
no code implementations • WS 2018 • Valentin Trifonov, Octavian-Eugen Ganea, Anna Potapenko, Thomas Hofmann
Previous research on word embeddings has shown that sparse representations, which can be either learned on top of existing dense embeddings or obtained through model constraints during training time, have the benefit of increased interpretability: to some degree, each dimension can be understood by a human and associated with a recognizable feature in the data.
1 code implementation • CoNLL 2018 • Nikolaos Kolitsas, Octavian-Eugen Ganea, Thomas Hofmann
Entity Linking (EL) is an essential task for semantic text understanding and information extraction.
Ranked #1 on Entity Linking on OKE-2015
3 code implementations • NeurIPS 2018 • Octavian-Eugen Ganea, Gary Bécigneul, Thomas Hofmann
However, the representational power of hyperbolic geometry is not yet on par with Euclidean geometry, mostly because of the absence of corresponding hyperbolic neural network layers.
3 code implementations • ICML 2018 • Octavian-Eugen Ganea, Gary Bécigneul, Thomas Hofmann
Learning graph representations via low-dimensional embeddings that preserve relevant network properties is an important class of problems in machine learning.
Ranked #1 on Link Prediction on WordNet
3 code implementations • EMNLP 2017 • Octavian-Eugen Ganea, Thomas Hofmann
We propose a novel deep learning model for joint document-level entity disambiguation, which leverages learned neural representations.
Ranked #4 on Entity Disambiguation on WNED-CWEB
1 code implementation • 21 Feb 2017 • Till Haug, Octavian-Eugen Ganea, Paulina Grnarova
Second, paraphrases of logical forms and questions are embedded in a jointly learned vector space using word and character convolutional neural networks.
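To make that step concrete, here is a toy, untrained sketch of a character-level convolutional encoder mapping questions and logical-form paraphrases into a shared vector space and ranking candidates by cosine similarity; all weights are random and every name is hypothetical, so this is only an assumption-level illustration, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
CHAR_DIM, NUM_FILTERS, WIDTH = 16, 32, 3
char_emb = rng.normal(size=(256, CHAR_DIM))           # one vector per byte value
filters = rng.normal(size=(NUM_FILTERS, WIDTH * CHAR_DIM))

def encode(text):
    """Character CNN: embed chars, convolve, ReLU, max-over-time pool."""
    x = char_emb[[min(ord(c), 255) for c in text]]     # (len, CHAR_DIM)
    windows = [x[i:i + WIDTH].ravel() for i in range(len(x) - WIDTH + 1)]
    feats = np.maximum(np.array(windows) @ filters.T, 0.0)  # ReLU activations
    return feats.max(axis=0)                            # (NUM_FILTERS,)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

question = "which river flows through paris"
paraphrases = ["the river that flows through paris", "capital of france"]
q = encode(question)
scores = [cosine(q, encode(p)) for p in paraphrases]
print(scores)  # with trained weights, the correct paraphrase should score highest
```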
1 code implementation • 8 Sep 2015 • Octavian-Eugen Ganea, Marina Ganea, Aurelien Lucchi, Carsten Eickhoff, Thomas Hofmann
We demonstrate the accuracy of our approach on a wide range of benchmark datasets, showing that it matches, and in many cases outperforms, existing state-of-the-art methods.