Predicting how a drug-like molecule binds to a specific protein target is a core problem in drug discovery.
Ranked #5 on Blind Docking on PDBBind
Protein complex formation is a central problem in biology: it is involved in most cellular processes and is essential for applications such as drug design and protein engineering.
Generating the periodic structure of stable materials is a long-standing challenge for the materials design community.
Prediction of a molecule's 3D conformer ensemble from the molecular graph holds a key role in areas of cheminformatics and drug discovery.
Molecules with identical graph connectivity can exhibit different physical and biological properties if they differ in stereochemistry, a spatial structural characteristic.
Current graph neural network (GNN) architectures naively average or sum node embeddings into an aggregated graph representation, potentially losing structural or semantic information.
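To make the concern concrete, here is a minimal sketch of the naive "readout" step the snippet criticizes. The function name and the toy graphs are illustrative, not from any particular paper: two node sets that are permutations of each other collapse to the same graph vector under mean or sum pooling.

```python
import numpy as np

def readout(node_embeddings, mode="mean"):
    """Aggregate per-node embeddings (n_nodes, dim) into one graph vector.

    Both reductions are permutation-invariant but discard which node
    contributed what, so different graphs can collapse to the same
    representation.
    """
    H = np.asarray(node_embeddings, dtype=float)
    if mode == "mean":
        return H.mean(axis=0)
    if mode == "sum":
        return H.sum(axis=0)
    raise ValueError(f"unknown mode: {mode}")

# Two graphs whose node embeddings coincide up to permutation
# produce identical readouts: structural information is lost.
H1 = np.array([[1.0, 0.0], [0.0, 1.0]])
H2 = np.array([[0.0, 1.0], [1.0, 0.0]])  # same rows, permuted
```

More expressive pooling schemes (attention-based or set-encoder readouts) are designed precisely to avoid this collapse.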
Ranked #1 on Graph Regression on Lipophilicity (using extra training data)
Representing graphs as sets of node embeddings in certain curved Riemannian manifolds has recently gained momentum in machine learning due to these manifolds' desirable geometric inductive biases; e.g., hierarchical structures benefit from hyperbolic geometry.
Interest has been rising lately in methods that represent data in non-Euclidean spaces, e.g., hyperbolic or spherical, which provide specific inductive biases useful for certain real-world data properties, e.g., scale-free, hierarchical, or cyclical structure.
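A minimal sketch of why hyperbolic space suits hierarchical data, using the standard Poincaré-ball distance (the function name and the two sample points are illustrative): the same Euclidean gap costs far more geodesic distance near the boundary than near the origin, giving the exponentially growing "room" that trees need.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between points in the open unit (Poincare) ball.

    d(u, v) = arccosh(1 + 2 ||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2)))
    """
    u, v = np.asarray(u, float), np.asarray(v, float)
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq / max(denom, eps))

# Identical Euclidean gap (0.1), very different hyperbolic cost.
near_origin = poincare_distance([0.0, 0.0], [0.1, 0.0])
near_boundary = poincare_distance([0.8, 0.0], [0.9, 0.0])
```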
The Softmax function on top of a final linear layer is the de facto method to output probability distributions in neural networks.
Several first-order stochastic optimization methods commonly used in the Euclidean domain, such as stochastic gradient descent (SGD), accelerated gradient descent, or variance-reduced methods, have already been adapted to certain Riemannian settings.
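As one concrete instance of such an adaptation, here is a sketch of a Riemannian SGD step on the unit sphere, assuming the common retraction-based variant (the function name and the toy objective are illustrative): the Euclidean gradient is projected onto the tangent space, a step is taken, and the iterate is retracted back onto the manifold.

```python
import numpy as np

def rsgd_step_sphere(x, egrad, lr):
    """One Riemannian SGD step on the unit sphere S^{n-1}.

    1. Project the Euclidean gradient onto the tangent space at x.
    2. Take a gradient step, then retract (renormalize) back onto
       the sphere; renormalization here stands in for the exact
       exponential map.
    """
    rgrad = egrad - np.dot(egrad, x) * x          # tangent projection
    y = x - lr * rgrad
    return y / np.linalg.norm(y)                  # retraction

# Minimize f(x) = -<a, x> over the sphere; the optimum is a / ||a||.
a = np.array([3.0, 4.0])
x = np.array([1.0, 0.0])
for _ in range(200):
    x = rsgd_step_sphere(x, -a, lr=0.1)           # egrad of f is -a
```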
Previous research on word embeddings has shown that sparse representations, whether learned on top of existing dense embeddings or obtained through model constraints at training time, offer increased interpretability: to some degree, each dimension can be understood by a human and associated with a recognizable feature in the data.
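A minimal sketch of the first route, sparsifying existing dense vectors, using L1 soft-thresholding (the proximal operator at the heart of ISTA-style sparse coding; the example vector and threshold are illustrative): most coordinates are driven exactly to zero, leaving a few candidate interpretable dimensions.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the L1 norm: shrinks small coordinates to 0."""
    x = np.asarray(x, dtype=float)
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

dense = np.array([0.9, -0.05, 0.02, -0.7, 0.1])
sparse = soft_threshold(dense, lam=0.15)
# Only the large coordinates survive; the rest become exactly zero.
```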
Entity Linking (EL) is an essential task for semantic text understanding and information extraction.
Ranked #1 on Entity Linking on OKE-2015
However, the representational power of hyperbolic geometry is not yet on par with Euclidean geometry, mostly because of the absence of corresponding hyperbolic neural network layers.
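To illustrate what a "hyperbolic layer" can look like, here is a sketch built from two standard Poincaré-ball operations, Möbius addition and a matrix-vector product defined via log/exp maps at the origin. This is a simplified curvature minus-one version with illustrative helper names, not any specific paper's implementation.

```python
import numpy as np

def mobius_add(u, v):
    """Mobius addition on the Poincare ball (curvature -1)."""
    uv, u2, v2 = np.dot(u, v), np.dot(u, u), np.dot(v, v)
    num = (1 + 2 * uv + v2) * u + (1 - u2) * v
    return num / (1 + 2 * uv + u2 * v2)

def mobius_matvec(M, x, eps=1e-9):
    """Matrix-vector product via log/exp maps at the origin."""
    Mx = M @ x
    nx, nMx = np.linalg.norm(x), np.linalg.norm(Mx)
    if nMx < eps:
        return np.zeros_like(Mx)
    return np.tanh(nMx / nx * np.arctanh(nx)) * Mx / nMx

def hyperbolic_linear(M, b, x):
    """A 'linear layer' acting inside the ball: (M (x) x) (+) b."""
    return mobius_add(mobius_matvec(M, x), b)

M = np.array([[0.5, 0.0], [0.0, 0.5]])
b = np.array([0.1, 0.0])
y = hyperbolic_linear(M, b, np.array([0.3, 0.4]))  # stays inside the ball
```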
Learning graph representations via low-dimensional embeddings that preserve relevant network properties is an important class of problems in machine learning.
Ranked #1 on Link Prediction on WordNet
We propose a novel deep learning model for joint document-level entity disambiguation, which leverages learned neural representations.
Ranked #4 on Entity Disambiguation on WNED-CWEB
Second, paraphrases of logical forms and questions are embedded in a jointly learned vector space using word and character convolutional neural networks.
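The character-level half of such an encoder can be sketched as a 1-D convolution over character embeddings followed by max-over-time pooling; the function name, embedding table, and filter sizes below are illustrative placeholders, not the paper's architecture.

```python
import numpy as np

def char_cnn_embed(text, char_emb, filters, width=3):
    """Embed a string with a 1-D character CNN plus max-over-time pooling.

    char_emb: dict mapping character -> embedding vector of dim d
    filters:  array (n_filters, width * d); each row is one conv filter
    """
    d = len(next(iter(char_emb.values())))
    X = np.stack([char_emb.get(c, np.zeros(d)) for c in text])   # (T, d)
    T = X.shape[0]
    windows = np.stack([X[t:t + width].ravel()
                        for t in range(T - width + 1)])          # (T-w+1, w*d)
    return (windows @ filters.T).max(axis=0)  # max over time -> (n_filters,)

rng = np.random.default_rng(0)
char_emb = {c: rng.normal(size=4) for c in "abcdefghijklmnopqrstuvwxyz"}
filters = rng.normal(size=(8, 3 * 4))         # 8 filters of width 3
vec = char_cnn_embed("paris", char_emb, filters)
```

The max pooling makes the output length-independent, so questions and logical-form paraphrases of different lengths land in the same fixed-size space.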
We demonstrate the accuracy of our approach on a wide range of benchmark datasets, showing that it matches, and in many cases outperforms, existing state-of-the-art methods.