Search Results for author: Michael Boratko

Found 14 papers, 8 papers with code

Box-To-Box Transformations for Modeling Joint Hierarchies

no code implementations ACL (RepL4NLP) 2021 Shib Sankar Dasgupta, Xiang Lorraine Li, Michael Boratko, Dongxu Zhang, Andrew McCallum

In Patel et al. (2020), the authors demonstrate that only the transitive reduction is required and further extend box embeddings to capture joint hierarchies by augmenting the graph with new nodes.
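
For context on what a box-to-box transformation might look like, here is a minimal sketch in which a box is a pair of min/max corners and the transformation is a per-dimension scale and shift of the box; this parameterization is an illustrative assumption, not necessarily the paper's exact formulation.

```python
# Minimal sketch: boxes as (min, max) corner arrays, plus an illustrative
# "box-to-box" map. The per-dimension scale/shift parameterization is an
# assumption for illustration, not the paper's exact transformation.
import numpy as np

class Box:
    def __init__(self, minimum, maximum):
        self.min = np.asarray(minimum, dtype=float)
        self.max = np.asarray(maximum, dtype=float)
        assert np.all(self.max >= self.min), "box must have non-negative side lengths"

def transform_box(box, scale, shift):
    """Map a box from one hierarchy's space into another by scaling its
    side lengths and shifting its position, dimension-wise."""
    center = (box.min + box.max) / 2.0
    half = (box.max - box.min) / 2.0
    new_center = center + shift
    new_half = half * scale          # scale assumed positive
    return Box(new_center - new_half, new_center + new_half)

# Toy usage: move a concept's box into the space of a second hierarchy.
concept = Box([0.1, 0.2], [0.4, 0.6])
mapped = transform_box(concept, scale=np.array([1.5, 0.5]), shift=np.array([0.2, -0.1]))
print(mapped.min, mapped.max)
```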

Capacity and Bias of Learned Geometric Embeddings for Directed Graphs

1 code implementation NeurIPS 2021 Michael Boratko, Dongxu Zhang, Nicholas Monath, Luke Vilnis, Kenneth Clarkson, Andrew McCallum

While vectors in Euclidean space can theoretically represent any graph, much recent work shows that alternatives such as complex, hyperbolic, order, or box embeddings have geometric properties better suited to modeling real-world graphs.

Knowledge Base Completion · Multi-Label Classification

Probabilistic Box Embeddings for Uncertain Knowledge Graph Reasoning

1 code implementation NAACL 2021 Xuelu Chen, Michael Boratko, Muhao Chen, Shib Sankar Dasgupta, Xiang Lorraine Li, Andrew McCallum

Knowledge bases often consist of facts which are harvested from a variety of sources, many of which are noisy and some of which conflict, resulting in a level of uncertainty for each triple.

Knowledge Graph Embedding

Modeling Fine-Grained Entity Types with Box Embeddings

1 code implementation ACL 2021 Yasumasa Onoe, Michael Boratko, Andrew McCallum, Greg Durrett

Neural entity typing models typically represent fine-grained entity types as vectors in a high-dimensional space, but such spaces are not well-suited to modeling these types' complex interdependencies.

Entity Typing

Box-To-Box Transformation for Modeling Joint Hierarchies

no code implementations 1 Jan 2021 Shib Sankar Dasgupta, Xiang Li, Michael Boratko, Dongxu Zhang, Andrew McCallum

In Patel et al. (2020), the authors demonstrate that only the transitive reduction is required, and further extend box embeddings to capture joint hierarchies by augmenting the graph with new nodes.

Knowledge Graphs

Improving Local Identifiability in Probabilistic Box Embeddings

1 code implementation NeurIPS 2020 Shib Sankar Dasgupta, Michael Boratko, Dongxu Zhang, Luke Vilnis, Xiang Lorraine Li, Andrew McCallum

Geometric embeddings have recently received attention for their natural ability to represent transitive asymmetric relations via containment.
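
As a refresher on the containment mechanism mentioned here, below is a minimal sketch of hard box volumes and the asymmetric conditional score P(A | B) = Vol(A ∩ B) / Vol(B); the toy concepts and dimensionality are assumptions for illustration.

```python
# Minimal sketch of hard box volumes and the asymmetric conditional score
# P(A | B) = Vol(A ∩ B) / Vol(B); containment of B inside A gives a score of 1.
import numpy as np

def volume(box_min, box_max):
    """Volume of an axis-aligned box; zero if any side length is negative."""
    sides = np.clip(box_max - box_min, 0.0, None)
    return float(np.prod(sides))

def conditional_prob(a_min, a_max, b_min, b_max):
    """P(A | B): how much of box B lies inside box A."""
    inter_min = np.maximum(a_min, b_min)
    inter_max = np.minimum(a_max, b_max)
    vol_b = volume(b_min, b_max)
    return volume(inter_min, inter_max) / vol_b if vol_b > 0 else 0.0

# Toy example: a "dog" box nested inside an "animal" box models dog is-a animal.
animal_min, animal_max = np.array([0.0, 0.0]), np.array([1.0, 1.0])
dog_min, dog_max = np.array([0.2, 0.2]), np.array([0.5, 0.5])
print(conditional_prob(animal_min, animal_max, dog_min, dog_max))  # 1.0
print(conditional_prob(dog_min, dog_max, animal_min, animal_max))  # 0.09, asymmetric
```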

Representing Joint Hierarchies with Box Embeddings

1 code implementation AKBC 2020 Dhruvesh Patel, Shib Sankar Dasgupta, Michael Boratko, Xiang Li, Luke Vilnis, Andrew McCallum

Box Embeddings [Vilnis et al., 2018, Li et al., 2019] represent concepts with hyperrectangles in $n$-dimensional space and have been shown to model tree-like structures efficiently when trained on a large subset of the transitive closure of the WordNet hypernym graph.
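
To make the training setup concrete, here is a hedged sketch of deriving positive (descendant, ancestor) pairs from the transitive closure of a tiny toy hierarchy; the actual WordNet-scale pipeline used in the paper is, of course, much larger.

```python
# Minimal sketch: derive positive (descendant, ancestor) training pairs from
# the transitive closure of a tiny toy hierarchy (a stand-in for WordNet).
from collections import defaultdict

edges = [("dog", "canine"), ("canine", "mammal"), ("mammal", "animal"),
         ("cat", "feline"), ("feline", "mammal")]

children_to_parents = defaultdict(set)
for child, parent in edges:
    children_to_parents[child].add(parent)

def ancestors(node):
    """All nodes reachable by repeatedly following is-a edges."""
    seen, stack = set(), list(children_to_parents[node])
    while stack:
        parent = stack.pop()
        if parent not in seen:
            seen.add(parent)
            stack.extend(children_to_parents[parent])
    return seen

# Transitive closure: every (node, ancestor) pair becomes a positive example.
positive_pairs = [(n, a) for n in list(children_to_parents) for a in ancestors(n)]
print(sorted(positive_pairs))
```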

Smoothing the Geometry of Probabilistic Box Embeddings

no code implementations ICLR 2019 Xiang Li, Luke Vilnis, Dongxu Zhang, Michael Boratko, Andrew McCallum

However, the hard edges of the boxes present difficulties for standard gradient-based optimization; that work employed a special surrogate function for the disjoint case, but we find this method to be fragile.

Inductive Bias
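
As a rough illustration of the smoothing idea in the entry above, the sketch below replaces the hard max(0, side length) in the box volume with a softplus, so disjoint boxes still yield nonzero volume and usable gradients; the temperature and exact form are illustrative assumptions rather than the paper's precise formulation.

```python
# Minimal sketch: softplus-smoothed box volume. Replacing the hard
# max(0, side_length) with a softplus keeps volumes (and gradients) nonzero
# even when boxes are disjoint. The temperature beta is an illustrative choice.
import numpy as np

def softplus(x, beta=10.0):
    return np.log1p(np.exp(beta * x)) / beta

def hard_volume(a_min, a_max, b_min, b_max):
    sides = np.minimum(a_max, b_max) - np.maximum(a_min, b_min)
    return float(np.prod(np.clip(sides, 0.0, None)))

def smoothed_volume(a_min, a_max, b_min, b_max, beta=10.0):
    sides = np.minimum(a_max, b_max) - np.maximum(a_min, b_min)
    return float(np.prod(softplus(sides, beta)))

# Two disjoint 1-D boxes: the hard intersection volume is exactly zero,
# while the smoothed volume stays small but positive, so training signal survives.
a_min, a_max = np.array([0.0]), np.array([0.3])
b_min, b_max = np.array([0.5]), np.array([0.9])
print(hard_volume(a_min, a_max, b_min, b_max))      # 0.0
print(smoothed_volume(a_min, a_max, b_min, b_max))  # small positive value
```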
