1 code implementation • ACL 2022 • EunJeong Hwang, Jay-Yoon Lee, Tianyi Yang, Dhruvesh Patel, Dongxu Zhang, Andrew McCallum
To understand a story with multiple events, it is important to capture the proper relations across these events.
no code implementations • ACL (RepL4NLP) 2021 • Shib Sankar Dasgupta, Xiang Lorraine Li, Michael Boratko, Dongxu Zhang, Andrew McCallum
In Patel et al. (2020), the authors demonstrate that only the transitive reduction is required and further extend box embeddings to capture joint hierarchies by augmenting the graph with new nodes.
1 code implementation • LREC 2022 • Jui Shah, Dongxu Zhang, Sam Brody, Andrew McCallum
In this work, we introduce a method for enhancing distant supervision with state-change information for relation extraction.
2 code implementations • LREC 2022 • Dongxu Zhang, Sunil Mohan, Michaela Torkar, Andrew McCallum
We introduce ChemDisGene, a new dataset for training and evaluating multi-class multi-label document-level biomedical relation extraction models.
1 code implementation • NeurIPS 2021 • Michael Boratko, Dongxu Zhang, Nicholas Monath, Luke Vilnis, Kenneth Clarkson, Andrew McCallum
While vectors in Euclidean space can theoretically represent any graph, much recent work shows that alternatives such as complex, hyperbolic, order, or box embeddings have geometric properties better suited to modeling real-world graphs.
1 code implementation • NeurIPS 2020 • Shib Sankar Dasgupta, Michael Boratko, Dongxu Zhang, Luke Vilnis, Xiang Lorraine Li, Andrew McCallum
Geometric embeddings have recently received attention for their natural ability to represent transitive asymmetric relations via containment.
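To make the containment idea concrete, here is a minimal sketch (not the paper's implementation, which uses probabilistic Gumbel boxes): an axis-aligned box is a pair of min/max corners, and an asymmetric containment-style score can be read off as the fraction of the child box's volume that falls inside the parent box.

```python
import numpy as np

def volume(lo, hi):
    # Hard box volume: product of side lengths, zero if any side is empty.
    return float(np.prod(np.clip(hi - lo, 0.0, None)))

def containment_score(child, parent):
    # child, parent: (lo, hi) corner pairs.
    # Score = Vol(child ∩ parent) / Vol(child), an asymmetric relation score.
    c_lo, c_hi = child
    p_lo, p_hi = parent
    i_lo = np.maximum(c_lo, p_lo)
    i_hi = np.minimum(c_hi, p_hi)
    return volume(i_lo, i_hi) / volume(c_lo, c_hi)

# A "dog" box fully inside an "animal" box scores 1.0 (hypothetical boxes).
animal = (np.array([0.0, 0.0]), np.array([4.0, 4.0]))
dog = (np.array([1.0, 1.0]), np.array([2.0, 2.0]))
print(containment_score(dog, animal))  # 1.0
```

Because intersection volume is normalized by the child's volume only, the score is asymmetric, which is what lets containment model transitive asymmetric relations such as hierarchies.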
no code implementations • ICLR 2019 • Xiang Li, Luke Vilnis, Dongxu Zhang, Michael Boratko, Andrew McCallum
However, the hard edges of the boxes present difficulties for standard gradient-based optimization; that work employed a special surrogate function for the disjoint case, but we find this method to be fragile.
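The optimization difficulty can be illustrated with a small sketch: the hard side length max(hi − lo, 0) has zero gradient whenever two boxes are disjoint. One softening (illustrative only; the paper smooths box edges, with softplus appearing as an approximation) replaces the hard side length with a softplus, which stays positive and differentiable even for disjoint boxes.

```python
import math

def softplus(x, beta=1.0):
    # Smooth approximation of max(x, 0); larger beta -> closer to the hard max.
    return math.log1p(math.exp(beta * x)) / beta

def soft_volume(lo, hi, beta=10.0):
    # Replace the hard side length max(h - l, 0) with softplus(h - l):
    # disjoint boxes now yield a small positive, differentiable volume
    # instead of a flat zero with no gradient signal.
    vol = 1.0
    for l, h in zip(lo, hi):
        vol *= softplus(h - l, beta)
    return vol

# A "disjoint" 1-D box (lo > hi) still has nonzero soft volume.
print(soft_volume([2.0], [1.0]))  # small positive value instead of 0
```

The nonzero gradient through disjoint configurations is what removes the need for a separate surrogate in the disjoint case.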
1 code implementation • NAACL 2019 • Dongxu Zhang, Subhabrata Mukherjee, Colin Lockard, Xin Luna Dong, Andrew McCallum
In this paper, we consider advancing web-scale knowledge extraction and alignment by integrating OpenIE extractions in the form of (subject, predicate, object) triples with Knowledge Bases (KB).
no code implementations • 22 Dec 2018 • Amirmohammad Rooshenas, Dongxu Zhang, Gopal Sharma, Andrew McCallum
In this paper, we instead use efficient truncated randomized search in this reward function to train structured prediction energy networks (SPENs). SPENs provide efficient test-time inference using gradient-based search on a smooth, learned representation of the score landscape, and have previously yielded state-of-the-art results in structured prediction.
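The test-time inference idea can be sketched as follows (a toy hand-written quadratic energy stands in for the learned network; this is not the paper's model): relax the discrete output to continuous values in [0, 1] and minimize a smooth energy by projected gradient descent.

```python
def energy(y, target):
    # Hypothetical smooth energy with its minimum at `target`.
    return sum((yi - ti) ** 2 for yi, ti in zip(y, target))

def grad(y, target):
    return [2.0 * (yi - ti) for yi, ti in zip(y, target)]

def infer(target, steps=100, lr=0.1):
    # SPEN-style inference: gradient descent on relaxed labels,
    # projected back into [0, 1] after each step.
    y = [0.5] * len(target)
    for _ in range(steps):
        g = grad(y, target)
        y = [min(1.0, max(0.0, yi - lr * gi)) for yi, gi in zip(y, g)]
    return y

y = infer([1.0, 0.0, 1.0])
# y converges toward the energy minimum [1.0, 0.0, 1.0]
```

In a real SPEN the energy is a neural network over (input, relaxed output) pairs, but the inference loop has this same shape: follow the energy's gradient over a continuous relaxation of the structured output.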
1 code implementation • 22 Apr 2018 • Dongxu Zhang, Zhichao Yang
In this technical report, we aim to mitigate the overfitting problem in natural language processing models by applying data augmentation methods.
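One common augmentation of this kind, shown here as an illustrative sketch (not necessarily the report's exact setup), perturbs word embeddings with small Gaussian noise so the model sees slightly different inputs each epoch.

```python
import random

def perturb_embedding(vec, sigma=0.01, rng=None):
    # Add zero-mean Gaussian noise to each embedding dimension.
    # Fixed seed here only to keep the sketch reproducible.
    rng = rng or random.Random(0)
    return [v + rng.gauss(0.0, sigma) for v in vec]

emb = [0.2, -0.5, 0.7]           # hypothetical word vector
aug = perturb_embedding(emb, sigma=0.05)
print(aug)                       # a slightly noised copy of emb
```

Because the noise scale sigma is small relative to the embedding norm, the perturbed vector keeps its meaning while acting as a regularizer against overfitting.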
no code implementations • COLING 2016 • Dongxu Zhang, Boliang Zhang, Xiaoman Pan, Xiaocheng Feng, Heng Ji, Weiran Xu
Instead of relying directly on word-alignment results, this framework combines the advantages of rule-based and deep-learning methods in two steps: first, it generates a high-confidence entity annotation set on the IL side using strict search methods; second, it uses this high-confidence set to weakly supervise model training.
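The two-step idea can be sketched with a toy example (the gazetteer, tag set, and matching rule below are all hypothetical; the paper's strict search is more involved): step one mines only unambiguous, high-confidence entity annotations, and step two pairs sentences with these silver labels as weak supervision for training.

```python
# Hypothetical gazetteer of known entities on the incident-language side.
GAZETTEER = {"london": "LOC", "obama": "PER"}

def strict_annotate(tokens):
    # Step 1: annotate only exact gazetteer hits; everything else stays "O".
    return [GAZETTEER.get(t.lower(), "O") for t in tokens]

def weak_training_set(corpus):
    # Step 2: pair each sentence with its silver labels for model training.
    return [(sent, strict_annotate(sent)) for sent in corpus]

corpus = [["Obama", "visited", "London"]]
data = weak_training_set(corpus)
print(data[0][1])  # ['PER', 'O', 'LOC']
```

The point of the strict first step is precision: the silver labels may be sparse, but they are reliable enough to supervise a model that then generalizes beyond the gazetteer.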
1 code implementation • 5 Aug 2015 • Dongxu Zhang, Dong Wang
Deep learning has gained much success in sentence-level relation classification.
no code implementations • 5 Aug 2015 • Dongxu Zhang, Tianyi Luo, Dong Wang, Rong Liu
Latent Dirichlet Allocation (LDA) is a three-level hierarchical Bayesian model for topic inference.
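The three levels — corpus-level topic-word distributions, document-level topic mixtures, and per-token topic assignments — show up directly in a minimal collapsed Gibbs sampler for standard LDA (a sketch of the base model only, not the paper's extension):

```python
import random

def lda_gibbs(docs, K, iters=200, alpha=0.1, beta=0.01, seed=0):
    # Collapsed Gibbs sampling for LDA: resample each token's topic from
    # (doc-topic count + alpha) * (topic-word count + beta) / (topic count + V*beta).
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    widx = {w: i for i, w in enumerate(vocab)}
    z = [[rng.randrange(K) for _ in d] for d in docs]   # per-token topics
    ndk = [[0] * K for _ in docs]                       # doc-topic counts
    nkw = [[0] * V for _ in range(K)]                   # topic-word counts
    nk = [0] * K                                        # topic totals
    for d, doc in enumerate(docs):
        for n, w in enumerate(doc):
            k = z[d][n]
            ndk[d][k] += 1; nkw[k][widx[w]] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for n, w in enumerate(doc):
                k = z[d][n]; wi = widx[w]
                # Remove the token, resample its topic, then add it back.
                ndk[d][k] -= 1; nkw[k][wi] -= 1; nk[k] -= 1
                weights = [(ndk[d][t] + alpha) * (nkw[t][wi] + beta)
                           / (nk[t] + V * beta) for t in range(K)]
                r = rng.uniform(0.0, sum(weights))
                acc = 0.0
                for t, wt in enumerate(weights):
                    acc += wt
                    if r <= acc:
                        k = t
                        break
                z[d][n] = k
                ndk[d][k] += 1; nkw[k][wi] += 1; nk[k] += 1
    return ndk, nkw, vocab

# Tiny hypothetical corpus with two intuitive topics (fruit vs. hardware).
docs = [["apple", "banana", "apple"], ["cpu", "gpu", "cpu"],
        ["banana", "apple"], ["gpu", "cpu"]]
ndk, nkw, vocab = lda_gibbs(docs, K=2)
```

After sampling, `ndk` gives each document's topic mixture and `nkw` each topic's word counts, i.e. the two learned levels of the hierarchy above the per-token assignments.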