1 code implementation • 29 Oct 2024 • Mark Neumann, James Gin, Benjamin Rhodes, Steven Bennett, Zhiyi Li, Hitarth Choubisa, Arthur Hussey, Jonathan Godwin
We introduce Orb, a family of universal interatomic potentials for atomistic modelling of materials.
1 code implementation • ACL 2021 • Mark Neumann, Zejiang Shen, Sam Skjonsberg
Adobe's Portable Document Format (PDF) is a popular way of distributing view-only documents with a rich visual markup.
1 code implementation • EMNLP (NLPOSS) 2020 • Nipun Sadvilkar, Mark Neumann
In this paper, we present a rule-based sentence boundary disambiguation Python package that works out-of-the-box for 22 languages.
2 code implementations • ACL 2020 • Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kinney, Dan S. Weld
We introduce S2ORC, a large corpus of 81.1M English-language academic papers spanning many academic disciplines.
1 code implementation • IJCNLP 2019 • Matthew E. Peters, Mark Neumann, Robert L. Logan IV, Roy Schwartz, Vidur Joshi, Sameer Singh, Noah A. Smith
Contextual word representations, typically trained on unstructured, unlabeled text, do not contain any explicit grounding to real world entities and are often unable to remember facts about those entities.
no code implementations • 30 May 2019 • Kevin Lin, Ben Bogin, Mark Neumann, Jonathan Berant, Matt Gardner
The sequence-to-sequence paradigm employed by neural text-to-SQL models typically performs token-level decoding and does not consider generating SQL hierarchically from a grammar.
1 code implementation • WS 2019 • Mark Neumann, Daniel King, Iz Beltagy, Waleed Ammar
Despite recent advances in natural language processing, many statistical models for processing text perform extremely poorly under domain shift.
no code implementations • EMNLP 2018 • Matthew E. Peters, Mark Neumann, Luke Zettlemoyer, Wen-tau Yih
Contextual word representations derived from pre-trained bidirectional language models (biLMs) have recently been shown to provide significant improvements to the state of the art for a wide range of NLP tasks.
1 code implementation • WS 2018 • Lucy Lu Wang, Chandra Bhagavatula, Mark Neumann, Kyle Lo, Chris Wilhelm, Waleed Ammar
Ontology alignment is the task of identifying semantically equivalent entities from two given ontologies.
2 code implementations • WS 2018 • Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson Liu, Matthew Peters, Michael Schmitz, Luke Zettlemoyer
This paper describes AllenNLP, a platform for research on deep learning methods in natural language understanding.
46 code implementations • NAACL 2018 • Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer
We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy).
no code implementations • 24 Oct 2016 • Mark Neumann, Pontus Stenetorp, Sebastian Riedel
Multi-hop inference is necessary for machine learning systems to successfully solve tasks such as Recognising Textual Entailment and Machine Reading.