1 code implementation • 6 Jan 2025 • Jack Boylan, Chris Hokamp, Demian Gholipour Ghalandari
We introduce GLiREL (Generalist Lightweight model for zero-shot Relation Extraction), an efficient architecture and training paradigm for zero-shot relation classification.
Ranked #1 on Zero-shot Relation Classification on FewRel (Macro F1 metric)
1 code implementation • 10 Jul 2024 • Aaron Zolnai-Lucas, Jack Boylan, Chris Hokamp, Parsa Ghaffari
We present Simplified Text-Attributed Graph Embeddings (STAGE), a straightforward yet effective method for enhancing node features in Graph Neural Network (GNN) models that encode Text-Attributed Graphs (TAGs).
no code implementations • 24 Apr 2024 • Jack Boylan, Shashank Mangla, Dominic Thorn, Demian Gholipour Ghalandari, Parsa Ghaffari, Chris Hokamp
This study explores the use of Large Language Models (LLMs) for automatic evaluation of knowledge graph (KG) completion models.
no code implementations • 18 Dec 2023 • Chris Hokamp, Demian Gholipour Ghalandari, Parsa Ghaffari
We present an open-source Python library for building and using datasets where inputs are clusters of textual data, and outputs are sequences of real values representing one or more time series signals.
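The abstract describes datasets whose inputs are clusters of texts and whose outputs are real-valued time series. As a rough illustration only (the field names below are hypothetical, not the library's actual schema), one example in such a dataset might look like:

```python
# Hypothetical shape of a single example: a cluster of documents
# paired with a sequence of real values (one or more time series signals).
# Field names ("inputs", "targets") are illustrative assumptions,
# not the library's actual API.
example = {
    "inputs": [
        "Company X announces record quarterly earnings",
        "Analysts react to Company X earnings report",
    ],
    "targets": [0.12, 0.35, 0.28],  # real-valued signal over time steps
}

def validate_example(ex):
    """Check the assumed structure: a list of strings in, a list of floats out."""
    assert isinstance(ex["inputs"], list) and all(isinstance(t, str) for t in ex["inputs"])
    assert isinstance(ex["targets"], list) and all(isinstance(v, float) for v in ex["targets"])
    return True
```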
1 code implementation • ACL 2022 • Demian Gholipour Ghalandari, Chris Hokamp, Georgiana Ifrim
Sentence compression reduces the length of text by removing non-essential content while preserving important facts and grammaticality.
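Sentence compression is often framed as token deletion: decide, for each token, whether it is essential. A minimal sketch of that framing (this is an illustrative toy, not the paper's model, which learns the deletion decisions):

```python
def compress(tokens, keep_mask):
    """Deletion-based sentence compression: keep only tokens flagged
    as essential by a boolean mask. In the paper's setting the mask
    would be predicted by a learned model; here it is given directly."""
    return [tok for tok, keep in zip(tokens, keep_mask) if keep]

sentence = ["The", "report", ",", "released", "yesterday", ",", "shows", "growth"]
mask     = [True,  True,     False, False,     False,       False, True,   True]
```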
1 code implementation • 15 Jun 2020 • Chris Hokamp, Demian Gholipour Ghalandari, Nghia The Pham, John Glover
Sequence-to-sequence (s2s) models are the basis for extensive work in natural language processing.
1 code implementation • ACL 2020 • Demian Gholipour Ghalandari, Chris Hokamp, Nghia The Pham, John Glover, Georgiana Ifrim
Multi-document summarization (MDS) aims to compress the content in large document collections into short summaries and has important applications in story clustering for newsfeeds, presentation of search results, and timeline generation.
no code implementations • 14 Jul 2019 • John Glover, Chris Hokamp
One of the questions that arises when designing models that learn to solve multiple tasks simultaneously is how much of the available training budget should be devoted to each individual task.
no code implementations • WS 2019 • Chris Hokamp, John Glover, Demian Gholipour
To our knowledge, this is the largest evaluation of multi-lingual translation yet conducted in terms of the total size of the training data we use, and in terms of the diversity of zero-shot translation pairs we evaluate.
no code implementations • 6 Nov 2018 • Chris Hokamp, Sebastian Ruder, John Glover
We frame unsupervised machine translation (MT) in the context of multi-task learning (MTL), combining insights from both directions.
no code implementations • WS 2018 • Henry Elder, Chris Hokamp
This work presents a new state of the art in reconstruction of surface realizations from obfuscated text.
1 code implementation • WS 2017 • Chris Hokamp
This work presents a novel approach to Automatic Post-Editing (APE) and Word-Level Quality Estimation (QE) using ensembles of specialized Neural Machine Translation (NMT) systems.
1 code implementation • ACL 2017 • Chris Hokamp, Qun Liu
Lexical constraints take the form of phrases or words that must be present in the output sequence.
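The constraint condition itself is simple to state: every constraint phrase must appear in the decoded output. A naive check of that condition might look like the sketch below (the paper's contribution is enforcing constraints efficiently *during* beam search, which this post-hoc substring check does not capture):

```python
def satisfies_constraints(output_tokens, constraints):
    """Return True if every lexical constraint (a word or phrase)
    occurs in the output sequence. Naive substring matching on the
    joined token string; a real decoder would track constraint
    coverage incrementally during search."""
    text = " ".join(output_tokens)
    return all(phrase in text for phrase in constraints)
```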
no code implementations • TACL 2017 • André F. T. Martins, Marcin Junczys-Dowmunt, Fabio N. Kepler, Ramón Astudillo, Chris Hokamp, Roman Grundkiewicz
Translation quality estimation is a task of growing importance in NLP, due to its potential to reduce post-editing human effort in disruptive ways.
1 code implementation • LREC 2016 • Varvara Logacheva, Chris Hokamp, Lucia Specia
The tool has a set of state-of-the-art features for QE, and new features can easily be added.
no code implementations • WS 2015 • Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, Marco Turchi
no code implementations • LREC 2014 • Chris Hokamp, Rada Mihalcea, Peter Schuelke
We describe the results of several experiments with interactive interfaces for native and L2 English students, designed to collect implicit feedback from students as they complete a reading activity.