no code implementations • GWC 2016 • Roxane Segers, Egoitz Laparra, Marco Rospocher, Piek Vossen, German Rigau, Filip Ilievski
This paper presents the Event and Implied Situation Ontology (ESO), a resource which formalizes the pre- and post-situations of events and the roles of the entities affected by an event.
no code implementations • GWC 2018 • Piek Vossen, Filip Ilievski, Marten Postma
In this paper, we present ReferenceNet: a semantic-pragmatic network of reference relations between synsets.
no code implementations • EMNLP 2021 • Avijit Thawani, Jay Pujara, Filip Ilievski
This paper studies the effect of using six different number encoders on the task of masked word prediction (MWP), as a proxy for evaluating literacy.
no code implementations • 27 Jan 2023 • Zhivar Sourati, Filip Ilievski, Hông-Ân Sandlin, Alain Mermoud
The ease and the speed of spreading misinformation and propaganda on the Web motivate the need to develop trustworthy technology for detecting fallacies in natural language arguments.
1 code implementation • 12 Dec 2022 • Zhivar Sourati, Vishnu Priya Prasanna Venkatesh, Darshan Deshpande, Himanshu Rawlani, Filip Ilievski, Hông-Ân Sandlin, Alain Mermoud
Our three-stage framework natively consolidates prior datasets and methods from existing tasks, like propaganda detection, serving as an overarching evaluation testbed.
no code implementations • 11 Dec 2022 • Abhinav Kumar Thakur, Filip Ilievski, Hông-Ân Sandlin, Alain Mermoud, Zhivar Sourati, Luca Luceri, Riccardo Tommasini
In this paper, we pursue a modular and explainable architecture for Internet meme understanding.
1 code implementation • 11 Dec 2022 • Aravinda Kolla, Filip Ilievski, Hông-Ân Sandlin, Alain Mermoud
Given the large amount of content created online every minute, slang-aware automatic tools are critically needed to promote social good and to assist policymakers and moderators in restricting the spread of offensive language, abuse, and hate speech.
no code implementations • 4 Dec 2022 • Jiarui Zhang, Filip Ilievski, Aravinda Kollaa, Jonathan Francis, Kaixin Ma, Alessandro Oltramari
Understanding novel situations in the traffic domain requires an intricate combination of domain-specific and causal commonsense knowledge.
1 code implementation • 3 Nov 2022 • Peifeng Wang, Aaron Chan, Filip Ilievski, Muhao Chen, Xiang Ren
Neural language models (LMs) have achieved impressive results on various language-based reasoning tasks by utilizing latent knowledge encoded in their own pretrained parameters.
no code implementations • 2 Oct 2022 • Filip Ilievski, Jay Pujara, Kartik Shenoy
Analogical reasoning methods have been built over various resources, including commonsense knowledge bases, lexical resources, language models, and combinations thereof.
1 code implementation • COLING 2022 • Kaixin Ma, Filip Ilievski, Jonathan Francis, Eric Nyberg, Alessandro Oltramari
In this paper, we propose Coalescing Global and Local Information (CGLI), a new model that builds entity- and timestep-aware input representations (local input) considering the whole context (global input), and we jointly model the entity states with a structured prediction objective (global output).
no code implementations • 1 Jul 2022 • Bohui Zhang, Filip Ilievski, Pedro Szekely
We present a novel workflow that includes gap detection, source selection, schema alignment, and semantic validation.
1 code implementation • 14 Jun 2022 • Thiloshon Nagarajah, Filip Ilievski, Jay Pujara
Experiments with language models and neuro-symbolic AI reasoners on these tasks reveal that state-of-the-art methods can be applied to reason by analogy with only limited success, motivating the need for further research towards comprehensive and scalable analogical reasoning by AI.
no code implementations • 21 May 2022 • Jiarui Zhang, Filip Ilievski, Kaixin Ma, Jonathan Francis, Alessandro Oltramari
In this paper, we study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models.
1 code implementation • 26 Mar 2022 • Jiang Wang, Filip Ilievski, Pedro Szekely, Ke-Thia Yao
Experiments on legacy benchmarks and a new large benchmark, DWD, show that augmenting the knowledge graph with quantities and years is beneficial for predicting both entities and numbers, as KGA outperforms the vanilla models and other relevant baselines.
no code implementations • 17 Jan 2022 • Alessandro Oltramari, Jonathan Francis, Filip Ilievski, Kaixin Ma, Roshanak Mirzaee
This chapter illustrates how suitable neuro-symbolic models for language understanding can enable domain generalizability and robustness in downstream tasks.
1 code implementation • ICLR 2022 • Peifeng Wang, Jonathan Zamora, Junfeng Liu, Filip Ilievski, Muhao Chen, Xiang Ren
In this paper, we propose an Imagine-and-Verbalize (I&V) method, which learns to imagine a relational scene knowledge graph (SKG) with relations between the input concepts, and leverage the SKG as a constraint when generating a plausible scene description.
1 code implementation • EMNLP 2021 • Kaixin Ma, Filip Ilievski, Jonathan Francis, Satoru Ozaki, Eric Nyberg, Alessandro Oltramari
In this paper, we investigate what models learn from commonsense reasoning datasets.
no code implementations • AKBC Workshop CSKB 2021 • Filip Ilievski, Jay Pujara, Hanzhi Zhang
Our method aligns story types with commonsense axioms, and queries to a commonsense knowledge graph, enabling the generation of hundreds of thousands of stories.
no code implementations • 11 Aug 2021 • Zaina Shaik, Filip Ilievski, Fred Morstatter
Through this analysis, we discovered that white individuals and those with citizenship in Europe and North America are overrepresented, while the remaining groups are generally underrepresented.
1 code implementation • 11 Aug 2021 • Filip Ilievski, Pedro Szekely, Gleb Satyukov, Amandeep Singh
While the similarity between two concept words has been evaluated and studied for decades, much less attention has been devoted to algorithms that can compute the similarity of nodes in very large knowledge graphs, like Wikidata.
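Below is a minimal, self-contained illustration of the general idea of node similarity in a knowledge graph: scoring two nodes by the cosine similarity of pre-computed graph embeddings. This is a sketch for intuition only, not the algorithm from the paper; the embedding vectors are made-up placeholders.

```python
# Illustration only (not the paper's method): cosine similarity between
# hypothetical embedding vectors of two Wikidata nodes.
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Placeholder embeddings for Q146 (house cat) and Q144 (dog); in practice
# these would come from a graph embedding model such as TransE or ComplEx.
emb = {
    "Q146": np.array([0.12, -0.40, 0.88, 0.05]),
    "Q144": np.array([0.10, -0.35, 0.80, 0.10]),
}
print(cosine_similarity(emb["Q146"], emb["Q144"]))
```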
no code implementations • 6 Aug 2021 • Hans Chalupsky, Pedro Szekely, Filip Ilievski, Daniel Garijo, Kartik Shenoy
Application developers today have three choices for exploiting the knowledge present in Wikidata: they can download the Wikidata dumps in JSON or RDF format, they can use the Wikidata API to get data about individual entities, or they can use the Wikidata SPARQL endpoint.
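For concreteness, here is a short sketch of two of the three access routes mentioned above: fetching a single entity via the Wikidata API and running a small query against the public SPARQL endpoint. It assumes the `requests` package is available; the specific entity and query are arbitrary examples, not taken from the paper.

```python
import requests

# 1) Wikidata API: fetch the JSON record for one entity (Q42 = Douglas Adams).
api_resp = requests.get(
    "https://www.wikidata.org/w/api.php",
    params={"action": "wbgetentities", "ids": "Q42", "format": "json"},
)
entity = api_resp.json()["entities"]["Q42"]
print(entity["labels"]["en"]["value"])

# 2) SPARQL endpoint: list a few instances of "house cat" (Q146) with labels.
sparql = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q146 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
} LIMIT 5
"""
sparql_resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": sparql, "format": "json"},
)
for row in sparql_resp.json()["results"]["bindings"]:
    print(row["item"]["value"], row["itemLabel"]["value"])
```

The third route, downloading the full JSON or RDF dumps, trades freshness and convenience for completeness and offline processing.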
1 code implementation • 1 Jul 2021 • Kartik Shenoy, Filip Ilievski, Daniel Garijo, Daniel Schwabe, Pedro Szekely
Wikidata has been increasingly adopted by many communities for a wide variety of applications, which demand high-quality knowledge to deliver successful results.
1 code implementation • Findings (ACL) 2021 • Peifeng Wang, Filip Ilievski, Muhao Chen, Xiang Ren
Inspired by evidence that pretrained language models (LMs) encode commonsense knowledge, recent work has applied LMs to automatically populate commonsense knowledge graphs (CKGs).
no code implementations • 18 Apr 2021 • Ehsan Qasemi, Filip Ilievski, Muhao Chen, Pedro Szekely
To address this gap, we propose a novel challenge of reasoning with circumstantial preconditions.
no code implementations • NAACL 2021 • Avijit Thawani, Jay Pujara, Pedro A. Szekely, Filip Ilievski
NLP systems rarely give special consideration to numbers found in text.
no code implementations • 12 Jan 2021 • Filip Ilievski, Alessandro Oltramari, Kaixin Ma, Bin Zhang, Deborah L. McGuinness, Pedro Szekely
Recently, the focus has been on large text-based sources, which facilitate easier integration with neural (language) models and application to textual tasks, typically at the expense of the semantics of the sources and their harmonization.
1 code implementation • 21 Dec 2020 • Filip Ilievski, Pedro Szekely, Bin Zhang
Sources of commonsense knowledge support applications in natural language understanding, computer vision, and knowledge graphs.
1 code implementation • 7 Nov 2020 • Kaixin Ma, Filip Ilievski, Jonathan Francis, Yonatan Bisk, Eric Nyberg, Alessandro Oltramari
Guided by a set of hypotheses, the framework studies how to transform various pre-existing knowledge resources into a form that is most effective for pre-training models.
no code implementations • 18 Aug 2020 • Filip Ilievski, Pedro Szekely, Daniel Schwabe
Our experiments reveal that: 1) although Wikidata-CS represents a small portion of Wikidata, it is an indicator that Wikidata contains relevant commonsense knowledge, which can be mapped to 15 ConceptNet relations; 2) the overlap between Wikidata-CS and other commonsense sources is low, motivating the value of knowledge integration; 3) Wikidata-CS has been evolving over time at a slightly slower rate compared to the overall Wikidata, indicating a possible lack of focus on commonsense knowledge.
no code implementations • 10 Jun 2020 • Filip Ilievski, Pedro Szekely, Jingwei Cheng, Fu Zhang, Ehsan Qasemi
Commonsense reasoning is an important aspect of building robust AI systems and is receiving significant attention in the natural language understanding, computer vision, and knowledge graphs communities.
1 code implementation • 29 May 2020 • Filip Ilievski, Daniel Garijo, Hans Chalupsky, Naren Teja Divvala, Yixiang Yao, Craig Rogers, Rongpeng Li, Jun Liu, Amandeep Singh, Daniel Schwabe, Pedro Szekely
Knowledge graphs (KGs) have become the preferred technology for representing, sharing and adding knowledge to modern AI applications.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Peifeng Wang, Nanyun Peng, Filip Ilievski, Pedro Szekely, Xiang Ren
In this paper, we augment a general commonsense QA framework with a knowledgeable path generator.
no code implementations • LREC 2020 • Piek Vossen, Filip Ilievski, Marten Postma, Antske Fokkens, Gosse Minnema, Levi Remijnse
In this article, we lay out the basic ideas and principles of the project Framing Situations in the Dutch Language.
no code implementations • LREC 2020 • Marten Postma, Levi Remijnse, Filip Ilievski, Antske Fokkens, Sam Titarsolej, Piek Vossen
The user can apply two types of annotations: 1) mappings from expressions to frames and frame elements, 2) reference relations from mentions to events and participants of the structured data.
no code implementations • 1 Oct 2018 • Filip Ilievski, Eduard Hovy, Qizhe Xie, Piek Vossen
The human mind is a powerful multifunctional knowledge storage and management system that performs generalization, type inference, anomaly detection, stereotyping, and other tasks.
no code implementations • COLING 2018 • Filip Ilievski, Piek Vossen, Stefan Schlobach
In this paper we report on a series of hypotheses regarding the long tail phenomena in entity linking datasets, their interaction, and their impact on system performance.
no code implementations • SEMEVAL 2018 • Marten Postma, Filip Ilievski, Piek Vossen
This paper discusses SemEval-2018 Task 5: a referential quantification task of counting events and participants in local, long-tail news documents with high ambiguity.
no code implementations • COLING 2016 • Filip Ilievski, Marten Postma, Piek Vossen
Semantic text processing faces the challenge of defining the relation between lexical expressions and the world to which they make reference within a period of time.
no code implementations • LREC 2016 • Filip Ilievski, Giuseppe Rizzo, Marieke van Erp, Julien Plu, Raphaël Troncy
More and more knowledge bases are publicly available as linked data.
no code implementations • LREC 2016 • Marieke van Erp, Pablo Mendes, Heiko Paulheim, Filip Ilievski, Julien Plu, Giuseppe Rizzo, Joerg Waitelonis
Entity linking has become a popular task in both natural language processing and semantic web communities.