Search Results for author: Vid Kocijan

Found 13 papers, 8 papers with code

Systematic Comparison of Neural Architectures and Training Approaches for Open Information Extraction

no code implementations · EMNLP 2020 · Patrick Hohenecker, Frank Mtumbuka, Vid Kocijan, Thomas Lukasiewicz

The goal of open information extraction (OIE) is to extract facts from natural language text and to represent them as structured triples of the form <subject, predicate, object>.

Open Information Extraction · Sentence
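The triple format above can be illustrated with a deliberately naive sketch. This is not the paper's neural architecture (the paper compares neural approaches); the hypothetical `toy_extract` below is only a stand-in that shows what an OIE system's <subject, predicate, object> output looks like.

```python
# Illustrative sketch only: a toy pattern-based extractor producing
# OIE-style (subject, predicate, object) triples. The verb list and
# pattern are invented for this example.
import re

def toy_extract(sentence: str):
    """Split a simple 'X <verb> Y' sentence into an OIE-style triple."""
    m = re.match(
        r"^(\w+(?:\s\w+)?)\s(is|was|founded|wrote|acquired)\s(.+?)\.?$",
        sentence,
    )
    if not m:
        return None  # sentence does not fit the toy pattern
    subject, predicate, obj = m.groups()
    return (subject, predicate, obj)

print(toy_extract("Marie Curie wrote two papers."))
# → ('Marie Curie', 'wrote', 'two papers')
```

A real OIE system must of course handle open vocabularies, nested clauses, and implicit predicates, which is exactly why the paper evaluates learned neural extractors rather than patterns like this.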

Pre-training and Diagnosing Knowledge Base Completion Models

1 code implementation · 27 Jan 2024 · Vid Kocijan, Myeongjun Erik Jang, Thomas Lukasiewicz

The method works for both canonicalized and uncanonicalized (open) knowledge bases, i.e., knowledge bases in which more than one copy of a real-world entity or relation may exist.

General Knowledge · Knowledge Base Completion +3

Counter-GAP: Counterfactual Bias Evaluation through Gendered Ambiguous Pronouns

no code implementations · 11 Feb 2023 · Zhongbin Xie, Vid Kocijan, Thomas Lukasiewicz, Oana-Maria Camburu

Bias-measuring datasets play a critical role in detecting biased behavior of language models and in evaluating progress of bias mitigation methods.

Coreference Resolution · Counterfactual +1

The Defeat of the Winograd Schema Challenge

no code implementations · 7 Jan 2022 · Vid Kocijan, Ernest Davis, Thomas Lukasiewicz, Gary Marcus, Leora Morgenstern

The Winograd Schema Challenge - a set of twin sentences involving pronoun reference disambiguation that seem to require the use of commonsense knowledge - was proposed by Hector Levesque in 2011.

Knowledge Base Completion Meets Transfer Learning

1 code implementation · EMNLP 2021 · Vid Kocijan, Thomas Lukasiewicz

The aim of knowledge base completion is to predict unseen facts from existing facts in knowledge bases.

Knowledge Base Completion · Relation +1

The Gap on GAP: Tackling the Problem of Differing Data Distributions in Bias-Measuring Datasets

1 code implementation · 3 Nov 2020 · Vid Kocijan, Oana-Maria Camburu, Thomas Lukasiewicz

For example, if the feminine subset of a gender-bias-measuring coreference resolution dataset contains sentences with a longer average distance between the pronoun and the correct candidate, an RNN-based model may perform worse on this subset due to long-term dependencies.

Coreference Resolution

A Review of Winograd Schema Challenge Datasets and Approaches

no code implementations · 23 Apr 2020 · Vid Kocijan, Thomas Lukasiewicz, Ernest Davis, Gary Marcus, Leora Morgenstern

The Winograd Schema Challenge is both a commonsense reasoning and natural language understanding challenge, introduced as an alternative to the Turing test.

Natural Language Understanding

A Surprisingly Robust Trick for the Winograd Schema Challenge

no code implementations · ACL 2019 · Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, Thomas Lukasiewicz

The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart WNLI are popular benchmarks for natural language understanding and commonsense reasoning.

Language Modelling · Natural Language Understanding +1

A Surprisingly Robust Trick for Winograd Schema Challenge

2 code implementations · 15 May 2019 · Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, Thomas Lukasiewicz

The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart WNLI are popular benchmarks for natural language understanding and commonsense reasoning.

Common Sense Reasoning · Coreference Resolution +4
