1 code implementation • EACL 2021 • Lisheng Fu, Ralph Grishman
We propose to use prototypical examples to represent each relation type and use these examples to augment related types from a different dataset.
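A minimal sketch of the idea as described in the abstract (not the authors' implementation): each relation type is summarized by a prototype vector, the mean embedding of a few prototypical examples, and types from another dataset whose prototypes lie close to a target type can be used to augment its training data. The encoder output, similarity threshold, and type names below are illustrative assumptions.

```python
# Sketch: prototype per relation type + cross-dataset type matching.
import numpy as np

def prototype(example_embeddings):
    """Mean of the encoded prototypical examples for one relation type."""
    return np.mean(example_embeddings, axis=0)

def related_types(target_proto, source_protos, threshold=0.8):
    """Source-dataset types whose prototypes are similar to the target type."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return [name for name, p in source_protos.items()
            if cos(target_proto, p) >= threshold]

# Toy usage with random stand-in encodings (hypothetical type names).
rng = np.random.default_rng(1)
target = prototype(rng.normal(size=(5, 64)))
source = {"PER-SOC": rng.normal(size=64), "ORG-AFF": rng.normal(size=64)}
print(related_types(target, source, threshold=0.0))
```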
no code implementations • WS 2018 • Lisheng Fu, Bonan Min, Thien Huu Nguyen, Ralph Grishman
Typical relation extraction models are trained on a single corpus annotated with a pre-defined relation schema.
no code implementations • IJCNLP 2017 • Lisheng Fu, Thien Huu Nguyen, Bonan Min, Ralph Grishman
Our method is a joint model consisting of a CNN-based relation classifier and a domain-adversarial classifier.
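Since the abstract names the two components, here is a minimal PyTorch sketch of that kind of joint model, assuming a standard gradient-reversal setup for the domain-adversarial part; layer sizes, kernel widths, and names are illustrative assumptions, not the authors' configuration.

```python
# Sketch: CNN sentence encoder shared by a relation classifier and a
# domain classifier trained through a gradient-reversal layer.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) gradients flowing back into the encoder.
        return -ctx.lam * grad_output, None


class AdversarialRelationModel(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, n_filters=150,
                 n_relations=19, n_domains=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.relation_head = nn.Linear(n_filters, n_relations)
        self.domain_head = nn.Linear(n_filters, n_domains)

    def forward(self, tokens, lam=1.0):
        x = self.embed(tokens).transpose(1, 2)          # (B, emb, T)
        h = torch.relu(self.conv(x)).max(dim=2).values  # max-pool over time
        rel_logits = self.relation_head(h)
        dom_logits = self.domain_head(GradReverse.apply(h, lam))
        return rel_logits, dom_logits
```

The gradient-reversal trick makes the shared encoder learn features that are useful for relation classification but uninformative about which corpus a sentence came from.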
1 code implementation • LREC 2016 • Maria Pershina, Yifan He, Ralph Grishman
The task of Named Entity Linking is to link entity mentions in a document to their correct entries in a knowledge base and to cluster mentions that have no corresponding entry (NIL mentions).

no code implementations • 18 Nov 2015 • Thien Huu Nguyen, Ralph Grishman
The last decade has witnessed the success of traditional feature-based methods, which exploit discrete structures such as words and lexical patterns to extract relations from text.
Ranked #1 on Relation Extraction on ACE 2005 (Cross Sentence metric)
no code implementations • WS 2015 • Thien Huu Nguyen, Ralph Grishman
Ranked #1 on Relation Extraction on ACE 2005 (Cross Sentence metric)
no code implementations • 10 May 2015 • Miao Fan, Qiang Zhou, Andrew Abel, Thomas Fang Zheng, Ralph Grishman
This paper contributes a novel embedding model which measures the probability of each belief $\langle h, r, t, m\rangle$ in a large-scale knowledge repository via simultaneously learning distributed representations for entities ($h$ and $t$), relations ($r$), and the words in relation mentions ($m$).
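As a rough illustration only (the paper's exact probability model is not reproduced here), the sketch below scores a belief $\langle h, r, t, m\rangle$ by combining a TransE-style structure term over the entity and relation embeddings with a term matching the mention words to the relation; the vectors, vocabulary, and scoring choices are all assumptions.

```python
# Sketch: joint space for entities, relations, and mention words.
import numpy as np

rng = np.random.default_rng(0)
DIM = 50

entity_vecs = {e: rng.normal(size=DIM) for e in ["Obama", "USA"]}
relation_vecs = {r: rng.normal(size=DIM) for r in ["president_of"]}
word_vecs = {w: rng.normal(size=DIM) for w in ["president", "of", "the"]}


def belief_score(h, r, t, mention_words):
    """Higher = more plausible belief; both terms are illustrative choices."""
    # Structure term: TransE-style distance ||h + r - t||.
    structure = -np.linalg.norm(entity_vecs[h] + relation_vecs[r] - entity_vecs[t])
    # Mention term: how well the mention words align with the relation vector.
    mention = np.mean([word_vecs[w] @ relation_vecs[r] for w in mention_words])
    return structure + mention


print(belief_score("Obama", "president_of", "USA", ["president", "of", "the"]))
```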
no code implementations • RANLP 2015 • Miao Fan, Kai Cao, Yifan He, Ralph Grishman
This paper contributes a joint embedding model for predicting relations between a pair of entities in the scenario of relation inference.
no code implementations • 7 Apr 2015 • Miao Fan, Qiang Zhou, Thomas Fang Zheng, Ralph Grishman
The traditional way of storing facts as triplets ({\it head\_entity, relation, tail\_entity}), abbreviated as ({\it h, r, t}), makes the knowledge easy for humans to read and acquire, but hard for AI systems to compute with or reason over.
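For concreteness, a small illustration (with assumed example facts) of the symbolic triplet storage described above: each fact is a (head_entity, relation, tail_entity) tuple, readable by people but supporting only exact-match lookup, which is part of why such stores are hard for machines to reason over.

```python
# Purely symbolic triplet store: no generalization beyond stored facts.
facts = {
    ("Obama", "president_of", "USA"),
    ("Paris", "capital_of", "France"),
}

def holds(h, r, t):
    # Anything not stored verbatim is simply unknown to the system.
    return (h, r, t) in facts

print(holds("Paris", "capital_of", "France"))   # True
print(holds("Paris", "located_in", "France"))   # False -- cannot be inferred
```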
no code implementations • LREC 2012 • Bonan Min, Ralph Grishman
The Knowledge Base Population (KBP) evaluation track of the Text Analysis Conference (TAC) has been held for the past three years.