We introduce a method for improving the structural understanding abilities of language models.
We present SelfKG, along with efficient strategies to optimize this objective, for aligning entities without label supervision.
We cast a suite of information extraction tasks into a text-to-triple translation framework.
Ranked #1 on Open Information Extraction on OIE2016 (using extra training data)
We present SelfKG, which leverages this discovery to design a contrastive learning strategy across two KGs.