5 code implementations • 28 Sep 2019 • Benedek Rozemberczki, Carl Allen, Rik Sarkar
We present network embedding algorithms that capture information about a node from the local distribution over node attributes around it, as observed over random walks following an approach similar to Skip-gram.
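The attribute-distribution idea above builds on sampling random walks around each node. A minimal sketch of the walk-sampling step (toy graph and function names are illustrative, not the paper's implementation):

```python
import random

def random_walks(adj, start, walk_len, n_walks, seed=0):
    """Sample `n_walks` fixed-length random walks from `start` over
    an adjacency dict `adj` mapping node -> list of neighbours."""
    rng = random.Random(seed)
    walks = []
    for _ in range(n_walks):
        walk = [start]
        for _ in range(walk_len - 1):
            walk.append(rng.choice(adj[walk[-1]]))
        walks.append(walk)
    return walks

# Toy graph: node -> neighbours
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
walks = random_walks(adj, start=0, walk_len=5, n_walks=3)
```

In a Skip-gram-style setup, the node attributes observed along such walks would then form the "context" for the walk's source node.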
5 code implementations • IJCNLP 2019 • Ivana Balažević, Carl Allen, Timothy M. Hospedales
Knowledge graphs are structured representations of real-world facts.
Ranked #10 on Link Prediction on WN18
1 code implementation • NeurIPS 2019 • Ivana Balažević, Carl Allen, Timothy Hospedales
Hyperbolic embeddings have recently gained attention in machine learning due to their ability to represent hierarchical data more accurately and succinctly than their Euclidean analogues.
Ranked #38 on Link Prediction on WN18RR
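The succinctness of hyperbolic embeddings comes from the geometry's exponentially growing volume. The standard geodesic distance in the Poincaré ball model (one common choice of hyperbolic model; the paper's exact formulation may differ) can be sketched as:

```python
import math

def poincare_dist(u, v):
    """Geodesic distance in the Poincaré ball model:
    d(u, v) = arcosh(1 + 2‖u−v‖² / ((1−‖u‖²)(1−‖v‖²)))."""
    sq_norm = lambda x: sum(xi * xi for xi in x)
    diff = sq_norm([a - b for a, b in zip(u, v)])
    denom = (1 - sq_norm(u)) * (1 - sq_norm(v))
    return math.acosh(1 + 2 * diff / denom)
```

Points near the boundary of the ball are far from everything, which is what lets trees embed with low distortion.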
1 code implementation • 21 Aug 2018 • Ivana Balažević, Carl Allen, Timothy M. Hospedales
Knowledge graphs are graphical representations of large databases of facts, which typically suffer from incompleteness.
Ranked #10 on Link Prediction on WN18
1 code implementation • WS 2020 • David Chang, Ivana Balazevic, Carl Allen, Daniel Chawla, Cynthia Brandt, Richard Andrew Taylor
Much of biomedical and healthcare data is encoded in discrete, symbolic form such as text and medical codes.
1 code implementation • 24 Oct 2022 • Yifan Hou, Wenxiang Jiao, Meizhen Liu, Carl Allen, Zhaopeng Tu, Mrinmaya Sachan
Specifically, we introduce a lightweight adapter set to enhance MLLMs with cross-lingual entity alignment and facts from MLKGs for many languages.
1 code implementation • 6 Jul 2020 • Ivana Balažević, Carl Allen, Timothy Hospedales
In this work, we propose a probabilistically principled general approach to SSL that considers the distribution over label predictions, for labels of different complexity, from "one-hot" vectors to binary vectors and images.
1 code implementation • 17 May 2023 • Shehzaad Dhuliawala, Mrinmaya Sachan, Carl Allen
We present a latent variable model for classification that provides a novel probabilistic interpretation of neural network softmax classifiers.
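For reference, the softmax map that the paper reinterprets probabilistically converts a network's logits into a distribution over class labels. A minimal, numerically stable sketch (not the paper's model itself):

```python
import math

def softmax(logits):
    """p(y = i | x) = exp(z_i) / Σ_j exp(z_j), shifted by max for stability."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.0])
```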
no code implementations • NeurIPS 2019 • Carl Allen, Ivana Balažević, Timothy Hospedales
We show that different interactions between PMI vectors reflect semantic word relationships, such as similarity and paraphrasing, that are encoded in low-dimensional word embeddings under a suitable projection, theoretically explaining why the embeddings of W2V and GloVe work.
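The PMI vectors referred to above are built from co-occurrence statistics: PMI(w, c) = log p(w, c) / (p(w) p(c)). A minimal sketch of computing PMI from observed (word, context) pairs (toy data, illustrative only):

```python
import math
from collections import Counter

def pmi(pairs):
    """Pointwise mutual information from a list of (word, context) pairs.
    PMI(w, c) = log [ (k/n) / ((#w/n)(#c/n)) ] = log(k * n / (#w * #c))."""
    joint = Counter(pairs)
    w_marg = Counter(w for w, _ in pairs)
    c_marg = Counter(c for _, c in pairs)
    n = len(pairs)
    return {(w, c): math.log(k * n / (w_marg[w] * c_marg[c]))
            for (w, c), k in joint.items()}

scores = pmi([("a", "x"), ("a", "x"), ("b", "y"), ("b", "x")])
```

Positive PMI means a pair co-occurs more often than independence would predict; negative PMI means less often.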
no code implementations • 28 Jan 2019 • Carl Allen, Timothy Hospedales
Word embeddings generated by neural network methods such as word2vec (W2V) are well known to exhibit seemingly linear behaviour, e.g. the embeddings of the analogy "woman is to queen as man is to king" approximately describe a parallelogram.
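The parallelogram behaviour is the familiar vector arithmetic king − man + woman ≈ queen. A toy demonstration with hand-made 2-d embeddings (constructed for illustration, not trained):

```python
def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sum(a * a for a in u) ** 0.5 * sum(b * b for b in v) ** 0.5)

def closest(query, emb):
    """Word whose embedding is most cosine-similar to `query`."""
    return max(emb, key=lambda w: cosine(query, emb[w]))

# Toy embeddings: first axis ~ "royalty", second axis ~ "gender".
emb = {"king": [1.0, 1.0], "queen": [1.0, -1.0],
       "man": [0.0, 1.0], "woman": [0.0, -1.0]}

# Parallelogram: king - man + woman lands at queen's corner.
query = [k - m + w for k, m, w in zip(emb["king"], emb["man"], emb["woman"])]
```

Here `closest(query, emb)` recovers `"queen"`, mirroring the behaviour the paper sets out to explain.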
no code implementations • ICLR 2021 • Carl Allen, Ivana Balažević, Timothy Hospedales
Many models learn representations of knowledge graph data by exploiting its low-rank latent structure, encoding known relations between entities and enabling unknown facts to be inferred.
no code implementations • 10 Jun 2020 • Carl Allen, Ivana Balažević, Timothy Hospedales
Much progress has been made in semi-supervised learning (SSL) by combining methods that exploit different aspects of the data distribution, e.g. consistency regularisation relies on properties of $p(x)$, whereas entropy minimisation pertains to the label distribution $p(y|x)$.
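The two SSL ingredients mentioned can each be written as a simple penalty on predicted label distributions. A minimal sketch (generic textbook forms, not the paper's specific combination):

```python
import math

def entropy(p):
    """Shannon entropy of a predicted label distribution p(y|x).
    Entropy minimisation pushes unlabelled predictions to be confident."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def consistency(p, p_aug):
    """Squared-error penalty between predictions on an input and on an
    augmented copy, encouraging invariance over p(x)."""
    return sum((a - b) ** 2 for a, b in zip(p, p_aug))
```

A confident prediction like `[1.0, 0.0]` has zero entropy, while the maximally uncertain `[0.5, 0.5]` has entropy log 2; the consistency term vanishes only when the two predictions agree.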
no code implementations • 1 Feb 2022 • Carl Allen
To address this: 1. we theoretically justify the empirical observation that particular geometric relationships between word embeddings learned by algorithms such as word2vec and GloVe correspond to semantic relations between words; and 2. we extend this correspondence between semantics and geometry to the entities and relations of knowledge graphs, providing a model for the latent structure of knowledge graph representation linked to that of word embeddings.
no code implementations • 26 Sep 2022 • Đorđe Miladinović, Kumar Shridhar, Kushal Jain, Max B. Paulus, Joachim M. Buhmann, Mrinmaya Sachan, Carl Allen
In principle, applying variational autoencoders (VAEs) to sequential data offers a method for controlled sequence generation, manipulation, and structured representation learning.
no code implementations • 2 Feb 2024 • Alice Bizeul, Bernhard Schölkopf, Carl Allen
Self-supervised learning (SSL) learns representations by leveraging an auxiliary unsupervised task, such as classifying semantically related samples, e.g. different data augmentations or modalities.