Search Results for author: Danushka Bollegala

Found 98 papers, 32 papers with code

Learning to Borrow -- Relation Representation for Without-Mention Entity-Pairs for Knowledge Graph Completion

1 code implementation NAACL 2022 Huda Hakami, Mona Hakami, Angrosh Mandya, Danushka Bollegala

In this paper, we propose and evaluate several methods to address this problem, where we borrow LDPs from the entity pairs that co-occur in sentences in the corpus (i.e., with-mention entity pairs) to represent entity pairs that do not co-occur in any sentence in the corpus (i.e., without-mention entity pairs).

Entity Embeddings Knowledge Graph Embedding +3
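
The borrowing step described above can be pictured as a nearest-neighbour lookup: a without-mention pair reuses the LDP representation of its most similar with-mention pair. A minimal sketch, assuming cosine similarity over concatenated entity embeddings as the (illustrative) similarity measure; the paper proposes and compares several borrowing methods, of which this is only a simplified stand-in:

```python
# Hypothetical borrowing sketch: `entity_emb` maps entities to vectors and
# `ldp_reps` maps with-mention pairs to their LDP representations.
import numpy as np

def borrow_ldp(pair, with_mention_pairs, ldp_reps, entity_emb):
    def pair_vec(p):
        head, tail = p
        return np.concatenate([entity_emb[head], entity_emb[tail]])

    query = pair_vec(pair)
    sims = []
    for p in with_mention_pairs:
        v = pair_vec(p)
        sims.append(query @ v / (np.linalg.norm(query) * np.linalg.norm(v)))
    # Borrow the LDP representation of the most similar with-mention pair.
    best = with_mention_pairs[int(np.argmax(sims))]
    return ldp_reps[best]
```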

Position-based Prompting for Health Outcome Generation

no code implementations BioNLP (ACL) 2022 Micheal Abaho, Danushka Bollegala, Paula Williamson, Susanna Dodd

Probing factual knowledge in Pre-trained Language Models (PLMs) using prompts has indirectly implied that language models (LMs) can be treated as knowledge bases.

Position

Debiasing Isn't Enough! -- On the Effectiveness of Debiasing MLMs and Their Social Biases in Downstream Tasks

no code implementations COLING 2022 Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki

We study the relationship between task-agnostic intrinsic and task-specific extrinsic social bias evaluation measures for MLMs, and find that there exists only a weak correlation between these two types of evaluation measures.

Query Obfuscation by Semantic Decomposition

no code implementations LREC 2022 Danushka Bollegala, Tomoya Machide, Ken-ichi Kawarabayashi

Our experimental results show that the proposed method can accurately reconstruct the search results for user queries, without compromising the privacy of the search engine users.

Representation Learning Word Embeddings

Detect and Classify -- Joint Span Detection and Classification for Health Outcomes

1 code implementation EMNLP 2021 Micheal Abaho, Danushka Bollegala, Paula Williamson, Susanna Dodd

To address this, we propose a method that uses both word-level and sentence-level information to simultaneously perform outcome span detection and outcome type classification.

Classification Decision Making +1

Improving Pre-trained Language Model Sensitivity via Mask Specific losses: A case study on Biomedical NER

no code implementations26 Mar 2024 Micheal Abaho, Danushka Bollegala, Gary Leeming, Dan Joyce, Iain E Buchan

To address insensitive fine-tuning, we propose Mask Specific Language Modeling (MSLM), an approach that efficiently acquires target domain knowledge by appropriately weighting the importance of domain-specific terms (DS-terms) during fine-tuning.

Language Modelling NER
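
The core of the approach is a loss that weights domain-specific terms more heavily during fine-tuning. A minimal PyTorch sketch of such a weighted masked-LM loss, using an illustrative constant weight rather than the paper's exact MSLM weighting scheme:

```python
# Sketch of a mask-specific weighted MLM loss: tokens belonging to
# domain-specific (DS) terms receive a larger loss weight. The weighting
# here is an illustrative assumption, not the paper's exact formulation.
import torch
import torch.nn.functional as F

def mask_specific_lm_loss(logits, labels, ds_term_mask, ds_weight=2.0):
    """
    logits:       (batch, seq, vocab) MLM predictions
    labels:       (batch, seq) target token ids, -100 at unmasked positions
    ds_term_mask: (batch, seq) bool tensor, True where the token is a DS term
    """
    per_token = F.cross_entropy(
        logits.transpose(1, 2), labels, reduction="none", ignore_index=-100
    )  # (batch, seq), zero at ignored positions
    weights = torch.where(
        ds_term_mask, torch.full_like(per_token, ds_weight), torch.ones_like(per_token)
    )
    active = labels != -100
    return (per_token * weights)[active].mean()
```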

Evaluating Unsupervised Dimensionality Reduction Methods for Pretrained Sentence Embeddings

no code implementations20 Mar 2024 Gaifan Zhang, Yi Zhou, Danushka Bollegala

Sentence embeddings produced by Pretrained Language Models (PLMs) have received wide attention from the NLP community due to their superior performance when representing texts in numerous downstream applications.

Dimensionality Reduction Sentence +1
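
As a concrete instance of the setting being evaluated, an unsupervised reduction such as PCA can be applied directly to a matrix of pretrained sentence embeddings. The embedding dimension (768) and target dimension (128) below are illustrative choices, not the paper's recommendation:

```python
# Sketch of one unsupervised reduction in this setting: PCA over a matrix of
# pretrained sentence embeddings (random stand-in data used here).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
sent_emb = rng.normal(size=(1000, 768))   # stand-in for PLM sentence embeddings

pca = PCA(n_components=128)
reduced = pca.fit_transform(sent_emb)     # (1000, 128)
print(reduced.shape, pca.explained_variance_ratio_.sum())
```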

A Semantic Distance Metric Learning approach for Lexical Semantic Change Detection

1 code implementation1 Mar 2024 Taichi Aida, Danushka Bollegala

Detecting temporal semantic changes of words is an important task for various NLP applications that must make time-sensitive predictions.

Change Detection Metric Learning +1

Eagle: Ethical Dataset Given from Real Interactions

no code implementations22 Feb 2024 Masahiro Kaneko, Danushka Bollegala, Timothy Baldwin

Existing evaluation metrics and methods for addressing these ethical challenges rely on datasets intentionally created by instructing humans to write instances that contain ethical problems.

The Gaps between Pre-train and Downstream Settings in Bias Evaluation and Debiasing

no code implementations16 Jan 2024 Masahiro Kaneko, Danushka Bollegala, Timothy Baldwin

Moreover, the performance degradation due to debiasing is also lower in the ICL case compared to that in the FT case.

In-Context Learning

A Predictive Factor Analysis of Social Biases and Task-Performance in Pretrained Masked Language Models

no code implementations19 Oct 2023 Yi Zhou, Jose Camacho-Collados, Danushka Bollegala

Various types of social biases have been reported with pretrained Masked Language Models (MLMs) in prior work.

$\textit{Swap and Predict}$ -- Predicting the Semantic Changes in Words across Corpora by Context Swapping

1 code implementation16 Oct 2023 Taichi Aida, Danushka Bollegala

Intuitively, if the meaning of $w$ does not change between $\mathcal{C}_1$ and $\mathcal{C}_2$, we would expect the distributions of contextualised word embeddings of $w$ to remain the same before and after this random swapping process.

Change Detection Language Modelling +1
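
The swapping test can be sketched in a few lines: pool the contexts of $w$ from both corpora, randomly redistribute them, and compare the distribution of $w$'s contextualised embeddings before and after. The mean-vector distance below is a simplified stand-in for the paper's distributional comparison, and `embed_in_context` is a hypothetical encoder function:

```python
# Illustrative context-swapping sketch: if w means the same in C1 and C2,
# randomly exchanging its contexts should leave the embedding distribution
# of w essentially unchanged.
import numpy as np

def swap_and_compare(ctx1, ctx2, embed_in_context, seed=0):
    rng = np.random.default_rng(seed)
    pooled = list(ctx1) + list(ctx2)
    perm = rng.permutation(len(pooled))
    swapped1 = [pooled[i] for i in perm[: len(ctx1)]]

    before = np.stack([embed_in_context(c) for c in ctx1])
    after = np.stack([embed_in_context(c) for c in swapped1])
    # Mean-embedding distance as a simple distributional comparison; a larger
    # change after swapping suggests the word's meaning differs across corpora.
    return np.linalg.norm(before.mean(axis=0) - after.mean(axis=0))
```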

Can Word Sense Distribution Detect Semantic Changes of Words?

1 code implementation16 Oct 2023 Xiaohang Tang, Yi Zhou, Taichi Aida, Procheta Sen, Danushka Bollegala

Given this relationship between WSD and SCD, we explore the possibility of predicting whether a target word has its meaning changed between two corpora collected at different time steps, by comparing the distributions of senses of that word in each corpus.

Change Detection Word Sense Disambiguation
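
The comparison can be sketched as follows: disambiguate every occurrence of the target word in each corpus, then measure the divergence between the two sense distributions. Here `disambiguate` is a hypothetical WSD function and Jensen-Shannon divergence is an illustrative choice of distance:

```python
# Hedged sketch of a WSD-based change score: compare the sense distributions
# of a target word across two corpora.
from collections import Counter
import numpy as np
from scipy.spatial.distance import jensenshannon

def sense_distribution(occurrences, disambiguate, senses):
    counts = Counter(disambiguate(ctx) for ctx in occurrences)
    dist = np.array([counts[s] for s in senses], dtype=float)
    return dist / dist.sum()

def semantic_change_score(occ_c1, occ_c2, disambiguate, senses):
    p = sense_distribution(occ_c1, disambiguate, senses)
    q = sense_distribution(occ_c2, disambiguate, senses)
    return jensenshannon(p, q)  # higher = larger change in sense usage
```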

A Neighbourhood-Aware Differential Privacy Mechanism for Static Word Embeddings

no code implementations19 Sep 2023 Danushka Bollegala, Shuichi Otake, Tomoya Machide, Ken-ichi Kawarabayashi

We propose a Neighbourhood-Aware Differential Privacy (NADP) mechanism considering the neighbourhood of a word in a pretrained static word embedding space to determine the minimal amount of noise required to guarantee a specified privacy level.

Word Embeddings
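
A simplified sketch of the neighbourhood-aware intuition: the sparser a word's neighbourhood, the more noise is needed. The Laplace scaling below is an illustrative assumption, not the paper's calibrated NADP mechanism:

```python
# Words in dense regions of the embedding space receive less noise than
# isolated ones; the per-word noise scale here is an illustrative choice.
import numpy as np
from scipy.spatial.distance import cdist

def neighbourhood_aware_noise(emb, k=10, epsilon=1.0, seed=0):
    rng = np.random.default_rng(seed)
    dists = cdist(emb, emb)                # demo only; O(n^2) memory
    kth = np.sort(dists, axis=1)[:, k]     # column 0 is the word itself
    scale = (kth / epsilon)[:, None]       # per-word noise scale
    return emb + rng.laplace(scale=scale, size=emb.shape)

rng = np.random.default_rng(1)
emb = rng.normal(size=(500, 50))           # stand-in for static embeddings
private_emb = neighbourhood_aware_noise(emb)
```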

The Impact of Debiasing on the Performance of Language Models in Downstream Tasks is Underestimated

no code implementations16 Sep 2023 Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki

In this study, we compare the impact of debiasing on performance across multiple downstream tasks using a wide range of benchmark datasets containing female, male, and stereotypical words.

In-Contextual Gender Bias Suppression for Large Language Models

1 code implementation13 Sep 2023 Daisuke Oba, Masahiro Kaneko, Danushka Bollegala

We show that, using the CrowS-Pairs dataset, our textual preambles covering counterfactual statements can suppress gender biases in English LLMs such as LLaMA2.

counterfactual Data Augmentation +1
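
Mechanically, the suppression is applied at inference time by prepending the preambles to the user prompt; a trivial sketch with made-up preamble text (the paper's preambles are selected counterfactual statements, not these examples):

```python
# Hypothetical preamble-based suppression: prepend counterfactual statements
# to the prompt before querying the LLM. Preamble wording is illustrative.
preambles = [
    "He is a nurse.",
    "She is a surgeon.",
]

def with_preamble(prompt: str) -> str:
    return " ".join(preambles) + " " + prompt

print(with_preamble("Complete the sentence: the engineer said that"))
```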

Learning to Predict Concept Ordering for Common Sense Generation

1 code implementation12 Sep 2023 Tianhui Zhang, Danushka Bollegala, Bei Peng

Prior work has shown that the ordering in which concepts are shown to a commonsense generator plays an important role, affecting the quality of the generated sentence.

Common Sense Reasoning Sentence

Metrics for quantifying isotropy in high dimensional unsupervised clustering tasks in a materials context

no code implementations25 May 2023 Samantha Durdy, Michael W. Gaultois, Vladimir Gusev, Danushka Bollegala, Matthew J. Rosseinsky

Using fractional anisotropy, a common method used in medical imaging for comparison, we then expand these measures to examine the average isotropy of a set of clusters.

Clustering

Unsupervised Semantic Variation Prediction using the Distribution of Sibling Embeddings

1 code implementation15 May 2023 Taichi Aida, Danushka Bollegala

However, some of the previously associated meanings of a target word can become obsolete over time (e.g., the meaning of gay as happy), while novel usages of existing words are observed (e.g., the meaning of cell as a mobile phone).

Evaluating the Robustness of Discrete Prompts

1 code implementation11 Feb 2023 Yoichi Ishibashi, Danushka Bollegala, Katsuhito Sudoh, Satoshi Nakamura

To address this question, we conduct a systematic study of the robustness of discrete prompts by applying carefully designed perturbations to prompts generated using AutoPrompt, and then measuring their performance on two Natural Language Inference (NLI) datasets.

Natural Language Inference

Comparing Intrinsic Gender Bias Evaluation Measures without using Human Annotated Examples

no code implementations28 Jan 2023 Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki

Prior works have relied on human annotated examples to compare existing intrinsic bias evaluation measures.

On the Curious Case of $\ell_2$ norm of Sense Embeddings

no code implementations26 Oct 2022 Yi Zhou, Danushka Bollegala

We show that the $\ell_2$ norm of a static sense embedding encodes information related to the frequency of that sense in the training corpus used to learn the sense embeddings.

Word Embeddings Word Sense Disambiguation
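
The reported relationship is easy to probe: compute the $\ell_2$ norm of each sense embedding and correlate it with the sense's training-corpus frequency. The data below are synthetic stand-ins constructed to mimic the finding; real sense embeddings (e.g. LMMS) and corpus sense counts would replace them:

```python
# Sanity-check sketch for the norm-frequency relationship: correlate sense
# frequency with the l2 norm of its static sense embedding (toy data).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
sense_counts = rng.integers(1, 10_000, size=500)
# Toy construction where the norm grows with log frequency, mimicking the finding.
sense_vectors = rng.normal(size=(500, 300))
sense_vectors *= (np.log(sense_counts) / np.linalg.norm(sense_vectors, axis=1))[:, None]

norms = np.linalg.norm(sense_vectors, axis=1)
rho, _ = spearmanr(sense_counts, norms)
print(f"Spearman correlation between sense frequency and l2 norm: {rho:.3f}")
```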

Debiasing isn't enough! -- On the Effectiveness of Debiasing MLMs and their Social Biases in Downstream Tasks

no code implementations6 Oct 2022 Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki

We study the relationship between task-agnostic intrinsic and task-specific extrinsic social bias evaluation measures for Masked Language Models (MLMs), and find that there exists only a weak correlation between these two types of evaluation measures.

Learning Dynamic Contextualised Word Embeddings via Template-based Temporal Adaptation

1 code implementation23 Aug 2022 Xiaohang Tang, Yi Zhou, Danushka Bollegala

We then generate prompts by filling manually compiled templates using the extracted pivot and anchor terms.

Language Modelling Word Embeddings
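
The prompt-generation step reduces to string templating; a sketch with hypothetical templates and (pivot, anchor) pairs, since the paper's manually compiled templates are not reproduced here:

```python
# Hypothetical templates filled with extracted (pivot, anchor) term pairs.
templates = [
    "{pivot} is associated with {anchor}.",
    "In this year, {pivot} frequently appears with {anchor}.",
]

def generate_prompts(pivot_anchor_pairs, templates):
    return [
        t.format(pivot=p, anchor=a)
        for p, a in pivot_anchor_pairs
        for t in templates
    ]

print(generate_prompts([("vaccine", "booster")], templates))
```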

Random projections and Kernelised Leave One Cluster Out Cross-Validation: Universal baselines and evaluation tools for supervised machine learning for materials properties

1 code implementation17 Jun 2022 Samantha Durdy, Michael Gaultois, Vladimir Gusev, Danushka Bollegala, Matthew J. Rosseinsky

We also find that the radial basis function improves the linear separability of chemical datasets in all 10 datasets tested, and we provide a framework for applying this function in the LOCO-CV process to improve the outcome of LOCO-CV measurements regardless of the machine learning algorithm, choice of metric, and choice of compound representation.

Band Gap BIG-bench Machine Learning

Gender Bias in Meta-Embeddings

no code implementations19 May 2022 Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki

Different methods have been proposed to develop meta-embeddings from a given set of source embeddings.

Gender Bias in Masked Language Models for Multiple Languages

1 code implementation NAACL 2022 Masahiro Kaneko, Aizhan Imankulova, Danushka Bollegala, Naoaki Okazaki

Unfortunately, it was reported that MLMs also learn discriminative biases regarding attributes such as gender and race.

Attribute Sentence

Learning to Borrow -- Relation Representation for Without-Mention Entity-Pairs for Knowledge Graph Completion

1 code implementation27 Apr 2022 Huda Hakami, Mona Hakami, Angrosh Mandya, Danushka Bollegala

In this paper, we propose and evaluate several methods to address this problem, where we borrow LDPs from the entity pairs that co-occur in sentences in the corpus (i.e., with-mention entity pairs) to represent entity pairs that do not co-occur in any sentence in the corpus (i.e., without-mention entity pairs).

Entity Embeddings Knowledge Graph Embedding +3

Learning Meta Word Embeddings by Unsupervised Weighted Concatenation of Source Embeddings

no code implementations26 Apr 2022 Danushka Bollegala

Given multiple source word embeddings learnt using diverse algorithms and lexical resources, meta word embedding learning methods attempt to learn more accurate and wide-coverage word embeddings.

Word Embeddings

A Survey on Word Meta-Embedding Learning

no code implementations25 Apr 2022 Danushka Bollegala, James O'Neill

Meta-embedding (ME) learning is an emerging approach that attempts to learn more accurate word embeddings given existing (source) word embeddings as the sole input.

Word Embeddings

Unsupervised Attention-based Sentence-Level Meta-Embeddings from Contextualised Language Models

no code implementations LREC 2022 Keigo Takahashi, Danushka Bollegala

A variety of contextualised language models have been proposed in the NLP community, which are trained on diverse corpora to produce numerous Neural Language Models (NLMs).

Semantic Textual Similarity Sentence +2

Sense Embeddings are also Biased -- Evaluating Social Biases in Static and Contextualised Sense Embeddings

1 code implementation14 Mar 2022 Yi Zhou, Masahiro Kaneko, Danushka Bollegala

Sense embedding learning methods learn different embeddings for the different senses of an ambiguous word.

Word Embeddings

Assessment of contextualised representations in detecting outcome phrases in clinical trials

no code implementations13 Feb 2022 Micheal Abaho, Danushka Bollegala, Paula R Williamson, Susanna Dodd

We reach a consensus on which contextualized representations are best suited for detecting outcomes from clinical-trial abstracts.

Decision Making Specificity

Learning Sense-Specific Static Embeddings using Contextualised Word Embeddings as a Proxy

no code implementations PACLIC 2021 Yi Zhou, Danushka Bollegala

Contextualised word embeddings generated from Neural Language Models (NLMs), such as BERT, represent a word with a vector that considers the semantics of the target word as well as its context.

Word Embeddings Word Sense Disambiguation

Backretrieval: An Image-Pivoted Evaluation Metric for Cross-Lingual Text Representations Without Parallel Corpora

no code implementations11 May 2021 Mikhail Fain, Niall Twomey, Danushka Bollegala

Cross-lingual text representations have gained popularity lately and act as the backbone of many tasks such as unsupervised machine translation and cross-lingual information retrieval, to name a few.

Cross-Lingual Information Retrieval Retrieval +2

Detect and Classify -- Joint Span Detection and Classification for Health Outcomes

1 code implementation15 Apr 2021 Michael Abaho, Danushka Bollegala, Paula Williamson, Susanna Dodd

To address this, we propose a method that uses both word-level and sentence-level information to simultaneously perform outcome span detection and outcome type classification.

Classification Decision Making +2

Unmasking the Mask -- Evaluating Social Biases in Masked Language Models

1 code implementation15 Apr 2021 Masahiro Kaneko, Danushka Bollegala

To overcome the above-mentioned disfluencies, we propose All Unmasked Likelihood (AUL), a bias evaluation measure that predicts all tokens in a test case given the MLM embedding of the unmasked input.

Selection bias Sentence
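
A minimal sketch of an AUL-style score, assuming the Hugging Face transformers API: feed the unmasked sentence to the MLM and average the log-likelihood of every (non-special) token at its own position. Model choice and token handling are illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def aul_score(sentence: str) -> float:
    """Average log-likelihood of all tokens, given the unmasked input."""
    enc = tokenizer(sentence, return_tensors="pt")
    ids = enc["input_ids"][0]
    with torch.no_grad():
        logits = model(**enc).logits[0]          # (seq_len, vocab)
    log_probs = torch.log_softmax(logits, dim=-1)
    token_lp = log_probs.gather(1, ids.unsqueeze(1)).squeeze(1)
    keep = torch.tensor([t not in tokenizer.all_special_ids for t in ids.tolist()])
    return token_lp[keep].mean().item()

# Comparing stereotypical vs. anti-stereotypical sentence variants with this
# score gives a bias estimate without masking any tokens.
print(aul_score("The doctor finished her shift."))
```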

RelWalk -- A Latent Variable Model Approach to Knowledge Graph Embedding

no code implementations EACL 2021 Danushka Bollegala, Huda Hakami, Yuichi Yoshida, Ken-ichi Kawarabayashi

Embedding entities and relations of a knowledge graph in a low-dimensional space has shown impressive performance in predicting missing links between entities.

Knowledge Graph Embedding Knowledge Graph Embeddings +1

Semantically-Conditioned Negative Samples for Efficient Contrastive Learning

no code implementations12 Feb 2021 James O'Neill, Danushka Bollegala

In the knowledge distillation setting, (1) the performance of student networks increases by 4.56 percentage points on Tiny-ImageNet-200 and 3.29 on CIFAR-100 over student networks trained with no teacher, and (2) by 1.23 and 1.72 percentage points respectively over a hard-to-beat baseline (Hinton et al., 2015).

Contrastive Learning Knowledge Distillation

RelWalk -- A Latent Variable Model Approach to Knowledge Graph Embedding

1 code implementation25 Jan 2021 Danushka Bollegala, Huda Hakami, Yuichi Yoshida, Ken-ichi Kawarabayashi

Embedding entities and relations of a knowledge graph in a low-dimensional space has shown impressive performance in predicting missing links between entities.

Knowledge Graph Embedding Knowledge Graph Embeddings +1

Dictionary-based Debiasing of Pre-trained Word Embeddings

1 code implementation EACL 2021 Masahiro Kaneko, Danushka Bollegala

Word embeddings trained on large corpora have been shown to encode high levels of unfair discriminatory gender, racial, religious and ethnic biases.

Word Embeddings

Debiasing Pre-trained Contextualised Embeddings

2 code implementations EACL 2021 Masahiro Kaneko, Danushka Bollegala

In comparison to the numerous debiasing methods proposed for the static non-contextualised word embeddings, the discriminative biases in contextualised embeddings have received relatively little attention.

Sentence Word Embeddings

$k$-Neighbor Based Curriculum Sampling for Sequence Prediction

no code implementations22 Jan 2021 James O'Neill, Danushka Bollegala

At test time, a sequence predictor is required to make predictions given past predictions as the input, instead of the past targets that are provided during training.

Language Modelling

Autoencoding Improves Pre-trained Word Embeddings

no code implementations COLING 2020 Masahiro Kaneko, Danushka Bollegala

Prior work investigating the geometry of pre-trained word embeddings has shown that word embeddings are distributed in a narrow cone, and that centering and projecting them using principal component vectors can increase the accuracy of a given set of pre-trained word embeddings.

Word Embeddings
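
The centering-and-projection recipe the abstract refers to fits in a few lines: subtract the mean embedding, then remove the projections onto the top principal components. The number of removed components below is an illustrative choice:

```python
# Sketch of the centering-and-projection post-processing the abstract cites:
# subtract the mean vector, then strip the top principal components.
import numpy as np

def center_and_project(emb: np.ndarray, n_components: int = 2) -> np.ndarray:
    centered = emb - emb.mean(axis=0, keepdims=True)
    # Top principal directions via SVD of the centered matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    top = vt[:n_components]                    # (n_components, dim)
    return centered - centered @ top.T @ top   # remove projections onto top PCs
```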

Spatio-temporal Attention Model for Tactile Texture Recognition

no code implementations10 Aug 2020 Guanqun Cao, Yi Zhou, Danushka Bollegala, Shan Luo

Recently, tactile sensing has attracted great interest in robotics, especially for facilitating exploration of unstructured environments and effective manipulation.

Tree-Structured Neural Topic Model

no code implementations ACL 2020 Masaru Isonuma, Junichiro Mori, Danushka Bollegala, Ichiro Sakata

This paper presents a tree-structured neural topic model, which has a topic distribution over a tree with an infinite number of branches.

Do not let the history haunt you -- Mitigating Compounding Errors in Conversational Question Answering

no code implementations12 May 2020 Angrosh Mandya, James O'Neill, Danushka Bollegala, Frans Coenen

The Conversational Question Answering (CoQA) task involves answering a sequence of inter-related conversational questions about a contextual paragraph.

Conversational Question Answering

Do not let the history haunt you: Mitigating Compounding Errors in Conversational Question Answering

no code implementations LREC 2020 Angrosh Mandya, James O'Neill, Danushka Bollegala, Frans Coenen

The Conversational Question Answering (CoQA) task involves answering a sequence of inter-related conversational questions about a contextual paragraph.

Conversational Question Answering

Contextualised Graph Attention for Improved Relation Extraction

1 code implementation22 Apr 2020 Angrosh Mandya, Danushka Bollegala, Frans Coenen

This paper presents a contextualized graph attention network that combines edge features and multiple sub-graphs for improving relation extraction.

Graph Attention Relation +1

Dividing and Conquering Cross-Modal Recipe Retrieval: from Nearest Neighbours Baselines to SoTA

no code implementations28 Nov 2019 Mikhail Fain, Niall Twomey, Andrey Ponikar, Ryan Fox, Danushka Bollegala

We also use our method for comparing image and text encoders trained using different modern approaches, thus addressing the issues hindering the development of novel methods for cross-modal recipe retrieval.

Cross-Modal Retrieval Retrieval

Query Obfuscation Semantic Decomposition

no code implementations12 Sep 2019 Danushka Bollegala, Tomoya Machide, Ken-ichi Kawarabayashi

Our experimental results show that the proposed method can accurately reconstruct the search results for user queries, without compromising the privacy of the search engine users.

Clustering Information Retrieval +2

Transfer Reward Learning for Policy Gradient-Based Text Generation

no code implementations9 Sep 2019 James O'Neill, Danushka Bollegala

However, we argue that current n-gram overlap based measures that are used as rewards can be improved by using model-based rewards transferred from tasks that directly compare the similarity of sentence pairs.

Conditional Text Generation Image Captioning +5

Self-Adaptation for Unsupervised Domain Adaptation

no code implementations RANLP 2019 Xia Cui, Danushka Bollegala

Using the labelled data from the source domain, we first learn a projection that maximises the distance among the nearest neighbours with opposite labels in the source domain.

Unsupervised Domain Adaptation

Gender-preserving Debiasing for Pre-trained Word Embeddings

1 code implementation ACL 2019 Masahiro Kaneko, Danushka Bollegala

Word embeddings learnt from massive text collections have demonstrated significant levels of discriminative biases such as gender, racial or ethnic biases, which in turn bias the down-stream NLP applications that use those word embeddings.

Word Embeddings

RelWalk -- A Latent Variable Model Approach to Knowledge Graph Embedding

no code implementations ICLR 2019 Danushka Bollegala, Huda Hakami, Yuichi Yoshida, Ken-ichi Kawarabayashi

Existing methods for learning KGEs can be seen as a two-stage process where (a) entities and relations in the knowledge graph are represented using some linear algebraic structures (embeddings), and (b) a scoring function is defined that evaluates the strength of a relation that holds between two entities using the corresponding relation and entity embeddings.

Entity Embeddings Knowledge Graph Embedding +2

Error-Correcting Neural Sequence Prediction

no code implementations21 Jan 2019 James O'Neill, Danushka Bollegala

We propose a novel neural sequence prediction method based on error-correcting output codes that avoids exact softmax normalization and allows for a tradeoff between speed and performance.

Image Captioning Language Modelling +1

Learning Relation Representations from Word Representations

no code implementations AKBC 2019 Huda Hakami, Danushka Bollegala

We model relation representation as a supervised learning problem and learn parametrised operators that map pre-trained word embeddings to relation representations.

Knowledge Base Completion Relation +1

Joint Learning of Hierarchical Word Embeddings from a Corpus and a Taxonomy

no code implementations AKBC 2019 Mohammed Alsuhaibani, Takanori Maehara, Danushka Bollegala

To learn the word embeddings, the proposed method considers not only the hypernym relations that exist between words on a taxonomy, but also their contextual information in a large text corpus.

Word Embeddings

Combining Long Short Term Memory and Convolutional Neural Network for Cross-Sentence n-ary Relation Extraction

no code implementations AKBC 2019 Angrosh Mandya, Danushka Bollegala, Frans Coenen, Katie Atkinson

We propose in this paper a combined model of Long Short Term Memory and Convolutional Neural Networks (LSTM-CNN) that exploits word embeddings and positional embeddings for cross-sentence n-ary relation extraction.

Relation Relation Extraction +2

Analysing Dropout and Compounding Errors in Neural Language Models

no code implementations2 Nov 2018 James O'Neill, Danushka Bollegala

Moreover, we propose an extension of variational dropout to concrete dropout and curriculum dropout with varying schedules.

Language Modelling

Meta-Embedding as Auxiliary Task Regularization

no code implementations16 Sep 2018 James O'Neill, Danushka Bollegala

For intrinsic task evaluation, supervision comes from various labeled word similarity datasets.

Self-Supervised Learning Sentence +3

Curriculum-Based Neighborhood Sampling For Sequence Prediction

no code implementations16 Sep 2018 James O'Neill, Danushka Bollegala

At test time, a language model is required to make predictions given past predictions as input, instead of the past targets that are provided during training.

Language Modelling

Angular-Based Word Meta-Embedding Learning

no code implementations13 Aug 2018 James O'Neill, Danushka Bollegala

This work compares meta-embeddings trained for different losses, namely loss functions that account for the angular distance between the reconstructed embedding and the target, and those that account for normalized distances based on vector length.

Meta-Learning Word Embeddings +1

Learning Word Meta-Embeddings by Autoencoding

1 code implementation COLING 2018 Danushka Bollegala, Cong Bao

Distributed word embeddings have shown superior performances in numerous Natural Language Processing (NLP) tasks.

Dependency Parsing Machine Translation +3

Why does PairDiff work? - A Mathematical Analysis of Bilinear Relational Compositional Operators for Analogy Detection

no code implementations COLING 2018 Huda Hakami, Kohei Hayashi, Danushka Bollegala

We show that, if the word embeddings are standardised and uncorrelated, such an operator will be independent of bilinear terms, and can be simplified to a linear form, where PairDiff is a special case.

Information Retrieval Knowledge Base Completion +2
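
For reference, PairDiff represents the relation between words $a$ and $b$ by their vector offset; a hedged sketch of the reduction the paper proves, with illustrative notation:

```latex
% General linear relational operator over word embeddings a, b (the bilinear
% terms drop out under the standardised/uncorrelated assumption above):
\[
  r(\mathbf{a}, \mathbf{b}) = \mathbf{A}\mathbf{a} + \mathbf{B}\mathbf{b},
  \qquad
  \text{PairDiff: } \mathbf{A} = -\mathbf{I},\; \mathbf{B} = \mathbf{I}
  \;\Rightarrow\; r(\mathbf{a}, \mathbf{b}) = \mathbf{b} - \mathbf{a}.
\]
```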

An Empirical Study on Fine-Grained Named Entity Recognition

no code implementations COLING 2018 Khai Mai, Thai-Hoang Pham, Minh Trung Nguyen, Tuan Duc Nguyen, Danushka Bollegala, Ryohei Sasano, Satoshi Sekine

However, there is little research on fine-grained NER (FG-NER), in which hundreds of named entity categories must be recognized, especially for non-English languages.

Chatbot named-entity-recognition +3

Is Something Better than Nothing? Automatically Predicting Stance-based Arguments Using Deep Learning and Small Labelled Dataset

no code implementations NAACL 2018 Pavithra Rajendran, Danushka Bollegala, Simon Parsons

In the work described here, we automatically annotate stance as implicit or explicit and our results show that the datasets we generate, although noisy, can be used to learn better models for implicit/explicit opinion classification.

Abstract Argumentation Argument Mining +3

Solving Feature Sparseness in Text Classification using Core-Periphery Decomposition

no code implementations SEMEVAL 2018 Xia Cui, Sadamori Kojaku, Naoki Masuda, Danushka Bollegala

We observe that prioritising features that are common to both training and test instances as cores during the CP decomposition further improves the accuracy of text classification.

Domain Adaptation General Classification +4

Dropping Networks for Transfer Learning

no code implementations23 Apr 2018 James O'Neill, Danushka Bollegala

We also compare against models that are fully trained on the target task in the standard supervised learning setup.

Few-Shot Learning Natural Language Inference +2

ClassiNet -- Predicting Missing Features for Short-Text Classification

no code implementations14 Apr 2018 Danushka Bollegala, Vincent Atanasov, Takanori Maehara, Ken-ichi Kawarabayashi

We propose ClassiNet -- a network of classifiers trained for predicting missing features in a given instance, to overcome the feature sparseness problem.

General Classification text-classification +1

Why PairDiff works? -- A Mathematical Analysis of Bilinear Relational Compositional Operators for Analogy Detection

no code implementations19 Sep 2017 Huda Hakami, Danushka Bollegala, Kohei Hayashi

We show that, if the word embeddings are standardised and uncorrelated, such an operator will be independent of bilinear terms, and can be simplified to a linear form, where PairDiff is a special case.

Information Retrieval Knowledge Base Completion +3

Compositional Approaches for Representing Relations Between Words: A Comparative Study

no code implementations4 Sep 2017 Huda Hakami, Danushka Bollegala

In contrast, a compositional approach for representing relations between words overcomes these issues by using the attributes of each individual word to indirectly compose a representation for the common relations that hold between the two words.

Knowledge Base Completion Relation

Joint Word Representation Learning using a Corpus and a Semantic Lexicon

1 code implementation19 Nov 2015 Danushka Bollegala, Mohammed Alsuhaibani, Takanori Maehara, Ken-ichi Kawarabayashi

For this purpose, we propose a joint word representation learning method that simultaneously predicts the co-occurrences of two words in a sentence subject to the relational constraints given by the semantic lexicon.

Representation Learning Semantic Similarity +2
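
A hedged sketch of a joint objective of this shape (symbols illustrative, not the paper's exact formulation): a corpus co-occurrence prediction term plus a regulariser enforcing the lexicon's relational constraints over the same word vectors:

```latex
% Corpus term (GloVe-style weighted co-occurrence prediction) plus a lexicon
% regulariser pulling together word pairs (u, v) related in the lexicon L.
\[
  J = \sum_{(u,v)} f(X_{uv})\,\bigl(\mathbf{w}_u^{\top}\mathbf{w}_v + b_u + b_v - \log X_{uv}\bigr)^{2}
    \;+\; \lambda \sum_{(u,v)\in\mathcal{L}} \lVert \mathbf{w}_u - \mathbf{w}_v \rVert^{2}
\]
```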

Unsupervised Cross-Domain Word Representation Learning

no code implementations IJCNLP 2015 Danushka Bollegala, Takanori Maehara, Ken-ichi Kawarabayashi

Given a pair of source-target domains, we propose an unsupervised method for learning domain-specific word representations that accurately capture the domain-specific aspects of word semantics.

Domain Adaptation Representation Learning +2

Embedding Semantic Relations into Word Representations

no code implementations1 May 2015 Danushka Bollegala, Takanori Maehara, Ken-ichi Kawarabayashi

We propose an unsupervised method for learning vector representations for words such that the learnt representations are sensitive to the semantic relations that exist between two words.

Relation Classification

Learning Word Representations from Relational Graphs

no code implementations7 Dec 2014 Danushka Bollegala, Takanori Maehara, Yuichi Yoshida, Ken-ichi Kawarabayashi

To evaluate the accuracy of the word representations learnt using the proposed method, we use the learnt word representations to solve semantic word analogy problems.

Representation Learning
