no code implementations • NAACL (CMCL) 2021 • Steven Derby, Paul Miller, Barry Devereux
Furthermore, in order to make more meaningful comparisons with theories of human language comprehension in psycholinguistics, we focus on two key stages where the meaning of a particular target word may arise: immediately before the word’s presentation to the model (comparable to forward inferencing), and immediately after the word token has been input into the network.
no code implementations • 1 Jan 2021 • Mark Lennox, Neil M. Robertson, Barry Devereux
Modern sequencing technology has produced a vast quantity of proteomic data, which has been key to the development of various deep learning models within the field.
no code implementations • 1 Jan 2021 • Mark Lennox, Neil M. Robertson, Barry Devereux
In this paper, we seek to leverage a set of BERT-style models that have been pre-trained on vast quantities of both protein and drug data.
no code implementations • COLING 2020 • Steven Derby, Paul Miller, Barry Devereux
Semantic models derived from visual information have helped to overcome some of the limitations of solely text-based distributional semantic models.
1 code implementation • CoNLL 2020 • Steven Derby, Paul Miller, Barry Devereux
Researchers have recently demonstrated that tying the neural weights between the input look-up table and the output classification layer can improve training and lower perplexity on sequence learning tasks such as language modelling.
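The weight-tying idea this entry refers to can be illustrated with a minimal numpy sketch (dimensions and the identity "hidden state = embedding" are illustrative assumptions, not the paper's actual recurrent architecture): a single embedding matrix `E` serves both as the input look-up table and, transposed, as the output classification layer, halving the vocabulary-sized parameters.

```python
import numpy as np

# Minimal sketch of weight tying: one shared |V| x d matrix E is used for
# both the input look-up and the output projection. Sizes are hypothetical.
rng = np.random.default_rng(0)
vocab_size, hidden_dim = 50, 8
E = rng.normal(size=(vocab_size, hidden_dim))  # shared parameters

def embed(token_ids):
    return E[token_ids]            # input look-up table

def output_logits(hidden_states):
    return hidden_states @ E.T     # tied output classification layer

tokens = np.array([3, 17, 42])
h = embed(tokens)                  # stand-in for an RNN's hidden states
logits = output_logits(h)
print(logits.shape)                # one score per vocabulary word
```

In a real language model the hidden states come from a recurrent network rather than directly from the look-up, but the tying itself is exactly this sharing of `E` between the two layers.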
no code implementations • LREC 2020 • Liam Watson, Anna Jurek-Loughrey, Barry Devereux, Brian Murphy
Previous work in the area of sentiment analysis has focused on using information from within a sentence to predict a valence value for that sentence.
1 code implementation • IJCNLP 2019 • Steven Derby, Paul Miller, Barry Devereux
We propose a method for mapping human property knowledge onto a distributional semantic space, which adapts the word2vec architecture to the task of modelling concept features.
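The core of the idea, adapting a word2vec-style objective so that concept vectors score their human-listed property features highly, can be sketched as follows. The concepts, features, and property-norm data here are invented toy examples, and this plain SGD loop is a stand-in for the paper's actual adaptation of the word2vec architecture.

```python
import numpy as np

# Toy sketch: train concept vectors with a skip-gram-like binary objective
# so each concept assigns high probability to its own property features.
rng = np.random.default_rng(0)

concepts = ["apple", "dog"]
features = ["is_red", "is_edible", "barks", "has_legs"]
# hypothetical property-norm data: features humans list for each concept
norms = {"apple": ["is_red", "is_edible"], "dog": ["barks", "has_legs"]}

d = 6
C = rng.normal(scale=0.1, size=(len(concepts), d))  # concept vectors
F = rng.normal(scale=0.1, size=(len(features), d))  # feature vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.5
for _ in range(300):  # SGD on the word2vec-style logistic objective
    for ci, c in enumerate(concepts):
        for fi, f in enumerate(features):
            label = 1.0 if f in norms[c] else 0.0  # absent features act as negatives
            grad = sigmoid(C[ci] @ F[fi]) - label
            C[ci] -= lr * grad * F[fi]
            F[fi] -= lr * grad * C[ci]

# after training, a concept scores its own features above the others
print(sigmoid(C[0] @ F[features.index("is_red")]))
```

The resulting concept vectors live in a space whose dimensions correspond to interpretable property features, which is the mapping the abstract describes.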
no code implementations • WS 2019 • Mark Ormerod, Jesús Martínez-del-Rincón, Neil Robertson, Bernadette McGuinness, Barry Devereux
Despite recent advances in the application of deep neural networks to various kinds of medical data, extracting information from unstructured textual sources remains a challenging task.
no code implementations • WS 2018 • Steven Derby, Paul Miller, Brian Murphy, Barry Devereux
Performance in language modelling has been significantly improved by training recurrent neural networks on large corpora.
no code implementations • CoNLL 2018 • Steven Derby, Paul Miller, Brian Murphy, Barry Devereux
In this paper, we combine multimodal information from both text and image-based representations derived from state-of-the-art distributional models to produce sparse, interpretable vectors using Joint Non-Negative Sparse Embedding.
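The joint factorisation idea can be sketched in numpy: stack text-based and image-based word vectors side by side and factorise the combined matrix into non-negative word codes shared across both modalities. This uses plain Lee-Seung multiplicative-update NMF as a stand-in; the paper's Joint Non-Negative Sparse Embedding additionally enforces sparsity constraints that this sketch omits, and all dimensions here are invented.

```python
import numpy as np

# Sketch: joint non-negative factorisation of multimodal word vectors.
rng = np.random.default_rng(0)
n_words, d_text, d_image, k = 30, 12, 10, 5

X_text = np.abs(rng.normal(size=(n_words, d_text)))    # text-based vectors
X_image = np.abs(rng.normal(size=(n_words, d_image)))  # image-based vectors
X = np.hstack([X_text, X_image])                       # joint multimodal matrix

W = np.abs(rng.normal(size=(n_words, k)))      # word codes, shared across modalities
H = np.abs(rng.normal(size=(k, X.shape[1])))   # non-negative basis

eps = 1e-9
for _ in range(200):  # multiplicative updates keep W and H non-negative
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))  # relative reconstruction error
```

Each row of `W` is then a non-negative code for one word, and because both modalities are factorised jointly, the same interpretable dimensions explain both the textual and the visual representations.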