1 code implementation • EMNLP (ClinicalNLP) 2020 • Zixu Wang, Julia Ive, Sinead Moylett, Christoph Mueller, Rudolf Cardinal, Sumithra Velupillai, John O’Brien, Robert Stewart
To the best of our knowledge, this is the first attempt to distinguish DLB from AD using mental health records, and to improve the reliability of DLB predictions.
no code implementations • NAACL 2022 • Atijit Anuchitanukul, Julia Ive
The performance of Reinforcement Learning (RL) for natural language tasks including Machine Translation (MT) is crucially dependent on the reward formulation.
no code implementations • NAACL (AutoSimTrans) 2022 • Ruiqing Zhang, Chuanqiang Zhang, Zhongjun He, Hua Wu, Haifeng Wang, Liang Huang, Qun Liu, Julia Ive, Wolfgang Macherey
This paper reports the results of the shared task we hosted on the Third Workshop of Automatic Simultaneous Translation (AutoSimTrans).
no code implementations • NAACL (CLPsych) 2022 • Falwah Alhamed, Julia Ive, Lucia Specia
The second is predicting the degree of suicide risk as a user-level classification task.
no code implementations • 6 Dec 2022 • Damith Chamalke Senadeera, Julia Ive
To achieve this, we introduce a novel soft prompt tuning method that uses soft prompts at both the encoder and decoder levels of a T5 model, and investigate its performance, since the behaviour of an additional soft prompt at the decoder of a T5 model in controlled text generation had remained unexplored.
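As a rough illustration of the idea (not the authors' released code), the sketch below prepends trainable soft prompts to both the encoder and the decoder inputs of a frozen T5 model; the prompt lengths, model size and loss handling are illustrative assumptions.

```python
# Minimal sketch: soft prompts at both encoder and decoder of a frozen T5.
# Prompt lengths, model size and example texts are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import T5ForConditionalGeneration, T5Tokenizer

model = T5ForConditionalGeneration.from_pretrained("t5-small")
tok = T5Tokenizer.from_pretrained("t5-small")
for p in model.parameters():                      # freeze the backbone
    p.requires_grad = False

d_model = model.config.d_model
enc_prompt = nn.Parameter(torch.randn(10, d_model) * 0.02)   # encoder soft prompt
dec_prompt = nn.Parameter(torch.randn(10, d_model) * 0.02)   # decoder soft prompt

def prompted_loss(src_text, tgt_text):
    src = tok(src_text, return_tensors="pt")
    tgt_ids = tok(tgt_text, return_tensors="pt").input_ids
    emb = model.get_input_embeddings()

    # Prepend the encoder prompt to the embedded source and extend the mask.
    enc_in = torch.cat([enc_prompt.unsqueeze(0), emb(src.input_ids)], dim=1)
    enc_mask = torch.cat(
        [torch.ones(1, enc_prompt.size(0), dtype=src.attention_mask.dtype),
         src.attention_mask], dim=1)

    # Shift the target right (teacher forcing) and prepend the decoder prompt.
    start = torch.full((1, 1), model.config.decoder_start_token_id)
    dec_ids = torch.cat([start, tgt_ids[:, :-1]], dim=1)
    dec_in = torch.cat([dec_prompt.unsqueeze(0), emb(dec_ids)], dim=1)

    out = model(inputs_embeds=enc_in, attention_mask=enc_mask,
                decoder_inputs_embeds=dec_in)
    # Score only the positions that correspond to real target tokens.
    logits = out.logits[:, dec_prompt.size(0):, :]
    return nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), tgt_ids.reshape(-1),
        ignore_index=tok.pad_token_id)

loss = prompted_loss("summarize: patient admitted with chest pain and fever ...",
                     "admission for chest pain")
loss.backward()   # gradients reach only enc_prompt and dec_prompt
```

Since the backbone is frozen, only the two small prompt matrices are trained, keeping the number of tunable parameters tiny compared with full fine-tuning.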
no code implementations • 29 May 2022 • Hongshu Liu, Nabeel Seedat, Julia Ive
Computational models providing accurate estimates of their uncertainty are crucial for risk management associated with decision making in healthcare contexts.
no code implementations • 24 May 2022 • Heng-Yi Wu, Jingqing Zhang, Julia Ive, Tong Li, Vibhor Gupta, Bingyuan Chen, Yike Guo
Structured (tabular) data in the preclinical and clinical domains contains valuable information about individuals, and an efficient table-to-text summarization system can drastically reduce the manual effort needed to condense this data into reports.
no code implementations • 19 Apr 2022 • Ashwani Tanwar, Jingqing Zhang, Julia Ive, Vibhor Gupta, Yike Guo
Extracting phenotypes from clinical text has been shown to be useful for a variety of clinical use cases such as identifying patients with rare diseases.
no code implementations • 24 Nov 2021 • Atijit Anuchitanukul, Julia Ive, Lucia Specia
We then propose to bring these findings into computational detection models by introducing and evaluating (a) neural architectures for contextual toxicity detection that are aware of the conversational structure, and (b) data augmentation strategies that can help model contextual toxicity detection.
no code implementations • EMNLP 2021 • Jingqing Zhang, Luis Bolanos, Tong Li, Ashwani Tanwar, Guilherme Freire, Xian Yang, Julia Ive, Vibhor Gupta, Yike Guo
Contextualised word embeddings are a powerful tool for detecting contextual synonyms.
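A minimal sketch of the underlying intuition (not the paper's model): embed a candidate mention and a known phenotype term in their respective contexts with an off-the-shelf BERT model and compare the resulting contextual vectors. The model name, example sentences and the plain cosine-similarity comparison are all illustrative assumptions.

```python
# Minimal sketch: compare contextualised embeddings of two in-context terms.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()

def word_vector(sentence, word):
    """Mean of the last-layer vectors of the sub-tokens that make up `word`."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state[0]          # (seq_len, dim)
    word_ids = tok(word, add_special_tokens=False).input_ids
    ids = enc.input_ids[0].tolist()
    for i in range(len(ids) - len(word_ids) + 1):          # locate the word's span
        if ids[i:i + len(word_ids)] == word_ids:
            return hidden[i:i + len(word_ids)].mean(dim=0)
    raise ValueError(f"'{word}' not found in sentence")

a = word_vector("The patient remains short of breath on exertion.", "short of breath")
b = word_vector("She reports worsening dyspnoea over two weeks.", "dyspnoea")
sim = torch.cosine_similarity(a, b, dim=0)
print(f"contextual similarity: {sim.item():.3f}")          # high value suggests synonymy
```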
no code implementations • 24 Jul 2021 • Jingqing Zhang, Luis Bolanos, Ashwani Tanwar, Julia Ive, Vibhor Gupta, Yike Guo
We propose the automatic annotation of phenotypes from clinical notes as a method to capture essential information, which is complementary to typically used vital signs and laboratory test results, to predict outcomes in the Intensive Care Unit (ICU).
1 code implementation • EACL 2021 • Julia Ive, Andy Mingren Li, Yishu Miao, Ozan Caglayan, Pranava Madhyastha, Lucia Specia
This paper addresses the problem of simultaneous machine translation (SiMT) by exploring two main concepts: (a) adaptive policies to learn a good trade-off between high translation quality and low latency; and (b) visual information to support this process by providing additional (visual) contextual information which may be available before the textual input is produced.
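As a rough illustration of the adaptive READ/WRITE trade-off (not the paper's learned policy, which is trained with reinforcement learning and can additionally condition on visual features), the toy loop below reads another source token when a stand-in decoder confidence is low and writes a target token otherwise; the threshold and both stand-in functions are illustrative assumptions.

```python
# Toy adaptive READ/WRITE loop for simultaneous MT; all components are stand-ins.
import random

THRESHOLD = 0.6                                   # illustrative confidence threshold

def decoder_confidence(src_prefix, tgt_prefix):
    """Stand-in for the real decoder's top-token probability."""
    random.seed(7 * len(src_prefix) + len(tgt_prefix))
    return random.random()

def next_target_word(src_prefix, tgt_prefix):
    """Stand-in for emitting the decoder's best next word."""
    return f"tgt{len(tgt_prefix) + 1}"

source = "ein Mann schneidet eine Tomate".split()
read, hypothesis, delays = 0, [], []

while len(hypothesis) < len(source):              # toy stopping criterion
    confident = decoder_confidence(source[:read], hypothesis) >= THRESHOLD
    if read < len(source) and not confident:
        read += 1                                  # READ: wait for more source
    else:
        hypothesis.append(next_target_word(source[:read], hypothesis))  # WRITE
        delays.append(read)                        # source tokens seen at this step

print("hypothesis:", " ".join(hypothesis))
print("mean delay (proxy for latency):", sum(delays) / len(delays))
```

Lowering the threshold makes the policy write earlier (lower latency, potentially lower quality); raising it makes the policy wait for more source context.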
1 code implementation • EACL 2021 • Julia Ive, Zixu Wang, Marina Fomicheva, Lucia Specia
Reinforcement Learning (RL) is a powerful framework to address the discrepancy between loss functions used during training and the final evaluation metrics to be used at test time.
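A minimal REINFORCE-style sketch of this general recipe (not the paper's exact algorithm): sample a hypothesis, score it with the test-time metric, and weight the sampled tokens' log-probabilities by the reward minus a baseline. The toy linear "policy", the fixed baseline and the use of sentence-level BLEU from sacrebleu are illustrative assumptions.

```python
# Minimal REINFORCE sketch with a metric-based reward; the policy is a toy stand-in.
import torch
import torch.nn as nn
from sacrebleu.metrics import BLEU

bleu = BLEU(effective_order=True)
vocab = ["<eos>", "the", "cat", "sat", "dog", "ran"]
policy = nn.Linear(8, len(vocab))                 # toy "decoder state -> vocab" head
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
reference = "the cat sat"

state = torch.randn(1, 8)                         # stands in for the decoder state
log_probs, tokens = [], []
for _ in range(4):                                # sample a short hypothesis
    dist = torch.distributions.Categorical(logits=policy(state))
    tok_id = dist.sample()
    log_probs.append(dist.log_prob(tok_id))
    tokens.append(vocab[tok_id.item()])
    if tokens[-1] == "<eos>":
        break

hyp = " ".join(t for t in tokens if t != "<eos>")
reward = bleu.sentence_score(hyp, [reference]).score / 100.0   # test-time metric as reward
baseline = 0.3                                    # e.g. a running average of past rewards
loss = -(reward - baseline) * torch.stack(log_probs).sum()

opt.zero_grad()
loss.backward()
opt.step()
print(f"hyp='{hyp}'  reward={reward:.2f}  loss={loss.item():.3f}")
```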
1 code implementation • EMNLP 2020 • Ozan Caglayan, Julia Ive, Veneta Haralampieva, Pranava Madhyastha, Loïc Barrault, Lucia Specia
Simultaneous machine translation (SiMT) aims to translate a continuous input text stream into another language with the lowest latency and highest quality possible.
no code implementations • LREC 2020 • Julia Ive, Lucia Specia, Sara Szoc, Tom Vanallemeersch, Joachim Van den Bogaert, Eduardo Farah, Christine Maroti, Artur Ventura, Maxim Khalilov
We introduce a machine translation dataset for three pairs of languages in the legal domain with post-edited high-quality neural machine translation and independent human references.
2 code implementations • LREC 2020 • Ali Amin-Nejad, Julia Ive, Sumithra Velupillai
Natural Language Processing (NLP) can help unlock the vast troves of unstructured data in clinical text and thus improve healthcare research.
1 code implementation • IJCNLP 2019 • Julia Ive, Pranava Madhyastha, Lucia Specia
Most text-to-text generation tasks, for example text summarisation and text simplification, require copying words from the input to the output.
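A minimal sketch of one common way such copying is implemented, a pointer-generator style soft switch that mixes the vocabulary distribution with attention mass scattered onto the source token ids; this illustrates the general mechanism rather than the paper's specific model, and all tensors below are random stand-ins.

```python
# Minimal pointer-generator style copy step (one decoding step, batch size 1).
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, src_len, hid = 50, 6, 16
src_ids = torch.tensor([[4, 9, 9, 17, 3, 25]])               # source token ids
dec_state = torch.randn(1, hid)                               # current decoder state
enc_states = torch.randn(1, src_len, hid)                     # encoder outputs

gen_proj = nn.Linear(2 * hid, vocab_size)                     # -> vocabulary logits
switch = nn.Linear(2 * hid, 1)                                # -> generate/copy switch

# Attention over the source positions.
attn = F.softmax(torch.bmm(enc_states, dec_state.unsqueeze(2)).squeeze(2), dim=1)
context = torch.bmm(attn.unsqueeze(1), enc_states).squeeze(1)
features = torch.cat([dec_state, context], dim=1)

vocab_dist = F.softmax(gen_proj(features), dim=1)             # generation distribution
p_gen = torch.sigmoid(switch(features))                       # soft generate-vs-copy switch

# Final distribution: vocabulary mass plus attention mass scattered onto the
# ids of the source tokens (repeated ids accumulate their attention weight).
final = (p_gen * vocab_dist).scatter_add(1, src_ids, (1 - p_gen) * attn)
print(final.sum())                                            # ~1.0
```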
no code implementations • EMNLP (IWSLT) 2019 • Zixiu Wu, Ozan Caglayan, Julia Ive, Josiah Wang, Lucia Specia
Upon conducting extensive experiments, we found that (i) the explored visual integration schemes often harm the translation performance for the transformer and additive deliberation, but considerably improve the cascade deliberation; (ii) the transformer and cascade deliberation integrate the visual modality better than the additive deliberation, as shown by the incongruence analysis.
Automatic Speech Recognition (ASR)
no code implementations • 5 Aug 2019 • Zixiu Wu, Julia Ive, Josiah Wang, Pranava Madhyastha, Lucia Specia
The question we ask is whether visual features can support the translation process. In particular, since this dataset is extracted from videos, we focus on the translation of actions, which we believe are poorly captured in the static image-text datasets currently used for multimodal translation.
no code implementations • WS 2019 • Zixu Wang, Julia Ive, Sumithra Velupillai, Lucia Specia
A major obstacle to the development of Natural Language Processing (NLP) methods in the biomedical domain is data accessibility.
1 code implementation • ACL 2019 • Julia Ive, Pranava Madhyastha, Lucia Specia
Previous work on multimodal machine translation has shown that visual information is only needed in very specific cases, for example in the presence of ambiguous words where the textual context is not sufficient.
Ranked #3 on Multimodal Machine Translation on Multi30K (Meteor (EN-FR) metric)
no code implementations • WS 2018 • Julia Ive, Carolina Scarton, Frédéric Blain, Lucia Specia
In this paper we present the University of Sheffield submissions for the WMT18 Quality Estimation shared task.
2 code implementations • COLING 2018 • Julia Ive, Frédéric Blain, Lucia Specia
Our approach is significantly faster and yields performance improvements for a range of document-level quality estimation tasks.
no code implementations • WS 2018 • Julia Ive, George Gkotsis, Rina Dutta, Robert Stewart, Sumithra Velupillai
In this paper, we apply a hierarchical recurrent neural network (RNN) architecture with an attention mechanism to social media data related to mental health.
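A minimal sketch of a hierarchical RNN with attention in this spirit: a word-level BiGRU with attention builds sentence vectors, and a sentence-level BiGRU with attention builds a post representation for classification. The dimensions, GRU cells and two-level structure are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal hierarchical attention classifier sketch (word level, then sentence level).
import torch
import torch.nn as nn

class Attention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.query = nn.Linear(dim, 1, bias=False)

    def forward(self, h):                              # h: (batch, steps, dim)
        scores = self.query(torch.tanh(self.proj(h)))  # (batch, steps, 1)
        weights = torch.softmax(scores, dim=1)
        return (weights * h).sum(dim=1)                # (batch, dim)

class HierarchicalClassifier(nn.Module):
    def __init__(self, vocab=5000, emb=100, hid=64, classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.word_rnn = nn.GRU(emb, hid, batch_first=True, bidirectional=True)
        self.word_attn = Attention(2 * hid)
        self.sent_rnn = nn.GRU(2 * hid, hid, batch_first=True, bidirectional=True)
        self.sent_attn = Attention(2 * hid)
        self.out = nn.Linear(2 * hid, classes)

    def forward(self, docs):                           # docs: (batch, sents, words)
        b, s, w = docs.shape
        words, _ = self.word_rnn(self.emb(docs.view(b * s, w)))
        sent_vecs = self.word_attn(words).view(b, s, -1)
        sents, _ = self.sent_rnn(sent_vecs)
        return self.out(self.sent_attn(sents))

model = HierarchicalClassifier()
posts = torch.randint(0, 5000, (4, 3, 20))             # 4 posts, 3 sentences, 20 words
print(model(posts).shape)                               # torch.Size([4, 2])
```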
no code implementations • COLING 2016 • Julia Ive, François Yvon
In this paper, we study ways to extend sentence compression in a bilingual context, where the goal is to obtain parallel compressions of parallel sentences.