1 code implementation • 7 Feb 2024 • Jinghui Lu, Ziwei Yang, Yanjie Wang, Xuejing Liu, Brian Mac Namee, Can Huang
In this study, we aim to reduce generation latency for Named Entity Recognition (NER) with Large Language Models (LLMs).
no code implementations • 1 Dec 2023 • Svetoslav Nizhnichenkov, Rahul Nair, Elizabeth Daly, Brian Mac Namee
In this paper, we aim to characterise impacted cohorts when mitigation interventions are applied.
1 code implementation • 11 Sep 2023 • Misgina Tsighe Hagos, Niamh Belton, Kathleen M. Curran, Brian Mac Namee
eXplanation Based Learning (XBL) is an interactive learning approach that provides a transparent method of training deep learning models by interacting with their explanations.
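To make the idea concrete, here is a minimal sketch of one common way such explanation feedback is used during training: an explanation loss penalises input-gradient saliency inside regions a user has marked as irrelevant. This is a generic "right for the right reasons" style illustration, not the specific method of this paper; `model`, `x`, `y`, and `irrelevant_mask` are hypothetical placeholders.

```python
# Generic XBL-style loss sketch (not this paper's exact method): penalise
# input-gradient saliency inside user-annotated irrelevant regions.
import torch
import torch.nn.functional as F

def xbl_loss(model, x, y, irrelevant_mask, lam=1.0):
    x = x.clone().requires_grad_(True)
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)
    # Saliency: gradient of the summed log-probabilities w.r.t. the input.
    saliency = torch.autograd.grad(
        F.log_softmax(logits, dim=1).sum(), x, create_graph=True
    )[0]
    # Explanation loss: saliency mass falling in regions the user marked irrelevant.
    explanation_loss = (irrelevant_mask * saliency ** 2).sum()
    return task_loss + lam * explanation_loss
```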
no code implementations • 2 Aug 2023 • Misgina Tsighe Hagos, Kathleen M. Curran, Brian Mac Namee
We train a deep learning model using a Covid-19 chest X-ray dataset and we showcase how this dataset can lead to spurious correlations due to unintended confounding regions.
no code implementations • 12 Jul 2023 • Misgina Tsighe Hagos, Kathleen M. Curran, Brian Mac Namee
eXplanation Based Learning (XBL) is a form of Interactive Machine Learning (IML) that provides a model refining approach via user feedback collected on model explanations.
1 code implementation • 14 Apr 2023 • Misgina Tsighe Hagos, Niamh Belton, Ronan P. Killeen, Kathleen M. Curran, Brian Mac Namee
To this end, we select progressive MCI patients from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset and construct an ordinal dataset with a prediction target that indicates the time to progression to AD.
1 code implementation • 27 Nov 2022 • Jinghui Lu, Rui Zhao, Brian Mac Namee, Fei Tan
In this work, we present a ``versatile'' model -- the Prompting-based Unified NER system (PUnifiedNER) -- that works with data from different domains and can recognise up to 37 entity types simultaneously, and in principle many more.
no code implementations • 15 Nov 2022 • Misgina Tsighe Hagos, Kathleen M. Curran, Brian Mac Namee
Identifying spurious correlations learned by a trained model is at the core of refining a trained model and building a trustworthy model.
1 code implementation • 30 Sep 2022 • Jinghui Lu, Dongsheng Zhu, Weidong Han, Rui Zhao, Brian Mac Namee, Fei Tan
Current methods for prompt learning in zero-shot scenarios widely rely on a development set with sufficient human-annotated data to select the best-performing prompt template a posteriori.
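For context, "a posteriori" template selection simply means scoring every candidate template on labelled development data and keeping the best one, as in the sketch below; `llm_predict` is a hypothetical stand-in for classifying a text with a given prompt template.

```python
# Sketch of a-posteriori prompt template selection on a labelled development set.
def select_template(templates, dev_texts, dev_labels, llm_predict):
    def accuracy(template):
        preds = [llm_predict(template, t) for t in dev_texts]
        return sum(p == y for p, y in zip(preds, dev_labels)) / len(dev_labels)
    # Keep the template that scores highest on the development data.
    return max(templates, key=accuracy)
```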
no code implementations • 26 Sep 2022 • Misgina Tsighe Hagos, Kathleen M. Curran, Brian Mac Namee
Explanatory Interactive Learning (XIL) collects user feedback on visual model explanations to implement a Human-in-the-Loop (HITL) based interactive learning scenario.
no code implementations • 20 Apr 2022 • Paul Albert, Mohamed Saadeldin, Badri Narayanan, Brian Mac Namee, Deirdre Hennessy, Aisling H. O'Connor, Noel E. O'Connor, Kevin McGuinness
Sward species composition estimation is a tedious task.
1 code implementation • 18 Apr 2022 • Paul Albert, Mohamed Saadeldin, Badri Narayanan, Jaime Fernandez, Brian Mac Namee, Deirdre Hennessey, Noel E. O'Connor, Kevin McGuinness
In this context, deep learning algorithms offer a tempting alternative to the usual means of sward composition estimation, which involves the destructive process of cutting a sample from the herbage field and sorting all plant species in it by hand.
1 code implementation • ACL 2022 • Jinghui Lu, Linyi Yang, Brian Mac Namee, Yue Zhang
We present a novel rationale-centric framework with human-in-the-loop -- Rationales-centric Double-robustness Learning (RDL) -- to boost model out-of-distribution performance in few-shot learning scenarios.
no code implementations • 26 Oct 2021 • Paul Albert, Mohamed Saadeldin, Badri Narayanan, Brian Mac Namee, Deirdre Hennessy, Aisling O'Connor, Noel O'Connor, Kevin McGuinness
Deep learning for computer vision is a powerful tool in this context as it can accurately estimate the dry biomass of a herbage parcel using images of the grass canopy taken using a portable device.
no code implementations • 25 Sep 2021 • Payel Sadhukhan, Arjun Pakrashi, Brian Mac Namee
In this work, we propose Random Walk-steered Majority Undersampling (RWMaU), which undersamples the majority points of a class imbalanced dataset, in order to balance the classes.
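As an illustration only (the exact RWMaU scoring may differ), the sketch below undersamples majority points according to how often random walks started from minority points visit them over a k-nearest-neighbour graph, removing the most-visited majority points to balance the classes.

```python
# Illustrative random-walk-guided undersampling sketch; y == 1 is the minority class.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def random_walk_undersample(X, y, n_remove, k=5, n_steps=50, seed=0):
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, neighbours = nn.kneighbors(X)          # first neighbour is the point itself
    visits = np.zeros(len(X))
    for start in np.flatnonzero(y == 1):      # walks start from minority points
        node = start
        for _ in range(n_steps):
            node = rng.choice(neighbours[node][1:])
            visits[node] += 1
    majority = np.flatnonzero(y == 0)
    # Drop the majority points most visited by walks from the minority class.
    drop = majority[np.argsort(-visits[majority])[:n_remove]]
    keep = np.setdiff1d(np.arange(len(X)), drop)
    return X[keep], y[keep]
```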
no code implementations • 25 Sep 2021 • Payel Sadhukhan, Arjun Pakrashi, Sarbani Palit, Brian Mac Namee
The training dataset is augmented with the set of label-specific synthetic minority points, and classifiers are trained to predict the relevance of each label independently.
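The setup described here amounts to binary relevance with label-specific minority oversampling; a minimal sketch follows, with SMOTE used purely as a stand-in for the paper's own synthetic-point generator.

```python
# Binary-relevance sketch with per-label synthetic minority oversampling.
# X: (n_samples, n_features); Y: binary (n_samples, n_labels) indicator matrix.
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression

def train_binary_relevance(X, Y):
    classifiers = []
    for j in range(Y.shape[1]):
        X_j, y_j = SMOTE().fit_resample(X, Y[:, j])    # label-specific augmentation
        clf = LogisticRegression(max_iter=1000).fit(X_j, y_j)
        classifiers.append(clf)                        # one classifier per label
    return classifiers
```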
no code implementations • 16 Jul 2021 • Qin Ruan, Brian Mac Namee, Ruihai Dong
Leveraging unlabelled data through weak or distant supervision is a compelling approach to developing more effective text classification models.
1 code implementation • 15 Jul 2021 • John Mitros, Brian Mac Namee
Neural networks are often utilised in critical domain applications (e.g. self-driving cars, financial markets, and aerospace engineering), even though they exhibit overconfident predictions for ambiguous inputs.
1 code implementation • 12 Jun 2021 • Jinghui Lu, Maeve Henchion, Ivan Bacher, Brian Mac Namee
While, with the recent emergence of BERT, deep learning language models can achieve reasonably good document classification performance with few labelled instances, there is little evidence on the utility of BERT-like models for long document classification.
no code implementations • 29 Jan 2021 • Mehran H. Z. Bazargani, Arjun Pakrashi, Brian Mac Namee
The Radial Basis Function Data Descriptor (RBFDD) network is an effective solution for anomaly detection; however, it is a shallow model that does not deal effectively with raw data representations.
no code implementations • 8 Jan 2021 • Badri Narayanan, Mohamed Saadeldin, Paul Albert, Kevin McGuinness, Brian Mac Namee
In this paper, we demonstrate that applying data augmentation and transfer learning is effective in predicting multi-target biomass percentages of different plant species, even with a small training dataset.
no code implementations • 6 Jan 2021 • Cathal Ryan, Christophe Guéret, Donagh Berry, Medb Corcoran, Mark T. Keane, Brian Mac Namee
Mastitis is a billion-dollar health problem for the modern dairy industry, with implications for antibiotic resistance.
no code implementations • 5 Nov 2020 • Cathal Ryan, Christophe Guéret, Donagh Berry, Brian Mac Namee
The aim of this study was to build a modelling framework, based on machine learning techniques, that can detect mastitis infections before they would normally be found by farmers.
1 code implementation • 3 Sep 2020 • John Mitros, Arjun Pakrashi, Brian Mac Namee
Deep neural networks have been successful in diverse discriminative classification tasks, although they are often poorly calibrated, assigning high probability to misclassified predictions.
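To illustrate what "poorly calibrated" means in practice (this is not the method of this paper), temperature scaling is a standard post-hoc remedy: a single temperature T is fitted on held-out logits so that softmax(logits / T) better matches empirical accuracy.

```python
# Temperature scaling sketch: fit one temperature T on held-out (logits, labels).
import torch
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels, steps=200, lr=0.01):
    log_t = torch.zeros(1, requires_grad=True)   # optimise log(T) so T stays positive
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        opt.step()
    return log_t.exp().item()                    # calibrated probs: softmax(logits / T)
```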
no code implementations • 1 Jun 2020 • Ellen Rushe, Brian Mac Namee
A common assumption of novelty detection is that the distributions of both "normal" and "novel" data are static.
no code implementations • LREC 2020 • Jinghui Lu, Maeve Henchion, Brian Mac Namee
Jensen-Shannon divergence (JSD) is a distribution similarity measurement widely used in natural language processing.
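For reference, the Jensen-Shannon divergence of two discrete distributions P and Q is JSD(P, Q) = 0.5 KL(P || M) + 0.5 KL(Q || M), with M = 0.5 (P + Q). A direct computation is sketched below; note that scipy.spatial.distance.jensenshannon returns the square root of this quantity (the JS distance), not the divergence itself.

```python
# Jensen-Shannon divergence of two discrete distributions.
import numpy as np
from scipy.stats import entropy   # entropy(p, q) computes KL(p || q)

def jsd(p, q, base=2):
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()          # normalise to probability vectors
    m = 0.5 * (p + q)
    return 0.5 * entropy(p, m, base=base) + 0.5 * entropy(q, m, base=base)
```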
no code implementations • 3 Dec 2019 • John Mitros, Brian Mac Namee
Deep neural networks (DNNs) are versatile parametric models utilised successfully in a diverse range of tasks and domains.
no code implementations • 29 Oct 2019 • Luis Miralles, M. Atif Qureshi, Brian Mac Namee
In contrast, the objective of our research is to optimise RTB campaigns by finding configurations that maximise both the number of impressions and their average profitability.
2 code implementations • 4 Oct 2019 • Jinghui Lu, Maeve Henchion, Brian Mac Namee
Active learning has been shown to be an effective way to alleviate some of the effort required in utilising large collections of unlabelled data for machine learning tasks without needing to fully label them.
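A minimal pool-based active learning loop with uncertainty sampling illustrates the setting (the paper itself studies text representations within such loops); `oracle` is a hypothetical stand-in for a human annotator, and `X_pool` is an unlabelled feature matrix.

```python
# Pool-based active learning with least-confident uncertainty sampling (sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning(X_pool, oracle, n_init=10, n_rounds=20, batch_size=10):
    labelled = list(range(n_init))                      # seed with a few labelled points
    y = {i: oracle(X_pool[i]) for i in labelled}
    for _ in range(n_rounds):
        clf = LogisticRegression(max_iter=1000).fit(
            X_pool[labelled], [y[i] for i in labelled])
        probs = clf.predict_proba(X_pool)
        uncertainty = 1.0 - probs.max(axis=1)           # least-confident score
        uncertainty[labelled] = -np.inf                 # never re-query labelled points
        for i in np.argsort(-uncertainty)[:batch_size]: # query the most uncertain batch
            labelled.append(int(i))
            y[int(i)] = oracle(X_pool[int(i)])
    return clf
```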
no code implementations • 6 Jun 2019 • Matthieu Bellucci, Luis Miralles, M. Atif Qureshi, Brian Mac Namee
Although interpolation methods for periodic sampling have been a topic of research for a long time, there has been little study of methods capable of taking advantage of the characteristics of Lebesgue sampling to reconstruct time series more accurately.
no code implementations • 23 Apr 2019 • Arjun Pakrashi, Brian Mac Namee
Multi-label classification is an approach which allows a datapoint to be labelled with more than one class at the same time.
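A tiny example of the setting: each row of the label matrix Y marks every class the corresponding datapoint belongs to, so a point can carry several labels at once. The scikit-learn wrapper used here is just one generic way to fit such data.

```python
# Minimal multi-label example: Y is a binary indicator matrix over three labels.
import numpy as np
from sklearn.multioutput import MultiOutputClassifier
from sklearn.linear_model import LogisticRegression

X = np.array([[0.1, 1.0], [0.9, 0.2], [0.8, 0.9], [0.2, 0.1]])
Y = np.array([[1, 0, 1],              # datapoint 0 carries labels 0 and 2
              [0, 1, 0],
              [1, 1, 1],
              [0, 0, 1]])
clf = MultiOutputClassifier(LogisticRegression()).fit(X, Y)
print(clf.predict(X))                 # one 0/1 prediction per label per datapoint
```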
no code implementations • 23 Apr 2019 • Arjun Pakrashi, Brian Mac Namee
The Kalman Filter-based Heuristic Ensemble (KFHE) is an ensemble method that exploits the sensor fusion properties of the Kalman filter to combine several classifier models, and that has been shown to be very effective.
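The sensor-fusion analogy rests on the standard Kalman measurement update, sketched below in scalar form: a running estimate is blended with each new "measurement" in proportion to their uncertainties. This is the generic update only, not the specific KFHE algorithm.

```python
# Generic scalar Kalman measurement update (illustrates the sensor-fusion idea).
def kalman_update(state, state_var, measurement, measurement_var):
    gain = state_var / (state_var + measurement_var)   # Kalman gain
    new_state = state + gain * (measurement - state)   # blended estimate
    new_var = (1.0 - gain) * state_var                 # reduced uncertainty
    return new_state, new_var
```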
no code implementations • 4 Apr 2019 • John Mitros, Brian Mac Namee
The ubiquity of machine learning based predictive models in modern society naturally leads people to ask how trustworthy those models are.
1 code implementation • 6 Oct 2018 • Arjun Pakrashi, Elham Alghamdi, Brian Mac Namee, Derek Greene
Meetup.com is a global online platform which facilitates the organisation of meetups in different parts of the world.
no code implementations • 30 Jul 2018 • Arjun Pakrashi, Brian Mac Namee
This paper introduces a new perspective on multi-class ensemble classification that considers training an ensemble as a state estimation problem.
1 code implementation • 22 Jul 2018 • Quan Le, Oisín Boydell, Brian Mac Namee, Mark Scanlon
Current malware detection and classification approaches generally rely on time-consuming and knowledge-intensive processes to extract patterns (signatures) and behaviours from malware, which are then used for identification.
Ranked #1 on Malware Classification on the Microsoft Malware Classification Challenge (5-fold F1 score).
1 code implementation • 23 Feb 2017 • Mark Belford, Brian Mac Namee, Derek Greene
Topic models can provide us with an insight into the underlying latent structure of a large corpus of documents.
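For context, the sketch below fits a basic topic model (NMF on TF-IDF) with scikit-learn and prints the top terms per topic; the paper itself is concerned with the stability of such models rather than this basic usage. The toy documents are invented for illustration.

```python
# Minimal topic-model fit: NMF on a TF-IDF matrix, then top terms per topic.
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["stock markets fell sharply", "the team won the match",
        "central bank raises rates", "player scores late goal"]
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)
model = NMF(n_components=2, random_state=0).fit(X)
terms = tfidf.get_feature_names_out()
for k, weights in enumerate(model.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:3]]
    print(f"topic {k}: {top}")        # top-ranked terms for each latent topic
```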
1 code implementation • Evolving Systems 2011 • Patrick Lindstrom, Brian Mac Namee, Sarah Jane Delany
Data generated from naturally occurring processes tends to be non-stationary.