no code implementations • 7 Mar 2025 • Parameswaran Kamalaruban, Mark Anderson, Stuart Burrell, Maeve Madigan, Piotr Skalski, David Sutton
To address this issue, we introduce a set of LoRA-based fine-tuning methods that can be trained in a distributed fashion, where model developers and fairness auditors collaborate without sharing sensitive attributes or predictors.
no code implementations • 11 Oct 2024 • Poroshat Yazdanbakhsh, Mark Anderson, Zhisheng Shuai
We introduce a new quantity known as the network heterogeneity index, denoted by $\mathcal{H}$, which facilitates the investigation of disease propagation and population persistence in heterogeneous environments.
no code implementations • 22 Oct 2023 • Anthi Papadopoulou, Pierre Lison, Mark Anderson, Lilja Øvrelid, Ildikó Pilán
The text sanitization process starts with a privacy-oriented entity recognizer that seeks to determine the text spans expressing identifiable personal information.
no code implementations • 20 Feb 2023 • Mark Anderson, Tomi Kinnunen, Naomi Harte
We show that although performance is overall improved, the filterbanks exhibit strong sensitivity to their initialisation strategy.
no code implementations • 27 Oct 2022 • Alberto Muñoz-Ortiz, Mark Anderson, David Vilares, Carlos Gómez-Rodríguez
PoS tags, once taken for granted as a useful resource for syntactic parsing, have become more situational with the popularization of deep learning.
no code implementations • 3 Oct 2022 • Mark Anderson, Naomi Harte
Combining this data with species agnostic bird activity detection systems enables the monitoring of activity levels of bird populations.
1 code implementation • CL (ACL) 2022 • Mark Anderson, Carlos Gómez-Rodríguez
We contribute to the discussion on parsing performance in NLP by introducing a measurement that evaluates the differences between the distributions of edge displacement (the directed distance of edges) seen in training and test data.
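The quantity underlying this measurement is simple to compute. A minimal sketch follows, assuming trees are encoded as lists of 1-based head indices (0 = root); the function names and the choice of Jensen-Shannon divergence as the distribution comparison are illustrative assumptions, not necessarily the paper's exact metric:

```python
from collections import Counter
import math

def edge_displacements(tree):
    """Directed distance of each dependency edge: head index minus
    dependent index (negative for left-headed edges). `tree` is a list
    of 1-based head indices, one per token, with 0 marking the root."""
    return [head - (i + 1) for i, head in enumerate(tree) if head != 0]

def displacement_distribution(treebank):
    """Empirical distribution over edge displacements for a set of trees."""
    counts = Counter(d for tree in treebank for d in edge_displacements(tree))
    total = sum(counts.values())
    return {d: c / total for d, c in counts.items()}

def jensen_shannon(p, q):
    """Jensen-Shannon divergence (base 2, bounded in [0, 1]) between two
    displacement distributions; an illustrative comparison choice."""
    support = set(p) | set(q)
    m = {d: 0.5 * (p.get(d, 0.0) + q.get(d, 0.0)) for d in support}
    def kl(a, b):
        return sum(a[d] * math.log2(a[d] / b[d])
                   for d in support if a.get(d, 0.0) > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy treebanks: each tree is a head list.
train = [[2, 0, 2], [0, 1]]
test = [[2, 0, 2]]
div = jensen_shannon(displacement_distribution(train),
                     displacement_distribution(test))
```

A large divergence between the training and test displacement distributions would flag a train/test mismatch of the kind the paper studies.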
1 code implementation • *SEM (NAACL) 2022 • Mark Anderson, Jose Camacho-Collados
The increase in performance in NLP due to the prevalence of distributional models and deep learning has brought with it a reciprocal decrease in interpretability.
no code implementations • 16 Dec 2021 • Mark Anderson, John Kennedy, Naomi Harte
This paper explores low resource classifiers and features for the detection of bird activity, suitable for embedded Automatic Recording Units which are typically deployed for long term remote monitoring of bird populations.
no code implementations • 16 Dec 2021 • Mark Anderson, Naomi Harte
This report presents deep learning and data augmentation techniques used by a system entered into the Few-Shot Bioacoustic Event Detection for the DCASE2021 Challenge.
no code implementations • 27 Nov 2021 • Luca Pion-Tonachini, Kristofer Bouchard, Hector Garcia Martin, Sean Peisert, W. Bradley Holtz, Anil Aswani, Dipankar Dwivedi, Haruko Wainwright, Ghanshyam Pilania, Benjamin Nachman, Babetta L. Marrone, Nicola Falco, Prabhat, Daniel Arnold, Alejandro Wolf-Yadlin, Sarah Powers, Sharlee Climer, Quinn Jackson, Ty Carlson, Michael Sohn, Petrus Zwart, Neeraj Kumar, Amy Justice, Claire Tomlin, Daniel Jacobson, Gos Micklem, Georgios V. Gkoutos, Peter J. Bickel, Jean-Baptiste Cazier, Juliane Müller, Bobbie-Jo Webb-Robertson, Rick Stevens, Mark Anderson, Ken Kreutz-Delgado, Michael W. Mahoney, James B. Brown
We outline emerging opportunities and challenges to enhance the utility of AI for scientific discovery.
no code implementations • ACL 2021 • Mark Anderson, Anders Søgaard, Carlos Gómez-Rodríguez
Søgaard (2020) obtained results suggesting that the fraction of trees occurring in the test data isomorphic to trees in the training set accounts for a non-trivial variation in parser performance.
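The fraction in question can be sketched in a few lines. This illustration treats trees as unordered and unlabeled, which is one plausible reading of "isomorphic"; the head-list encoding and function names are assumptions for the sketch, not the cited paper's exact procedure:

```python
def canonical_shape(heads):
    """Canonical string for the unlabeled, unordered shape of a dependency
    tree given as a list of 1-based head indices (0 = root)."""
    children, root = {}, None
    for i, h in enumerate(heads, start=1):
        if h == 0:
            root = i
        else:
            children.setdefault(h, []).append(i)
    def canon(node):
        # Sort child canonical forms so sibling order does not matter.
        return "(" + "".join(sorted(canon(c) for c in children.get(node, []))) + ")"
    return canon(root)

def isomorphic_fraction(train, test):
    """Fraction of test trees whose shape also occurs in the training set."""
    seen = {canonical_shape(t) for t in train}
    return sum(canonical_shape(t) in seen for t in test) / len(test)
```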
no code implementations • ACL (IWPT) 2021 • Mark Anderson, Carlos Gómez-Rodríguez
We present the system submission from the FASTPARSE team for the EUD Shared Task at IWPT 2021.
no code implementations • ACL (IWPT) 2021 • Mark Anderson, Mathieu Dehouck, Carlos Gómez Rodríguez
We evaluate the efficacy of predicted UPOS tags as input features for dependency parsers in lower-resource settings, assessing how treebank size affects the impact that tagging accuracy has on parsing performance.
no code implementations • ACL (IWPT) 2021 • Mark Anderson, Carlos Gómez Rodríguez
We evaluate three leading dependency parser systems from different paradigms on a small yet diverse subset of languages in terms of their accuracy-efficiency Pareto front.
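The notion of an accuracy-efficiency Pareto front can be made concrete with a short sketch; the parser names and numbers below are hypothetical placeholders, not the paper's measurements:

```python
def pareto_front(systems):
    """Non-dominated subset of (name, accuracy, speed) triples: a system is
    on the front if no other system is at least as good on both axes and
    strictly better on one. Both metrics are higher-is-better here; for a
    lower-is-better metric such as latency, negate it first."""
    front = []
    for name, acc, speed in systems:
        dominated = any(
            a >= acc and s >= speed and (a > acc or s > speed)
            for n, a, s in systems if n != name
        )
        if not dominated:
            front.append(name)
    return front

# Hypothetical accuracy (LAS) vs. throughput (sentences/sec) trade-offs.
systems = [
    ("parser_a", 0.92, 100.0),  # accurate but slow
    ("parser_b", 0.90, 800.0),  # fast, slightly less accurate
    ("parser_c", 0.89, 500.0),  # dominated by parser_b on both axes
]
front = pareto_front(systems)
```

Only the non-dominated systems survive, which is exactly the comparison an accuracy-efficiency evaluation of this kind reports.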
no code implementations • 2 Jun 2021 • Yova Kementchedjhieva, Mark Anderson, Anders Søgaard
We hypothesize that the temporary challenge humans face in integrating the two contradicting signals, one from the lexical semantics of the verb, one from the sentence-level semantics, would be reflected in higher error rates for models on tasks dependent on causal links.
no code implementations • 1 Jun 2021 • Mark Anderson, Anders Søgaard, Carlos Gómez Rodríguez
Søgaard (2020) obtained results suggesting that the fraction of trees occurring in the test data isomorphic to trees in the training set accounts for a non-trivial variation in parser performance.
no code implementations • NoDaLiDa 2021 • Mark Anderson, Carlos Gómez-Rodríguez
We present an error analysis of neural UPOS taggers to evaluate why using gold standard tags has such a large positive contribution to parsing performance while using predicted UPOS tags either harms performance or offers a negligible improvement.
no code implementations • 8 Dec 2020 • Andrew Reicks, Alfred Tsubaki, Mark Anderson, Jace Wieseler, Larousse Khosravi Khorashad, Jeffrey E. Shield, George Gogos, Dennis Alexander, Christos Argyropoulos, Craig Zuhlke
It is very challenging to achieve near-perfect absorption/emission that is both broadband and omnidirectional while utilizing a scalable fabrication process.
Optics
no code implementations • CONLL 2020 • Mark Anderson, Carlos Gómez-Rodríguez
We present an analysis on the effect UPOS accuracy has on parsing performance.
no code implementations • WS 2020 • Mathieu Dehouck, Mark Anderson, Carlos Gómez-Rodríguez
We present the system submission from the FASTPARSE team for the EUD Shared Task at IWPT 2020.
no code implementations • WS 2020 • Mark Anderson, Carlos Gómez-Rodríguez
The carbon footprint of natural language processing research has been increasing in recent years due to its reliance on large and inefficient neural network implementations.
no code implementations • LREC 2020 • Mark Anderson, Carlos Gómez-Rodríguez
Empirical studies have shown that performance varies across different treebanks in such a way that one algorithm outperforms another on one treebank and the reverse is true for a different treebank.
no code implementations • WS 2019 • Mark Anderson, David Vilares, Carlos Gómez-Rodríguez
We introduce a language-agnostic evolutionary technique for automatically extracting chunks from dependency treebanks.