Search Results for author: Dustin Wright

Found 14 papers, 6 papers with code

Understanding Fine-grained Distortions in Reports of Scientific Findings

no code implementations 19 Feb 2024 Amelie Wührl, Dustin Wright, Roman Klinger, Isabelle Augenstein

Distorted science communication harms individuals and society as it can lead to unhealthy behavior change and decrease trust in scientific institutions.

Efficiency is Not Enough: A Critical Perspective of Environmentally Sustainable AI

no code implementations 5 Sep 2023 Dustin Wright, Christian Igel, Gabrielle Samuel, Raghavendra Selvan

The solution lionized by both industry and the ML community to improve the environmental sustainability of ML is to increase the efficiency with which ML systems operate in terms of both compute and energy consumption.

Multi-View Knowledge Distillation from Crowd Annotations for Out-of-Domain Generalization

no code implementations 19 Dec 2022 Dustin Wright, Isabelle Augenstein

Selecting an effective training signal for tasks in natural language processing is difficult: expert annotations are expensive, and crowd-sourced annotations may not be reliable.

Domain Generalization, Knowledge Distillation

Revisiting Softmax for Uncertainty Approximation in Text Classification

no code implementations 25 Oct 2022 Andreas Nugaard Holm, Dustin Wright, Isabelle Augenstein

A cheaper alternative is to simply use the softmax based on a single forward pass without dropout to estimate model uncertainty.

Domain Adaptation, text-classification, +1
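The softmax shortcut described in the abstract above can be pictured with a minimal sketch, assuming a standard PyTorch classifier; the entropy-based score, the function name, and the placeholder model call are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn.functional as F

def softmax_uncertainty(logits: torch.Tensor) -> torch.Tensor:
    # Uncertainty from a single forward pass: predictive entropy of the
    # softmax distribution (higher entropy = less certain prediction).
    probs = F.softmax(logits, dim=-1)
    return -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=-1)

# Usage with any text classifier (placeholder; dropout stays off in eval mode):
# model.eval()
# with torch.no_grad():
#     logits = model(input_ids)              # shape: (batch, num_classes)
# uncertainty = softmax_uncertainty(logits)  # one forward pass, no MC dropout

Unlike Monte Carlo dropout, which averages over many stochastic forward passes, this estimate needs only a single pass, which is the cost saving the abstract refers to.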

Modeling Information Change in Science Communication with Semantically Matched Paraphrases

no code implementations 24 Oct 2022 Dustin Wright, Jiaxin Pei, David Jurgens, Isabelle Augenstein

Whether the media faithfully communicate scientific information has long been a core issue to the science community.

Fact Checking, Retrieval

Generating Scientific Claims for Zero-Shot Scientific Fact Checking

1 code implementation ACL 2022 Dustin Wright, David Wadden, Kyle Lo, Bailey Kuehl, Arman Cohan, Isabelle Augenstein, Lucy Lu Wang

To address this challenge, we propose scientific claim generation, the task of generating one or more atomic and verifiable claims from scientific sentences, and demonstrate its usefulness in zero-shot fact checking for biomedical claims.

Fact Checking, Negation

Semi-Supervised Exaggeration Detection of Health Science Press Releases

1 code implementation EMNLP 2021 Dustin Wright, Isabelle Augenstein

Given this, we present a formalization of and study into the problem of exaggeration detection in science communication.

Benchmarking, Few-Shot Learning

Longitudinal Citation Prediction using Temporal Graph Neural Networks

no code implementations 10 Dec 2020 Andreas Nugaard Holm, Barbara Plank, Dustin Wright, Isabelle Augenstein

Citation count prediction is the task of predicting the number of citations a paper has gained after a period of time.

Citation Prediction

Generating Label Cohesive and Well-Formed Adversarial Claims

1 code implementation EMNLP 2020 Pepa Atanasova, Dustin Wright, Isabelle Augenstein

However, for inference tasks such as fact checking, these triggers often inadvertently invert the meaning of instances they are inserted in.

Fact Checking, Language Modelling, +2

Transformer Based Multi-Source Domain Adaptation

1 code implementation EMNLP 2020 Dustin Wright, Isabelle Augenstein

Here, we investigate the problem of unsupervised multi-source domain adaptation, where a model is trained on labelled data from multiple source domains and must make predictions on a domain for which no labelled data has been seen.

Domain Adaptation
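As a concrete picture of the setting the abstract above describes, here is a minimal, self-contained data-handling sketch of unsupervised multi-source domain adaptation in PyTorch: the model trains only on the union of labelled source domains and is evaluated on a target domain whose labels it never sees. The toy datasets and the number of domains are assumptions for illustration, not the paper's data or method.

import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

def toy_domain(n=100, dim=16, n_classes=2, labelled=True):
    # Random stand-in for one domain's data (purely illustrative).
    x = torch.randn(n, dim)
    if labelled:
        y = torch.randint(0, n_classes, (n,))
        return TensorDataset(x, y)
    return TensorDataset(x)

# Labelled examples come only from the source domains...
source_domains = [toy_domain() for _ in range(3)]
train_loader = DataLoader(ConcatDataset(source_domains), batch_size=32, shuffle=True)

# ...while predictions are made on an unseen target domain whose labels
# are never used during training.
target_loader = DataLoader(toy_domain(labelled=False), batch_size=32)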

NormCo: Deep Disease Normalization for Biomedical Knowledge Base Construction

no code implementations AKBC 2019 Dustin Wright, Yannis Katsis, Raghav Mehta, Chun-Nan Hsu

Biomedical knowledge bases are crucial in modern data-driven biomedical sciences, but automated biomedical knowledge base construction remains challenging.

Word Embeddings

Rethinking Recurrent Latent Variable Model for Music Composition

no code implementations 7 Oct 2018 Eunjeong Stella Koh, Shlomo Dubnov, Dustin Wright

Our results suggest that the proposed model has a better statistical resemblance to the musical structure of the training data, which improves the creation of new sequences of music in the style of the originals.
