HumAID (Human-Annotated Disaster Incidents Data)

Introduced by Alam et al. in HumAID: Human-Annotated Disaster Incidents Data from Twitter with Deep Learning Benchmarks

Social networks are widely used for information consumption and dissemination, especially during time-critical events such as natural disasters. Despite its large volume, social media content is often too noisy for direct use in any application. It is therefore important to filter, categorize, and concisely summarize the available content to facilitate effective consumption and decision-making. To address these issues, automatic classification systems have been developed using supervised modeling approaches, thanks to earlier efforts on creating labeled datasets. However, existing datasets are limited in several respects (e.g., size, presence of duplicates) and are less suitable for supporting more advanced and data-hungry deep learning models.

HumAID is a large-scale dataset for crisis informatics research with ~77K human-labeled tweets, sampled from a pool of ~24 million tweets across 19 disaster events that occurred between 2016 and 2019. The annotations consist of the humanitarian categories listed below. The dataset contains only English tweets and is, to date, the largest human-labeled dataset for crisis informatics.

Humanitarian categories:

  • Caution and advice
  • Displaced people and evacuations
  • Don't know can't judge
  • Infrastructure and utility damage
  • Injured or dead people
  • Missing or found people
  • Not humanitarian
  • Other relevant information
  • Requests or urgent needs
  • Rescue volunteering or donation effort
  • Sympathy and support
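
The sketch below shows one way the labeled tweets could be loaded and inspected with pandas. It assumes a tab-separated file per event and split with tweet_id, tweet_text, and class_label columns; the file name and column names are assumptions about a common distribution layout, not a confirmed schema, and should be adjusted to the actual files.

```python
import pandas as pd

def load_humaid_split(path: str) -> pd.DataFrame:
    # Read one event/split file; the TSV layout and column names are assumed.
    df = pd.read_csv(path, sep="\t")
    # Optionally drop tweets whose label carries no humanitarian signal.
    return df[df["class_label"] != "dont_know_cant_judge"]

# Hypothetical file name, for illustration only.
train = load_humaid_split("humaid/california_wildfires_2018_train.tsv")
print(train["class_label"].value_counts())
```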

License


  • Unknown

Modalities


  • Texts

Languages


  • English