no code implementations • SMM4H (COLING) 2020 • Parsa Bagherzadeh, Sabine Bergler
For the detection of personal tweets, where a parent speaks of a child’s birth defect, CLaC combines ELMo word embeddings and gazetteer lists from external resources with a GCNN (for encoding dependencies) in a multi-layer, transformer-inspired architecture.
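The core of such an encoding is a graph convolution over the sentence's dependency structure. Below is a minimal PyTorch sketch of one such layer combined with contextual embeddings and a binary gazetteer flag; the dimensions, the simple neighbour-averaging rule, and all names are illustrative assumptions, not the CLaC implementation.

```python
# Sketch: one graph-convolution layer over a dependency adjacency matrix,
# fed with contextual embeddings concatenated to a gazetteer feature.
import torch
import torch.nn as nn

class DependencyGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x:   (batch, seq_len, in_dim) token representations
        # adj: (batch, seq_len, seq_len) dependency adjacency with self-loops
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        h = torch.bmm(adj, x) / deg          # average over dependency neighbours
        return torch.relu(self.linear(h))

# Hypothetical usage: 1024-dim ELMo-style embeddings plus a 1-bit gazetteer flag
embeds = torch.randn(2, 20, 1024)
gazetteer = torch.randint(0, 2, (2, 20, 1)).float()
adj = torch.eye(20).repeat(2, 1, 1)          # real dependency arcs would go here
layer = DependencyGCNLayer(1025, 256)
out = layer(torch.cat([embeds, gazetteer], dim=-1), adj)
```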
no code implementations • NAACL (DeeLIO) 2021 • Parsa Bagherzadeh, Sabine Bergler
This paper presents a way to inject and leverage existing knowledge from external sources in a Deep Learning environment, extending the recently proposed Recurrent Independent Mechanisms (RIMs) architecture, which comprises a set of interacting yet independent modules.
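In RIMs, the independent modules compete to read from the input through attention. The following is a minimal PyTorch sketch of that input-attention step, assuming a simplified top-k rule over a "null" slot; shapes, the selection rule, and all names are assumptions for illustration only.

```python
# Sketch: RIM-style input attention. Each module forms a query, attends over
# the input plus a learned null slot, and the k modules that attend least to
# the null slot are treated as active.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RIMInputAttention(nn.Module):
    def __init__(self, n_modules, hidden_dim, input_dim, k):
        super().__init__()
        self.k = k
        self.query = nn.Linear(hidden_dim, input_dim)
        self.null = nn.Parameter(torch.zeros(1, 1, input_dim))

    def forward(self, hidden, inputs):
        # hidden: (batch, n_modules, hidden_dim); inputs: (batch, seq, input_dim)
        batch = inputs.size(0)
        keys = torch.cat([self.null.expand(batch, -1, -1), inputs], dim=1)
        q = self.query(hidden)                           # (batch, n_modules, input_dim)
        scores = torch.bmm(q, keys.transpose(1, 2))      # (batch, n_modules, 1+seq)
        attn = F.softmax(scores, dim=-1)
        # Modules with the least mass on the null slot win the competition.
        active = attn[:, :, 0].topk(self.k, dim=-1, largest=False).indices
        read = torch.bmm(attn[:, :, 1:], inputs)         # (batch, n_modules, input_dim)
        return read, active

attn_mod = RIMInputAttention(n_modules=6, hidden_dim=64, input_dim=300, k=4)
read, active = attn_mod(torch.randn(2, 6, 64), torch.randn(2, 12, 300))
```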
no code implementations • EMNLP (BlackboxNLP) 2021 • Parsa Bagherzadeh, Sabine Bergler
In this paper we investigate the recently proposed multi-input RIM for inspectability.
no code implementations • Findings (EMNLP) 2021 • Parsa Bagherzadeh, Sabine Bergler
This paper presents a neural framework of untied independent modules, used here for integrating off-the-shelf knowledge sources such as language models, lexica, POS information, and dependency relations.
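One plausible way to let independent modules consume such heterogeneous sources is to project each source into a shared slot space so all of them can be attended over uniformly. The sketch below assumes three sources (contextual embeddings, POS tags, a lexicon flag) with illustrative dimensions; it is not the paper's code.

```python
# Sketch: project heterogeneous knowledge sources into a common slot space.
import torch
import torch.nn as nn

class KnowledgeSlots(nn.Module):
    def __init__(self, slot_dim, lm_dim=768, n_pos_tags=17, lexicon_dim=1):
        super().__init__()
        self.lm_proj = nn.Linear(lm_dim, slot_dim)
        self.pos_embed = nn.Embedding(n_pos_tags, slot_dim)
        self.lex_proj = nn.Linear(lexicon_dim, slot_dim)

    def forward(self, lm_states, pos_ids, lexicon_flags):
        # lm_states: (batch, seq, lm_dim); pos_ids: (batch, seq);
        # lexicon_flags: (batch, seq, 1). Each source contributes its own slots.
        slots = [self.lm_proj(lm_states),
                 self.pos_embed(pos_ids),
                 self.lex_proj(lexicon_flags)]
        return torch.cat(slots, dim=1)       # (batch, 3*seq, slot_dim)

slots = KnowledgeSlots(slot_dim=256)(
    torch.randn(2, 10, 768),
    torch.randint(0, 17, (2, 10)),
    torch.randint(0, 2, (2, 10, 1)).float(),
)
```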
no code implementations • EACL (Louhi) 2021 • Parsa Bagherzadeh, Sabine Bergler
This paper investigates incorporating quality knowledge sources developed by experts for the medical domain as well as syntactic information for classification of tweets into four different health-oriented categories.
1 code implementation • SMM4H (COLING) 2022 • Harsh Verma, Parsa Bagherzadeh, Sabine Bergler
This paper summarizes the CLaC submission for SMM4H 2022 Task 10 which concerns the recognition of diseases mentioned in Spanish tweets.
no code implementations • 20 Apr 2022 • Qingyu Chen, Alexis Allot, Robert Leaman, Rezarta Islamaj Doğan, Jingcheng Du, Li Fang, Kai Wang, Shuo Xu, Yuefu Zhang, Parsa Bagherzadeh, Sabine Bergler, Aakash Bhatnagar, Nidhir Bhavsar, Yung-Chun Chang, Sheng-Jie Lin, Wentai Tang, Hongtong Zhang, Ilija Tavchioski, Senja Pollak, Shubo Tian, Jinfeng Zhang, Yulia Otmakhova, Antonio Jimeno Yepes, Hang Dong, Honghan Wu, Richard Dufour, Yanis Labrak, Niladri Chatterjee, Kushagri Tandon, Fréjus Laleye, Loïc Rakotoson, Emmanuele Chersoni, Jinghang Gu, Annemarie Friedrich, Subhash Chandra Pujari, Mariia Chizhikova, Naveen Sivadasan, Zhiyong Lu
To close the gap, we organized the BioCreative LitCovid track to call for a community effort to tackle automated topic annotation for COVID-19 literature.
no code implementations • SEMEVAL 2021 • Benjamin Therien, Parsa Bagherzadeh, Sabine Bergler
We analyze ablation experiments and demonstrate how the system components, namely the tokenizer, unit identifier, modifier classifier, and language model, affect the overall score.
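The kind of ablation described can be pictured as a small annotation pipeline whose components can be switched off individually. The sketch below uses toy rule-based stand-ins; every component name and rule is hypothetical and serves only to show how disabling a stage changes the output.

```python
# Sketch: a toy quantity-annotation pipeline with per-component ablation flags.
import re

def identify_unit(tokens):
    # toy lookup standing in for the unit identifier
    units = {"kg", "km", "mg", "ml", "%"}
    return next((t for t in tokens if t.lower() in units), None)

def classify_modifier(tokens):
    # toy rule standing in for the modifier classifier
    if any(t.lower() in {"approximately", "about", "~"} for t in tokens):
        return "Approximate"
    return "Exact"

def annotate(span, use_unit=True, use_modifier=True):
    tokens = re.findall(r"\w+|%|~", span)    # stand-in tokenizer
    return {
        "tokens": tokens,
        "unit": identify_unit(tokens) if use_unit else None,
        "modifier": classify_modifier(tokens) if use_modifier else None,
    }

print(annotate("approximately 5 kg"))
print(annotate("approximately 5 kg", use_modifier=False))  # ablated run
```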
no code implementations • SEMEVAL 2020 • MinGyou Sung, Parsa Bagherzadeh, Sabine Bergler
We consider detection of the span of antecedents and consequents in argumentative prose a structural, grammatical task.
no code implementations • WS 2019 • Parsa Bagherzadeh, Nadia Sheikh, Sabine Bergler
CLaC Labs participated in Tasks 1 and 4 of SMM4H 2019.
no code implementations • WS 2018 • Parsa Bagherzadeh, Nadia Sheikh, Sabine Bergler
CLaC Labs participated in Tasks 1, 2, and 4 using the same base architecture for all tasks with various parameter variations.
no code implementations • 2 Jul 2016 • Parsa Bagherzadeh, Hadi Sadoghi Yazdi
The presence of outliers is prevalent in machine learning applications and may produce misleading results.
no code implementations • 14 Jun 2016 • Amir Ahooye Atashin, Parsa Bagherzadeh, Kamaledin Ghiasi-Shirazi
In the proposed method, denoising autoencoders are employed to learn robust features.
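A denoising autoencoder reconstructs the clean input from a corrupted copy, which pushes the learned features to be robust to noise. The PyTorch sketch below illustrates that training step with Gaussian corruption; layer sizes, noise level, and the training loop are illustrative assumptions, not the paper's setup.

```python
# Sketch: one training step of a denoising autoencoder.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, in_dim=784, hidden_dim=128, noise_std=0.3):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, in_dim)

    def forward(self, x):
        corrupted = x + self.noise_std * torch.randn_like(x)  # corrupt the input
        features = self.encoder(corrupted)
        return self.decoder(features), features

model = DenoisingAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)                      # stand-in batch
recon, _ = model(x)
loss = nn.functional.mse_loss(recon, x)      # reconstruct the *clean* input
loss.backward()
opt.step()
```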