no code implementations • 30 Jul 2024 • Hossein Rajaby Faghihi, Aliakbar Nafar, Andrzej Uszok, Hamid Karimian, Parisa Kordjamshidi
This approach empowers domain experts, even those not well-versed in ML/AI, to formally declare their knowledge so that it can be incorporated into customized neural models within the DomiKnowS framework.
no code implementations • 6 Feb 2024 • Hossein Rajaby Faghihi, Parisa Kordjamshidi
This paper introduces a novel decision-making framework that promotes consistency among decisions made by diverse models while utilizing external knowledge.
1 code implementation • 16 Feb 2023 • Hossein Rajaby Faghihi, Aliakbar Nafar, Chen Zheng, Roshanak Mirzaee, Yue Zhang, Andrzej Uszok, Alexander Wan, Tanawan Premsri, Dan Roth, Parisa Kordjamshidi
Recent research has shown that integrating domain knowledge into deep learning architectures is effective: it helps reduce the amount of required data, improves the accuracy of the models' decisions, and makes the models more interpretable.
1 code implementation • 14 Feb 2023 • Hossein Rajaby Faghihi, Parisa Kordjamshidi, Choh Man Teng, James Allen
In this paper, we investigate whether symbolic semantic representations, extracted from deep semantic parsers, can help reasoning over the states of involved entities in a procedural text.
1 code implementation • 25 Oct 2022 • Hossein Rajaby Faghihi, Bashar Alhafni, Ke Zhang, Shihao Ran, Joel Tetreault, Alejandro Jaimes
This paper presents CrisisLTLSum, the largest dataset of local crisis event timelines available to date.
1 code implementation • EMNLP (ACL) 2021 • Hossein Rajaby Faghihi, Quan Guo, Andrzej Uszok, Aliakbar Nafar, Elaheh Raisi, Parisa Kordjamshidi
We demonstrate a library for the integration of domain knowledge in deep learning architectures.
2 code implementations • NAACL 2021 • Roshanak Mirzaee, Hossein Rajaby Faghihi, Qiang Ning, Parisa Kordjamshidi
This paper proposes a question-answering (QA) benchmark for spatial reasoning on natural language text which contains more realistic spatial phenomena not covered by prior work and is challenging for state-of-the-art language models (LMs).
1 code implementation • NAACL 2021 • Hossein Rajaby Faghihi, Parisa Kordjamshidi
This enables us to use pre-trained transformer-based language models on other QA benchmarks by adapting them to procedural text understanding.
Ranked #1 on Procedural Text Understanding on ProPara
1 code implementation • 12 Apr 2021 • Roshanak Mirzaee, Hossein Rajaby Faghihi, Qiang Ning, Parisa Kordjamshidi
This paper proposes a question-answering (QA) benchmark for spatial reasoning on natural language text which contains more realistic spatial phenomena not covered by prior work and is challenging for state-of-the-art language models (LMs).
1 code implementation • WS 2020 • Hossein Rajaby Faghihi, Roshanak Mirzaee, Sudarshan Paliwal, Parisa Kordjamshidi
We propose a novel alignment mechanism to deal with procedural reasoning on a newly released multimodal QA dataset named RecipeQA.
Ranked #1 on Question Answering on RecipeQA
no code implementations • 24 Jun 2019 • Hossein Rajaby Faghihi, Mohammad Amin Fazli, Jafar Habibi
Knowledge about the environment is often obtained through sensors, and the response to a particular circumstance is delivered through actuators.