1 code implementation • 6 Sep 2024 • Aliakbar Nafar, Kristen Brent Venable, Parisa Kordjamshidi
In this work, we propose a framework for evaluating in-context learning mechanisms, which we claim are a combination of retrieving internal knowledge and learning from in-context examples, by focusing on regression tasks.
no code implementations • 30 Jul 2024 • Hossein Rajaby Faghihi, Aliakbar Nafar, Andrzej Uszok, Hamid Karimian, Parisa Kordjamshidi
This approach empowers domain experts, even those not well-versed in ML/AI, to formally declare their knowledge to be incorporated in customized neural models in the DomiKnowS framework.
1 code implementation • 14 Feb 2024 • Aliakbar Nafar, Kristen Brent Venable, Parisa Kordjamshidi
This paper considers the challenges Large Language Models (LLMs) face when reasoning over text that includes information involving uncertainty explicitly quantified via probability values.
no code implementations • 22 May 2023 • Aliakbar Nafar, Kristen Brent Venable, Parisa Kordjamshidi
In this paper, we evaluate the capability of transformer-based language models in making inferences over uncertain text that includes uncertain rules of reasoning.
1 code implementation • 16 Feb 2023 • Hossein Rajaby Faghihi, Aliakbar Nafar, Chen Zheng, Roshanak Mirzaee, Yue Zhang, Andrzej Uszok, Alexander Wan, Tanawan Premsri, Dan Roth, Parisa Kordjamshidi
Recent research has shown that integrating domain knowledge into deep learning architectures is effective -- it helps reduce the amount of required data, improves the accuracy of the models' decisions, and enhances the interpretability of the models.
1 code implementation • EMNLP (ACL) 2021 • Hossein Rajaby Faghihi, Quan Guo, Andrzej Uszok, Aliakbar Nafar, Elaheh Raisi, Parisa Kordjamshidi
We demonstrate a library for the integration of domain knowledge in deep learning architectures.