no code implementations • Findings (ACL) 2022 • Chen Zheng, Parisa Kordjamshidi
We study the challenge of learning causal reasoning over procedural text to answer “What if...” questions when external commonsense knowledge is required.
1 code implementation • ACL 2022 • Yue Zhang, Parisa Kordjamshidi
In this paper, we investigate the problem of vision and language navigation.
1 code implementation • 9 Nov 2023 • Guangyue Xu, Joyce Chai, Parisa Kordjamshidi
In this work, we propose GIP-COL (Graph-Injected Soft Prompting for COmpositional Learning) to better explore the compositional zero-shot learning (CZSL) ability of VLMs within the prompt-based learning framework.
no code implementations • 7 Nov 2023 • Danial Kamali, Parisa Kordjamshidi
Compositional generalization, the ability of intelligent models to extrapolate understanding of components to novel compositions, is a fundamental yet challenging facet in AI research, especially within multimodal environments.
no code implementations • 2 Nov 2023 • Guangyue Xu, Parisa Kordjamshidi, Joyce Chai
Inspired by this observation, in this paper, we propose MetaReVision, a retrieval-enhanced meta-learning model to address the visually grounded compositional concept learning problem.
no code implementations • 25 Oct 2023 • Roshanak Mirzaee, Parisa Kordjamshidi
Spatial reasoning over text is challenging as the models not only need to extract the direct spatial information from the text but also reason over it and infer implicit spatial relations.
no code implementations • 22 May 2023 • Aliakbar Nafar, Kristen Brent Venable, Parisa Kordjamshidi
In this work, we analyze the use of probabilistic logical rules in transformer-based language models.
1 code implementation • 18 Feb 2023 • Yue Zhang, Parisa Kordjamshidi
The mentioned landmarks are not recognizable by the navigation agent due to the different vision abilities of the instructor and the modeled agent.
1 code implementation • 16 Feb 2023 • Hossein Rajaby Faghihi, Aliakbar Nafar, Chen Zheng, Roshanak Mirzaee, Yue Zhang, Andrzej Uszok, Alexander Wan, Tanawan Premsri, Dan Roth, Parisa Kordjamshidi
Recent research has shown that integrating domain knowledge into deep learning architectures is effective -- it helps reduce the amount of required data, improves the accuracy of the models' decisions, and improves the interpretability of models.
1 code implementation • 14 Feb 2023 • Hossein Rajaby Faghihi, Parisa Kordjamshidi, Choh Man Teng, James Allen
In this paper, we investigate whether symbolic semantic representations, extracted from deep semantic parsers, can help reasoning over the states of involved entities in a procedural text.
no code implementations • 11 Nov 2022 • Danial Kamali, Joseph Romain, Huiyi Liu, Wei Peng, Jingbo Meng, Parisa Kordjamshidi
The spread of misinformation is a prominent problem in today's society, and many researchers in academia and industry are trying to combat it.
no code implementations • 9 Nov 2022 • Guangyue Xu, Parisa Kordjamshidi, Joyce Chai
This work explores the zero-shot compositional learning ability of large pre-trained vision-language models (VLMs) within the prompt-based learning framework and proposes a model (PromptCompVL) to solve the compositional zero-shot learning (CZSL) problem.
1 code implementation • 30 Oct 2022 • Roshanak Mirzaee, Parisa Kordjamshidi
Recent research shows that synthetic data as a source of supervision helps pretrained language models (PLMs) transfer to new target tasks/domains.
1 code implementation • COLING 2022 • Yue Zhang, Parisa Kordjamshidi
Understanding spatial and visual information is essential for a navigation agent who follows natural language instructions.
1 code implementation • COLING 2022 • Chen Zheng, Parisa Kordjamshidi
DRGN operates on a given KG subgraph based on the question and answer entities and uses the relevance scores between the nodes to establish new edges dynamically for learning node representations in the graph network.
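The dynamic edge-creation step described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names, the cosine-similarity choice of relevance score, and the hard threshold are all assumptions made for the sketch.

```python
import numpy as np

def dynamic_edges(node_emb, base_adj, threshold=0.5):
    """Hypothetical sketch of DRGN-style dynamic edge creation: score every
    node pair by cosine relevance and add edges whose score exceeds a
    threshold, on top of the given KG subgraph's adjacency."""
    norms = np.linalg.norm(node_emb, axis=1, keepdims=True)
    unit = node_emb / np.clip(norms, 1e-8, None)
    relevance = unit @ unit.T                      # pairwise relevance scores
    new_adj = base_adj | (relevance > threshold)   # dynamically added edges
    np.fill_diagonal(new_adj, False)               # no self-loops
    return new_adj

def propagate(node_emb, adj):
    """One round of mean-over-neighbors message passing on the augmented graph."""
    deg = np.clip(adj.sum(axis=1, keepdims=True), 1, None)
    return (adj @ node_emb) / deg
```

In the actual model the relevance scores are learned rather than fixed cosine similarities; the sketch only shows how score-based edges can augment a static subgraph before node representations are updated.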
1 code implementation • 21 Mar 2022 • Chen Zheng, Parisa Kordjamshidi
We study the challenge of learning causal reasoning over procedural text to answer "What if..." questions when external commonsense knowledge is required.
1 code implementation • EMNLP (ACL) 2021 • Hossein Rajaby Faghihi, Quan Guo, Andrzej Uszok, Aliakbar Nafar, Elaheh Raisi, Parisa Kordjamshidi
We demonstrate a library for the integration of domain knowledge in deep learning architectures.
no code implementations • ACL (MetaNLP) 2021 • Guangyue Xu, Parisa Kordjamshidi, Joyce Y. Chai
In this paper, we study the problem of recognizing compositional attribute-object concepts within the zero-shot learning (ZSL) framework.
2 code implementations • NAACL 2021 • Roshanak Mirzaee, Hossein Rajaby Faghihi, Qiang Ning, Parisa Kordjamshidi
This paper proposes a question-answering (QA) benchmark for spatial reasoning on natural language text which contains more realistic spatial phenomena not covered by prior work and is challenging for state-of-the-art language models (LMs).
1 code implementation • 27 May 2021 • Chen Zheng, Parisa Kordjamshidi
We propose a novel relational gating network that learns to filter the key entities and relationships and learns contextual and cross representations of both procedure and question for finding the answer.
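The filtering idea behind a gating network like this can be sketched in a few lines. This is an illustrative sketch only, not the paper's architecture: the bilinear scoring form, the parameter names, and the sigmoid gate are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relational_gate(entity_reps, question_rep, W_g, b_g):
    """Hypothetical gating step: score each entity representation against the
    question and softly filter it, so key entities pass through to the
    downstream answer module while irrelevant ones are suppressed."""
    # Per-entity gate score from a bilinear entity/question interaction.
    scores = sigmoid(entity_reps @ W_g @ question_rep + b_g)  # shape (n_entities,)
    return entity_reps * scores[:, None]                      # gated entity reps
```

In a trained model `W_g` and `b_g` are learned so that the gate keeps entities relevant to the question; the same pattern can gate relationship representations as well.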
no code implementations • ACL (splurobonlp) 2021 • Yue Zhang, Quan Guo, Parisa Kordjamshidi
Additionally, the experimental results demonstrate that explicit modeling of spatial semantic elements in the instructions can improve the grounding and spatial reasoning of the model.
1 code implementation • NAACL 2021 • Hossein Rajaby Faghihi, Parisa Kordjamshidi
This enables us to use pre-trained transformer-based language models on other QA benchmarks by adapting them to procedural text understanding.
Ranked #1 on Procedural Text Understanding on ProPara
1 code implementation • WS 2020 • Hossein Rajaby Faghihi, Roshanak Mirzaee, Sudarshan Paliwal, Parisa Kordjamshidi
We propose a novel alignment mechanism to deal with procedural reasoning on a newly released multimodal QA dataset, named RecipeQA.
Ranked #1 on Question Answering on RecipeQA
no code implementations • EMNLP 2020 • Parisa Kordjamshidi, James Pustejovsky, Marie-Francine Moens
Understanding spatial semantics expressed in natural language can become highly complex in real-world applications.
1 code implementation • EMNLP 2020 • Chen Zheng, Parisa Kordjamshidi
This work deals with the challenge of learning and reasoning over multi-hop question answering (QA).
no code implementations • LREC 2020 • Soham Dan, Parisa Kordjamshidi, Julia Bonn, Archna Bhatia, Jon Cai, Martha Palmer, Dan Roth
To exhibit the applicability of our representation scheme, we annotate text taken from diverse datasets and show how we extend the capabilities of existing spatial representation languages with the fine-grained decomposition of semantics and blend it seamlessly with AMRs of sentences and discourse representations as a whole.
1 code implementation • ACL 2020 • Chen Zheng, Quan Guo, Parisa Kordjamshidi
This work deals with the challenge of learning and reasoning over language and vision data for the related downstream tasks such as visual question answering (VQA) and natural language for visual reasoning (NLVR).
no code implementations • 18 Jun 2019 • Parisa Kordjamshidi, Dan Roth, Kristian Kersting
Data-driven approaches are becoming more common as problem-solving techniques in many areas of research and industry.
no code implementations • WS 2018 • Umar Manzoor, Parisa Kordjamshidi
Spatial relation extraction from generic text is a challenging problem due to the ambiguity of prepositions' spatial meaning as well as the nesting structure of spatial descriptions.
no code implementations • NAACL 2018 • Taher Rahgooy, Umar Manzoor, Parisa Kordjamshidi
Extraction of spatial relations from sentences with complex/nested relationships is very challenging as it often requires resolving inherent semantic ambiguities.
no code implementations • WS 2017 • Parisa Kordjamshidi, Taher Rahgooy, Umar Manzoor
The DeLBP framework facilitates combining modalities and representing various data in a unified graph.
no code implementations • 25 Jul 2017 • Parisa Kordjamshidi, Sameer Singh, Daniel Khashabi, Christos Christodoulopoulos, Mark Sammons, Saurabh Sinha, Dan Roth
In particular, we provide an initial prototype for a relational and graph traversal query language where queries are directly used as relational features for structured machine learning models.
1 code implementation • COLING 2016 • Parisa Kordjamshidi, Daniel Khashabi, Christos Christodoulopoulos, Bhargav Mangipudi, Sameer Singh, Dan Roth
We present a novel way for designing complex joint inference and learning models using Saul (Kordjamshidi et al., 2015), a recently-introduced declarative learning-based programming language (DeLBP).
no code implementations • LREC 2016 • Mark Sammons, Christos Christodoulopoulos, Parisa Kordjamshidi, Daniel Khashabi, Vivek Srikumar, Dan Roth
We present EDISON, a Java library of feature generation functions used in a suite of state-of-the-art NLP tools, based on a set of generic NLP data structures.
no code implementations • 28 Mar 2016 • Oswaldo Ludwig, Xiao Liu, Parisa Kordjamshidi, Marie-Francine Moens
This paper introduces the visually informed embedding of word (VIEW), a continuous vector representation for a word extracted from a deep neural model trained using the Microsoft COCO data set to forecast the spatial arrangements between visual objects, given a textual description.
no code implementations • LREC 2014 • Goran Glavaš, Jan Šnajder, Marie-Francine Moens, Parisa Kordjamshidi
In this work, we present HiEve, a corpus for recognizing relations of spatiotemporal containment between events.