Search Results for author: Parisa Kordjamshidi

Found 56 papers, 31 papers with code

Relevant CommonSense Subgraphs for “What if...” Procedural Reasoning

no code implementations · Findings (ACL) 2022 · Chen Zheng, Parisa Kordjamshidi

We study the challenge of learning causal reasoning over procedural text to answer “What if...” questions when external commonsense knowledge is required.

Exploring Spatial Language Grounding Through Referring Expressions

no code implementations · 4 Feb 2025 · Akshar Tumu, Parisa Kordjamshidi

In this work, we propose using the Referring Expression Comprehension task instead as a platform for the evaluation of spatial reasoning by VLMs.

Image Captioning · Negation · +7

Do Vision-Language Models Represent Space and How? Evaluating Spatial Frame of Reference Under Ambiguities

1 code implementation · 22 Oct 2024 · Zheyuan Zhang, Fengyuan Hu, Jayjun Lee, Freda Shi, Parisa Kordjamshidi, Joyce Chai, Ziqiao Ma

Spatial expressions in situated communication can be ambiguous, as their meanings vary depending on the frames of reference (FoR) adopted by speakers and listeners.

Spatial Reasoning

SPARTUN3D: Situated Spatial Understanding of 3D World in Large Language Models

no code implementations · 4 Oct 2024 · Yue Zhang, Zhiyang Xu, Ying Shen, Parisa Kordjamshidi, Lifu Huang

2) the architectures of existing 3D-based LLMs lack explicit alignment between the spatial representations of 3D scenes and natural language, limiting their performance in tasks requiring precise spatial reasoning.

Scene Understanding · Spatial Reasoning

Learning vs Retrieval: The Role of In-Context Examples in Regression with LLMs

1 code implementation · 6 Sep 2024 · Aliakbar Nafar, Kristen Brent Venable, Parisa Kordjamshidi

In this work, we propose a framework for evaluating in-context learning mechanisms, which we claim are a combination of retrieving internal knowledge and learning from in-context examples by focusing on regression tasks.

In-Context Learning · Meta-Learning · +2

Narrowing the Gap between Vision and Action in Navigation

1 code implementation · 19 Aug 2024 · Yue Zhang, Parisa Kordjamshidi

First, VLN-CE agents that discretize the visual environment are primarily trained with high-level view selection, which causes them to ignore crucial spatial reasoning within the low-level action movements.

Decoder · Spatial Reasoning · +1

Prompt2DeModel: Declarative Neuro-Symbolic Modeling with Natural Language

no code implementations · 30 Jul 2024 · Hossein Rajaby Faghihi, Aliakbar Nafar, Andrzej Uszok, Hamid Karimian, Parisa Kordjamshidi

This approach empowers domain experts, even those not well-versed in ML/AI, to formally declare their knowledge to be incorporated in customized neural models in the DomiKnowS framework.

Retrieval

Vision-and-Language Navigation Today and Tomorrow: A Survey in the Era of Foundation Models

1 code implementation · 9 Jul 2024 · Yue Zhang, Ziqiao Ma, Jialu Li, Yanyuan Qiao, Zun Wang, Joyce Chai, Qi Wu, Mohit Bansal, Parisa Kordjamshidi

Vision-and-Language Navigation (VLN) has gained increasing attention over recent years, and many approaches have emerged to advance its development.

Vision and Language Navigation

SHINE: Saliency-aware HIerarchical NEgative Ranking for Compositional Temporal Grounding

1 code implementation · 6 Jul 2024 · Zixu Cheng, Yujiang Pu, Shaogang Gong, Parisa Kordjamshidi, Yu Kong

Temporal grounding, also known as video moment retrieval, aims at locating video segments corresponding to a given query sentence.

Language Modeling · Language Modelling · +4

Disentangling Knowledge-based and Visual Reasoning by Question Decomposition in KB-VQA

no code implementations · 27 Jun 2024 · Elham J. Barezi, Parisa Kordjamshidi

We study the Knowledge-Based visual question-answering problem, for which given a question, the models need to ground it into the visual modality to find the answer.

General Knowledge · Question Answering · +2

Neuro-symbolic Training for Reasoning over Spatial Language

1 code implementation · 19 Jun 2024 · Tanawan Premsri, Parisa Kordjamshidi

Recent research shows that more data and larger models can provide more accurate solutions to natural language problems requiring reasoning.

Spatial Reasoning · Transfer Learning

A Survey on Compositional Learning of AI Models: Theoretical and Experimental Practices

no code implementations · 13 Jun 2024 · Sania Sinha, Tanawan Premsri, Parisa Kordjamshidi

Compositional learning, mastering the ability to combine basic concepts and construct more intricate ones, is crucial for human cognition, especially in human language comprehension and visual perception.

Find The Gap: Knowledge Base Reasoning For Visual Question Answering

no code implementations · 16 Apr 2024 · Elham J. Barezi, Parisa Kordjamshidi

2) How do task-specific and LLM-based models perform in the integration of visual and external knowledge, and multi-hop reasoning over both sources of information?

Question Answering · Retrieval · +1

Reasoning over Uncertain Text by Generative Large Language Models

1 code implementation · 14 Feb 2024 · Aliakbar Nafar, Kristen Brent Venable, Parisa Kordjamshidi

This paper considers the challenges Large Language Models (LLMs) face when reasoning over text that includes information involving uncertainty explicitly quantified via probability values.

Decision Making · Mathematical Reasoning · +1

Consistent Joint Decision-Making with Heterogeneous Learning Models

no code implementations · 6 Feb 2024 · Hossein Rajaby Faghihi, Parisa Kordjamshidi

This paper introduces a novel decision-making framework that promotes consistency among decisions made by diverse models while utilizing external knowledge.

Decision Making

NavHint: Vision and Language Navigation Agent with a Hint Generator

1 code implementation · 4 Feb 2024 · Yue Zhang, Quan Guo, Parisa Kordjamshidi

The hint generator assists the navigation agent in developing a global understanding of the visual environment.

Vision and Language Navigation

GIPCOL: Graph-Injected Soft Prompting for Compositional Zero-Shot Learning

1 code implementation · 9 Nov 2023 · Guangyue Xu, Joyce Chai, Parisa Kordjamshidi

In this work, we propose GIP-COL (Graph-Injected Soft Prompting for COmpositional Learning) to better explore the compositional zero-shot learning (CZSL) ability of VLMs within the prompt-based learning framework.

Attribute · Compositional Zero-Shot Learning

Syntax-Guided Transformers: Elevating Compositional Generalization and Grounding in Multimodal Environments

no code implementations · 7 Nov 2023 · Danial Kamali, Parisa Kordjamshidi

Compositional generalization, the ability of intelligent models to extrapolate understanding of components to novel compositions, is a fundamental yet challenging facet in AI research, especially within multimodal environments.

Compositional Generalization (AVG) · Dependency Parsing

MetaReVision: Meta-Learning with Retrieval for Visually Grounded Compositional Concept Acquisition

1 code implementation · 2 Nov 2023 · Guangyue Xu, Parisa Kordjamshidi, Joyce Chai

Inspired by this observation, in this paper, we propose MetaReVision, a retrieval-enhanced meta-learning model to address the visually grounded compositional concept learning problem.

Meta-Learning · Retrieval

Disentangling Extraction and Reasoning in Multi-hop Spatial Reasoning

1 code implementation · 25 Oct 2023 · Roshanak Mirzaee, Parisa Kordjamshidi

Spatial reasoning over text is challenging as the models not only need to extract the direct spatial information from the text but also reason over those and infer implicit spatial relations.

Spatial Reasoning

Teaching Probabilistic Logical Reasoning to Transformers

1 code implementation · 22 May 2023 · Aliakbar Nafar, Kristen Brent Venable, Parisa Kordjamshidi

In this paper, we evaluate the capability of transformer-based language models in making inferences over uncertain text that includes uncertain rules of reasoning.

Logical Reasoning · Question Answering

VLN-Trans: Translator for the Vision and Language Navigation Agent

1 code implementation · 18 Feb 2023 · Yue Zhang, Parisa Kordjamshidi

The mentioned landmarks are not recognizable by the navigation agent due to the different vision abilities of the instructor and the modeled agent.

Vision and Language Navigation

GLUECons: A Generic Benchmark for Learning Under Constraints

1 code implementation · 16 Feb 2023 · Hossein Rajaby Faghihi, Aliakbar Nafar, Chen Zheng, Roshanak Mirzaee, Yue Zhang, Andrzej Uszok, Alexander Wan, Tanawan Premsri, Dan Roth, Parisa Kordjamshidi

Recent research has shown that integrating domain knowledge into deep learning architectures is effective -- it helps reduce the amount of required data, improves the accuracy of the models' decisions, and improves the interpretability of models.

The Role of Semantic Parsing in Understanding Procedural Text

1 code implementation · 14 Feb 2023 · Hossein Rajaby Faghihi, Parisa Kordjamshidi, Choh Man Teng, James Allen

In this paper, we investigate whether symbolic semantic representations, extracted from deep semantic parsers, can help reasoning over the states of involved entities in a procedural text.

Semantic Parsing · Semantic Role Labeling

Using Persuasive Writing Strategies to Explain and Detect Health Misinformation

1 code implementation · 11 Nov 2022 · Danial Kamali, Joseph Romain, Huiyi Liu, Wei Peng, Jingbo Meng, Parisa Kordjamshidi

We evaluate fine-tuning and prompt-engineering techniques with pre-trained language models of the BERT family and the generative large language models of the GPT family using persuasive strategies as an additional source of information.

Fake News Detection · Language Modelling · +9

Prompting Large Pre-trained Vision-Language Models For Compositional Concept Learning

no code implementations · 9 Nov 2022 · Guangyue Xu, Parisa Kordjamshidi, Joyce Chai

This work explores the zero-shot compositional learning ability of large pre-trained vision-language models (VLMs) within the prompt-based learning framework and proposes a model (PromptCompVL) to solve the compositional zero-shot learning (CZSL) problem.

Zero-Shot Learning

Transfer Learning with Synthetic Corpora for Spatial Role Labeling and Reasoning

1 code implementation · 30 Oct 2022 · Roshanak Mirzaee, Parisa Kordjamshidi

Recent research shows that synthetic data as a source of supervision helps pretrained language models (PLMs) transfer learning to new target tasks/domains.

Question Answering · Transfer Learning

Dynamic Relevance Graph Network for Knowledge-Aware Question Answering

1 code implementation · COLING 2022 · Chen Zheng, Parisa Kordjamshidi

DRGN operates on a given KG subgraph based on the question and answer entities and uses the relevance scores between the nodes to establish new edges dynamically for learning node representations in the graph network.

Graph Neural Network · Question Answering

Relevant CommonSense Subgraphs for "What if..." Procedural Reasoning

1 code implementation · 21 Mar 2022 · Chen Zheng, Parisa Kordjamshidi

We study the challenge of learning causal reasoning over procedural text to answer "What if..." questions when external commonsense knowledge is required.

Zero-Shot Compositional Concept Learning

no code implementations · ACL (MetaNLP) 2021 · Guangyue Xu, Parisa Kordjamshidi, Joyce Y. Chai

In this paper, we study the problem of recognizing compositional attribute-object concepts within the zero-shot learning (ZSL) framework.

Attribute · Zero-Shot Learning

SPARTQA: A Textual Question Answering Benchmark for Spatial Reasoning

2 code implementations · NAACL 2021 · Roshanak Mirzaee, Hossein Rajaby Faghihi, Qiang Ning, Parisa Kordjamshidi

This paper proposes a question-answering (QA) benchmark for spatial reasoning on natural language text which contains more realistic spatial phenomena not covered by prior work and is challenging for state-of-the-art language models (LMs).

Question Answering · Spatial Reasoning

Relational Gating for "What If" Reasoning

1 code implementation · 27 May 2021 · Chen Zheng, Parisa Kordjamshidi

We propose a novel relational gating network that learns to filter the key entities and relationships and learns contextual and cross representations of both procedure and question for finding the answer.

Towards Navigation by Reasoning over Spatial Configurations

no code implementations · ACL (splurobonlp) 2021 · Yue Zhang, Quan Guo, Parisa Kordjamshidi

Additionally, the experimental results demonstrate that explicit modeling of spatial semantic elements in the instructions can improve the grounding and spatial reasoning of the model.

Spatial Reasoning

From Spatial Relations to Spatial Configurations

no code implementations · LREC 2020 · Soham Dan, Parisa Kordjamshidi, Julia Bonn, Archna Bhatia, Jon Cai, Martha Palmer, Dan Roth

To exhibit the applicability of our representation scheme, we annotate text taken from diverse datasets and show how we extend the capabilities of existing spatial representation languages with the fine-grained decomposition of semantics and blend it seamlessly with AMRs of sentences and discourse representations as a whole.

Abstract Meaning Representation · Natural Language Understanding · +1

Cross-Modality Relevance for Reasoning on Language and Vision

1 code implementation · ACL 2020 · Chen Zheng, Quan Guo, Parisa Kordjamshidi

This work deals with the challenge of learning and reasoning over language and vision data for the related downstream tasks such as visual question answering (VQA) and natural language for visual reasoning (NLVR).

Question Answering · Visual Question Answering · +1

Declarative Learning-Based Programming as an Interface to AI Systems

no code implementations · 18 Jun 2019 · Parisa Kordjamshidi, Dan Roth, Kristian Kersting

Data-driven approaches are becoming more common as problem-solving techniques in many areas of research and industry.

BIG-bench Machine Learning

Anaphora Resolution for Improving Spatial Relation Extraction from Text

no code implementations · WS 2018 · Umar Manzoor, Parisa Kordjamshidi

Spatial relation extraction from generic text is a challenging problem due to the ambiguity of prepositions' spatial meanings as well as the nesting structure of spatial descriptions.

Relation · Relation Extraction

Visually Guided Spatial Relation Extraction from Text

no code implementations · NAACL 2018 · Taher Rahgooy, Umar Manzoor, Parisa Kordjamshidi

Extraction of spatial relations from sentences with complex/nested relationships is very challenging, as it often requires resolving inherent semantic ambiguities.

Activity Recognition · Image Captioning · +6

Relational Learning and Feature Extraction by Querying over Heterogeneous Information Networks

no code implementations · 25 Jul 2017 · Parisa Kordjamshidi, Sameer Singh, Daniel Khashabi, Christos Christodoulopoulos, Mark Sammons, Saurabh Sinha, Dan Roth

In particular, we provide an initial prototype for a relational and graph traversal query language where queries are directly used as relational features for structured machine learning models.

BIG-bench Machine Learning · Knowledge Graphs · +1

Better call Saul: Flexible Programming for Learning and Inference in NLP

1 code implementation · COLING 2016 · Parisa Kordjamshidi, Daniel Khashabi, Christos Christodoulopoulos, Bhargav Mangipudi, Sameer Singh, Dan Roth

We present a novel way for designing complex joint inference and learning models using Saul (Kordjamshidi et al., 2015), a recently-introduced declarative learning-based programming language (DeLBP).

Part-Of-Speech Tagging · Probabilistic Programming · +1

EDISON: Feature Extraction for NLP, Simplified

no code implementations · LREC 2016 · Mark Sammons, Christos Christodoulopoulos, Parisa Kordjamshidi, Daniel Khashabi, Vivek Srikumar, Dan Roth

We present EDISON, a Java library of feature generation functions used in a suite of state-of-the-art NLP tools, based on a set of generic NLP data structures.

Deep Embedding for Spatial Role Labeling

1 code implementation · 28 Mar 2016 · Oswaldo Ludwig, Xiao Liu, Parisa Kordjamshidi, Marie-Francine Moens

This paper introduces the visually informed embedding of word (VIEW), a continuous vector representation for a word extracted from a deep neural model trained using the Microsoft COCO data set to forecast the spatial arrangements between visual objects, given a textual description.
