Search Results for author: Parisa Kordjamshidi

Found 40 papers, 17 papers with code

Relevant CommonSense Subgraphs for “What if...” Procedural Reasoning

no code implementations · Findings (ACL) 2022 · Chen Zheng, Parisa Kordjamshidi

We study the challenge of learning causal reasoning over procedural text to answer “What if...” questions when external commonsense knowledge is required.

GIPCOL: Graph-Injected Soft Prompting for Compositional Zero-Shot Learning

1 code implementation · 9 Nov 2023 · Guangyue Xu, Joyce Chai, Parisa Kordjamshidi

In this work, we propose GIPCOL (Graph-Injected soft Prompting for COmpositional Learning) to better explore the compositional zero-shot learning (CZSL) ability of VLMs within the prompt-based learning framework.

Compositional Zero-Shot Learning

Syntax-Guided Transformers: Elevating Compositional Generalization and Grounding in Multimodal Environments

no code implementations · 7 Nov 2023 · Danial Kamali, Parisa Kordjamshidi

Compositional generalization, the ability of intelligent models to extrapolate understanding of components to novel compositions, is a fundamental yet challenging facet in AI research, especially within multimodal environments.

Dependency Parsing

MetaReVision: Meta-Learning with Retrieval for Visually Grounded Compositional Concept Acquisition

no code implementations · 2 Nov 2023 · Guangyue Xu, Parisa Kordjamshidi, Joyce Chai

Inspired by this observation, in this paper, we propose MetaReVision, a retrieval-enhanced meta-learning model to address the visually grounded compositional concept learning problem.

Meta-Learning · Retrieval

Disentangling Extraction and Reasoning in Multi-hop Spatial Reasoning

no code implementations · 25 Oct 2023 · Roshanak Mirzaee, Parisa Kordjamshidi

Spatial reasoning over text is challenging as the models not only need to extract the direct spatial information from the text but also reason over those and infer implicit spatial relations.

Teaching Probabilistic Logical Reasoning to Transformers

no code implementations · 22 May 2023 · Aliakbar Nafar, Kristen Brent Venable, Parisa Kordjamshidi

In this work, we analyze the use of probabilistic logical rules in transformer-based language models.

Logical Reasoning · Question Answering
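
One common way to combine probabilistic logical rules with a neural model, sketched below, is to turn each rule into a soft constraint on the model's output probabilities. This is a toy illustration of that general idea, not the paper's actual method; the rule, probabilities, and product t-norm relaxation are chosen for the example.

```python
import math

def rule_loss(p_premise: float, p_conclusion: float, confidence: float) -> float:
    """Soft-logic penalty for a probabilistic rule 'premise -> conclusion'.

    Under the product t-norm, the rule is satisfied with probability
    1 - p_premise * (1 - p_conclusion); the loss is the negative log of
    that satisfaction, weighted by the rule's confidence.
    """
    satisfaction = 1.0 - p_premise * (1.0 - p_conclusion)
    return -confidence * math.log(satisfaction)

# A confident premise paired with an unlikely conclusion is penalized heavily...
violating = rule_loss(p_premise=0.9, p_conclusion=0.1, confidence=0.7)
# ...while a satisfied rule incurs almost no loss.
satisfied = rule_loss(p_premise=0.9, p_conclusion=0.95, confidence=0.7)
```

Added to a standard training loss, such a term nudges the model's probabilities toward consistency with the rule in proportion to the rule's confidence.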

VLN-Trans: Translator for the Vision and Language Navigation Agent

1 code implementation · 18 Feb 2023 · Yue Zhang, Parisa Kordjamshidi

Landmarks mentioned in the instructions are often not recognizable by the navigation agent because the instructor and the modeled agent have different vision abilities.

Vision and Language Navigation

GLUECons: A Generic Benchmark for Learning Under Constraints

1 code implementation · 16 Feb 2023 · Hossein Rajaby Faghihi, Aliakbar Nafar, Chen Zheng, Roshanak Mirzaee, Yue Zhang, Andrzej Uszok, Alexander Wan, Tanawan Premsri, Dan Roth, Parisa Kordjamshidi

Recent research has shown that integrating domain knowledge into deep learning architectures is effective: it helps reduce the amount of required data, improves the accuracy of the models' decisions, and improves their interpretability.

The Role of Semantic Parsing in Understanding Procedural Text

1 code implementation · 14 Feb 2023 · Hossein Rajaby Faghihi, Parisa Kordjamshidi, Choh Man Teng, James Allen

In this paper, we investigate whether symbolic semantic representations, extracted from deep semantic parsers, can help reasoning over the states of involved entities in a procedural text.

Semantic Parsing · Semantic Role Labeling

Using Persuasive Writing Strategies to Explain and Detect Health Misinformation

no code implementations · 11 Nov 2022 · Danial Kamali, Joseph Romain, Huiyi Liu, Wei Peng, Jingbo Meng, Parisa Kordjamshidi

The spread of misinformation is a prominent problem in today's society, and many researchers in academia and industry are trying to combat it.

Language Modelling · Misinformation +2

Prompting Large Pre-trained Vision-Language Models For Compositional Concept Learning

no code implementations · 9 Nov 2022 · Guangyue Xu, Parisa Kordjamshidi, Joyce Chai

This work explores the zero-shot compositional learning ability of large pre-trained vision-language models (VLMs) within the prompt-based learning framework and proposes a model (PromptCompVL) to solve the compositional zero-shot learning (CZSL) problem.

Zero-Shot Learning

Transfer Learning with Synthetic Corpora for Spatial Role Labeling and Reasoning

1 code implementation · 30 Oct 2022 · Roshanak Mirzaee, Parisa Kordjamshidi

Recent research shows that synthetic data as a source of supervision helps pretrained language models (PLMs) transfer to new target tasks/domains.

Question Answering · Transfer Learning
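
The synthetic-supervision recipe above usually amounts to instantiating templates over a small vocabulary of objects and relations. The sketch below illustrates that general recipe only; the objects, relations, and templates are invented for the example and are not the paper's actual grammar.

```python
import itertools

# Hypothetical mini-grammar: objects, spatial relations, and their inverses.
OBJECTS = ["circle", "square", "triangle"]
INVERSE = {"left of": "right of", "above": "below"}

def generate_examples():
    """Build (context, question, answer) triples from templates."""
    examples = []
    for a, b in itertools.permutations(OBJECTS, 2):
        for rel, inv in INVERSE.items():
            context = f"The {a} is {rel} the {b}."
            # The stated relation holds, and so does its inverse (swapped
            # arguments); the same relation with swapped arguments does not.
            examples.append((context, f"Is the {a} {rel} the {b}?", "Yes"))
            examples.append((context, f"Is the {b} {inv} the {a}?", "Yes"))
            examples.append((context, f"Is the {b} {rel} the {a}?", "No"))
    return examples

corpus = generate_examples()
```

Because the answers are derived from the templates themselves, such a corpus provides free, noise-controlled supervision for fine-tuning before transfer to the real target task.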

LOViS: Learning Orientation and Visual Signals for Vision and Language Navigation

1 code implementation · COLING 2022 · Yue Zhang, Parisa Kordjamshidi

Understanding spatial and visual information is essential for a navigation agent who follows natural language instructions.

Vision and Language Navigation

Dynamic Relevance Graph Network for Knowledge-Aware Question Answering

1 code implementation · COLING 2022 · Chen Zheng, Parisa Kordjamshidi

DRGN operates on a given KG subgraph based on the question and answer entities and uses the relevance scores between the nodes to establish new edges dynamically for learning node representations in the graph network.

Question Answering
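
The core mechanism described above, creating edges on the fly from node-pair relevance scores, can be sketched in a few lines. This is a minimal illustration of the general idea, assuming cosine similarity as the relevance score and a fixed threshold; the node names and embeddings are invented, not taken from DRGN.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(x * x for x in v))
    return dot / (norm_u * norm_v)

def dynamic_edges(node_embeddings, threshold=0.8):
    """Add an edge between every node pair whose relevance passes the threshold."""
    edges = set()
    nodes = list(node_embeddings)
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            if cosine(node_embeddings[a], node_embeddings[b]) >= threshold:
                edges.add((a, b))
    return edges

# Toy subgraph: 'question' and 'answer' nodes point the same way; 'distractor' does not.
emb = {"question": [1.0, 0.1], "answer": [0.9, 0.2], "distractor": [0.0, 1.0]}
edges = dynamic_edges(emb, threshold=0.8)
```

In a real graph network, the learned node representations would replace the fixed vectors here, so the edge set changes as the representations are updated.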

Relevant CommonSense Subgraphs for "What if..." Procedural Reasoning

1 code implementation · 21 Mar 2022 · Chen Zheng, Parisa Kordjamshidi

We study the challenge of learning causal reasoning over procedural text to answer "What if..." questions when external commonsense knowledge is required.

Zero-Shot Compositional Concept Learning

no code implementations · ACL (MetaNLP) 2021 · Guangyue Xu, Parisa Kordjamshidi, Joyce Y. Chai

In this paper, we study the problem of recognizing compositional attribute-object concepts within the zero-shot learning (ZSL) framework.

Zero-Shot Learning

SPARTQA: A Textual Question Answering Benchmark for Spatial Reasoning

2 code implementations · NAACL 2021 · Roshanak Mirzaee, Hossein Rajaby Faghihi, Qiang Ning, Parisa Kordjamshidi

This paper proposes a question-answering (QA) benchmark for spatial reasoning on natural language text which contains more realistic spatial phenomena not covered by prior work and is challenging for state-of-the-art language models (LM).

Question Answering

Relational Gating for "What If" Reasoning

1 code implementation · 27 May 2021 · Chen Zheng, Parisa Kordjamshidi

We propose a novel relational gating network that learns to filter the key entities and relationships and learns contextual and cross representations of both procedure and question for finding the answer.
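The filtering step described above is typically realized as a learned soft gate: each entity receives a sigmoid weight that scales its representation, so irrelevant entities are suppressed rather than hard-removed. The sketch below is a toy illustration of that gating idea only; the entity names and scores are hypothetical, not from the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gate_entities(entity_scores, question_bias=0.0):
    """Soft-gate each entity by a sigmoid of its question-conditioned score.

    Entities whose gate is near 0 are effectively filtered out; the returned
    weights would scale the entities' representations downstream.
    """
    return {e: sigmoid(s + question_bias) for e, s in entity_scores.items()}

# Hypothetical relevance scores of entities in a procedure w.r.t. a question.
gates = gate_entities({"water": 3.0, "container": 1.0, "author": -4.0})
kept = {e for e, g in gates.items() if g > 0.5}
```

Because the gate is differentiable, the scores can be trained end-to-end with the rest of the network instead of relying on a hand-set cutoff.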

Towards Navigation by Reasoning over Spatial Configurations

no code implementations · ACL (splurobonlp) 2021 · Yue Zhang, Quan Guo, Parisa Kordjamshidi

Additionally, the experimental results demonstrate that explicit modeling of spatial semantic elements in the instructions can improve the grounding and spatial reasoning of the model.

From Spatial Relations to Spatial Configurations

no code implementations · LREC 2020 · Soham Dan, Parisa Kordjamshidi, Julia Bonn, Archna Bhatia, Jon Cai, Martha Palmer, Dan Roth

To exhibit the applicability of our representation scheme, we annotate text taken from diverse datasets and show how we extend the capabilities of existing spatial representation languages with the fine-grained decomposition of semantics and blend it seamlessly with AMRs of sentences and discourse representations as a whole.

Natural Language Understanding

Cross-Modality Relevance for Reasoning on Language and Vision

1 code implementation · ACL 2020 · Chen Zheng, Quan Guo, Parisa Kordjamshidi

This work deals with the challenge of learning and reasoning over language and vision data for the related downstream tasks such as visual question answering (VQA) and natural language for visual reasoning (NLVR).

Question Answering · Visual Question Answering +1

Declarative Learning-Based Programming as an Interface to AI Systems

no code implementations · 18 Jun 2019 · Parisa Kordjamshidi, Dan Roth, Kristian Kersting

Data-driven approaches are becoming more common as problem-solving techniques in many areas of research and industry.

BIG-bench Machine Learning

Anaphora Resolution for Improving Spatial Relation Extraction from Text

no code implementations · WS 2018 · Umar Manzoor, Parisa Kordjamshidi

Spatial relation extraction from generic text is a challenging problem due to the ambiguity of prepositions' spatial meaning as well as the nesting structure of spatial descriptions.

Relation Extraction

Visually Guided Spatial Relation Extraction from Text

no code implementations · NAACL 2018 · Taher Rahgooy, Umar Manzoor, Parisa Kordjamshidi

Extraction of spatial relations from sentences with complex/nesting relationships is very challenging, as it often requires resolving inherent semantic ambiguities.

Activity Recognition · Image Captioning +5

Relational Learning and Feature Extraction by Querying over Heterogeneous Information Networks

no code implementations · 25 Jul 2017 · Parisa Kordjamshidi, Sameer Singh, Daniel Khashabi, Christos Christodoulopoulos, Mark Sammons, Saurabh Sinha, Dan Roth

In particular, we provide an initial prototype for a relational and graph traversal query language where queries are directly used as relational features for structured machine learning models.

BIG-bench Machine Learning · Knowledge Graphs +1

Better call Saul: Flexible Programming for Learning and Inference in NLP

1 code implementation · COLING 2016 · Parisa Kordjamshidi, Daniel Khashabi, Christos Christodoulopoulos, Bhargav Mangipudi, Sameer Singh, Dan Roth

We present a novel way for designing complex joint inference and learning models using Saul (Kordjamshidi et al., 2015), a recently-introduced declarative learning-based programming language (DeLBP).

Part-Of-Speech Tagging · Probabilistic Programming +1

EDISON: Feature Extraction for NLP, Simplified

no code implementations · LREC 2016 · Mark Sammons, Christos Christodoulopoulos, Parisa Kordjamshidi, Daniel Khashabi, Vivek Srikumar, Dan Roth

We present EDISON, a Java library of feature generation functions used in a suite of state-of-the-art NLP tools, based on a set of generic NLP data structures.

Deep Embedding for Spatial Role Labeling

no code implementations · 28 Mar 2016 · Oswaldo Ludwig, Xiao Liu, Parisa Kordjamshidi, Marie-Francine Moens

This paper introduces the visually informed embedding of word (VIEW), a continuous vector representation for a word extracted from a deep neural model trained using the Microsoft COCO data set to forecast the spatial arrangements between visual objects, given a textual description.
