Search Results for author: Mehrdad Zakershahrak

Found 9 papers, 0 papers with code

Adapting LLMs for Efficient, Personalized Information Retrieval: Methods and Implications

no code implementations • 21 Nov 2023 • Samira Ghodratnama, Mehrdad Zakershahrak

The advent of Large Language Models (LLMs) heralds a pivotal shift in online user interactions with information.

Tasks: Chatbot, Hallucination, +2

Adaptive Summaries: A Personalized Concept-based Summarization Approach by Learning from Users' Feedback

no code implementations • 24 Dec 2020 • Samira Ghodratnama, Mehrdad Zakershahrak, Fariborz Sobhanmanesh

Exploring a tremendous amount of data efficiently to make a decision, similar to answering a complicated question, is challenging in many real-world application scenarios.

Am I Rare? An Intelligent Summarization Approach for Identifying Hidden Anomalies

no code implementations • 24 Dec 2020 • Samira Ghodratnama, Mehrdad Zakershahrak, Fariborz Sobhanmanesh

The experimental results on benchmark datasets show that a summary of the data can be a substitute for the original data in the anomaly detection task.

Tasks: Anomaly Detection, Clustering

Are We On The Same Page? Hierarchical Explanation Generation for Planning Tasks in Human-Robot Teaming using Reinforcement Learning

no code implementations • 22 Dec 2020 • Mehrdad Zakershahrak, Samira Ghodratnama

In this work, we argue that agent-generated explanations, especially complex ones, should be abstracted to align with the level of detail the human teammate desires, so as to keep the recipient's cognitive load manageable.

Tasks: Decision Making, Explanation Generation

Online Explanation Generation for Human-Robot Teaming

no code implementations • 15 Mar 2019 • Mehrdad Zakershahrak, Ze Gong, Nikhillesh Sadassivam, Yu Zhang

The new explanation generation methods are based on a model reconciliation setting introduced in our prior work.

Tasks: Decision Making, Explanation Generation

Progressive Explanation Generation for Human-robot Teaming

no code implementations • 2 Feb 2019 • Yu Zhang, Mehrdad Zakershahrak

A progressive explanation improves understanding by limiting the cognitive effort required at each step of the explanation.

Tasks: Decision Making, Explanation Generation

Interactive Plan Explicability in Human-Robot Teaming

no code implementations • 17 Jan 2019 • Mehrdad Zakershahrak, Yu Zhang

Being aware of the human teammates' expectations leads to robot behaviors that better align with those expectations, thus facilitating more efficient and potentially safer teams.
