Search Results for author: Eda Okur

Found 18 papers, 0 papers with code

Inspecting Spoken Language Understanding from Kids for Basic Math Learning at Home

no code implementations • 1 Jun 2023 • Eda Okur, Roddy Fuentes Alba, Saurav Sahay, Lama Nachman

Enriching the quality of early childhood education with interactive math learning at home systems, empowered by recent advances in conversational AI technologies, is slowly becoming a reality.

Automatic Speech Recognition (ASR) +5

End-to-End Evaluation of a Spoken Dialogue System for Learning Basic Mathematics

no code implementations • 7 Nov 2022 • Eda Okur, Saurav Sahay, Roddy Fuentes Alba, Lama Nachman

The advances in language-based Artificial Intelligence (AI) technologies applied to build educational applications can present AI for social-good opportunities with a broader positive impact.

Automatic Speech Recognition (ASR) +4

NLU for Game-based Learning in Real: Initial Evaluations

no code implementations • Games (LREC) 2022 • Eda Okur, Saurav Sahay, Lama Nachman

Intelligent systems designed for play-based interactions should be contextually aware of the users and their surroundings.

Intent Recognition, Math +2

Semi-supervised Interactive Intent Labeling

no code implementations • NAACL (DaSH) 2021 • Saurav Sahay, Eda Okur, Nagib Hakim, Lama Nachman

Building the Natural Language Understanding (NLU) modules of task-oriented Spoken Dialogue Systems (SDS) involves a definition of intents and entities, collection of task-relevant data, annotating the data with intents and entities, and then repeating the same process over and over again for adding any functionality/enhancement to the SDS.

Clustering, Data Augmentation +2

Audio-Visual Understanding of Passenger Intents for In-Cabin Conversational Agents

no code implementations • WS 2020 • Eda Okur, Shachi H. Kumar, Saurav Sahay, Lama Nachman

To this end, understanding passenger intents from spoken interactions and vehicle vision systems is a crucial component for developing contextual and visually grounded conversational agents for AV.

Dialogue Understanding, Intent Detection

Low Rank Fusion based Transformers for Multimodal Sequences

no code implementations • WS 2020 • Saurav Sahay, Eda Okur, Shachi H. Kumar, Lama Nachman

In this work, we experiment with modeling modality-specific sensory signals to attend to our latent multimodal emotional intentions and vice versa expressed via low-rank multimodal fusion and multimodal transformers.

Emotion Recognition

Leveraging Topics and Audio Features with Multimodal Attention for Audio Visual Scene-Aware Dialog

no code implementations • 20 Dec 2019 • Shachi H. Kumar, Eda Okur, Saurav Sahay, Jonathan Huang, Lama Nachman

With the recent advancements in Artificial Intelligence (AI), Intelligent Virtual Assistants (IVA) such as Alexa, Google Home, etc., have become a ubiquitous part of many homes.

Audio Classification, Response Generation

Modeling Intent, Dialog Policies and Response Adaptation for Goal-Oriented Interactions

no code implementations • 20 Dec 2019 • Saurav Sahay, Shachi H. Kumar, Eda Okur, Haroon Syed, Lama Nachman

Building a machine learning driven spoken dialog system for goal-oriented interactions involves careful design of intents and data collection along with development of intent recognition models and dialog policy learning algorithms.

Intent Recognition

Exploring Context, Attention and Audio Features for Audio Visual Scene-Aware Dialog

no code implementations • 20 Dec 2019 • Shachi H. Kumar, Eda Okur, Saurav Sahay, Jonathan Huang, Lama Nachman

Recent progress in visual grounding techniques and Audio Understanding are enabling machines to understand shared semantic concepts and listen to the various sensory events in the environment.

Audio Classification, Visual Grounding

Towards Multimodal Understanding of Passenger-Vehicle Interactions in Autonomous Vehicles: Intent/Slot Recognition Utilizing Audio-Visual Data

no code implementations • 20 Sep 2019 • Eda Okur, Shachi H. Kumar, Saurav Sahay, Lama Nachman

Understanding passenger intents from spoken interactions and the car's vision (both inside and outside the vehicle) are important building blocks towards developing contextual dialog systems for natural interactions in autonomous vehicles (AV).

Autonomous Vehicles, Intent Detection +2

Natural Language Interactions in Autonomous Vehicles: Intent Detection and Slot Filling from Passenger Utterances

no code implementations • 23 Apr 2019 • Eda Okur, Shachi H. Kumar, Saurav Sahay, Asli Arslan Esme, Lama Nachman

Understanding passenger intents and extracting relevant slots are important building blocks towards developing contextual dialogue systems for natural interactions in autonomous vehicles (AV).

Automatic Speech Recognition (ASR) +5

Unobtrusive and Multimodal Approach for Behavioral Engagement Detection of Students

no code implementations • 16 Jan 2019 • Nese Alyuz, Eda Okur, Utku Genc, Sinem Aslan, Cagri Tanriover, Asli Arslan Esme

We propose a multimodal approach for detection of students' behavioral engagement states (i.e., On-Task vs. Off-Task), based on three unobtrusive modalities: Appearance, Context-Performance, and Mouse.

Detecting Behavioral Engagement of Students in the Wild Based on Contextual and Visual Data

no code implementations • 15 Jan 2019 • Eda Okur, Nese Alyuz, Sinem Aslan, Utku Genc, Cagri Tanriover, Asli Arslan Esme

To investigate the detection of students' behavioral engagement (On-Task vs. Off-Task), we propose a two-phase approach in this study.

The Importance of Socio-Cultural Differences for Annotating and Detecting the Affective States of Students

no code implementations • 12 Jan 2019 • Eda Okur, Sinem Aslan, Nese Alyuz, Asli Arslan Esme, Ryan S. Baker

One open question in annotating affective data for affect detection is whether the labelers (i.e., human experts) need to be socio-culturally similar to the students being labeled, as this impacts the cost feasibility of obtaining the labels.

Open-Ended Question Answering

Named Entity Recognition on Twitter for Turkish using Semi-supervised Learning with Word Embeddings

no code implementations • LREC 2016 • Eda Okur, Hakan Demir, Arzucan Özgür

We made use of these obtained word embeddings, together with language independent features that are engineered to work better on informal text types, for generating a Turkish NER system on microblog texts.

Named Entity Recognition +2
