no code implementations • 27 May 2021 • Junxiao Shen, John Dudley, Per Ola Kristensson
Insufficient training data results in overfitting, and data augmentation is one approach to addressing this challenge.
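As an illustration of the general idea (the entry does not specify the paper's augmentation method), one generic form of data augmentation is to expand a small training set with noise-jittered copies of each example; the function name, noise scale, and setup below are hypothetical:

```python
import numpy as np

def augment_with_noise(X, y, copies=4, sigma=0.05, seed=0):
    """Expand a small training set with Gaussian-jittered copies.

    Illustrative sketch only: noise injection is one generic example
    of data augmentation, not the method used in the paper.
    """
    rng = np.random.default_rng(seed)
    X_parts, y_parts = [X], [y]
    for _ in range(copies):
        X_parts.append(X + rng.normal(0.0, sigma, size=X.shape))
        y_parts.append(y)
    return np.concatenate(X_parts), np.concatenate(y_parts)

X = np.zeros((10, 3))            # 10 samples, 3 features
y = np.arange(10)
X_big, y_big = augment_with_noise(X, y)
print(X_big.shape)               # (50, 3): original plus 4 noisy copies
```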
no code implementations • ICLR 2022 • Hang Ren, Aivar Sootla, Taher Jafferjee, Junxiao Shen, Jun Wang, Haitham Bou-Ammar
We consider a context-dependent Reinforcement Learning (RL) setting, which is characterized by: a) an unknown finite number of not directly observable contexts; b) abrupt (discontinuous) context changes occurring during an episode; and c) Markovian context evolution.
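The setting above can be sketched as a hidden Markov chain over contexts: a finite set of unobservable contexts evolves Markovianly, with abrupt switches occurring mid-episode. The transition matrix and helper below are hypothetical, purely to illustrate the dynamics:

```python
import numpy as np

def simulate_contexts(P, T, c0=0, seed=0):
    """Sample a hidden context sequence of length T from a Markov chain.

    P[i, j] is the probability of moving from context i to context j at
    each step; the agent never observes these contexts directly. Abrupt
    (discontinuous) changes occur whenever the sampled context switches.
    """
    rng = np.random.default_rng(seed)
    contexts = [c0]
    for _ in range(T - 1):
        contexts.append(rng.choice(len(P), p=P[contexts[-1]]))
    return np.array(contexts)

# Two hidden contexts; a 5% chance of an abrupt switch at each step.
P = np.array([[0.95, 0.05],
              [0.05, 0.95]])
seq = simulate_contexts(P, T=200)
```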
no code implementations • 28 Mar 2023 • Xuhai Xu, Mengjie Yu, Tanya R. Jonker, Kashyap Todi, Feiyu Lu, Xun Qian, João Marcelo Evangelista Belo, Tianyi Wang, Michelle Li, Aran Mun, Te-Yen Wu, Junxiao Shen, Ting Zhang, Narine Kokhlikyan, Fulton Wang, Paul Sorenson, Sophie Kahyun Kim, Hrvoje Benko
The framework was based on a multi-disciplinary literature review of XAI and HCI research, a large-scale survey probing 500+ end-users' preferences for AR-based explanations, and three workshops with 12 experts collecting their insights about XAI design in AR.
no code implementations • 10 Aug 2023 • Junxiao Shen, John Dudley, Per Ola Kristensson
Additionally, in a user study, our system received a higher mean response score of 4.13/5 compared to the human participants' score of 2.46/5 on real-life episodic memory tasks.
no code implementations • 12 Oct 2023 • Junxiao Shen, John J. Dudley, Jingyao Zheng, Bill Byrne, Per Ola Kristensson
However, the task of prompting large language models to specialize in specific text prediction tasks can be challenging, particularly for designers without expertise in prompt engineering.
no code implementations • 20 Jan 2024 • Junxiao Shen, Xuhai Xu, Ran Tan, Amy Karlson, Evan Strasnick
Training a real-time gesture recognition model relies heavily on annotated data.
no code implementations • 20 Jan 2024 • Junxiao Shen, Matthias De Lange, Xuhai "Orson" Xu, Enmin Zhou, Ran Tan, Naveen Suda, Maciej Lazarewicz, Per Ola Kristensson, Amy Karlson, Evan Strasnick
We propose leveraging continual learning to make machine learning models adaptive to new tasks without degrading performance on previously learned tasks.
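One common family of continual-learning techniques that fits this description is rehearsal: keep a small memory of past-task examples and mix them into new-task batches to reduce forgetting. The class below is a hypothetical sketch of that idea, not the paper's specific method:

```python
import random

class ReplayBuffer:
    """Reservoir-style memory for rehearsal-based continual learning.

    Illustrative only: retaining a bounded sample of earlier tasks'
    examples and replaying them alongside new data is one generic way
    to preserve performance on previously learned tasks.
    """
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:
            # Reservoir sampling keeps every seen example with equal probability.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = example

    def sample(self, k):
        return self.rng.sample(self.items, min(k, len(self.items)))

buf = ReplayBuffer(capacity=100)
for task in range(3):                      # three sequential "tasks"
    for i in range(500):
        buf.add((task, i))
batch = buf.sample(32)                     # replayed alongside new-task data
```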