no code implementations • 20 Aug 2024 • Jaylin Herskovitz, Andi Xu, Rahaf Alharbi, Anhong Guo
Existing visual assistive technologies are built for simple and common use cases, and have few avenues for blind people to customize their functionalities.
no code implementations • 18 Aug 2024 • Lei Zhang, Jin Pan, Jacob Gettig, Steve Oney, Anhong Guo
Through a series of user studies, we evaluated the potential and challenges in manual, scaffolded, and automatic creation in immersive authoring.
no code implementations • 13 Aug 2024 • Ruei-Che Chang, Yuxuan Liu, Lotus Zhang, Anhong Guo
To address this, we developed EditScribe, a prototype system that makes image editing accessible using natural language verification loops powered by large multimodal models.
no code implementations • 13 Aug 2024 • Ruei-Che Chang, Yuxuan Liu, Anhong Guo
In this work, we develop WorldScribe, a system that generates live, automated visual descriptions of the real world that are customizable and adaptive to users' contexts: (i) WorldScribe's descriptions are tailored to users' intents and prioritized based on semantic relevance.
no code implementations • 10 Mar 2021 • Solon Barocas, Anhong Guo, Ece Kamar, Jacquelyn Krones, Meredith Ringel Morris, Jennifer Wortman Vaughan, Duncan Wadsworth, Hanna Wallach
Disaggregated evaluations of AI systems, in which system performance is assessed and reported separately for different groups of people, are conceptually simple.
1 code implementation • 20 Aug 2019 • Anhong Guo, Junhan Kong, Michael Rivera, Frank F. Xu, Jeffrey P. Bigham
Second, using these state diagrams, StateLens automatically generates conversational agents that guide blind users through specifying the tasks an interface can perform; the StateLens iOS application then provides interactive guidance and feedback so that blind users can access the interface.
no code implementations • 4 Jul 2019 • Anhong Guo, Ece Kamar, Jennifer Wortman Vaughan, Hanna Wallach, Meredith Ringel Morris
AI technologies have the potential to dramatically impact the lives of people with disabilities (PWD).
1 code implementation • CVPR 2018 • Danna Gurari, Qing Li, Abigale J. Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, Jeffrey P. Bigham
The study of algorithms to automatically answer visual questions is currently motivated by visual question answering (VQA) datasets constructed in artificial VQA settings.