no code implementations • 14 Feb 2024 • Tiantian Feng, Daniel Yang, Digbalay Bose, Shrikanth Narayanan
Specifically, we propose a simple yet effective multi-modal learning framework, GTI-MM, that improves data efficiency and model robustness to a missing visual modality by imputing the missing data with generative transformers.
no code implementations • 6 Nov 2023 • Daniel Yang, Aditya Kommineni, Mohammad Alshehri, Nilamadhab Mohanty, Vedant Modi, Jonathan Gratch, Shrikanth Narayanan
In this work, we propose a formal definition of textual context to motivate a prompting strategy to enhance such contextual information.
no code implementations • 18 Sep 2023 • Yoonsoo Nam, Adam Lehavi, Daniel Yang, Digbalay Bose, Swabha Swayamdipta, Shrikanth Narayanan
Video summarization remains a major challenge in computer vision due to the sheer size of the input videos to be summarized.
1 code implementation • 3 May 2023 • Daniel Yang, Levi Cai, Stewart Jamieson, Yogesh Girdhar
Coral reefs are fast-changing and complex ecosystems that are crucial to monitor and study.
no code implementations • 11 Oct 2022 • Biling Wang, Michael Dohopolski, Ti Bai, Junjie Wu, Raquibul Hannan, Neil Desai, Aurelie Garant, Daniel Yang, Dan Nguyen, Mu-Han Lin, Robert Timmerman, Xinlei Wang, Steve Jiang
The bladder contour quality was primarily affected by using IV contrast.
2 code implementations • 29 Jul 2020 • Daniel Yang, TJ Tsai
This paper presents a method for large-scale retrieval of piano sheet music images.
no code implementations • 3 Mar 2020 • Tarik Tosun, Daniel Yang, Ben Eisner, Volkan Isler, Daniel Lee
We present a novel approach to robotic grasp planning using both a learned grasp proposal network and a learned 3D shape reconstruction network.
Robotics
no code implementations • 25 Sep 2019 • Riley Simmons-Edler, Ben Eisner, Daniel Yang, Anthony Bisulco, Eric Mitchell, Sebastian Seung, Daniel Lee
We implement the objective with an adversarial Q-learning method in which Q and Qx are the action-value functions for extrinsic and secondary rewards, respectively.
no code implementations • 19 Jun 2019 • Riley Simmons-Edler, Ben Eisner, Daniel Yang, Anthony Bisulco, Eric Mitchell, Sebastian Seung, Daniel Lee
We then propose a deep reinforcement learning method, QXplore, which exploits the temporal difference error of a Q-function to solve hard exploration tasks in high-dimensional MDPs.
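The core idea named in these two entries can be sketched concretely: the TD error of the extrinsic Q-function serves as the reward signal for a second, exploration-oriented value function Qx. Below is a minimal tabular sketch under that reading; all function and variable names are illustrative, not taken from the paper, and the actual method uses deep function approximation rather than lookup tables.

```python
from collections import defaultdict

ACTIONS = [0, 1]
GAMMA = 0.99

q = defaultdict(float)    # extrinsic action-value function Q
qx = defaultdict(float)   # secondary (exploration) action-value function Qx

def td_error(state, action, reward, next_state):
    """One-step temporal-difference error of the extrinsic Q-function."""
    target = reward + GAMMA * max(q[(next_state, a)] for a in ACTIONS)
    return target - q[(state, action)]

def update(state, action, reward, next_state, alpha=0.1):
    """Q-learning update for Q; |TD error| becomes Qx's reward signal."""
    delta = td_error(state, action, reward, next_state)
    q[(state, action)] += alpha * delta
    # Qx is trained toward states where Q is most "surprised": the
    # magnitude of Q's TD error is used as the secondary reward.
    rx = abs(delta)
    target_x = rx + GAMMA * max(qx[(next_state, a)] for a in ACTIONS)
    qx[(state, action)] += alpha * (target_x - qx[(state, action)])
    return delta, rx
```

Acting greedily with respect to Qx then drives the agent toward regions where the extrinsic Q-function is poorly fit, which is the hard-exploration mechanism the abstract describes.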
no code implementations • CVPR 2018 • Kuan Fang, Te-Lin Wu, Daniel Yang, Silvio Savarese, Joseph J. Lim
Watching expert demonstrations is an important way for humans and robots to reason about affordances of unseen objects.
Ranked #2 on Video-to-image Affordance Grounding on OPRA (28x28)
no code implementations • 24 May 2018 • Nicha C. Dvornek, Daniel Yang, Archana Venkataraman, Pamela Ventola, Lawrence H. Staib, Kevin A. Pelphrey, James S. Duncan
We propose predicting patient response to PRT from baseline task-based fMRI by the novel application of a random forest and tree bagging strategy.
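The random forest with tree bagging mentioned above can be sketched in miniature: train each weak learner on a bootstrap resample and predict by majority vote. This is a toy stdlib-only illustration, with single-feature decision stumps standing in for full trees and synthetic one-dimensional data standing in for the paper's baseline fMRI features; none of the names come from the paper.

```python
import random

def fit_stump(X, y):
    """Fit a one-feature threshold classifier minimizing training error."""
    best = None
    for j in range(len(X[0])):
        for t in sorted({x[j] for x in X}):
            for sign in (1, -1):
                preds = [1 if sign * (x[j] - t) >= 0 else 0 for x in X]
                err = sum(p != yy for p, yy in zip(preds, y))
                if best is None or err < best[0]:
                    best = (err, j, t, sign)
    _, j, t, sign = best
    return lambda x: 1 if sign * (x[j] - t) >= 0 else 0

def bagged_ensemble(X, y, n_trees=25, seed=0):
    """Tree bagging: fit stumps on bootstrap resamples, vote at predict time."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        stumps.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    def predict(x):
        votes = sum(s(x) for s in stumps)
        return 1 if votes * 2 >= len(stumps) else 0
    return predict
```

In practice one would use an off-the-shelf implementation (e.g. a random forest classifier from scikit-learn), which additionally subsamples features at each split; the sketch only shows the bootstrap-and-vote structure.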