no code implementations • 25 May 2023 • Jiawei Qin, Takuru Shimoyama, Xucong Zhang, Yusuke Sugano
This work proposes an effective model training pipeline, consisting of training data synthesis and a gaze estimation model, for unsupervised domain adaptation.
1 code implementation • 22 May 2023 • Yoichiro Hisadome, Tianyi Wu, Jiawei Qin, Yusuke Sugano
This work proposes a generalizable multi-view gaze estimation task and a cross-view feature fusion method to address this issue.
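The entry above names a cross-view feature fusion method without detailing it. As a rough, hypothetical illustration of the general idea (combining per-view features into one representation), here is a generic attention-style fusion sketch in NumPy; the scoring vector, dimensions, and data are invented and this is not the paper's exact method.

```python
import numpy as np

# Hypothetical sketch of fusing features from multiple camera views:
# score each view's feature vector, normalize the scores with softmax,
# and take the weighted sum. A generic attention-style fusion, not the
# paper's actual architecture.

rng = np.random.default_rng(1)

n_views, feat_dim = 3, 8
feats = rng.standard_normal((n_views, feat_dim))   # one feature vector per view
w_score = rng.standard_normal(feat_dim)            # illustrative scoring vector

scores = feats @ w_score                           # per-view scalar scores
scores -= scores.max()                             # numerical stability
weights = np.exp(scores) / np.exp(scores).sum()    # softmax over views

fused = weights @ feats                            # (feat_dim,) fused feature
print(fused.shape, round(float(weights.sum()), 6))
```

In a learned system the scoring vector would be trained jointly with the downstream gaze regressor, so informative views receive higher weights.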
1 code implementation • 5 Oct 2022 • Tianyi Wu, Yusuke Sugano
In this work, we address the task of one-way eye contact detection for videos in the wild.
1 code implementation • 20 Jan 2022 • Jiawei Qin, Takuru Shimoyama, Yusuke Sugano
Despite recent advances in appearance-based gaze estimation techniques, the need for training data that covers the target head pose and gaze distribution remains a crucial challenge for practical deployment.
no code implementations • CVPR 2022 • Lijin Yang, Yifei HUANG, Yusuke Sugano, Yoichi Sato
Different from previous works, we find that cross-domain alignment can be performed more effectively by applying cross-modal interaction first.
no code implementations • 2 Dec 2021 • Lijin Yang, Yifei HUANG, Yusuke Sugano, Yoichi Sato
Previous works attempted to address this problem by applying temporal attention, but failed to consider the global context of the full video, which is critical for identifying the relatively significant parts.
8 code implementations • CVPR 2022 • Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Abrham Gebreselasie, Cristina Gonzalez, James Hillis, Xuhua Huang, Yifei HUANG, Wenqi Jia, Weslie Khoo, Jachym Kolar, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz Puentes, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Ziwei Zhao, Yunyi Zhu, Pablo Arbelaez, David Crandall, Dima Damen, Giovanni Maria Farinella, Christian Fuegen, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik
We introduce Ego4D, a massive-scale egocentric video dataset and benchmark suite.
no code implementations • 18 Jun 2021 • Lijin Yang, Yifei HUANG, Yusuke Sugano, Yoichi Sato
In this report, we describe the technical details of our submission to the 2021 EPIC-KITCHENS-100 Unsupervised Domain Adaptation Challenge for Action Recognition.
no code implementations • 30 Jan 2021 • Haruya Sakashita, Christoph Flothow, Noriko Takemura, Yusuke Sugano
Together with recent advances in semantic segmentation, many domain adaptation methods have been proposed to overcome the domain gap between training and deployment environments.
no code implementations • 29 Nov 2018 • Yutaro Miyauchi, Yusuke Sugano, Yasuyuki Matsushita
Conditional image generation is effective for diverse tasks including training data synthesis for learning-based computer vision.
no code implementations • ECCV 2018 • Hiroaki Santo, Michael Waechter, Masaki Samejima, Yusuke Sugano, Yasuyuki Matsushita
We present a practical method for geometric point light source calibration.
1 code implementation • LREC 2018 • Arif Khan, Ingmar Steiner, Yusuke Sugano, Andreas Bulling, Ross Macdonald
Phonetic segmentation is the process of splitting speech into distinct phonetic units.
6 code implementations • 24 Nov 2017 • Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling
Second, we present an extensive evaluation of state-of-the-art gaze estimation methods on three current datasets, including MPIIGaze.
4 code implementations • 27 Nov 2016 • Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling
Eye gaze is an important non-verbal cue for human affect analysis.
no code implementations • 18 Aug 2016 • Yusuke Sugano, Andreas Bulling
Gaze reflects how humans process visual scenes and is therefore increasingly used in computer vision systems.
no code implementations • 11 Jan 2016 • Mohsen Mansouryar, Julian Steil, Yusuke Sugano, Andreas Bulling
3D gaze information is important for scene-centric attention analysis, but accurate estimation and analysis of 3D gaze in real-world environments remain challenging.
no code implementations • 18 Nov 2015 • Marc Tonsen, Xucong Zhang, Yusuke Sugano, Andreas Bulling
We further study the influence of image resolution, vision aids, as well as recording location (indoor, outdoor) on pupil detection performance.
no code implementations • ICCV 2015 • Erroll Wood, Tadas Baltrusaitis, Xucong Zhang, Yusuke Sugano, Peter Robinson, Andreas Bulling
Images of the eye are key in several computer vision problems, such as shape registration and gaze estimation.
6 code implementations • CVPR 2015 • Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling
Appearance-based gaze estimation is believed to work well in real-world settings, but existing datasets have been collected under controlled laboratory conditions, and methods have not been evaluated across multiple datasets.
no code implementations • CVPR 2014 • Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato
Unlike existing appearance-based methods that assume person-specific training data, we use a large amount of cross-subject training data to train a 3D gaze estimator.