Search Results for author: Yusuke Sugano

Found 20 papers, 8 papers with code

Domain-Adaptive Full-Face Gaze Estimation via Novel-View-Synthesis and Feature Disentanglement

no code implementations · 25 May 2023 · Jiawei Qin, Takuru Shimoyama, Xucong Zhang, Yusuke Sugano

This work proposes an effective model training pipeline consisting of training data synthesis and a gaze estimation model for unsupervised domain adaptation.

3D Reconstruction · Disentanglement · +3

Rotation-Constrained Cross-View Feature Fusion for Multi-View Appearance-based Gaze Estimation

1 code implementation · 22 May 2023 · Yoichiro Hisadome, Tianyi Wu, Jiawei Qin, Yusuke Sugano

This work proposes a generalizable multi-view gaze estimation task and a cross-view feature fusion method.

Domain Generalization · Gaze Estimation
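
The rotation-constrained idea can be illustrated in general terms. In a minimal sketch (an illustration under stated assumptions, not the authors' released implementation; `fuse_cross_view` and the plain averaging step are ours), per-view features are treated as stacked 3D vectors so that the known relative camera rotation maps one view's features into the other view's coordinate frame before fusion:

```python
import torch

def fuse_cross_view(feat_a, feat_b, R_ab):
    """Sketch: fuse features from two cameras by rotating camera B's
    features into camera A's coordinate frame before averaging.

    feat_a, feat_b: (batch, 3*k) features, viewed as k stacked 3D vectors
    R_ab: (3, 3) rotation from camera B's frame to camera A's frame
    """
    b, d = feat_b.shape
    assert d % 3 == 0, "features must be groupable into 3D sub-vectors"
    vecs = feat_b.view(b, d // 3, 3)     # (b, k, 3)
    rotated = vecs @ R_ab.T              # rotate each 3D sub-vector into frame A
    feat_b_in_a = rotated.view(b, d)
    return 0.5 * (feat_a + feat_b_in_a)  # plain averaging; the paper's fusion is learned
```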

Learning-by-Novel-View-Synthesis for Full-Face Appearance-Based 3D Gaze Estimation

1 code implementation · 20 Jan 2022 · Jiawei Qin, Takuru Shimoyama, Yusuke Sugano

Despite recent advances in appearance-based gaze estimation techniques, the need for training data that covers the target head pose and gaze distribution remains a crucial challenge for practical deployment.

3D Face Reconstruction · Data Augmentation · +2
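
One step such synthesis pipelines share: when a face is re-rendered under a rotated virtual camera, the ground-truth gaze vector must be rotated consistently, otherwise the synthesized label is wrong. A minimal sketch of that label update (the helper name is hypothetical, not from the paper's code):

```python
import numpy as np

def rotate_gaze_label(gaze, R):
    """Rotate a 3D gaze label by the same rotation R applied to the
    virtual camera when rendering a novel view (illustrative only)."""
    g = np.asarray(gaze, dtype=float)
    g = g / np.linalg.norm(g)  # ensure unit length
    return R @ g
```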

Stacked Temporal Attention: Improving First-person Action Recognition by Emphasizing Discriminative Clips

no code implementations · 2 Dec 2021 · Lijin Yang, Yifei Huang, Yusuke Sugano, Yoichi Sato

Previous works attempted to address this problem by applying temporal attention, but failed to consider the global context of the full video, which is critical for determining the relatively significant parts.

Action Recognition · Video Understanding
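
A minimal sketch of temporal attention that incorporates global video context, in the spirit of (but not identical to) the proposed stacked attention; the module name and the mean-pooled context query are assumptions:

```python
import torch
import torch.nn as nn

class TemporalAttentionPool(nn.Module):
    """Weight clip-level features by attention scores computed jointly
    with a global video context (illustrative sketch)."""

    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, clips):                            # clips: (batch, T, dim)
        context = clips.mean(dim=1, keepdim=True)        # global video context
        context = context.expand(-1, clips.size(1), -1)  # (batch, T, dim)
        logits = self.score(torch.cat([clips, context], dim=-1))  # (batch, T, 1)
        weights = torch.softmax(logits, dim=1)           # emphasize discriminative clips
        return (weights * clips).sum(dim=1)              # pooled video feature
```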

Ego4D: Around the World in 3,000 Hours of Egocentric Video

8 code implementations · CVPR 2022 · Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Abrham Gebreselasie, Cristina Gonzalez, James Hillis, Xuhua Huang, Yifei Huang, Wenqi Jia, Weslie Khoo, Jachym Kolar, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz Puentes, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Ziwei Zhao, Yunyi Zhu, Pablo Arbelaez, David Crandall, Dima Damen, Giovanni Maria Farinella, Christian Fuegen, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik

We introduce Ego4D, a massive-scale egocentric video dataset and benchmark suite.

De-identification · Ethics

EPIC-KITCHENS-100 Unsupervised Domain Adaptation Challenge for Action Recognition 2021: Team M3EM Technical Report

no code implementations · 18 Jun 2021 · Lijin Yang, Yifei Huang, Yusuke Sugano, Yoichi Sato

In this report, we describe the technical details of our submission to the 2021 EPIC-KITCHENS-100 Unsupervised Domain Adaptation Challenge for Action Recognition.

Action Recognition · Unsupervised Domain Adaptation

DRIV100: In-The-Wild Multi-Domain Dataset and Evaluation for Real-World Domain Adaptation of Semantic Segmentation

no code implementations · 30 Jan 2021 · Haruya Sakashita, Christoph Flothow, Noriko Takemura, Yusuke Sugano

Alongside recent advances in semantic segmentation, many domain adaptation methods have been proposed to overcome the domain gap between training and deployment environments.

Benchmarking · Domain Adaptation · +1

Shape-conditioned Image Generation by Learning Latent Appearance Representation from Unpaired Data

no code implementations · 29 Nov 2018 · Yutaro Miyauchi, Yusuke Sugano, Yasuyuki Matsushita

Conditional image generation is effective for diverse tasks, including training data synthesis for learning-based computer vision.

Conditional Image Generation · Object

MPIIGaze: Real-World Dataset and Deep Appearance-Based Gaze Estimation

6 code implementations · 24 Nov 2017 · Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling

We present an extensive evaluation of state-of-the-art gaze estimation methods on three current datasets, including MPIIGaze.

Gaze Estimation
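
Cross-dataset evaluations of this kind are conventionally reported as mean angular error between predicted and ground-truth 3D gaze directions. A small self-contained version of that standard metric (the function name is ours, not from the paper):

```python
import numpy as np

def angular_error_deg(pred, gt):
    """Mean angular error in degrees between predicted and ground-truth
    3D gaze vectors, each given as an (n, 3) array."""
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    gt = gt / np.linalg.norm(gt, axis=1, keepdims=True)
    cos = np.clip(np.sum(pred * gt, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()
```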

Seeing with Humans: Gaze-Assisted Neural Image Captioning

no code implementations · 18 Aug 2016 · Yusuke Sugano, Andreas Bulling

Gaze reflects how humans process visual scenes and is therefore increasingly used in computer vision systems.

Image Captioning · Object · +3

3D Gaze Estimation from 2D Pupil Positions on Monocular Head-Mounted Eye Trackers

no code implementations · 11 Jan 2016 · Mohsen Mansouryar, Julian Steil, Yusuke Sugano, Andreas Bulling

3D gaze information is important for scene-centric attention analysis, but accurate estimation and analysis of 3D gaze in real-world environments remain challenging.

Gaze Estimation
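
One plausible way to realize such a 2D-to-3D mapping, sketched under stated assumptions (the polynomial feature set and helper names are illustrative, not the paper's exact mapping), is a least-squares regression from polynomial pupil-position features to 3D gaze vectors, fit on calibration samples:

```python
import numpy as np

def fit_pupil_to_gaze(pupils, gazes):
    """Fit a second-degree polynomial regression from 2D pupil positions
    (n, 2) to 3D gaze direction vectors (n, 3) via least squares."""
    x, y = pupils[:, 0], pupils[:, 1]
    A = np.stack([np.ones_like(x), x, y, x * y, x**2, y**2], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, gazes, rcond=None)  # (6, 3)
    return coeffs

def predict_gaze(coeffs, pupil):
    """Map one 2D pupil position to a unit 3D gaze direction."""
    x, y = pupil
    feats = np.array([1.0, x, y, x * y, x**2, y**2])
    g = feats @ coeffs
    return g / np.linalg.norm(g)  # renormalize to a unit vector
```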

Labeled pupils in the wild: A dataset for studying pupil detection in unconstrained environments

no code implementations · 18 Nov 2015 · Marc Tonsen, Xucong Zhang, Yusuke Sugano, Andreas Bulling

We further study the influence of image resolution, vision aids, and recording location (indoor, outdoor) on pupil detection performance.

Pupil Detection

Appearance-Based Gaze Estimation in the Wild

6 code implementations · CVPR 2015 · Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling

Appearance-based gaze estimation is believed to work well in real-world settings, but existing datasets have been collected under controlled laboratory conditions, and methods have not been evaluated across multiple datasets.

Gaze Estimation

Learning-by-Synthesis for Appearance-based 3D Gaze Estimation

no code implementations · CVPR 2014 · Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato

Unlike existing appearance-based methods that assume person-specific training data, we use a large amount of cross-subject training data to train a 3D gaze estimator.

3D Reconstruction · Gaze Estimation · +1
