Search Results for author: Kensho Hara

Found 8 papers, 5 papers with code

Traffic Incident Database with Multiple Labels Including Various Perspective Environmental Information

1 code implementation • 17 Dec 2023 • Shota Nishiyama, Takuma Saito, Ryo Nakamura, Go Ohtani, Hirokatsu Kataoka, Kensho Hara

Our proposed dataset aims to improve the performance of traffic accident recognition by annotating ten types of environmental information as supervisory labels, in addition to the presence or absence of a traffic accident.
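A minimal sketch of how one such multi-labeled sample might be represented for training; the field and attribute names below ("weather", "road_type") are illustrative assumptions, not the dataset's actual schema.

```python
# Hypothetical multi-label annotation for one traffic-scene video; attribute
# names are assumptions for illustration, not the real schema.
from dataclasses import dataclass
from typing import Dict

import torch


@dataclass
class IncidentAnnotation:
    video_id: str
    accident: bool               # presence or absence of a traffic accident
    environment: Dict[str, int]  # e.g. {"weather": 2, "road_type": 0, ...}


def to_targets(ann: IncidentAnnotation) -> Dict[str, torch.Tensor]:
    """Turn one annotation into per-attribute classification targets."""
    targets = {"accident": torch.tensor(int(ann.accident))}
    for attr, label in ann.environment.items():
        targets[attr] = torch.tensor(label)
    return targets
```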

Diffusion-based Holistic Texture Rectification and Synthesis

no code implementations • 26 Sep 2023 • Guoqing Hao, Satoshi Iizuka, Kensho Hara, Edgar Simo-Serra, Hirokatsu Kataoka, Kazuhiro Fukui

We present a novel framework for rectifying occlusions and distortions in degraded texture samples from natural images.

Texture Synthesis

Retrieving and Highlighting Action with Spatiotemporal Reference

1 code implementation • 19 May 2020 • Seito Kasai, Yuchi Ishikawa, Masaki Hayashi, Yoshimitsu Aoki, Kensho Hara, Hirokatsu Kataoka

In this paper, we present a framework that jointly retrieves and spatiotemporally highlights actions in videos by enhancing current deep cross-modal retrieval methods.

Action Recognition · Cross-Modal Retrieval · +5
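A minimal sketch of the retrieve-and-highlight idea from the paper above, assuming a text query embedding and per-clip video embeddings have already been produced by some joint embedding model; this is a stand-in illustration, not the authors' architecture.

```python
# Rank videos by cosine similarity to a text query and use per-clip scores as
# a temporal "highlight" signal. Embeddings are assumed to be precomputed.
import torch
import torch.nn.functional as F


def retrieve_and_highlight(query_emb: torch.Tensor,   # (D,)
                           clip_embs: torch.Tensor):  # (num_videos, num_clips, D)
    q = F.normalize(query_emb, dim=-1)
    c = F.normalize(clip_embs, dim=-1)
    clip_scores = torch.einsum("d,vcd->vc", q, c)     # per-clip similarity
    video_scores, _ = clip_scores.max(dim=1)          # each video's best clip
    ranking = video_scores.argsort(descending=True)   # retrieval order
    return ranking, clip_scores                       # clip_scores = highlights


ranking, highlights = retrieve_and_highlight(torch.randn(256), torch.randn(10, 16, 256))
```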

Would Mega-scale Datasets Further Enhance Spatiotemporal 3D CNNs?

10 code implementations • 10 Apr 2020 • Hirokatsu Kataoka, Tenga Wakamiya, Kensho Hara, Yutaka Satoh

Therefore, in the present paper, we conduct an exploration study to improve spatiotemporal 3D CNNs as follows: (i) Recently proposed large-scale video datasets help improve spatiotemporal 3D CNNs in terms of video classification accuracy.

General Classification · Open-Ended Question Answering · +2
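A sketch of the pretrain-then-fine-tune recipe the study examines, using torchvision's Kinetics-400-pretrained R3D-18 as a stand-in for the authors' 3D-ResNets-PyTorch models (https://github.com/kenshohara/3D-ResNets-PyTorch); the target dataset (HMDB-51) is only an example.

```python
# Load a 3D CNN pretrained on a large-scale video dataset and re-head it for a
# smaller downstream dataset; the paper asks how much such pretraining helps.
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

model = r3d_18(weights=R3D_18_Weights.KINETICS400_V1)  # Kinetics-400 pretraining
model.fc = nn.Linear(model.fc.in_features, 51)         # e.g. fine-tune on HMDB-51
# ...then train on the target dataset as usual.
```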

Can Spatiotemporal 3D CNNs Retrace the History of 2D CNNs and ImageNet?

26 code implementations • CVPR 2018 • Kensho Hara, Hirokatsu Kataoka, Yutaka Satoh

The purpose of this study is to determine whether current video datasets have sufficient data for training very deep convolutional neural networks (CNNs) with spatio-temporal three-dimensional (3D) kernels.

Action Recognition
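A minimal sketch of a residual block with spatio-temporal 3D kernels, the building block such networks stack; this is a simplified basic block, not the paper's exact implementation.

```python
# A basic 3D residual block: two 3x3x3 convolutions over (time, height, width)
# plus an identity shortcut, operating on clips shaped (N, C, T, H, W).
import torch
import torch.nn as nn


class BasicBlock3D(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut keeps gradients flowing


clip_features = torch.randn(2, 64, 16, 56, 56)  # (N, C, T, H, W)
print(BasicBlock3D(64)(clip_features).shape)    # torch.Size([2, 64, 16, 56, 56])
```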

Learning Spatio-Temporal Features with 3D Residual Networks for Action Recognition

1 code implementation • 25 Aug 2017 • Kensho Hara, Hirokatsu Kataoka, Yutaka Satoh

The 3D ResNets trained on Kinetics did not suffer from overfitting despite the model's large number of parameters, and achieved better performance than relatively shallow networks such as C3D.

Action Recognition · Hand-Gesture Recognition · +1
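A quick sketch of what the parameter count and clip input look like in practice, again using torchvision's R3D-18 as a stand-in for the paper's 3D ResNets rather than the authors' own models.

```python
# Count parameters of an 18-layer 3D ResNet and run one clip through it.
import torch
from torchvision.models.video import r3d_18

model = r3d_18(weights=None)
num_params = sum(p.numel() for p in model.parameters())
print(f"R3D-18 parameters: {num_params / 1e6:.1f}M")

clip = torch.randn(1, 3, 16, 112, 112)  # (batch, channels, frames, height, width)
logits = model(clip)                    # (1, 400) class scores for Kinetics-400
```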
