Search Results for author: Hajime Nagahara

Found 39 papers, 13 papers with code

YOLO in the Dark - Domain Adaptation Method for Merging Multiple Models -

no code implementations ECCV 2020 Yukihiro Sasagawa, Hajime Nagahara

We propose a new method of domain adaptation for merging multiple models with less effort than creating an additional dataset.

Domain Adaptation Knowledge Distillation +2

Acquiring Dynamic Light Fields through Coded Aperture Camera

no code implementations ECCV 2020 Kohei Sakai, Keita Takahashi, Toshiaki Fujii, Hajime Nagahara

A promising solution for compressive light field acquisition is to use a coded aperture camera, with which an entire light field can be computationally reconstructed from several images captured through differently-coded aperture patterns.
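The measurement model behind coded-aperture light field capture is linear: each captured image is a weighted sum of the sub-aperture views, with weights set by the aperture's transmittance pattern. A toy sketch of that forward model and a naive recovery (all array sizes, the random patterns, and the plain least-squares solve are illustrative assumptions, not the paper's reconstruction method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy light field: 9 sub-aperture views of a 16x16 scene (hypothetical sizes).
n_views, H, W = 9, 16, 16
light_field = rng.random((n_views, H, W))

# Each coded-aperture capture is a weighted sum of the views, the weights
# being the transmittance pattern placed in the aperture for that shot.
n_captures = 4
patterns = rng.random((n_captures, n_views))            # coding patterns
captures = np.tensordot(patterns, light_field, axes=1)  # (n_captures, H, W)

# With fewer captures than views the system is underdetermined; real methods
# add priors or learned decoders. Here, plain least squares per pixel.
b = captures.reshape(n_captures, -1)
recon, *_ = np.linalg.lstsq(patterns, b, rcond=None)
recon = recon.reshape(n_views, H, W)
```

The minimum-norm solution reproduces the captures exactly but not the true views; closing that gap is where compressive priors come in.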

Multi-Scale Spatio-Temporal Graph Convolutional Network for Facial Expression Spotting

no code implementations24 Mar 2024 Yicheng Deng, Hideaki Hayashi, Hajime Nagahara

In this paper, we propose a Multi-Scale Spatio-Temporal Graph Convolutional Network (SpoT-GCN) for facial expression spotting.

Contrastive Learning Micro-Expression Spotting

Time-Efficient Light-Field Acquisition Using Coded Aperture and Events

no code implementations12 Mar 2024 Shuji Habuchi, Keita Takahashi, Chihiro Tsutake, Toshiaki Fujii, Hajime Nagahara

Different from the conventional coded-aperture imaging method, our method applies a sequence of coding patterns during a single exposure for an image frame.

NeISF: Neural Incident Stokes Field for Geometry and Material Estimation

no code implementations22 Nov 2023 Chenhao Li, Taishi Ono, Takeshi Uemori, Hajime Mihara, Alexander Gatto, Hajime Nagahara, Yusuke Moriuchi

To address this problem, we propose Neural Incident Stokes Fields (NeISF), a multi-view inverse rendering framework that reduces ambiguities using polarization cues.

Inverse Rendering

Instruct Me More! Random Prompting for Visual In-Context Learning

1 code implementation7 Nov 2023 Jiahao Zhang, Bowen Wang, Liangzhi Li, Yuta Nakashima, Hajime Nagahara

Our findings suggest that InMeMo offers a versatile and efficient way to enhance the performance of visual ICL with lightweight training.

Foreground Segmentation In-Context Learning +2

Learning Bottleneck Concepts in Image Classification

1 code implementation CVPR 2023 Bowen Wang, Liangzhi Li, Yuta Nakashima, Hajime Nagahara

Using some image classification tasks as our testbed, we demonstrate BotCL's potential to rebuild neural networks for better interpretability.

Classification Image Classification

Human-Imperceptible Identification with Learnable Lensless Imaging

no code implementations4 Feb 2023 Thuong Nguyen Canh, Trung Thanh Ngo, Hajime Nagahara

Lensless imaging protects visual privacy by capturing heavily blurred images that prevent humans from recognizing the subject but contain enough information for machines to perform inference.

Inference Time Evidences of Adversarial Attacks for Forensic on Transformers

no code implementations31 Jan 2023 Hugo Lemarchant, Liangzi Li, Yiming Qian, Yuta Nakashima, Hajime Nagahara

Vision Transformers (ViTs) are becoming a very popular paradigm for vision tasks as they achieve state-of-the-art performance on image classification.

Image Classification

Contrastive Losses Are Natural Criteria for Unsupervised Video Summarization

1 code implementation18 Nov 2022 Zongshang Pang, Yuta Nakashima, Mayu Otani, Hajime Nagahara

Video summarization aims to select the most informative subset of frames in a video to facilitate efficient video browsing.

Image Classification Representation Learning +1

A General Scattering Phase Function for Inverse Rendering

no code implementations28 Sep 2022 Thanh-Trung Ngo, Hajime Nagahara

We tackle the problem of modeling light scattering in homogeneous translucent material and estimating its scattering parameters.

Inverse Rendering

Privacy-Preserving Action Recognition via Motion Difference Quantization

1 code implementation4 Aug 2022 Sudhakar Kumawat, Hajime Nagahara

This is followed by the Difference module, which applies a pixel-wise intensity subtraction between consecutive frames to highlight motion features and suppress obvious high-level privacy attributes.

Action Recognition Privacy Preserving +2
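The differencing step is simple to illustrate: subtracting consecutive frames cancels static appearance (a privacy cue) while preserving motion. A minimal sketch, where the array sizes and the fixed uniform quantizer are illustrative assumptions standing in for the paper's learned modules:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy video clip: 8 frames of 32x32 grayscale (hypothetical sizes).
frames = rng.random((8, 32, 32)).astype(np.float32)

# Difference step: pixel-wise subtraction of consecutive frames.
# Static appearance cancels out; motion is preserved.
diffs = frames[1:] - frames[:-1]

# A perfectly static clip yields all-zero differences.
static = np.full((8, 32, 32), 0.5, dtype=np.float32)
assert np.allclose(static[1:] - static[:-1], 0.0)

# Coarse quantization further suppresses low-amplitude appearance leakage
# (a simplified stand-in for a learned quantization module).
levels = 8
quantized = np.round(diffs * levels) / levels
```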

Acquiring a Dynamic Light Field through a Single-Shot Coded Image

no code implementations CVPR 2022 Ryoya Mizuno, Keita Takahashi, Michitaka Yoshida, Chihiro Tsutake, Toshiaki Fujii, Hajime Nagahara

To our knowledge, our method is the first to achieve a finer temporal resolution than the camera itself in compressive light-field acquisition.

Built Year Prediction from Buddha Face with Heterogeneous Labels

no code implementations2 Sep 2021 Yiming Qian, Cheikh Brahim El Vaigh, Yuta Nakashima, Benjamin Renoust, Hajime Nagahara, Yutaka Fujioka

Buddha statues are a part of human culture, especially in Asia, and they have accompanied human civilisation for more than 2,000 years.

Cultural Vocal Bursts Intensity Prediction

WRIME: A New Dataset for Emotional Intensity Estimation with Subjective and Objective Annotations

1 code implementation NAACL 2021 Tomoyuki Kajiwara, Chenhui Chu, Noriko Takemura, Yuta Nakashima, Hajime Nagahara

We annotate 17,000 SNS posts with both the writer's subjective emotional intensity and the reader's objective one to construct a Japanese emotion analysis dataset.

Emotion Recognition

Development of a Vertex Finding Algorithm using Recurrent Neural Network

no code implementations28 Jan 2021 Kiichi Goto, Taikan Suehara, Tamaki Yoshioka, Masakazu Kurata, Hajime Nagahara, Yuta Nakashima, Noriko Takemura, Masako Iwasaki

Deep learning is a rapidly evolving technology with the potential to significantly improve the physics reach of collider experiments.

Match Them Up: Visually Explainable Few-shot Image Classification

1 code implementation25 Nov 2020 Bowen Wang, Liangzhi Li, Manisha Verma, Yuta Nakashima, Ryo Kawasaki, Hajime Nagahara

Few-shot learning (FSL) approaches are usually based on an assumption that the pre-trained knowledge can be obtained from base (seen) categories and can be well transferred to novel (unseen) categories.

Classification Few-Shot Image Classification +2

Constructing a Public Meeting Corpus

no code implementations LREC 2020 Koji Tanaka, Chenhui Chu, Haolin Ren, Benjamin Renoust, Yuta Nakashima, Noriko Takemura, Hajime Nagahara, Takao Fujikawa

In this paper, we propose a full pipeline for analyzing a large corpus covering a century of public meetings in historical Australian newspapers, from construction to visual exploration.

Optical Character Recognition (OCR)

5D Light Field Synthesis from a Monocular Video

no code implementations23 Dec 2019 Kyuho Bae, Andre Ivan, Hajime Nagahara, In Kyu Park

To tackle this problem, we propose a deep learning-based method for synthesizing a light field video from a monocular video.

Depth Estimation

BUDA.ART: A Multimodal Content-Based Analysis and Retrieval System for Buddha Statues

no code implementations17 Sep 2019 Benjamin Renoust, Matheus Oliveira Franca, Jacob Chan, Van Le, Ayaka Uesaka, Yuta Nakashima, Hajime Nagahara, Jueren Wang, Yutaka Fujioka

We introduce BUDA.ART, a system designed to assist researchers in art history in exploring and analyzing an archive of pictures of Buddha statues.

Retrieval

Using Natural Language Processing to Develop an Automated Orthodontic Diagnostic System

no code implementations31 May 2019 Tomoyuki Kajiwara, Chihiro Tanikawa, Yuujin Shimizu, Chenhui Chu, Takashi Yamashiro, Hajime Nagahara

We work on the task of automatically designing a treatment plan from the findings included in the medical certificate written by the dentist.

Learning to Capture Light Fields through a Coded Aperture Camera

no code implementations ECCV 2018 Yasutaka Inagaki, Yuto Kobayashi, Keita Takahashi, Toshiaki Fujii, Hajime Nagahara

To make the acquisition process efficient, coded aperture cameras were successfully adopted; using these cameras, a light field is computationally reconstructed from several images that are acquired with different aperture patterns.

Joint optimization for compressive video sensing and reconstruction under hardware constraints

no code implementations ECCV 2018 Michitaka Yoshida, Akihiko Torii, Masatoshi Okutomi, Kenta Endo, Yukinobu Sugiyama, Rin-ichiro Taniguchi, Hajime Nagahara

Compressive video sensing is the process of encoding multiple sub-frames into a single frame with controlled sensor exposures and reconstructing the sub-frames from the single compressed frame.

Compressive Sensing
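The encoding the abstract describes can be sketched directly: a per-pixel exposure code decides which sub-frame slots each pixel integrates, and the sensor reads out a single compressed frame. A toy forward model (the clip size and the random binary mask are illustrative assumptions; the paper jointly optimizes the code and the reconstruction under hardware constraints):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy example: 8 sub-frames of a 16x16 scene (hypothetical sizes).
T, H, W = 8, 16, 16
subframes = rng.random((T, H, W))

# Per-pixel binary exposure code: each pixel is "open" in a subset of the
# sub-frame slots; hardware constrains which patterns are realizable.
mask = (rng.random((T, H, W)) > 0.5).astype(np.float64)

# The sensor integrates the coded exposures into one compressed frame.
compressed = (mask * subframes).sum(axis=0)  # (H, W)

# Reconstruction then inverts this many-to-one mapping, typically with a
# sparsity prior or a learned decoder.
```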

Shape and Light Directions From Shading and Polarization

no code implementations CVPR 2015 Trung Ngo Thanh, Hajime Nagahara, Rin-ichiro Taniguchi

In contrast, the photometric stereo method, which uses multiple light sources, can disambiguate the surface orientation and gives a strong relationship between the surface normals and the light directions.

Background Modeling Based on Bidirectional Analysis

no code implementations CVPR 2013 Atsushi Shimada, Hajime Nagahara, Rin-ichiro Taniguchi

Although the result is output with some delay, because information is taken from a future period, our proposed approach improves accuracy by about 30% if a delay of only 33 milliseconds is acceptable.

Light Field Distortion Feature for Transparent Object Recognition

no code implementations CVPR 2013 Kazuki Maeno, Hajime Nagahara, Atsushi Shimada, Rin-ichiro Taniguchi

These approaches, however, cannot be applied to transparent objects made of glass or plastic, as such objects take on the visual features of the background, and their appearance varies dramatically with changes in the scene background.

Object Object Recognition +1
