no code implementations • ECCV 2020 • Yukihiro Sasagawa, Hajime Nagahara
We propose a new method of domain adaptation for merging multiple models with less effort than creating an additional dataset.
no code implementations • ECCV 2020 • Kohei Sakai, Keita Takahashi, Toshiaki Fujii, Hajime Nagahara
A promising solution for compressive light field acquisition is to use a coded aperture camera, with which an entire light field can be computationally reconstructed from several images captured through differently coded aperture patterns.
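At its core, the coded-aperture measurement described here is a linear projection of the light field. The sketch below is a toy illustration under assumed dimensions and random binary codes (not the authors' actual setup or reconstruction method), using a naive per-pixel least-squares recovery where real systems would use learned or sparsity-based priors:

```python
import numpy as np

rng = np.random.default_rng(0)

n_views, n_pix = 8, 16   # hypothetical: 8 angular views, 16 pixels per view
n_shots = 4              # images captured with different aperture codes

light_field = rng.random((n_views, n_pix))  # ground-truth light field

# Each captured shot sums the angular views weighted by a binary aperture pattern.
codes = rng.integers(0, 2, size=(n_shots, n_views)).astype(float)
shots = codes @ light_field                 # (n_shots, n_pix) measurements

# Naive reconstruction: per-pixel least squares (min-norm solution,
# since 4 measurements per pixel are fewer than 8 unknown views).
recon, *_ = np.linalg.lstsq(codes, shots, rcond=None)
print(recon.shape)  # (8, 16)
```

The point of the sketch is only the forward model: fewer coded shots than views makes the system underdetermined, which is why computational priors are needed for faithful reconstruction.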
1 code implementation • EMNLP (WNUT) 2020 • Sora Ohashi, Tomoyuki Kajiwara, Chenhui Chu, Noriko Takemura, Yuta Nakashima, Hajime Nagahara
We introduce the IDSOU submission for the WNUT-2020 task 2: identification of informative COVID-19 English Tweets.
1 code implementation • LREC 2022 • Haruya Suzuki, Yuto Miyauchi, Kazuki Akiyama, Tomoyuki Kajiwara, Takashi Ninomiya, Noriko Takemura, Yuta Nakashima, Hajime Nagahara
We annotate 35,000 SNS posts with both the writer’s subjective sentiment polarity labels and the reader’s objective ones to construct a Japanese sentiment analysis dataset.
no code implementations • 24 Mar 2024 • Yicheng Deng, Hideaki Hayashi, Hajime Nagahara
In this paper, we propose a Multi-Scale Spatio-Temporal Graph Convolutional Network (SpoT-GCN) for facial expression spotting.
no code implementations • 12 Mar 2024 • Shuji Habuchi, Keita Takahashi, Chihiro Tsutake, Toshiaki Fujii, Hajime Nagahara
Unlike the conventional coded-aperture imaging method, our method applies a sequence of coding patterns during a single exposure of an image frame.
no code implementations • 22 Nov 2023 • Chenhao Li, Taishi Ono, Takeshi Uemori, Hajime Mihara, Alexander Gatto, Hajime Nagahara, Yusuke Moriuchi
To address this problem, we propose Neural Incident Stokes Fields (NeISF), a multi-view inverse rendering framework that reduces ambiguities using polarization cues.
1 code implementation • 7 Nov 2023 • Jiahao Zhang, Bowen Wang, Liangzhi Li, Yuta Nakashima, Hajime Nagahara
Our findings suggest that InMeMo offers a versatile and efficient way to enhance the performance of visual ICL with lightweight training.
1 code implementation • CVPR 2023 • Chenhao Li, Trung Thanh Ngo, Hajime Nagahara
To enhance the supervision of the proposed neural renderer, we also propose an augmented loss.
1 code implementation • CVPR 2023 • Bowen Wang, Liangzhi Li, Yuta Nakashima, Hajime Nagahara
Using some image classification tasks as our testbed, we demonstrate BotCL's potential to rebuild neural networks for better interpretability.
no code implementations • 4 Feb 2023 • Thuong Nguyen Canh, Trung Thanh Ngo, Hajime Nagahara
Lensless imaging protects visual privacy by capturing heavily blurred images that are too degraded for humans to recognize the subject but still contain enough information for machines to infer it.
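The privacy mechanism described above amounts to spreading each scene point's light over many pixels. A minimal stand-in for that effect, assuming a simple box-shaped point spread function rather than a real lensless system's optics, is:

```python
import numpy as np

def heavy_blur(img, k):
    """Box-blur an image by averaging over a k x k neighborhood
    (a toy stand-in for a lensless system's wide point spread function)."""
    H, W = img.shape
    out = np.zeros((H, W), dtype=float)
    pad = np.pad(img.astype(float), k // 2, mode="edge")
    for i in range(H):
        for j in range(W):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return out

img = np.zeros((5, 5))
img[2, 2] = 1.0                    # a single bright point
print(heavy_blur(img, 3).max())    # peak energy spreads from 1.0 to about 0.11
```

Even this crude blur makes a point source unrecognizable pixel-by-pixel while the total intensity (the machine-usable signal) is preserved.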
no code implementations • 31 Jan 2023 • Hugo Lemarchant, Liangzi Li, Yiming Qian, Yuta Nakashima, Hajime Nagahara
Vision Transformers (ViTs) are becoming a very popular paradigm for vision tasks as they achieve state-of-the-art performance on image classification.
1 code implementation • 18 Nov 2022 • Zongshang Pang, Yuta Nakashima, Mayu Otani, Hajime Nagahara
Video summarization aims to select the most informative subset of frames in a video to facilitate efficient video browsing.
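Selecting the most informative subset of frames can be sketched, in its simplest form, as keeping the top-scoring frames under a budget. The scores here are hypothetical inputs; the paper's contribution lies in how such importance is learned, not in this selection step:

```python
import numpy as np

def summarize(scores, budget):
    """Pick the `budget` highest-scoring frames, returned in temporal order.

    scores: per-frame importance scores (assumed given; real summarizers
    learn them from video content).
    """
    top = np.argsort(scores)[-budget:]   # indices of the highest scores
    return sorted(top.tolist())          # restore temporal order for browsing

scores = np.array([0.1, 0.9, 0.3, 0.8, 0.2])
print(summarize(scores, 2))   # [1, 3]
```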
no code implementations • 28 Sep 2022 • Thanh-Trung Ngo, Hajime Nagahara
We tackle the problem of modeling light scattering in homogeneous translucent material and estimating its scattering parameters.
no code implementations • 23 Aug 2022 • Tianwei Chen, Noa Garcia, Mayu Otani, Chenhui Chu, Yuta Nakashima, Hajime Nagahara
Is more data always better to train vision-and-language models?
1 code implementation • 4 Aug 2022 • Sudhakar Kumawat, Hajime Nagahara
This is followed by the Difference module, which applies a pixel-wise intensity subtraction between consecutive frames to highlight motion features and suppress obvious high-level privacy attributes.
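The frame-differencing step described above is straightforward to sketch; the version below uses hypothetical clip shapes and is not the paper's exact module:

```python
import numpy as np

def frame_difference(clip):
    """Pixel-wise intensity difference between consecutive frames.

    clip: array of shape (T, H, W); returns (T-1, H, W).
    Static regions (e.g. identity-revealing appearance) cancel out,
    while moving regions produce large responses.
    """
    return clip[1:] - clip[:-1]

clip = np.zeros((3, 4, 4))
clip[1, 1, 1] = 1.0             # an object appears in frame 1
diffs = frame_difference(clip)
print(diffs.shape)              # (2, 4, 4)
```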
no code implementations • CVPR 2022 • Ryoya Mizuno, Keita Takahashi, Michitaka Yoshida, Chihiro Tsutake, Toshiaki Fujii, Hajime Nagahara
To our knowledge, our method is the first to achieve a finer temporal resolution than the camera itself in compressive light-field acquisition.
no code implementations • 2 Sep 2021 • Yiming Qian, Cheikh Brahim El Vaigh, Yuta Nakashima, Benjamin Renoust, Hajime Nagahara, Yutaka Fujioka
Buddha statues are a part of human culture, especially in Asia, and they have accompanied human civilisation for more than 2,000 years.
1 code implementation • NAACL 2021 • Tomoyuki Kajiwara, Chenhui Chu, Noriko Takemura, Yuta Nakashima, Hajime Nagahara
We annotate 17,000 SNS posts with both the writer's subjective emotional intensity and the reader's objective one to construct a Japanese emotion analysis dataset.
no code implementations • 25 May 2021 • Cheikh Brahim El Vaigh, Noa Garcia, Benjamin Renoust, Chenhui Chu, Yuta Nakashima, Hajime Nagahara
In this paper, we propose a novel use of a knowledge graph that is constructed from annotated data and pseudo-labeled data.
no code implementations • 28 Jan 2021 • Kiichi Goto, Taikan Suehara, Tamaki Yoshioka, Masakazu Kurata, Hajime Nagahara, Yuta Nakashima, Noriko Takemura, Masako Iwasaki
Deep learning is a rapidly evolving technology with the potential to significantly improve the physics reach of collider experiments.
1 code implementation • 25 Nov 2020 • Bowen Wang, Liangzhi Li, Manisha Verma, Yuta Nakashima, Ryo Kawasaki, Hajime Nagahara
Few-shot learning (FSL) approaches are usually based on an assumption that the pre-trained knowledge can be obtained from base (seen) categories and can be well transferred to novel (unseen) categories.
Ranked #34 on Few-Shot Image Classification on CIFAR-FS 5-way (5-shot)
1 code implementation • 7 Nov 2020 • Liangzhi Li, Manisha Verma, Bowen Wang, Yuta Nakashima, Hajime Nagahara, Ryo Kawasaki
Our severity grading method was able to validate crossing points with precision and recall of 96.3% and 96.3%, respectively.
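Precision and recall as reported here follow the standard definitions. The counts below are illustrative only (chosen so that false positives equal false negatives, which is what makes the two figures coincide); they are not the paper's actual confusion-matrix values:

```python
def precision_recall(tp, fp, fn):
    """Standard precision and recall from true positives, false positives,
    and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Matching precision and recall (as in the reported 96.3%/96.3%) arise
# whenever fp == fn; these counts are hypothetical.
p, r = precision_recall(tp=963, fp=37, fn=37)
print(round(p, 3), round(r, 3))   # 0.963 0.963
```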
no code implementations • 19 Oct 2020 • Bowen Wang, Liangzhi Li, Yuta Nakashima, Ryo Kawasaki, Hajime Nagahara, Yasushi Yagi
Semantic video segmentation is a key challenge for various applications.
1 code implementation • ICCV 2021 • Liangzhi Li, Bowen Wang, Manisha Verma, Yuta Nakashima, Ryo Kawasaki, Hajime Nagahara
Explainable artificial intelligence has been gaining attention in the past few years.
1 code implementation • MIDL 2019 • Liangzhi Li, Manisha Verma, Yuta Nakashima, Ryo Kawasaki, Hajime Nagahara
Retinal imaging serves as a valuable tool for diagnosis of various diseases.
no code implementations • LREC 2020 • Koji Tanaka, Chenhui Chu, Haolin Ren, Benjamin Renoust, Yuta Nakashima, Noriko Takemura, Hajime Nagahara, Takao Fujikawa
In this paper, we propose a full pipeline for the analysis of a large corpus covering a century of public meetings in historical Australian newspapers, from construction to visual exploration.
no code implementations • 23 Dec 2019 • Kyuho Bae, Andre Ivan, Hajime Nagahara, In Kyu Park
To tackle this problem, we propose a deep learning-based method for synthesizing a light field video from a monocular video.
2 code implementations • 12 Dec 2019 • Liangzhi Li, Manisha Verma, Yuta Nakashima, Hajime Nagahara, Ryo Kawasaki
Retinal vessel segmentation is of great interest for diagnosis of retinal vascular diseases.
Ranked #5 on Retinal Vessel Segmentation on CHASE_DB1
no code implementations • 17 Sep 2019 • Benjamin Renoust, Matheus Oliveira Franca, Jacob Chan, Van Le, Ayaka Uesaka, Yuta Nakashima, Hajime Nagahara, Jueren Wang, Yutaka Fujioka
We introduce BUDA.ART, a system designed to assist researchers in Art History in exploring and analyzing an archive of pictures of Buddha statues.
no code implementations • 17 Sep 2019 • Benjamin Renoust, Matheus Oliveira Franca, Jacob Chan, Noa Garcia, Van Le, Ayaka Uesaka, Yuta Nakashima, Hajime Nagahara, Jueren Wang, Yutaka Fujioka
While Buddhism has spread along the Silk Roads, many pieces of art have been displaced.
no code implementations • 31 May 2019 • Tomoyuki Kajiwara, Chihiro Tanikawa, Yuujin Shimizu, Chenhui Chu, Takashi Yamashiro, Hajime Nagahara
We work on the task of automatically designing a treatment plan from the findings included in the medical certificate written by the dentist.
no code implementations • ECCV 2018 • Yasutaka Inagaki, Yuto Kobayashi, Keita Takahashi, Toshiaki Fujii, Hajime Nagahara
To make the acquisition process efficient, coded aperture cameras have been successfully adopted; using these cameras, a light field is computationally reconstructed from several images acquired with different aperture patterns.
no code implementations • ECCV 2018 • Michitaka Yoshida, Akihiko Torii, Masatoshi Okutomi, Kenta Endo, Yukinobu Sugiyama, Rin-ichiro Taniguchi, Hajime Nagahara
Compressive video sensing is the process of encoding multiple sub-frames into a single frame with controlled sensor exposures and reconstructing the sub-frames from the single compressed frame.
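The encoding half of compressive video sensing described here can be written down directly: each pixel integrates only the sub-frames its exposure code selects. The shapes and random codes below are hypothetical, and the (hard) reconstruction half is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

T, H, W = 8, 4, 4                  # hypothetical: 8 sub-frames per exposure
video = rng.random((T, H, W))      # the sub-frames to be compressed

# A per-pixel binary exposure code decides which sub-frames each pixel
# integrates during the single exposure.
code = rng.integers(0, 2, size=(T, H, W)).astype(float)
coded_frame = (code * video).sum(axis=0)   # single captured frame, (H, W)

print(coded_frame.shape)   # (4, 4)
```

Recovering the `T` sub-frames from this one coded frame is the inverse problem the paper addresses.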
no code implementations • ICCV 2015 • Yichao Xu, Hajime Nagahara, Atsushi Shimada, Rin-ichiro Taniguchi
The segmentation of transparent objects can be very useful in computer vision applications.
no code implementations • CVPR 2015 • Trung Ngo Thanh, Hajime Nagahara, Rin-ichiro Taniguchi
In contrast, the photometric stereo method with multiple light sources can disambiguate the surface orientation and gives a strong relationship between surface normals and light directions.
no code implementations • 16 Jul 2014 • Yichao Xu, Kazuki Maeno, Hajime Nagahara, Rin-ichiro Taniguchi
The light field camera is useful for computer graphics and vision applications.
no code implementations • CVPR 2013 • Atsushi Shimada, Hajime Nagahara, Rin-ichiro Taniguchi
Although a result will be output with some delay because information is taken from a future period, our proposed approach improves the accuracy by about 30% if only a 33-millisecond delay is acceptable.
no code implementations • CVPR 2013 • Kazuki Maeno, Hajime Nagahara, Atsushi Shimada, Rin-ichiro Taniguchi
These approaches, however, cannot be applied to transparent objects made of glass or plastic, as such objects take on the visual features of background objects, and their appearance varies dramatically with changes in the scene background.