Search Results for author: Go Irie

Found 16 papers, 6 papers with code

Estimating Indoor Scene Depth Maps from Ultrasonic Echoes

no code implementations • 5 Sep 2024 • Junpei Honma, Akisato Kimura, Go Irie

Measuring 3D geometric structures of indoor scenes requires dedicated depth sensors, which are not always available.

Depth Estimation

Can Pre-trained Networks Detect Familiar Out-of-Distribution Data?

1 code implementation • 2 Oct 2023 • Atsuyuki Miyai, Qing Yu, Go Irie, Kiyoharu Aizawa

We consider that such data may significantly affect the performance of large pre-trained networks because the discriminability of these OOD data depends on the pre-training algorithm.

Out-of-Distribution Detection • Out of Distribution (OOD) Detection

Open-Set Domain Adaptation with Visual-Language Foundation Models

no code implementations • 30 Jul 2023 • Qing Yu, Go Irie, Kiyoharu Aizawa

Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge obtained from a source domain with labeled data to a target domain with unlabeled data.

Unsupervised Domain Adaptation

LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning

1 code implementation • NeurIPS 2023 • Atsuyuki Miyai, Qing Yu, Go Irie, Kiyoharu Aizawa

CLIP's local features contain many ID-irrelevant nuisances (e.g., backgrounds); by learning to push them away from the ID class text embeddings, we can remove these nuisances from the ID class text embeddings and enhance the separation between ID and OOD.

Out-of-Distribution Detection • Out of Distribution (OOD) Detection
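
The snippet above describes pushing ID-irrelevant local features away from the ID class text embeddings. A minimal numpy sketch of that idea follows; the function name, the top-k nuisance rule, and the temperature are illustrative assumptions, not the authors' released implementation (which builds on CLIP and prompt learning).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def locoop_style_reg(local_feats, text_embs, id_class, tau=0.07, topk=1):
    """Patches whose top-k predicted classes do NOT include the ID class
    are treated as ID-irrelevant nuisances (e.g., backgrounds); their
    prediction entropy is maximized, pushing them away from every ID
    class text embedding. Returns the (negated-entropy) loss term."""
    f = local_feats / np.linalg.norm(local_feats, axis=1, keepdims=True)
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    probs = softmax(f @ t.T / tau, axis=1)          # (patches, classes)
    ranks = np.argsort(-probs, axis=1)[:, :topk]    # top-k class ids per patch
    nuisance = ~(ranks == id_class).any(axis=1)
    if not nuisance.any():
        return 0.0
    p = probs[nuisance]
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    return float(-entropy.mean())                   # minimizing this maximizes entropy
```

Minimizing this term alongside the usual ID classification loss would encourage nuisance patches toward a uniform distribution over ID classes, which is the separation effect the snippet describes.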

Zero-Shot In-Distribution Detection in Multi-Object Settings Using Vision-Language Foundation Models

2 code implementations • 10 Apr 2023 • Atsuyuki Miyai, Qing Yu, Go Irie, Kiyoharu Aizawa

First, images should be collected using only the name of the ID class without training on the ID data.

Listening Human Behavior: 3D Human Pose Estimation With Acoustic Signals

no code implementations • CVPR 2023 • Yuto Shibata, Yutaka Kawashima, Mariko Isogawa, Go Irie, Akisato Kimura, Yoshimitsu Aoki

Aiming to capture subtle sound changes to reveal detailed pose information, we explicitly extract phase features from the acoustic signals together with typical spectrum features and feed them into our human pose estimation network.

3D Human Pose Estimation
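
The snippet above mentions feeding explicit phase features alongside typical spectrum features. A minimal numpy sketch of extracting both from a framed signal, assuming a simple Hann-windowed STFT (window and hop sizes are illustrative, not the paper's settings):

```python
import numpy as np

def spectrum_and_phase(signal, win=256, hop=128):
    """Frame a 1-D acoustic signal and return both the magnitude
    spectrum (the typical feature) and the explicit phase feature."""
    n_frames = 1 + (len(signal) - win) // hop
    window = np.hanning(win)
    frames = np.stack([signal[i * hop : i * hop + win] * window
                       for i in range(n_frames)])
    spec = np.fft.rfft(frames, axis=1)              # one-sided spectrum
    return np.abs(spec), np.angle(spec)             # each (n_frames, win//2 + 1)
```

Both feature maps would then be concatenated and fed into the pose estimation network, per the description above.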

Rethinking Rotation in Self-Supervised Contrastive Learning: Adaptive Positive or Negative Data Augmentation

1 code implementation • 23 Oct 2022 • Atsuyuki Miyai, Qing Yu, Daiki Ikami, Go Irie, Kiyoharu Aizawa

The semantics of an image can be rotation-invariant or rotation-variant, so whether the rotated image is treated as positive or negative should be determined based on the content of the image.

Contrastive Learning • Data Augmentation
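
The decision the snippet describes, treating a rotated view as positive or negative depending on image content, can be sketched as follows. The cosine-similarity rule and threshold here are illustrative stand-ins; the paper learns this decision rather than thresholding a fixed similarity.

```python
import numpy as np

def rotation_role(feat, feat_rot, thresh=0.5):
    """Route a rotated view into the positive or negative set depending on
    whether the image appears rotation-invariant: if features of the
    original and rotated image stay similar, rotation preserved the
    semantics, so the rotated view can serve as a positive."""
    a = np.asarray(feat, dtype=float)
    b = np.asarray(feat_rot, dtype=float)
    sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return "positive" if sim >= thresh else "negative"
```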

Generalized Domain Adaptation

no code implementations • CVPR 2021 • Yu Mitsuzumi, Go Irie, Daiki Ikami, Takashi Shibata

The key to our approach is self-supervised class-destructive learning, which enables the learning of class-invariant representations and domain-adversarial classifiers without using any domain labels.

Unsupervised Domain Adaptation

Computational Attention System for Children, Adults and Elderly

no code implementations • 18 Apr 2019 • Onkar Krishna, Kiyoharu Aizawa, Go Irie

Observers of different age groups have shown different scene-viewing tendencies, independent of the class of the image viewed.

Parallel Grid Pooling for Data Augmentation

1 code implementation • 30 Mar 2018 • Akito Takeki, Daiki Ikami, Go Irie, Kiyoharu Aizawa

Convolutional neural network (CNN) architectures utilize downsampling layers, which restrict the subsequent layers to learn spatially invariant features while reducing computational costs.

General Classification • Image Augmentation +1
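
A stride-2 downsampling layer keeps only one of the four spatial grids of a feature map; parallel grid pooling instead keeps all four shifted grids as parallel branches. A minimal numpy sketch of that split (the function name is illustrative; the paper additionally shares the subsequent layers across branches and aggregates their outputs):

```python
import numpy as np

def parallel_grid_pooling(x):
    """Split a feature map into the four stride-2 grids that ordinary
    downsampling would choose between, keeping all of them as branches."""
    h = x.shape[-2] // 2 * 2
    w = x.shape[-1] // 2 * 2
    x = x[..., :h, :w]                                  # crop to even spatial size
    branches = [x[..., i::2, j::2] for i in (0, 1) for j in (0, 1)]
    return np.stack(branches)                           # (4, ..., H//2, W//2)
```

Because no spatial positions are discarded, the later layers can still learn from all shifted views of the input, which is the point the abstract snippet makes about downsampling restricting spatially invariant features.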

Locally Linear Hashing for Extracting Non-Linear Manifolds

no code implementations • CVPR 2014 • Go Irie, Zhenguo Li, Xiao-Ming Wu, Shih-Fu Chang

Previous efforts in hashing aim to preserve data variance or pairwise affinity, but neither is adequate for capturing the manifold structures hidden in most visual data.

Quantization
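
The manifold structure the snippet refers to is typically captured by locally linear reconstruction weights, as in LLE. A minimal numpy sketch of computing such weights for one point is below; the regularization constant is an assumption, and this illustrates only the manifold-preserving ingredient, not the full hashing algorithm (which also learns binary codes).

```python
import numpy as np

def lle_weights(X, i, neighbors, reg=1e-3):
    """Weights that best reconstruct point i from its neighbors,
    constrained to sum to one (the LLE-style local geometry that
    manifold-preserving hashing tries to keep in the binary codes)."""
    Z = X[neighbors] - X[i]                             # center neighbors on x_i
    G = Z @ Z.T                                         # local Gram matrix
    G = G + reg * np.trace(G) * np.eye(len(neighbors))  # regularize for stability
    w = np.linalg.solve(G, np.ones(len(neighbors)))
    return w / w.sum()                                  # enforce sum-to-one
```

For a point lying midway between two neighbors, the weights come out near (0.5, 0.5), reflecting the local linear geometry that variance- or affinity-preserving hashing would miss.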
