Search Results for author: Qing Yu

Found 20 papers, 10 papers with code

Global Estimation of Building-Integrated Facade and Rooftop Photovoltaic Potential by Integrating 3D Building Footprint and Spatio-Temporal Datasets

1 code implementation • 2 Dec 2024 • Qing Yu, Kechuan Dong, Zhiling Guo, Jiaxing Li, Hongjun Tan, Yanxiu Jin, Jian Yuan, Haoran Zhang, Junwei Liu, Qi Chen, Jinyue Yan

This research tackles the challenges of estimating Building-Integrated Photovoltaics (BIPV) potential across various temporal and spatial scales, accounting for different geographical climates and urban morphology.

Multiscale spatiotemporal heterogeneity analysis of bike-sharing system's self-loop phenomenon: Evidence from Shanghai

no code implementations • 26 Nov 2024 • Yichen Wang, Qing Yu, Yancun Song

Marginal treatment effects of residential land use are higher on streets with middle-aged residents, high fixed employment, and low car ownership.

Chronologically Accurate Retrieval for Temporal Grounding of Motion-Language Models

no code implementations • 22 Jul 2024 • Kent Fujiwara, Mikihiro Tanaka, Qing Yu

To achieve better temporal alignment between text and motion, we further propose using these texts with shuffled event sequences as negative samples during training to reinforce the motion-language models.

Motion Generation • Retrieval
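
The snippet above describes using event-shuffled captions as extra negatives in contrastive training. A minimal sketch of that idea, with illustrative names and shapes rather than the paper's code:

```python
import torch
import torch.nn.functional as F

def contrastive_loss_with_shuffled_negatives(motion_emb, text_emb, shuffled_text_emb, tau=0.07):
    """InfoNCE over a batch, with event-shuffled captions appended as extra negatives.

    motion_emb:        (B, D) embeddings of motion sequences
    text_emb:          (B, D) embeddings of the matching captions
    shuffled_text_emb: (B, D) embeddings of the same captions with event order shuffled
    """
    m = F.normalize(motion_emb, dim=-1)
    t = F.normalize(torch.cat([text_emb, shuffled_text_emb], dim=0), dim=-1)  # (2B, D)
    logits = m @ t.t() / tau            # (B, 2B): column i (i < B) is motion i's true caption
    targets = torch.arange(m.size(0))   # positive for motion i is caption i
    return F.cross_entropy(logits, targets)

# Toy usage with random embeddings
B, D = 4, 32
loss = contrastive_loss_with_shuffled_negatives(torch.randn(B, D), torch.randn(B, D), torch.randn(B, D))
print(loss.item())
```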

Exploring Vision Transformers for 3D Human Motion-Language Models with Motion Patches

no code implementations • CVPR 2024 • Qing Yu, Mikihiro Tanaka, Kent Fujiwara

To build a cross-modal latent space between 3D human motion and language, acquiring large-scale and high-quality human motion data is crucial.

Human Interaction Recognition • Transfer Learning

Can Pre-trained Networks Detect Familiar Out-of-Distribution Data?

1 code implementation • 2 Oct 2023 • Atsuyuki Miyai, Qing Yu, Go Irie, Kiyoharu Aizawa

We posit that such data may significantly affect the performance of large pre-trained networks, because the discriminability of these OOD data depends on the pre-training algorithm.

Out-of-Distribution Detection • Out of Distribution (OOD) Detection

Open-Set Domain Adaptation with Visual-Language Foundation Models

no code implementations • 30 Jul 2023 • Qing Yu, Go Irie, Kiyoharu Aizawa

Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge obtained from a source domain with labeled data to a target domain with unlabeled data.

Unsupervised Domain Adaptation

LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning

1 code implementation • NeurIPS 2023 • Atsuyuki Miyai, Qing Yu, Go Irie, Kiyoharu Aizawa

CLIP's local features contain many ID-irrelevant nuisances (e.g., backgrounds), and by learning to push them away from the ID class text embeddings, we can remove the nuisances in the ID class text embeddings and enhance the separation between ID and OOD.

Out-of-Distribution Detection • Out of Distribution (OOD) Detection
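
The snippet describes the core regularizer: local CLIP features irrelevant to the ID classes are pushed away from the ID text embeddings via entropy maximization. A hedged sketch of one way to write that term; the tensor names and the top-k selection rule are assumptions, not the released code:

```python
import torch
import torch.nn.functional as F

def ood_regularization(local_feats, text_embs, labels, top_k=5, tau=0.01):
    """Entropy-maximization term on ID-irrelevant local regions (sketch).

    local_feats: (B, N, D) CLIP local (patch) features
    text_embs:   (C, D)    ID class text embeddings (e.g., from learned prompts)
    labels:      (B,)      ground-truth ID class per image
    Regions whose top-k predicted classes do NOT include the true class are
    treated as ID-irrelevant, and the entropy of their class posterior is maximized.
    """
    local_feats = F.normalize(local_feats, dim=-1)
    text_embs = F.normalize(text_embs, dim=-1)
    sims = local_feats @ text_embs.t() / tau                # (B, N, C)
    topk = sims.topk(top_k, dim=-1).indices                 # (B, N, k)
    irrelevant = ~(topk == labels[:, None, None]).any(-1)   # (B, N) mask
    probs = sims.softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1)  # (B, N)
    # Maximize entropy on irrelevant regions -> minimize its negative
    return -entropy[irrelevant].mean()

B, N, D, C = 2, 49, 64, 10
loss = ood_regularization(torch.randn(B, N, D), torch.randn(C, D), torch.randint(0, C, (B,)))
print(loss.item())
```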

Noisy Universal Domain Adaptation via Divergence Optimization for Visual Recognition

1 code implementation • 20 Apr 2023 • Qing Yu, Atsushi Hashimoto, Yoshitaka Ushiku

To transfer the knowledge learned from a labeled source domain to an unlabeled target domain, many studies have worked on universal domain adaptation (UniDA), where there is no constraint on the label sets of the source domain and target domain.

Universal Domain Adaptation

Zero-Shot In-Distribution Detection in Multi-Object Settings Using Vision-Language Foundation Models

2 code implementations • 10 Apr 2023 • Atsuyuki Miyai, Qing Yu, Go Irie, Kiyoharu Aizawa

First, images should be collected using only the name of the ID class without training on the ID data.
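
The snippet states the zero-shot constraint: only the ID class names are available, with no training on ID data. A minimal sketch of a CLIP-based zero-shot ID score in that spirit; the prompt template and scoring rule are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def zero_shot_id_score(image_emb, class_text_embs, tau=0.01):
    """Maximum softmax over cosine similarities to ID class-name prompts.

    image_emb:       (D,)   CLIP image embedding
    class_text_embs: (C, D) embeddings of prompts like "a photo of a {id_class}"
    A high score suggests the image contains an ID class; no ID training is used.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    class_text_embs = F.normalize(class_text_embs, dim=-1)
    sims = class_text_embs @ image_emb / tau   # (C,) scaled cosine similarities
    return sims.softmax(dim=-1).max().item()

D, C = 64, 5
print(zero_shot_id_score(torch.randn(D), torch.randn(C, D)))
```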

Rethinking Rotation in Self-Supervised Contrastive Learning: Adaptive Positive or Negative Data Augmentation

1 code implementation • 23 Oct 2022 • Atsuyuki Miyai, Qing Yu, Daiki Ikami, Go Irie, Kiyoharu Aizawa

The semantics of an image can be rotation-invariant or rotation-variant, so whether the rotated image is treated as positive or negative should be determined based on the content of the image.

Contrastive Learning • Data Augmentation
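
The snippet's point is that a rotated view should count as a positive for rotation-invariant images and as a negative for rotation-variant ones. A sketch of how that routing could look inside an InfoNCE loss; the per-image invariance flag is assumed to come from elsewhere (e.g., a pretext classifier):

```python
import torch
import torch.nn.functional as F

def adaptive_rotation_infonce(anchor, augmented, rotated, is_invariant, tau=0.1):
    """InfoNCE where each rotated view is a positive or a negative per image.

    anchor:       (B, D) embeddings of the original views
    augmented:    (B, D) embeddings of standard (non-rotation) augmentations
    rotated:      (B, D) embeddings of rotated views
    is_invariant: (B,)   bool; True if the image's semantics survive rotation
    """
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(augmented, dim=-1)
    r = F.normalize(rotated, dim=-1)

    sims = a @ p.t() / tau        # (B, B): diagonal entries are the true positives
    rot = (a * r).sum(-1) / tau   # (B,): similarity to own rotated view
    B = a.size(0)
    losses = []
    for i in range(B):
        negatives = sims[i][torch.arange(B) != i]   # in-batch negatives
        if is_invariant[i]:
            # Rotated view acts as a second positive
            positives = torch.stack([sims[i, i], rot[i]])
            logits = torch.cat([positives, negatives])
            losses.append(-torch.logsumexp(positives, 0) + torch.logsumexp(logits, 0))
        else:
            # Rotated view joins the negatives
            logits = torch.cat([sims[i, i].unsqueeze(0), negatives, rot[i].unsqueeze(0)])
            losses.append(-logits[0] + torch.logsumexp(logits, 0))
    return torch.stack(losses).mean()

B, D = 4, 32
loss = adaptive_rotation_infonce(torch.randn(B, D), torch.randn(B, D),
                                 torch.randn(B, D), torch.tensor([True, False, True, False]))
print(loss.item())
```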

Noisy Annotation Refinement for Object Detection

no code implementations • 20 Oct 2021 • Jiafeng Mao, Qing Yu, Yoko Yamakata, Kiyoharu Aizawa

In this study, we propose a new problem setting: training object detectors on datasets whose class-label and bounding-box annotations contain entangled noise.

Object • object-detection +1

Divergence Optimization for Noisy Universal Domain Adaptation

1 code implementation • CVPR 2021 • Qing Yu, Atsushi Hashimoto, Yoshitaka Ushiku

Hence, we consider a new realistic setting called Noisy UniDA, in which classifiers are trained with noisy labeled data from the source domain and unlabeled data with an unknown class distribution from the target domain.

Universal Domain Adaptation

The Gross-Llewellyn Smith sum rule up to ${\cal O}(\alpha_s^4)$-order QCD corrections

no code implementations • 26 Jan 2021 • Xu-Dong Huang, Xing-Gang Wu, Qing Yu, Xu-Chang Zheng, Jun Zeng

In the paper, we analyze the properties of the Gross-Llewellyn Smith (GLS) sum rule by using the $\mathcal{O}(\alpha_s^4)$-order QCD corrections with the help of the principle of maximum conformality (PMC).

High Energy Physics - Phenomenology
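
For context, the GLS sum rule fixes the integral of the $xF_3$ structure function to the number of valence quarks, with a perturbative QCD series on top. Schematically, in a standard textbook form (the coefficients $c_i$ are scheme-dependent constants; this is not the paper's exact expansion):

```latex
% GLS sum rule: leading order gives 3 (the number of valence quarks),
% corrected by a pQCD series in a_s = \alpha_s/\pi up to O(\alpha_s^4)
\frac{1}{2}\int_0^1 \mathrm{d}x\, F_3^{\nu p + \bar{\nu} p}(x, Q^2)
  = 3\left[ 1 - a_s - c_2\, a_s^2 - c_3\, a_s^3 - c_4\, a_s^4 + \cdots \right],
\qquad a_s = \frac{\alpha_s(Q)}{\pi}
```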

Unsupervised Out-of-Distribution Detection by Maximum Classifier Discrepancy

1 code implementation • ICCV 2019 • Qing Yu, Kiyoharu Aizawa

Unlike previous methods, we also utilize unlabeled data for unsupervised training. We use these unlabeled data to maximize the discrepancy between the decision boundaries of two classifiers, pushing OOD samples outside the manifold of the in-distribution (ID) samples; this enables us to detect OOD samples that lie far from the support of the ID samples.

Out-of-Distribution Detection • Out of Distribution (OOD) Detection
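
The snippet describes the core training signal: two classifiers whose disagreement is maximized on unlabeled data, so OOD samples land in high-discrepancy regions. A minimal sketch of that discrepancy term, assuming the common L1-on-softmax definition; the architecture and usage are illustrative:

```python
import torch
import torch.nn as nn

class TwoHeadNet(nn.Module):
    """Shared feature extractor with two classifier heads."""
    def __init__(self, in_dim=32, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.head1 = nn.Linear(64, num_classes)
        self.head2 = nn.Linear(64, num_classes)

    def forward(self, x):
        f = self.features(x)
        return self.head1(f), self.head2(f)

def discrepancy(logits1, logits2):
    """Per-sample L1 distance between the two heads' softmax outputs."""
    return (logits1.softmax(-1) - logits2.softmax(-1)).abs().mean(-1)

net = TwoHeadNet()
unlabeled = torch.randn(8, 32)
l1, l2 = net(unlabeled)
# Training would MAXIMIZE the mean discrepancy on unlabeled data;
# at test time, a high per-sample discrepancy flags a likely OOD input.
print(discrepancy(l1, l2))
```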
