1 code implementation • 2 Dec 2024 • Qing Yu, Kechuan Dong, Zhiling Guo, Jiaxing Li, Hongjun Tan, Yanxiu Jin, Jian Yuan, Haoran Zhang, Junwei Liu, Qi Chen, Jinyue Yan
This research tackles the challenges of estimating Building-Integrated Photovoltaics (BIPV) potential across various temporal and spatial scales, accounting for different geographical climates and urban morphology.
no code implementations • 26 Nov 2024 • Yichen Wang, Qing Yu, Yancun Song
Marginal treatment effects of residential land use are higher on streets with middle-aged residents, high fixed employment, and low car ownership.
no code implementations • 22 Jul 2024 • Kent Fujiwara, Mikihiro Tanaka, Qing Yu
To achieve better temporal alignment between text and motion, we further propose using these texts with shuffled sequences of events as negative samples during training to reinforce the motion-language models.
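As an illustration of the event-shuffling idea above, here is a minimal sketch of how such hard negative captions could be built; the event-splitting rule and the function name are assumptions for illustration, not the authors' implementation.

```python
import random

def make_shuffled_negative(caption, seed=None):
    """Build a hard negative caption by shuffling the order of its events.

    Events are assumed to be separated by commas or the word "then"; this
    split rule is illustrative, not the one used in the paper.
    """
    rng = random.Random(seed)
    events = [e.strip() for e in caption.replace(" then ", ", ").split(",") if e.strip()]
    if len(events) < 2 or len(set(events)) < 2:
        return caption                    # nothing meaningful to shuffle
    shuffled = events[:]
    while shuffled == events:             # make sure the order actually changes
        rng.shuffle(shuffled)
    return ", ".join(shuffled)

# The negative keeps the same events but breaks their temporal order,
# so it can be paired with the original motion as a negative sample.
pos = "a person walks forward, then sits down, then waves"
print(make_shuffled_negative(pos, seed=0))
```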
no code implementations • CVPR 2024 • Qing Yu, Mikihiro Tanaka, Kent Fujiwara
To build a cross-modal latent space between 3D human motion and language, acquiring large-scale and high-quality human motion data is crucial.
1 code implementation • 29 Mar 2024 • Atsuyuki Miyai, Jingkang Yang, Jingyang Zhang, Yifei Ming, Qing Yu, Go Irie, Yixuan Li, Hai Li, Ziwei Liu, Kiyoharu Aizawa
This paper introduces a novel and significant challenge for Vision Language Models (VLMs), termed Unsolvable Problem Detection (UPD).
1 code implementation • 2 Oct 2023 • Atsuyuki Miyai, Qing Yu, Go Irie, Kiyoharu Aizawa
We consider that such data may significantly affect the performance of large pre-trained networks because the discriminability of these OOD data depends on the pre-training algorithm.
Out-of-Distribution (OOD) Detection
no code implementations • CVPR 2024 • Shunli Wang, Qing Yu, Shuaibing Wang, Dingkang Yang, Liuzhen Su, Xiao Zhao, Haopeng Kuang, Peixuan Zhang, Peng Zhai, Lihua Zhang
For the first time, this paper constructs a vision-based system for error action recognition and skill assessment in CPR.
no code implementations • 30 Jul 2023 • Qing Yu, Go Irie, Kiyoharu Aizawa
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge obtained from a source domain with labeled data to a target domain with unlabeled data.
1 code implementation • NeurIPS 2023 • Atsuyuki Miyai, Qing Yu, Go Irie, Kiyoharu Aizawa
CLIP's local features contain many ID-irrelevant nuisances (e.g., backgrounds); by learning to push them away from the ID class text embeddings, we can remove the nuisances in the ID class text embeddings and enhance the separation between ID and OOD (see the sketch below).
Out-of-Distribution (OOD) Detection
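A minimal sketch of the idea described above, assuming CLIP-style patch features and class-name text embeddings; the ID-irrelevance test (a threshold on the max class probability) and the entropy-maximization loss are simplifications chosen for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def push_away_loss(local_feats, text_embeds, temperature=0.01, max_prob_thresh=0.5):
    """Push ID-irrelevant local (patch) features away from ID class text embeddings.

    local_feats: (P, D) patch-level features from the image encoder
    text_embeds: (K, D) ID class text embeddings
    """
    local_feats = F.normalize(local_feats, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)
    probs = (local_feats @ text_embeds.t() / temperature).softmax(dim=-1)  # (P, K)

    # Simplified selection rule: patches with a low max class probability are
    # treated as ID-irrelevant nuisances (e.g., backgrounds).
    irrelevant = probs.max(dim=-1).values < max_prob_thresh
    if not irrelevant.any():
        return probs.new_zeros(())

    # Maximizing the entropy of these patches flattens their similarity to every
    # ID class, i.e., pushes them away from all ID text embeddings at once.
    p = probs[irrelevant]
    entropy = -(p * p.clamp_min(1e-8).log()).sum(dim=-1)
    return -entropy.mean()
```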
1 code implementation • 20 Apr 2023 • Qing Yu, Atsushi Hashimoto, Yoshitaka Ushiku
To transfer the knowledge learned from a labeled source domain to an unlabeled target domain, many studies have worked on universal domain adaptation (UniDA), where there is no constraint on the label sets of the source domain and target domain.
2 code implementations • 10 Apr 2023 • Atsuyuki Miyai, Qing Yu, Go Irie, Kiyoharu Aizawa
First, images should be collected using only the name of the ID class without training on the ID data.
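A minimal sketch of scoring that respects this constraint, using only the ID class names and an off-the-shelf CLIP model with no training on ID data; the prompt template and the max-softmax score are common choices assumed here, not necessarily the paper's.

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
id_class_names = ["cat", "dog", "bird"]          # hypothetical ID classes

model, preprocess = clip.load("ViT-B/32", device=device)
prompts = clip.tokenize([f"a photo of a {c}" for c in id_class_names]).to(device)

@torch.no_grad()
def id_score(image_path, temperature=0.01):
    """High score = likely in-distribution (ID); low score = likely OOD."""
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(prompts)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    probs = (img_feat @ txt_feat.t() / temperature).softmax(dim=-1)
    return probs.max().item()
```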
1 code implementation • 23 Oct 2022 • Atsuyuki Miyai, Qing Yu, Daiki Ikami, Go Irie, Kiyoharu Aizawa
The semantics of an image can be rotation-invariant or rotation-variant, so whether the rotated image is treated as positive or negative should be determined based on the content of the image.
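A minimal sketch of how that decision could enter a contrastive-style objective, assuming a per-image flag that says whether its semantics survive rotation; the flag, the cosine-similarity loss, and all names here are illustrative assumptions rather than the paper's method.

```python
import torch
import torch.nn.functional as F

def rotation_aware_loss(features, rot_features, rotation_invariant):
    """Treat rotated views as positives or negatives depending on image content.

    features:           (N, D) embeddings of the original images
    rot_features:       (N, D) embeddings of the same images after rotation
    rotation_invariant: (N,) bool, True if the image's semantics do not change
                        under rotation (e.g., a texture), False otherwise
    """
    f = F.normalize(features, dim=-1)
    g = F.normalize(rot_features, dim=-1)
    sim = (f * g).sum(dim=-1)                      # per-pair cosine similarity

    pos = rotation_invariant
    neg = ~rotation_invariant
    # Pull rotation-invariant pairs together, push rotation-variant pairs apart.
    pos_loss = (1.0 - sim[pos]).mean() if pos.any() else sim.new_zeros(())
    neg_loss = sim[neg].clamp_min(0).mean() if neg.any() else sim.new_zeros(())
    return pos_loss + neg_loss
```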
no code implementations • 20 Apr 2022 • Shunli Wang, Dingkang Yang, Peng Zhai, Qing Yu, Tao Suo, Zhan Sun, Ka Li, Lihua Zhang
Most of the existing work focuses on sports and medical care.
no code implementations • 20 Oct 2021 • Jiafeng Mao, Qing Yu, Yoko Yamakata, Kiyoharu Aizawa
In this study, we propose a new problem setting of training object detectors on datasets with entangled annotation noise in both class labels and bounding boxes.
1 code implementation • CVPR 2021 • Qing Yu, Atsushi Hashimoto, Yoshitaka Ushiku
Hence, we consider a new realistic setting called Noisy UniDA, in which classifiers are trained with noisy labeled data from the source domain and unlabeled data with an unknown class distribution from the target domain.
no code implementations • 26 Jan 2021 • Xu-Dong Huang, Xing-Gang Wu, Qing Yu, Xu-Chang Zheng, Jun Zeng
In this paper, we analyze the properties of the Gross-Llewellyn Smith (GLS) sum rule by using the $\mathcal{O}(\alpha_s^4)$-order QCD corrections with the help of the principle of maximum conformality (PMC); the sum rule's schematic form is recalled below.
High Energy Physics - Phenomenology
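For reference, the sum rule analyzed above has the schematic perturbative form shown here; the coefficients $c_i$ up to $\mathcal{O}(\alpha_s^4)$ and the PMC-determined scales come from the paper and the literature and are not reproduced.

```latex
% Schematic only: F_3^{\nu N} denotes the average of the neutrino and antineutrino
% non-singlet structure functions; the coefficients c_i are left unspecified.
\[
  \mathrm{GLS}(Q^2) \;=\; \int_0^1 F_3^{\nu N}(x,Q^2)\,\mathrm{d}x
  \;=\; 3\Bigl[\,1 + \sum_{i\ge 1} c_i\, a_s^{\,i}(\mu_r)\Bigr],
  \qquad a_s \equiv \frac{\alpha_s}{\pi}.
\]
% The parton-model value 3 counts the nucleon's valence quarks; the PMC fixes the
% renormalization scale \mu_r order by order by absorbing the \beta-terms of the
% series into the running coupling, removing the conventional scale ambiguity.
```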
no code implementations • 3 Nov 2020 • Takumi Kawashima, Qing Yu, Akari Asai, Daiki Ikami, Kiyoharu Aizawa
We propose a new optimization framework for aleatoric uncertainty estimation in regression problems.
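The snippet does not spell out the framework itself, so the sketch below shows only the standard heteroscedastic baseline that aleatoric-uncertainty methods typically start from (a network predicting a mean and a log-variance, trained with the Gaussian negative log-likelihood); it is not the optimization framework proposed in the paper.

```python
import torch
import torch.nn as nn

class HeteroscedasticRegressor(nn.Module):
    """Predicts a mean and a per-sample log-variance (aleatoric uncertainty)."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)    # log-variance for numerical stability

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, target):
    # 0.5 * [ log(sigma^2) + (y - mu)^2 / sigma^2 ], constant terms dropped
    return 0.5 * (logvar + (target - mean) ** 2 * torch.exp(-logvar)).mean()

# Usage sketch with random data
model = HeteroscedasticRegressor(in_dim=10)
x, y = torch.randn(32, 10), torch.randn(32, 1)
mean, logvar = model(x)
gaussian_nll(mean, logvar, y).backward()
```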
no code implementations • ECCV 2020 • Qing Yu, Daiki Ikami, Go Irie, Kiyoharu Aizawa
Semi-supervised learning (SSL) has been proposed to leverage unlabeled data for training powerful models when only limited labeled data is available.
1 code implementation • IEEE Biomedical Circuits and Systems Conference (BioCAS) 2019 • Yi Ma, Xinzi Xu, Qing Yu, Yuhang Zhang, Yongfu Li, Jian Zhao, Guoxing Wang
Improving access to health care services for the medically underserved population is vital to ensure that critical illness can be addressed immediately.
Ranked #19 on Audio Classification on ICBHI Respiratory Sound Database
1 code implementation • ICCV 2019 • Qing Yu, Kiyoharu Aizawa
Unlike previous methods, we also utilize unlabeled data for unsupervised training. We use these unlabeled data to maximize the discrepancy between the decision boundaries of two classifiers, pushing OOD samples outside the manifold of the in-distribution (ID) samples; this enables us to detect OOD samples that are far from the support of the ID samples (see the sketch below).
Out-of-Distribution (OOD) Detection
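A minimal sketch of the two-classifier idea described above, assuming a shared feature extractor and two classification heads; the L1 discrepancy measure and the plain loss combination are illustrative choices, not necessarily the exact formulation in the paper.

```python
import torch
import torch.nn.functional as F

def discrepancy(p1, p2):
    """L1 distance between two softmax outputs (one common discrepancy measure)."""
    return (p1 - p2).abs().mean(dim=-1)

def train_step(backbone, head1, head2, labeled_x, labeled_y, unlabeled_x):
    """Fit both classifiers on labeled ID data while maximizing their disagreement
    on unlabeled data, so samples far from the ID support fall between the two
    decision boundaries."""
    feat_l, feat_u = backbone(labeled_x), backbone(unlabeled_x)
    sup_loss = F.cross_entropy(head1(feat_l), labeled_y) + F.cross_entropy(head2(feat_l), labeled_y)

    p1, p2 = head1(feat_u).softmax(-1), head2(feat_u).softmax(-1)
    disc_loss = -discrepancy(p1, p2).mean()        # maximize disagreement on unlabeled data
    return sup_loss + disc_loss

@torch.no_grad()
def ood_score(backbone, head1, head2, x):
    feat = backbone(x)
    p1, p2 = head1(feat).softmax(-1), head2(feat).softmax(-1)
    return discrepancy(p1, p2)                     # high disagreement -> likely OOD
```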