no code implementations • 14 Oct 2021 • Xinyue Wei, Weichao Qiu, Yi Zhang, Zihao Xiao, Alan Yuille
Nuisance factors are those irrelevant to a task, and an ideal model should be invariant to them.
no code implementations • CVPR 2022 • Nataniel Ruiz, Adam Kortylewski, Weichao Qiu, Cihang Xie, Sarah Adel Bargal, Alan Yuille, Stan Sclaroff
In this work, we propose a framework for learning how to test machine learning algorithms with simulators in an adversarial manner, in order to find weaknesses in a model before deploying it in critical scenarios.
1 code implementation • ICCV 2021 • Jiteng Mu, Weichao Qiu, Adam Kortylewski, Alan Yuille, Nuno Vasconcelos, Xiaolong Wang
To deal with the large shape variance, we introduce Articulated Signed Distance Functions (A-SDF) to represent articulated shapes with a disentangled latent space, where we have separate codes for encoding shape and articulation.
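The disentangled-code idea above can be illustrated with a minimal sketch: the signed distance of a query point is predicted from a latent code split into a shape part and an articulation part, so articulation can vary while shape identity stays fixed. The network below is a randomly initialized stand-in, not the trained A-SDF decoder; all sizes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
SHAPE_DIM, ART_DIM, HIDDEN = 8, 2, 16

# Random weights stand in for a trained decoder (assumption: one hidden layer).
W1 = rng.standard_normal((SHAPE_DIM + ART_DIM + 3, HIDDEN)) * 0.1
W2 = rng.standard_normal((HIDDEN, 1)) * 0.1

def a_sdf(point, shape_code, articulation_code):
    """Predict a signed distance for a 3D query point."""
    x = np.concatenate([shape_code, articulation_code, point])
    h = np.tanh(x @ W1)      # single hidden layer for brevity
    return float(h @ W2)

shape = rng.standard_normal(SHAPE_DIM)
# Changing only the articulation code moves the predicted surface
# while the shape code (the object's identity) is held fixed.
d_closed = a_sdf(np.zeros(3), shape, np.array([0.0, 0.0]))
d_open   = a_sdf(np.zeros(3), shape, np.array([1.2, 0.3]))
```

Holding the shape code fixed and sweeping the articulation code is what lets a single latent identity cover all joint configurations of an object.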
1 code implementation • CVPR 2022 • Qing Liu, Adam Kortylewski, Zhishuai Zhang, Zizhang Li, Mengqi Guo, Qihao Liu, Xiaoding Yuan, Jiteng Mu, Weichao Qiu, Alan Yuille
We believe our dataset provides a rich testbed to study UDA for part segmentation and will help to significantly push forward research in this area.
no code implementations • 30 Nov 2020 • Qihao Liu, Weichao Qiu, Weiyao Wang, Gregory D. Hager, Alan L. Yuille
We propose an unsupervised vision-based system to estimate the joint configurations of the robot arm from a sequence of RGB or RGB-D images without knowing the model a priori, and then adapt it to the task of category-independent articulated object pose estimation.
2 code implementations • 26 Oct 2020 • Zhe Zhang, Chunyu Wang, Weichao Qiu, Wenhu Qin, Wenjun Zeng
To make the task truly unconstrained, we present AdaFuse, an adaptive multiview fusion method, which can enhance the features in occluded views by leveraging those in visible views.
Ranked #1 on 3D Human Pose Estimation on Total Capture
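The fusion step AdaFuse describes, enhancing features in occluded views with those from visible views, can be sketched as a confidence-weighted combination over views. The fixed per-view confidences below are a toy stand-in for the adaptive weights the method actually learns.

```python
import numpy as np

def fuse_views(features, confidences):
    """features: (V, C) per-view feature vectors; confidences: (V,)."""
    w = np.asarray(confidences, dtype=float)
    w = w / w.sum()                    # normalize to a convex combination
    return w @ np.asarray(features)    # (C,) weighted average across views

views = np.array([[0.0, 0.0],   # heavily occluded view: weak signal
                  [1.0, 2.0],
                  [1.2, 1.8]])
conf  = [0.1, 1.0, 1.0]         # low confidence for the occluded view
fused = fuse_views(views, conf)
```

Down-weighting the occluded view lets the visible views dominate the fused feature, which is the intuition behind enhancing occluded-view features from visible ones.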
2 code implementations • CVPR 2020 • Jiteng Mu, Weichao Qiu, Gregory Hager, Alan Yuille
Despite great success in human parsing, progress for parsing other deformable articulated objects, like animals, is still limited by the lack of labeled data.
no code implementations • 13 Dec 2019 • Jialing Lyu, Weichao Qiu, Xinyue Wei, Yi Zhang, Alan Yuille, Zheng-Jun Zha
This can explain why an activity classification model usually fails to generalize to datasets it is not trained on.
no code implementations • 9 Dec 2019 • Pengfei Li, Weichao Qiu, Michael Peven, Gregory D. Hager, Alan L. Yuille
Scene context is a powerful constraint on the geometry of objects within the scene in cases such as surveillance, where the camera geometry is unknown and image quality may be poor.
no code implementations • 8 Dec 2019 • Tae Soo Kim, Jonathan D. Jones, Michael Peven, Zihao Xiao, Jin Bai, Yi Zhang, Weichao Qiu, Alan Yuille, Gregory D. Hager
There are many realistic applications of activity recognition where the set of potential activity descriptions is combinatorially large.
no code implementations • 3 Dec 2019 • Yi Zhang, Xinyue Wei, Weichao Qiu, Zihao Xiao, Gregory D. Hager, Alan Yuille
In this paper, we propose the Randomized Simulation as Augmentation (RSA) framework which augments real-world training data with synthetic data to improve the robustness of action recognition networks.
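The RSA framework's core move, mixing real training data with synthetic clips whose nuisance factors are randomized, can be sketched as follows. The factor list, sampling ratio, and function names are illustrative assumptions, not the paper's actual configuration.

```python
import random

# Hypothetical nuisance factors to randomize in simulation.
NUISANCE_FACTORS = {"background": ["office", "street", "lab"],
                    "viewpoint":  ["front", "side", "top"]}

def sample_synthetic(rng):
    """Draw one synthetic clip spec with randomized nuisance factors."""
    return {k: rng.choice(v) for k, v in NUISANCE_FACTORS.items()}

def augmented_batch(real_batch, synth_ratio, rng):
    """Append randomized synthetic samples to a batch of real clips."""
    n_synth = int(len(real_batch) * synth_ratio)
    return list(real_batch) + [sample_synthetic(rng) for _ in range(n_synth)]

rng = random.Random(0)
batch = augmented_batch(["clip_a", "clip_b", "clip_c", "clip_d"], 0.5, rng)
```

Because the synthetic samples vary only in nuisance factors, training on the augmented batch pushes the network toward invariance to those factors.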
no code implementations • 25 Nov 2019 • Michelle Shu, Chenxi Liu, Weichao Qiu, Alan Yuille
Different from the existing strategy to always give the same (distribution of) test data, the adversarial examiner will dynamically select the next test data to hand out based on the testing history so far, with the goal being to undermine the model's performance.
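The selection loop described above can be sketched in a few lines: rather than drawing from a fixed test distribution, the examiner greedily hands out the candidate the model currently scores worst on. The toy model and candidate pool are stand-ins; the real adversarial examiner operates over a parameterized data space.

```python
def model_score(x):
    # Toy model: accurate near 0, degrading with |x| (an assumption).
    return max(0.0, 1.0 - abs(x) / 10.0)

def adversarial_examiner(candidates, rounds):
    """Dynamically pick test cases that undermine the model's performance."""
    history, pool = [], list(candidates)
    for _ in range(rounds):
        x = min(pool, key=model_score)   # hand out the worst-scored case
        pool.remove(x)
        history.append((x, model_score(x)))
    return history

hist = adversarial_examiner(range(-5, 6), rounds=3)
```

Here each round conditions on the remaining pool rather than the full testing history; a fuller version would also update its estimate of the model's weaknesses as results come in.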
no code implementations • 20 May 2019 • Qingfu Wan, Weichao Qiu, Alan L. Yuille
State-of-the-art 3D human pose estimation approaches typically estimate pose from the entire RGB image in a single forward run.
1 code implementation • CVPR 2019 • Yiming Zuo, Weichao Qiu, Lingxi Xie, Fangwei Zhong, Yizhou Wang, Alan L. Yuille
We also construct a vision-based control system for task accomplishment, for which we train a reinforcement learning agent in a virtual environment and apply it to the real world.
1 code implementation • ICCV 2019 • Yutong Bai, Qing Liu, Lingxi Xie, Weichao Qiu, Yan Zheng, Alan Yuille
In particular, this enables images in the training dataset to be matched to a virtual 3D model of the object (for simplicity, we assume that the object viewpoint can be estimated by standard techniques).
no code implementations • 1 Apr 2018 • Qi Chen, Weichao Qiu, Yi Zhang, Lingxi Xie, Alan Yuille
But this raises an important problem in active vision: given an infinite data space, how can we effectively sample a finite subset to train a visual classifier?
no code implementations • CVPR 2019 • Xiaohui Zeng, Chenxi Liu, Yu-Siang Wang, Weichao Qiu, Lingxi Xie, Yu-Wing Tai, Chi Keung Tang, Alan L. Yuille
Though image-space adversaries can be interpreted as per-pixel albedo change, we verify that they cannot be well explained along these physically meaningful dimensions, which often have a non-local effect.
no code implementations • ICCV 2017 • Siyuan Qiao, Wei Shen, Weichao Qiu, Chenxi Liu, Alan Yuille
We argue that estimation of object scales in images is helpful for generating object proposals, especially for supermarket images where object scales are usually within a small range.
no code implementations • 14 Dec 2016 • Yi Zhang, Weichao Qiu, Qi Chen, Xiaolin Hu, Alan Yuille
We generate a large synthetic image dataset with automatically computed hazardous regions and analyze algorithms on these regions.
1 code implementation • 5 Sep 2016 • Weichao Qiu, Alan Yuille
Computer graphics can not only generate synthetic images and ground truth but also offers the possibility of constructing virtual worlds in which: (i) an agent can perceive, navigate, and take actions guided by AI algorithms, (ii) properties of the worlds can be modified (e.g., material and reflectance), (iii) physical simulations can be performed, and (iv) algorithms can be learnt and evaluated.
no code implementations • 21 Nov 2015 • Xuan Dong, Boyan Bonev, Weixin Li, Weichao Qiu, Xianjie Chen, Alan Yuille
Base-detail separation is a fundamental computer vision problem consisting of modeling a smooth base layer with the coarse structures, and a detail layer containing the texture-like structures.