no code implementations • 19 Jul 2017 • Yang Song, Yuan Li, Bo Wu, Chao-Yeh Chen, Xiao Zhang, Hartwig Adam
To ease training, a novel learning scheme is proposed that uses the outputs of specialized models as regression targets, so that an L2 loss can be used in place of a triplet loss.
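The scheme above can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, margin value, and toy embeddings below are assumptions for illustration only. The point of contrast is that a triplet loss needs (anchor, positive, negative) tuples, while regressing to a specialized model's output needs only a per-sample L2 target.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: pull the positive closer than the
    negative by at least `margin` (hinge on squared distances)."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

def l2_target_loss(student_emb, teacher_emb):
    """L2 regression loss: match the specialized model's output
    directly, with no need to mine triplets."""
    return np.sum((student_emb - teacher_emb) ** 2)

# Toy usage with random vectors standing in for real model outputs.
rng = np.random.default_rng(0)
teacher = rng.standard_normal(8)          # specialized model's embedding
student = teacher + 0.01 * rng.standard_normal(8)  # near-converged student
loss = l2_target_loss(student, teacher)   # small positive value
```

The practical appeal is that the L2 objective turns metric learning into plain regression per sample, avoiding the triplet-mining step entirely.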
no code implementations • 11 Jul 2016 • Chao-Yeh Chen, Kristen Grauman
We show that this detection strategy permits an efficient branch-and-cut solution for the best-scoring---and possibly non-cubically shaped---portion of the video for a given activity classifier.
no code implementations • 17 Apr 2016 • Chao-Yeh Chen, Kristen Grauman
We propose to predict the "interactee" in novel images---that is, to localize the \emph{object} of a person's action.
no code implementations • CVPR 2014 • Chao-Yeh Chen, Kristen Grauman
The appearance of an attribute can vary considerably from class to class (e.g., a "fluffy" dog vs. a "fluffy" towel), making standard class-independent attribute models break down.
no code implementations • CVPR 2014 • Chao-Yeh Chen, Kristen Grauman
We pose unseen view synthesis as a probabilistic tensor completion problem.
no code implementations • CVPR 2013 • Chao-Yeh Chen, Kristen Grauman
We propose an approach to learn action categories from static images that leverages prior observations of generic human motion to augment its training process.