no code implementations • 11 Dec 2023 • Guglielmo Camporese, Alessandro Bergamo, Xunyu Lin, Joseph Tighe, Davide Modolo
For example, on early recognition observing only the first 10% of each video, our method improves the SOTA by +2.23 Top-1 accuracy on Something-Something-v2, +3.55 on UCF-101, +3.68 on SSsub21, and +5.03 on EPIC-Kitchens-55, where prior work used either multi-modal inputs (e.g. optical flow) or batched inference.
1 code implementation • 22 Oct 2018 • Bo Zhou, Xunyu Lin, Brendan Eck, Jun Hou, David L. Wilson
Dual-energy (DE) chest radiographs provide greater diagnostic information than standard radiographs by separating the image into bone and soft tissue, revealing suspicious lesions which may otherwise be obstructed from view.
1 code implementation • 13 Jul 2017 • Xunyu Lin, Victor Campos, Xavier Giro-i-Nieto, Jordi Torres, Cristian Canton Ferrer
This paper introduces an unsupervised framework to extract semantically rich features for video representation.
1 code implementation • 25 Jun 2017 • Ruben Villegas, Jimei Yang, Seunghoon Hong, Xunyu Lin, Honglak Lee
To the best of our knowledge, this is the first end-to-end trainable network architecture with motion and content separation to model the spatiotemporal dynamics for pixel-level future prediction in natural videos.
Ranked #1 on Video Prediction on KTH (Cond metric)
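The motion-and-content separation above can be illustrated with a toy sketch. This is not the paper's learned architecture: the "content" here is simply the last observed frame and the "motion" is the stack of frame-to-frame differences, with a naive linear extrapolation standing in for the learned decoder.

```python
import numpy as np

def split_motion_content(frames):
    """Toy decomposition of a video (T, H, W, C) into motion and content.

    Content: appearance of the most recent frame.
    Motion: frame-to-frame differences carrying the dynamics.
    (Hypothetical stand-ins for the paper's learned encoders.)
    """
    content = frames[-1]
    motion = np.diff(frames, axis=0)
    return motion, content

def predict_next_frame(frames):
    """Naive pixel-level prediction: reapply the latest motion to the content."""
    motion, content = split_motion_content(frames)
    return content + motion[-1]

# Usage: a synthetic video that brightens linearly over 4 frames.
video = np.linspace(0.0, 1.0, 4)[:, None, None, None] * np.ones((4, 8, 8, 3))
pred = predict_next_frame(video)  # continues the brightening trend
```

The point of the decomposition is that appearance and dynamics are modeled by separate pathways, which the paper trains end to end.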
2 code implementations • ICML 2017 • Ruben Villegas, Jimei Yang, Yuliang Zou, Sungryull Sohn, Xunyu Lin, Honglak Lee
To avoid the compounding errors inherent in recursive pixel-level prediction, we propose to first estimate high-level structure in the input frames, then predict how that structure evolves in the future, and finally construct the future frames from a single past frame and the predicted high-level structure, without observing any of the pixel-level predictions.
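The three-stage pipeline described above (estimate structure, predict its evolution, render frames) can be sketched in miniature. Everything here is a hypothetical stand-in: the "structure" is the centroid of bright pixels rather than learned landmarks, prediction is linear extrapolation rather than a recurrent network, and the renderer just paints a dot.

```python
import numpy as np

def estimate_structure(frame):
    """Stand-in structure estimator: the centroid of bright pixels.
    (The paper uses higher-level structure such as human pose.)"""
    ys, xs = np.nonzero(frame > 0.5)
    return np.array([ys.mean(), xs.mean()])

def predict_structure(past_structures, n_future):
    """Extrapolate the high-level structure instead of raw pixels,
    sidestepping compounding errors from recursive pixel prediction."""
    velocity = past_structures[-1] - past_structures[-2]
    return [past_structures[-1] + (k + 1) * velocity for k in range(n_future)]

def render_frame(structure, size=16):
    """Toy renderer: construct a frame by painting a bright dot at the
    predicted structure location."""
    frame = np.zeros((size, size))
    y, x = np.round(structure).astype(int)
    frame[np.clip(y, 0, size - 1), np.clip(x, 0, size - 1)] = 1.0
    return frame

# Usage: a dot moving diagonally; predict two future frames from three past ones.
past = [render_frame(np.array([2.0 + t, 2.0 + t])) for t in range(3)]
structs = [estimate_structure(f) for f in past]
future = [render_frame(s) for s in predict_structure(structs, 2)]
```

Because only the low-dimensional structure is predicted recursively, errors accumulate in that compact space rather than in pixel space.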