no code implementations • 30 Dec 2022 • Colorado J. Reed, Ritwik Gupta, Shufan Li, Sarah Brockman, Christopher Funk, Brian Clipp, Kurt Keutzer, Salvatore Candido, Matt Uyttendaele, Trevor Darrell
Large pretrained models are commonly finetuned on heavily augmented imagery that mimics different conditions and scales, and the resulting models are then applied to various tasks spanning a range of spatial scales.
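The kind of scale augmentation this abstract alludes to can be sketched minimally. The function below is a hypothetical pure-Python illustration (a random crop that mimics viewing the scene at a different spatial scale), not the paper's actual pipeline; real pipelines would also resize the crop back to a fixed input size.

```python
import random

def random_scale_crop(image, min_scale=0.2, max_scale=1.0):
    """Crop a random sub-region of `image`, mimicking a different spatial scale.

    `image` is a 2D list of pixel values. The crop side lengths are a random
    fraction (between min_scale and max_scale) of the original dimensions.
    """
    h, w = len(image), len(image[0])
    scale = random.uniform(min_scale, max_scale)
    ch, cw = max(1, int(h * scale)), max(1, int(w * scale))
    top = random.randint(0, h - ch)
    left = random.randint(0, w - cw)
    return [row[left:left + cw] for row in image[top:top + ch]]
```

Applying such crops at training time exposes the model to many effective resolutions of the same scene.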
no code implementations • 25 Nov 2022 • Shufan Li, Congxi Lu, Linkai Li, Haoshuai Zhou
We collected two datasets consisting of real camera photos for evaluation.
1 code implementation • 25 Aug 2022 • Akash Gokul, Konstantinos Kallidromitis, Shufan Li, Yusuke Kato, Kazuki Kozuka, Trevor Darrell, Colorado J Reed
Recent works in self-supervised learning have demonstrated strong performance on scene-level dense prediction tasks by pretraining with object-centric or region-based correspondence objectives.
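A region-based correspondence objective of the kind mentioned here can be illustrated with a minimal InfoNCE-style sketch: region i in one augmented view should match region i in the other view and not the remaining regions. The function names, the pure-Python cosine similarity, and the temperature value are assumptions for illustration, not the paper's implementation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def region_contrastive_loss(regions_a, regions_b, temperature=0.1):
    """InfoNCE-style loss over region embeddings from two views.

    Region i in view A is treated as the positive for region i in view B;
    all other regions in view B serve as negatives.
    """
    loss = 0.0
    for i, ra in enumerate(regions_a):
        logits = [cosine(ra, rb) / temperature for rb in regions_b]
        # Numerically stable log-sum-exp for the softmax denominator.
        m = max(logits)
        log_sum = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += -(logits[i] - log_sum)
    return loss / len(regions_a)
```

Correctly aligned region pairs yield a lower loss than shuffled (mismatched) pairs, which is what drives the pretrained features toward region-level correspondence.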
1 code implementation • 17 Dec 2021 • Shufan Li, Congxi Lu, Linkai Li, Jirong Duan, Xinping Fu, Haoshuai Zhou
Audiograms are a particular type of line chart representing an individual's hearing levels at various frequencies.
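As a sketch of the data such a chart encodes, an audiogram can be viewed as a mapping from test frequencies (Hz) to hearing thresholds (dB HL). The specific values and the pure-tone-average helper below are illustrative assumptions, not data from the paper.

```python
# Illustrative audiogram: frequency in Hz -> hearing threshold in dB HL.
# Higher thresholds indicate poorer hearing at that frequency.
audiogram = {
    250: 10,
    500: 15,
    1000: 20,
    2000: 30,
    4000: 45,
    8000: 55,
}

def average_threshold(audiogram, freqs=(500, 1000, 2000, 4000)):
    """Four-frequency pure-tone average over the standard speech frequencies."""
    return sum(audiogram[f] for f in freqs) / len(freqs)

print(average_threshold(audiogram))  # 27.5
```

Parsing an audiogram image therefore amounts to recovering these (frequency, threshold) pairs from the plotted line.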