1 code implementation • CVPR 2023 • Zhifeng Lin, Changxing Ding, Huan Yao, Zengsheng Kuang, Shaoli Huang
Notably, our model's hand pose estimation performance even surpasses that of existing works dedicated solely to the single-hand pose estimation task.
Ranked #2 on hand-object pose estimation on DexYCB
2 code implementations • 16 Jul 2022 • Zhiyin Shao, Xinyu Zhang, Meng Fang, Zhifeng Lin, Jian Wang, Changxing Ding
In PGU, we adopt a set of shared and learnable prototypes as the queries to extract diverse and semantically aligned features for both modalities in the granularity-unified feature space, which further improves ReID performance.
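The prototype-as-query idea can be illustrated with a minimal numpy sketch: a shared set of learnable prototypes attends over tokens from each modality, so both modalities are projected into the same fixed-size, prototype-indexed feature space. Names, shapes, and the plain dot-product attention are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def prototype_queries(features, prototypes):
    """Cross-attention where shared prototypes act as the queries.

    features:   (N, d) tokens from one modality (image or text)
    prototypes: (K, d) shared, learnable prototype queries
    returns:    (K, d) one aggregated feature per prototype
    """
    scores = prototypes @ features.T / np.sqrt(features.shape[1])  # (K, N)
    # Row-wise softmax over the tokens of this modality.
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ features  # weighted sum of tokens per prototype

rng = np.random.default_rng(0)
protos = rng.normal(size=(6, 32))      # K=6 shared prototypes (hypothetical sizes)
img_feats = rng.normal(size=(49, 32))  # image tokens
txt_feats = rng.normal(size=(12, 32))  # text tokens

img_out = prototype_queries(img_feats, protos)
txt_out = prototype_queries(txt_feats, protos)
```

Because the same prototypes query both modalities, `img_out` and `txt_out` share shape and semantics slot-by-slot, which is what makes the resulting space "granularity-unified."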
1 code implementation • 7 Dec 2019 • Krishna Giri Narra, Zhifeng Lin, Yongqin Wang, Keshav Balasubramaniam, Murali Annavaram
However, the overhead of blinding and unblinding the data is a limiting factor to scalability.
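The blinding/unblinding pattern for a linear layer can be sketched as follows: the client masks its input with a one-time random vector before offloading, the untrusted server computes on the masked data, and the client removes the mask's contribution afterwards. This is a generic additive-blinding sketch under assumed shapes, not the paper's protocol; the extra mask generation and correction term is exactly the overhead the snippet refers to.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 8))  # linear layer weights held by the untrusted server
x = rng.normal(size=8)       # private client input

# Blind: add a one-time random mask so the server never sees x itself.
r = rng.normal(size=8)
blinded = x + r

# Server side: computes only on the blinded input.
server_out = W @ blinded

# Unblind: client subtracts W @ r (the correction can be precomputed offline).
result = server_out - W @ r
```

`result` matches `W @ x` exactly for linear operations; handling nonlinearities and amortizing the `W @ r` correction is where the scalability cost arises.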
no code implementations • 22 Oct 2019 • Zhifeng Lin, Krishna Giri Narra, Mingchao Yu, Salman Avestimehr, Murali Annavaram
Most model training is performed on high-performance compute nodes, and the training data is stored near these nodes for faster training.
no code implementations • 5 Jun 2019 • Krishna Narra, Zhifeng Lin, Ganesh Ananthanarayanan, Salman Avestimehr, Murali Annavaram
In this work, we argue that MLaaS platforms also provide unique opportunities to cut the cost of redundancy.
no code implementations • 27 Apr 2019 • Krishna Giri Narra, Zhifeng Lin, Ganesh Ananthanarayanan, Salman Avestimehr, Murali Annavaram
Deploying the collage-cnn models in the cloud, we demonstrate that the 99th percentile tail latency of inference can be reduced by 1.2x to 2x compared to replication-based approaches while providing high accuracy.
no code implementations • NeurIPS 2018 • Mingchao Yu, Zhifeng Lin, Krishna Narra, Songze Li, Youjie Li, Nam Sung Kim, Alexander Schwing, Murali Annavaram, Salman Avestimehr
Data parallelism can boost the training speed of convolutional neural networks (CNN), but could suffer from significant communication costs caused by gradient aggregation.
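One standard way to cut the gradient-aggregation cost the snippet describes is to compress gradients before averaging them across workers. Below is a minimal numpy sketch of uniform 8-bit quantization (a generic compression scheme chosen for illustration, not the paper's specific method): each worker sends int8 codes plus one scale instead of float32 values, roughly a 4x reduction in bytes on the wire.

```python
import numpy as np

def quantize(grad, bits=8):
    # Uniform quantization: transmit low-bit integer codes plus one scale.
    scale = max(np.abs(grad).max(), 1e-12) / (2 ** (bits - 1) - 1)
    codes = np.round(grad / scale).astype(np.int8)
    return codes, scale

def dequantize(codes, scale):
    return codes.astype(np.float32) * scale

rng = np.random.default_rng(2)
worker_grads = [rng.normal(size=1000).astype(np.float32) for _ in range(4)]

# Each worker compresses before aggregation; the server averages decoded grads.
decoded = [dequantize(*quantize(g)) for g in worker_grads]
avg = np.mean(decoded, axis=0)
exact = np.mean(worker_grads, axis=0)
```

The averaged quantized gradient tracks the exact average closely, which is why lossy compression of this kind is a viable trade-off against communication cost.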