no code implementations • 5 Feb 2025 • Hamid Eghbalzadeh, Yang Wang, Rui Li, Yuji Mo, Qin Ding, Jiaxiang Fu, Liang Dai, Shuo Gu, Nima Noorshams, Sem Park, Bo Long, Xue Feng
Industrial ads ranking systems conventionally rely on labeled impression data, which leads to challenges such as overfitting, diminishing incremental gains from model scaling, and biases caused by discrepancies between training and serving data.
no code implementations • 4 Jun 2024 • Lijun Zhou, Tao Tang, Pengkun Hao, Zihang He, Kalok Ho, Shuo Gu, Wenbo Hou, Zhihui Hao, Haiyang Sun, Kun Zhan, Peng Jia, Xianpeng Lang, Xiaodan Liang
Secondly, we propose an Uncertainty-guided Query Denoising strategy to further enhance the training process.
no code implementations • 12 Sep 2023 • Qianliang Wu, Yaqing Ding, Lei Luo, Haobo Jiang, Shuo Gu, Chuanwei Zhou, Jin Xie, Jian Yang
These high-order features are then propagated to dense points and utilized by a Sinkhorn matching module to identify key correspondences for successful registration.
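A minimal sketch of the kind of Sinkhorn matching such a module relies on: entropy-regularized optimal transport, where alternating row/column normalizations of a Gibbs kernel produce a soft correspondence matrix. This is a generic illustration with uniform marginals, not the paper's actual module; the function name and parameters are assumptions.

```python
import numpy as np

def sinkhorn(cost, n_iters=100, eps=0.1):
    """Soft matching via Sinkhorn iterations (illustrative, not the paper's code).

    cost: (n, m) matrix of matching costs between source and target features.
    Returns a transport plan whose rows/columns approximately sum to uniform
    marginals; large entries indicate likely correspondences.
    """
    n, m = cost.shape
    K = np.exp(-cost / eps)            # Gibbs kernel from costs
    r = np.full(n, 1.0 / n)            # uniform row marginal
    c = np.full(m, 1.0 / m)            # uniform column marginal
    u = np.ones(n) / n
    v = np.ones(m) / m
    for _ in range(n_iters):
        u = r / (K @ v)                # scale rows toward marginal r
        v = c / (K.T @ u)              # scale columns toward marginal c
    return np.diag(u) @ K @ np.diag(v)
```

In a registration pipeline, the resulting plan would be thresholded or row-argmaxed to extract the key correspondences fed to the pose solver.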
1 code implementation • 24 Aug 2023 • Wei Xie, Haobo Jiang, Shuo Gu, Jin Xie
Robust obstacle avoidance is a critical step in successful goal-driven indoor navigation. Because obstacles may be absent from the visual image or missed by the detector, visual image-based obstacle avoidance techniques still lack robustness.
no code implementations • ICCV 2023 • Haobo Jiang, Zheng Dang, Shuo Gu, Jin Xie, Mathieu Salzmann, Jian Yang
Our method decouples the translation from the entire transformation by predicting the object center and estimating the rotation in a center-aware manner.
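The decoupling idea can be illustrated with a classical baseline: once a center is predicted for each shape, translation falls out of the center offset and rotation can be estimated from the centered correspondences alone (here via the standard Kabsch/SVD solution). This is a hedged sketch of the general principle, not the paper's learned estimator; all names are illustrative.

```python
import numpy as np

def center_aware_pose(src, tgt):
    """Decoupled pose estimation sketch: translation from centers,
    rotation from centered points via Kabsch/SVD.

    src, tgt: (n, 3) corresponding 3-D points (illustrative stand-ins for
    predicted object centers and features in the paper's pipeline).
    """
    src_c = src.mean(axis=0)                 # predicted/estimated centers
    tgt_c = tgt.mean(axis=0)
    H = (src - src_c).T @ (tgt - tgt_c)      # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # rotation, center-aware
    t = tgt_c - R @ src_c                    # translation recovered separately
    return R, t
```

Because `t` depends only on the two centers once `R` is fixed, errors in rotation estimation do not contaminate the translation branch, which is the motivation for decoupling.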
no code implementations • 6 May 2022 • Shuo Gu, Suling Yao, Jian Yang, Hui Kong
Rather than segmenting the moving objects directly, the network alternates between single-scan semantic segmentation and multiple-scan moving-object segmentation.
no code implementations • 10 Jun 2018 • Shih Chung B. Lo, Matthew T. Freedman, Seong K. Mun, Shuo Gu
We further found that any CNN whose convolution layers all possess the same TI kernel property, followed by a flatten layer with weights shared among transformation-corresponding elements, outputs the same result for every transformed version of the original input vector.
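A toy demonstration of this kind of invariance, under simplifying assumptions: a 1-D circular convolution is translation-equivariant, and a global sum (a flatten layer whose weights are shared across all positions) collapses the equivariance into invariance, so shifted inputs yield identical outputs. This is a minimal special case, not the paper's general TI construction.

```python
import numpy as np

def circular_conv(x, k):
    """1-D circular (wrap-around) convolution: a translation-equivariant layer."""
    n = len(x)
    return np.array([np.dot(np.roll(x, -i)[: len(k)], k) for i in range(n)])

def ti_network(x, k):
    """Toy translation-invariant 'network': equivariant convolution followed by
    a position-weight-shared flatten (here, a global sum)."""
    return circular_conv(x, k).sum()
```

Shifting the input circularly permutes the convolution's feature map, and the sum is unchanged by any permutation, so the final output is identical for all shifted versions of the input.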