no code implementations • 2 Mar 2024 • Xinyi Yu, Ling Yan, PengTao Jiang, Hao Chen, Bo Li, Lin Yuanbo Wu, Linlin Ou
This innovative approach empowers the network to simultaneously predict masks and depth, enhancing its ability to capture nuanced depth-related information during the instance segmentation process.
no code implementations • 18 Sep 2023 • Xinyi Yu, Liqin Lu, Jintao Rong, Guangkai Xu, Linlin Ou
3D scene reconstruction from 2D images has been a long-standing task.
1 code implementation • ICCV 2023 • Mingyang Zhang, Xinyi Yu, Haodong Zhao, Linlin Ou
To address the problem of uniform sampling, we propose ShiftNAS, a method that can adjust the sampling probability based on the complexity of subnets.
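The excerpt does not give ShiftNAS's exact formulation, but the core idea of replacing uniform subnet sampling with complexity-weighted sampling can be sketched as follows. This is a hypothetical illustration: the difficulty proxy (recent per-subnet loss) and the softmax temperature are assumptions, not the paper's method.

```python
import math
import random

def complexity_aware_sampling(subnets, difficulty, temperature=1.0):
    """Sample a subnet with probability tied to its difficulty instead of
    uniformly.

    `difficulty` is a per-subnet complexity proxy (here, assumed to be a
    recent training loss); harder subnets receive a larger sampling
    probability via a softmax over the proxies.
    """
    weights = [math.exp(d / temperature) for d in difficulty]
    total = sum(weights)
    probs = [w / total for w in weights]
    chosen = random.choices(subnets, weights=probs, k=1)[0]
    return chosen, probs
```

With a uniform `difficulty` vector this reduces to uniform sampling, which is the baseline the method aims to improve on.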
no code implementations • 4 Jun 2023 • Jintao Rong, Hao Chen, Tianxiao Chen, Linlin Ou, Xinyi Yu, Yifan Liu
Prompt learning has become a popular approach for adapting large vision-language models, such as CLIP, to downstream tasks.
no code implementations • 28 May 2023 • Mingyang Zhang, Hao Chen, Chunhua Shen, Zhen Yang, Linlin Ou, Xinyi Yu, Bohan Zhuang
This is because they either apply unstructured pruning to LPMs, which prevents the merging of LoRA weights, or rely on the gradients of pre-trained weights to guide pruning, which can impose significant memory overhead.

no code implementations • 19 Apr 2023 • Yang Yang, Weijie Ma, Hao Chen, Linlin Ou, Xinyi Yu
The combination of LiDAR and camera modalities has proven necessary and typical for 3D object detection according to recent studies.
no code implementations • 30 Jun 2022 • Jiangping Lu, Xinyi Yu, Mi Lin, Linlin Ou
Thus, the Gaussian Angle Loss (GA Loss) is presented to solve this problem by adding a correction loss for square targets.
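The excerpt does not give the GA Loss formula, but the motivation (angle regression is ambiguous for near-square boxes, whose 90°-rotated version is geometrically equivalent) can be illustrated with a hypothetical sketch. The Gaussian-shaped penalty, the `sigma` value, and the squareness threshold below are all assumptions for illustration, not the paper's actual loss.

```python
import math

def angle_loss_with_square_correction(pred_theta, gt_theta, w, h, sigma=0.5):
    """Hypothetical angle loss that corrects for square targets.

    For near-square boxes, the box rotated by pi/2 is equivalent, so the
    pi/2-shifted angle difference is also scored and the smaller penalty
    is kept.
    """
    # Wrap the angle difference into [-pi/2, pi/2) for rotated boxes.
    d = (pred_theta - gt_theta + math.pi / 2) % math.pi - math.pi / 2
    base = 1.0 - math.exp(-d * d / (2 * sigma * sigma))  # Gaussian-shaped penalty
    aspect = min(w, h) / max(w, h)
    if aspect > 0.95:  # hypothetical squareness threshold
        d2 = (pred_theta - gt_theta) % (math.pi / 2)
        d2 = min(d2, math.pi / 2 - d2)
        alt = 1.0 - math.exp(-d2 * d2 / (2 * sigma * sigma))
        return min(base, alt)
    return base
```

Under this sketch, predicting a square box's angle 90° off incurs (almost) no penalty, while the same error on an elongated box is penalized heavily.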
no code implementations • 4 May 2022 • Xinyi Yu, Jianan Hu, Yuehai Fan, Wancai Zheng, Linlin Ou
First, based on a subgraph network, the history information of all agents is aggregated before interactions are encoded through a graph neural network, improving the robot's ability to implicitly anticipate future scenarios.
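The two-step ordering described here (aggregate each agent's history first, then encode pairwise interactions with a GNN) can be sketched minimally. The mean-pooling temporal encoder and the single round of fully connected message passing below are simplifying assumptions standing in for the paper's subgraph network and GNN.

```python
import numpy as np

def encode_agents(histories, w_msg=None, seed=0):
    """Minimal sketch: per-agent history aggregation, then one round of
    graph message passing over a fully connected interaction graph.

    `histories`: list of (T_i, d) arrays of past states, one per agent.
    Returns an (n, d) array of interaction-aware agent embeddings.
    """
    rng = np.random.default_rng(seed)
    d = histories[0].shape[1]
    # Step 1: temporal aggregation (mean pooling stands in for the
    # subgraph-network history encoder).
    node_feats = np.stack([h.mean(axis=0) for h in histories])
    # Step 2: each node receives the mean of the other nodes' features
    # through a (randomly initialized, stand-in) learned projection.
    w = w_msg if w_msg is not None else rng.standard_normal((d, d)) * 0.1
    n = len(histories)
    msgs = (node_feats.sum(axis=0, keepdims=True) - node_feats) / max(n - 1, 1)
    return node_feats + np.tanh(msgs @ w)
```

The key design point is the ordering: compressing each trajectory before message passing keeps the interaction graph small regardless of history length.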
no code implementations • 13 Apr 2022 • Xinyi Yu, Xiaowei Wang, Jintao Rong, Mingyang Zhang, Linlin Ou
However, the performance of the searched architecture is limited by the types of operations and the prior knowledge involved.
no code implementations • 8 Jan 2022 • Xinyi Yu, Weiqi He, Xuecheng Qian, Yang Yang, Linlin Ou
Accurate rail localization is a crucial part of the railway assisted-driving system for safety monitoring.
no code implementations • 31 Dec 2021 • Xinyi Yu, Ling Yan, Yang Yang, Libo Zhou, Linlin Ou
In this paper, we propose a conditional generative data-free knowledge distillation (CGDD) framework for training lightweight networks without any training data.
no code implementations • 12 Oct 2021 • Jingtao Rong, Xinyi Yu, Mingyang Zhang, Linlin Ou
In this paper, an across-task neural architecture search (AT-NAS) is proposed to address the problem through combining gradient-based meta-learning with EA-based NAS to learn over the distribution of tasks.
no code implementations • 21 Sep 2021 • Xinyi Yu, Mi Lin, Jiangping Lu, Linlin Ou
Oriented object detection is a challenging task in aerial images, since objects in aerial images appear at arbitrary orientations and are frequently densely packed.
1 code implementation • 8 Sep 2021 • Mingyang Zhang, Xinyi Yu, Jingtao Rong, Linlin Ou
However, it is still challenging to search for efficient networks due to the gap between the search-time constraint and real inference time.
no code implementations • 11 Jun 2021 • Weichen Chen, Xinyi Yu, Linlin Ou
A specific view-attribute is composed of the extracted attribute feature and four view scores, which are predicted by the view predictor as the confidences for the attribute from different views.
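The composition described here (an attribute feature combined with four per-view confidence scores) can be sketched as follows. The softmax normalization and the scaling-by-score combination rule are assumptions for illustration; the excerpt does not specify how the feature and scores are fused.

```python
import numpy as np

def view_specific_attribute(attr_feat, view_logits):
    """Sketch: combine one attribute feature with four view confidences
    (e.g. front/back/left/right) from a view predictor.

    Returns an (4, d) array: row v is the attribute feature weighted by
    view v's confidence, plus the normalized scores themselves.
    """
    scores = np.exp(view_logits - view_logits.max())
    scores = scores / scores.sum()            # softmax over the 4 views
    # Scale the attribute feature by each view's confidence.
    return scores[:, None] * attr_feat[None, :], scores
```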
no code implementations • 10 Nov 2020 • Mingyang Zhang, Xinyi Yu, Jingtao Rong, Linlin Ou
To overcome insufficient training, a stage-wise pruning (SWP) method is proposed, which splits a deep supernet into several stage-wise supernets to reduce the number of candidates and uses in-place distillation to supervise stage training.
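The splitting step described here can be sketched as a simple partition of the supernet's layer list; the contiguous, equal-sized split below is an assumption, as the excerpt does not say how stage boundaries are chosen.

```python
def split_into_stages(layers, num_stages):
    """Sketch: partition a deep supernet's layers into contiguous
    stage-wise supernets so each stage trains a smaller candidate space.

    With k choices per layer, a stage of length s has k**s candidates
    instead of k**len(layers) for the whole supernet, which is the
    candidate-number reduction the excerpt refers to.
    """
    stage_len = -(-len(layers) // num_stages)  # ceiling division
    return [layers[i:i + stage_len] for i in range(0, len(layers), stage_len)]
```

Each stage-wise supernet would then be trained with in-place distillation, i.e. the stage's largest subnet supervising its smaller subnets.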
no code implementations • 22 Nov 2019 • Mingyang Zhang, Xinyi Yu, Jingtao Rong, Linlin Ou
Different from previous work, we take the node features from a well-trained graph aggregator, instead of hand-crafted features, as the states in reinforcement learning.