no code implementations • 1 Mar 2024 • Chunlin Li, Ruofan Liang, Hanrui Fan, Zhengen Zhang, Sankeerth Durvasula, Nandita Vijaykumar
We present a framework, DISORF, to enable online 3D reconstruction and visualization of scenes captured by resource-constrained mobile robots and edge devices.
1 code implementation • 25 Jul 2023 • Zhengliang Liu, Tianyang Zhong, Yiwei Li, Yutong Zhang, Yi Pan, Zihao Zhao, Peixin Dong, Chao Cao, Yuxiao Liu, Peng Shu, Yaonai Wei, Zihao Wu, Chong Ma, Jiaqi Wang, Sheng Wang, Mengyue Zhou, Zuowei Jiang, Chunlin Li, Jason Holmes, Shaochen Xu, Lu Zhang, Haixing Dai, Kai Zhang, Lin Zhao, Yuanhao Chen, Xu Liu, Peilong Wang, Pingkun Yan, Jun Liu, Bao Ge, Lichao Sun, Dajiang Zhu, Xiang Li, Wei Liu, Xiaoyan Cai, Xintao Hu, Xi Jiang, Shu Zhang, Xin Zhang, Tuo Zhang, Shijie Zhao, Quanzheng Li, Hongtu Zhu, Dinggang Shen, Tianming Liu
The rise of large language models (LLMs) has marked a pivotal shift in the field of natural language processing (NLP).
1 code implementation • ICCV 2023 • Ruofan Liang, Huiting Chen, Chunlin Li, Fan Chen, Selvakumar Panneer, Nandita Vijaykumar
In this work, we introduce ENVIDR, a rendering and modeling framework for high-quality rendering and reconstruction of surfaces with challenging specular reflections.
1 code implementation • 18 Nov 2022 • Ben Dai, Xiaotong Shen, Lin Yee Chen, Chunlin Li, Wei Pan
We apply the proposed method to the MNIST dataset and the MIT-BIH dataset with a convolutional auto-encoder.
1 code implementation • 27 Jun 2022 • Ben Dai, Chunlin Li
In this paper, we establish a theoretical foundation of segmentation with respect to the Dice/IoU metrics, including the Bayes rule and Dice-/IoU-calibration, analogous to classification-calibration or Fisher consistency in classification.
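The Dice and IoU scores referenced above are standard overlap measures between a predicted and a ground-truth binary segmentation mask. A minimal sketch of how they are computed (the function name `dice_iou` is ours for illustration, not from the paper):

```python
import numpy as np

def dice_iou(pred: np.ndarray, target: np.ndarray) -> tuple[float, float]:
    """Compute the Dice coefficient and IoU (Jaccard index) of two binary masks.

    Dice = 2|P ∩ T| / (|P| + |T|);  IoU = |P ∩ T| / |P ∪ T|.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum())
    iou = inter / np.logical_or(pred, target).sum()
    return float(dice), float(iou)

# Example: prediction covers two pixels, ground truth covers one of them.
pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
d, i = dice_iou(pred, target)  # d = 2/3, i = 1/2
```

Note that Dice is always at least as large as IoU on the same pair of masks; the paper's calibration analysis concerns classifiers that are consistent for these set-overlap losses rather than for pixel-wise accuracy.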
no code implementations • 10 Feb 2021 • Pan Wang, Rui Zhou, Shuo Wang, Ling Li, Wenjia Bai, Jialu Fan, Chunlin Li, Peter Childs, Yike Guo
For this reason, we propose an end-to-end brain decoding framework which translates brain activity into an image by latent space alignment.