1 code implementation • 4 Dec 2023 • Fan Lu, Kai Zhu, Kecheng Zheng, Wei Zhai, Yang Cao
Full-spectrum out-of-distribution (F-OOD) detection aims to accurately recognize in-distribution (ID) samples while encountering semantic and covariate shifts simultaneously.
no code implementations • 29 Oct 2023 • Weiyi Xue, Fan Lu, Guang Chen
Specifically, a novel feature-consistency-enhanced double-soft matching network is introduced to achieve two-stage matching with high flexibility while efficiently enlarging the receptive field in a patch-to-patch manner, which significantly improves registration performance.
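The exact architecture is not spelled out in this snippet; as a rough, hypothetical illustration of differentiable ("soft") patch-to-patch matching, a patch-descriptor similarity matrix can be normalized along both dimensions so that a correspondence must be plausible in both directions. The function name, temperature, and the multiplicative combination below are assumptions, not the paper's design.

```python
import torch
import torch.nn.functional as F

def double_soft_matching(desc_src, desc_tgt, temperature=0.1):
    """Illustrative soft patch-to-patch matching (hypothetical sketch only).

    desc_src: (M, C) source patch descriptors
    desc_tgt: (N, C) target patch descriptors
    Returns a soft correspondence matrix of shape (M, N).
    """
    # Cosine similarity between every source/target patch pair
    sim = F.normalize(desc_src, dim=-1) @ F.normalize(desc_tgt, dim=-1).T  # (M, N)
    # "Double-soft": softmax over rows and over columns, combined so that a
    # match must be mutually plausible in both matching directions
    p_row = F.softmax(sim / temperature, dim=1)
    p_col = F.softmax(sim / temperature, dim=0)
    return p_row * p_col

if __name__ == "__main__":
    src, tgt = torch.randn(64, 128), torch.randn(80, 128)
    corr = double_soft_matching(src, tgt)
    print(corr.shape)  # torch.Size([64, 80])
```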
no code implementations • 10 Sep 2023 • Fan Lu, Sean Meyn
The main contributions first concern properties of the relaxation, described as a deterministic convex program: we identify conditions for a bounded solution, and a significant relationship between the solution to the new convex program and the solution to standard Q-learning.
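As context for the relaxation, the following is a sketch of the standard LP-style relaxation that convex Q-learning builds on; the notation (cost c, discount factor gamma, weighting measure mu) is assumed here and the paper's program may differ in detail.

```latex
% Bellman equation:
%   Q^*(x,u) = c(x,u) + \gamma\,\mathbb{E}\bigl[\underline{Q}^*(X_{k+1}) \mid X_k = x,\ U_k = u\bigr],
%   where \underline{Q}(x) := \min_{u} Q(x,u).
% Relaxing the equality to an inequality yields a deterministic convex program
% whose optimizer is the maximal subsolution:
\begin{aligned}
  \max_{Q}\ \ & \langle \mu,\, Q \rangle \\
  \text{s.t.}\ \ & Q(x,u) \;\le\; c(x,u) + \gamma\,\mathbb{E}\bigl[\underline{Q}(X_{k+1}) \mid X_k = x,\ U_k = u\bigr]
  \quad \text{for all } (x,u).
\end{aligned}
```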
no code implementations • ICCV 2023 • Fan Lu, Yan Xu, Guang Chen, Hongsheng Li, Kwan-Yee Lin, Changjun Jiang
To construct urban-level radiance fields efficiently, we design the Deformable Neural Mesh Primitive (DNMP), and propose to parameterize the entire scene with such primitives.
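A minimal sketch of the general idea of a deformable mesh primitive, assuming a latent-code-plus-template design; the vertex counts, feature dimensions, and decoder below are illustrative assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

class DeformablePrimitive(nn.Module):
    """Toy deformable mesh primitive: a shared template mesh whose vertices are
    displaced by a decoded latent code, with a radiance feature per vertex.
    (Illustrative sketch only, not the DNMP specification.)"""

    def __init__(self, template_vertices, latent_dim=32, feat_dim=8):
        super().__init__()
        self.register_buffer("template", template_vertices)        # (V, 3) base shape
        self.latent = nn.Parameter(torch.zeros(latent_dim))         # per-primitive shape code
        self.decoder = nn.Linear(latent_dim, template_vertices.shape[0] * 3)
        self.vertex_feats = nn.Parameter(torch.randn(template_vertices.shape[0], feat_dim))

    def forward(self, center, scale=1.0):
        # Decode per-vertex offsets and place the deformed primitive in the scene
        offsets = self.decoder(self.latent).view(-1, 3)
        vertices = center + scale * (self.template + offsets)       # (V, 3) world-space vertices
        return vertices, self.vertex_feats

# A scene would then be a collection of such primitives, one per region,
# with radiance predicted from interpolated vertex features along each ray.
```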
1 code implementation • CVPR 2023 • Zehan Zheng, Danni Wu, Ruisi Lu, Fan Lu, Guang Chen, Changjun Jiang
In light of these issues, we present NeuralPCI: an end-to-end 4D spatio-temporal Neural field for 3D Point Cloud Interpolation, which implicitly integrates multi-frame information to handle large nonlinear motions for both indoor and outdoor scenarios (a minimal sketch of such a field follows this entry).
Ranked #1 on 3D Point Cloud Interpolation on NL-Drive
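A minimal sketch of a 4D spatio-temporal neural field for point cloud interpolation, assuming a plain coordinate-MLP design; NeuralPCI's actual multi-frame integration is richer than this.

```python
import torch
import torch.nn as nn

class SpatioTemporalField(nn.Module):
    """Toy 4D neural field: maps (x, y, z, t) to a displaced 3D point.
    Illustrative only; not NeuralPCI's actual formulation."""

    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, points, t):
        # points: (N, 3) input point cloud, t: interpolation time in [0, 1]
        t_col = torch.full((points.shape[0], 1), float(t), device=points.device)
        xyzt = torch.cat([points, t_col], dim=-1)        # (N, 4) space-time coordinates
        return points + self.mlp(xyzt)                   # predicted points at time t

# Usage: fit the field to the given frames, then query intermediate times,
# e.g. interp = SpatioTemporalField()(frame_points, t=0.5)
```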
1 code implementation • CVPR 2023 • Fan Lu, Kai Zhu, Wei Zhai, Kecheng Zheng, Yang Cao
Semantically coherent out-of-distribution (SCOOD) detection aims to discern outliers from the intended data distribution with access to an unlabeled extra set.
no code implementations • 17 Oct 2022 • Fan Lu, Prashant Mehta, Sean Meyn, Gergely Neu
The main contributions follow: (i) The dual of convex Q-learning is not precisely Manne's LP or a version of logistic Q-learning, but has a similar structure that reveals the need for regularization to avoid over-fitting.
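For reference, Manne's LP mentioned in (i) is, in one standard discounted-cost form, the linear program over occupation measures; the notation below (initial-state distribution nu) is assumed here for illustration.

```latex
% Manne's discounted-cost LP over occupation measures (normalized form).
\begin{aligned}
  \min_{\mu \ge 0}\ \ & \sum_{x,u} \mu(x,u)\, c(x,u) \\
  \text{s.t.}\ \ & \sum_{u} \mu(x,u)
     \;=\; (1-\gamma)\,\nu(x) \;+\; \gamma \sum_{x',u'} P(x \mid x',u')\, \mu(x',u')
  \quad \text{for all } x.
\end{aligned}
```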
no code implementations • 14 Oct 2022 • Fan Lu, Joel Mathias, Sean Meyn, Karanjit Kalsi
Convex Q-learning is a recent approach to reinforcement learning, motivated by the possibility of a firmer convergence theory and of making greater use of a priori knowledge regarding policy or value function structure.
1 code implementation • 12 Feb 2022 • Fan Lu, Qimai Li, Bo Liu, Xiao-Ming Wu, Xiaotong Zhang, Fuyu Lv, Guli Lin, Sen Li, Taiwei Jin, Keping Yang
Our approach can be seamlessly integrated with existing latent-space-based methods and can potentially be applied to any product retrieval method that uses purchase history to model user preferences.
1 code implementation • ICCV 2021 • Fan Lu, Guang Chen, Yinlong Liu, Lijun Zhang, Sanqing Qu, Shu Liu, Rongqi Gu
Extensive experiments are conducted on two large-scale outdoor LiDAR point cloud datasets to demonstrate the high accuracy and efficiency of the proposed HRegNet.
2 code implementations • 7 Apr 2021 • Sanqing Qu, Guang Chen, Zhijun Li, Lijun Zhang, Fan Lu, Alois Knoll
Traditional methods mainly focus on foreground and background frame separation with only a single attention branch and class activation sequence (a minimal sketch of this baseline appears after this entry).
Ranked #4 on Weakly Supervised Action Localization on THUMOS14
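A minimal sketch of the single-attention-branch baseline described above; this illustrates the traditional pipeline the paper contrasts against, not the paper's own multi-branch model, and the feature and class dimensions are assumptions.

```python
import torch
import torch.nn as nn

class SingleAttentionCAS(nn.Module):
    """Baseline single-attention pipeline (illustrative sketch): per-snippet
    attention suppresses background frames before pooling the class
    activation sequence (CAS) into video-level logits."""

    def __init__(self, feat_dim=2048, num_classes=20):
        super().__init__()
        self.cls_head = nn.Conv1d(feat_dim, num_classes, kernel_size=1)   # CAS over time
        self.att_head = nn.Conv1d(feat_dim, 1, kernel_size=1)             # foreground attention

    def forward(self, feats):
        # feats: (B, T, D) snippet features from a frozen backbone
        x = feats.transpose(1, 2)                        # (B, D, T)
        cas = self.cls_head(x)                           # (B, C, T) class activation sequence
        att = torch.sigmoid(self.att_head(x))            # (B, 1, T) foreground probability
        video_logits = (cas * att).sum(-1) / att.sum(-1).clamp(min=1e-6)  # attention-weighted pooling
        return video_logits, cas, att
```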
1 code implementation • 18 Dec 2020 • Fan Lu, Guang Chen, Sanqing Qu, Zhijun Li, Yinlong Liu, Alois Knoll
Generally, the frame rates of mechanical LiDAR sensors are 10 to 20 Hz, which is much lower than those of other commonly used sensors such as cameras.
no code implementations • 21 Nov 2020 • Fan Lu, Guang Chen, Yinlong Liu, Zhijun Li, Sanqing Qu, Tianpei Zou
3D point clouds accurately model 3D information of the surrounding environment and are crucial for intelligent vehicles to perceive the scene.
no code implementations • 16 Nov 2020 • Sanqing Qu, Guang Chen, Dan Xu, Jinhu Dong, Fan Lu, Alois Knoll
At each time step, this sampling strategy first estimates the current action progression and then decides which temporal ranges should be used to aggregate the optimal supplementary features.
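A rough sketch of what a progression-guided sampling step could look like under these assumptions; the progression head, window rule, and mean aggregation below are illustrative guesses, not the paper's design.

```python
import torch
import torch.nn as nn

class ProgressionGuidedSampler(nn.Module):
    """Illustrative progression-guided temporal sampling step (hypothetical sketch)."""

    def __init__(self, feat_dim=1024):
        super().__init__()
        self.progress_head = nn.Linear(feat_dim, 1)   # estimates action progression in [0, 1]

    def forward(self, history_feats):
        # history_feats: (T, D) features of all frames observed so far
        progress = torch.sigmoid(self.progress_head(history_feats[-1]))   # current progression
        T = history_feats.shape[0]
        # Early in the action, look back over a short range; late, over a longer one
        window = max(1, int(T * float(progress)))
        supplementary = history_feats[T - window:]                        # selected temporal range
        return supplementary.mean(dim=0), progress                        # aggregated feature
```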
1 code implementation • NeurIPS 2020 • Fan Lu, Guang Chen, Yinlong Liu, Zhongnan Qu, Alois Knoll
To tackle the information loss of random sampling, we exploit a novel random dilation cluster strategy to enlarge the receptive field of each sampled point and an attention mechanism to aggregate the positions and features of neighbor points.
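A minimal sketch of dilated neighbor selection with attentive aggregation, assuming a kNN-based formulation; the function names and the exact sampling rule are assumptions, not the paper's implementation.

```python
import torch

def random_dilation_cluster(points, centers, k=16, dilation=4):
    """Illustrative dilated neighborhood selection: gather k*dilation nearest
    neighbors, then randomly keep k of them to enlarge the receptive field.
    (Sketch of the general idea, not the paper's exact procedure.)"""
    dists = torch.cdist(centers, points)                    # (M, N) pairwise distances
    idx = dists.topk(k * dilation, largest=False).indices   # (M, k*dilation) nearest candidates
    keep = torch.randperm(k * dilation)[:k]                 # random subset -> dilated receptive field
    return idx[:, keep]                                     # (M, k) neighbor indices

def attentive_aggregate(neighbor_feats, scores):
    """Attention-weighted aggregation of neighbor positions/features.
    neighbor_feats: (M, k, C); scores: (M, k) unnormalized attention logits."""
    weights = torch.softmax(scores, dim=-1).unsqueeze(-1)   # (M, k, 1)
    return (weights * neighbor_feats).sum(dim=1)            # (M, C)
```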
1 code implementation • EMNLP 2020 • Di Wu, Liang Ding, Fan Lu, Jian Xie
Slot filling and intent detection are the two main tasks in a spoken language understanding (SLU) system.
no code implementations • NeurIPS 2020 • Shuhang Chen, Adithya M. Devraj, Fan Lu, Ana Bušić, Sean P. Meyn
Based on multiple experiments with a range of neural network sizes, it is found that the new algorithms converge quickly and are robust to the choice of function approximation architecture.
no code implementations • 16 Apr 2019 • Kong Aik Lee, Ville Hautamaki, Tomi Kinnunen, Hitoshi Yamamoto, Koji Okabe, Ville Vestman, Jing Huang, Guohong Ding, Hanwu Sun, Anthony Larcher, Rohan Kumar Das, Haizhou Li, Mickael Rouvier, Pierre-Michel Bousquet, Wei Rao, Qing Wang, Chunlei Zhang, Fahimeh Bahmaninezhad, Hector Delgado, Jose Patino, Qiongqiong Wang, Ling Guo, Takafumi Koshinaka, Jiacen Zhang, Koichi Shinoda, Trung Ngo Trong, Md Sahidullah, Fan Lu, Yun Tang, Ming Tu, Kah Kuan Teh, Huy Dat Tran, Kuruvachan K. George, Ivan Kukanov, Florent Desnous, Jichen Yang, Emre Yilmaz, Longting Xu, Jean-Francois Bonastre, Cheng-Lin Xu, Zhi Hao Lim, Eng Siong Chng, Shivesh Ranjan, John H. L. Hansen, Massimiliano Todisco, Nicholas Evans
The I4U consortium was established to facilitate a joint entry to NIST speaker recognition evaluations (SRE).