no code implementations • 1 Mar 2025 • Feng Guo, Luis D. Couto, Khiem Trad, Grietus Mulder, Keivan Haghverdi, Guillaume Thenaisie
To minimize model voltage output error and time cost, the {C/2, 1C} profile set is best, while {1C} is ideal for minimizing parameter error and time cost.
no code implementations • 27 Feb 2025 • Feng Guo, Luis D. Couto
This study evaluates numerical discretization methods for lithium-ion battery models, including the Finite Difference Method (FDM), spectral methods, Padé approximation, and parabolic approximation.
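The simplest of the discretization methods compared above can be illustrated directly; below is a minimal FDM sketch, assuming a 1D Fickian diffusion equation c_t = D·c_rr with zero-flux boundaries (the function name, grid parameters, and boundary treatment are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def fdm_diffusion_step(c, D, dr, dt):
    """One explicit finite-difference step of 1D Fickian diffusion,
    c_t = D * c_rr, with zero-flux boundaries (illustrative sketch only).

    Stability requires D * dt / dr**2 <= 0.5 for this explicit scheme.
    """
    c_new = c.copy()
    # Central-difference second derivative on interior nodes
    c_new[1:-1] = c[1:-1] + D * dt / dr**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c_new[0] = c_new[1]    # zero-flux at the inner boundary
    c_new[-1] = c_new[-2]  # zero-flux at the surface (no applied current)
    return c_new

# usage: a uniform concentration profile is a steady state
c = np.full(11, 1.0)
c = fdm_diffusion_step(c, D=1e-14, dr=1e-6, dt=1.0)
```

Spectral and Padé-based schemes trade this simple local stencil for global basis functions or transfer-function fits, which is the accuracy/cost trade-off the study quantifies.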
no code implementations • 1 Oct 2024 • Liang Shi, Boyu Jiang, Tong Zeng, Feng Guo
Accurately identifying, understanding and describing traffic safety-critical events (SCEs), including crashes, tire strikes, and near-crashes, is crucial for advanced driver assistance systems, automated driving systems, and traffic safety.
1 code implementation • 23 Oct 2023 • Hengchang Guo, Qilong Zhang, Junwei Luo, Feng Guo, Wenbin Zhang, Xiaodong Su, Minglei Li
Compared with state-of-the-art approaches, our blind watermarking achieves better performance: it improves bit accuracy by an average of 5.28% and 5.93% against single and combined attacks, respectively, with a smaller file-size increase and better visual quality.
no code implementations • 20 Apr 2023 • Feng Guo, Zheng Sun, Yuxuan Chen, Lei Ju
In this work, we propose a novel method to infer the adversary's intent and discover audio adversarial examples based on the AE generation process.
no code implementations • 18 Apr 2023 • Feng Guo, Zheng Sun, Yuxuan Chen, Lei Ju
In this work, we explore the potential factors that impact adversarial examples (AEs) transferability in DL-based speech recognition.
no code implementations • 22 Jul 2022 • Joseph Sutlive, Hamed Seyyedhosseinzadeh, Zheng Ao, Haning Xiu, Kun Gou, Feng Guo, Zi Chen
Due to the complexity and costs of in vivo and in vitro studies, a variety of computational models have been developed and used to explain the formation and morphogenesis of brain structures.
no code implementations • 4 Feb 2022 • Luyang Liu, David Racz, Kara Vaillancourt, Julie Michelman, Matt Barnes, Stefan Mellem, Paul Eastham, Bradley Green, Charles Armstrong, Rishi Bal, Shawn O'Banion, Feng Guo
Hard-braking events have been widely used as a safety surrogate due to their relatively high prevalence and ease of detection with embedded vehicle sensors.
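Hard-braking detection from embedded vehicle sensors can be as simple as thresholding longitudinal deceleration; the sketch below is a hedged illustration of that idea (the -3 m/s² threshold and function name are assumptions for this example, not the paper's method):

```python
def detect_hard_braking(accel_mps2, threshold=-3.0):
    """Flag sample indices where longitudinal acceleration falls at or
    below a deceleration threshold (hypothetical -3 m/s^2; production
    systems tune this and typically add debouncing/duration filters)."""
    return [i for i, a in enumerate(accel_mps2) if a <= threshold]

# usage: two consecutive samples exceed the deceleration threshold
events = detect_hard_braking([-0.5, -1.2, -3.5, -4.0, -0.3])
```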
no code implementations • 29 Sep 2021 • Feng Guo, Qu Wei, Miao Wang, Zhaoxia Guo
We thus propose a Deep Dynamic Attention Model with Gate Mechanisms (DDAM-GM) to learn heuristics for time-dependent VRPs (TDVRPs) in real-world road networks.
1 code implementation • NeurIPS 2020 • Danni Lu, Chenyang Tao, Junya Chen, Fan Li, Feng Guo, Lawrence Carin
As a step towards more flexible, scalable and accurate ITE estimation, we present a novel generative Bayesian estimation framework that integrates representation learning, adversarial matching and causal estimation.
2 code implementations • 16 Sep 2020 • Qiang Fu, Jialong Wang, Hongshan Yu, Islam Ali, Feng Guo, Yijia He, Hong Zhang
This paper presents PL-VINS, a real-time optimization-based monocular VINS method with point and line features, developed from the state-of-the-art point-based VINS-Mono.
no code implementations • 27 May 2020 • Baoxu Shi, Jaewon Yang, Feng Guo, Qi He
Based on the above promising results, we deployed the model online to extract job-targeting skills for all 20M job postings served at LinkedIn.
3 code implementations • CVPR 2020 • Junjie Huang, Zheng Zhu, Feng Guo, Guan Huang, Dalong Du
Specifically, by investigating the standard data processing in state-of-the-art approaches, mainly coordinate system transformation and keypoint format transformation (i.e., encoding and decoding), we find that the results obtained by the common flipping strategy are unaligned with the original ones during inference.
Ranked #14 on Pose Estimation on COCO test-dev
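A toy sketch of the flip-test misalignment described above: averaging a heatmap with a copy offset by one pixel, the kind of discrete off-by-one a naive flip transform can introduce on a pixel grid, smears the keypoint peak across two columns (array shapes and names here are illustrative, not the paper's code):

```python
import numpy as np

def average_with_misaligned(heatmap):
    """Toy illustration: average a keypoint heatmap with a copy shifted
    one pixel to the right, mimicking the discrete off-by-one that an
    unaligned flip transform introduces. The single sharp peak becomes
    two half-height peaks, degrading the decoded keypoint location."""
    shifted = np.zeros_like(heatmap)
    shifted[:, 1:] = heatmap[:, :-1]  # one-pixel horizontal offset
    return 0.5 * (heatmap + shifted)

# usage: a unit peak at column 2 gets smeared over columns 2 and 3
h = np.zeros((1, 6))
h[0, 2] = 1.0
avg = average_with_misaligned(h)
```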
no code implementations • 6 Aug 2019 • Rongrong Ji, Ke Li, Yan Wang, Xiaoshuai Sun, Feng Guo, Xiaowei Guo, Yongjian Wu, Feiyue Huang, Jiebo Luo
In this paper, we address the problem of monocular depth estimation when only a limited number of training image-depth pairs are available.
no code implementations • ICLR 2020 • Qian Lou, Feng Guo, Lantao Liu, Minje Kim, Lei Jiang
Recent network quantization techniques quantize each weight kernel in a convolutional layer independently for higher inference accuracy, since the weight kernels in a layer exhibit different variances and hence have different amounts of redundancy.
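Per-kernel quantization as described can be sketched as follows; this is a hedged example assuming a symmetric uniform quantizer and an (out_ch, in_ch, kh, kw) weight layout (function name and details are illustrative assumptions, not the paper's scheme):

```python
import numpy as np

def quantize_per_kernel(weights, bits=4):
    """Quantize each output-channel kernel of a conv weight tensor
    (out_ch, in_ch, kh, kw) with its own scale, reflecting that kernels
    in a layer have different variances (symmetric uniform sketch)."""
    qmax = 2 ** (bits - 1) - 1
    out = np.empty_like(weights, dtype=np.float32)
    for k in range(weights.shape[0]):
        # Per-kernel scale so each kernel uses the full integer range
        scale = np.abs(weights[k]).max() / qmax or 1.0
        q = np.clip(np.round(weights[k] / scale), -qmax - 1, qmax)
        out[k] = q * scale  # dequantized view for inspection
    return out

# usage: per-kernel scales keep the rounding error bounded per kernel
w = np.random.RandomState(0).randn(4, 2, 3, 3).astype(np.float32)
dq = quantize_per_kernel(w, bits=8)
```

Using one scale per kernel, rather than one per layer, is what lets kernels with small variance keep fine resolution instead of being dominated by the layer's largest weights.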