no code implementations • 10 May 2024 • Xiaohan Zhang, Yukui Qiu, Zhenyu Sun, Qi Liu
To that end, we propose Aerial-NeRF with three innovative modifications for adapting NeRF to large-scale aerial rendering: (1) designing an adaptive spatial partitioning and selection method based on drones' poses to accommodate different flight trajectories; (2) using pose similarity, rather than an (expert) network, to determine which region a new viewpoint belongs to, which speeds up rendering (a sketch follows below); (3) developing an adaptive sampling approach that covers entire buildings at different heights, improving rendering performance.
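As an illustration of modification (2), below is a minimal sketch in Python of how pose-similarity region selection might look. The `assign_region` function, the Euclidean distance between camera positions as the similarity measure, and the nearest-training-pose criterion are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def assign_region(new_pose, region_poses):
    """Assign a new viewpoint to the region whose training poses are
    most similar (assumed metric: Euclidean distance between camera
    positions; the paper's actual similarity measure may differ)."""
    # region_poses: dict mapping region id -> (N_i, 3) array of drone
    # camera positions used to train that region's sub-NeRF.
    scores = {}
    for region_id, poses in region_poses.items():
        # Similarity proxy: distance from the new viewpoint to the
        # nearest training pose in this region.
        scores[region_id] = np.min(np.linalg.norm(poses - new_pose, axis=1))
    return min(scores, key=scores.get)

# Toy usage: two regions, each with a few drone camera positions.
regions = {
    "region_a": np.array([[0.0, 0.0, 50.0], [10.0, 0.0, 50.0]]),
    "region_b": np.array([[100.0, 0.0, 60.0], [110.0, 5.0, 60.0]]),
}
print(assign_region(np.array([5.0, 1.0, 52.0]), regions))  # -> region_a
```

Because the assignment reduces to a nearest-neighbor lookup over stored poses, it avoids a forward pass through a gating network at test time, which is consistent with the rendering-speedup claim.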
no code implementations • 22 Mar 2024 • Zhenyu Sun, Ermin Wei
Leveraging gradient clipping and variance reduction, our algorithm achieves the best-known $\mathcal{O}(\epsilon^{-3})$ sample complexity and enjoys a convergence speedup with simple hyperparameter tuning.
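For concreteness, here is a minimal sketch of one way gradient clipping can be combined with a STORM-style variance-reduced estimator; the recursive update, the names (`clipped_vr_step`, `eta`, `beta`, `tau`), and the toy quadratic loss are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def clip(v, tau):
    """Rescale v so that its Euclidean norm is at most tau."""
    norm = np.linalg.norm(v)
    return v if norm <= tau else v * (tau / norm)

def clipped_vr_step(x, x_prev, d_prev, stoch_grad, sample,
                    eta=0.01, beta=0.9, tau=1.0):
    """One STORM-style variance-reduced step with gradient clipping.
    stoch_grad(x, sample) returns a stochastic gradient; reusing the
    same sample at x and x_prev is what reduces the variance."""
    g_new = stoch_grad(x, sample)
    g_old = stoch_grad(x_prev, sample)
    # Recursive estimator: d_t = g_t + (1 - beta) * (d_{t-1} - g_{t-1})
    d = g_new + (1.0 - beta) * (d_prev - g_old)
    x_new = x - eta * clip(d, tau)  # clip the estimator, then descend
    return x_new, d

# Toy usage: noisy quadratic f(x) = 0.5 * ||x||^2.
rng = np.random.default_rng(0)
grad = lambda x, s: x + 0.1 * s  # sample s plays the role of noise
x_prev = np.ones(5)
d = grad(x_prev, rng.standard_normal(5))
x = x_prev - 0.01 * clip(d, 1.0)
for _ in range(200):
    x_new, d = clipped_vr_step(x, x_prev, d, grad, rng.standard_normal(5))
    x_prev, x = x, x_new
print(np.linalg.norm(x))  # shrinks toward the optimum at zero
```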
no code implementations • 2 Jul 2023 • Jianxun Ren, Ning An, Youjia Zhang, Danyang Wang, Zhenyu Sun, Cong Lin, Weigang Cui, Weiwei Wang, Ying Zhou, Wei Zhang, Qingyu Hu, Ping Zhang, Dan Hu, Danhong Wang, Hesheng Liu
Cortical surface registration plays a crucial role in aligning cortical functional and anatomical features across individuals.
no code implementations • 22 Jun 2023 • Chao Shen, Wenkang Zhan, Kaiyao Xin, Manyang Li, Zhenyu Sun, Hui Cong, Chi Xu, Jian Tang, Zhaofeng Wu, Bo Xu, Zhongming Wei, Chunlai Xue, Chao Zhao, Zhanguo Wang
Self-assembled InAs/GaAs quantum dots (QDs) have properties that are highly valuable for developing various optoelectronic devices, such as QD lasers and single-photon sources.
no code implementations • 6 Jun 2023 • Zhenyu Sun, Xiaochun Niu, Ermin Wei
While the existing literature has extensively studied the generalization performance of centralized machine learning algorithms, comparable analysis in federated settings is either absent or relies on very restrictive assumptions on the loss functions.
no code implementations • 2 Jun 2022 • Zhenyu Sun, Ermin Wei
Most existing federated minimax algorithms either require communication at every iteration or lack performance guarantees, with the exception of Local Stochastic Gradient Descent Ascent (SGDA), a multiple-local-update descent-ascent algorithm that guarantees convergence under a diminishing stepsize.
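For reference, a minimal sketch of the Local SGDA template described above: each client takes several local descent-ascent steps, the server averages once per round, and the stepsize is diminishing. The `clients` oracle interface and all hyperparameter values are hypothetical.

```python
import numpy as np

def local_sgda(clients, x, y, rounds=50, local_steps=5, eta0=0.1):
    """Local SGDA sketch for min_x max_y (1/n) * sum_i f_i(x, y).
    Each client exposes stoch_grads(x, y) -> (g_x, g_y), a stochastic
    gradient oracle (assumed interface)."""
    for r in range(rounds):
        eta = eta0 / np.sqrt(r + 1)  # diminishing stepsize
        xs, ys = [], []
        for client in clients:
            xi, yi = x.copy(), y.copy()
            for _ in range(local_steps):
                g_x, g_y = client.stoch_grads(xi, yi)
                xi -= eta * g_x  # descent on the min variable
                yi += eta * g_y  # ascent on the max variable
            xs.append(xi)
            ys.append(yi)
        # Communication happens only here, once per round rather than
        # per iteration: the server averages the local iterates.
        x = np.mean(xs, axis=0)
        y = np.mean(ys, axis=0)
    return x, y
```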