no code implementations • 14 Nov 2024 • Xiongye Xiao, Shixuan Li, Luzhe Huang, Gengshuo Liu, Trung-Kien Nguyen, Yi Huang, Di Chang, Mykel J. Kochenderfer, Paul Bogdan
While working in the spatial domain can lead to ill-conditioned scores caused by power-law decay, recent advances in diffusion-based generative models have shown that transitioning to the wavelet domain offers a promising alternative.
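A minimal sketch (not the paper's implementation) of moving image data into the wavelet domain with PyWavelets, assuming a grayscale image array; a score or denoising network would then operate on the sub-band coefficients rather than raw pixels:

```python
# Minimal sketch (not the paper's code): decompose an image into wavelet
# sub-bands on which a diffusion model could operate.
import numpy as np
import pywt  # PyWavelets

image = np.random.rand(256, 256)  # stand-in for a grayscale image

# Single-level 2D discrete wavelet transform: one low-frequency approximation
# band and three high-frequency detail bands (horizontal, vertical, diagonal).
cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')

# The inverse transform maps (possibly generated) coefficients back to pixels.
reconstructed = pywt.idwt2((cA, (cH, cV, cD)), 'haar')
assert np.allclose(reconstructed, image)
```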
no code implementations • 7 Jul 2024 • Luzhe Huang, Xiongye Xiao, Shixuan Li, Jiawen Sun, Yi Huang, Aydogan Ozcan, Paul Bogdan
Advances in diffusion-based generative models in recent years have revolutionized state-of-the-art (SOTA) techniques in a wide variety of image analysis and synthesis tasks, yet their adaptation to image restoration, particularly within computational microscopy, remains theoretically and empirically underexplored.
no code implementations • 23 May 2024 • Shukai Duan, Heng Ping, Nikos Kanakaris, Xiongye Xiao, Peiyu Zhang, Panagiotis Kyriakis, Nesreen K. Ahmed, Guixiang Ma, Mihai Capota, Shahin Nazarian, Theodore L. Willke, Paul Bogdan
To bridge the gap between encoder-placer and grouper-placer techniques, we propose a novel framework for the task of device placement that uses reinforcement learning over smaller computation graphs extracted from the OpenVINO toolkit.
1 code implementation • 15 Apr 2024 • Xiongye Xiao, Gengshuo Liu, Gaurav Gupta, Defu Cao, Shixuan Li, Yaxing Li, Tianqing Fang, Mingxi Cheng, Paul Bogdan
Integrating and processing information from various sources or modalities are critical for obtaining a comprehensive and accurate perception of the real world in autonomous systems and cyber-physical systems.
1 code implementation • 14 Feb 2024 • Xiongye Xiao, Chenyu Zhou, Heng Ping, Defu Cao, Yaxing Li, Yizhuo Zhou, Shixuan Li, Paul Bogdan
Prior studies on emergence in large models have primarily focused on how the functional capabilities of large language models (LLMs) scale with model size.
1 code implementation • 20 Dec 2023 • Anzhe Cheng, Zhenkun Wang, Chenzhong Yin, Mingxi Cheng, Heng Ping, Xiongye Xiao, Shahin Nazarian, Paul Bogdan
This includes decisions on how to decouple network blocks and which auxiliary networks to use for each block.
no code implementations • 19 Dec 2023 • Chenzhong Yin, Hantang Zhang, Mingxi Cheng, Xiongye Xiao, Xinghe Chen, Xin Ren, Paul Bogdan
Malware represents a significant security concern in today's digital landscape, as it can destroy or disable operating systems, steal sensitive user information, and occupy valuable disk space.
no code implementations • 9 Dec 2023 • Shukai Duan, Nikos Kanakaris, Xiongye Xiao, Heng Ping, Chenyu Zhou, Nesreen K. Ahmed, Guixiang Ma, Mihai Capota, Theodore L. Willke, Shahin Nazarian, Paul Bogdan
We compare our framework with existing state-of-the-art models and show that it is more efficient in terms of speed and computational usage, as a result of the reduced number of training steps and its applicability to models with fewer parameters.
no code implementations • 11 Oct 2023 • Chenzhong Yin, Mingxi Cheng, Xiongye Xiao, Xinghe Chen, Shahin Nazarian, Andrei Irimia, Paul Bogdan
Motivated by the intricacy of these collectives, we propose a neural network (NN) architecture inspired by the rules observed in nature's collective ensembles.
no code implementations • 27 Sep 2023 • Xiongye Xiao, Gengshuo Liu, Gaurav Gupta, Defu Cao, Shixuan Li, Yaxing Li, Tianqing Fang, Mingxi Cheng, Paul Bogdan
Integrating and processing information from various sources or modalities are critical for obtaining a comprehensive and accurate perception of the real world.
1 code implementation • 4 Mar 2023 • Xiongye Xiao, Defu Cao, Ruochen Yang, Gaurav Gupta, Gengshuo Liu, Chenzhong Yin, Radu Balan, Paul Bogdan
Solving coupled partial differential equations (PDEs) is a key task in modeling the complex dynamics of many physical processes.
no code implementations • ICLR 2022 • Gaurav Gupta, Xiongye Xiao, Radu Balan, Paul Bogdan
The Padé exponential operator uses a recurrent structure with shared parameters to model the non-linearity, in contrast to recent neural operators that rely on applying multiple linear operator layers in succession.
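A toy sketch, hypothetical and not the paper's Padé operator, contrasting a recurrent block that reuses one set of weights with a stack of distinct layers; module names and widths are illustrative assumptions:

```python
# Hypothetical illustration: shared-parameter recurrence vs. stacked layers.
import torch
import torch.nn as nn

class SharedRecurrentBlock(nn.Module):
    """Applies the same nonlinear layer K times, reusing one set of weights."""
    def __init__(self, width: int, steps: int):
        super().__init__()
        self.layer = nn.Linear(width, width)  # shared parameters
        self.steps = steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for _ in range(self.steps):
            x = torch.tanh(self.layer(x))
        return x

class StackedBlocks(nn.Module):
    """Applies K distinct layers in succession, one set of weights per layer."""
    def __init__(self, width: int, steps: int):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(width, width) for _ in range(steps))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            x = torch.tanh(layer(x))
        return x

x = torch.randn(8, 64)
shared, stacked = SharedRecurrentBlock(64, 4), StackedBlocks(64, 4)
print(shared(x).shape, stacked(x).shape)  # same output shape; ~4x fewer weights in the shared block
```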
1 code implementation • NeurIPS 2021 • Gaurav Gupta, Xiongye Xiao, Paul Bogdan
The solution of a partial differential equation can be obtained by computing the inverse operator map between the input and the solution space.
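A generic illustration of that idea (not the multiwavelet operator from the paper): learn a map from a discretized input function to the corresponding solution on the same grid. The synthetic "solution operator" below is simple cumulative integration and is purely a stand-in:

```python
# Generic operator-learning sketch: map discretized f(x) to the solution u(x).
import torch
import torch.nn as nn

n_grid = 64  # points discretizing a 1D domain

# Hypothetical data: each row of 'sources' is a sampled input function f, and
# the matching row of 'solutions' is a stand-in solution (here, its integral).
sources = torch.randn(512, n_grid)
solutions = torch.cumsum(sources, dim=1) / n_grid

model = nn.Sequential(nn.Linear(n_grid, 256), nn.GELU(), nn.Linear(256, n_grid))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(sources), solutions)
    loss.backward()
    opt.step()
```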