no code implementations • 1 Oct 2024 • Zhipeng Li, Xiaofen Xing, Yuanbo Fang, Weibin Zhang, Hengsheng Fan, Xiangmin Xu
Speech emotion recognition plays a crucial role in human-machine interaction systems.
no code implementations • 9 Sep 2024 • Zhipeng Li, Xiaofen Xing, Jun Wang, Shuaiqi Chen, Guoqiao Yu, Guanglu Wan, Xiangmin Xu
In recent years, there has been significant progress in Text-to-Speech (TTS) synthesis technology, enabling the high-quality synthesis of voices in common scenarios.
no code implementations • 21 Jun 2024 • Zhipeng Li, Sichao Li, Geoff Hinchcliffe, Noam Maitless, Nick Birbilis
The determination of space layout is one of the primary activities in the schematic design stage of an architectural project.
no code implementations • 14 May 2024 • Hanguang Xiao, Feizhong Zhou, Xingyue Liu, Tianqi Liu, Zhipeng Li, Xin Liu, Xiaoxuan Huang
Finally, the survey addresses the challenges confronting medical LLMs and MLLMs and proposes practical strategies and future directions for their integration into medicine.
no code implementations • 3 Mar 2024 • Adiba Orzikulova, Han Xiao, Zhipeng Li, Yukang Yan, Yuntao Wang, Yuanchun Shi, Marzyeh Ghassemi, Sung-Ju Lee, Anind K. Dey, Xuhai "Orson" Xu
Participants preferred the adaptive interventions and rated the system highly on intervention time accuracy, effectiveness, and level of trust.
no code implementations • 13 Dec 2023 • Xiaobo Zhu, Yan Wu, Zhipeng Li, Hailong Su, Jin Che, Zhanheng Chen, Liying Wang
Recently, representation learning over graph networks has gained popularity, with various models showing promising results.
no code implementations • 16 May 2023 • Yifan Wang, Minhao Zhang, Xingbin Tu, Zhipeng Li, Fengzhong Qu, Yan Wei
Block transmission systems have been proven successful over frequency-selective channels.
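The reason block transmission copes well with frequency-selective channels is standard: a cyclic prefix turns the channel's linear convolution into a circular one, so a one-tap frequency-domain equalizer inverts it per subcarrier. The NumPy sketch below illustrates only this textbook mechanism, not the paper's receiver; the channel taps, block length, and prefix length are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L_cp = 64, 8                          # block length, cyclic-prefix length (assumed)
h = np.array([0.9, 0.4, 0.2])            # made-up frequency-selective channel taps

# One block of QPSK symbols
bits = rng.integers(0, 2, size=(N, 2))
x = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Prepend the cyclic prefix, pass through the channel, drop the prefix at the receiver
tx = np.concatenate([x[-L_cp:], x])
rx = np.convolve(tx, h)
y = rx[L_cp:L_cp + N]

# With the prefix removed, the channel acts as a circular convolution,
# so one complex tap per frequency bin recovers the block (noiseless case)
H = np.fft.fft(h, N)
x_hat = np.fft.ifft(np.fft.fft(y) / H)
print(np.max(np.abs(x_hat - x)))         # ~1e-15
```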
2 code implementations • 20 Feb 2021 • Yong Guo, Yin Zheng, Mingkui Tan, Qi Chen, Zhipeng Li, Jian Chen, Peilin Zhao, Junzhou Huang
To address this issue, we propose a Neural Architecture Transformer++ (NAT++) method which further enlarges the set of candidate transitions to improve the performance of architecture optimization.
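The abstract does not spell out the enlarged transition set, so the snippet below only illustrates the general idea of architecture optimization by operation transitions: each operation in a candidate architecture may be replaced by any entry in an allowed-transition table, and enlarging that table enlarges the search neighborhood. The table and operation names here are hypothetical, not NAT++'s actual transition set.

```python
# Hypothetical operation-transition table; a larger table means more
# reachable replacement operations per edit.
TRANSITIONS = {
    "conv_3x3": ["sep_conv_3x3", "conv_1x1", "skip", "none"],
    "conv_5x5": ["conv_3x3", "sep_conv_5x5", "skip", "none"],
    "skip":     ["none"],
    "none":     [],
}

def neighbor_architectures(arch):
    """Enumerate architectures reachable by changing a single operation,
    where an architecture is simply a list of operation names here."""
    for i, op in enumerate(arch):
        for new_op in TRANSITIONS.get(op, []):
            yield arch[:i] + [new_op] + arch[i + 1:]

arch = ["conv_3x3", "skip", "conv_5x5"]
print(len(list(neighbor_architectures(arch))))   # 4 + 1 + 4 = 9 candidates
```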
no code implementations • 2 Dec 2020 • Zhipeng Li, Yong Long, Il Yong Chun
We propose a new INN architecture for DECT material decomposition.
no code implementations • 6 Oct 2020 • Siqi Ye, Zhipeng Li, Michael T. McCann, Yong Long, Saiprasad Ravishankar
The proposed learning formulation combines unsupervised learning-based priors (or even simple analytical priors) with (supervised) deep network-based priors in a unified MBIR framework based on a fixed-point iteration analysis.
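As a rough illustration of such a unified formulation (not the paper's algorithm), the fixed-point sketch below alternates a gradient step on the data-fidelity term with a pull toward the output of a pretrained network prior; the `denoiser` callable, the toy system matrix, and all parameter values are stand-ins.

```python
import numpy as np

def unified_mbir(y, A, denoiser, beta=0.1, step=0.5, n_iter=200):
    """Sketch of a fixed-point MBIR iteration: a gradient step on the
    data-fidelity term 0.5*||A x - y||^2 plus a pull toward a (pretrained)
    network prior. `denoiser` is a hypothetical stand-in for the supervised
    deep prior; an analytical prior could be added the same way."""
    x = A.T @ y                                   # crude back-projection start
    for _ in range(n_iter):
        grad_fid = A.T @ (A @ x - y)              # data-fidelity gradient
        x_prior = denoiser(x)                     # network-based prior estimate
        x = x - step * (grad_fid + beta * (x - x_prior))
    return x

# Toy usage: random "system matrix" and a shrinkage denoiser stand-in
rng = np.random.default_rng(0)
A = rng.normal(size=(120, 64)) / np.sqrt(120)
x_true = rng.normal(size=64)
x_rec = unified_mbir(A @ x_true, A, denoiser=lambda v: 0.9 * v)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```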
no code implementations • 26 Oct 2019 • Zhipeng Li, Siqi Ye, Yong Long, Saiprasad Ravishankar
Recent works have shown the promising reconstruction performance of methods such as PWLS-ULTRA that rely on clustering the underlying (reconstructed) image patches into a learned union of transforms.
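A minimal sketch of the clustering idea behind such a union-of-transforms prior, assuming the K square transforms have already been learned: each vectorized patch is assigned to whichever transform sparsifies it best under hard thresholding. The cost and shapes below are illustrative, not the exact PWLS-ULTRA objective.

```python
import numpy as np

def cluster_patches(patches, transforms, gamma=0.1):
    """Assign each patch to the pre-learned transform that sparsifies it
    best, using the cost ||W_k p - z||^2 + gamma^2 * ||z||_0 with z the
    hard-thresholded coefficients (illustrative formulation)."""
    labels, codes = [], []
    for p in patches:
        costs, cands = [], []
        for W in transforms:
            z = W @ p
            z_thr = z * (np.abs(z) >= gamma)            # hard thresholding
            costs.append(np.sum((z - z_thr) ** 2)
                         + gamma**2 * np.count_nonzero(z_thr))
            cands.append(z_thr)
        k = int(np.argmin(costs))
        labels.append(k)
        codes.append(cands[k])
    return np.array(labels), codes

# Toy usage: two random orthonormal "learned" transforms, a few 8x8 patches
rng = np.random.default_rng(0)
transforms = [np.linalg.qr(rng.normal(size=(64, 64)))[0] for _ in range(2)]
patches = rng.normal(size=(5, 64))
labels, _ = cluster_patches(patches, transforms)
print(labels)
```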
no code implementations • 18 Jul 2019 • Zhipeng Li, Jianwei Wu, Lin Sun, Tao Rong
In sponsored search, keyword recommendations help advertisers achieve much better performance within a limited budget.
no code implementations • 1 Jan 2019 • Zhipeng Li, Saiprasad Ravishankar, Yong Long, Jeffrey A. Fessler
Dual energy computed tomography (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability.
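To make "material decomposition" concrete: in the simplest image-domain view, each pixel's low- and high-energy attenuation values are modeled as a linear combination of two basis materials and the 2x2 system is inverted per pixel. The sketch below uses placeholder mass-attenuation coefficients and is not the decomposition method proposed in the paper.

```python
import numpy as np

# Placeholder mass-attenuation coefficients (cm^2/g) of water and bone at the
# low- and high-kVp effective energies; the numbers are illustrative only.
M = np.array([[0.27, 0.66],     # low-kVp:  [water, bone]
              [0.21, 0.31]])    # high-kVp: [water, bone]

def decompose(mu_low, mu_high):
    """Image-domain two-material decomposition: per pixel, solve the 2x2
    system [mu_low, mu_high]^T = M @ [rho_water, rho_bone]^T."""
    mu = np.stack([mu_low.ravel(), mu_high.ravel()])     # shape (2, n_pixels)
    rho = np.linalg.solve(M, mu)
    return rho[0].reshape(mu_low.shape), rho[1].reshape(mu_low.shape)

# Toy usage on a 2x2 "image" of pure water
rho_w, rho_b = decompose(np.full((2, 2), 0.27), np.full((2, 2), 0.21))
print(rho_w, rho_b)    # ~1 g/cm^3 water, ~0 bone for these inputs
```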
no code implementations • 2 Nov 2017 • Xuehang Zheng, Il Yong Chun, Zhipeng Li, Yong Long, Jeffrey A. Fessler
Our results with the extended cardiac-torso (XCAT) phantom data and clinical chest data show that, for sparse-view 2D fan-beam CT and 3D axial cone-beam CT, PWLS-ST-$\ell_1$ improves the quality of reconstructed images compared to CT reconstruction methods using an edge-preserving regularizer and an $\ell_2$ prior with a learned ST.