Search Results for author: Zhipeng Li

Found 14 papers, 1 paper with code

AS-Speech: Adaptive Style For Speech Synthesis

no code implementations • 9 Sep 2024 • Zhipeng Li, Xiaofen Xing, Jun Wang, Shuaiqi Chen, Guoqiao Yu, Guanglu Wan, Xiangmin Xu

In recent years, there has been significant progress in Text-to-Speech (TTS) synthesis technology, enabling the high-quality synthesis of voices in common scenarios.

Tasks: Rhythm, Speech Synthesis, +3 more

Automated architectural space layout planning using a physics-inspired generative design framework

no code implementations • 21 Jun 2024 • Zhipeng Li, Sichao Li, Geoff Hinchcliffe, Noam Maitless, Nick Birbilis

The determination of space layout is one of the primary activities in the schematic design stage of an architectural project.

A Comprehensive Survey of Large Language Models and Multimodal Large Language Models in Medicine

no code implementations • 14 May 2024 • Hanguang Xiao, Feizhong Zhou, Xingyue Liu, Tianqi Liu, Zhipeng Li, Xin Liu, Xiaoxuan Huang

Finally, the survey addresses the challenges confronting medical LLMs and MLLMs and proposes practical strategies and future directions for their integration into medicine.

Tasks: Survey

Time2Stop: Adaptive and Explainable Human-AI Loop for Smartphone Overuse Intervention

no code implementations • 3 Mar 2024 • Adiba Orzikulova, Han Xiao, Zhipeng Li, Yukang Yan, Yuntao Wang, Yuanchun Shi, Marzyeh Ghassemi, Sung-Ju Lee, Anind K. Dey, Xuhai "Orson" Xu

Participants preferred the adaptive interventions and rated the system highly on intervention time accuracy, effectiveness, and level of trust.

Multi-perspective Feedback-attention Coupling Model for Continuous-time Dynamic Graphs

no code implementations • 13 Dec 2023 • Xiaobo Zhu, Yan Wu, Zhipeng Li, Hailong Su, Jin Che, Zhanheng Chen, Liying Wang

Recently, representation learning over graph networks has gained popularity, with various models showing promising results.

Tasks: Representation Learning

Towards Accurate and Compact Architectures via Neural Architecture Transformer

2 code implementations • 20 Feb 2021 • Yong Guo, Yin Zheng, Mingkui Tan, Qi Chen, Zhipeng Li, Jian Chen, Peilin Zhao, Junzhou Huang

To address this issue, we propose a Neural Architecture Transformer++ (NAT++) method that further enlarges the set of candidate transitions to improve the performance of architecture optimization.

Tasks: Neural Architecture Search
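
The abstract above casts architecture optimization as replacing operations with candidates drawn from a transition set. Below is a minimal Python sketch of that idea under stated assumptions: the operation names, the enlarged transition table, and the greedy hill-climbing loop are illustrative stand-ins for the paper's learned policy, not NAT++'s actual method.

    # Illustrative sketch of operation-transition architecture
    # optimization in the spirit of NAT/NAT++. The transition table
    # and greedy search below are assumptions standing in for the
    # paper's learned policy.
    TRANSITIONS = {
        # NAT-style "cheapening" moves plus enlarged NAT++-style swaps
        "sep_conv_5x5": ["sep_conv_3x3", "skip_connect", "none"],
        "sep_conv_3x3": ["dil_conv_3x3", "skip_connect", "none"],
        "dil_conv_3x3": ["skip_connect", "none"],
        "skip_connect": ["none"],
        "none": [],
    }

    def neighbors(arch):
        """Enumerate architectures reachable by changing one operation."""
        for i, op in enumerate(arch):
            for new_op in TRANSITIONS.get(op, []):
                yield arch[:i] + [new_op] + arch[i + 1:]

    def optimize(arch, score, steps=10):
        """Keep the best single-operation transition at each step."""
        for _ in range(steps):
            best = max(neighbors(arch), key=score, default=None)
            if best is None or score(best) <= score(arch):
                break
            arch = best
        return arch

Here `score` would trade off validation accuracy against model cost; per the abstract, NAT++'s contribution is the enlarged candidate transition set rather than any particular search loop.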

Unified Supervised-Unsupervised (SUPER) Learning for X-ray CT Image Reconstruction

no code implementations • 6 Oct 2020 • Siqi Ye, Zhipeng Li, Michael T. McCann, Yong Long, Saiprasad Ravishankar

The proposed learning formulation combines unsupervised learning-based priors (or even simple analytical priors) with (supervised) deep network-based priors in a unified MBIR framework based on a fixed-point iteration analysis.

Tasks: Computed Tomography (CT), Image Reconstruction
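
The abstract above describes alternating supervised network-based and unsupervised prior-based updates under a fixed-point view. The following Python sketch shows one way such an alternation can be organized; the operator names (`A`, `At`, `supervised_net`, `unsup_prior_grad`), the gradient-descent inner solver, and the proximity weight `mu` are assumptions for illustration, not the paper's exact algorithm.

    import numpy as np

    def super_reconstruct(y, A, At, W, supervised_net, unsup_prior_grad,
                          x0, n_super=5, n_inner=20, mu=1.0, step=1e-3):
        """Sketch of a SUPER-style outer loop: apply a supervised
        network, then take MBIR-style gradient steps on a weighted
        least-squares cost with an unsupervised prior and a proximity
        term to the network output (illustrative only)."""
        x = x0
        for _ in range(n_super):
            x_net = supervised_net(x)              # supervised deep prior
            for _ in range(n_inner):               # unsupervised MBIR module
                grad = (At(W * (A(x) - y))         # weighted data fidelity
                        + unsup_prior_grad(x)      # analytical/learned prior
                        + mu * (x - x_net))        # stay near network output
                x = np.maximum(x - step * grad, 0.0)  # nonnegativity
        return x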

SUPER Learning: A Supervised-Unsupervised Framework for Low-Dose CT Image Reconstruction

no code implementations • 26 Oct 2019 • Zhipeng Li, Siqi Ye, Yong Long, Saiprasad Ravishankar

Recent works have shown the promising reconstruction performance of methods such as PWLS-ULTRA that rely on clustering the underlying (reconstructed) image patches into a learned union of transforms.

Tasks: Clustering, Image Reconstruction, +1 more
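
The clustering step mentioned above assigns each image patch to the transform in a learned union that sparsifies it best. A minimal numpy sketch of such an assignment follows; the hard-thresholding sparsity step and the sparsification-error criterion are standard choices assumed here, not necessarily the paper's exact rule.

    import numpy as np

    def cluster_patches(patches, transforms, thresh):
        """Assign each vectorized patch (rows of `patches`, shape (n, d))
        to the square transform in `transforms` (shape (K, d, d)) with
        the smallest sparsification error under hard thresholding."""
        labels = np.empty(len(patches), dtype=int)
        for i, p in enumerate(patches):
            errs = []
            for Omega in transforms:
                c = Omega @ p
                z = c * (np.abs(c) >= thresh)       # hard-threshold code
                errs.append(np.sum((c - z) ** 2))   # sparsification error
            labels[i] = int(np.argmin(errs))
        return labels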

Combinatorial Keyword Recommendations for Sponsored Search with Deep Reinforcement Learning

no code implementations • 18 Jul 2019 • Zhipeng Li, Jianwei Wu, Lin Sun, Tao Rong

In sponsored search, keyword recommendations help advertisers achieve much better performance within a limited budget.

Tasks: Clustering, Combinatorial Optimization, +3 more

DECT-MULTRA: Dual-Energy CT Image Decomposition With Learned Mixed Material Models and Efficient Clustering

no code implementations • 1 Jan 2019 • Zhipeng Li, Saiprasad Ravishankar, Yong Long, Jeffrey A. Fessler

Dual energy computed tomography (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability.

Tasks: Clustering

Sparse-View X-Ray CT Reconstruction Using $\ell_1$ Prior with Learned Transform

no code implementations • 2 Nov 2017 • Xuehang Zheng, Il Yong Chun, Zhipeng Li, Yong Long, Jeffrey A. Fessler

Our results with extended cardiac-torso (XCAT) phantom data and clinical chest data show that, for sparse-view 2D fan-beam CT and 3D axial cone-beam CT, PWLS-ST-$\ell_1$ improves reconstructed image quality compared to CT reconstruction methods using an edge-preserving regularizer or an $\ell_2$ prior with a learned ST.

Tasks: Computed Tomography (CT), CT Reconstruction, +1 more
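
For orientation, here is a plausible form of the cost implied by the title, in the inline math notation the listing already uses: penalized weighted least squares (PWLS) with a learned sparsifying transform $\Omega$ and an $\ell_1$ sparsity prior. The patch handling, weighting, and constraints below are assumptions; consult the paper for the precise formulation.

    \hat{x} = \arg\min_{x \ge 0} \frac{1}{2} \| y - A x \|_W^2
              + \beta \min_{\{z_j\}} \sum_j \Big( \frac{1}{2} \| \Omega P_j x - z_j \|_2^2
              + \gamma \| z_j \|_1 \Big)

Here $A$ is the CT system matrix, $W$ a statistical weighting matrix, $P_j$ the operator extracting the $j$-th image patch, and $z_j$ the sparse codes penalized by the $\ell_1$ norm.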
