Search Results for author: Chang Tian

Found 5 papers, 2 papers with code

Improving the Performance of R17 Type-II Codebook with Deep Learning

no code implementations · 13 Sep 2023 · Ke Ma, Yiliang Sang, Yang Ming, Jin Lian, Chang Tian, Zhaocheng Wang

The Type-II codebook in Release 17 (R17) exploits the angular-delay-domain partial reciprocity between uplink and downlink channels to select a subset of angular-delay-domain ports for measuring and feeding back the downlink channel state information (CSI); in this setting, the performance of existing deep-learning-enhanced CSI feedback methods is limited by the lack of sparse structure.
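To make the port-selection mechanism above concrete, here is a minimal numpy sketch: project a channel estimate into the angular-delay domain with DFTs and keep the strongest ports. The array sizes, DFT conventions, and magnitude-based selection rule are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Illustrative sketch only: dimensions, DFT conventions, and the
# magnitude-based selection rule are assumptions, not the paper's method.
rng = np.random.default_rng(0)
n_tx, n_sc = 32, 64    # transmit antennas, subcarriers (assumed sizes)
H = rng.standard_normal((n_tx, n_sc)) + 1j * rng.standard_normal((n_tx, n_sc))

# DFT across antennas (angle) and IDFT across subcarriers (delay)
# projects the channel into the angular-delay domain.
H_ad = np.fft.ifft(np.fft.fft(H, axis=0), axis=1)

# Keep the K ports with the largest power; only their coefficients
# would be measured and fed back.
K = 8
flat_idx = np.argsort(np.abs(H_ad) ** 2, axis=None)[-K:]
ports = np.column_stack(np.unravel_index(flat_idx, H_ad.shape))
coeffs = H_ad[ports[:, 0], ports[:, 1]]
print(ports)
```

Only the K retained coefficients are fed back, which suggests why the resulting feedback carries little of the sparse structure that earlier deep learning CSI methods relied on.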

Deep Learning Empowered Type-II Codebook: New Paradigm for Enhancing CSI Feedback

no code implementations · 14 May 2023 · Ke Ma, Yiliang Sang, Yang Ming, Jin Lian, Chang Tian, Zhaocheng Wang

In contrast to its counterpart in Release 16, the Type-II codebook in Release 17 (R17) exploits the angular-delay-domain partial reciprocity between uplink and downlink channels and selects a subset of angular-delay-domain ports for measuring and feeding back the downlink CSI; in this setting, the performance of conventional deep learning methods is limited by the lack of sparse structure.
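The abstract does not spell out the proposed paradigm's architecture. Purely as a hypothetical illustration of enhancing port-selected feedback with a network, here is a small PyTorch sketch that maps the fed-back coefficients of the K selected ports to a refined full angular-delay-domain channel; all layer sizes and the loss are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical illustration: architecture, sizes, and loss are assumptions;
# the paper's actual model is not described in this listing.
K, n_tx, n_sc = 8, 32, 64

class PortRefiner(nn.Module):
    """Maps fed-back coefficients of K selected ports (real/imag parts)
    to a refined full angular-delay-domain channel."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * K, 256), nn.ReLU(),
            nn.Linear(256, 2 * n_tx * n_sc),
        )

    def forward(self, port_coeffs):            # (batch, 2K)
        return self.net(port_coeffs).view(-1, 2, n_tx, n_sc)

model = PortRefiner()
x = torch.randn(4, 2 * K)                      # dummy feedback batch
print(model(x).shape)                          # torch.Size([4, 2, 32, 64])
# Training would regress against the true downlink channel, e.g. nn.MSELoss().
```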


Paint4Poem: A Dataset for Artistic Visualization of Classical Chinese Poems

1 code implementation · 23 Sep 2021 · Dan Li, Shuai Wang, Jie Zou, Chang Tian, Elisha Nieuwburg, Fengyuan Sun, Evangelos Kanoulas

We create a benchmark for Paint4Poem: we train two representative text-to-image generation models, AttnGAN and MirrorGAN, and evaluate their performance regarding painting pictorial quality, painting stylistic relevance, and semantic relevance between poems and paintings. The results indicate that the models are able to generate paintings that have good pictorial quality and mimic Feng Zikai's style, but the reflection of poem semantics is limited.
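As one way to score the poem-painting semantic relevance mentioned above, here is a CLIP-similarity sketch; the benchmark's official evaluation protocol is not reproduced here, and the model choice, example poem line, and file name are all assumptions.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative metric only; not the benchmark's official protocol.
# Model choice, poem line, and file name are assumed for the example.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

poem = "A lone boat drifts beneath willows in spring rain"  # example line
image = Image.open("generated_painting.png")                # hypothetical output

inputs = processor(text=[poem], images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)
    score = torch.cosine_similarity(out.text_embeds, out.image_embeds).item()
print(f"poem-painting semantic relevance (CLIP cosine): {score:.3f}")
```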

Few-Shot Learning Text-to-Image Generation

Successive Convex Approximation Based Off-Policy Optimization for Constrained Reinforcement Learning

1 code implementation · 26 May 2021 · Chang Tian, An Liu, Guang Huang, Wu Luo

We propose a successive convex approximation based off-policy optimization (SCAOPO) algorithm to solve the general constrained reinforcement learning problem, which is formulated as a constrained Markov decision process (CMDP) in the context of average cost.
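To illustrate the flavor of one successive-convex-approximation step (the paper's actual surrogates and off-policy estimators are not reproduced here), a scipy sketch: build convex quadratic surrogates of the objective and a constraint from estimated values and gradients at the current parameters, solve the surrogate problem, and take a damped update. The gradients below are random stand-ins, and the proximal weight and step size are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of one SCA step. Objective/constraint values and gradients would
# come from off-policy estimates; here they are random stand-ins, and
# tau (proximal weight) and the step size are assumptions.
rng = np.random.default_rng(0)
dim, tau = 5, 1.0
theta_k = rng.standard_normal(dim)
g_obj = rng.standard_normal(dim)                 # estimated objective gradient
c_val, g_con = 0.3, rng.standard_normal(dim)     # estimated constraint value/grad

def surrogate_obj(theta):                        # convex quadratic surrogate
    d = theta - theta_k
    return g_obj @ d + tau * d @ d

def surrogate_con(theta):                        # scipy wants fun(x) >= 0
    d = theta - theta_k
    return -(c_val + g_con @ d + tau * d @ d)    # i.e. surrogate constraint <= 0

res = minimize(surrogate_obj, theta_k, method="SLSQP",
               constraints=[{"type": "ineq", "fun": surrogate_con}])
theta_next = theta_k + 0.5 * (res.x - theta_k)   # damped update
print(theta_next)
```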

Reinforcement Learning (RL)
