Search Results for author: Dandan Ding

Found 7 papers, 2 papers with code

Another Way to the Top: Exploit Contextual Clustering in Learned Image Coding

no code implementations21 Jan 2024 Yichi Zhang, Zhihao Duan, Ming Lu, Dandan Ding, Fengqing Zhu, Zhan Ma

Convolution and self-attention are extensively used for transform coding in learned image compression (LIC). This paper proposes an alternative, Contextual Clustering based LIC (CLIC), which instead relies primarily on clustering operations and local attention to characterize correlations and compactly represent an image.
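The clustering-plus-aggregation idea behind the snippet can be sketched roughly as follows. Everything here is illustrative, not the paper's design: the hard cosine-similarity assignment to random centers and the 50/50 blend with the cluster mean are assumptions standing in for learned components.

```python
import numpy as np

def contextual_cluster_aggregate(feats, num_clusters=4, seed=0):
    """Toy sketch: group per-pixel feature vectors by cosine similarity
    to a set of centers, then mix each vector with its cluster mean,
    so information flows within clusters rather than fixed windows.
    feats: (N, C) array of per-pixel features."""
    rng = np.random.default_rng(seed)
    centers = rng.standard_normal((num_clusters, feats.shape[1]))
    # cosine similarity between features and centers
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    c = centers / (np.linalg.norm(centers, axis=1, keepdims=True) + 1e-8)
    assign = (f @ c.T).argmax(axis=1)  # hard cluster assignment, (N,)
    out = feats.copy()
    for k in range(num_clusters):
        mask = assign == k
        if mask.any():
            # aggregate: blend each member with its cluster mean
            out[mask] = 0.5 * feats[mask] + 0.5 * feats[mask].mean(axis=0)
    return out, assign
```

In the paper this replaces convolution/self-attention as the correlation-modeling primitive; here the centers are random and the blend weight is fixed purely for brevity.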

Tasks: Clustering, Image Compression, +3

Lossless Point Cloud Attribute Compression Using Cross-scale, Cross-group, and Cross-color Prediction

no code implementations22 Mar 2023 Jianqiang Wang, Dandan Ding, Zhan Ma

To this end, we extensively exploit cross-scale, cross-group, and cross-color correlations of point cloud attributes to ensure accurate probability estimation and thus high coding efficiency.
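The three correlation sources feed a context model that drives the entropy coder. A minimal sketch of that idea, with everything assumed: the single linear layer `W`, the feature sizes, and the symbol alphabet stand in for whatever network and residual representation the paper actually uses.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attribute_symbol_probs(parent_feat, group_feat, color_feat, W):
    """Hypothetical context model: concatenate cross-scale (parent node),
    cross-group (already-decoded neighbor groups) and cross-color
    (already-decoded channels) features, then map them to a categorical
    distribution over attribute residual symbols with one linear layer W.
    The resulting probabilities condition an arithmetic coder; accurate
    probabilities mean fewer bits for lossless coding."""
    ctx = np.concatenate([parent_feat, group_feat, color_feat])
    return softmax(ctx @ W)
```

Because the coding is lossless, all the gain comes from how well these conditional probabilities match the true symbol statistics.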

Tasks: Attribute

Dynamic Point Cloud Geometry Compression Using Multiscale Inter Conditional Coding

no code implementations28 Jan 2023 Jianqiang Wang, Dandan Ding, Hao Chen, Zhan Ma

This work extends the Multiscale Sparse Representation (MSR) framework developed for static Point Cloud Geometry Compression (PCGC) to support the dynamic PCGC through the use of multiscale inter conditional coding.
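The key move in the snippet is conditioning rather than motion compensation: the previous frame's feature at the same scale is fed in as a temporal prior when predicting current-frame occupancy. A toy sketch, where the linear fusion weights `w_cur`, `w_prev`, `b` are placeholder assumptions for a learned fusion network over sparse multiscale features:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def occupancy_probs(cur_feat, prev_feat, w_cur, w_prev, b):
    """Toy inter conditional coding sketch: instead of explicitly
    motion-compensating the previous frame, its same-scale feature is
    used directly as a temporal condition when predicting per-voxel
    occupancy of the current frame. The probabilities then drive the
    entropy coder for the geometry bitstream.
    cur_feat, prev_feat: (N, C) per-voxel features."""
    logits = cur_feat @ w_cur + prev_feat @ w_prev + b
    return sigmoid(logits)  # (N,) occupancy probabilities
```

Skipping explicit motion estimation is the design point: the network learns to exploit temporal redundancy through the conditioning pathway alone.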

CARNet: Compression Artifact Reduction for Point Cloud Attribute

no code implementations17 Sep 2022 Dandan Ding, Junzhe Zhang, Jianqiang Wang, Zhan Ma

A learning-based adaptive loop filter is developed for the Geometry-based Point Cloud Compression (G-PCC) standard to reduce attribute compression artifacts.
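The general shape of an in-loop attribute filter can be illustrated in a few lines. This is only a 1-D stand-in under stated assumptions: the fixed `kernel` and scalar blending weight `alpha` replace the learned network and adaptive control the paper describes.

```python
import numpy as np

def loop_filter(decoded, kernel, alpha):
    """Toy in-loop filter sketch: a small filter (learned, in the paper)
    predicts a smoothed version of the decoded attribute signal, and a
    blending weight alpha trades it off against the unfiltered signal,
    mimicking how an adaptive loop filter suppresses coding artifacts.
    decoded: 1-D attribute signal along a traversal order."""
    correction = np.convolve(decoded, kernel, mode="same")
    return (1 - alpha) * decoded + alpha * correction
```

With `alpha = 0` the filter is bypassed, which is the degenerate case an encoder would pick when filtering hurts; a real adaptive filter selects such parameters per region.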

Tasks: Attribute

Decomposition, Compression, and Synthesis (DCS)-based Video Coding: A Neural Exploration via Resolution-Adaptive Learning

no code implementations1 Dec 2020 Ming Lu, Tong Chen, Dandan Ding, Fengqing Zhu, Zhan Ma

Inspired by the fact that retinal cells segregate the visual scene into different attributes (e.g., spatial details, temporal motion) for separate neuronal processing, we propose to first decompose the input video into spatial texture frames (STFs) at the native spatial resolution, which preserve rich spatial details, and temporal motion frames (TMFs) at a lower spatial resolution, which retain motion smoothness; then compress them together using any popular video coder; and finally synthesize the decoded STFs and TMFs for high-fidelity video reconstruction at the native input resolution.
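The decompose/synthesize pipeline can be sketched as below. The keyframe-style STF selection, the factor-2 striding, and the nearest-neighbour upsampling are all illustrative assumptions; the paper learns the resolution-adaptive mappings, and the compression stage in between is omitted.

```python
import numpy as np

def decompose(frames, stf_interval=4, scale=2):
    """Toy DCS sketch: keep every stf_interval-th frame at native
    resolution (STF, rich spatial detail) and downsample the remaining
    frames by `scale` (TMF, motion at reduced resolution)."""
    stf, tmf = [], []
    for i, f in enumerate(frames):
        if i % stf_interval == 0:
            stf.append((i, f))
        else:
            tmf.append((i, f[::scale, ::scale]))
    return stf, tmf

def synthesize(stf, tmf, shape, scale=2):
    """Upsample decoded TMFs back to native resolution (nearest
    neighbour here; the paper learns this synthesis) and interleave
    them with the decoded STFs in display order."""
    out = {i: f for i, f in stf}
    for i, f in tmf:
        up = np.repeat(np.repeat(f, scale, axis=0), scale, axis=1)
        out[i] = up[:shape[0], :shape[1]]
    return [out[i] for i in sorted(out)]
```

The design point is that the bit savings come from coding most frames at reduced resolution, while the STFs anchor the spatial detail that the synthesis stage restores.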

Tasks: Motion Compensation, Super-Resolution, +2

Multiscale Point Cloud Geometry Compression

3 code implementations7 Nov 2020 Jianqiang Wang, Dandan Ding, Zhu Li, Zhan Ma

Recent years have witnessed the growth of point cloud-based applications because of their realistic and fine-grained representation of 3D objects and scenes.

Tasks: Attribute
