SDDNet: Style-guided Dual-layer Disentanglement Network for Shadow Detection

17 Aug 2023 · Runmin Cong, Yuchen Guan, Jinpeng Chen, Wei Zhang, Yao Zhao, Sam Kwong

Despite significant progress in shadow detection, current methods still struggle with the adverse impact of background color, which may lead to errors when shadows appear on complex backgrounds. Drawing inspiration from the human visual system, we treat the input shadow image as a composition of a background layer and a shadow layer, and design a Style-guided Dual-layer Disentanglement Network (SDDNet) to model these layers independently. To achieve this, we devise a Feature Separation and Recombination (FSR) module that decomposes multi-level features into shadow-related and background-related components, supervising each component separately while preserving information integrity and avoiding redundancy through a reconstruction constraint. Moreover, we propose a Shadow Style Filter (SSF) module to guide the feature disentanglement by focusing on style differentiation and uniformization. With these two modules and our overall pipeline, our model effectively minimizes the detrimental effects of background color, yielding superior performance on three public datasets with a real-time inference speed of 32 FPS.
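The abstract does not spell out the internals of the FSR module, so the following is only a minimal sketch of the general idea it describes: split a feature map into shadow-related and background-related components and recombine them so that a reconstruction constraint can penalize information loss. The branch layers, the channel-statistics notion of "style" used for the SSF intuition, and all names below are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FSRSketch(nn.Module):
    """Illustrative sketch of the feature separation-and-recombination idea.

    Two hypothetical conv branches produce shadow-related and
    background-related components; a recombination layer maps them back to
    the input feature space so an L1 reconstruction loss can enforce
    information completeness and discourage redundancy.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.shadow_branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.background_branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Recombination layer used for the reconstruction constraint.
        self.recombine = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, feat: torch.Tensor):
        shadow_feat = self.shadow_branch(feat)
        background_feat = self.background_branch(feat)
        recon = self.recombine(torch.cat([shadow_feat, background_feat], dim=1))
        return shadow_feat, background_feat, recon


def channel_style_stats(feat: torch.Tensor):
    """One common proxy for feature 'style': per-channel mean and std.

    This is only an assumption about what style differentiation /
    uniformization could operate on; the paper's SSF module may use a
    different formulation.
    """
    mean = feat.mean(dim=(2, 3))
    std = feat.std(dim=(2, 3))
    return mean, std


if __name__ == "__main__":
    x = torch.randn(1, 64, 64, 64)
    shadow_feat, background_feat, recon = FSRSketch(64)(x)
    recon_loss = F.l1_loss(recon, x)  # reconstruction constraint on the split
    mu, sigma = channel_style_stats(shadow_feat)
    print(shadow_feat.shape, background_feat.shape, recon_loss.item(), mu.shape)
```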


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Shadow Detection | CUHK-Shadow | SDDNet (MM 2023) (512x512) | BER | 7.65 | #2 |
| Shadow Detection | CUHK-Shadow | SDDNet (MM 2023) (256x256) | BER | 8.66 | #8 |
| Shadow Detection | SBU / SBU-Refine | SDDNet (MM 2023) (512x512) | BER | 4.86 | #1 |
| Shadow Detection | SBU / SBU-Refine | SDDNet (MM 2023) (256x256) | BER | 5.39 | #4 |
