RGB-D semantic segmentation methods conventionally use two independent encoders to extract features from the RGB and depth data. However, an effective fusion mechanism to bridge the encoders is lacking, which prevents the complementary information in the two modalities from being fully exploited. This paper proposes a novel bottom-up interactive fusion structure to model the interdependencies between the encoders. The structure introduces an interaction stream that interconnects the encoders: it not only progressively aggregates modality-specific features from the encoders but also computes complementary features for them. To instantiate this structure, the paper proposes a residual fusion block (RFB) to formulate the interdependencies between the encoders. The RFB consists of two residual units and one fusion unit with a gate mechanism. It learns complementary features for the modality-specific encoders and extracts modality-specific features as well as cross-modal features. Based on the RFB, the paper presents RFBNet, a deep multimodal network for RGB-D semantic segmentation. Experiments on two datasets demonstrate the effectiveness of modeling the interdependencies and show that RFBNet achieves state-of-the-art performance.
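To make the described structure concrete, below is a minimal PyTorch sketch of one plausible form of the residual fusion block. It is an illustration under stated assumptions, not the paper's implementation: the class names (`ResidualUnit`, `ResidualFusionBlock`), the channel-preserving convolutions, the sigmoid gates, and the way complementary features are fed back to each encoder are all assumptions filled in for illustration.

```python
import torch
import torch.nn as nn


class ResidualUnit(nn.Module):
    """A standard residual unit (assumption: the paper's two
    modality-specific residual units resemble this common design)."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Identity shortcut plus learned residual.
        return self.relu(x + self.body(x))


class ResidualFusionBlock(nn.Module):
    """Hypothetical RFB: two modality-specific residual units plus a
    gated fusion unit that aggregates their outputs into the interaction
    stream and returns complementary features to each encoder."""

    def __init__(self, channels):
        super().__init__()
        self.rgb_unit = ResidualUnit(channels)
        self.depth_unit = ResidualUnit(channels)
        # Gates produce per-pixel, per-channel weights in [0, 1] that
        # modulate each modality's contribution to the fused features.
        self.rgb_gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.depth_gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, rgb, depth, interaction):
        # Modality-specific feature extraction.
        rgb = self.rgb_unit(rgb)
        depth = self.depth_unit(depth)
        # Gated aggregation of modality-specific features, accumulated
        # into the interaction stream (bottom-up, progressive).
        fused = self.fuse(self.rgb_gate(rgb) * rgb + self.depth_gate(depth) * depth)
        interaction = interaction + fused
        # Complementary cross-modal features fed back to both encoders.
        return rgb + interaction, depth + interaction, interaction


# Usage sketch: one fusion stage at a given encoder resolution.
block = ResidualFusionBlock(channels=64)
rgb_feat = torch.randn(1, 64, 60, 80)
depth_feat = torch.randn(1, 64, 60, 80)
stream = torch.zeros(1, 64, 60, 80)
rgb_feat, depth_feat, stream = block(rgb_feat, depth_feat, stream)
```

Stacking one such block per encoder stage would realize the bottom-up interactive fusion the abstract describes: the interaction stream accumulates cross-modal features stage by stage, while each encoder continues to refine its own modality-specific representation.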