no code implementations • 14 Mar 2024 • Haohan Weng, Danqing Huang, Yu Qiao, Zheng Hu, Chin-Yew Lin, Tong Zhang, C. L. Philip Chen
In this paper, we present Desigen, an automatic template creation pipeline which generates background images as well as harmonious layout elements over the background.
no code implementations • 8 Feb 2024 • Qianchen Mao, Qiang Li, Bingshu Wang, Yongjun Zhang, Tao Dai, C. L. Philip Chen
To tackle this challenge, we propose SpirDet, a novel approach for efficient detection of infrared small targets.
no code implementations • 18 Dec 2023 • Chengyuan Zhu, Yiyuan Yang, Kaixiang Yang, Haifeng Zhang, Qinmin Yang, C. L. Philip Chen
This refinement is crucial in effectively identifying genuine threats to pipelines, thus enhancing the safety of energy transportation.
no code implementations • 12 Oct 2023 • Haohan Weng, Tianyu Yang, Jianan Wang, Yu Li, Tong Zhang, C. L. Philip Chen, Lei Zhang
Large image diffusion models enable novel view synthesis with high quality and excellent zero-shot capability.
1 code implementation • 7 Sep 2023 • Jiatai Lin, Guoqiang Han, Xuemiao Xu, Changhong Liang, Tien-Tsin Wong, C. L. Philip Chen, Zaiyi Liu, Chu Han
Class activation mapping (CAM), a visualization technique for interpreting deep learning models, is now commonly used for weakly supervised semantic segmentation (WSSS) and object localization (WSOL).
Object Localization • Weakly Supervised Semantic Segmentation • +1
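The CAM construction this entry refers to can be sketched in a few lines. This follows the standard formulation (the last conv layer's feature maps weighted by the target class's classifier weights); all array shapes and names here are illustrative, not the paper's code:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Standard CAM: weight the final conv feature maps by the
    classifier weights of the target class, then normalize.

    feature_maps: (C, H, W) activations from the last conv layer
    fc_weights:   (num_classes, C) weights of the final linear classifier
    """
    # Channel-wise weighted sum over the C feature maps -> (H, W) heatmap
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=([0], [0]))
    cam = np.maximum(cam, 0)          # keep positive class evidence only
    if cam.max() > 0:
        cam = cam / cam.max()         # normalize to [0, 1] for visualization
    return cam

feats = np.random.rand(8, 4, 4)       # toy feature maps (C=8, H=W=4)
w = np.random.rand(10, 8)             # toy classifier weights (10 classes)
heatmap = class_activation_map(feats, w, class_idx=3)
```

The resulting heatmap is what WSSS/WSOL pipelines typically threshold to obtain pseudo-masks or bounding boxes.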
no code implementations • 20 Aug 2023 • Yunlu Yan, Chun-Mei Feng, Mang Ye, WangMeng Zuo, Ping Li, Rick Siow Mong Goh, Lei Zhu, C. L. Philip Chen
Concretely, FedCSD introduces a class prototype similarity distillation to align the local logits with the refined global logits that are weighted by the similarity between local logits and the global prototype.
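The similarity-weighted distillation described above can be sketched as follows. The paper's exact formulation is not given in this excerpt, so `csd_loss`, the cosine-similarity weighting, and the mapping of similarity into [0, 1] are all illustrative assumptions, not FedCSD's actual equations:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def csd_loss(local_logits, global_logits, global_proto):
    # Assumed scheme: weight each sample's global logits by how similar
    # the local prediction is to the global class prototype (cosine
    # similarity), then distill local logits toward the weighted target.
    sim = (local_logits * global_proto).sum(-1) / (
        np.linalg.norm(local_logits, axis=-1) * np.linalg.norm(global_proto) + 1e-8)
    w = (sim + 1.0) / 2.0                       # map cosine [-1, 1] -> [0, 1]
    target = softmax(w[:, None] * global_logits)
    pred = softmax(local_logits)
    # Cross-entropy between the refined global target and the local prediction
    return -(target * np.log(pred + 1e-8)).sum(-1).mean()

rng = np.random.default_rng(0)
local = rng.standard_normal((8, 5))             # toy local client logits
global_l = rng.standard_normal((8, 5))          # toy aggregated global logits
proto = rng.standard_normal(5)                  # toy global class prototype
loss = csd_loss(local, global_l, proto)
```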
no code implementations • 25 May 2023 • Jian-Nan Su, Min Gan, Guang-Yong Chen, Wenzhong Guo, C. L. Philip Chen
Based on these findings, we introduce a concise yet effective soft thresholding operation to obtain high-similarity-pass attention (HSPA), which is beneficial for generating a more compact and interpretable distribution.
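Soft thresholding itself is a standard operation, and its sparsifying effect on attention weights can be shown in a minimal sketch. The `sparse_attention` wrapper and the renormalization step are illustrative assumptions, not the paper's HSPA module:

```python
import numpy as np

def soft_threshold(scores, t):
    """Standard soft thresholding: shrink magnitudes by t, zero out the rest."""
    return np.sign(scores) * np.maximum(np.abs(scores) - t, 0.0)

def sparse_attention(scores, t):
    # Keep only high-similarity entries, then renormalize the survivors,
    # giving a more compact distribution than a dense softmax would.
    kept = np.maximum(soft_threshold(scores, t), 0.0)  # weights are non-negative
    total = kept.sum()
    return kept / total if total > 0 else kept

weights = sparse_attention(np.array([0.9, 0.2, 0.7, 0.1]), t=0.3)
# Low-similarity entries (0.2 and 0.1) are pruned to exactly zero.
```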
no code implementations • 12 May 2023 • Haiqi Liu, C. L. Philip Chen, Xinrong Gong, Tong Zhang
Recognizing novel sub-categories with scarce samples is an essential and challenging research topic in computer vision.
no code implementations • 3 Apr 2023 • Guang-Yong Chen, Yong-Hang Yu, Min Gan, C. L. Philip Chen, Wenzhong Guo
Random functional-linked types of neural networks (RFLNNs), e.g., the extreme learning machine (ELM) and the broad learning system (BLS), avoid a time-consuming training process and offer an alternative way of learning in a deep structure.
no code implementations • 1 Apr 2023 • Chunyu Lei, C. L. Philip Chen, Jifeng Guo, Tong Zhang
Third, the TSMS feature fusion layer is proposed to extract more effective multi-scale features through the integration of CF layers and CE layers.
1 code implementation • 24 Feb 2023 • Tianpeng Deng, Yanqi Huang, Guoqiang Han, Zhenwei Shi, Jiatai Lin, Qi Dou, Zaiyi Liu, Xiao-jing Guo, C. L. Philip Chen, Chu Han
In this paper, we propose a universal and lightweight federated learning framework, named Federated Deep-Broad Learning (FedDBL), to achieve superior classification performance with limited training samples and only one-round communication.
1 code implementation • 2 Dec 2022 • Jian-Nan Su, Min Gan, Guang-Yong Chen, Jia-Li Yin, C. L. Philip Chen
Utilizing this finding, we propose a Global Learnable Attention (GLA) that adaptively modifies the similarity scores of non-local textures during training, instead of using only a fixed similarity scoring function such as the dot product.
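The contrast between a fixed and a learnable similarity score can be sketched with a bilinear form. GLA's actual parameterization is not specified in this excerpt; the matrix `W` below is an illustrative stand-in for whatever learnable component reshapes the scores during training:

```python
import numpy as np

rng = np.random.default_rng(0)

def fixed_similarity(q, k):
    return q @ k.T                  # standard, frozen dot-product scoring

def learnable_similarity(q, k, W):
    # A learnable bilinear form q W k^T: training can reshape the
    # similarity landscape rather than keeping it fixed.
    return q @ W @ k.T

q = rng.standard_normal((4, 8))     # toy query features
k = rng.standard_normal((16, 8))    # toy non-local key features
W = np.eye(8)                       # initialized to recover the dot product

# At initialization (W = I) the learnable score equals the fixed one;
# gradient updates to W then let the scores diverge from it.
scores = learnable_similarity(q, k, W)
```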
1 code implementation • 20 Apr 2022 • Ling Huang, Can-Rong Guan, Zhen-Wei Huang, Yuefang Gao, Yingjie Kuang, Chang-Dong Wang, C. L. Philip Chen
Recently, Deep Neural Networks (DNNs) have been widely introduced into Collaborative Filtering (CF) to produce more accurate recommendation results, due to their capability of capturing the complex nonlinear relationships between items and users. However, DNN-based models usually suffer from high computational complexity, i.e., they consume very long training times and store a huge number of trainable parameters.
no code implementations • 15 Jan 2022 • Jibao Qiu, C. L. Philip Chen, Tong Zhang
In this paper, we present a simple multi-task framework for SMER, which incorporates the emotion recognition task with other emotion-related auxiliary tasks derived from the intrinsic structure of the music.
no code implementations • 15 Jan 2022 • Tong Zhang, Haohan Weng, Ke Yi, C. L. Philip Chen
Convolutional Neural Networks (CNNs) have exhibited their great power in a variety of vision tasks.
no code implementations • 13 Jan 2022 • Bingshu Wang, Jiangbin Zheng, C. L. Philip Chen
Representative algorithms are described in detail, and some typical accompanying techniques are outlined briefly.
no code implementations • 15 Dec 2021 • ChunYang Zhang, Hongyu Yao, C. L. Philip Chen, Yuena Lin
With the rise of contrastive learning, unsupervised graph representation learning has been booming recently, even surpassing the supervised counterparts in some machine learning tasks.
no code implementations • 15 Nov 2021 • Zixiang Ding, Yaran Chen, Nannan Li, Dongbin Zhao, C. L. Philip Chen
Moreover, multi-scale feature fusion and knowledge embedding are proposed to improve the performance of BCNN with shallow topology.
no code implementations • 27 Feb 2021 • Wenrui Gan, Zhulin Liu, C. L. Philip Chen, Tong Zhang
In general, the main contributions of this paper include: (1) proposing SiLa learning, which improves the performance of common models without increasing test-time parameters; (2) comparing SiLa with DML and proving that SiLa can improve the generalization of the model; (3) applying SiLa to dynamic neural networks, demonstrating that SiLa can be used with various types of network structures.
2 code implementations • 8 Dec 2020 • Hui Tang, Xiatian Zhu, Ke Chen, Kui Jia, C. L. Philip Chen
To address this issue, we are motivated by a UDA assumption of structural similarity across domains, and propose to directly uncover the intrinsic target discrimination via constrained clustering, where we constrain the clustering solutions using structural source regularization that hinges on the very same assumption.
no code implementations • 22 Mar 2020 • Jiamiao Xu, Fangzhao Wang, Qinmu Peng, Xinge You, Shuo Wang, Xiao-Yuan Jing, C. L. Philip Chen
Furthermore, recent low-rank modeling provides a satisfactory solution for data contaminated by noise that follows a predefined distribution assumption, such as a Gaussian or Laplacian distribution.
no code implementations • 18 Jan 2020 • Zixiang Ding, Yaran Chen, Nannan Li, Dongbin Zhao, Zhiquan Sun, C. L. Philip Chen
In this paper, we propose Broad Neural Architecture Search (BNAS) where we elaborately design broad scalable architecture dubbed Broad Convolutional Neural Network (BCNN) to solve the above issue.
2 code implementations • 24 Dec 2019 • Yuxin Wen, Jiehong Lin, Ke Chen, C. L. Philip Chen, Kui Jia
Regularizing the targeted attack loss with our proposed geometry-aware objectives results in our proposed method, Geometry-Aware Adversarial Attack ($GeoA^3$).
no code implementations • 17 Oct 2019 • Hufei Zhu, Zhulin Liu, C. L. Philip Chen, Yanyang Liang
Specifically, when q > k, the proposed algorithm computes only a k × k matrix inverse, instead of the q × q matrix inverse in the existing algorithm.
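Savings of this kind typically rest on the standard "push-through" ridge identity, which lets one replace the q × q inverse with a k × k inverse when q > k. The identity below is a general linear-algebra fact shown numerically; it is not claimed to be this paper's exact derivation:

```python
import numpy as np

rng = np.random.default_rng(0)
q, k, lam = 50, 10, 1e-2                 # tall problem: q > k
A = rng.standard_normal((q, k))

# Push-through identity:
#   A^T (A A^T + lam I_q)^{-1}  ==  (A^T A + lam I_k)^{-1} A^T
# Left side inverts a q x q matrix; right side only a k x k matrix.
big = A.T @ np.linalg.inv(A @ A.T + lam * np.eye(q))      # q x q inverse
small = np.linalg.inv(A.T @ A + lam * np.eye(k)) @ A.T    # k x k inverse
```

Since matrix inversion costs roughly cubic time in the dimension, inverting the k × k Gram matrix instead of the q × q one is far cheaper when q ≫ k.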
no code implementations • 7 Sep 2019 • Wenjie Shi, Shiji Song, Cheng Wu, C. L. Philip Chen
Unlike existing policy gradient methods, which employ a single actor-critic and cannot achieve satisfactory tracking control accuracy or stable learning, our proposed algorithm attains high tracking control accuracy for AUVs and stable learning by applying a hybrid actors-critics architecture, in which multiple actors and critics are trained to learn a deterministic policy and an action-value function, respectively.
1 code implementation • ICIP 2019 • Bingshu Wang, C. L. Philip Chen
This paper proposes an effective method to remove shadows from single document images, which consists of two stages: shadow detection and shadow removal.
no code implementations • 17 May 2019 • Zhizhong Han, Xiyang Wang, Chi-Man Vong, Yu-Shen Liu, Matthias Zwicker, C. L. Philip Chen
Then, the content and spatial information of each pair of view nodes are encoded by a novel spatial pattern correlation, where the correlation is computed among latent semantic patterns.
no code implementations • 19 Apr 2018 • Jiamiao Xu, Shujian Yu, Xinge You, Mengjun Leng, Xiao-Yuan Jing, C. L. Philip Chen
We present a novel cross-view classification algorithm where the gallery and probe data come from different views.
no code implementations • 3 Mar 2018 • Hongwei Ge, Mingde Zhao, Liang Sun, Zhen Wang, Guozhen Tan, Qiang Zhang, C. L. Philip Chen
This paper proposes a many-objective optimization algorithm with two interacting processes: cascade clustering and reference point incremental learning (CLIA).
no code implementations • IEEE Transactions on Neural Networks and Learning Systems 2017 • C. L. Philip Chen, Zhulin Liu
The BLS is established in the form of a flat network, where the original inputs are transferred and placed as “mapped features” in feature nodes, and the structure is expanded in the wide sense through the “enhancement nodes.” Incremental learning algorithms are developed for fast remodeling in broad expansion, without a retraining process, when the network is deemed to need expansion.
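The flat structure described above (mapped feature nodes, enhancement nodes, and a closed-form output layer) can be sketched minimally. Node counts, random-weight initialization, and the ridge solve below are illustrative choices, not the paper's exact algorithm, and the incremental-expansion machinery is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def bls_forward(X, Y, n_map=20, n_enh=40, lam=1e-3):
    """Minimal broad-learning-style sketch: X is (N, d), Y is (N, c)."""
    d = X.shape[1]
    # 1) Mapped features: random linear maps of the original inputs
    Wf = rng.standard_normal((d, n_map))
    Z = X @ Wf
    # 2) Enhancement nodes: nonlinear expansion of the mapped features
    We = rng.standard_normal((n_map, n_enh))
    H = np.tanh(Z @ We)
    # 3) Flat output layer over [Z | H], solved in closed form by ridge
    #    regression -- no iterative deep training is needed
    A = np.hstack([Z, H])
    W_out = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)
    return A @ W_out, W_out

X = rng.standard_normal((100, 5))
Y = rng.standard_normal((100, 2))
pred, W_out = bls_forward(X, Y)
```

Because only the output weights are solved for, adding nodes “in the wide sense” amounts to appending columns to `A` and updating the solution, which is what the incremental algorithms exploit.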
no code implementations • 24 May 2017 • Zhulin Liu, C. L. Philip Chen
A new Hardy space approach to the Dirichlet-type problem, based on Tikhonov regularization and reproducing kernel Hilbert spaces, is discussed in this paper; it turns out to be a typical extremal problem on the upper half of the complex plane.