Search Results for author: Zibo Meng

Found 19 papers, 3 papers with code

Pik-Fix: Restoring and Colorizing Old Photos

no code implementations • 4 May 2022 • Runsheng Xu, Zhengzhong Tu, Yuanqi Du, Xiaoyu Dong, Jinlong Li, Zibo Meng, Jiaqi Ma, Alan Bovik, Hongkai Yu

Our proposed framework consists of three modules: a restoration sub-network that removes degradations, a similarity sub-network that performs color histogram matching and color transfer, and a colorization sub-network that learns to predict the chroma elements of images conditioned on chromatic reference signals.

Colorization
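The color-transfer step described above relies on matching color histograms between a degraded image and a reference. A minimal, illustrative sketch of classic sorted-rank histogram matching (not the paper's actual similarity sub-network) could look like this:

```python
import numpy as np

def match_histogram(source, reference):
    """Map each value of `source` so its empirical distribution
    matches `reference` (classic sorted-rank histogram matching)."""
    src = source.ravel()
    ref = np.sort(reference.ravel())
    # Rank of each source pixel among its own sorted values.
    ranks = np.argsort(np.argsort(src))
    # Look up the reference value at the same normalized rank.
    idx = (ranks * (ref.size - 1) // max(src.size - 1, 1)).astype(int)
    return ref[idx].reshape(source.shape)

# Toy example: pull a dark channel toward a brighter reference.
rng = np.random.default_rng(0)
dark = rng.uniform(0.0, 0.5, size=(4, 4))
bright = rng.uniform(0.5, 1.0, size=(4, 4))
matched = match_histogram(dark, bright)
```

In practice this is applied per color channel; the learned colorization sub-network then refines the result rather than relying on the raw transfer alone.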

ROMNet: Renovate the Old Memories

no code implementations • 5 Feb 2022 • Runsheng Xu, Zhengzhong Tu, Yuanqi Du, Xiaoyu Dong, Jinlong Li, Zibo Meng, Jiaqi Ma, Hongkai Yu

Renovating the memories in old photos is an intriguing research topic in computer vision.

Colorization

GIA-Net: Global Information Aware Network for Low-light Imaging

no code implementations • 14 Sep 2020 • Zibo Meng, Runsheng Xu, Chiu Man Ho

In this paper, we propose a global information aware (GIA) module, which is capable of extracting and integrating the global information into the network to improve the performance of low-light imaging.
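The snippet above describes extracting global information and injecting it back into the network. As a hedged sketch of this general idea (a hypothetical squeeze-and-gate module, not the paper's exact GIA design): average-pool the feature map to a global descriptor, pass it through a small gate, and rescale each channel.

```python
import numpy as np

def global_info_aware(features, w1, w2):
    """Hypothetical global-information module: squeeze the feature
    map (C, H, W) to a global descriptor by average pooling, pass it
    through a tiny two-layer gate, and rescale each channel."""
    g = features.mean(axis=(1, 2))            # global descriptor, shape (C,)
    h = np.maximum(w1 @ g, 0.0)               # ReLU bottleneck
    scale = 1.0 / (1.0 + np.exp(-(w2 @ h)))   # sigmoid channel gate
    return features * scale[:, None, None]    # broadcast gate over H, W

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 5, 5))            # 8 channels, 5x5 spatial
w1 = rng.standard_normal((4, 8))              # bottleneck weights (assumed shapes)
w2 = rng.standard_normal((8, 4))
y = global_info_aware(x, w1, w2)
```

The weight shapes and gating form here are assumptions for illustration; the point is that each spatial location is modulated by statistics of the whole image, which is what makes the module "global information aware."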

Point Adversarial Self Mining: A Simple Method for Facial Expression Recognition

no code implementations • 26 Aug 2020 • Ping Liu, Yuewei Lin, Zibo Meng, Lu Lu, Weihong Deng, Joey Tianyi Zhou, Yi Yang

In this paper, we propose a simple yet effective approach, named Point Adversarial Self Mining (PASM), to improve the recognition accuracy in facial expression recognition.

Adversarial Attack • Data Augmentation +2

Omni-supervised Facial Expression Recognition via Distilled Data

no code implementations • 18 May 2020 • Ping Liu, Yunchao Wei, Zibo Meng, Weihong Deng, Joey Tianyi Zhou, Yi Yang

However, the performance of the current state-of-the-art facial expression recognition (FER) approaches is directly related to the labeled data for training.

Facial Expression Recognition

Residual Channel Attention Generative Adversarial Network for Image Super-Resolution and Noise Reduction

no code implementations • 28 Apr 2020 • Jie Cai, Zibo Meng, Chiu Man Ho

In this paper, we propose a Residual Channel Attention Generative Adversarial Network (RCA-GAN) to solve these problems.

Image Super-Resolution

Feature-level and Model-level Audiovisual Fusion for Emotion Recognition in the Wild

no code implementations • 6 Jun 2019 • Jie Cai, Zibo Meng, Ahmed Shehab Khan, Zhiyuan Li, James O'Reilly, Shizhong Han, Ping Liu, Min Chen, Yan Tong

In this paper, we proposed two strategies to fuse information extracted from different modalities, i.e., audio and visual.

Emotion Recognition
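The two fusion strategies named in the title can be illustrated with a minimal sketch (illustrative only, not the paper's networks): feature-level fusion combines modality features before classification, while model-level fusion combines each modality's predictions.

```python
import numpy as np

def feature_level_fusion(audio_feat, visual_feat):
    """Feature-level fusion: concatenate modality feature vectors
    into one representation before a shared classifier."""
    return np.concatenate([audio_feat, visual_feat])

def model_level_fusion(audio_probs, visual_probs, w=0.5):
    """Model-level fusion: combine per-modality class posteriors,
    here as a simple weighted average (weight w is an assumption)."""
    return w * audio_probs + (1.0 - w) * visual_probs

# Toy class posteriors over three emotion classes.
audio = np.array([0.2, 0.5, 0.3])
visual = np.array([0.6, 0.3, 0.1])
fused_feat = feature_level_fusion(audio, visual)   # length-6 vector
fused_pred = model_level_fusion(audio, visual)     # averaged posteriors
```

Feature-level fusion lets the classifier learn cross-modal interactions; model-level fusion is more robust when one modality is missing or noisy.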

Identity-Free Facial Expression Recognition using conditional Generative Adversarial Network

no code implementations • 19 Mar 2019 • Jie Cai, Zibo Meng, Ahmed Shehab Khan, Zhiyuan Li, James O'Reilly, Shizhong Han, Yan Tong

A novel Identity-Free conditional Generative Adversarial Network (IF-GAN) was proposed for Facial Expression Recognition (FER) to explicitly reduce high inter-subject variations caused by identity-related facial attributes, e.g., age, race, and gender.

Facial Expression Recognition

Probabilistic Attribute Tree in Convolutional Neural Networks for Facial Expression Recognition

no code implementations • 17 Dec 2018 • Jie Cai, Zibo Meng, Ahmed Shehab Khan, Zhiyuan Li, James O'Reilly, Yan Tong

In this paper, we proposed a novel Probabilistic Attribute Tree-CNN (PAT-CNN) to explicitly deal with the large intra-class variations caused by identity-related attributes, e.g., age, race, and gender.

Facial Expression Recognition

Optimizing Filter Size in Convolutional Neural Networks for Facial Action Unit Recognition

no code implementations • CVPR 2018 • Shizhong Han, Zibo Meng, Zhiyuan Li, James O'Reilly, Jie Cai, Xiao-Feng Wang, Yan Tong

Most recently, Convolutional Neural Networks (CNNs) have shown promise for facial AU recognition, where predefined and fixed convolution filter sizes are employed.

Facial Action Unit Detection

Incremental Boosting Convolutional Neural Network for Facial Action Unit Recognition

no code implementations • NeurIPS 2016 • Shizhong Han, Zibo Meng, Ahmed Shehab Khan, Yan Tong

Experimental results on four benchmark AU databases have demonstrated that the IB-CNN yields significant improvement over the traditional CNN and the boosting CNN without incremental learning, as well as outperforming the state-of-the-art CNN-based methods in AU recognition.

Facial Action Unit Detection • Incremental Learning

Improving Speech Related Facial Action Unit Recognition by Audiovisual Information Fusion

no code implementations • 29 Jun 2017 • Zibo Meng, Shizhong Han, Ping Liu, Yan Tong

Instead of solely improving visual observations, this paper presents a novel audiovisual fusion framework, which makes the best use of visual and acoustic cues in recognizing speech-related facial AUs.

Facial Action Unit Detection

Detecting Small Signs from Large Images

no code implementations • 26 Jun 2017 • Zibo Meng, Xiaochuan Fan, Xin Chen, Min Chen, Yan Tong

Experimental results on a real-world conditioned traffic sign dataset have demonstrated the effectiveness of the proposed method in terms of detection accuracy and recall, especially for those with small sizes.

Object Detection

Listen to Your Face: Inferring Facial Action Units from Audio Channel

no code implementations • 23 Jun 2017 • Zibo Meng, Shizhong Han, Yan Tong

Different from all prior work that utilized visual observations for facial AU recognition, this paper presents a novel approach that recognizes speech-related AUs exclusively from audio signals based on the fact that facial activities are highly correlated with voice during speech.

Facial Expression Recognition via a Boosted Deep Belief Network

no code implementations • CVPR 2014 • Ping Liu, Shizhong Han, Zibo Meng, Yan Tong

A training process for facial expression recognition is usually performed sequentially in three individual stages: feature learning, feature selection, and classifier construction.

Facial Expression Recognition • feature selection
