Search Results for author: Yiming Lin

Found 8 papers, 4 papers with code

RoI Tanh-polar Transformer Network for Face Parsing in the Wild

2 code implementations • 4 Feb 2021 • Yiming Lin, Jie Shen, Yujiang Wang, Maja Pantic

Face parsing aims to predict pixel-wise labels for facial components of a target face in an image.

Face Parsing

FP-Age: Leveraging Face Parsing Attention for Facial Age Estimation in the Wild

1 code implementation • 21 Jun 2021 • Yiming Lin, Jie Shen, Yujiang Wang, Maja Pantic

To evaluate our method on in-the-wild data, we also introduce a new challenging large-scale benchmark called IMDB-Clean.

Age Estimation • Constrained Clustering +1

MobiFace: A Novel Dataset for Mobile Face Tracking in the Wild

1 code implementation • 24 May 2018 • Yiming Lin, Shiyang Cheng, Jie Shen, Maja Pantic

We evaluate 36 state-of-the-art trackers, including facial landmark trackers, generic object trackers, and trackers that we have fine-tuned or improved.

Face Detection • Object Tracking +1

Deep Polarization Imaging for 3D shape and SVBRDF Acquisition

no code implementations • CVPR 2021 • Valentin Deschaintre, Yiming Lin, Abhijeet Ghosh

We present a novel method for efficient acquisition of shape and spatially varying reflectance of 3D objects using polarization cues.

Self-supervised Video-centralised Transformer for Video Face Clustering

no code implementations • 24 Mar 2022 • Yujiang Wang, Mingzhi Dong, Jie Shen, Yiming Luo, Yiming Lin, Pingchuan Ma, Stavros Petridis, Maja Pantic

We also investigate face clustering in egocentric videos, a fast-emerging field that prior work on face clustering has not yet studied.

Clustering • Contrastive Learning +1

FAN-Trans: Online Knowledge Distillation for Facial Action Unit Detection

no code implementations • 11 Nov 2022 • Jing Yang, Jie Shen, Yiming Lin, Yordan Hristov, Maja Pantic

Our model consists of a hybrid network of convolution and transformer blocks to learn per-AU features and to model AU co-occurrences.

Action Unit Detection • Face Alignment +2

Context Does Matter: End-to-end Panoptic Narrative Grounding with Deformable Attention Refined Matching Network

no code implementations • 25 Oct 2023 • Yiming Lin, Xiao-Bo Jin, Qiufeng Wang, Kaizhu Huang

The current state-of-the-art methods first refine the phrase representation by aggregating the $k$ most similar image pixels, and then match the refined text representations against the pixels of the image feature map to generate segmentation results.

Visual Grounding
