no code implementations • 2 Oct 2024 • Jie Shen
Understanding noise tolerance of machine learning algorithms is a central quest in learning theory.
no code implementations • 29 Dec 2023 • Jie Shen, Shusen Yang, Cong Zhao, Xuebin Ren, Peng Zhao, Yuqian Yang, Qing Han, Shuaijun Wu
Intelligent equipment fault diagnosis based on Federated Transfer Learning (FTL) attracts considerable attention from both academia and industry.
no code implementations • 7 Sep 2023 • Shiheng Zhang, Jiahao Zhang, Jie Shen, Guang Lin
We present a novel optimization algorithm, element-wise relaxed scalar auxiliary variable (E-RSAV), that satisfies an unconditional energy dissipation law and exhibits improved alignment between the modified and the original energy.
no code implementations • 9 Jun 2023 • Jiahao Zhang, Shiheng Zhang, Jie Shen, Guang Lin
For an objective operator $G$, the Branch net encodes different input functions $u$ sampled at the same number of sensors, and the Trunk net evaluates the output function at any query location.
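A minimal NumPy sketch of this branch/trunk structure; the layer widths, activations, random weights, and the example input function are illustrative assumptions of ours, not the configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2, b2):
    """Two-layer perceptron with tanh activation."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

m, p = 100, 32  # number of sensors, latent width (assumed values)
# Branch net: encodes the input function u sampled at m fixed sensors.
Wb1, bb1 = rng.normal(size=(m, 64)), np.zeros(64)
Wb2, bb2 = rng.normal(size=(64, p)), np.zeros(p)
# Trunk net: evaluated at an arbitrary query location y.
Wt1, bt1 = rng.normal(size=(1, 64)), np.zeros(64)
Wt2, bt2 = rng.normal(size=(64, p)), np.zeros(p)

def deeponet(u_sensors, y):
    """Approximate G(u)(y) as a dot product of branch and trunk features."""
    b = mlp(u_sensors, Wb1, bb1, Wb2, bb2)          # shape (p,)
    t = mlp(np.atleast_1d(y), Wt1, bt1, Wt2, bt2)   # shape (p,)
    return float(b @ t)

# Example: u(x) = sin(pi x) sampled on a uniform grid, queried at y = 0.3.
xs = np.linspace(0.0, 1.0, m)
print(deeponet(np.sin(np.pi * xs), 0.3))
```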
no code implementations • 1 Jun 2023 • Shiwei Zeng, Jie Shen
The concept class of low-degree polynomial threshold functions (PTFs) plays a fundamental role in machine learning.
no code implementations • 26 Jan 2023 • Atul Dhingra, Jie Shen, Nicholas Kleene
In particular, the memory constraint rules out a large body of prior algorithms that are based on batch optimization techniques.
no code implementations • 11 Nov 2022 • Jing Yang, Jie Shen, Yiming Lin, Yordan Hristov, Maja Pantic
Our model consists of a hybrid network of convolution and transformer blocks to learn per-AU features and to model AU co-occurrences.
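A hedged PyTorch sketch of one way such a convolution + transformer hybrid could be wired, with per-AU feature tokens passed through a transformer encoder to model co-occurrences; the layer sizes, number of AUs, and overall arrangement are assumptions of ours, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AUNet(nn.Module):
    """Conv backbone -> one feature token per AU -> transformer over AU tokens."""
    def __init__(self, num_aus=12, dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # One linear head per AU turns the pooled feature into an AU token.
        self.au_heads = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_aus)])
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.relation = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.classifier = nn.Linear(dim, 1)

    def forward(self, images):                      # images: (B, 3, H, W)
        feat = self.backbone(images).flatten(1)     # (B, dim)
        tokens = torch.stack([h(feat) for h in self.au_heads], dim=1)  # (B, num_aus, dim)
        tokens = self.relation(tokens)              # model AU co-occurrences
        return self.classifier(tokens).squeeze(-1)  # (B, num_aus) per-AU logits

logits = AUNet()(torch.randn(2, 3, 112, 112))
print(logits.shape)  # torch.Size([2, 12])
```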
1 code implementation • 3 Sep 2022 • Pingchuan Ma, Yujiang Wang, Stavros Petridis, Jie Shen, Maja Pantic
In this paper, we systematically investigate the performance of state-of-the-art data augmentation approaches, temporal models and other training strategies, such as self-distillation and the use of word boundary indicators.
Ranked #2 on Lipreading on Lip Reading in the Wild (using extra training data)
no code implementations • 28 May 2022 • Shiwei Zeng, Jie Shen
Robust mean estimation is one of the most important problems in statistics: given a set of samples in $\mathbb{R}^d$ where an $\alpha$ fraction are drawn from some distribution $D$ and the rest are adversarially corrupted, we aim to estimate the mean of $D$.
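A small NumPy illustration of this setting; the corruption model and parameters are made up for the example, and the coordinate-wise median is only a simple baseline, not the estimator studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, alpha = 1000, 20, 0.9               # sample size, dimension, clean fraction
true_mean = np.ones(d)

clean = rng.normal(loc=true_mean, size=(int(alpha * n), d))
outliers = rng.normal(loc=50.0, size=(n - int(alpha * n), d))  # adversarial-style corruption
samples = np.vstack([clean, outliers])

sample_mean = samples.mean(axis=0)          # badly biased by the corrupted points
coord_median = np.median(samples, axis=0)   # simple robust baseline

print(np.linalg.norm(sample_mean - true_mean))   # large error
print(np.linalg.norm(coord_median - true_mean))  # much smaller error
```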
no code implementations • 24 Mar 2022 • Yujiang Wang, Mingzhi Dong, Jie Shen, Yiming Luo, Yiming Lin, Pingchuan Ma, Stavros Petridis, Maja Pantic
We also investigate face clustering in egocentric videos, a fast-emerging area that has not yet been studied in prior work on face clustering.
Ranked #1 on Face Clustering on EasyCom
no code implementations • 7 Jan 2022 • Daoyan Pan, Kewei Wang, Zhiheng Zhou, Xingran Liu, Jie Shen
Exercise rehabilitation is an important part of the comprehensive management of patients with diabetes, and there is a need for a comprehensive evaluation of several factors such as physical fitness, cardiovascular risk and diabetes-related disease factors.
no code implementations • 18 Oct 2021 • Rafael Poyiadzi, Jie Shen, Stavros Petridis, Yujiang Wang, Maja Pantic
We then study how the variety and number of age-groups used during training affect generalisation to unseen age-groups, and observe that increasing the number of training age-groups tends to improve apparent emotional facial expression recognition performance on unseen age-groups.
Facial Expression Recognition (FER)
1 code implementation • 9 Jul 2021 • Jacob Donley, Vladimir Tourbabin, Jung-Suk Lee, Mark Broyles, Hao Jiang, Jie Shen, Maja Pantic, Vamsi Krishna Ithapu, Ravish Mehra
In this work, we describe, evaluate and release a dataset that contains over 5 hours of multi-modal data useful for training and testing algorithms for the application of improving conversations for an AR glasses wearer.
Ranked #1 on Speech Enhancement on EasyCom
1 code implementation • 21 Jun 2021 • Yiming Lin, Jie Shen, Yujiang Wang, Maja Pantic
To evaluate our method on in-the-wild data, we also introduce a new challenging large-scale benchmark called IMDB-Clean.
Ranked #1 on Age Estimation on KANFace
no code implementations • 13 Jun 2021 • Shiwei Zeng, Jie Shen
We study the problem of crowdsourced PAC learning of threshold functions.
1 code implementation • 14 Apr 2021 • Jie Shen, Jiajun Zhou, Yunyi Xie, Shanqing Yu, Qi Xuan
In this paper, we present a novel approach to analyzing user behavior from the perspective of transaction subgraphs, which naturally casts the identity inference task as a graph classification problem and effectively avoids computation over the large-scale graph.
no code implementations • 11 Feb 2021 • Jie Shen
We further extend the algorithm and analysis to the more general and stronger nasty noise model of Bshouty et al. (2002), showing that it is still possible to achieve near-optimal noise tolerance and sample complexity in polynomial time.
2 code implementations • 4 Feb 2021 • Yiming Lin, Jie Shen, Yujiang Wang, Maja Pantic
Face parsing aims to predict pixel-wise labels for facial components of a target face in an image.
Ranked #1 on Face Parsing on iBugMask
no code implementations • 1 Jan 2021 • Jing Wang, Jie Shen, Xiaofei Ma, Andrew Arnold
Recent years have witnessed a surge of successful applications of machine reading comprehension.
no code implementations • 19 Dec 2020 • Jie Shen
Our main contribution is a Perceptron-like online active learning algorithm that runs in polynomial time, and under the conditions that the marginal distribution is isotropic log-concave and $\nu = \Omega(\epsilon)$, where $\epsilon \in (0, 1)$ is the target error rate, our algorithm PAC learns the underlying halfspace with near-optimal label complexity of $\tilde{O}\big(d \cdot \mathrm{polylog}(\frac{1}{\epsilon})\big)$ and sample complexity of $\tilde{O}\big(\frac{d}{\epsilon} \big)$.
no code implementations • 2 Nov 2020 • Shiwei Zeng, Jie Shen
We study crowdsourced PAC learning of threshold functions, where the labels are gathered from a pool of annotators some of whom may behave adversarially.
1 code implementation • 29 Sep 2020 • Pingchuan Ma, Yujiang Wang, Jie Shen, Stavros Petridis, Maja Pantic
In this work, we present the Densely Connected Temporal Convolutional Network (DC-TCN) for lip-reading of isolated words.
no code implementations • 11 Jul 2020 • Jiajun Zhou, Jie Shen, Shanqing Yu, Guanrong Chen, Qi Xuan
Graph classification, which aims to identify the category labels of graphs, plays a significant role in drug classification, toxicity detection, protein analysis, etc.
no code implementations • 7 Jul 2020 • Jie Shen
This paper concerns the problem of 1-bit compressed sensing, where the goal is to estimate a sparse signal from a few of its binary measurements.
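A short NumPy sketch of the measurement model $y = \mathrm{sign}(Ax)$ together with a simple baseline estimator (a normalized correlation followed by keeping the $s$ largest entries); this only illustrates the problem setup, with made-up dimensions, and is not the algorithm proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
d, s, m = 200, 5, 400                     # ambient dimension, sparsity, measurements

x = np.zeros(d); x[:s] = rng.normal(size=s)
x /= np.linalg.norm(x)                    # only the direction is identifiable from signs

A = rng.normal(size=(m, d))
y = np.sign(A @ x)                        # 1-bit (binary) measurements

z = A.T @ y / m                           # simple linear correlation estimator
x_hat = np.zeros(d)
top = np.argsort(np.abs(z))[-s:]          # keep the s largest entries, then renormalize
x_hat[top] = z[top]
x_hat /= np.linalg.norm(x_hat)

print(np.linalg.norm(x_hat - x))          # typically a small directional error
```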
no code implementations • 6 Jun 2020 • Jie Shen, Chicheng Zhang
We answer this question in the affirmative by designing a computationally efficient active learning algorithm with near-optimal label complexity of $\tilde{O}\big({s \log^4 \frac d \epsilon} \big)$ and noise tolerance $\eta = \Omega(\epsilon)$, where $\epsilon \in (0, 1)$ is the target error rate, under the assumption that the distribution over (uncorrupted) unlabeled examples is isotropic log-concave.
1 code implementation • 5 Jun 2020 • Yujiang Wang, Mingzhi Dong, Jie Shen, Yiming Lin, Maja Pantic
Introducing LI mechanisms improves the convolutional filter's sensitivity to semantic object boundaries.
no code implementations • NeurIPS 2020 • Chicheng Zhang, Jie Shen, Pranjal Awasthi
Even in the presence of mild label noise, i.e. $\eta$ is a small constant, this is a challenging problem, and only recently have label complexity bounds of the form $\tilde{O}\big(s \cdot \mathrm{polylog}(d, \frac{1}{\epsilon})\big)$ been established in [Zhang, 2018] for computationally efficient algorithms.
no code implementations • 14 Nov 2019 • Shiyang Cheng, Pingchuan Ma, Georgios Tzimiropoulos, Stavros Petridis, Adrian Bulat, Jie Shen, Maja Pantic
The proposed model significantly outperforms previous approaches on non-frontal views while retaining the superior performance on frontal and near frontal mouth views.
no code implementations • 11 Oct 2019 • Bingnan Luo, Jie Shen, Shiyang Cheng, Yujiang Wang, Maja Pantic
Specifically, we learn the shape prior from our dataset using VAE-GAN, and leverage the pre-trained encoder and discriminator to regularise the training of SegNet.
no code implementations • CVPR 2020 • Yujiang Wang, Mingzhi Dong, Jie Shen, Yang Wu, Shiyang Cheng, Maja Pantic
To the best of our knowledge, this is the first work to use reinforcement learning for online key-frame decision in dynamic video segmentation, and also the first to apply it to face videos.
1 code implementation • 9 Jan 2019 • Jean Kossaifi, Robert Walecki, Yannis Panagakis, Jie Shen, Maximilian Schmitt, Fabien Ringeval, Jing Han, Vedhas Pandit, Antoine Toisoul, Bjorn Schuller, Kam Star, Elnar Hajiyev, Maja Pantic
Natural human-computer interaction and audio-visual human behaviour sensing systems that achieve robust performance in-the-wild are needed more than ever, as digital devices are increasingly becoming an indispensable part of our lives.
no code implementations • 24 Jul 2018 • Yujiang Wang, Bingnan Luo, Jie Shen, Maja Pantic
Inspired by the recent development of deep network-based methods in semantic image segmentation, we introduce an end-to-end trainable model for face mask extraction in video sequences.
no code implementations • ICML 2018 • Jing Wang, Jie Shen, Ping Li
As a remedy, online feature selection has attracted increasing attention in recent years.
1 code implementation • 24 May 2018 • Yiming Lin, Shiyang Cheng, Jie Shen, Maja Pantic
36 state-of-the-art trackers, including facial landmark trackers, generic object trackers and trackers that we have fine-tuned or improved, are evaluated.
1 code implementation • 10 Apr 2018 • Yujiang Wang, Jie Shen, Stavros Petridis, Maja Pantic
In this paper, we present an effective and unsupervised face Re-ID system which simultaneously re-identifies multiple faces for HRI.
no code implementations • 18 Feb 2018 • Stavros Petridis, Jie Shen, Doruk Cetin, Maja Pantic
We show that an absolute decrease in classification rate of up to 3.7% is observed when training on normal speech and testing on whispered speech, and vice versa.
no code implementations • NeurIPS 2017 • Jie Shen, Ping Li
In machine learning and compressed sensing, it is of central importance to understand when a tractable algorithm recovers the support of a sparse signal from its compressed measurements.
no code implementations • ICML 2017 • Jie Shen, Ping Li
Recovering the support of a sparse signal from its compressed samples has been one of the most important problems in high dimensional statistics.
no code implementations • COLING 2016 • Jie Shen, Cong Liu
Distributed word representation is an efficient method for capturing semantic and syntactic word relations.
no code implementations • 5 May 2016 • Jie Shen, Ping Li
This paper is concerned with the hard thresholding operator which sets all but the $k$ largest absolute elements of a vector to zero.
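The operator itself is easy to state in code; a minimal NumPy version (the function name and example values are ours):

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x and set the rest to zero."""
    out = np.zeros_like(x)
    if k <= 0:
        return out
    idx = np.argpartition(np.abs(x), -k)[-k:]   # indices of the k largest |x_i|
    out[idx] = x[idx]
    return out

print(hard_threshold(np.array([3.0, -1.0, 0.5, -4.0, 2.0]), 2))  # [ 3.  0.  0. -4.  0.]
```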
no code implementations • 24 Apr 2015 • Jing Wang, Jie Shen, Huan Xu
Social trust prediction addresses the significant problem of exploring interactions among users in social networks.
no code implementations • 28 Mar 2015 • Jie Shen, Ping Li, Huan Xu
Low-rank representation (LRR) has been a significant method for segmenting data that are generated from a union of subspaces.
no code implementations • 5 Feb 2015 • Jing Wang, Jie Shen, Ping Li
In order to determine a small set of proposals with a high recall, a common scheme is to extract multiple features and then apply a ranking algorithm, which, however, incurs two major challenges: 1) the ranking model often imposes pairwise constraints between proposals, which precludes an efficient training/testing phase; 2) linear kernels are utilized because of the computational and memory bottleneck of training a kernelized model.
no code implementations • NeurIPS 2014 • Jie Shen, Huan Xu, Ping Li
The key technique in our algorithm is to reformulate the max-norm into a matrix factorization form, consisting of a basis component and a coefficient component.
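For reference, the standard factorization characterization that underlies this reformulation (in our notation) is $\|X\|_{\max} = \min_{U,V:\, X = UV^{\top}} \|U\|_{2,\infty}\,\|V\|_{2,\infty}$, where $\|U\|_{2,\infty}$ denotes the largest row $\ell_2$ norm; here $U$ plays the role of the basis component and $V$ the coefficients.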
no code implementations • 16 Nov 2014 • Weipeng Zhang, Jie Shen, Guangcan Liu, Yong Yu
Unlike previous approaches, our approach models the clothing attributes as latent variables and thus requires no explicit labeling for the clothing attributes.
no code implementations • 12 Jun 2014 • Jie Shen, Huan Xu, Ping Li
The max-norm regularizer has been extensively studied in the last decade, as it promotes effective low-rank estimation of the underlying data.
no code implementations • 19 Apr 2014 • Jie Shen, Guangcan Liu, Jia Chen, Yuqiang Fang, Jianbin Xie, Yong Yu, Shuicheng Yan
In this paper, we utilize structured learning to simultaneously address two intertwined problems: human pose estimation (HPE) and garment attribute classification (GAC), which are valuable for a variety of computer vision and multimedia applications.