no code implementations • 10 Aug 2021 • Ashild Kummen, Guanlin Li, Ali Hassan, Teodora Ganeva, Qianying Lu, Robert Shaw, Chenuka Ratwatte, Yang Zou, Lu Han, Emil Almazov, Sheena Visram, Andrew Taylor, Neil J Sebire, Lee Stott, Yvonne Rogers, Graham Roberts, Dean Mohamedally
We also introduce a series of bespoke gesture recognition classifiers as DirectInput triggers, including gestures for idle states, auto calibration, depth capture from a 2D RGB webcam stream, and tracking of facial motions such as mouth motions, winking, and head direction with rotation.
Moreover, the image-level category labels do not enforce consistent object detection across different transformations of the same images.
One major privacy attack in this domain is membership inference, where an adversary aims to determine whether a target data sample is part of the training set of a target ML model.
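To make the attack setting concrete, here is a minimal sketch of the simplest membership inference baseline: confidence thresholding, which exploits the tendency of models to predict training-set members more confidently than non-members. This is an illustrative toy, not the specific attack studied in the paper; the function name and threshold are assumptions.

```python
import numpy as np

def confidence_attack(top_class_confidences, threshold=0.9):
    """Flag a sample as a training-set member when the target model's
    top-class confidence exceeds a threshold. Members are typically
    predicted with higher confidence than unseen samples."""
    return np.asarray(top_class_confidences) >= threshold

# Toy example with hypothetical confidences:
member_conf = np.array([0.99, 0.95, 0.97])      # samples seen in training
nonmember_conf = np.array([0.60, 0.85, 0.72])   # held-out samples
print(confidence_attack(member_conf))
print(confidence_attack(nonmember_conf))
```

Stronger attacks replace the fixed threshold with a learned attack model or per-class calibration, but the threshold baseline already captures the core signal.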
Second, we observe that the predictions of target samples belonging to hard classes are vulnerable to perturbations.
To this end, we propose a joint learning framework that disentangles id-related/unrelated features and enforces adaptation to work on the id-related feature space exclusively.
Ranked #6 on Unsupervised Domain Adaptation on Market to MSMT
We propose to incorporate inter-class correlations in a Wasserstein training framework by pre-defining the ground metric (i.e., using the arc length of a circle) or learning it adaptively.
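A pre-defined circular ground metric of this kind can be sketched as follows: classes are placed evenly on a circle and the cost between two classes is the shorter arc length between them, so adjacent classes are cheap to confuse and opposite classes expensive. The function name and unit radius are assumptions for illustration.

```python
import numpy as np

def circular_ground_metric(num_classes, radius=1.0):
    """Ground metric matrix M where M[i, j] is the shorter arc length
    between classes i and j, placed evenly on a circle of given radius."""
    idx = np.arange(num_classes)
    diff = np.abs(idx[:, None] - idx[None, :])
    steps = np.minimum(diff, num_classes - diff)        # go the shorter way round
    return steps * (2 * np.pi * radius / num_classes)   # arc length per step

M = circular_ground_metric(4)  # 4 classes on the unit circle
```

The resulting matrix is symmetric with zeros on the diagonal, as a ground metric requires; an adaptively learned variant would instead parameterize M and optimize it jointly with the network.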
Monocular depth estimation is usually treated as a supervised regression problem, although it is in fact very similar to semantic segmentation, since both are fundamentally pixel-level classification tasks.
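Casting depth regression as classification amounts to discretising the depth range into bins and predicting a bin index per pixel. A minimal sketch, assuming log-spaced bins (a common choice, since relative error matters more at close range; the bin count and depth range here are illustrative):

```python
import numpy as np

def depth_to_class(depth, d_min=0.1, d_max=100.0, num_bins=80):
    """Map continuous depth values (metres) to discrete class indices
    using log-spaced bin edges between d_min and d_max."""
    edges = np.logspace(np.log10(d_min), np.log10(d_max), num_bins + 1)
    # digitize returns 1-based bin positions; shift and clip to [0, num_bins-1]
    return np.clip(np.digitize(depth, edges) - 1, 0, num_bins - 1)

labels = depth_to_class(np.array([0.1, 1.0, 10.0, 100.0]))
```

At inference time, a continuous depth estimate is recovered by taking the expectation of bin centres under the predicted class distribution.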
Recent advances in domain adaptation show that deep self-training presents a powerful means for unsupervised domain adaptation.
Ranked #9 on Domain Adaptation on VisDA2017
Unsupervised domain adaptation (UDA) aims at inferring class labels for an unlabeled target domain given a related labeled source dataset.
In this paper, we propose a novel UDA framework based on an iterative self-training procedure, where the problem is formulated as latent variable loss minimization and can be solved by alternately generating pseudo labels on target data and re-training the model with these labels.
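The alternation described above can be sketched with a deliberately simple stand-in model. This toy uses a nearest-centroid classifier and a distance-margin confidence filter in place of the paper's deep network and loss; all names and the selection rule are assumptions for illustration.

```python
import numpy as np

def self_train(source_x, source_y, target_x, rounds=3, keep_frac=0.8):
    """Iterative self-training sketch: alternately (1) pseudo-label target
    samples with the current model and (2) re-fit on source data plus the
    most confident pseudo-labeled target samples."""
    classes = np.unique(source_y)
    x, y = source_x, source_y
    for _ in range(rounds):
        # Step 2 (re-fit): nearest-centroid "model" from current labeled pool
        centroids = np.stack([x[y == c].mean(axis=0) for c in classes])
        # Step 1 (pseudo-label): assign each target sample its nearest class
        d = np.linalg.norm(target_x[:, None, :] - centroids[None], axis=2)
        pseudo = classes[d.argmin(axis=1)]
        # Confidence proxy: margin between the two closest centroids
        sorted_d = np.sort(d, axis=1)
        margin = sorted_d[:, 1] - sorted_d[:, 0]
        keep = margin >= np.quantile(margin, 1.0 - keep_frac)
        x = np.concatenate([source_x, target_x[keep]])
        y = np.concatenate([source_y, pseudo[keep]])
    return centroids
```

In the actual framework, the pseudo-label generation and re-training steps each minimize one block of a joint latent-variable loss, which is what makes the alternation a principled optimization rather than a heuristic.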
Recent deep networks achieved state-of-the-art performance on a variety of semantic segmentation tasks.
Ranked #10 on Image-to-Image Translation on GTAV-to-Cityscapes Labels
Edge detection is among the most fundamental vision problems for its role in perceptual grouping and its wide applications.
By selecting better scales in the region proposal input and by combining feature maps through careful design of the convolutional neural network, we improve performance on smaller objects.
Optimal transport distances, otherwise known as Wasserstein distances, have recently drawn ample attention in computer vision and machine learning as a powerful discrepancy measure for probability distributions.