Graph Neural Networks (GNNs) have received increasing attention for representation learning in various machine learning tasks.
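To make the idea concrete, here is a minimal sketch of one GNN message-passing step; the layer form and mean aggregation are illustrative assumptions, not any specific paper's method.

```python
def gnn_layer(features, adjacency):
    """Update each node's feature by averaging it with its neighbors' features.

    features:  dict mapping node -> list of floats (feature vector)
    adjacency: dict mapping node -> list of neighbor nodes
    """
    updated = {}
    for node, feat in features.items():
        neighbors = adjacency.get(node, [])
        # Gather the node's own feature plus its neighbors' features.
        stacked = [feat] + [features[n] for n in neighbors]
        dim = len(feat)
        # Mean-aggregate each feature dimension over the stacked vectors.
        updated[node] = [sum(v[d] for v in stacked) / len(stacked)
                         for d in range(dim)]
    return updated

# Tiny path graph: 0 -- 1 -- 2
features = {0: [1.0], 1: [0.0], 2: [1.0]}
adjacency = {0: [1], 1: [0, 2], 2: [1]}
out = gnn_layer(features, adjacency)
```

Stacking such layers lets information propagate over longer paths in the graph, which is what makes GNNs useful for representation learning on relational data.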
We further verify the scalability of RPNet, in terms of both depth and width, on classification and segmentation tasks.
Interactions between users and videos are the major data source for video recommendation.
Based on the continuity between slices/frames and the common spatial layout of organs across volumes/sequences, we introduce a novel bootstrap self-supervised representation learning method that leverages the predictability of neighboring slices.
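One way to exploit slice predictability is a pretext task that classifies whether two slices are adjacent; the pairing scheme below is a hedged, illustrative sketch, not the paper's exact method.

```python
import random

def make_pretext_pairs(num_slices, num_pairs, seed=0):
    """Sample (i, j, is_neighbor) training triples from slice indices.

    Roughly half the pairs are forced to be adjacent (positives);
    the rest are sampled at random (mostly negatives).
    """
    rng = random.Random(seed)
    pairs = []
    for _ in range(num_pairs):
        i = rng.randrange(num_slices)
        if rng.random() < 0.5 and i + 1 < num_slices:
            j = i + 1                      # positive: the neighboring slice
        else:
            j = rng.randrange(num_slices)  # random pair (may rarely be adjacent)
        # Label from the actual index distance, so labels are always consistent.
        pairs.append((i, j, 1 if abs(i - j) == 1 else 0))
    return pairs

pairs = make_pretext_pairs(num_slices=10, num_pairs=50)
```

A network trained to predict `is_neighbor` from the slice contents must learn anatomy-aware features, which can then be transferred to the downstream segmentation task.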
In this paper, we investigate whether self-training -- a simple but popular framework -- can be made to work better for semi-supervised segmentation.
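The vanilla self-training loop that such work builds on can be sketched as follows; the 1-D threshold "model" and confidence margin here are stand-in assumptions chosen to keep the example self-contained, not the segmentation pipeline itself.

```python
def fit_threshold(xs, ys):
    """Toy 1-D binary classifier: threshold at the midpoint of the class means."""
    mean0 = sum(x for x, y in zip(xs, ys) if y == 0) / ys.count(0)
    mean1 = sum(x for x, y in zip(xs, ys) if y == 1) / ys.count(1)
    return (mean0 + mean1) / 2

def self_train(labeled_x, labeled_y, unlabeled_x, margin=0.5):
    # Step 1: train a teacher on the labeled set.
    thr = fit_threshold(labeled_x, labeled_y)
    # Step 2: pseudo-label only the unlabeled points far from the boundary.
    new_x, new_y = [], []
    for x in unlabeled_x:
        if abs(x - thr) >= margin:          # simple confidence filter
            new_x.append(x)
            new_y.append(1 if x >= thr else 0)
    # Step 3: retrain a student on labeled + pseudo-labeled data.
    return fit_threshold(labeled_x + new_x, labeled_y + new_y)

thr = self_train([0.0, 4.0], [0, 1], [0.2, 3.8, 2.0])
```

In segmentation the same three steps apply per pixel, and the confidence filter is where most variants differ.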
Because they are not restricted by the connectivity of the original graph, the generated views give the model new and complementary perspectives on the relationships between nodes, enhancing its expressive power.
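One common way to generate such alternative views is random edge dropping; this is an illustrative assumption about the view generator, not necessarily the mechanism used here.

```python
import random

def drop_edges(edges, drop_prob, seed=0):
    """Return a new graph view: each edge is kept with probability 1 - drop_prob."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() >= drop_prob]

# A small cycle graph with one chord.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
view = drop_edges(edges, drop_prob=0.4)
# `view` is a (possibly strict) subset of the original edge list.
```

Two views sampled with different seeds can then serve as the positive pair in a contrastive objective over node embeddings.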
Many practical recommender systems recommend items to different users by mining user-item interactions alone, ignoring the rich attribute information of the items that users interact with.
To train our network, we contribute a new dataset containing 1,000 object categories with high-quality annotations.
In particular, while some of them aim at segmenting the image into regions, such as object or surface instances, others aim at inferring the semantic labels of given regions, or their support relationships.
We tackle the problem of single image depth estimation, which, without additional knowledge, suffers from many ambiguities.