When pre-trained on large-scale datasets and transferred to medium- and small-scale datasets, ProSTformer achieves significant improvements and performs best.
Most deep network methods for compressive sensing reconstruction suffer from the black-box nature of DNNs.
Graph neural networks (GNNs) have achieved remarkable performance in many graph analytics tasks such as node classification, link prediction, and graph clustering.
When encountering a dubious diagnostic case, medical instance retrieval can help radiologists make evidence-based diagnoses by finding images containing instances similar to a query case from a large image database.
Graph neural networks (GNNs) have gained increasing popularity in many areas such as e-commerce, social networks, and bioinformatics.
We study how to support elasticity, that is, the ability to dynamically adjust the parallelism (i.e., the number of GPUs), for deep neural network (DNN) training in a GPU cluster.
Heavy communication for model synchronization is a major bottleneck in scaling distributed deep neural network training to many workers.
When a new user signs up on a website, we usually have no information about them, i.e., no interactions with items, no user profile, and no social links with other users.
Spatial and temporal features are critical for demand forecasting in bike-sharing systems (BSSs), but extracting spatiotemporal dynamics is challenging.
A good parallelization strategy can significantly improve the efficiency or reduce the cost of distributed training of deep neural networks (DNNs).
In this paper, we propose self-enhanced GNN (SEG), which improves the quality of the input data using the outputs of existing GNN models for better performance on semi-supervised node classification.
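As a hedged illustration of that idea, the sketch below uses the generic self-training recipe of turning a trained model's confident outputs into extra training data; the function name, the confidence threshold, and the toy inputs are our own assumptions, not SEG's exact algorithm.

```python
import numpy as np

def pseudo_label_expand(probs, train_mask, threshold=0.95):
    """Self-training-style data enhancement (a generic sketch, not SEG's
    exact algorithm): unlabeled nodes whose predicted class probability
    exceeds a confidence threshold are added to the labeled set with
    pseudo-labels.

    probs:      (num_nodes, num_classes) softmax outputs of a trained GNN
    train_mask: boolean array marking the originally labeled nodes
    """
    confidence = probs.max(axis=1)
    pseudo_labels = probs.argmax(axis=1)
    # Only unlabeled, high-confidence nodes receive pseudo-labels.
    new_mask = (~train_mask) & (confidence >= threshold)
    expanded_mask = train_mask | new_mask
    return pseudo_labels, expanded_mask

# Toy usage (hypothetical data): 4 nodes, 2 classes, only node 0 labeled.
probs = np.array([[0.9, 0.1], [0.99, 0.01], [0.6, 0.4], [0.02, 0.98]])
train_mask = np.array([True, False, False, False])
labels, mask = pseudo_label_expand(probs, train_mask)
print(labels, mask)  # nodes 1 and 3 gain pseudo-labels
```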
Edit-distance-based string similarity search has many applications such as spell correction, data de-duplication, and sequence alignment.
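For reference, a minimal dynamic-programming implementation of the underlying edit (Levenshtein) distance; in a similarity search pipeline, a check like edit_distance(s, t) <= tau is the final verification step (the function name and tau are ours).

```python
def edit_distance(s, t):
    """Levenshtein distance between strings s and t via dynamic programming,
    using a rolling one-dimensional array over prefixes of t.
    """
    m, n = len(s), len(t)
    dp = list(range(n + 1))             # distance from "" to each prefix of t
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i          # dp[0] = distance from s[:i] to ""
        for j in range(1, n + 1):
            cur = dp[j]
            cost = 0 if s[i - 1] == t[j - 1] else 1
            dp[j] = min(dp[j] + 1,      # deletion
                        dp[j - 1] + 1,  # insertion
                        prev + cost)    # substitution / match
            prev = cur
    return dp[n]

assert edit_distance("kitten", "sitting") == 3
```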
In particular, at the high compression ratio end, HSQ provides a low per-iteration communication cost of $O(\log d)$, which is favorable for federated learning.
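To make the $O(\log d)$ figure concrete under one plausible reading (our assumption, not necessarily HSQ's exact scheme): if workers share a fixed codebook of $K = O(d)$ unit vectors, a gradient can be summarized by the index of its nearest codeword plus one scalar scale, costing $\lceil \log_2 K \rceil = O(\log d)$ bits per vector per iteration.

```python
import numpy as np

def encode(grad, codebook):
    """Codebook-based gradient quantization (an illustrative sketch only,
    not HSQ's exact algorithm): transmit the index of the nearest unit-norm
    codeword plus one scalar scale, i.e. O(log K) bits + one float.
    """
    scale = np.linalg.norm(grad)
    direction = grad / (scale + 1e-12)
    idx = int(np.argmax(codebook @ direction))  # nearest by cosine similarity
    return idx, scale

def decode(idx, scale, codebook):
    return scale * codebook[idx]

# Shared codebook: K = 4 * d random unit vectors (a hypothetical choice).
d = 16
rng = np.random.default_rng(0)
codebook = rng.standard_normal((4 * d, d))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

g = rng.standard_normal(d)
idx, scale = encode(g, codebook)
g_hat = decode(idx, scale, codebook)
print(idx, np.dot(g, g_hat) / (np.linalg.norm(g) * np.linalg.norm(g_hat)))
```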
In this paper, we present a new angle for analyzing quantization error, decomposing it into a norm error and a direction error.
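One natural way to formalize such a decomposition (our phrasing; the paper's exact definitions may differ): write $x = \|x\|\,u$ with unit direction $u = x/\|x\|$, and similarly $\hat{x} = \|\hat{x}\|\,\hat{u}$ for the quantized vector; the norm error is then $\bigl|\,\|x\| - \|\hat{x}\|\,\bigr|$ and the direction error is $\|u - \hat{u}\|$ (equivalently, a monotone function of $1 - u^{\top}\hat{u}$).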
Then we explain the good performance of ip-NSW as matching the norm bias of the MIPS problem: large-norm items have large in-degrees in the ip-NSW proximity graph, and a walk on the graph spends the majority of its computation on these items, thus effectively avoiding unnecessary computation on small-norm items.
Collaborative filtering, a widely used recommendation technique, predicts a user's preference by aggregating the ratings from similar users.
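A minimal user-based collaborative filtering sketch of that aggregation (the toy matrix and parameter choices are hypothetical): cosine similarity between users' rating vectors weights the average of neighbors' ratings.

```python
import numpy as np

def predict(R, user, item, k=2):
    """Predict R[user, item] as a similarity-weighted average of the
    ratings that the k most similar users gave this item (0 = unrated).
    """
    norms = np.linalg.norm(R, axis=1) + 1e-12
    sims = (R @ R[user]) / (norms * norms[user])  # cosine similarity
    sims[user] = -np.inf                          # exclude the user itself
    sims[R[:, item] == 0] = -np.inf               # keep only raters of item
    neighbors = np.argsort(sims)[-k:]
    w = sims[neighbors]
    return float(w @ R[neighbors, item] / w.sum())

# Toy ratings: rows = users, columns = items, 0 = unrated (hypothetical data).
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 5, 4],
              [0, 1, 5, 4]], dtype=float)
print(predict(R, user=1, item=1))  # estimate user 1's rating of item 1
```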
Recently, locality-sensitive hashing (LSH) has been shown to be effective for MIPS, and several algorithms, including $L_2$-ALSH, Sign-ALSH, and Simple-LSH, have been proposed.
Neyshabur and Srebro proposed Simple-LSH, which is the state-of-the-art hashing method for maximum inner product search (MIPS) with a performance guarantee.
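For concreteness, a sketch of the Simple-LSH reduction as described by Neyshabur and Srebro: after scaling the data so all norms are at most 1, each item $x$ is lifted to $[x; \sqrt{1-\|x\|^2}]$ and the query $q$ to $[q/\|q\|; 0]$, turning MIPS into angular similarity search, which sign random projections (SimHash) handle; the hashing code below and its parameter choices are illustrative, not the paper's reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def lift_item(x):
    """Simple-LSH item transform (data pre-scaled so ||x|| <= 1):
    P(x) = [x; sqrt(1 - ||x||^2)], a unit vector."""
    return np.append(x, np.sqrt(max(0.0, 1.0 - x @ x)))

def lift_query(q):
    """Query transform Q(q) = [q / ||q||; 0]; then
    <P(x), Q(q)> = <x, q> / ||q||, so MIPS becomes angular search."""
    return np.append(q / np.linalg.norm(q), 0.0)

def simhash(v, planes):
    """Sign random projections: one bit per random hyperplane."""
    return tuple((planes @ v) >= 0)

d, num_bits = 8, 16
planes = rng.standard_normal((num_bits, d + 1))  # hyperplanes in lifted space

items = rng.standard_normal((100, d))
items /= np.linalg.norm(items, axis=1).max()     # scale: max norm = 1
q = rng.standard_normal(d)

codes = [simhash(lift_item(x), planes) for x in items]
q_code = simhash(lift_query(q), planes)
# Rank items by Hamming similarity of their codes to the query's code.
ham = [sum(a == b for a, b in zip(c, q_code)) for c in codes]
best = int(np.argmax(ham))
```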
Our experiments show that both the hybrid index and the search schemes can improve the recall of the initial retrieval stage with small overhead.