A common research goal of self-supervised learning is to extract general representations from which arbitrary downstream tasks would benefit.
To use the abundant information contained in non-text data, we propose a self-supervised multimodal opinion summarization framework called MultimodalSum.
To address this issue, we propose a framework called CITIES, which enhances the quality of tail-item embeddings by training an embedding-inference function on multiple contextual head items, thereby improving recommendation performance not only for tail items but also for head items.
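The embedding-inference idea can be illustrated with a minimal sketch: each context in which a tail item appears is summarized from the embeddings of the head items around it, and those summaries are pooled into an inferred tail-item embedding. All names here are illustrative, and mean-pooling stands in for the learned inference function, so this is an assumption-laden toy, not the CITIES architecture:

```python
def infer_tail_embedding(context_sets, head_embeddings):
    """Infer an embedding for a tail item from the head items it co-occurs with.

    context_sets: list of contexts, each a list of head-item ids
    head_embeddings: dict mapping head-item id -> embedding (list of floats)
    """
    def mean_vecs(vecs):
        # Element-wise mean of a list of equal-length vectors.
        n = len(vecs)
        return [sum(v[d] for v in vecs) / n for d in range(len(vecs[0]))]

    # Summarize each context by mean-pooling its head-item embeddings,
    # then pool across contexts (a stand-in for a learned aggregator).
    context_vecs = [mean_vecs([head_embeddings[i] for i in ids])
                    for ids in context_sets]
    return mean_vecs(context_vecs)

# Toy example: a tail item seen in two contexts of known head items.
head_embeddings = {"h1": [1.0, 0.0], "h2": [0.0, 1.0]}
tail_vec = infer_tail_embedding([["h1", "h2"], ["h1"]], head_embeddings)
```

The inferred tail vector lands between the head items it co-occurred with, which is the intuition behind transferring head-item knowledge to tail items.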
Existing machine reading comprehension models are reported to be brittle against adversarially perturbed questions when optimized only for accuracy, which has led to new reading comprehension benchmarks, such as SQuAD 2.0, that contain such questions.
This paper proposes a recommender system that can estimate user preferences from only a small number of items, thereby alleviating the cold-start problem.
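A minimal baseline for estimating a preference vector from a handful of rated items is a rating-weighted average of item embeddings, with candidates scored by dot product. The function names, embeddings, and ratings below are invented for illustration; this is a generic sketch of cold-start preference estimation, not the paper's model:

```python
def estimate_user_vector(ratings, item_embeddings):
    """Estimate a user preference vector from a few (item_id, rating) pairs
    as a rating-weighted average of the corresponding item embeddings."""
    total = sum(r for _, r in ratings)
    dim = len(next(iter(item_embeddings.values())))
    user = [0.0] * dim
    for item_id, r in ratings:
        for d in range(dim):
            user[d] += r * item_embeddings[item_id][d] / total
    return user

def score(user, item_vec):
    # Higher dot product -> predicted stronger preference.
    return sum(u * v for u, v in zip(user, item_vec))

# Toy catalog: the user rated "a" highly and "b" poorly.
items = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [0.9, 0.1]}
user = estimate_user_vector([("a", 5.0), ("b", 1.0)], items)
```

Because "c" is close to the highly rated "a" in embedding space, it scores higher than "b" for this user, even though only two ratings were observed.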