Search Results for author: Jun Rao

Found 5 papers, 2 papers with code

Can Linguistic Knowledge Improve Multimodal Alignment in Vision-Language Pretraining?

1 code implementation • 24 Aug 2023 • Fei Wang, Liang Ding, Jun Rao, Ye Liu, Li Shen, Changxing Ding

The multimedia community has shown significant interest in perceiving and representing the physical world with multimodal pretrained neural network models, and among them, visual-language pretraining (VLP) is currently the most captivating topic.

Attribute · Negation · +1

Dynamic Contrastive Distillation for Image-Text Retrieval

no code implementations • 4 Jul 2022 • Jun Rao, Liang Ding, Shuhan Qi, Meng Fang, Yang Liu, Li Shen, DaCheng Tao

Although vision-and-language pretraining (VLP) equipped cross-modal image-text retrieval (ITR) has achieved remarkable progress in the past two years, it suffers from a major drawback: the ever-increasing size of VLP models restricts their deployment in real-world search scenarios, where high latency is unacceptable.

Contrastive Learning · Metric Learning · +3
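
To make the snippet above concrete, here is a minimal, hypothetical PyTorch sketch of contrastive distillation for image-text retrieval: a compact student is trained to match both the ground-truth pairing (hard InfoNCE loss) and a frozen teacher's similarity distribution (soft KL loss). All names, the temperature `tau`, and the weight `alpha` are illustrative; the paper's dynamic weighting and pair-selection mechanisms are not reproduced here.

```python
# Hypothetical sketch of contrastive distillation for image-text retrieval.
# Not the paper's exact method: names, tau, and alpha are illustrative.
import torch
import torch.nn.functional as F

def contrastive_distillation_loss(s_img, s_txt, t_img, t_txt, tau=0.05, alpha=0.5):
    """s_*: student embeddings, t_*: frozen teacher embeddings, shape (B, D)."""
    s_img, s_txt = F.normalize(s_img, dim=-1), F.normalize(s_txt, dim=-1)
    t_img, t_txt = F.normalize(t_img, dim=-1), F.normalize(t_txt, dim=-1)

    # Student image-to-text similarity matrix over the batch.
    s_logits = s_img @ s_txt.t() / tau
    # Hard InfoNCE loss: matched pairs lie on the diagonal.
    labels = torch.arange(s_logits.size(0), device=s_logits.device)
    hard_loss = F.cross_entropy(s_logits, labels)

    # Soft loss: match the student's similarity distribution to the teacher's.
    with torch.no_grad():
        t_logits = t_img @ t_txt.t() / tau
    soft_loss = F.kl_div(F.log_softmax(s_logits, dim=-1),
                         F.softmax(t_logits, dim=-1), reduction="batchmean")

    return alpha * hard_loss + (1 - alpha) * soft_loss
```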

Parameter-Efficient and Student-Friendly Knowledge Distillation

no code implementations • 28 May 2022 • Jun Rao, Xv Meng, Liang Ding, Shuhan Qi, DaCheng Tao

In this paper, we present a parameter-efficient and student-friendly knowledge distillation method, namely PESF-KD, which achieves efficient and sufficient knowledge transfer by updating only a relatively small subset of parameters.

Knowledge Distillation · Transfer Learning
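
As a rough illustration of the "update only a small subset of parameters" idea, the hypothetical sketch below freezes the teacher backbone and trains only a tiny adapter on its logits, combined with a standard temperature-scaled distillation loss. This is a generic parameter-efficient KD sketch under those assumptions, not PESF-KD's actual parameterization.

```python
# Hypothetical sketch of parameter-efficient knowledge distillation:
# freeze the teacher backbone and update only a tiny adapter so the
# soft labels can adapt to the student. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TeacherWithAdapter(nn.Module):
    def __init__(self, teacher, num_classes):
        super().__init__()
        self.teacher = teacher
        for p in self.teacher.parameters():
            p.requires_grad = False            # backbone stays frozen
        # Tiny trainable adapter on top of the teacher's logits.
        self.adapter = nn.Linear(num_classes, num_classes)

    def forward(self, x):
        with torch.no_grad():
            logits = self.teacher(x)
        return logits + self.adapter(logits)   # only adapter params update

def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.9):
    """Standard distillation objective: soft teacher match + hard labels."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard
```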

Where Does the Performance Improvement Come From? -- A Reproducibility Concern about Image-Text Retrieval

1 code implementation • 8 Mar 2022 • Jun Rao, Fei Wang, Liang Ding, Shuhan Qi, Yibing Zhan, Weifeng Liu, DaCheng Tao

In contrast to previous works, we focus on the reproducibility of the approaches and on examining the elements that lead to improved performance of pretrained and non-pretrained models in retrieving images and text.

Information Retrieval · Retrieval · +1

Using Paxos to Build a Scalable, Consistent, and Highly Available Datastore

no code implementations • 12 Mar 2011 • Jun Rao, Eugene J. Shekita, Sandeep Tata

We show that, compared to an eventually consistent datastore, Spinnaker can be as fast or even faster on reads and only 5% to 10% slower on writes.

Databases · Distributed, Parallel, and Cluster Computing
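
For intuition about why writes pay a modest latency cost, here is a toy, hypothetical sketch of the majority-quorum commit rule underlying Paxos-style replication as in Spinnaker: three replicas, and a write commits once a majority (2 of 3) has logged it. Leader election, durable write-ahead logging, and recovery, which the real protocol depends on, are omitted; all class and method names are illustrative.

```python
# Toy, hypothetical sketch of majority-quorum replication in the spirit
# of Paxos-based systems such as Spinnaker. Illustrative only.

class Replica:
    """In-memory stand-in for a replica; a real system persists its log."""
    def __init__(self):
        self.log = []      # accepted-but-not-yet-applied writes
        self.store = {}    # committed key-value state

    def append_to_log(self, key, value):
        self.log.append((key, value))
        return True        # a real replica could fail or time out here

    def apply(self, key, value):
        self.store[key] = value

def replicate_write(replicas, key, value):
    """Commit a write once a majority of replicas acknowledge it."""
    acks = sum(1 for r in replicas if r.append_to_log(key, value))
    majority = len(replicas) // 2 + 1
    if acks < majority:
        return False         # not committed; the client must retry
    for r in replicas:
        r.apply(key, value)  # applied asynchronously in practice
    return True

replicas = [Replica() for _ in range(3)]
assert replicate_write(replicas, "row:42", "v1")  # commits with 2-of-3 acks
```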
