Search Results for author: Xing Zhou

Found 7 papers, 2 papers with code

General Purpose Text Embeddings from Pre-trained Language Models for Scalable Inference

no code implementations • Findings of the Association for Computational Linguistics 2020 • Jingfei Du, Myle Ott, Haoran Li, Xing Zhou, Veselin Stoyanov

The resulting method offers a compelling solution for using large-scale pre-trained models at a fraction of the computational cost when multiple tasks are performed on the same text.

Knowledge Distillation • Quantization
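The snippet above describes reusing one expensive text embedding across several tasks. Below is a minimal PyTorch sketch of that idea under stated assumptions: a hypothetical frozen encoder (`FrozenEncoder`) stands in for the pre-trained model, and two small task heads reuse its output so the heavy encoder runs only once per text. The names, dimensions, and heads are illustrative, and the paper's actual distillation and quantization recipe (see the task tags) is not reproduced here.

```python
import torch
import torch.nn as nn

class FrozenEncoder(nn.Module):
    """Hypothetical stand-in for a frozen pre-trained language model encoder."""
    def __init__(self, vocab_size=30522, dim=256):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)  # mean-pools token embeddings

    def forward(self, token_ids):
        with torch.no_grad():          # the shared encoder is frozen
            return self.embed(token_ids)

encoder = FrozenEncoder()
sentiment_head = nn.Linear(256, 2)     # lightweight per-task classifiers
topic_head = nn.Linear(256, 10)

token_ids = torch.randint(0, 30522, (4, 16))   # batch of 4 toy "sentences"
text_embedding = encoder(token_ids)            # expensive step, computed once per text

# The same embedding feeds every task head, so the large model runs a
# single time even when multiple tasks score the same text.
sentiment_logits = sentiment_head(text_embedding)
topic_logits = topic_head(text_embedding)
```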

Evaluating Modules in Graph Contrastive Learning

1 code implementation • 15 Jun 2021 • Ganqu Cui, Yufeng Du, Cheng Yang, Jie Zhou, Liang Xu, Xing Zhou, Xingyi Cheng, Zhiyuan Liu

The recent emergence of contrastive learning approaches has facilitated their application to graph representation learning (GRL), introducing graph contrastive learning (GCL) into the literature.

Contrastive Learning • Graph Classification • +1
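As a rough illustration of the graph contrastive learning setup described above, the sketch below computes a generic InfoNCE-style loss between two augmented views of the same node embeddings. The view tensors, temperature, and loss form are illustrative assumptions, not any of the specific modules evaluated in the paper.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """Generic InfoNCE-style contrastive objective between two views of the same nodes."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # cosine similarities between all view pairs
    labels = torch.arange(z1.size(0))       # positive pairs sit on the diagonal
    return F.cross_entropy(logits, labels)

# Two "views" of 8 node embeddings, e.g. produced by different graph augmentations.
view_a, view_b = torch.randn(8, 32), torch.randn(8, 32)
loss = info_nce(view_a, view_b)
```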

Efficient Large Scale Language Modeling with Mixtures of Experts

no code implementations • 20 Dec 2021 • Mikel Artetxe, Shruti Bhosale, Naman Goyal, Todor Mihaylov, Myle Ott, Sam Shleifer, Xi Victoria Lin, Jingfei Du, Srinivasan Iyer, Ramakanth Pasunuru, Giri Anantharaman, Xian Li, Shuohui Chen, Halil Akin, Mandeep Baines, Louis Martin, Xing Zhou, Punit Singh Koura, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Mona Diab, Zornitsa Kozareva, Ves Stoyanov

This paper presents a detailed empirical study of how autoregressive MoE language models scale in comparison with dense models in a wide range of settings: in- and out-of-domain language modeling, zero- and few-shot priming, and full-shot fine-tuning.

Language Modelling
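To make the mixture-of-experts idea behind the paper above concrete, here is a toy top-1 routed MoE feed-forward layer in PyTorch. The hidden size, expert count, and routing rule are simplifying assumptions; load balancing, capacity limits, and the actual architecture studied in the paper are omitted.

```python
import torch
import torch.nn as nn

class Top1MoE(nn.Module):
    """Toy top-1 routed mixture-of-experts feed-forward layer (illustrative only)."""
    def __init__(self, dim=64, num_experts=4):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                        # x: (tokens, dim)
        gate = self.router(x).softmax(dim=-1)
        expert_idx = gate.argmax(dim=-1)         # each token is sent to one expert
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i
            if mask.any():
                # scale by the gate value so the routing decision stays soft
                out[mask] = expert(x[mask]) * gate[mask, i:i + 1]
        return out

tokens = torch.randn(10, 64)
moe_out = Top1MoE()(tokens)
```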

LLMBind: A Unified Modality-Task Integration Framework

no code implementations • 22 Feb 2024 • Bin Zhu, Peng Jin, Munan Ning, Bin Lin, Jinfa Huang, Qi Song, Jiaxi Cui, Junwu Zhang, Zhenyu Tang, Mingjun Pan, Xing Zhou, Li Yuan

While recent multimodal large language models tackle a variety of modality-specific tasks, they possess limited integration capabilities for complex multi-modality tasks, which constrains the development of the field.

Audio Generation • Image Segmentation • +1

Envision3D: One Image to 3D with Anchor Views Interpolation

1 code implementation • 13 Mar 2024 • Yatian Pang, Tanghui Jia, Yujun Shi, Zhenyu Tang, Junwu Zhang, Xinhua Cheng, Xing Zhou, Francis E. H. Tay, Li Yuan

To address this issue, we propose a novel cascade diffusion framework, which decomposes the challenging dense views generation task into two tractable stages, namely anchor views generation and anchor views interpolation.

Image to 3D
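The cascade described above decomposes dense view generation into anchor-view generation followed by anchor-view interpolation. The skeleton below mirrors only that structure, under loud assumptions: both diffusion stages are replaced by placeholder functions (zero-filled anchors and a linear blend), so it illustrates the two-stage decomposition, not the Envision3D method itself.

```python
import numpy as np

def generate_anchor_views(image, num_anchors=4):
    # Stage 1 (hypothetical placeholder): a diffusion model would synthesize
    # a small set of sparse anchor views around the object.
    return [np.zeros_like(image) for _ in range(num_anchors)]

def interpolate_between_anchors(anchors, views_per_gap=8):
    # Stage 2 (hypothetical placeholder): a second diffusion model would densify
    # the views; a linear blend between consecutive anchors stands in for it here.
    dense = []
    for a, b in zip(anchors, anchors[1:]):
        for t in np.linspace(0.0, 1.0, views_per_gap):
            dense.append((1 - t) * a + t * b)
    return dense

input_image = np.zeros((64, 64, 3))
dense_views = interpolate_between_anchors(generate_anchor_views(input_image))
```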
