Search Results for author: Junyi Wei

Found 4 papers, 2 papers with code

Why Larger Language Models Do In-context Learning Differently?

no code implementations • 30 May 2024 • Zhenmei Shi, Junyi Wei, Zhuoyan Xu, Yingyu Liang

This sheds light on where transformers pay attention and how that affects in-context learning (ICL).

Tasks: In-Context Learning

Towards Few-Shot Adaptation of Foundation Models via Multitask Finetuning

1 code implementation • 22 Feb 2024 • Zhuoyan Xu, Zhenmei Shi, Junyi Wei, Fangzhou Mu, Yin Li, Yingyu Liang

An emerging solution, with recent success in vision and NLP, is to finetune a foundation model on a selection of relevant tasks before adapting it to a target task with limited labeled samples.

Interfacing Foundation Models' Embeddings

1 code implementation • 12 Dec 2023 • Xueyan Zou, Linjie Li, JianFeng Wang, Jianwei Yang, Mingyu Ding, Junyi Wei, Zhengyuan Yang, Feng Li, Hao Zhang, Shilong Liu, Arul Aravinthan, Yong Jae Lee, Lijuan Wang

To further unleash the power of foundation models, we present FIND, a generalized interface for aligning foundation models' embeddings, with unified image- and dataset-level understanding spanning modality and granularity.

Tasks: Decoder, Image Segmentation, +3

A Theoretical Analysis on Feature Learning in Neural Networks: Emergence from Inputs and Advantage over Fixed Features

no code implementations • ICLR 2022 • Zhenmei Shi, Junyi Wei, Yingyu Liang

These results provide theoretical evidence that feature learning in neural networks depends strongly on the input structure and leads to superior performance.
