Search Results for author: Sherman Wong

Found 2 papers, 1 paper with code

Extending Context Window of Large Language Models via Positional Interpolation

5 code implementations • 27 Jun 2023 • Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian

We present Position Interpolation (PI), which extends the context window sizes of RoPE-based pretrained LLMs such as LLaMA models up to 32768 tokens with minimal fine-tuning (within 1000 steps), while demonstrating strong empirical results on various tasks that require long context, including passkey retrieval, language modeling, and long document summarization, on models from LLaMA 7B to 65B.

Tasks: Document Summarization • Language Modelling • +2
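
The core idea of Position Interpolation is to linearly rescale position indices so that an extended sequence maps back into the position range the model was trained on, rather than extrapolating RoPE beyond it. Below is a minimal sketch of that idea, assuming a standard RoPE angle computation; the function names and the 2048-token default original context are illustrative assumptions, not the paper's implementation.

```python
import torch

def rope_angles(positions: torch.Tensor, dim: int, base: float = 10000.0) -> torch.Tensor:
    """Standard RoPE rotation angles for (possibly fractional) positions."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    return positions[:, None].float() * inv_freq[None, :]  # (seq_len, dim // 2)

def interpolated_angles(seq_len: int, dim: int, orig_ctx: int = 2048) -> torch.Tensor:
    """Position Interpolation sketch: rescale positions by orig_ctx / seq_len
    so the largest position stays within the trained range [0, orig_ctx)."""
    positions = torch.arange(seq_len)
    scale = min(1.0, orig_ctx / seq_len)  # only shrink when exceeding orig_ctx
    return rope_angles(positions * scale, dim)
```

For example, with orig_ctx = 2048 and seq_len = 8192, every position is scaled by 0.25, so position 8191 is treated as roughly 2047.75 and remains inside the window the model saw during pretraining.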

Alternate Model Growth and Pruning for Efficient Training of Recommendation Systems

no code implementations • 4 May 2021 • Xiaocong Du, Bhargav Bhushanam, Jiecao Yu, Dhruv Choudhary, Tianxiang Gao, Sherman Wong, Louis Feng, Jongsoo Park, Yu Cao, Arun Kejariwal

Our method leverages structured sparsification to reduce computational cost without hurting model capacity at the end of offline training, so that a full-size model is available in the recurring training stage to learn new data in real time.

Tasks: Recommendation Systems
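
No code is listed for this paper, but the structured sparsification it mentions can be illustrated with a small sketch: prune whole rows (output units) of a weight matrix while keeping the tensor's full shape, so the pruned units can be regrown later. The magnitude criterion and keep fraction below are placeholder assumptions; the excerpt does not specify the paper's actual schedule or criterion.

```python
import torch

def prune_rows(weight: torch.Tensor, keep_frac: float = 0.5) -> torch.Tensor:
    """Structured sparsification sketch: zero out whole rows with the
    smallest L2 norm. The shape is preserved, so a full-size model
    remains available for later growth / recurring training."""
    norms = weight.norm(dim=1)                       # per-row magnitude
    k = int(weight.size(0) * keep_frac)              # rows to keep
    mask = torch.zeros(weight.size(0), dtype=torch.bool)
    mask[norms.topk(k).indices] = True
    return weight * mask[:, None]                    # zero the pruned rows
```

The paper alternates phases of this kind of pruning with model growth during offline training; this snippet only shows the pruning half in its simplest form.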
