Search Results for author: Igor Molybog

Found 6 papers, 3 papers with code

Effective Long-Context Scaling of Foundation Models

1 code implementation • 27 Sep 2023 • Wenhan Xiong, Jingyu Liu, Igor Molybog, Hejia Zhang, Prajjwal Bhargava, Rui Hou, Louis Martin, Rashi Rungta, Karthik Abinav Sankararaman, Barlas Oguz, Madian Khabsa, Han Fang, Yashar Mehdad, Sharan Narang, Kshitiz Malik, Angela Fan, Shruti Bhosale, Sergey Edunov, Mike Lewis, Sinong Wang, Hao Ma

We also examine the impact of various design choices in the pretraining process, including the data mix and the training curriculum of sequence lengths. Our ablation experiments suggest that having abundant long texts in the pretraining dataset is not the key to achieving strong performance, and we empirically verify that long-context continual pretraining is more efficient and similarly effective compared to pretraining from scratch with long sequences.

Continual Pretraining • Language Modelling
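
A minimal sketch of the long-context continual-pretraining setup the abstract contrasts with from-scratch pretraining: start from an existing short-context checkpoint and keep training it as a causal LM on long, packed sequences. The base checkpoint, corpus, target context length, and hyperparameters below are illustrative assumptions, not the authors' recipe, and extending the positional handling (e.g., adapting RoPE) is only stubbed out here.

# Illustrative sketch only: continue causal-LM pretraining of an assumed
# short-context checkpoint on long packed sequences. Model name, dataset,
# and settings are placeholders, not the paper's configuration.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE = "meta-llama/Llama-2-7b-hf"   # assumed short-context base model
LONG_CTX = 16384                    # illustrative target context length

tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE)
model.config.max_position_embeddings = LONG_CTX  # placeholder; real runs also adapt RoPE

raw = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")

def tokenize(batch):
    return tokenizer(batch["text"])

def pack(batch, block=LONG_CTX):
    # Concatenate documents and slice into fixed-length long sequences.
    ids = sum(batch["input_ids"], [])
    usable = (len(ids) // block) * block
    return {"input_ids": [ids[i:i + block] for i in range(0, usable, block)]}

ds = raw.map(tokenize, batched=True, remove_columns=raw.column_names)
ds = ds.map(pack, batched=True, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="long-ctx-cpt", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, learning_rate=2e-5,
                           max_steps=1000, bf16=True),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

The point of the contrast in the abstract is that the loop above reuses all the compute already spent on the short-context base model, rather than repeating full pretraining with long sequences from scratch.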

When Does MAML Objective Have Benign Landscape?

no code implementations • 31 May 2020 • Igor Molybog, Javad Lavaei

The paper studies the complexity of the optimization problem behind the Model-Agnostic Meta-Learning (MAML) algorithm.

Decision Making • Meta-Learning
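
For reference, the objective the title refers to is the standard MAML meta-objective (stated here from the original MAML formulation as an assumption about what the paper analyzes; the snippet above does not spell it out): minimize, over the initialization \(\theta\), the expected task loss after one inner gradient step,

\[ \min_{\theta} \; \mathbb{E}_{\mathcal{T} \sim p(\mathcal{T})} \Big[ \mathcal{L}_{\mathcal{T}}\big(\theta - \alpha \, \nabla_{\theta} \mathcal{L}_{\mathcal{T}}(\theta)\big) \Big], \]

where \(\alpha\) is the inner-loop step size. The "benign landscape" question is whether this generally nonconvex objective admits spurious local minima or whether every local minimum is global.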
