We introduce a prototype model and provide an open-source and extensible toolkit called OpenUE for various extraction tasks.
Our transferable optimization method makes transistor sizing and design porting more effective and efficient.
Hongzi Mao, Parimarjan Negi, Akshay Narayan, Hanrui Wang, Jiacheng Yang, Haonan Wang, Ryan Marcus, Ravichandra Addanki, Mehrdad Khani Shirkoohi, Songtao He, Vikram Nathan, Frank Cangialosi, Shaileshh Venkatakrishnan, Wei-Hung Weng, Song Han, Tim Kraska, Mohammad Alizadeh
We present Park, a platform for researchers to experiment with Reinforcement Learning (RL) for computer systems.
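Platforms like Park typically expose computer-systems problems behind a standard reset()/step() interaction loop. The sketch below is a hypothetical, self-contained illustration of that loop, not Park's actual API: `ToyQueueEnv` and its greedy least-loaded policy are stand-ins invented here for demonstration.

```python
import random

class ToyQueueEnv:
    """Hypothetical environment (not from Park): route each job to one of two servers."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.loads = [0.0, 0.0]

    def reset(self):
        self.loads = [0.0, 0.0]
        return tuple(self.loads)

    def step(self, action):
        # assign a job of random size to the chosen server
        self.loads[action] += self.rng.uniform(0.5, 1.5)
        reward = -max(self.loads)      # penalize the longest queue
        done = max(self.loads) > 10.0  # episode ends when a queue overflows
        return tuple(self.loads), reward, done

env = ToyQueueEnv()
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = 0 if obs[0] <= obs[1] else 1  # greedy: send job to least-loaded server
    obs, reward, done = env.step(action)
    total_reward += reward
```

An RL agent would replace the greedy rule with a learned policy; the environment interface stays the same.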
GPT-2 and BERT demonstrate the effectiveness of using pre-trained language models (LMs) on various natural language processing tasks.
We introduce a new function-preserving transformation for efficient neural architecture search.
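A function-preserving transformation grows a network while keeping its input-output mapping unchanged, so architecture search can reuse trained weights instead of retraining from scratch. The minimal pure-Python sketch below illustrates the general Net2WiderNet-style idea (duplicate a hidden unit and split its outgoing weight); it is a toy linear example written for this note, not the paper's transformation.

```python
def forward(x, w1, w2):
    # two-layer linear net: hidden = W1 x, output = w2 . hidden
    hidden = [sum(w * xi for w, xi in zip(row, x)) for row in w1]
    return sum(w * h for w, h in zip(w2, hidden))

def widen(w1, w2, unit):
    # duplicate hidden `unit`; halve its outgoing weight across the two copies
    new_w1 = w1 + [list(w1[unit])]
    new_w2 = list(w2) + [w2[unit] / 2.0]
    new_w2[unit] = w2[unit] / 2.0
    return new_w1, new_w2

w1 = [[1.0, 2.0], [3.0, -1.0]]  # 2 hidden units, 2 inputs
w2 = [0.5, -2.0]                # output weights
x = [1.0, -1.0]

y_before = forward(x, w1, w2)
w1_wide, w2_wide = widen(w1, w2, unit=0)
y_after = forward(x, w1_wide, w2_wide)
assert abs(y_before - y_after) < 1e-12  # wider network, identical function
```

Because the two copies of the duplicated unit jointly contribute exactly the original unit's output, the widened network computes the same function and can then be fine-tuned from that starting point.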
Unlike previous research platforms for single- or multi-agent reinforcement learning, MAgent focuses on supporting tasks and applications that require hundreds to millions of agents.