1 code implementation • 14 Feb 2024 • Nadav Schneider, Niranjan Hasabnis, Vy A. Vo, Tal Kadosh, Neva Krien, Mihai Capotă, Abdul Wasay, Guy Tamir, Ted Willke, Nesreen Ahmed, Yuval Pinter, Timothy Mattson, Gal Oren
This study first investigates the performance of state-of-the-art language models in generating MPI-based parallel programs.
no code implementations • 3 Feb 2024 • Le Chen, Nesreen K. Ahmed, Akash Dutta, Arijit Bhattacharjee, Sixing Yu, Quazi Ishtiaque Mahmud, Waqwoya Abebe, Hung Phan, Aishwarya Sarkar, Branden Butler, Niranjan Hasabnis, Gal Oren, Vy A. Vo, Juan Pablo Munoz, Theodore L. Willke, Tim Mattson, Ali Jannesari
Recently, language models (LMs), especially large language models (LLMs), have revolutionized the field of deep learning.
no code implementations • 28 Jan 2024 • Le Chen, Arijit Bhattacharjee, Nesreen Ahmed, Niranjan Hasabnis, Gal Oren, Vy Vo, Ali Jannesari
Large language models (LLMs), as epitomized by models like ChatGPT, have revolutionized the field of natural language processing (NLP).
2 code implementations • 20 Dec 2023 • Tal Kadosh, Niranjan Hasabnis, Vy A. Vo, Nadav Schneider, Neva Krien, Mihai Capotă, Abdul Wasay, Nesreen Ahmed, Ted Willke, Guy Tamir, Yuval Pinter, Timothy Mattson, Gal Oren
Specifically, we start off with HPC as a domain and build an HPC-specific LM, named MonoCoder, that is orders of magnitude smaller than existing LMs but delivers similar, if not better, performance on non-HPC and HPC tasks.
no code implementations • 11 Nov 2023 • Le Chen, Arijit Bhattacharjee, Nesreen K. Ahmed, Niranjan Hasabnis, Gal Oren, Bin Lei, Ali Jannesari
The evaluation of CompCodeVet on two open-source code datasets shows that CompCodeVet can improve the quality of training datasets for LLMs.
2 code implementations • 18 Aug 2023 • Tal Kadosh, Niranjan Hasabnis, Vy A. Vo, Nadav Schneider, Neva Krien, Abdul Wasay, Nesreen Ahmed, Ted Willke, Guy Tamir, Yuval Pinter, Timothy Mattson, Gal Oren
With easier access to powerful compute resources, there is a growing trend in AI for software development toward ever-larger language models (LLMs) to address a variety of programming tasks.
2 code implementations • 16 May 2023 • Nadav Schneider, Tal Kadosh, Niranjan Hasabnis, Timothy Mattson, Yuval Pinter, Gal Oren
Message Passing Interface (MPI) plays a crucial role in distributed memory parallelization across multiple nodes.
2 code implementations • 16 May 2023 • Tal Kadosh, Nadav Schneider, Niranjan Hasabnis, Timothy Mattson, Yuval Pinter, Gal Oren
Specifically, we propose a novel approach, called OMPify, to detect and predict the OpenMP pragmas and shared-memory attributes in parallel code, given its serial version.
no code implementations • 28 Nov 2022 • Mohammad Hossain, Derssie Mebratu, Niranjan Hasabnis, Jun Jin, Gaurav Chaudhary, Noah Shen
To address this problem of realizing the full potential of the underlying platform, we develop a machine learning-based technique to characterize, profile, and predict workloads running in the cloud environment.
no code implementations • 24 Sep 2022 • Niranjan Hasabnis
We believe that our findings also yield interesting insights into code quality measures that affect the performance of MP systems.
1 code implementation • 4 May 2022 • Niranjan Hasabnis
Open-source repositories provide a wealth of information and are increasingly being used to build artificial intelligence (AI) based systems to solve problems in software engineering.
no code implementations • 13 Sep 2021 • Derssie Mebratu, Niranjan Hasabnis, Pietro Mercati, Gaurit Sharma, Shamima Najnin
In this paper, we treat the problem of tuning parameters of DL frameworks to improve training and inference performance as a black-box optimization problem.
1 code implementation • 6 Nov 2020 • Niranjan Hasabnis, Justin Gottschlich
Software debugging has been shown to consume upwards of half of developers' time.
no code implementations • 28 Sep 2020 • Fangke Ye, Shengtian Zhou, Anand Venkat, Ryan Marcus, Nesime Tatbul, Jesmin Jahan Tithi, Niranjan Hasabnis, Paul Petersen, Timothy G Mattson, Tim Kraska, Pradeep Dubey, Vivek Sarkar, Justin Gottschlich
First, MISIM uses a novel context-aware semantic structure (CASS), which is designed to aid in lifting semantic meaning from code syntax.
no code implementations • 5 Jun 2020 • Fangke Ye, Shengtian Zhou, Anand Venkat, Ryan Marcus, Nesime Tatbul, Jesmin Jahan Tithi, Niranjan Hasabnis, Paul Petersen, Timothy Mattson, Tim Kraska, Pradeep Dubey, Vivek Sarkar, Justin Gottschlich
Code semantics similarity can be used for many tasks such as code recommendation, automated software defect correction, and clone detection.
no code implementations • 4 Dec 2018 • Niranjan Hasabnis
In this paper, we develop an automatic approach, called TensorTuner, to search for optimal parameter settings of TensorFlow's threading model for CPU backends.
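The threading parameters such a tuner searches over are exposed in TensorFlow's `ConfigProto`. A config fragment in protobuf text format — the values shown are illustrative, not recommendations from the paper:

```
# Illustrative ConfigProto threading settings (values are not tuned defaults)
intra_op_parallelism_threads: 16   # threads used within a single op (e.g. a matmul)
inter_op_parallelism_threads: 2    # ops the runtime may execute concurrently
```

The best settings depend on core count, op mix, and batch size, which is why an automatic search over these knobs can outperform hand-picked defaults on CPU backends.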