1 code implementation • 21 May 2024 • Patrick Diehl, Noujoud Nader, Steve Brandt, Hartmut Kaiser
To this end, we asked ChatGPT to generate three distinct codes: a simple numerical integration, a conjugate gradient solver, and a parallel 1D stencil-based heat equation solver.
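For context, a minimal sketch of the simplest of the three tasks, a numerical integration via the trapezoidal rule; this is an illustrative example, not the code ChatGPT actually produced in the paper, and the integrand and interval are assumptions.

    #include <cmath>
    #include <cstdio>

    // Trapezoidal rule: approximate the integral of f over [a, b] using n subintervals.
    double trapezoid(double (*f)(double), double a, double b, int n) {
        double h = (b - a) / n;
        double sum = 0.5 * (f(a) + f(b));
        for (int i = 1; i < n; ++i)
            sum += f(a + i * h);
        return h * sum;
    }

    int main() {
        const double pi = std::acos(-1.0);
        // Integrate sin(x) over [0, pi]; the exact value is 2.
        double result = trapezoid([](double x) { return std::sin(x); }, 0.0, pi, 1000000);
        std::printf("integral = %.12f\n", result);
        return 0;
    }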
no code implementations • 23 Apr 2024 • Noujoud Nader, Patrick Diehl, Marta D'Elia, Christian Glusa, Serge Prudhomme
Training is based on datasets of loading functions for which reference coupling configurations are computed using accurate coupled solutions, where accuracy is measured in terms of the relative error between the solution to the coupling approach and the solution to the nonlocal model.
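One plausible formalization of that error measure (the specific norm is an assumption, not stated in the excerpt) is

    e = \frac{\| u_{\text{coupled}} - u_{\text{nonlocal}} \|}{\| u_{\text{nonlocal}} \|},

where u_coupled is the solution obtained with the coupling approach and u_nonlocal is the solution of the full nonlocal model.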
1 code implementation • 8 Oct 2020 • Nikunj Gupta, Steve R. Brandt, Bibek Wagle, Nanmiao Wu, Alireza Kheirkhahan, Patrick Diehl, Hartmut Kaiser, Felix W. Baumann
Here we describe our efforts to configure and benchmark the use of a Raspberry Pi cluster with the HPX/Phylanx platform (normally intended for use with HPC applications) and document the lessons we learned.
Distributed, Parallel, and Cluster Computing
1 code implementation • 6 Oct 2020 • Bita Hasheminezhad, Shahrzad Shirzad, Nanmiao Wu, Patrick Diehl, Hannes Schulz, Hartmut Kaiser
Although recent scaling-up approaches to training deep neural networks have proven to be effective, the computational intensity of large and complex models, as well as the availability of large-scale datasets, requires deep learning frameworks to utilize scaling-out techniques.
1 code implementation • 22 Aug 2020 • Serge Prudhomme, Patrick Diehl
In this paper, we propose two approaches to apply boundary conditions for bond-based peridynamic models.
Computational Engineering, Finance, and Science
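For reference, the bond-based peridynamic equation of motion that such boundary-condition treatments target reads (standard form; the notation here is generic, not taken from the paper):

    \rho(x)\,\ddot{u}(x,t) = \int_{H_\delta(x)} f\big(u(x',t) - u(x,t),\, x' - x\big)\, dV_{x'} + b(x,t),

where H_\delta(x) is the neighborhood of radius \delta (the horizon) around x, f is the pairwise bond force density, and b is an external force density. Because the operator is an integral over a finite horizon rather than a differential operator, classical local boundary conditions cannot be imposed directly, which motivates dedicated boundary-condition strategies.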
1 code implementation • 19 Feb 2020 • Tianyi Zhang, Shahrzad Shirzad, Bibek Wagle, Adrian S. Lemoine, Patrick Diehl, Hartmut Kaiser
This paper is a follow-up to our earlier work on the fundamental implementation of hpxMP, an implementation of the OpenMP standard which utilizes the C++ Standard Library for Parallelism and Concurrency (HPX) to schedule and manage tasks.
Distributed, Parallel, and Cluster Computing • Programming Languages
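As an illustration of the kind of code hpxMP targets (a generic OpenMP example, not taken from the paper), the explicit tasks below would be scheduled on HPX's lightweight threads when the program is run against hpxMP instead of a vendor OpenMP runtime:

    #include <cstdio>

    // Naive recursive Fibonacci using OpenMP explicit tasks.
    long fib(int n) {
        if (n < 2) return n;
        long x, y;
        #pragma omp task shared(x)
        x = fib(n - 1);
        #pragma omp task shared(y)
        y = fib(n - 2);
        #pragma omp taskwait
        return x + y;
    }

    int main() {
        long result;
        #pragma omp parallel
        #pragma omp single
        result = fib(20);   // tasks are spawned from a single thread inside the parallel region
        std::printf("fib(20) = %ld\n", result);
        return 0;
    }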
1 code implementation • 8 Aug 2019 • Gregor Daiß, Parsa Amini, John Biddiscombe, Patrick Diehl, Juhan Frank, Kevin Huck, Hartmut Kaiser, Dominic Marcello, David Pfander, Dirk Pflüger
We study the simulation of stellar mergers, which requires complex simulations with high computational demands.
Distributed, Parallel, and Cluster Computing • Computational Engineering, Finance, and Science
1 code implementation • 7 Mar 2019 • Tianyi Zhang, Shahrzad Shirzad, Patrick Diehl, R. Tohid, Weile Wei, Hartmut Kaiser
Not only must users port their own codes, but they often rely on highly optimized libraries such as BLAS and LAPACK, which use OpenMP for parallelization.
Distributed, Parallel, and Cluster Computing
1 code implementation • 26 Oct 2018 • Patrick Diehl, Madhavan Seshadri, Thomas Heller, Hartmut Kaiser
A key aspect for distributed applications is to guarantee high utilization of all available resources, including local or remote accelerator cards on a cluster, while fully using all available CPU resources and integrating the GPU work into the overall programming model.
Distributed, Parallel, and Cluster Computing • Programming Languages
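A minimal sketch of the pattern described above, using standard C++ futures to overlap a placeholder GPU offload with CPU work; HPX offers the analogous hpx::async/hpx::future, and launch_gpu_kernel stands in for a real accelerator call (hypothetical, not the paper's API):

    #include <future>
    #include <numeric>
    #include <vector>
    #include <cstdio>

    // Placeholder for work offloaded to an accelerator (e.g., a kernel launch
    // followed by synchronization); hypothetical, for illustration only.
    double launch_gpu_kernel(const std::vector<double>& data) {
        return std::accumulate(data.begin(), data.end(), 0.0);
    }

    int main() {
        std::vector<double> data(1 << 20, 1.0);

        // Launch the "GPU" work asynchronously and keep the CPU busy in the meantime.
        std::future<double> gpu_result =
            std::async(std::launch::async, launch_gpu_kernel, std::cref(data));

        double cpu_result = 0.0;
        for (std::size_t i = 0; i < data.size(); ++i)   // independent CPU-side work
            cpu_result += data[i] * 0.5;

        // Integrate the accelerator result back into the host-side computation.
        std::printf("combined = %f\n", gpu_result.get() + cpu_result);
        return 0;
    }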
1 code implementation • 17 Oct 2018 • R. Tohid, Bibek Wagle, Shahrzad Shirzad, Patrick Diehl, Adrian Serio, Alireza Kheirkhahan, Parsa Amini, Katy Williams, Kate Isaacs, Kevin Huck, Steven Brandt, Hartmut Kaiser
Despite advancements in the areas of parallel and distributed computing, the complexity of programming on High Performance Computing (HPC) resources has deterred many domain experts, especially in the areas of machine learning and artificial intelligence (AI), from exploiting the performance benefits of such systems.
Programming Languages
1 code implementation • 18 Jun 2018 • Patrick Diehl, Prashant K. Jha, Hartmut Kaiser, Robert Lipton, Martin Levesque
The scalability of the asynchronous task-based implementation is shown to be in agreement with theoretical estimates.
Distributed, Parallel, and Cluster Computing • Computational Physics
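A schematic of the kind of per-node loop that such a task-based peridynamics code parallelizes, here with C++17 parallel algorithms as a stand-in for HPX's equivalent ones; the 1D discretization and linearized force law are simplifying assumptions:

    #include <algorithm>
    #include <execution>
    #include <numeric>
    #include <vector>
    #include <cstdio>

    // One force-update sweep of a 1D bond-based model: each node sums pairwise
    // (bond) contributions from neighbors within its horizon. Simplified sketch.
    void compute_forces(const std::vector<double>& u, std::vector<double>& force,
                        int horizon_nodes, double stiffness) {
        std::vector<int> nodes(u.size());
        std::iota(nodes.begin(), nodes.end(), 0);

        std::for_each(std::execution::par, nodes.begin(), nodes.end(), [&](int i) {
            double f = 0.0;
            int lo = std::max(0, i - horizon_nodes);
            int hi = std::min(static_cast<int>(u.size()) - 1, i + horizon_nodes);
            for (int j = lo; j <= hi; ++j)
                if (j != i)
                    f += stiffness * (u[j] - u[i]);   // linearized bond force
            force[i] = f;
        });
    }

    int main() {
        std::vector<double> u(1000, 0.0), force(1000, 0.0);
        u[500] = 1.0;                       // small displacement perturbation
        compute_forces(u, force, 3, 1.0);
        std::printf("force at node 499: %f\n", force[499]);
        return 0;
    }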
1 code implementation • 20 Mar 2018 • Patrick Diehl, Ilyass Tabiai, Felix W. Baumann, Martin Levesque
Experimental data availability is a cornerstone for reproducibility in experimental fracture mechanics, which is crucial to the scientific method.
Digital Libraries • Applications