Search Results for author: Patrick Diehl

Found 11 papers, 10 papers with code

ML-based identification of the interface regions for coupling local and nonlocal models

no code implementations • 23 Apr 2024 • Noujoud Nader, Patrick Diehl, Marta D'Elia, Christian Glusa, Serge Prudhomme

Training is based on datasets of loading functions for which reference coupling configurations are computed using accurate coupled solutions, where accuracy is measured in terms of the relative error between the solution to the coupling approach and the solution to the nonlocal model.
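
As a concrete reading of the accuracy criterion mentioned above, the sketch below computes a discrete relative error between a coupled solution and the nonlocal reference solution in the l2 norm. The norm choice and all names are illustrative assumptions, not details taken from the paper.

    // Illustrative only: relative error ||u_c - u_nl|| / ||u_nl|| in a
    // discrete l2 norm; vector names are placeholders, not the paper's code.
    #include <cmath>
    #include <cstddef>
    #include <vector>

    double relative_error(std::vector<double> const& u_coupled,
                          std::vector<double> const& u_nonlocal)
    {
        double diff = 0.0, ref = 0.0;
        for (std::size_t i = 0; i < u_coupled.size(); ++i)
        {
            double d = u_coupled[i] - u_nonlocal[i];
            diff += d * d;
            ref += u_nonlocal[i] * u_nonlocal[i];
        }
        return std::sqrt(diff) / std::sqrt(ref);
    }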

Computational Efficiency

Deploying a Task-based Runtime System on Raspberry Pi Clusters

1 code implementation • 8 Oct 2020 • Nikunj Gupta, Steve R. Brandt, Bibek Wagle, Nanmiao Wu, Alireza Kheirkhahan, Patrick Diehl, Hartmut Kaiser, Felix W. Baumann

Here we describe our efforts to configure and benchmark the use of a Raspberry Pi cluster with the HPX/Phylanx platform (normally intended for use with HPC applications) and document the lessons we learned.
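
For orientation, a minimal HPX task-parallel kernel of the kind one might time on such a cluster is sketched below (the classic future-based Fibonacci example). It is not the benchmark used in the paper, and the header paths are assumptions that may differ between HPX versions.

    // Sketch of an HPX task-based kernel; assumes a recent HPX installation.
    #include <hpx/hpx_main.hpp>   // runs main() on the HPX runtime
    #include <hpx/future.hpp>

    #include <cstdint>
    #include <iostream>

    std::uint64_t fibonacci(std::uint64_t n)
    {
        if (n < 2)
            return n;
        // spawn one branch as an HPX task, compute the other inline
        hpx::future<std::uint64_t> lhs = hpx::async(fibonacci, n - 1);
        std::uint64_t rhs = fibonacci(n - 2);
        return lhs.get() + rhs;
    }

    int main()
    {
        std::cout << "fib(20) = " << fibonacci(20) << std::endl;
        return 0;
    }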

Distributed, Parallel, and Cluster Computing

Towards a Scalable and Distributed Infrastructure for Deep Learning Applications

1 code implementation • 6 Oct 2020 • Bita Hasheminezhad, Shahrzad Shirzad, Nanmiao Wu, Patrick Diehl, Hannes Schulz, Hartmut Kaiser

Although recent scaling up approaches to training deep neural networks have proven to be effective, the computational intensity of large and complex models, as well as the availability of large-scale datasets, require deep learning frameworks to utilize scaling out techniques.

On the treatment of boundary conditions for bond-based peridynamic models

1 code implementation • 22 Aug 2020 • Serge Prudhomme, Patrick Diehl

In this paper, we propose two approaches to apply boundary conditions for bond-based peridynamic models.

Computational Engineering, Finance, and Science

Supporting OpenMP 5.0 Tasks in hpxMP -- A study of an OpenMP implementation within Task Based Runtime Systems

1 code implementation • 19 Feb 2020 • Tianyi Zhang, Shahrzad Shirzad, Bibek Wagle, Adrian S. Lemoine, Patrick Diehl, Hartmut Kaiser

This paper is a follow-up to the fundamental implementation of hpxMP, an implementation of the OpenMP standard that utilizes the C++ standard library for parallelism and concurrency (HPX) to schedule and manage tasks.
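
To make the relationship concrete, the sketch below is an ordinary OpenMP task region of the kind hpxMP targets; when such code is linked against hpxMP rather than a vendor OpenMP runtime, the tasks created here are scheduled by HPX while the source stays unchanged. This is an illustrative usage assumption, not an excerpt from the paper.

    // Plain OpenMP tasking code; under hpxMP these tasks map onto HPX threads.
    #include <cstdio>

    int main()
    {
        int a = 0, b = 0;

        #pragma omp parallel
        #pragma omp single
        {
            #pragma omp task shared(a)
            a = 1;

            #pragma omp task shared(b)
            b = 2;

            #pragma omp taskwait
            std::printf("a + b = %d\n", a + b);
        }
        return 0;
    }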

Distributed, Parallel, and Cluster Computing • Programming Languages

From Piz Daint to the Stars: Simulation of Stellar Mergers using High-Level Abstractions

1 code implementation • 8 Aug 2019 • Gregor Daiß, Parsa Amini, John Biddiscombe, Patrick Diehl, Juhan Frank, Kevin Huck, Hartmut Kaiser, Dominic Marcello, David Pfander, Dirk Pflüger

We study the simulation of stellar mergers, which requires complex simulations with high computational demands.

Distributed, Parallel, and Cluster Computing • Computational Engineering, Finance, and Science

An Introduction to hpxMP: A Modern OpenMP Implementation Leveraging HPX, An Asynchronous Many-Task System

1 code implementation • 7 Mar 2019 • Tianyi Zhang, Shahrzad Shirzad, Patrick Diehl, R. Tohid, Weile Wei, Hartmut Kaiser

Not only must users port their own codes, but they often also rely on highly optimized libraries, such as BLAS and LAPACK, which use OpenMP for parallelization.

Distributed, Parallel, and Cluster Computing

Integration of CUDA Processing within the C++ library for parallelism and concurrency (HPX)

1 code implementation • 26 Oct 2018 • Patrick Diehl, Madhavan Seshadri, Thomas Heller, Hartmut Kaiser

A major concern for distributed applications is guaranteeing high utilization of all available resources, including local or remote accelerator cards on a cluster, while fully using all available CPU resources and integrating the GPU work into the overall programming model.
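
A conceptual sketch of that idea follows: GPU work and CPU work each run as HPX tasks behind futures, so both can be overlapped and joined uniformly. It deliberately uses only hpx::async plus plain CUDA runtime calls, not the HPX CUDA integration the paper presents, and the header paths are assumptions for a recent HPX version.

    // Overlap a GPU transfer and CPU work via HPX futures (illustrative only).
    #include <hpx/hpx_main.hpp>
    #include <hpx/future.hpp>

    #include <cuda_runtime.h>
    #include <iostream>
    #include <numeric>
    #include <vector>

    int main()
    {
        int const n = 1 << 20;
        std::vector<double> host(n, 1.0);

        double* device = nullptr;
        cudaMalloc(&device, n * sizeof(double));

        cudaStream_t stream;
        cudaStreamCreate(&stream);

        // GPU work wrapped in an HPX task; the future becomes ready once the
        // stream has drained
        hpx::future<void> gpu = hpx::async([&] {
            cudaMemcpyAsync(device, host.data(), n * sizeof(double),
                            cudaMemcpyHostToDevice, stream);
            cudaStreamSynchronize(stream);
        });

        // CPU work runs concurrently as another HPX task
        hpx::future<double> cpu = hpx::async([&] {
            return std::accumulate(host.begin(), host.end(), 0.0);
        });

        gpu.get();
        std::cout << "host sum: " << cpu.get() << std::endl;

        cudaStreamDestroy(stream);
        cudaFree(device);
        return 0;
    }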

Distributed, Parallel, and Cluster Computing • Programming Languages

Asynchronous Execution of Python Code on Task Based Runtime Systems

1 code implementation • 17 Oct 2018 • R. Tohid, Bibek Wagle, Shahrzad Shirzad, Patrick Diehl, Adrian Serio, Alireza Kheirkhahan, Parsa Amini, Katy Williams, Kate Isaacs, Kevin Huck, Steven Brandt, Hartmut Kaiser

Despite advancements in the areas of parallel and distributed computing, the complexity of programming on High Performance Computing (HPC) resources has deterred many domain experts, especially in the areas of machine learning and artificial intelligence (AI), from utilizing performance benefits of such systems.

Programming Languages

An asynchronous and task-based implementation of Peridynamics utilizing HPX -- the C++ standard library for parallelism and concurrency

1 code implementation • 18 Jun 2018 • Patrick Diehl, Prashant K. Jha, Hartmut Kaiser, Robert Lipton, Martin Levesque

The scalability of the asynchronous task-based implementation is shown to be in agreement with theoretical estimates.
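
A minimal sketch of what an asynchronous, task-based nodal loop can look like with HPX is shown below: the material points are split into chunks, and each chunk's force contribution is computed in its own HPX task. The force kernel, chunk size, and neighbor handling are placeholders, not the implementation from the paper.

    // Chunked force loop executed as HPX tasks (placeholder physics).
    #include <hpx/hpx_main.hpp>
    #include <hpx/future.hpp>

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // stand-in for the pairwise bond force of the real peridynamic model
    double bond_force(double u_i, double u_j) { return u_j - u_i; }

    int main()
    {
        std::size_t const n = 100000;    // number of material points
        std::size_t const chunk = 10000; // points per HPX task
        std::vector<double> u(n, 0.0), f(n, 0.0);

        std::vector<hpx::future<void>> tasks;
        for (std::size_t begin = 0; begin < n; begin += chunk)
        {
            std::size_t end = std::min(begin + chunk, n);
            tasks.push_back(hpx::async([&, begin, end] {
                for (std::size_t i = begin; i < end; ++i)
                {
                    // a real code would loop over the horizon neighborhood;
                    // use the next point as a stand-in neighbor
                    std::size_t j = (i + 1) % n;
                    f[i] += bond_force(u[i], u[j]);
                }
            }));
        }

        for (auto& t : tasks)
            t.get();   // join all chunk tasks

        return 0;
    }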

Distributed, Parallel, and Cluster Computing • Computational Physics

Long term availability of raw experimental data in experimental fracture mechanics

1 code implementation • 20 Mar 2018 • Patrick Diehl, Ilyass Tabiai, Felix W. Baumann, Martin Levesque

Experimental data availability is a cornerstone for reproducibility in experimental fracture mechanics, which is crucial to the scientific method.

Digital Libraries • Applications
