Journal article

Title:
Abstractions for C++ code optimizations in parallel high-performance applications
Authors:
Jiří Klepl, Adam Šmelko, Lukáš Rozsypal, Martin Kruliš
Publication:
Parallel Computing 121
DOI:
10.1016/j.parco.2024.103096
Year:
2024
Link:
https://www.sciencedirect.com/science/article/pii/S0167819124000346

Abstract:
In many computational problems, memory throughput is a performance bottleneck, especially in the domain of parallel computing. Software needs to be attuned to hardware features such as cache architectures or concurrent memory banks to reach a decent level of performance efficiency. This can be achieved by selecting the right memory layouts for data structures or by changing the order in which data structures are traversed. In this work, we present an abstraction for traversing a set of regular data structures (e.g., multidimensional arrays) that allows the design of traversal-agnostic algorithms. Such algorithms can easily be optimized for memory performance and employ semi-automated parallelization or autotuning without altering their internal code. We also add an abstraction for autotuning that allows tuning parameters to be defined in one place and removes boilerplate code. The proposed solution was implemented as an extension of the Noarr library, which simplifies the layout-agnostic design of regular data structures. It is implemented entirely using C++ template meta-programming without any nonstandard dependencies, so it is fully compatible with existing compilers, including CUDA NVCC and Intel DPC++. We evaluate the performance and expressiveness of our approach on the Polybench-C benchmarks.
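
Illustrative example:
The following minimal C++ sketch illustrates the general idea of a layout-agnostic kernel as described in the abstract; it is not the Noarr API, and the names RowMajor, ColMajor, and add_one are hypothetical. The memory layout is supplied as a template parameter, so it can be swapped (e.g., row-major vs. column-major) without touching the kernel body.

#include <cstddef>
#include <vector>
#include <iostream>

// Hypothetical layout policies: each maps a 2D index to a flat offset.
struct RowMajor {
    std::size_t rows, cols;
    std::size_t offset(std::size_t i, std::size_t j) const { return i * cols + j; }
};

struct ColMajor {
    std::size_t rows, cols;
    std::size_t offset(std::size_t i, std::size_t j) const { return j * rows + i; }
};

// Layout-agnostic kernel: the algorithm never mentions the concrete layout,
// so switching layouts (or adding a tiled one) requires no changes here.
template <typename Layout>
void add_one(std::vector<float> &data, const Layout &layout) {
    for (std::size_t i = 0; i < layout.rows; ++i)
        for (std::size_t j = 0; j < layout.cols; ++j)
            data[layout.offset(i, j)] += 1.0f;
}

int main() {
    RowMajor rm{4, 8};
    ColMajor cm{4, 8};
    std::vector<float> a(4 * 8, 0.0f), b(4 * 8, 0.0f);

    add_one(a, rm);  // same algorithm,
    add_one(b, cm);  // two different memory layouts

    std::cout << a[0] << ' ' << b[0] << '\n';  // prints "1 1"
}

In the paper's approach, the Noarr extension generalizes this idea with C++ template meta-programming so that layouts and traversal orders become composable, which is what enables the semi-automated parallelization and autotuning mentioned in the abstract.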

BibTeX:
@article{klepl_abstractions_2024,
    title = {{Abstractions for C++ code optimizations in parallel high-performance applications}},
    author = {Klepl, Jiří and Šmelko, Adam and Rozsypal, Lukáš and Kruliš, Martin},
    year = {2024},
    journal = {{Parallel Computing}},
    doi = {10.1016/j.parco.2024.103096},
    issn = {0167-8191},
    pages = {103096},
    url = {https://www.sciencedirect.com/science/article/pii/S0167819124000346},
    volume = {121},
}