Proceedings paper

Title:
Tutoring LLM into a Better CUDA Optimizer
Authors:
Matyáš Brabec, Jiří Klepl, Michal Töpfer, Martin Kruliš
Publication:
Euro-Par 2025: Parallel Processing
DOI:
10.1007/978-3-031-99857-7_18
Year:
2025
ISBN:
978-3-031-99857-7
Link:
https://link.springer.com/chapter/10.1007/978-3-031-99857-7_18

Abstract:
Recent leaps in large language models (LLMs) have caused a revolution in programming tools (such as GitHub Copilot) that can help with code generation, debugging, and even performance optimization. In this paper, we focus on the capabilities of the most recent reasoning models to generate optimized CUDA code for predefined, well-known tasks. Our objective is to determine which types of code optimizations and parallel patterns the LLMs can perform by themselves and whether they can be improved by tutoring (providing more detailed hints and guidelines in the prompt). The generated solutions were evaluated both automatically (for correctness and speedup) and manually (code reviews) to provide a more detailed perspective. We also tried an interactive approach in which the LLM can fix its previous mistakes within a session. The results indicate that LLMs are quite skilled coders; however, they require tutoring to match the optimized solutions provided by parallel computing experts.
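
Illustrative example (not from the paper):
The kind of optimization the abstract refers to can be sketched with a simple sum-reduction task. The task choice, kernel names, and launch parameters below are assumptions made for illustration, not code from the paper; the sketch only contrasts a naive kernel with one that follows the usual expert hints (grid-stride loop, shared-memory tree reduction, one atomic per block).

// illustrative_reduction.cu -- hypothetical sketch, not taken from the paper
#include <cstdio>
#include <cuda_runtime.h>

// Naive baseline: one thread per element, every addition is a global atomic,
// which serializes heavily under contention.
__global__ void reduce_naive(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) atomicAdd(out, in[i]);
}

// "Tutored" version: grid-stride loop plus a shared-memory tree reduction,
// so only one atomic per block reaches global memory.
__global__ void reduce_tutored(const float *in, float *out, int n) {
    extern __shared__ float sdata[];
    float sum = 0.0f;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        sum += in[i];
    sdata[threadIdx.x] = sum;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {  // blockDim.x must be a power of two
        if (threadIdx.x < s) sdata[threadIdx.x] += sdata[threadIdx.x + s];
        __syncthreads();
    }
    if (threadIdx.x == 0) atomicAdd(out, sdata[0]);
}

int main() {
    const int n = 1 << 20;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    *out = 0.0f;
    reduce_naive<<<(n + 255) / 256, 256>>>(in, out, n);
    cudaDeviceSynchronize();
    printf("naive:   %.0f\n", *out);

    *out = 0.0f;
    reduce_tutored<<<256, 256, 256 * sizeof(float)>>>(in, out, n);
    cudaDeviceSynchronize();
    printf("tutored: %.0f\n", *out);

    cudaFree(in);
    cudaFree(out);
    return 0;
}

Both kernels compute the same sum; the tutored variant simply reflects the kind of guideline (coalesced grid-stride access, block-local reduction in shared memory, a single atomic per block) that a tutoring prompt would spell out for the model.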

BibTeX:
@inproceedings{brabec_tutoring_2025,
    title = {{Tutoring LLM into a Better CUDA Optimizer}},
    author = {Brabec, Matyáš and Klepl, Jiří and Töpfer, Michal and Kruliš, Martin},
    year = {2025},
    booktitle = {{Euro-Par 2025: Parallel Processing}},
    editor = {Nagel, Wolfgang E. and Goehringer, Diana and Diniz, Pedro C.},
    publisher = {Springer Nature Switzerland},
    address = {Dresden, Germany},
    doi = {10.1007/978-3-031-99857-7_18},
    isbn = {978-3-031-99857-7},
    pages = {250--263},
    url = {https://link.springer.com/chapter/10.1007/978-3-031-99857-7_18},
}