• CompilerGPT: Leveraging Large Language Models for Analyzing and Acting on Compiler Optimization Reports

    From John R Levine@johnl@taugh.com to comp.compilers on Mon Jun 9 09:40:06 2025
    From Newsgroup: comp.compilers

    The authors told LLMs to read C++ compiler optimization reports and make
    the code better.

    https://arxiv.org/abs/2506.06227

    Abstract: Current compiler optimization reports often present complex,
    technical information that is difficult for programmers to interpret
    and act upon effectively. This paper assesses the capability of large
    language models (LLMs) to understand compiler optimization reports and
    automatically rewrite the code accordingly. To this end, the paper
    introduces CompilerGPT, a novel framework that automates the
    interaction between compilers, LLMs, and a user-defined test and
    evaluation harness. CompilerGPT's workflow runs several iterations and
    reports on the results obtained. Experiments with two leading LLMs
    (GPT-4o and Claude Sonnet), optimization reports from two compilers
    (Clang and GCC), and five benchmark codes demonstrate the potential of
    this approach. Speedups of up to 6.5x were obtained, though not
    consistently in every test. This method holds promise for improving
    compiler usability and streamlining the software optimization process.
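
    The loop the abstract describes (compile, feed the optimization report
    to an LLM, test the rewrite, keep it only if it is correct and faster)
    can be sketched roughly as below. This is a minimal illustration, not
    the paper's actual implementation; the function names and the
    (pass, runtime) harness interface are assumptions.

    ```python
    # Hypothetical sketch of a CompilerGPT-style iteration loop.
    # get_report, ask_llm, and harness are caller-supplied stand-ins for
    # the compiler, the LLM, and the user-defined test/evaluation harness.

    def optimize(source, get_report, ask_llm, harness, iterations=3):
        """Iteratively rewrite `source` guided by compiler reports.

        get_report(src) -> str          : optimization report for src
        ask_llm(src, report) -> str     : LLM-proposed rewrite of src
        harness(src) -> (bool, float)   : (tests pass, measured runtime)
        """
        ok, best_time = harness(source)
        if not ok:
            raise ValueError("baseline code must pass its own tests")
        best = source
        for _ in range(iterations):
            candidate = ask_llm(best, get_report(best))
            ok, t = harness(candidate)
            # Keep a rewrite only if it is both correct and faster.
            if ok and t < best_time:
                best, best_time = candidate, t
        return best, best_time
    ```

    The key property is that the harness, not the LLM, is the arbiter:
    an incorrect or slower rewrite is simply discarded and the loop
    continues from the best version seen so far.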

    Regards,
    John Levine, johnl@taugh.com, Taughannock Networks, Trumansburg NY
    Please consider the environment before reading this e-mail. https://jl.ly