• Paper: Magellan: Autonomous Discovery of Novel Compiler Optimization Heuristics with AlphaEvolve

    From John R Levine <johnl@taugh.com> to comp.compilers on Fri Jan 30 10:53:07 2026
    From Newsgroup: comp.compilers

    This Google paper describes an AI approach to invent new compiler optimizations.

    Abstract

    Modern compilers rely on hand-crafted heuristics to guide optimization
    passes. These human-designed rules often struggle to adapt to the
    complexity of modern software and hardware and lead to high maintenance
    burden. To address this challenge, we present Magellan, an agentic
    framework that evolves the compiler pass itself by synthesizing executable
    C++ decision logic. Magellan couples an LLM coding agent with
    evolutionary search and autotuning in a closed loop of generation,
    evaluation on user-provided macro-benchmarks, and refinement, producing
    compact heuristics that integrate directly into existing compilers.
    Across several production optimization tasks, Magellan discovers
    policies that match or surpass expert baselines. In LLVM function
    inlining, Magellan synthesizes new heuristics that outperform decades
    of manual engineering for both binary-size reduction and end-to-end
    performance. In register allocation, it learns a concise priority rule
    for live-range processing that matches intricate human-designed
    policies on a large-scale workload. We also report preliminary results
    on XLA problems, demonstrating portability beyond LLVM with reduced
    engineering effort.

    https://arxiv.org/abs/2601.21096

    Regards,
    John Levine, johnl@taugh.com, Taughannock Networks, Trumansburg NY
    Please consider the environment before reading this e-mail. https://jl.ly
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Derek <derek@shape-of-code.com> to comp.compilers on Sun Feb 1 17:37:42 2026

    John,

    A paper with "novel" in the title is a major red flag.

    > This Google paper describes an AI approach to invent new compiler optimizations.

    No, they don't. They use an LLM to select the tuning parameters
    for a well-established optimization, function inlining.

    > surpass expert baselines. In LLVM function inlining, Magellan synthesizes
    > new heuristics that outperform decades of manual engineering for both
    > binary-size reduction and end-to-end performance.

    "... the continued Gemini-3-Pro run achieves consistent
    positive speedups beyond 0%, ultimately surpassing the hand-
    tuned baseline by 0.61%."

    Figures 3 and 4 suggest a much bigger improvement, until the reader
    realises that the comparison is not against human-generated
    rules. Results are given to two decimal places, with no error bars!

    > In register allocation, it learns a concise priority rule for
    > live-range processing that matches intricate human-designed policies
    > on a large-scale workload.

    This sentence in the abstract goes undiscussed in the paper, which
    only looks at inlining.