• Semantic Indexing: Scaling Proofs as Programs

    From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Jul 16 11:54:47 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Maybe they should double-check what
    modern compilers do, or what modern IDEs
    do in object-oriented programming languages.

    How it started:

    How to Make Ad Hoc Proof Automation Less Ad Hoc https://people.mpi-sws.org/~beta/lessadhoc/

    How it's going:

    Optimizing Canonical Structures https://inria.hal.science/hal-05148851v1/document

    The original paper is on canonical structures;
    it draws a nice vis-à-vis: "Type Class"
    Programming versus "Logic" Programming,
    giving it a less functional programming
    spin. But hell, wasn't there Prolog++
    already in the past?
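
    That "Logic Programming" reading is easy to
    make concrete: instance (or canonical structure)
    search is essentially SLD resolution over Horn
    clauses. A minimal sketch in plain Prolog, with
    made-up eq/1 rules standing in for the usual
    Eq Int, Eq (A,B) and Eq [A] instances:

        eq(int).
        eq(pair(A, B)) :- eq(A), eq(B).
        eq(list(A))    :- eq(A).

        % ?- eq(list(pair(int, int))).
        % true, i.e. the instance for Eq [(Int, Int)]
        % is found by ordinary backward chaining.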

    The newest paper shows a new style of research:
    citing garbage tickets from TurdPit inside
    a paper, and just listing some further
    untested and shallow research turds. Kind
    of institutionalized Trial & Error. They could
    equally well shoot their own foot and
    then jump in circles.

    Bye
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Jul 16 12:08:22 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Maybe somebody should tell Macron that there is
    a Stargate project now. Instead of using TurdPit for
    human Trial & Error by chipmunks, the Lean prover from
    Microsoft actually has some interesting projects around it:

    LeanDojo (Stanford / Meta AI collaborators)
    Turns Lean into a reinforcement learning environment for LLMs.
    Train AI to interact with the Lean prover like a human. https://github.com/lean-dojo/LeanDojo

    ProverBot9001
    A transformer-based model that learns to generate
    Lean proofs. Focused on learning from Lean's mathlib
    and applying fine-tuned techniques.
    Uses encoder-decoder LLMs and proof search.

    Autoformalization of Math
    Stanford, OpenAI, and DeepMind folks have tried
    autoformalizing math text (LaTeX or natural language)
    into Lean. Ongoing effort to turn the "arXiv math paper" →
    "Lean formal proof" step into a pipeline.

    GPT Agents for Lean
    Some work on tool-augmented LLMs that run Lean, read
    the output, and continue the proof.
    Think: GPT as an agent in a loop with Lean that tries
    a tactic, checks result, adapts.
    Experimental, but shows promise.
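
    Stripped of the LLM, that loop is plain proof
    search against an external oracle. A toy sketch
    in Prolog, where propose/2 stands in for the
    model suggesting a tactic and step/3 for the
    prover checking it; the states, tactics and
    predicate names are all invented here, this is
    not LeanDojo's or anybody's real API:

        % Toy proof states: a list of open goals.
        solved(state([])).

        % The "policy": propose a tactic for the first goal.
        propose(state([implies(_, _)|_]), intro).
        propose(state([and(_, _)|_]),     split).
        propose(state([atom(_)|_]),       assume).

        % The "prover": a tactic rewrites the goal list
        % (deliberately fake logic, just to drive the loop).
        step(state([implies(_, B)|Gs]), intro,  state([B|Gs])).
        step(state([and(A, B)|Gs]),     split,  state([A, B|Gs])).
        step(state([atom(_)|Gs]),       assume, state(Gs)).

        % The loop: propose, check, recurse; backtracking
        % plays the "adapts" part.
        prove(S, [])     :- solved(S).
        prove(S, [T|Ts]) :- propose(S, T), step(S, T, S1), prove(S1, Ts).

        % ?- prove(state([implies(atom(p), and(atom(p), atom(q)))]), Script).
        % Script = [intro, split, assume, assume].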

    Mathlib + tactic-state datasets
    Hugely valuable structured data from mathlib for
    training and evaluating AI models. Models can learn
    by seeing before-and-after proof states. Some teams
    are working on LLMs that can select the next tactic
    by learning from these states.
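
    Concretely such a record is just a triple of
    state before, tactic, state after, and the model
    has to learn the mapping from the first column to
    the second. A hypothetical sketch as Prolog facts;
    the goals and tactic names are invented, not
    actual mathlib data:

        % transition(StateBefore, Tactic, StateAfter).
        transition(goals([and(p, q)]), split,  goals([p, q])).
        transition(goals([p, q]),      assume, goals([q])).
        transition(goals([q]),         assume, goals([])).

        % "Selecting the next tactic" then means learning
        % (or here, just looking up) the first two columns.
        next_tactic(State, Tactic) :- transition(State, Tactic, _).

        % ?- next_tactic(goals([and(p, q)]), T).
        % T = split.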

    Have Fun!

    Bye

    Mild Shock wrote:
    Hi,

    Maybe they should double-check what
    modern compilers do, or what modern IDEs
    do in object-oriented programming languages.

    How it started:

    How to Make Ad Hoc Proof Automation Less Ad Hoc https://people.mpi-sws.org/~beta/lessadhoc/

    How it's going:

    Optimizing Canonical Structures https://inria.hal.science/hal-05148851v1/document

    The original paper is on canonical structures;
    it draws a nice vis-à-vis: "Type Class"
    Programming versus "Logic" Programming,
    giving it a less functional programming
    spin. But hell, wasn't there Prolog++
    already in the past?

    The newest paper shows a new style of research:
    citing garbage tickets from TurdPit inside
    a paper, and just listing some further
    untested and shallow research turds. Kind
    of institutionalized Trial & Error. They could
    equally well shoot their own foot and
    then jump in circles.

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2