• LRM moving from FOM to TCS [Lean Prover] (Was: Is Causal AI simply ReLU (Geoffrey E. Hinton 2010))

    From Mild Shock@janburse@fastmail.fm to comp.theory on Fri Feb 27 10:21:16 2026
    From Newsgroup: comp.theory


    Hi,

    Large Reasoning Models (LRMs) seem to be moving
    from the Foundations of Mathematics (FOM) to
    Theoretical Computer Science (TCS). FOM typically
    gives you "white science" mathematics, with sets
    and infinity, and, if you are lucky, a little
    recursion theory. Fun fact: TCS is even more "white".

    An interesting paper in this regard:

    Lean Meets Theoretical Computer Science:
    Scalable Synthesis of Theorem Proving Challenges
    in Formal-Informal Pairs
    Terry Jingchen Zhang et al. - 2025
    https://arxiv.org/abs/2508.15878v1

    One swallow does not make a summer?

    But it's probably a necessary step. The above
    paper uses Busy Beaver and Integer Constraints
    as examples. Which logical frameworks even
    apply? Is it enough to have a "total function"
    theory layer, or does TCS need more? TCS can
    be heavy on all sorts of discrete and
    non-discrete mathematics.
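
    A minimal Lean 4 sketch of what the "total function"
    layer question amounts to (the names `fib` and `search`
    are mine, purely illustrative): every plain `def` must
    pass the termination checker and so denotes a total
    function, which is exactly why a Busy Beaver-style
    unbounded search does not fit directly.

    ```lean
    -- Accepted: structural recursion, provably total.
    def fib : Nat → Nat
      | 0     => 0
      | 1     => 1
      | n + 2 => fib n + fib (n + 1)

    -- A Busy Beaver-style unbounded search has no termination
    -- proof, so it must be marked `partial`. That removes it
    -- from the logic: you can run it, but not reason about it
    -- in proofs.
    partial def search (p : Nat → Bool) (n : Nat) : Nat :=
      if p n then n else search p (n + 1)
    ```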

    Bye

    Mild Shock wrote:
    Hi,

    Geoffrey E. Hinton, the Nobel Prize winner
    for AI, was already beating the drums
    for ReLU in 2010:

    Rectified Linear Units Improve Restricted Boltzmann Machines
    Vinod Nair & Geoffrey E. Hinton - 2010 https://www.cs.toronto.edu/~fritz/absps/reluICML.pdf

    Because ANNs (Artificial Neural Networks) were originally
    designed with other activation functions, e.g. the logistic function:

    An artificial neuron is a mathematical function conceived
    as a model of a biological neuron in a neural network. https://en.wikipedia.org/wiki/Artificial_neuron
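
    For contrast, a minimal sketch of the two activations
    being compared (function names are mine, not from the
    paper):

    ```python
    import math

    def relu(x):
        # Rectified linear unit: identity for x > 0, zero otherwise.
        # Cheap to compute and non-saturating on the positive side.
        return max(0.0, x)

    def logistic(x):
        # Classic sigmoid: squashes into (0, 1), but saturates for
        # large |x|, where its gradient vanishes.
        return 1.0 / (1.0 + math.exp(-x))

    print(relu(-2.0), relu(3.0))  # 0.0 3.0
    print(logistic(0.0))          # 0.5
    ```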

    If you populate additive factor graphs with log P,
    you basically get multiplicative factor graphs.
    So an ANN can express belief networks, right?
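
    A one-line numeric check of the identity behind that
    remark: sums in the log domain correspond to products of
    probabilities, log(p*q) = log p + log q.

    ```python
    import math

    # Adding log-probabilities equals multiplying probabilities.
    p, q = 0.2, 0.5
    log_sum = math.log(p) + math.log(q)
    product = math.exp(log_sum)
    print(product)  # ~0.1, i.e. p * q up to floating point
    ```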

    Bye

    P.S.: What is all the hype about Causal AI, and
    the Ladder of Causation à la Judea Pearl?

    Causal AI - the next gen AI
    Prof. Sotirios A. Tsaftaris - 2025 https://www.youtube.com/watch?v=IelslFzdsYw

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From wij@wyniijj5@gmail.com to comp.theory on Tue Mar 3 08:39:18 2026
    From Newsgroup: comp.theory

    On Fri, 2026-02-27 at 10:21 +0100, Mild Shock wrote:

    P!=NP https://sourceforge.net/projects/cscall/files/MisFiles/PNP-proof-en.txt/download
    Figure out what P!=NP means; otherwise whatever understanding (you said) is superficial.
    --- Synchronet 3.21d-Linux NewsLink 1.2