• Original Subject established by Chinese Scientists [Extenics] (Re: LRM moving from FOM to TCS [Lean Prover])

    From Mild Shock@janburse@fastmail.fm to sci.physics.relativity on Sat Feb 28 16:35:11 2026
    From Newsgroup: sci.physics.relativity

    Hi,

Now "white science" might lean more
towards Causal AI. But sometimes I have the
feeling Generative AI is the "yellow science"

now. DeepSeek casually invented the
notation M(a,b) for the matrix representation
of a dual number a + b e:

    M(a,b) = a I + b E

    And then discussing variants like:

E = [[ 0  1 ]
     [ 0  0 ]]

Or this variant:

E = [[ 1 -1 ]
     [ 1 -1 ]]
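The matrix trick above can be checked in a few lines of plain Python. This is a minimal sketch (nested lists as 2x2 matrices, no libraries assumed): both E variants square to zero, so eps^2 = 0 holds, and matrix multiplication reproduces the dual-number product rule (a + b eps)(c + d eps) = ac + (ad + bc) eps.

```python
def mat_mul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def scale(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

I = [[1, 0], [0, 1]]
E = [[0, 1], [0, 0]]            # the standard nilpotent variant
E2 = [[1, -1], [1, -1]]         # the alternative variant from the post
ZERO = [[0, 0], [0, 0]]

def M(a, b):
    """M(a,b) = a*I + b*E, the matrix form of the dual number a + b*eps."""
    return mat_add(scale(a, I), scale(b, E))

# Both E variants are nilpotent: E*E = 0, hence eps^2 = 0.
assert mat_mul(E, E) == ZERO
assert mat_mul(E2, E2) == ZERO

# Product rule: (a + b eps)(c + d eps) = ac + (ad + bc) eps.
assert mat_mul(M(2, 3), M(5, 7)) == M(2 * 5, 2 * 7 + 3 * 5)
```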

I am in mild shock! Then I read the small print:

    Prof. Cai Wen, Institute of Extenics and Innovative Methods, Guangzhou, https://fs.unm.edu/DualNumbers.pdf

    What is Extenics? Any ideas?

    Have Fun!

    Bye

    Mild Shock schrieb:
    Hi,

Large Reasoning Models (LRM) seem to be moving from
Foundations of Mathematics (FOM) to Theoretical
Computer Science (TCS). FOM typically gives

you "white science" mathematics, with sets and
infinity, and if you are lucky a little recursion
theory. Fun fact: TCS is even more "white".

An interesting paper in this regard:

    Lean Meets Theoretical Computer Science:
    Scalable Synthesis of Theorem Proving Challenges
    in Formal-Informal Pairs
Terry Jingchen Zhang et al. - 2025
    https://arxiv.org/abs/2508.15878v1

    One swallow does not make a summer?

But it's probably a necessary step. The above
paper uses Busy Beaver and integer constraints
as examples. Which logical frameworks even

apply? Is it enough to have a "total function"
theory layer, or does TCS need more? TCS can
be heavy on all sorts of discrete and

non-discrete mathematics.

    Bye

    Mild Shock schrieb:
    Hi,

Geoffrey E. Hinton, the Nobel Prize winner
for AI, was already beating the drums
for ReLU in 2010:

Rectified Linear Units Improve Restricted Boltzmann Machines
    Geoffrey E. Hinton & Vinod Nair - 2010 https://www.cs.toronto.edu/~fritz/absps/reluICML.pdf

    Because ANNs (Artificial Neural Networks) were originally
    designed with other functions, e.g. with Logistic function:

    An artificial neuron is a mathematical function conceived
    as a model of a biological neuron in a neural network. https://en.wikipedia.org/wiki/Artificial_neuron
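The artificial neuron above is just y = f(w.x + b). A small sketch (toy weights, both activations assumed for illustration) contrasting the classical logistic activation with the ReLU that Hinton pushed:

```python
import math

def logistic(z):
    """Classical sigmoid activation: squashes z into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    """Rectified linear unit: passes positive z, clips negative z to 0."""
    return max(0.0, z)

def neuron(weights, bias, inputs, activation):
    """One artificial neuron: weighted sum plus bias, then activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(z)

w, b, x = [0.5, -0.25], 0.1, [1.0, 2.0]   # toy parameters
# pre-activation z = 0.5*1.0 - 0.25*2.0 + 0.1 = 0.1
print(neuron(w, b, x, logistic))           # about 0.525
print(neuron(w, b, x, relu))               # 0.1
```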

    If you populate additive factor graphs with log P,
    you basically get multiplicative factor graphs.
    So an ANN can express belief networks, right?
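The log-P remark is just the identity log(p1 * p2 * ...) = log p1 + log p2 + ...: a node that adds log-probabilities behaves like a node that multiplies probabilities. A tiny check with made-up factor values:

```python
import math

# Toy conditional probabilities of some belief-network factors
# (values are illustrative only).
factors = [0.9, 0.5, 0.8]

# Multiplicative combination, as in a product of factors.
product = 1.0
for p in factors:
    product *= p

# Additive combination of log-probabilities.
log_sum = sum(math.log(p) for p in factors)

# exp of the additive result recovers the multiplicative one.
assert math.isclose(math.exp(log_sum), product)
```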

    Bye

P.S.: What is all the hype about Causal AI, and
the Ladder of Causation a la Judea Pearl?

Causal AI - the next gen AI
    Prof. Sotirios A. Tsaftaris - 2025 https://www.youtube.com/watch?v=IelslFzdsYw


    --- Synchronet 3.21d-Linux NewsLink 1.2