• 2026 the Year Prolog **plonked** Itself [Strawberry Prolog is back] (Was: In memoriam Doug Lenat (1950 - 2023))

    From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Feb 4 02:20:41 2026
    From Newsgroup: comp.lang.prolog

    Hi,

    Just a quick take on the Prolog pulse:

    - SWI-Prolog: Dead Horse, LISP / MeTTa is it.

    - Scryer Prolog: Dead Horse, no core commits

    - Ciao Prolog: Dead Horse, no core commits

    - Who else?

    So I guess it's back to Strawberry Prolog for teaching!

    They are also building AGI:

    https://dobrev.com/AI/Think_in_mind.pdf

    Bye

    Mild Shock schrieb:

    That's a funny quote:

    "Once you have a truly massive amount of information
    integrated as knowledge, then the human-software
    system will be superhuman, in the same sense that
    mankind with writing is superhuman compared to
    mankind before writing."

    https://en.wikipedia.org/wiki/Douglas_Lenat#Quotes
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Feb 4 11:19:04 2026

    Hi,

    Spies and Playboys: Don't use insecure E-mail.
    It's too much of a laugh for everybody:

    A woman's history of vaginal orgasm is discernible from her walk.
    https://www.jmail.world/thread/EFTA02421406?view=inbox

    Kudos to whoever did that (*), it even has PDF
    backlinks, like this one here:

    A woman's history of vaginal orgasm is discernible from her walk
    https://assets.getkino.com/documents/EFTA02421406.pdf

    P.S.: Who did it:

    The Jmail Suite
    https://www.jmail.world/about#built-by

    P.P.S.: Europeans, be careful using USA E-mail.

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Feb 5 09:53:45 2026

    Hi,

    How it started:

    TPTP World Anthem [The TPTP World Needs Money]
    https://drive.google.com/file/d/1otJ8FxCiwVTIMoEDMW98puuTKIUH7BYF/view?pli=1

    How it's going:

    ANTHEM 2.0: Automated Reasoning for Answer Set Programming
    https://www.cambridge.org/core/journals/theory-and-practice-of-logic-programming/article/anthem-20-automated-reasoning-for-answer-set-programming/69D4B6430617A334906A99F787FA2727

    Bye

    P.S.: It would be fine if the TPTP sublanguages were
    lean, just some operator definitions, and not some
    bloated nonsense. Going such a lean path would
    allow staying in the ISO core standard, instead of
    inventing one new language after the other: Gringo,
    who knows what, with horrible eval semantics of
    (=)/2 and silly (=>)/2 rules, for example in Picat.
    Also, translating ASP to FOL when there are no
    cardinality heads is rather trivial.
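    The "rather trivial" translation, for normal rules without cardinality heads, is essentially Clark's completion: collect the rule bodies per head atom and emit one equivalence per atom. A minimal sketch in Python (the rule encoding and names are mine, not from any ASP tool; completion agrees with the stable-model semantics only for tight programs):

```python
# Sketch: Clark completion of a normal logic program (no cardinality
# heads), mapping ASP rules to first-order equivalences.
# Rule encoding: (head, [positive body atoms], [negated body atoms]).

def clark_completion(rules):
    """Group rules by head and emit 'head <-> body1 | body2 | ...'."""
    by_head = {}
    for head, pos, neg in rules:
        lits = pos + ["~" + a for a in neg]
        body = " & ".join(lits) if lits else "true"
        by_head.setdefault(head, []).append("(" + body + ")")
    return {h: h + " <-> " + " | ".join(bodies)
            for h, bodies in by_head.items()}

# Example program:  p :- q, not r.   p :- s.   q.
rules = [("p", ["q"], ["r"]), ("p", ["s"], []), ("q", [], [])]
for formula in clark_completion(rules).values():
    print(formula)
```

    For non-tight programs (positive recursion through cycles) loop formulas are needed on top, which is where the translation stops being trivial.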

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Feb 5 10:21:51 2026

    Hi,

    Another example of a Dead Horse: Gecode,
    no commits for 7 years:

    https://github.com/Gecode/gecode

    So what is it, a historical footnote? Why
    does research not produce little ISO core
    standard capsules that I can run even with
    Strawberry Prolog?

    Bye

    P.S.: So the real ANTHEM is:

    Guns N' Roses - Dead Horse
    https://www.youtube.com/watch?v=u8X76NRiQLQ

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Feb 14 09:53:53 2026

    Hi,

    Just a quick take on the golang pulse:

    How it started:

    A Comparative Study of Language Implementations
    https://www.arxiv.org/abs/2502.01651

    How it's going:

    Co-Creator of Go Language is Rightly Furious
    https://itsfoss.com/news/rob-pike-furious/

    Rob Pike going bonkers over AI.

    Bye

    See especially, Go is ca. a factor of 4x-5x behind:

    Go < Julia < Rust < C < Zig < Mojo < C++
    (a): Average tokens per second for stories15M.bin model

    Go < Julia < Rust < Zig < C < C++ < Mojo
    (c): Average tokens per second for stories42M.bin model

    So I guess it's back to Fortran.

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Feb 14 12:51:35 2026

    Hi,

    So the Rob Pike account shows the full
    extent of how people become schizophrenic
    or get amnesia in the age of the AI boom.

    How it started:

    "Kernighan and Pike listed Mark V. Shaney
    in the acknowledgements in The Practice
    of Programming,[12] noting its roots in
    Mitchell's markov, which, adapted as shaney,
    [13] was used for "humorous deconstructionist
    activities" in the 1980s.
    https://en.wikipedia.org/wiki/Mark_V._Shaney

    How its going:

    "Fuck you people. Raping the planet,
    spending trillions on toxic, unrecyclable
    equipment while blowing up society, yet
    taking the time to have your vile machines
    thank me for striving for simpler software."
    https://bsky.app/profile/robpike.io/post/3matwg6w3ic2s

    Or was it one of the many academic frauds,
    given that Rob Pike even appeared talking about
    the early AI bot, to get false credit?

    Bye

    P.S.: Has somebody seen Julio Di Egidio recently?
    He posted the same "vile machines" stance on
    SWI-Prolog shortly between 2025 and 2026,
    probably got banned, and disappeared [forever?].
    Well, good riddance that he *plonked* himself,
    the fucking moron who didn't understand
    rational trees, yet harassed me.

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Feb 17 11:37:33 2026

    Hi,

    Geoffrey E. Hinton, the Nobel Prize winner
    for AI, was already beating the drums
    for ReLU in 2010:

    Rectified Linear Units Improve Restricted Boltzmann Machines
    Vinod Nair & Geoffrey E. Hinton - 2010
    https://www.cs.toronto.edu/~fritz/absps/reluICML.pdf

    Because ANNs (Artificial Neural Networks) were originally
    designed with other activation functions, e.g. the logistic function:

    An artificial neuron is a mathematical function conceived
    as a model of a biological neuron in a neural network.
    https://en.wikipedia.org/wiki/Artificial_neuron

    If you populate additive factor graphs with log P,
    you basically get multiplicative factor graphs.
    So an ANN can express belief networks, right?
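    The additive/multiplicative correspondence is just that a sum of logs is the log of a product. A throwaway numeric check in plain Python (illustrative only):

```python
import math

# Factors of a tiny multiplicative factor graph, e.g. probabilities.
factors = [0.5, 0.25, 0.8]

# Combine multiplicatively, and additively in log space.
product = math.prod(factors)
log_sum = sum(math.log(p) for p in factors)

# Exponentiating the additive result recovers the multiplicative one,
# which is why populating an additive graph with log P works.
assert math.isclose(math.exp(log_sum), product)
print(product)
```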

    Bye

    P.S.: What is all the hype about Causal AI, and
    the Ladder of Causation à la Judea Pearl?

    Causal AI - the next gen AI
    Prof. Sotirios A. Tsaftaris - 2025
    https://www.youtube.com/watch?v=IelslFzdsYw
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Feb 19 20:00:39 2026

    Hi,

    Looks like the Causal AI viewpoint on neural
    networks makes people comfortable. Some
    work from Pittsburgh:

    Causal Representation Learning and Generative AI
    by Dr Kun Zhang #CausalNeSyAI
    https://www.youtube.com/watch?v=UwcitNXphog

    Kathleen M. Carley: Understanding Influence
    A Network Science + AI Approach - IC2S2 2025 Keynote
    https://www.youtube.com/watch?v=iHjFXtIeXnw

    So Gen Alpha will grow up with Causal AI,
    while Gen Z just botched Scryer Prolog;
    it's definitively a dead horse now?

    Bye

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Feb 27 10:20:56 2026

    Hi,

    Large Reasoning Models (LRMs) seem to move from
    Foundations of Mathematics (FOM) to Theoretical
    Computer Science (TCS). FOM typically gives

    you "white science" mathematics, with sets and
    infinity, and if you are lucky a little recursion
    theory. Fun fact: TCS is even more "white".

    An interesting paper in this regard:

    Lean Meets Theoretical Computer Science:
    Scalable Synthesis of Theorem Proving Challenges
    in Formal-Informal Pairs
    Terry Jingchen Zhang et al. - 2025
    https://arxiv.org/abs/2508.15878v1

    One swallow does not make a summer?

    But it's probably a necessary step. The above
    paper uses Busy Beaver and integer constraints
    as examples. Which logical frameworks even

    apply? Is it enough to have a "total function"
    theory layer, or does TCS need more? TCS can
    be heavy on all sorts of discrete and

    non-discrete mathematics.

    Bye

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Feb 28 16:36:27 2026

    Hi,

    Now "white science" might have more of a leaning
    towards Causal AI. But sometimes I have the
    feeling Generative AI is the "yellow science"

    now. DeepSeek left-handedly invented the
    notation M(a,b) for the matrix representation of
    a dual number a + b e:

    M(a,b) = a I + b E

    And then discussing variants like:

    E = [[ 0 1 ]
         [ 0 0 ]]

    Or this variant:

    E = [[ 1 -1 ]
         [ 1 -1 ]]
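    Both variants work for the same reason: each E squares to the zero matrix, which is all the matrix model of the dual unit e needs, since then M(a,b) M(c,d) = (ac) I + (ad + bc) E, exactly the dual-number product. A quick sanity check in plain Python (helper names are mine):

```python
# 2x2 matrix model of dual numbers a + b*e, where e^2 = 0.
# M(a, b) = a*I + b*E works for any E with E @ E == 0.

def matmul(X, Y):
    """Plain 2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def M(a, b, E):
    """a*I + b*E as an explicit 2x2 matrix."""
    return [[a + b * E[0][0], b * E[0][1]],
            [b * E[1][0], a + b * E[1][1]]]

for E in ([[0, 1], [0, 0]], [[1, -1], [1, -1]]):
    # Both candidate E matrices are nilpotent: e^2 = 0.
    assert matmul(E, E) == [[0, 0], [0, 0]]
    # Dual-number product: (a + b*e)(c + d*e) = ac + (ad + bc)*e.
    a, b, c, d = 2, 3, 5, 7
    assert matmul(M(a, b, E), M(c, d, E)) == M(a * c, a * d + b * c, E)
```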

    I am in mild shock! Then I read the small print:

    Prof. Cai Wen, Institute of Extenics and Innovative Methods, Guangzhou
    https://fs.unm.edu/DualNumbers.pdf

    What is Extenics? Any ideas?

    Have Fun!

    Bye
