• Will the world build on American Stacks? (Was: Prolog totally missed the AI Boom)

    From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Jul 14 15:55:49 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Will the world build on American Stacks?
Or is the American dream over?

How it started, one month ago:

    Nvidia CEO Jensen Huang on AI, Musk and Trump https://www.youtube.com/watch?v=c-XAL2oYelI

How it's going, now:

    Are you still talking about Jeffrey Epstein? https://www.bbc.com/news/articles/cm2m879neljo

    Bye

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we were to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into a transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986 https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

Well, ILP might have its merits; maybe we should not ask
for a marriage of LLMs and Prolog, but of autoencoders and ILP.
But it's tricky, I am still trying to decode the da Vinci code of
things like stacked tensors: are they related to k-literal clauses?
The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History) https://www.youtube.com/watch?v=OFS90-FX6pg


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Jul 17 12:14:47 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Rota often celebrated symbolic, analogical, and
    conceptual understanding over brute calculation.
    This philosophy has come full circle in modern AI:

    - Large Language Models (LLMs) like GPT-4 don't
just store facts - they recognize patterns,
    make analogies, and generate new structures
    from old ones.

- Rota's work in combinatorics, symbolic logic, and
operator theory is essentially pattern-based
manipulation - exactly the kind of reasoning LLMs
    aim to emulate at scale.

    Rota had a clear aesthetic. He valued clean formalisms,
    symbolic beauty, and well-defined structures. Rota wanted
mathematics to mean something - to be not just correct,
    but intelligible and expressive.

In contrast, modern AI (especially LLMs like GPT) thrives
on the messy, including noisy data, inconsistency,
uncertainty, and contradiction. AI engineers today are mining
meaning from noise.

What counts as "structure" is often just the best
    pragmatic/effective description available at that moment.

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Jul 17 14:33:06 2025
    From Newsgroup: comp.lang.prolog

    Hi,

An example of human intelligence is of course the
name "Rational Term" for cyclic terms, set forth by
Alain Colmerauer, since it plays on "Rational Numbers".

    A subset of cyclic terms can indeed represent
rational numbers, and they give a nice
counterexample to transitivity:

?- problem(X,Y,Z).
X = _S1-7-9-1, % where
    _S1 = _S1-6-8-0-6-2-8,
Y = _S2-1-6-1-5-4-6-1, % where
    _S2 = _S2-0-9-2,
Z = _S3-3-0, % where
    _S3 = _S3-8-1

    The Fuzzer 2 from 2025 does just what I did in 2023,
    expanding rational numbers into rational terms:

% fuzzy(-Term)
% Picks a random fraction A/B, expands a random number of leading
% long-division digits, then one more random-length chunk starting
% from the leftover dividend, and closes the cycle with Z = Y, so
% the result is a cyclic (rational) term.
fuzzy(X) :-
    random_between(1,100,A),
    random_between(1,100,B),
    random_between(1,10,M),
    fuzzy_chunk(M,A,B,C,X,Y),
    random_between(1,10,L),
    fuzzy_chunk(L,C,B,_,Y,Z),
    Z = Y.

% fuzzy_chunk(+Integer,+Integer,+Integer,-Integer,+Term,-Term)
% Peels off N long-division digits of A/B, adding them to the
% '-'-chain at argument five; C is the dividend left over for the
% next chunk, and the last argument is the open tail of the chain.
fuzzy_chunk(0, A, _, A, X, X) :- !.
fuzzy_chunk(N, A, B, C, Y-D, X) :-
    M is N-1,
    D is A // B,
    H is 10*(A - B*D),
    fuzzy_chunk(M, H, B, C, Y, X).
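
As a cross check, the same fuzzy_chunk/6 can also be driven
deterministically. Here is a minimal sketch (rat_term/5 is a
hypothetical name, not part of the fuzzer) that expands a given
fraction, for example 1/7 with three leading digits and its
six-digit period:

% rat_term(+A, +B, +Pre, +Period, -X)
% Hypothetical deterministic variant of fuzzy/1, reusing fuzzy_chunk/6:
% Pre leading long-division digits, then Period digits tied into a cycle.
rat_term(A, B, Pre, Period, X) :-
    fuzzy_chunk(Pre, A, B, C, X, Y),
    fuzzy_chunk(Period, C, B, _, Y, Z),
    Z = Y.

% ?- rat_term(1, 7, 3, 6, X).
% X = _S1-4-1-0, % where
%     _S1 = _S1-4-1-7-5-8-2
% (answer layout and variable names depend on the Prolog system)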

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Jul 17 14:57:04 2025
    From Newsgroup: comp.lang.prolog

    Hi,

Ok, I have to correct myself: "Rational Term" was less
common; what was more in use was "Rational Trees",
but they might have also talked about finitely
represented infinite trees. Rational trees are themselves
probably an echo of Dmitry Mirimanoff's
(1861-1945) "extraordinaire" sets.

Dmitry Semionovitch Mirimanoff (Russian:
Дмитрий Семёнович Мириманов; 13 September 1861, Pereslavl-Zalessky, Russia - 5 January 1945, Geneva,
Switzerland) was a member of the Moscow Mathematical
Society in 1897.[1] He later became a doctor of
mathematical sciences in 1900, in Geneva, and
taught at the universities of Geneva and Lausanne.
https://en.wikipedia.org/wiki/Dmitry_Mirimanoff

This year we can again celebrate another researcher,
Peter Aczel R.I.P., who died in 2023 and who likewise
made some thoughtful deviations from orthodoxy.

The Peter Aczel Memorial Conference is on 10th September 2025;
the Logic Colloquium will take place at the University
of Manchester (UK) from 11th to 12th September 2025: https://sites.google.com/view/blc2025/home

    Have Fun!

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Jul 17 23:17:45 2025
    From Newsgroup: comp.lang.prolog

    Hi,

I am trying to verify my hypothesis
that Rocq is a dead horse. Dead
horses can come in different forms:
for example, a project that just
imitates what was already done by
its precursor is most likely a
dead horse. For example MetaRocq,
verifying a logic framework inside
some strong enough set theory,
is not novel. Maybe they get more
out of doing MetaRocq:

    MetaRocq is a project formalizing Rocq in Rocq https://github.com/MetaRocq/metarocq#papers

    #50 Nicolas Tabareau
    https://www.youtube.com/watch?v=8kwe24gvigk

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Jul 17 23:36:06 2025
    From Newsgroup: comp.lang.prolog

    So, we had the formal revolution:

    Frege (1879): Predicate logic formalized
    Peano (1889): Axioms of arithmetic
Hilbert (1899-1930): Formal axiomatic method, Hilbert's Program
Zermelo (1908): Axiomatic set theory (ZF, later ZFC)
Gödel (1931): Incompleteness theorems end Hilbert's
dream of a complete formal system

Then the mechanized formal revolution:

    Automath (1967): The first real proof assistant,
    laying the conceptual groundwork.
Mizar (1970s-1980s): Building a readable,
    structured formal language and large libraries.
    Isabelle (1980s): Developing a generic proof framework, making
    formalization more flexible.
    Coq (early 1990s): Fully fledged dependent type theory and tactic
    language emerge.
HOL family (1980s-2000s): Focus on classical higher-order logic with applications in hardware/software verification.
    Lean + mathlib (late 2010s): Community-driven scaling,
    large libraries, easier onboarding.

    So we practically landed on the moon.

    Next steps:
- Mars Orbit (Now-2030), AI-augmented theorem proving.
- Mars Surface - AGI-Based Proving (2030s?)
- Mars Camp - The Hub of Next-Gen Mathematics and Reasoning:
    quantum computers, distributed supercomputers, and even alien
    intelligences (hypothetically)

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Jul 23 19:10:09 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    you do need a theory of terms, and a specific one

You could pull an Anti-Ackermann. Negate the
infinity axiom like Ackermann did here, where
    he also kept the regularity axiom:

    Die Widerspruchsfreiheit der allgemeinen Mengenlehre
    Ackermann, Wilhelm - 1937 https://www.digizeitschriften.de/id/235181684_0114%7Clog23

But instead of Ackermann, you get an Anti(-Foundation)
Ackermann if you drop the regularity axiom. As a result, you
get a lot of exotic sets, among which are also the
famous Quine atoms:

    x = {x}

Funny that in the setting I just described, where
there is the negation of the infinity axiom, i.e.
all sets are finite, contrary to the usual vulgar
view, x = {x} is a finite object. Just like in Prolog,
X = f(X) is in principle a finite object: it has
only one subtree, which is what Alain Colmerauer
already postulated:

Definition: a "rational" tree is a tree which
has a finite set of subtrees.
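
Colmerauer's criterion can even be checked mechanically. A
minimal sketch, assuming a Prolog with rational tree support
whose ==/2 and =../2 are safe on cyclic terms and with foldl/4
available (e.g. SWI-Prolog); subtree_set/2, subtrees/3 and
memberchk_eq/2 are hypothetical names, not from any library:

% subtree_set(+Term, -Subtrees): collect the distinct subtrees of Term.
subtree_set(T, Set) :-
    subtrees(T, [], Set).

subtrees(T, Seen, Seen) :-
    memberchk_eq(T, Seen), !.            % subtree already visited
subtrees(T, Seen0, Seen) :-
    compound(T), !,
    T =.. [_|Args],                      % one level of the tree
    foldl(subtrees, Args, [T|Seen0], Seen).
subtrees(T, Seen, [T|Seen]).             % leaf (atomic or variable)

memberchk_eq(X, [Y|_]) :- X == Y, !.
memberchk_eq(X, [_|Ys]) :- memberchk_eq(X, Ys).

% ?- X = f(X), subtree_set(X, S), length(S, N).
% N = 1, i.e. X = f(X) has exactly one subtree, namely itself,
% so it is rational in Colmerauer's sense.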

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2