• Looks like sorting of rational trees needs an existential type (Was: Prolog totally missed the AI Boom)

    From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Jul 23 13:57:20 2025
    From Newsgroup: comp.lang.prolog

Looks like sorting of rational trees
needs an existential type, if we go full "logical".
If I use my old code from 2023, which computes
a finest (*), i.e. non-monster, bisimulation
pre-quotient (**) in prefix order:

    factorize(T, _, T) --> {var(T)}, !.
    factorize(T, C, V) --> {compound(T), member(S-V, C), S == T}, !.
factorize(T, C, V) --> {compound(T)}, !,
    [V = S],
    {T =.. [F|L]},
    factorize_list(L, [T-V|C], R),
    {S =.. [F|R]}.
    factorize(T, _, T) --> [].
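The helper factorize_list//3 is not shown in the post. To make the algorithm concrete, here is a Python model of the same prefix-order factorization (a sketch with hypothetical names, not the author's code); a global fresh-variable counter reproduces the behaviour where each run mints new intermediate variables:

```python
# Python model of the prefix-order factorization above (a sketch,
# not the author's code). Cyclic terms are (functor, children)
# tuples, with cycles made through object identity.

import itertools

fresh = itertools.count()  # global source of intermediate variables

def factorize(t, ctx, eqs):
    """Return a name for t; ctx is a stack of (ancestor, varname),
    eqs collects equations [varname, rhs] in prefix order."""
    for node, v in ctx:
        if node is t:          # back-edge: reuse the ancestor's variable
            return v
    if not t[1]:               # atomic leaf: keep as-is
        return t[0]
    v = f'_V{next(fresh)}'
    eq = [v, None]
    eqs.append(eq)             # emit [V = S] before recursing (prefix order)
    eq[1] = (t[0], [factorize(c, ctx + [(t, v)], eqs) for c in t[1]])
    return v

# X = f(X), factorized twice: the results differ only in variable names
x = ('f', []); x[1].append(x)
eqs1 = []; r1 = factorize(x, [], eqs1)
eqs2 = []; r2 = factorize(x, [], eqs2)
print(r1, eqs1)  # _V0 [['_V0', ('f', ['_V0'])]]
print(r2, eqs2)  # _V1 [['_V1', ('f', ['_V1'])]]
```

Like the two Prolog queries below, consecutive runs give alpha-variant answers.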

    I see that it always generates new
    intermediate variables:

?- X = f(f(X)), factorize(X, [], T, L, []), write(L-T), nl.
[_8066=f(_8066)]-_8066

?- X = f(f(X)), factorize(X, [], T, L, []), write(L-T), nl.
[_10984=f(_10984)]-_10984

It would be swell if it generated an
existential quantifier, something like T^([T = f(T)]-T)
in the above case. Then, using alpha conversion,
different factorization runs would be equal
when they only differ in the introduced
intermediate variables. But Prolog has no alpha
conversion; only λProlog has such things.
So what can we do, how can we produce a
representation that can be used for sorting?
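One workaround (my own sketch, not something from the post): since the intermediate variables are the only difference between runs, rename them by order of first occurrence, de Bruijn style, into ground tokens. Alpha-equivalent factorizations then become identical ground terms, and the standard order of terms can sort them:

```python
# Sketch of a canonicalization workaround: rename variables by first
# occurrence, so factorizations differing only in fresh variables
# compare equal and can be sorted as plain ground data.

def canonical(term, names=None):
    """Rename every string starting with '_' to $0, $1, ... in order
    of first occurrence; recurse through tuples and lists."""
    if names is None:
        names = {}
    if isinstance(term, str) and term.startswith('_'):
        if term not in names:
            names[term] = f'${len(names)}'
        return names[term]
    if isinstance(term, tuple):
        return tuple(canonical(x, names) for x in term)
    if isinstance(term, list):
        return [canonical(x, names) for x in term]
    return term

# The two answers from the queries above, as L-T pairs
run1 = ('-', [('_8066', ('f', ['_8066']))], '_8066')
run2 = ('-', [('_10984', ('f', ['_10984']))], '_10984')
print(canonical(run1) == canonical(run2))  # True: alpha-variants collide
```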

(*) Why finest and not coarsest? Because it uses
non-monster instructions and not monster
instructions.

(**) Why only pre-quotient? Because a
XXX_with_stack algorithm does not fully
deduplicate the equations; it would
probably need a XXX_with_memo algorithm.

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we were to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into transformer was already reported here (*):

SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

Well, ILP might have its merits; maybe we should not ask
for a marriage of LLM and Prolog, but of Autoencoders and ILP.
But it's tricky, I am still trying to decode the da Vinci code of
things like stacked tensors: are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Jul 23 14:03:42 2025
    From Newsgroup: comp.lang.prolog

    Hi,

So do we see a new wave of interest in bisimulation,
especially in computing existential types for all
kinds of things? It seems so; quite a fascinating find:

    BQ-NCO: Bisimulation Quotienting for Efficient
    Neural Combinatorial Optimization
    https://arxiv.org/abs/2301.03313

Has none other than Jean-Marc Andreoli on the
author list. Possibly the same guy from earlier
work on Focusing and Linear Logic, who was associated with
ECRC Munich in the 1990s, but is now working for naverlabs.com.

    Bye



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Jul 23 15:18:31 2025
    From Newsgroup: comp.lang.prolog

    Hi,

To do bisimulation, you don't need to wear
this t-shirt; bisimulation doesn't refer to
any sexual orientation, although you could give
it a game theoretic touch with Samson and Delilah:

Why Are You Gay T-Shirt
https://www.amazon.co.uk/Why-Are-You-Gay-T-Shirt/dp/B0DJMZFQN8

"Bisimulation equivalent" is sometimes simply
called "bisimilar". There is a nice paper by Manuel
Carro which gives a larger bisimilarity example:

An Application of Rational Trees in a Logic
Programming Interpreter for a Procedural Language
Manuel Carro - 2004
https://arxiv.org/abs/cs/0403028v1

He makes the case of "goto" in a programming
language, where labels are not needed; simply
rational tree sharing and looping can be used.

The case from Figure 5, Threading the code into
a rational tree, uses in its result the simpler
bisimilarity, and doesn't need that much of a more
elaborate bisimulation later.

You can use dicts (not SWI-Prolog dicts, but
some table operations) for lookup to create the
rational tree. But I guess you can also use dicts
(again table operations) for the reverse: find
some factorization of a rational tree and
recreate the labels and jumps.
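The threading idea can be sketched outside Prolog too. A minimal Python model (my sketch; names like thread() are hypothetical, not from the paper): tie a labeled instruction list into a cyclic linked structure, so a goto becomes plain sharing and the label disappears:

```python
# Thread a labeled instruction list into a cyclic linked structure,
# so "goto" becomes sharing (a sketch, not Carro's implementation).

def thread(program):
    """program: list of (label, instr); instr is ('op', name) or
    ('goto', label). Returns the entry node of a possibly cyclic
    chain of {'instr': ..., 'next': ...} dicts."""
    nodes = [{'instr': instr, 'next': None} for _, instr in program]
    by_label = {label: node for (label, _), node in zip(program, nodes)}
    for i, node in enumerate(nodes):
        if node['instr'][0] == 'goto':
            node['next'] = by_label[node['instr'][1]]  # back-patch: cycle
        elif i + 1 < len(nodes):
            node['next'] = nodes[i + 1]                # fall through
    return nodes[0]

# l1: op a; l2: op b; l3: goto l1  -- an infinite loop, finite structure
entry = thread([('l1', ('op', 'a')), ('l2', ('op', 'b')),
                ('l3', ('goto', 'l1'))])

# Like a rational tree: infinite unfolding, finitely many distinct nodes
seen, node = [], entry
while node is not None and not any(node is s for s in seen):
    seen.append(node)
    node = node['next']
print(len(seen))  # 3
```

The reverse direction, recovering labels and jumps, is exactly a factorization of this cyclic structure.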

    Bye





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Jul 23 19:14:13 2025
    From Newsgroup: comp.lang.prolog

    Hi,

But you might then experience the problem
that the usual extensionality axiom of
set theory is not enough: there could
be two Quine atoms y = {y} and x = {x}
with x ≠ y.

On the other hand, SWI-Prolog is convinced
that X = [X] and Y = [Y] are the same;
it can even apply member/2 to them, since
it has built-in rational trees:

    /* SWI-Prolog 9.3.25 */
    ?- X = [X], Y = [Y], X == Y.
    X = Y, Y = [Y].

    ?- X = [X], member(X, X).
    X = [X].

But Peter Aczel's original AFA statement was
only uniqueness of solutions to graph equations,
whereas today we would talk about equality =
existence of a bisimulation relation.
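"Equality = existence of a bisimulation" is easy to make operational. A Python sketch (my own, with terms modeled as (functor, children) tuples and cycles via object identity): two nodes are equal if we can consistently pair them up, assuming visited pairs equal:

```python
# Equality as existence of a bisimulation: coinductive comparison
# of cyclic terms (a sketch, not SWI-Prolog's implementation).

def bisimilar(s, t, seen=None):
    """Assume a visited pair is equal (the candidate bisimulation),
    and check functors, arities and children under that assumption."""
    if seen is None:
        seen = set()
    if (id(s), id(t)) in seen:
        return True          # already assumed equal; no contradiction
    if s[0] != t[0] or len(s[1]) != len(t[1]):
        return False         # different functor or arity
    seen.add((id(s), id(t)))
    return all(bisimilar(a, b, seen) for a, b in zip(s[1], t[1]))

# X = [X] and Y = [Y] as cyclic '[|]'/2 terms with a nil tail
nil = ('[]', [])
x = ('[|]', []); x[1].extend([x, nil])
y = ('[|]', []); y[1].extend([y, nil])

print(bisimilar(x, y))    # True: the pairing {(x, y)} is a bisimulation
print(bisimilar(x, nil))  # False
```

This gives the same verdict as the SWI-Prolog query above: X = [X] and Y = [Y] are the same.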

    Bye

    Hi,

You do need a theory of terms, and a specific one.

You could pull an Anti-Ackermann. Negate the
infinity axiom like Ackermann did here, where
he also kept the regularity axiom:

Die Widerspruchsfreiheit der allgemeinen Mengenlehre
Ackermann, Wilhelm - 1937
https://www.digizeitschriften.de/id/235181684_0114%7Clog23

But instead of Ackermann, you get an Anti(-Foundation)
Ackermann if you drop the regularity axiom. As a result, you
get a lot of exotic sets, among which are also the
famous Quine atoms:

x = {x}

Funny that in the setting I just described, where
there is the negation of the infinity axiom, i.e.
all sets are finite, contrary to the usual vulgar
view, x = {x} is a finite object. Just like in Prolog,

X = f(X) is in principle a finite object; it has
only one subtree, or what Alain Colmerauer
already postulated:

Definition: a "rational" tree is a tree which
has a finite set of subtrees.
    Bye


    --- Synchronet 3.21a-Linux NewsLink 1.2