• Prolog Cycle detection in the Top-Level (Was: Most radical approach is Novacore from Dogelog Player)

    From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Jul 20 13:39:42 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    I didn't expect the topic to be that rich.

    The big challenge in a top-level display is
    the "interleaving" of equations and their
    factorization, as well as the "inlining" of
    equations with existing variable names.

    Trying hard to do exactly that, I was looking
    at these test cases:

    ?- [user].
    p(X,Y) :- X = f(f(f(X))), Y = f(f(Y)).
    p(X,Y) :- X = a(f(X,Y)), Y = b(g(X,Y)).
    p(X,Y) :- X = s(s(X,Y),_), Y = s(Y,X).

    Using cycle detection via (==)/2 I get:

    /* Dogelog Player 1.3.5 */
    ?- p(X,Y).
    X = f(X), Y = X;
    X = a(f(X, Y)), Y = b(g(X, Y));
    X = s(s(X, Y), _), Y = s(Y, X).

    Using cycle detection via same_term/2 I get:

    /* Dogelog Player 1.3.5 */
    ?- p(X,Y).
    X = f(f(f(X))), Y = f(f(Y));
    X = a(f(X, Y)), Y = b(g(X, Y));
    X = s(s(X, Y), _), Y = s(Y, X).

    Cool!
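
    To make the difference concrete, here is a minimal sketch of
    such an ancestor check, not the Dogelog Player code; the names
    cycle_write/3, cycle_args/3 and seen/3 are made up for this
    illustration, and the comparison predicate is passed in as a
    parameter:

    % stop as soon as a subterm matches one of its ancestors
    seen(T, [A|_], Eq) :- call(Eq, T, A), !.
    seen(T, [_|As], Eq) :- seen(T, As, Eq).

    cycle_write(T, As, Eq) :-
       seen(T, As, Eq), !,
       write('<cycle>').
    cycle_write(T, _, _) :-
       (var(T) ; atomic(T)), !,
       write(T).
    cycle_write(T, As, Eq) :-
       T =.. [F|Args],
       write(F), write('('),
       cycle_args(Args, [T|As], Eq),
       write(')').

    cycle_args([A], As, Eq) :- !,
       cycle_write(A, As, Eq).
    cycle_args([A|R], As, Eq) :-
       cycle_write(A, As, Eq), write(','),
       cycle_args(R, As, Eq).

    With Eq = (==) equal rational trees fold together, with
    Eq = same_term only the identical cell triggers the stop.
    Assuming the system's (==)/2 decides equality of rational
    trees, the two variants would give:

    ?- X = f(f(f(X))), cycle_write(X, [], (==)).
    f(<cycle>)

    ?- X = f(f(f(X))), cycle_write(X, [], same_term).
    f(f(f(<cycle>)))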

    Bye

    Mild Shock wrote:
    Hi,

    The most radical approach is Novacore from
    Dogelog Player. It consists of the following
    major incisions in the ISO core standard:

    - We do not forbid chars, like for example
      using lists of the form [a,b,c]; we also
      provide the char_code/2 predicate
      bidirectionally (a small example follows
      after this list).

    - We do not provide any _chars built-in
      predicates, and there is nothing _strings
      either. The Prolog system is clever enough
      not to put every atom it sees into an atom
      table. There is only a predicate table.

    - Some host languages have garbage collection
      that deduplicates strings. For example, some
      Java versions have an option to do that. But
      we make no effort to deduplicate atoms,
      which are simply plain strings.

    - Some languages have constant pools. For
      example, the Java byte code format includes
      a constant pool in every class header. We do
      not do that during transpilation, but of
      course we could. But it begs the question:
      why only deduplicate strings and not other
      constant expressions as well?

    - We are totally happy that we have only codes;
      there is a chance that the host languages use
      tagged pointers to represent them. So they
      are represented similarly to the tagged
      pointers in SWI-Prolog, which work for small
      integers.

    - But the tagged pointer argument is moot,
      since atom length=1 entities can also be
      represented as tagged pointers, and some
      programming languages do that. Dogelog Player
      would use such tagged pointers without
      polluting the atom table.

    - What else?
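
    As a small illustration of the first point above, char_code/2
    can be queried in both directions; the answers are what a
    conforming system should give, though the exact top-level
    rendering may differ from system to system:

    ?- char_code(a, X).
    X = 97.

    ?- char_code(C, 0'b).
    C = b.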

    Bye

    Mild Shock wrote:

    Technically, SWI-Prolog doesn't prefer codes.
    library(pure_input) might prefer codes. But
    this is again an issue of improving the
    library by a non-existent SWI-Prolog community.

    The ISO core standard is silent about a
    back_quotes flag, but has a lot of API
    requirements that support both codes and
    chars; for example, it requires atom_codes/2
    and atom_chars/2.
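
    For example, a minimal sketch of the two required conversions
    (answer layout as in SWI-Prolog; other systems may render the
    code list differently):

    ?- atom_codes(abc, Cs).
    Cs = [97, 98, 99].

    ?- atom_chars(abc, Cs).
    Cs = [a, b, c].

    ?- atom_codes(A, [104, 105]).
    A = hi.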

    Implementation-wise there can be an issue:
    one might decide to implement atoms of
    length=1 more efficiently, since with Unicode
    there is now an explosion.

    Not sure whether Trealla Prolog and Scryer
    Prolog thought about this problem, that the
    atom table gets quite large, whereas codes
    don't eat the atom table. Maybe they forbid
    predicates that have an atom of length=1 in
    the head:

    h(X) :-
        write('Hello '), write(X), write('!'), nl.

    Does this still work?
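
    For reference, in a system that does not restrict such heads
    the query behaves as expected (output shown as an assumption,
    not tested on Trealla or Scryer):

    ?- h(world).
    Hello world!
    true.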

    Mild Shock wrote:
    Concerning library(portray_text), which is in limbo:

    Libraries are (often) written for either codes
    or chars, and thus the libraries make the choice.

    But who writes these libraries? The SWI-Prolog
    community. And who doesn't improve these
    libraries, but instead floods the web with
    workaround tips? The SWI-Prolog community.

    Conclusion: the SWI-Prolog community has
    trapped itself in an ancient status quo,
    creating an island. It cannot improve its own
    tooling, and is not willing to support code
    from elsewhere that uses chars.

    Same with the missed AI Boom.

    (*) Code from elsewhere is dangerous: people
    might use other Prolog systems than only
    SWI-Prolog, like for example Trealla Prolog
    and Scryer Prolog.

    (**) Keeping the status quo is comfy. No need
    to think in terms of program code. It's like
    biology teachers versus pathology staff:
    biology teachers do not see opened corpses
    every day.


    Mild Shock wrote:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still, they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we were to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into a transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well, ILP might have its merits; maybe we should not ask
    for a marriage of LLMs and Prolog, but of autoencoders and ILP.
    But it's tricky, I am still trying to decode the da Vinci code
    of things like stacked tensors. Are they related to k-literal
    clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Jul 20 13:43:45 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    SWI-Prolog has a small glitch in the clause
    compilation, which can be compensated for by using:

    ?- [user].
    p(X,Y) :- call((X = f(f(f(X))), Y = f(f(Y)))).
    p(X,Y) :- call((X = a(f(X,Y)), Y = b(g(X,Y)))).
    p(X,Y) :- call((X = s(s(X,Y),_), Y = s(Y,X))).

    I then get these results:

    /* SWI-Prolog 9.3.25 */
    ?- p(X,Y).
    X = Y, Y = f(f(Y)) ; /* ordering dependent */
    X = _S1, % where
    _S1 = a(f(_S1, _S2)),
    _S2 = b(g(_S1, _S2)),
    Y = b(g(_S1, _S2)) ; /* could use _S2 */
    X = _S1, % where
    _S1 = s(s(_S1, _S2), _),
    _S2 = s(_S2, _S1),
    Y = s(_S2, _S1). /* could use _S2 */

    And for Ciao Prolog I get these results:

    /* Ciao Prolog 1.25.0 */
    ?- p(X,Y).
    X = f(X),
    Y = f(X) ? ; /* too big! */
    X = a(f(X,b(g(X,Y)))), /* too big! */
    Y = b(g(a(f(X,Y)),Y)) ? ; /* too big! */
    X = s(s(X,s(Y,X)),_A), /* too big! */
    Y = s(Y,s(s(X,Y),_A)) ? ; /* too big! */
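
    For what it is worth, all of these differently printed answers
    for the first clause describe the same rational trees. A quick
    sanity check, assuming the system's (==)/2 decides equality of
    rational trees (as the Dogelog Player output earlier in the
    thread suggests):

    ?- X = f(f(f(X))), Y = f(f(Y)), X == Y.
    true.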

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2