• WebPL is already outdated

    From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Aug 17 18:37:07 2025
    From Newsgroup: comp.lang.prolog

    Hi,

WebPL already seems outdated, I guess. It doesn't
show the versions of the other Prolog systems
it is using. I got these results for
the primes example in the WebPL playground:

    /* Trealla Prolog WASM */
    (23568.9ms)

    When I run the example here:

    https://php.energy/trealla.html

    I get better results:

    /* trealla-js 0.27.1 */

    ?- time(test).
    % Time elapsed 9.907s, 11263917 Inferences, 1.137 MLips

    Bye
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Aug 18 14:52:50 2025
    From Newsgroup: comp.lang.prolog

    Hi,

Heap/Stack Prolog systems could solve some Prolog
string problems, especially in connection with an FFI,
but I am not showing that here. This is more about a
general design limitation of the common take of WAM
resp. ZIP. The new WebPL Prolog describes itself as a
merged Heap/Stack architecture Prolog system, and its
accompanying paper references an academic work by
Xining Li (1999):

Xining Li (1999): A new term representation method for Prolog
https://www.sciencedirect.com/science/article/pii/S0743106697000629

Besides the fact that Program Sharing (PS), as it is
called in the paper, is nothing new, WebPL also shows
a more modern take, in that it already uses compound
data types from Rust. Can we replicate some of the
performance advantages of a PS system versus the more
traditional WAM resp. ZIP based systems? Here is a
simple test in the WebPL Playground, for WebPL without GC:

    /* WebPL NoGC */
    ?- test2(10).
    (1795.6ms)

    ?- test2(30).
    (1785.5ms)

    ?- test2(90).
    (1765.6ms)

Then SWI-Prolog WASM as found in SWI-Tinker:

    /* SWI-Prolog WASM */
    ?- test2(10).
    (1239.3ms)

    ?- test2(30).
    (2276.1ms)

    ?- test2(90).
    (5372.3ms)

    https://webpl.whenderson.dev/

    Bye

    The test case:

data(10, [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]).

data(30, [30, 29, 28, 27, 26, 25, 24, 23,
          22, 21, 20, 19, 18, 17, 16, 15, 14, 13,
          12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1]).

data(90, [90, 89, 88, 87, 86, 85, 84, 83,
          82, 81, 80, 79, 78, 77, 76, 75, 74, 73,
          72, 71, 70, 69, 68, 67, 66, 65, 64, 63,
          62, 61, 60, 59, 58, 57, 56, 55, 54, 53,
          52, 51, 50, 49, 48, 47, 46, 45, 44, 43,
          42, 41, 40, 39, 38, 37, 36, 35, 34, 33,
          32, 31, 30, 29, 28, 27, 26, 25, 24, 23,
          22, 21, 20, 19, 18, 17, 16, 15, 14, 13,
          12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1]).

    test(N) :- between(1,1000,_), data(N,_), fail.
    test(_).

    test2(N) :- between(1,1000,_), test(N), fail.
    test2(_).

    between(Lo, Lo, R) :- !, Lo = R.
    between(Lo, _, Lo).
    between(Lo, Hi, X) :- Lo2 is Lo+1, between(Lo2, Hi, X).



  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Aug 18 15:06:39 2025
    From Newsgroup: comp.lang.prolog

    Hi,

OK, let's run the test case on the desktop,
and not on the web. What do we get? It is almost
constant for Trealla Prolog as well; in
WebPL it was perfectly constant, but here
it is only almost constant:

    /* Trealla Prolog 2.82.14 */

    ?- time(test2(10)).
    % Time elapsed 0.188s, 3004002 Inferences, 16.014 MLips
    true.

    ?- time(test2(30)).
    % Time elapsed 0.210s, 3004002 Inferences, 14.321 MLips
    true.

    ?- time(test2(90)).
    % Time elapsed 0.228s, 3004002 Inferences, 13.147 MLips
    true.

Scryer Prolog fails the test horribly, which
is amazing, since it is a Rust Prolog system just
like WebPL. But it is too traditional in
following the stupid WAM design:

    /* Scryer Prolog 0.9.4-599 */

    ?- time(test2(10)).
    % CPU time: 0.714s, 7_049_076 inferences
    true.

    ?- time(test2(30)).
    % CPU time: 1.284s, 7_049_099 inferences
    true.

    ?- time(test2(90)).
    % CPU time: 2.984s, 7_049_099 inferences
    true.

    Bye

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Aug 18 15:42:38 2025
    From Newsgroup: comp.lang.prolog

    Hi,

Smarter Partial Strings would use Program
Sharing. Take this invention of Scryer
Prolog and think about it from a Program
Sharing perspective:

p --> "abc", q.

With Partial Strings this translates to:

p(C, B) :- C = "abc"||A, q(A, B).

Unfortunately, straightforward Program
Sharing of the partial string does not
work anymore, since it is not ground:

p(C, B) :- C = [a,b,c|A], q(A, B).

    But we could translate the DCG also to:

    p(C, B) :- '$append'([a,b,c],A,C), q(A, B).

Here '$append'/3 is a mode (+,-,-) specialization
of append/3 that could be natively implemented.
The mode (+,-,-) will be more clever
than the failed program sharing. The program
sharing can share the string "abc", since with
'$append'/3 the DCG is basically:

p(C, B) :- '$append'("abc",A,C), q(A, B).

Now '$append'/3 would copy the string if A is
unbound; this is usually the "DCG used for
text generation" mode. But if A is bound,
'$append'/3 would not do any copying; it
would actually match the prefix. So it gives
a much better DCG for parsing, since this is
the "DCG used for text parsing" mode.
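As a sketch only: the declarative reading of the proposed '$append'/3 is just append/3 restricted to a known first argument. The copy-versus-match distinction is purely operational, and a native implementation would additionally test at runtime whether the third argument is bound:

```prolog
% Sketch of the proposed '$append'/3, declarative reading.
% With C unbound ("text generation"), the prefix cells get
% freshly created, i.e. copied; with C bound ("text parsing"),
% the very same clauses merely match the prefix against C.
'$append'([], A, A).
'$append'([X|Xs], A, [X|Cs]) :-
    '$append'(Xs, A, Cs).

% Parsing mode:
% ?- '$append'([a,b,c], Rest, [a,b,c,d,e]).
% Rest = [d,e].
```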

    Bye

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Aug 18 15:49:54 2025
    From Newsgroup: comp.lang.prolog

    Hi,

In Dogelog Player I don't need to introduce
'$append'/3, since for this code there is
anyway an attempt to do static shunting:

    p(C, B) :- C = [a,b,c|A], q(A, B).

    It is handled as if it were:

    p([a,b,c|A], B) :- q(A, B).

This means [a,b,c|A] is program shared (PS) anyway,
and a matching happens, so that we can ultimately
omit the creation of a real Prolog variable for A.
It gets a special placeholder that is not trailed.

Maybe I will find a test case to illustrate this
form of program sharing, which I have temporarily
termed static shunting, whereas the shunting of the
WebPL paper I would rather call dynamic shunting.

Unfortunately WebPL does not support DCG parsing;
the (-->)/2 clauses don't work. So it will take me
more time to test whether WebPL has something
concerning this type of program sharing as well,
or whether it was botched.

    Bye

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Aug 31 23:56:56 2025
    From Newsgroup: comp.lang.prolog

    Hi,

Wow! I didn't know that lousy Microsoft
Copilot certified laptops are that fast:

    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    % Dogelog Player 2.1.1 for Java

    % AMD Ryzen 5 4500U
    % ?- time(test).
    % % Zeit 756 ms, GC 1 ms, Lips 9950390, Uhr 23.08.2025 02:45
    % true.

    % AMD Ryzen AI 7 350
    % ?- time(test).
    % % Zeit 378 ms, GC 1 ms, Lips 19900780, Uhr 28.08.2025 17:44
    % true.

What happened to the death of Moore's Law?
Somehow memory speed, CPU - RAM and GPU - RAM,
tripled, possibly due to some Artificial
Intelligence demand. And the bloody thing
also has an NPU (Neural Processing Unit),
nicely visible.

    Bye

About the RAM speed: the L1, L2 and L3
caches are bigger, so it is harder to poison
the CPU. The CPU also shows a revival of
Hyper-Threading Technology (HTT), to which
AMD gives a different name: they call it
Simultaneous Multithreading (SMT).

    https://www.cpubenchmark.net/compare/3702vs6397/AMD-Ryzen-5-4500U-vs-AMD-Ryzen-AI-7-350

    BTW: Still ticking along with the primes.pl example:

test :-
    len(L, 1000),
    primes(L, _).

primes([], 1).
primes([J|L], J) :-
    primes(L, I),
    K is I+1,
    search(L, K, J).

search(L, I, J) :-
    mem(X, L),
    I mod X =:= 0, !,
    K is I+1,
    search(L, K, J).
search(_, I, I).

mem(X, [X|_]).
mem(X, [_|Y]) :-
    mem(X, Y).

len([], 0) :- !.
len([_|L], N) :-
    N > 0,
    M is N-1,
    len(L, M).

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Sep 1 00:45:00 2025
    From Newsgroup: comp.lang.prolog

    Hi,

2025 will be the last year we hear of Python.
This is just a tears-in-your-eyes eulogy:

Python: The Documentary | An origin story
https://www.youtube.com/watch?v=GfH4QL4VqJ0

The Zen of Python is very different
from the Zen of Copilot+. The bloody
Copilot+ laptop doesn't use Python
in its Artificial Intelligence:

AI Content Extraction
- Python involved? None at runtime;
  model runs in ONNX + DirectML on the NPU

AI Image Search
- Python involved? None at runtime;
  on-device image features, fully compiled

AI Phi Silica
- Python involved? None at runtime;
  lightweight Phi model packaged as ONNX

AI Semantic Analysis
- Python involved? None at runtime;
  text understanding done via compiled
  ONNX operators

    Bye

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 5 00:36:17 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Swiss AI Apertus
    Model ID: apertus-70b-instruct
    Parameters: 70 billion
    License: Apache 2.0
    Training: 15T tokens across 1,000+ languages
    Availability: Free during Swiss AI Weeks (September 2025)

    https://platform.publicai.co/docs

    Bye

    P.S.: A chat interface is here:

    Try Apertus
    https://publicai.co/

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 5 01:03:55 2025
    From Newsgroup: comp.lang.prolog

    Hi,

Don't try this: don't ask Apertus how
many holes an Emmentaler cheese has.

And absolutely don't try this: next ask it
to please answer in Schwitzerdütsch.

Bye

P.S.: ChatGPT can do it.

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 19 10:01:22 2025
    From Newsgroup: comp.lang.prolog

    Hi,

For the LP (Linear Programming) part, it
might be interesting to recall that SWI-Prolog
has a corresponding library:

A.55 library(simplex): Solve linear programming problems
https://eu.swi-prolog.org/pldoc/man?section=simplex

    To model the constraint store, it doesnrCOt need
    any native Prolog system support, since it uses
    DCG for state threading. Linear programming was

    long time the pinnacle of mathematical problem
    solving. But some Articial Intelligence method do
    typically go beyond the linear case, might also

    tackle non-linear problems etc.. making heavy
    use of a NPU (Neural Processing Unit). In May 2025
    the first AI Laptops arrived with >40 TOPS NPUs.

    Spearheaded by Microsoft branding it Copilot+.
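
    For illustration, a small hedged sketch of how
    library(simplex) is typically driven, following the
    API on the manual page above (gen_state/1, constraint/3,
    maximize/3, variable_value/3); the concrete constraints
    and coefficients here are invented:

    ```prolog
    /* Sketch only: maximize 3x + 2y subject to
       x + y =< 4 and x + 3y =< 6, using the
       library(simplex) API from the SWI-Prolog manual. */
    :- use_module(library(simplex)).

    demo(X, Y) :-
        gen_state(S0),
        constraint([1*x, 1*y] =< 4, S0, S1),
        constraint([1*x, 3*y] =< 6, S1, S2),
        maximize([3*x, 2*y], S2, S),
        variable_value(S, x, X),
        variable_value(S, y, Y).
    ```

    The state S0..S is threaded explicitly here; wrapped
    in DCG rules, that threading becomes implicit, which
    is the design the library relies on.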

    Bye
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 19 10:10:51 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    It seems the LP (Linear Programming)
    library by SWI-Prolog has also been
    ported to Scryer Prolog, using the same DCG
    design as demonstrated in SWI-Prolog:

    Module simplex
    https://www.scryer.pl/simplex

    What it requires from the Prolog system,
    and is not covered by the ISO core standard,
    are rational numbers, i.e. rdiv/2 etc., and if
    you feed it with floating point numbers,

    judging from the source code, it might bark
    that it has no CLP(R) available to solve it. CLP(R)
    could maybe be a good candidate for Copilot+
    machines, but I am currently not aware

    of a Copilot+ Prolog system, so to speak:

    About Microsoft Copilot+ PCs
    https://www.wired.com/story/what-is-copilot-plus-pc/

    The DCG design could make it easy for a
    solver to somehow hand a problem to an NPU,
    making it transparent for the end-user.
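
    As a side note on the rdiv/2 requirement, a minimal
    sketch of rational arithmetic as library(simplex)
    relies on it; the answer syntax shown is SWI-Prolog's
    default rational notation, other systems may print
    rationals differently:

    ```prolog
    /* 1/3 + 1/6 = 1/2, computed exactly with rdiv/2
       instead of floating point. */
    ?- X is 1 rdiv 3 + 1 rdiv 6.
    X = 1r2.
    ```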

    Bye

    Mild Shock schrieb:
    Hi,

    For the LP (Linear Programming) part, it
    might be interesting to recall that SWI-Prolog
    has a corresponding library:

    A.55 library(simplex): Solve linear programming problems
    https://eu.swi-prolog.org/pldoc/man?section=simplex

    To model the constraint store, it doesn't need
    any native Prolog system support, since it uses
    DCGs for state threading. Linear programming was

    for a long time the pinnacle of mathematical problem
    solving. But some Artificial Intelligence methods
    typically go beyond the linear case, and might also

    tackle non-linear problems etc., making heavy
    use of an NPU (Neural Processing Unit). In May 2025
    the first AI laptops arrived with >40 TOPS NPUs.

    Spearheaded by Microsoft, branding it Copilot+.

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 19 14:38:28 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Thank god it was only coffee and not orange juice:

    Ozzy Pours The Perfect O.J.
    https://m.youtube.com/watch?v=ojQUYq21G-o


    Bye

    Mild Shock schrieb:
    Hi,

    It seems the LP (Linear Programming)
    library by SWI-Prolog has also been
    ported to Scryer Prolog, using the same DCG
    design as demonstrated in SWI-Prolog:

    Module simplex
    https://www.scryer.pl/simplex

    What it requires from the Prolog system,
    and is not covered by the ISO core standard,
    are rational numbers, i.e. rdiv/2 etc., and if
    you feed it with floating point numbers,

    judging from the source code, it might bark
    that it has no CLP(R) available to solve it. CLP(R)
    could maybe be a good candidate for Copilot+
    machines, but I am currently not aware

    of a Copilot+ Prolog system, so to speak:

    About Microsoft Copilot+ PCs
    https://www.wired.com/story/what-is-copilot-plus-pc/

    The DCG design could make it easy for a
    solver to somehow hand a problem to an NPU,
    making it transparent for the end-user.

    Bye

    Mild Shock schrieb:
    Hi,

    For the LP (Linear Programming) part, it
    might be interesting to recall that SWI-Prolog
    has a corresponding library:

    A.55 library(simplex): Solve linear programming problems
    https://eu.swi-prolog.org/pldoc/man?section=simplex

    To model the constraint store, it doesn't need
    any native Prolog system support, since it uses
    DCGs for state threading. Linear programming was

    for a long time the pinnacle of mathematical problem
    solving. But some Artificial Intelligence methods
    typically go beyond the linear case, and might also

    tackle non-linear problems etc., making heavy
    use of an NPU (Neural Processing Unit). In May 2025
    the first AI laptops arrived with >40 TOPS NPUs.

    Spearheaded by Microsoft, branding it Copilot+.

    Bye


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 19 16:08:29 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Since some idiots blocked me on Scryer Prolog issues,
    I raise the issue here. Basically unify_with_occurs_check/2
    probably uses a different implementation of unification

    than is found for (=)/2, because it doesn't scale. I find:

    /* Scryer Prolog */
    ?- test3(25).
    % CPU time: 0.001s, 57 inferences
    true.

    ?- test4(25).
    % CPU time: 2.133s, 57 inferences
    true.

    Expectation would be that unify_with_occurs_check/2
    does just scale like it does in SWI-Prolog. In
    SWI-Prolog I find:

    /* SWI-Prolog 9.3.30 */
    ?- test3(25).
    % -1 inferences, 0.000 CPU in 0.000 seconds (0% CPU, Infinite Lips)
    true.

    ?- test4(25).
    % -1 inferences, 0.000 CPU in 0.000 seconds (0% CPU, Infinite Lips)
    true.

    The test case was simply a hydra variant. Actually the
    last hydra modification posted by @kuniaki, which I
    am currently ticking along now:

    hydra(0, _) :- !.
    hydra(N, h(X, X)):- N>0, N0 is N-1, hydra(N0, X).

    hydra(0, A, A) :- !.
    hydra(N, h(X, X), A):- N>0, N0 is N-1, hydra(N0, X, A).

    test3(N) :- hydra(N, X), hydra(N, Y, Y),
    time(X = Y).

    test4(N) :- hydra(N, X), hydra(N, Y, Y),
    time(unify_with_occurs_check(X, Y)).

    But of course there is a cut (!) in the first rules.
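
    To make the sharing explicit (my addition, not from
    the original post): hydra/2 builds a term with only
    N distinct h/2 cells, but 2^N leaves when written out,
    which is exactly what punishes an occurs check that
    re-walks shared subterms:

    ```prolog
    /* hydra(3, T) builds 3 nested h/2 cells, each
       sharing its two arguments, so printing expands
       to 2^3 = 8 occurrences of the leaf variable: */
    ?- hydra(3, T).
    T = h(h(h(_A, _A), h(_A, _A)), h(h(_A, _A), h(_A, _A))).
    ```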

    Mild Shock schrieb:
    Hi,

    WebPL is already outdated I guess. It doesn't
    show the versions of the other Prolog systems
    it is using. While I had these results for

    the primes example in the WebPL playground:

    /* Trealla Prolog WASM */
    (23568.9ms)

    When I run the example here:

    https://php.energy/trealla.html

    I get better results:

    /* trealla-js 0.27.1 */

    ?- time(test).
    % Time elapsed 9.907s, 11263917 Inferences, 1.137 MLips

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 19 16:18:25 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Not sure whether it's a language issue, or an
    algorithmic issue. But I was working hard to
    bring unlimited stacks to Dogelog Player,

    removing the use of native stacks, and introducing
    some agenda data structures for certain primitive
    built-ins. Now amazingly I get in Rust:

    /* Scryer Prolog 0.9.4-656 */
    ?- between(7,10,K), N is 4^K, test2(N), fail; true.
    % CPU time: 0.001s, 56 inferences
    % CPU time: 0.004s, 56 inferences
    % CPU time: 0.019s, 56 inferences
    % CPU time: 0.132s, 56 inferences
    true.

    On the other hand, JavaScript shows me:

    /* Dogelog Player 2.1.1 / Node.js v24.6.0 */
    ?- between(7,10,K), N is 4^K, test2(N), fail; true.
    % Zeit 1 ms, GC 0 ms, Lips 15000, Uhr 19.09.2025 09:17
    % Zeit 4 ms, GC 0 ms, Lips 3750, Uhr 19.09.2025 09:17
    % Zeit 21 ms, GC 0 ms, Lips 714, Uhr 19.09.2025 09:17
    % Zeit 57 ms, GC 0 ms, Lips 263, Uhr 19.09.2025 09:17
    true.

    Stunning! The test case is the same hydra as
    below, now benchmarking the predicate (==)/2:

    test2(N) :- hydra(N, X), hydra(N, Y, Y), time(X == Y).

    But I have to redo the tests with more iterations
    to flatten the erratic behaviour of time measurement,
    garbage collection and who knows what. That could give a

    better picture. But I have observed since yesterday that
    JavaScript easily beats Rust, when using the Bart
    Demoen folklore trick inside JavaScript. One of

    the big brakes was not the stack; there is practically
    no difference between using a native stack or an
    artificial stack based on Array(). It's more that

    the slowdown was Map(), and it could be removed
    by using the Bart Demoen folklore trick, as referenced
    by SWI-Prolog in the source code of unify().

    Bye

    Mild Shock schrieb:
    Hi,

    Since some idiots blocked me on Scryer Prolog issues,
    I raise the issue here. Basically unify_with_occurs_check/2
    probably uses a different implementation of unification

    than is found for (=)/2, because it doesn't scale. I find:

    /* Scryer Prolog */
    ?- test3(25).
    % CPU time: 0.001s, 57 inferences
    true.

    ?- test4(25).
    % CPU time: 2.133s, 57 inferences
    true.

    Expectation would be that unify_with_occurs_check/2
    does just scale like it does in SWI-Prolog. In
    SWI-Prolog I find:

    /* SWI-Prolog 9.3.30 */
    ?- test3(25).
    % -1 inferences, 0.000 CPU in 0.000 seconds (0% CPU, Infinite Lips)
    true.

    ?- test4(25).
    % -1 inferences, 0.000 CPU in 0.000 seconds (0% CPU, Infinite Lips)
    true.

    The test case was simply a hydra variant. Actually the
    last hydra modification posted by @kuniaki, which I
    am currently ticking along now:

    hydra(0, _) :- !.
    hydra(N, h(X, X)):- N>0, N0 is N-1, hydra(N0, X).

    hydra(0, A, A) :- !.
    hydra(N, h(X, X), A):- N>0, N0 is N-1, hydra(N0, X, A).

    test3(N) :- hydra(N, X), hydra(N, Y, Y),
        time(X = Y).

    test4(N) :- hydra(N, X), hydra(N, Y, Y),
        time(unify_with_occurs_check(X, Y)).

    But of course there is a cut (!) in the first rules.

    Mild Shock schrieb:
    Hi,

    WebPL is already outdated I guess. It doesn't
    show the versions of the other Prolog systems
    it is using. While I had these results for

    the primes example in the WebPL playground:

    /* Trealla Prolog WASM */
    (23568.9ms)

    When I run the example here:

    https://php.energy/trealla.html

    I get better results:

    /* trealla-js 0.27.1 */

    ?- time(test).
    % Time elapsed 9.907s, 11263917 Inferences, 1.137 MLips

    Bye


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 19 18:22:23 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    I like the expert system description by

    I used Claude code to help me create a Prolog
    program of a little expert system to manage a
    kitchen that needed to produce different dishes
    with different appliances and to be able to
    maximize revenue. -- bauhaus911

    Instead of maximizing revenue you could also
    maximize energy boost. So instead of having
    a couple of morons on SWI-Prolog discourse,

    like those that have parked their brain in the
    nowhere and are going full throttle Donald
    Trump / Kesh Patel Nazi, the system could

    indeed recommend Orange Juice instead of
    coffee. For the following brain benefits:

    - Vitamin C powerhouse: ~50–60 mg per 100 ml,
    giving a solid immune boost.

    - Quick energy: natural sugars (glucose + fructose)
    give your brain and body fast fuel.

    - Hydration: mostly water, which helps maintain
    energy and focus.

    Have Fun! LoL

    Bye


    Mild Shock schrieb:
    Hi,

    Thank god it was only coffee and not orange juice:

    Ozzy Pours The Perfect O.J.
    https://m.youtube.com/watch?v=ojQUYq21G-o


    Bye

    Mild Shock schrieb:
    Hi,

    It seems the LP (Linear Programming)
    library by SWI-Prolog has also been
    ported to Scryer Prolog, using the same DCG
    design as demonstrated in SWI-Prolog:

    Module simplex
    https://www.scryer.pl/simplex

    What it requires from the Prolog system,
    and is not covered by the ISO core standard,
    are rational numbers, i.e. rdiv/2 etc., and if
    you feed it with floating point numbers,

    judging from the source code, it might bark
    that it has no CLP(R) available to solve it. CLP(R)
    could maybe be a good candidate for Copilot+
    machines, but I am currently not aware

    of a Copilot+ Prolog system, so to speak:

    About Microsoft Copilot+ PCs
    https://www.wired.com/story/what-is-copilot-plus-pc/

    The DCG design could make it easy for a
    solver to somehow hand a problem to an NPU,
    making it transparent for the end-user.

    Bye

    Mild Shock schrieb:
    Hi,

    For the LP (Linear Programming) part, it
    might be interesting to recall that SWI-Prolog
    has a corresponding library:

    A.55 library(simplex): Solve linear programming problems
    https://eu.swi-prolog.org/pldoc/man?section=simplex

    To model the constraint store, it doesn't need
    any native Prolog system support, since it uses
    DCGs for state threading. Linear programming was

    for a long time the pinnacle of mathematical problem
    solving. But some Artificial Intelligence methods
    typically go beyond the linear case, and might also

    tackle non-linear problems etc., making heavy
    use of an NPU (Neural Processing Unit). In May 2025
    the first AI laptops arrived with >40 TOPS NPUs.

    Spearheaded by Microsoft, branding it Copilot+.

    Bye



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 19 18:38:59 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    You deleted like 10 posts of mine in the last
    48 hours, which tried to explain why patching
    is against "discourse".

    Even Torbjörn Lager agreed. I don't think
    you can continue your forum in this style.
    And then after you deleted a dozen of posts

    of mine, I am not allowed to delete my posts?

    You are simply completely crazy!!!

    Bye

    I got the following nonsense from you:

    Jan, we've asked you to be less combative with
    people here, but you continue to be extremely
    aggressive towards other users of the site.
    You have very helpful things to add, but when
    you then go back and delete everything you post,
    it obviates that helpfulness.

    Mild Shock schrieb:
    Hi,

    I like the expert system description by

    I used Claude code to help me create a Prolog
    program of a little expert system to manage a
    kitchen that needed to produce different dishes
    with different appliances and to be able to
    maximize revenue. -- bauhaus911

    Instead of maximizing revenue you could also
    maximize energy boost. So instead of having
    a couple of morons on SWI-Prolog discourse,

    like those that have parked their brain in the
    nowhere and are going full throttle Donald
    Trump / Kesh Patel Nazi, the system could

    indeed recommend Orange Juice instead of
    coffee. For the following brain benefits:

    - Vitamin C powerhouse: ~50–60 mg per 100 ml,
      giving a solid immune boost.

    - Quick energy: natural sugars (glucose + fructose)
      give your brain and body fast fuel.

    - Hydration: mostly water, which helps maintain
      energy and focus.

    Have Fun! LoL

    Bye


    Mild Shock schrieb:
    Hi,

    Thank god it was only coffee and not orange juice:

    Ozzy Pours The Perfect O.J.
    https://m.youtube.com/watch?v=ojQUYq21G-o


    Bye

    Mild Shock schrieb:
    Hi,

    It seems the LP (Linear Programming)
    library by SWI-Prolog has also been
    ported to Scryer Prolog, using the same DCG
    design as demonstrated in SWI-Prolog:

    Module simplex
    https://www.scryer.pl/simplex

    What it requires from the Prolog system,
    and is not covered by the ISO core standard,
    are rational numbers, i.e. rdiv/2 etc., and if
    you feed it with floating point numbers,

    judging from the source code, it might bark
    that it has no CLP(R) available to solve it. CLP(R)
    could maybe be a good candidate for Copilot+
    machines, but I am currently not aware

    of a Copilot+ Prolog system, so to speak:

    About Microsoft Copilot+ PCs
    https://www.wired.com/story/what-is-copilot-plus-pc/

    The DCG design could make it easy for a
    solver to somehow hand a problem to an NPU,
    making it transparent for the end-user.

    Bye

    Mild Shock schrieb:
    Hi,

    For the LP (Linear Programming) part, it
    might be interesting to recall that SWI-Prolog
    has a corresponding library:

    A.55 library(simplex): Solve linear programming problems
    https://eu.swi-prolog.org/pldoc/man?section=simplex

    To model the constraint store, it doesn't need
    any native Prolog system support, since it uses
    DCGs for state threading. Linear programming was

    for a long time the pinnacle of mathematical problem
    solving. But some Artificial Intelligence methods
    typically go beyond the linear case, and might also

    tackle non-linear problems etc., making heavy
    use of an NPU (Neural Processing Unit). In May 2025
    the first AI laptops arrived with >40 TOPS NPUs.

    Spearheaded by Microsoft, branding it Copilot+.

    Bye




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 19 18:42:23 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    I will consult a Lawyer of mine.
    Maybe I can ask for a complete
    tear down of all my content.

    Bye

    Mild Shock schrieb:
    Hi,

    You deleted like 10 posts of mine in the last
    48 hours, which tried to explain why patching
    is against "discourse".

    Even Torbjörn Lager agreed. I don't think
    you can continue your forum in this style.
    And then after you deleted a dozen of posts

    of mine, I am not allowed to delete my posts?

    You are simply completely crazy!!!

    Bye

    I got the following nonsense from you:

    Jan, we've asked you to be less combative with
    people here, but you continue to be extremely
    aggressive towards other users of the site.
    You have very helpful things to add, but when
    you then go back and delete everything you post,
    it obviates that helpfulness.

    Mild Shock schrieb:
    Hi,

    I like the expert system description by

    I used Claude code to help me create a Prolog
    program of a little expert system to manage a
    kitchen that needed to produce different dishes
    with different appliances and to be able to
    maximize revenue. -- bauhaus911

    Instead of maximizing revenue you could also
    maximize energy boost. So instead of having
    a couple of morons on SWI-Prolog discourse,

    like those that have parked their brain in the
    nowhere and are going full throttle Donald
    Trump / Kesh Patel Nazi, the system could

    indeed recommend Orange Juice instead of
    coffee. For the following brain benefits:

    - Vitamin C powerhouse: ~50–60 mg per 100 ml,
      giving a solid immune boost.

    - Quick energy: natural sugars (glucose + fructose)
      give your brain and body fast fuel.

    - Hydration: mostly water, which helps maintain
      energy and focus.

    Have Fun! LoL

    Bye


    Mild Shock schrieb:
    Hi,

    Thank god it was only coffee and not orange juice:

    Ozzy Pours The Perfect O.J.
    https://m.youtube.com/watch?v=ojQUYq21G-o


    Bye

    Mild Shock schrieb:
    Hi,

    It seems the LP (Linear Programming)
    library by SWI-Prolog has also been
    ported to Scryer Prolog, using the same DCG
    design as demonstrated in SWI-Prolog:

    Module simplex
    https://www.scryer.pl/simplex

    What it requires from the Prolog system,
    and is not covered by the ISO core standard,
    are rational numbers, i.e. rdiv/2 etc., and if
    you feed it with floating point numbers,

    judging from the source code, it might bark
    that it has no CLP(R) available to solve it. CLP(R)
    could maybe be a good candidate for Copilot+
    machines, but I am currently not aware

    of a Copilot+ Prolog system, so to speak:

    About Microsoft Copilot+ PCs
    https://www.wired.com/story/what-is-copilot-plus-pc/

    The DCG design could make it easy for a
    solver to somehow hand a problem to an NPU,
    making it transparent for the end-user.

    Bye

    Mild Shock schrieb:
    Hi,

    For the LP (Linear Programming) part, it
    might be interesting to recall that SWI-Prolog
    has a corresponding library:

    A.55 library(simplex): Solve linear programming problems
    https://eu.swi-prolog.org/pldoc/man?section=simplex

    To model the constraint store, it doesn't need
    any native Prolog system support, since it uses
    DCGs for state threading. Linear programming was

    for a long time the pinnacle of mathematical problem
    solving. But some Artificial Intelligence methods
    typically go beyond the linear case, and might also

    tackle non-linear problems etc., making heavy
    use of an NPU (Neural Processing Unit). In May 2025
    the first AI laptops arrived with >40 TOPS NPUs.

    Spearheaded by Microsoft, branding it Copilot+.

    Bye





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Sep 25 01:50:01 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Scryer Prolog's unify_with_occurs_check/2 might have
    been fixed. I can now test the following:

    /* Scryer Prolog 0.9.4-660 */

    % ?- bench, bench, bench.
    % [...]
    % % CPU time: 0.148s, 57 inferences
    % % CPU time: 0.126s, 57 inferences
    % % CPU time: 0.214s, 58 inferences
    % % CPU time: 0.213s, 58 inferences
    % true.

    % ?- bench2, bench2, bench2.
    % [...]
    % % CPU time: 0.036s, 58 inferences
    % % CPU time: 0.042s, 58 inferences
    % % CPU time: 0.018s, 59 inferences
    % % CPU time: 0.096s, 56 inferences
    % true.

    This was the test case, it includes
    unify_with_occurs_check/2:

    hydra(0, _) :- !.
    hydra(N, h(X, X)) :- N > 0, N0 is N-1, hydra(N0, X).

    hydra(0, A, A) :- !.
    hydra(N, h(X, X), A) :- N > 0, N0 is N-1, hydra(N0, X, A).

    bench :-
        hydra(1048576, X), hydra(1048576, Y, Y),
        time(X = Y),
        time(unify_with_occurs_check(X, Y)),
        time(X == Y),
        time(compare(_, X, Y)), fail; true.

    bench2 :-
        hydra(1048576, X), hydra(1048576, Y, Y),
        time(copy_term(X-Y,_)),
        time(term_variables(X-Y,_)),
        time(\+ ground(X-Y)),
        time(acyclic_term(X-Y)),
        fail; true.

    Bye

    Mild Shock schrieb:
    Hi,

    Since some idiots blocked me on Scryer Prolog issues,
    I raise the issue here. Basically unify_with_occurs_check/2
    probably uses a different implementation of unification

    than is found for (=)/2, because it doesn't scale. I find:

    /* Scryer Prolog */
    ?- test3(25).
    % CPU time: 0.001s, 57 inferences
    true.

    ?- test4(25).
    % CPU time: 2.133s, 57 inferences
    true.

    Expectation would be that unify_with_occurs_check/2
    does just scale like it does in SWI-Prolog. In
    SWI-Prolog I find:

    /* SWI-Prolog 9.3.30 */
    ?- test3(25).
    % -1 inferences, 0.000 CPU in 0.000 seconds (0% CPU, Infinite Lips)
    true.

    ?- test4(25).
    % -1 inferences, 0.000 CPU in 0.000 seconds (0% CPU, Infinite Lips)
    true.

    The test case was simply a hydra variant. Actually the
    last hydra modification posted by @kuniaki, which I
    am currently ticking along now:

    hydra(0, _) :- !.
    hydra(N, h(X, X)):- N>0, N0 is N-1, hydra(N0, X).

    hydra(0, A, A) :- !.
    hydra(N, h(X, X), A):- N>0, N0 is N-1, hydra(N0, X, A).

    test3(N) :- hydra(N, X), hydra(N, Y, Y),
        time(X = Y).

    test4(N) :- hydra(N, X), hydra(N, Y, Y),
        time(unify_with_occurs_check(X, Y)).

    But of course there is a cut (!) in the first rules.

    Mild Shock schrieb:
    Hi,

    WebPL is already outdated I guess. It doesn't
    show the versions of the other Prolog systems
    it is using. While I had these results for

    the primes example in the WebPL playground:

    /* Trealla Prolog WASM */
    (23568.9ms)

    When I run the example here:

    https://php.energy/trealla.html

    I get better results:

    /* trealla-js 0.27.1 */

    ?- time(test).
    % Time elapsed 9.907s, 11263917 Inferences, 1.137 MLips

    Bye


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Sep 25 01:59:16 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    The fascinating result was that Jaffar Unification
    beats Scryer Prolog even on the JavaScript target.
    Not to speak of the Java target, which also beat it.

    But I rejected Jaffar Unification, because it
    temporarily modifies my frozen terms, which might
    impede some future program sharing across

    preemptive threads. So I rolled back the pointer-based
    Jaffar Unification, and went back to Map-based
    Union-Find. Overall the Map and a slightly bigger

    stack incur a factor 3x slowdown. So for Java I get now:

    /* Dogelog Player 2.1.1 for Java */

    % ?- bench, bench, bench.
    % [...]
    % % Zeit 469 ms, GC 0 ms, Lips 42, Uhr 24.09.2025 20:00
    % % Zeit 318 ms, GC 0 ms, Lips 62, Uhr 24.09.2025 20:00
    % % Zeit 329 ms, GC 0 ms, Lips 60, Uhr 24.09.2025 20:00
    % % Zeit 378 ms, GC 0 ms, Lips 52, Uhr 24.09.2025 20:00
    % true.

    % ?- bench2, bench2, bench2.
    % [...]
    % % Zeit 847 ms, GC 0 ms, Lips 23, Uhr 25.09.2025 01:04
    % % Zeit 506 ms, GC 0 ms, Lips 39, Uhr 25.09.2025 01:04
    % % Zeit 186 ms, GC 0 ms, Lips 118, Uhr 25.09.2025 01:04
    % % Zeit 418 ms, GC 0 ms, Lips 35, Uhr 25.09.2025 01:04
    % true.

    In the binary predicates (bench) the factor 3x is pretty
    much seen. But in the unary predicates (bench2) the
    factor is much higher, something like 10x-20x. And JavaScript

    doesn't help. But this might be the price to pay for
    a "non-intrusive" algorithm. Another name I have for my
    current take is "non-tainting" algorithms.

    I should keep a closer eye on what could be done "non-
    intrusive", or maybe devise an algorithm that is a
    mixture of "non-intrusive" and "intrusive".

    Bye

    Mild Shock schrieb:
    Hi,

    Scryer Prolog's unify_with_occurs_check/2 might have
    been fixed. I can now test the following:

    /* Scryer Prolog 0.9.4-660 */

    % ?- bench, bench, bench.
    % [...]
    %     % CPU time: 0.148s, 57 inferences
    %     % CPU time: 0.126s, 57 inferences
    %     % CPU time: 0.214s, 58 inferences
    %     % CPU time: 0.213s, 58 inferences
    %     true.

    % ?- bench2, bench2, bench2.
    % [...]
    %     % CPU time: 0.036s, 58 inferences
    %     % CPU time: 0.042s, 58 inferences
    %     % CPU time: 0.018s, 59 inferences
    %     % CPU time: 0.096s, 56 inferences
    %     true.

    This was the test case, it includes
    unify_with_occurs_check/2:

    hydra(0, _) :- !.
    hydra(N, h(X, X)) :- N > 0, N0 is N-1, hydra(N0, X).

    hydra(0, A, A) :- !.
    hydra(N, h(X, X), A) :- N > 0, N0 is N-1, hydra(N0, X, A).

    bench :-
        hydra(1048576, X), hydra(1048576, Y, Y),
        time(X = Y),
        time(unify_with_occurs_check(X, Y)),
        time(X == Y),
        time(compare(_, X, Y)), fail; true.

    bench2 :-
        hydra(1048576, X), hydra(1048576, Y, Y),
        time(copy_term(X-Y,_)),
        time(term_variables(X-Y,_)),
        time(\+ ground(X-Y)),
        time(acyclic_term(X-Y)),
        fail; true.

    Bye

    Mild Shock schrieb:
    Hi,

    Since some idiots blocked me on Scryer Prolog issues,
    I raise the issue here. Basically unify_with_occurs_check/2
    probably uses a different implementation of unification

    than is found for (=)/2, because it doesn't scale. I find:

    /* Scryer Prolog */
    ?- test3(25).
    % CPU time: 0.001s, 57 inferences
    true.

    ?- test4(25).
    % CPU time: 2.133s, 57 inferences
    true.

    Expectation would be that unify_with_occurs_check/2
    does just scale like it does in SWI-Prolog. In
    SWI-Prolog I find:

    /* SWI-Prolog 9.3.30 */
    ?- test3(25).
    % -1 inferences, 0.000 CPU in 0.000 seconds (0% CPU, Infinite Lips)
    true.

    ?- test4(25).
    % -1 inferences, 0.000 CPU in 0.000 seconds (0% CPU, Infinite Lips)
    true.

    The test case was simply a hydra variant. Actually the
    last hydra modification posted by @kuniaki, which I
    am currently ticking along now:

    hydra(0, _) :- !.
    hydra(N, h(X, X)):- N>0, N0 is N-1, hydra(N0, X).

    hydra(0, A, A) :- !.
    hydra(N, h(X, X), A):- N>0, N0 is N-1, hydra(N0, X, A).

    test3(N) :- hydra(N, X), hydra(N, Y, Y),
        time(X = Y).

    test4(N) :- hydra(N, X), hydra(N, Y, Y),
        time(unify_with_occurs_check(X, Y)).

    But of course there is a cut (!) in the first rules.

    Mild Shock schrieb:
    Hi,

    WebPL is already outdated I guess. It doesn't
    show the versions of the other Prolog systems
    it is using. While I had these results for

    the primes example in the WebPL playground:

    /* Trealla Prolog WASM */
    (23568.9ms)

    When I run the example here:

    https://php.energy/trealla.html

    I get better results:

    /* trealla-js 0.27.1 */

    ?- time(test).
    % Time elapsed 9.907s, 11263917 Inferences, 1.137 MLips

    Bye



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Sep 25 02:06:44 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    I also tried to measure Trealla Prolog. But the
    measurements are strange, always 0.001 secs or
    something. My suspicion is that Trealla Prolog

    might apply "frozenness" to cyclic terms. A form
    of hash consing, which gives Trealla Prolog
    enough information to turn certain operations

    practically into no-ops. I don't know yet how
    to prove my suspicion, and don't know how to
    deduce it from the source code.

    That there are two kinds of "frozen" terms,
    acyclic and cyclic, emerged a few days ago in
    formerly Jekejeke Prolog. I can represent it

    inside the terms as null versus Variable[], in
    the variable spine. But I was not yet able to
    bring this feature to Dogelog Player. Because

    copy_term/2 does not yet attempt a "frozenness"
    analysis. Frozen Prolog terms are only produced
    during transpilation, consult or assert,

    but not yet during copy_term/2 in Dogelog Player.

    Bye

    Mild Shock schrieb:
    Hi,

    The fascinating result was that Jaffar Unification
    beats Scryer Prolog even on the JavaScript target.
    Not to speak of the Java target, which also beat it.

    But I rejected Jaffar Unification, because it
    temporarily modifies my frozen terms, which might
    impede some future program sharing across

    preemptive threads. So I rolled back the pointer-based
    Jaffar Unification, and went back to Map-based
    Union-Find. Overall the Map and a slightly bigger

    stack incur a factor 3x slowdown. So for Java I get now:

    /* Dogelog Player 2.1.1 for Java */

    % ?- bench, bench, bench.
    % [...]
    % % Zeit 469 ms, GC 0 ms, Lips 42, Uhr 24.09.2025 20:00
    % % Zeit 318 ms, GC 0 ms, Lips 62, Uhr 24.09.2025 20:00
    % % Zeit 329 ms, GC 0 ms, Lips 60, Uhr 24.09.2025 20:00
    % % Zeit 378 ms, GC 0 ms, Lips 52, Uhr 24.09.2025 20:00
    % true.

    % ?- bench2, bench2, bench2.
    % [...]
    % % Zeit 847 ms, GC 0 ms, Lips 23, Uhr 25.09.2025 01:04
    % % Zeit 506 ms, GC 0 ms, Lips 39, Uhr 25.09.2025 01:04
    % % Zeit 186 ms, GC 0 ms, Lips 118, Uhr 25.09.2025 01:04
    % % Zeit 418 ms, GC 0 ms, Lips 35, Uhr 25.09.2025 01:04
    % true.

    In the binary predicates (bench) the factor 3x is pretty
    much seen. But in the unary predicates (bench2) the
    factor is much higher, something like 10x-20x. And JavaScript

    doesn't help. But this might be the price to pay for
    a "non-intrusive" algorithm. Another name I have for my
    current take is "non-tainting" algorithms.

    I should keep a closer eye on what could be done "non-
    intrusive", or maybe devise an algorithm that is a
    mixture of "non-intrusive" and "intrusive".

    Bye

    Mild Shock schrieb:
    Hi,

    Scryer Prolog's unify_with_occurs_check/2 might have
    been fixed. I can now test the following:

    /* Scryer Prolog 0.9.4-660 */

    % ?- bench, bench, bench.
    % [...]
    %     % CPU time: 0.148s, 57 inferences
    %     % CPU time: 0.126s, 57 inferences
    %     % CPU time: 0.214s, 58 inferences
    %     % CPU time: 0.213s, 58 inferences
    %     true.

    % ?- bench2, bench2, bench2.
    % [...]
    %     % CPU time: 0.036s, 58 inferences
    %     % CPU time: 0.042s, 58 inferences
    %     % CPU time: 0.018s, 59 inferences
    %     % CPU time: 0.096s, 56 inferences
    %     true.

    This was the test case; it includes
    unify_with_occurs_check/2:

    hydra(0, _) :- !.
    hydra(N, h(X, X)) :- N > 0, N0 is N-1, hydra(N0, X).

    hydra(0, A, A) :- !.
    hydra(N, h(X, X), A) :- N > 0, N0 is N-1, hydra(N0, X, A).

    bench :-
        hydra(1048576, X), hydra(1048576, Y, Y),
        time(X = Y),
        time(unify_with_occurs_check(X, Y)),
        time(X == Y),
        time(compare(_, X, Y)), fail; true.

    bench2 :-
        hydra(1048576, X), hydra(1048576, Y, Y),
        time(copy_term(X-Y,_)),
        time(term_variables(X-Y,_)),
        time(\+ ground(X-Y)),
        time(acyclic_term(X-Y)),
        fail; true.
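    For intuition (my own sketch, with tuples standing in for h/2 compounds): hydra/2 builds a term with only N physical cells, since both arguments share the same cell, while its unfolding as a tree has 2^N - 1 cells. Any built-in that walks the tree instead of the DAG blows up:

```python
# Sketch: hydra(N) is a DAG of N shared cells whose tree unfolding
# is exponential. Tuples model h/2 compounds (an assumption).
def hydra(n):
    x = "var"                 # placeholder for the fresh variable
    for _ in range(n):
        x = ("h", x, x)       # both args are the SAME object: sharing
    return x

def dag_size(t, seen):
    if not isinstance(t, tuple) or id(t) in seen:
        return
    seen.add(id(t))
    for arg in t[1:]:
        dag_size(arg, seen)

seen = set()
dag_size(hydra(25), seen)
assert len(seen) == 25        # 25 cells, versus 2**25 - 1 tree nodes
```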

    Bye

    Mild Shock schrieb:
    Hi,

    Since some idiots blocked me on Scryer Prolog issues,
    I raise the issue here. Basically unify_with_occurs_check/2
    probably uses a different implementation of unification

    than the one used for (=)/2, because it doesn't scale. I find:

    /* Scryer Prolog */
    ?- test3(25).
        % CPU time: 0.001s, 57 inferences
        true.

    ?- test4(25).
        % CPU time: 2.133s, 57 inferences
        true.

    The expectation would be that unify_with_occurs_check/2
    scales just like it does in SWI-Prolog. In
    SWI-Prolog I find:

    /* SWI-Prolog 9.3.30 */
    ?- test3(25).
    % -1 inferences, 0.000 CPU in 0.000 seconds (0% CPU, Infinite Lips)
    true.

    ?- test4(25).
    % -1 inferences, 0.000 CPU in 0.000 seconds (0% CPU, Infinite Lips)
    true.

    The test case was simply a hydra variant, actually the
    last hydra modification posted by @kuniaki, which I
    am currently running:

    hydra(0, _) :- !.
    hydra(N, h(X, X)):- N>0, N0 is N-1, hydra(N0, X).

    hydra(0, A, A) :- !.
    hydra(N, h(X, X), A):- N>0, N0 is N-1, hydra(N0, X, A).

    test3(N) :- hydra(N, X), hydra(N, Y, Y),
        time(X = Y).

    test4(N) :- hydra(N, X), hydra(N, Y, Y),
        time(unify_with_occurs_check(X, Y)).

    But of course there is a cut (!) in the first rules.
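    A plausible explanation (an assumption on my side, not verified against the Scryer Prolog sources) is that the occurs check walks the term as a tree. A memoized occurs check visits each shared cell only once and stays linear on hydra terms:

```python
# Sketch of an occurs check that is linear on shared terms: once a
# cell is known not to contain the variable, it is never revisited.
# Without the `done` set, the same code is exponential on hydra terms.
def occurs(var, term, done=None):
    done = set() if done is None else done
    if term is var:
        return True
    if not isinstance(term, tuple) or id(term) in done:
        return False
    done.add(id(term))
    return any(occurs(var, arg, done) for arg in term[1:])

v, w = object(), object()
t = v
for _ in range(200):
    t = ("h", t, t)           # a small hydra with v at the bottom
assert occurs(v, t) and not occurs(w, t)
```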

    Mild Shock schrieb:
    Hi,

    WebPL is already outdated I guess. It doesn't
    show the versions of the other Prolog systems
    it is using. While I had these results for

    the primes example in the WebPL playground:

    /* Trealla Prolog WASM */
    (23568.9ms)

    When I run the example here:

    https://php.energy/trealla.html

    I get better results:

    /* trealla-js 0.27.1 */

    ?- time(test).
    % Time elapsed 9.907s, 11263917 Inferences, 1.137 MLips

    Bye




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Sep 25 02:21:16 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    If there were subcategories "acyclic"
    and "cyclic" inside the "frozen" category,
    one could indeed safely use a hybrid algorithm

    that is non-intrusive for frozen terms, and
    intrusive for non-frozen terms. Actually calling
    it hybrid is a little overkill. It would just

    stop at frozen terms. If it had the subcategories,
    the built-in acyclic_term/1 could also stop, and
    draw its result from the subcategory. This works

    already in formerly Jekejeke Prolog, but not
    yet in Dogelog Player. That the rollback also
    gave a 10x-20x slowdown for the unary

    predicates is a little annoying. I must find a compromise.

    Bye

    Mild Shock schrieb:
    Hi,

    I also tried to measure Trealla Prolog. But the
    measurements are strange, always 0.001 secs or
    something. My suspicion is that Trealla Prolog

    might apply "frozenness" to cyclic terms. A form
    of hash consing, which gives Trealla Prolog
    enough information to turn certain operations

    practically into no-ops. I don't know yet how
    to prove my suspicion, and don't know how to
    deduce it from the source code.

    That there are two kinds of "frozen" terms,
    acyclic and cyclic, emerged a few days ago in
    formerly Jekejeke Prolog. I can represent it

    inside the terms as null versus Variable[], in
    the variable spine. But I was not yet able to
    bring this feature to Dogelog Player. Because

    copy_term/2 does not yet attempt a "frozenness"
    analysis. Frozen Prolog terms are only produced
    during transpilation, consult or assert,

    but not yet during copy_term/2 in Dogelog Player.

    Bye

    Mild Shock schrieb:
    Hi,

    The fascinating result was that Jaffar unification
    beats Scryer Prolog even on the JavaScript target.
    Not to speak of the Java target, which also beat it.

    But I rejected Jaffar unification, because it
    temporarily modifies my frozen terms, which might
    impede some future program sharing across

    preemptive threads. So I rolled back pointer-based
    Jaffar unification, and went back to map-based
    union-find. Overall the map and a slightly bigger

    stack incur a factor 3x slowdown. So for Java I now get:

    /* Dogelog Player 2.1.1 for Java */

    % ?- bench, bench, bench.
    % [...]
    % % Zeit 469 ms, GC 0 ms, Lips 42, Uhr 24.09.2025 20:00
    % % Zeit 318 ms, GC 0 ms, Lips 62, Uhr 24.09.2025 20:00
    % % Zeit 329 ms, GC 0 ms, Lips 60, Uhr 24.09.2025 20:00
    % % Zeit 378 ms, GC 0 ms, Lips 52, Uhr 24.09.2025 20:00
    % true.

    % ?- bench2, bench2, bench2.
    % [...]
    % % Zeit 847 ms, GC 0 ms, Lips 23, Uhr 25.09.2025 01:04
    % % Zeit 506 ms, GC 0 ms, Lips 39, Uhr 25.09.2025 01:04
    % % Zeit 186 ms, GC 0 ms, Lips 118, Uhr 25.09.2025 01:04
    % % Zeit 418 ms, GC 0 ms, Lips 35, Uhr 25.09.2025 01:04
    % true.

    In the binary predicates (bench) the factor 3x is
    clearly visible. But in the unary predicates (bench2) the
    factor is much higher, something like 10x-20x. And JavaScript

    doesn't help. But this might be the price to pay for
    a "non-intrusive" algorithm. Another name I have for my
    current take is "non-tainting" algorithms.

    I should take a closer look at what can be done "non-intrusively",
    or maybe devise an algorithm that is a mixture of
    "non-intrusive" and "intrusive".

    Bye

    Mild Shock schrieb:
    Hi,

    Scryer Prolog's unify_with_occurs_check/2 might have
    been fixed. I can now test the following:

    /* Scryer Prolog 0.9.4-660 */

    % ?- bench, bench, bench.
    % [...]
    %     % CPU time: 0.148s, 57 inferences
    %     % CPU time: 0.126s, 57 inferences
    %     % CPU time: 0.214s, 58 inferences
    %     % CPU time: 0.213s, 58 inferences
    %     true.

    % ?- bench2, bench2, bench2.
    % [...]
    %     % CPU time: 0.036s, 58 inferences
    %     % CPU time: 0.042s, 58 inferences
    %     % CPU time: 0.018s, 59 inferences
    %     % CPU time: 0.096s, 56 inferences
    %     true.

    This was the test case; it includes
    unify_with_occurs_check/2:

    hydra(0, _) :- !.
    hydra(N, h(X, X)) :- N > 0, N0 is N-1, hydra(N0, X).

    hydra(0, A, A) :- !.
    hydra(N, h(X, X), A) :- N > 0, N0 is N-1, hydra(N0, X, A).

    bench :-
        hydra(1048576, X), hydra(1048576, Y, Y),
        time(X = Y),
        time(unify_with_occurs_check(X, Y)),
        time(X == Y),
        time(compare(_, X, Y)), fail; true.

    bench2 :-
        hydra(1048576, X), hydra(1048576, Y, Y),
        time(copy_term(X-Y,_)),
        time(term_variables(X-Y,_)),
        time(\+ ground(X-Y)),
        time(acyclic_term(X-Y)),
        fail; true.

    Bye

    Mild Shock schrieb:
    Hi,

    Since some idiots blocked me on Scryer Prolog issues,
    I raise the issue here. Basically unify_with_occurs_check/2
    probably uses a different implementation of unification

    than the one used for (=)/2, because it doesn't scale. I find:

    /* Scryer Prolog */
    ?- test3(25).
        % CPU time: 0.001s, 57 inferences
        true.

    ?- test4(25).
        % CPU time: 2.133s, 57 inferences
        true.

    The expectation would be that unify_with_occurs_check/2
    scales just like it does in SWI-Prolog. In
    SWI-Prolog I find:

    /* SWI-Prolog 9.3.30 */
    ?- test3(25).
    % -1 inferences, 0.000 CPU in 0.000 seconds (0% CPU, Infinite Lips)
    true.

    ?- test4(25).
    % -1 inferences, 0.000 CPU in 0.000 seconds (0% CPU, Infinite Lips)
    true.

    The test case was simply a hydra variant, actually the
    last hydra modification posted by @kuniaki, which I
    am currently running:

    hydra(0, _) :- !.
    hydra(N, h(X, X)):- N>0, N0 is N-1, hydra(N0, X).

    hydra(0, A, A) :- !.
    hydra(N, h(X, X), A):- N>0, N0 is N-1, hydra(N0, X, A).

    test3(N) :- hydra(N, X), hydra(N, Y, Y),
        time(X = Y).

    test4(N) :- hydra(N, X), hydra(N, Y, Y),
        time(unify_with_occurs_check(X, Y)).

    But of course there is a cut (!) in the first rules.

    Mild Shock schrieb:
    Hi,

    WebPL is already outdated I guess. It doesn't
    show the versions of the other Prolog systems
    it is using. While I had these results for

    the primes example in the WebPL playground:

    /* Trealla Prolog WASM */
    (23568.9ms)

    When I run the example here:

    https://php.energy/trealla.html

    I get better results:

    /* trealla-js 0.27.1 */

    ?- time(test).
    % Time elapsed 9.907s, 11263917 Inferences, 1.137 MLips

    Bye





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 26 12:19:51 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    I first thought the unify_with_occurs_check/2 problem
    was gone, when I tested this:

    % ?- bench, bench, bench.
    % [...]
    % % CPU time: 0.148s, 57 inferences
    % % CPU time: 0.126s, 57 inferences

    But I did the test wrongly; basically the preceding
    (=)/2 had bound a variable, so that unify_with_occurs_check/2
    didn't have to perform an occurs check.

    If I undo the binding of the variable from (=)/2 before
    going into the testing of unify_with_occurs_check/2,
    I get the "bug" again:

    % ?- bench, bench, bench.
    % % CPU time: 0.203s, 37 inferences
    % %%% hangs

    But since ground/1 etc. can do hydra, I suspect
    the Scryer Prolog team will sooner or later figure
    out how to do the occurs check so that it can

    also do hydra. This is the test case now:

    hydra(0, _) :- !.
    hydra(N, h(X, X)) :- N > 0, N0 is N-1, hydra(N0, X).

    hydra(0, A, A) :- !.
    hydra(N, h(X, X), A) :- N > 0, N0 is N-1, hydra(N0, X, A).

    bench :-
        hydra(1048576, X), hydra(1048576, Y, Y),
        time(\+ \+ X = Y),
        time(\+ \+ unify_with_occurs_check(X, Y)),
        time(\+ X == Y),
        time(compare(_, X, Y)), fail; true.
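    The reason the \+ \+ wrapper makes the difference can be sketched with a minimal trail model (my own sketch, not the actual Scryer Prolog machinery): leaving the double negation undoes every binding made inside it, so the next built-in starts from unbound variables again:

```python
# Minimal trail sketch: bindings made inside \+ \+ Goal are recorded
# on a trail and undone on exit, so X is unbound again afterwards.
class Var:
    def __init__(self):
        self.ref = None       # None means unbound

def bind(v, t, trail):
    v.ref = t
    trail.append(v)           # remember the binding for undo

def undo(trail, mark):
    while len(trail) > mark:  # pop back to the choice point
        trail.pop().ref = None

trail, x = [], Var()
mark = len(trail)
bind(x, "h", trail)           # inside the double negation: X = h
undo(trail, mark)             # leaving \+ \+ restores the state
assert x.ref is None          # X is unbound again
```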

    Bye

    Mild Shock schrieb:
    Hi,

    Since some idiots blocked me on Scryer Prolog issues,
    I raise the issue here. Basically unify_with_occurs_check/2
    probably uses a different implementation of unification

    than the one used for (=)/2, because it doesn't scale. I find:

    /* Scryer Prolog */
    ?- test3(25).
        % CPU time: 0.001s, 57 inferences
        true.

    ?- test4(25).
        % CPU time: 2.133s, 57 inferences
        true.

    The expectation would be that unify_with_occurs_check/2
    scales just like it does in SWI-Prolog. In
    SWI-Prolog I find:

    /* SWI-Prolog 9.3.30 */
    ?- test3(25).
    % -1 inferences, 0.000 CPU in 0.000 seconds (0% CPU, Infinite Lips)
    true.

    ?- test4(25).
    % -1 inferences, 0.000 CPU in 0.000 seconds (0% CPU, Infinite Lips)
    true.

    The test case was simply a hydra variant, actually the
    last hydra modification posted by @kuniaki, which I
    am currently running:

    hydra(0, _) :- !.
    hydra(N, h(X, X)):- N>0, N0 is N-1, hydra(N0, X).

    hydra(0, A, A) :- !.
    hydra(N, h(X, X), A):- N>0, N0 is N-1, hydra(N0, X, A).

    test3(N) :- hydra(N, X), hydra(N, Y, Y),
        time(X = Y).

    test4(N) :- hydra(N, X), hydra(N, Y, Y),
        time(unify_with_occurs_check(X, Y)).

    But of course there is a cut (!) in the first rules.

    Mild Shock schrieb:
    Hi,

    WebPL is already outdated I guess. It doesn't
    show the versions of the other Prolog systems
    it is using. While I had these results for

    the primes example in the WebPL playground:

    /* Trealla Prolog WASM */
    (23568.9ms)

    When I run the example here:

    https://php.energy/trealla.html

    I get better results:

    /* trealla-js 0.27.1 */

    ?- time(test).
    % Time elapsed 9.907s, 11263917 Inferences, 1.137 MLips

    Bye


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Oct 13 09:49:15 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Because GenX and later suffer from:

    Somehow the methods and tools to realize
    efficient DCGs in Prolog are missing. Most
    DCG attempts that one sees succumb

    to some declarative nonsense, creating
    exponentially many spurious choice points;
    you rarely find somebody mastering the art.

    Modern programmers fancy nothing more than
    throwing a set of foreign libraries at their
    Prolog system project. This is best seen in WebPL:

    LALRPOP MIT/Apache-2.0 Generate the parser https://github.com/w-henderson/WebPL/blob/main/dissertation.pdf

    So there is no aim at creating a self-hosting
    Prolog system. There is a deep distrust of
    DCGs. But why build a Prolog system that will

    possibly ultimately have DCGs, when you distrust
    DCGs? The second problem of GenX and later
    is probably that they don't know how to bootstrap

    a Prolog system B via another Prolog system A.

    Bye

    P.S.: The results of using a parser tool are
    often frustrating on the following levels:
    - No operator table
    - Directives are fixed
    - Introducing DCGs needs a rebuild

    Scryer Prolog has an operator table, but most likely
    used a parser tool at some point in the project,
    or its programming templates borrow from parser tools.

    Probably the worst recent example of building a
    Prolog system, one which defers its parsing to
    Rust itself. So here we are in 2025

    and there is not a single self-hosting Prolog
    yet, while all other programming languages such
    as Java, golang, etc. are self-hosting.

    Mild Shock schrieb:
    Hi,

    WebPL is already outdated I guess. It doesn't
    show the versions of the other Prolog systems
    it is using. While I had these results for

    the primes example in the WebPL playground:

    /* Trealla Prolog WASM */
    (23568.9ms)

    When I run the example here:

    https://php.energy/trealla.html

    I get better results:

    /* trealla-js 0.27.1 */

    ?- time(test).
    % Time elapsed 9.907s, 11263917 Inferences, 1.137 MLips

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Oct 13 15:09:44 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Maybe it is with Prolog like with Dinosaurs,
    when they got extinct by a meteor crash.
    All that survived were some small rodents
    as the story goes. Their advantage:
    - Small Size
    - Burrowing Behavior
    - Omnivorous Diet
    - Reproductive Speed

    Now Scryer Prolog uses a shift-reduce parser under
    the hood, the small rodent. But might it possibly
    shove heavy tabled DCGs into its end-users'
    faces? So that this left recursion can be solved:

    expr --> expr + factor
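    For comparison, the classic answer without tabling is to eliminate the left recursion by hand; a recursive-descent sketch of that rewrite (a hypothetical minimal grammar with integer factors):

```python
# Sketch: expr --> expr, [+], factor rewritten into an iteration,
# the standard left-recursion elimination a hand-written parser uses.
def parse_expr(toks, i=0):
    left, i = parse_factor(toks, i)
    while i < len(toks) and toks[i] == "+":   # the expr_rest loop
        right, i = parse_factor(toks, i + 1)
        left = ("+", left, right)             # left-associative tree
    return left, i

def parse_factor(toks, i):
    if i < len(toks) and isinstance(toks[i], int):
        return toks[i], i + 1
    raise SyntaxError("factor expected at %d" % i)

assert parse_expr([1, "+", 2, "+", 3]) == (("+", ("+", 1, 2), 3), 5)
```

    Tabling instead lets the left-recursive rule stand as written, at the cost of the tabling machinery.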

    "Constraint programming" was already killed
    when ILOG was bought by IBM in 2008. ILOG's
    optimization solver, CPLEX, has its roots in
    the CHIP (Constraint Handling in Prolog)

    language, 1985 at the European Computer-Industry
    Research Centre (ECRC), initially using a Prolog
    language interface. So it is not even a French product.
    By the time ILOG became a commercial powerhouse,

    Prolog largely disappeared from their product
    codebases. There was a transition to C++ for
    performance and industry adoption. I have the
    gut feeling that tabled DCG is similarly dead,

    especially in the light of large language models (LLMs).
    But I cannot yet point the finger perfectly
    at the issues. I am currently exploring the sad problem
    domain of this most likely dead horse.

    A problem could be the overkill of "logic grammars"
    that do not tolerate incorrect texts and cannot
    be applied partially so easily. Most likely one has
    to scrutinize the assumptions behind tabled DCG, and

    review again the possible options beyond the beaten paths.

    Bye

    Mild Shock schrieb:
    Hi,

    Because GenX and later suffer from:

    Somehow the methods and tools to realize
    efficient DCGs in Prolog are missing. Most
    DCG attempts that one sees succumb

    to some declarative nonsense, creating
    exponentially many spurious choice points;
    you rarely find somebody mastering the art.

    Modern programmers fancy nothing more than
    throwing a set of foreign libraries at their
    Prolog system project. This is best seen in WebPL:

    LALRPOP MIT/Apache-2.0 Generate the parser https://github.com/w-henderson/WebPL/blob/main/dissertation.pdf

    So there is no aim at creating a self-hosting
    Prolog system. There is a deep distrust of
    DCGs. But why build a Prolog system that will

    possibly ultimately have DCGs, when you distrust
    DCGs? The second problem of GenX and later
    is probably that they don't know how to bootstrap

    a Prolog system B via another Prolog system A.

    Bye

    P.S.: The results of using a parser tool are
    often frustrating on the following levels:
    - No operator table
    - Directives are fixed
    - Introducing DCGs needs a rebuild

    Scryer Prolog has an operator table, but most likely
    used a parser tool at some point in the project,
    or its programming templates borrow from parser tools.

    Probably the worst recent example of building a
    Prolog system, one which defers its parsing to
    Rust itself. So here we are in 2025

    and there is not a single self-hosting Prolog
    yet, while all other programming languages such
    as Java, golang, etc. are self-hosting.

    Mild Shock schrieb:
    Hi,

    WebPL is already outdated I guess. It doesn't
    show the versions of the other Prolog systems
    it is using. While I had these results for

    the primes example in the WebPL playground:

    /* Trealla Prolog WASM */
    (23568.9ms)

    When I run the example here:

    https://php.energy/trealla.html

    I get better results:

    /* trealla-js 0.27.1 */

    ?- time(test).
    % Time elapsed 9.907s, 11263917 Inferences, 1.137 MLips

    Bye


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Oct 15 02:38:34 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    I spent some time thinking about my primes.pl
    test, and came to the conclusion that it
    mainly tests the Prolog ALU. Things like

    integer successor or integer modulo. Then
    I found that Java has Math.floorMod(), which
    I wasn't using yet. And the results are better:

    /* Dogelog Player 2.1.2 for Java, today */
    ?- time(test).
    % Zeit 286 ms, GC 1 ms, Lips 26302430, Uhr 15.10.2025 02:31
    true.

    Maybe the Java backend picks a CPU instruction
    for Math.floorMod() instead of executing the
    longer code sequence that is needed to correct

    rem/2 into mod/2. Who knows. I also reorganized
    the code a little bit, and eliminated an extra
    method call in all arithmetic functions, by

    inlining the arithmetic function body in the
    evaluable predicate definition code. Comparison
    to old measurements and some measurements of

    other Prolog systems:

    /* Dogelog Player 2.1.2 for Java, weeks ago */
    ?- time(test).
    % Zeit 378 ms, GC 1 ms, Lips 19900780, Uhr 28.08.2025 17:44
    true.

    /* SWI-Prolog 9.0.4 */
    ?- time(test).
    % 7,506,639 inferences, 0.363 CPU in 0.362 seconds
    (100% CPU, 20693560 Lips)
    true.

    /* Scryer Prolog 0.9.4-639 */
    ?- time(test).
    % CPU time: 0.365s, 7_517_613 inferences
    true.

    /* Trealla Prolog 2.82.23-3 */
    ?- time(test).
    % Time elapsed 0.868s, 11263917 Inferences, 12.983 MLips
    true.

    Bye

    P.S.: The code uses the hated mathematical mod/2,
    and not the cheaper rem/2 that CPUs usually have:

    test :-
        len(L, 1000),
        primes(L, _).

    primes([], 1).
    primes([J|L], J) :-
        primes(L, I),
        K is I+1,
        search(L, K, J).

    search(L, I, J) :-
        mem(X, L),
        I mod X =:= 0, !,
        K is I+1,
        search(L, K, J).
    search(_, I, I).

    mem(X, [X|_]).
    mem(X, [_|Y]) :-
        mem(X, Y).

    len([], 0) :- !.
    len([_|L], N) :-
        N > 0,
        M is N-1,
        len(L, M).
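    The rem-to-mod correction mentioned above can be sketched as follows (my reconstruction of the generic sequence, not the actual Dogelog Player code); Math.floorMod collapses the whole thing into one call:

```python
# Sketch: mathematical mod/2 derived from truncating rem/2, the
# correction sequence that a single floor-mod instruction replaces.
def trunc_rem(x, y):              # rem/2: sign follows the dividend
    q = abs(x) // abs(y)          # truncating quotient, built by hand
    if (x < 0) != (y < 0):
        q = -q
    return x - q * y

def floor_mod(x, y):              # mod/2: sign follows the divisor
    r = trunc_rem(x, y)
    if r != 0 and (r < 0) != (y < 0):
        r += y                    # correct when the signs differ
    return r

assert floor_mod(-7, 3) == -7 % 3 == 2
assert floor_mod(7, -3) == 7 % -3 == -2
```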

    Mild Shock schrieb:
    Hi,

    WebPL is already outdated I guess. It doesn't
    show the versions of the other Prolog systems
    it is using. While I had these results for

    the primes example in the WebPL playground:

    /* Trealla Prolog WASM */
    (23568.9ms)

    When I run the example here:

    https://php.energy/trealla.html

    I get better results:

    /* trealla-js 0.27.1 */

    ?- time(test).
    % Time elapsed 9.907s, 11263917 Inferences, 1.137 MLips

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Oct 15 04:33:12 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    The change from 378 ms to 286 ms, around 25-30%,
    is insane. But I did both tests on a novel AI CPU,
    to be precise on an AMD Ryzen AI 7 350.

    But somehow I picked up rumors that AI CPUs now
    might do Neural Network Branch Prediction. The
    idea seems to exist in hardware at least since (2012):

    Machine learning and artificial intelligence are
    the current hype (again). In their new Ryzen
    processors, AMD advertises the Neural Net
    Prediction. It turns out this was already
    used in their older (2012) Piledriver architecture
    used for example in the AMD A10-4600M. It is also
    present in recent Samsung processors such as the
    one powering the Galaxy S7. What is it really? https://chasethedevil.github.io/post/the_neural_network_in_your_cpu/

    It can be done with Convolutional Neural Networks (CNNs):

    BranchNet: A Convolutional Neural Network to
    Predict Hard-To-Predict Branches
    To this end, Tarsa et al. proposed using convolutional
    neural networks (CNNs) that are trained at
    compile time to accurately predict branches that
    TAGE cannot. Given enough profiling coverage, CNNs
    learn input-independent branch correlations. https://microarch.org/micro53/papers/738300a118.pdf

    Interestingly, the above showcases PGO-based
    machine learning for branch predictors. No clue
    how they construct the CPU so that they can feed

    it with offline-constructed neural networks for
    its own execution. Maybe an optimizer uses it?
    But I guess a more modern solution would not only

    use CNNs, but also an attention mechanism.

    Bye

    Mild Shock schrieb:
    Hi,

    I spent some time thinking about my primes.pl
    test. And came to the conclusion that it
    mainly tests the Prolog ALU. Things like

    integer successor or integer modulo. Then
    I found that Java has Math.floorMod() which
    I wasn't using yet. And peng results are better:

    /* Dogelog Player 2.1.2 for Java, today */
    ?- time(test).
    % Zeit 286 ms, GC 1 ms, Lips 26302430, Uhr 15.10.2025 02:31
    true.

    Maybe the Java backend picks a CPU instruction
    for Math.floorMod() instead of executing the
    longer code sequence that is needed to correct

    rem/2 into mod/2. Who knows. I also reorganized
    the code a little bit, and eliminated an extra
    method call in all arithmetic functions, by

    inlining the arithmetic function body in the
    evaluable predicate definition code. Comparison
    to old measurements and some measurements of

    other Prolog systems:

    /* Dogelog Player 2.1.2 for Java, weeks ago */
    ?- time(test).
    % Zeit 378 ms, GC 1 ms, Lips 19900780, Uhr 28.08.2025 17:44
    true.

    /* SWI-Prolog 9.0.4 */
    ?- time(test).
    % 7,506,639 inferences, 0.363 CPU in 0.362 seconds
    (100% CPU, 20693560 Lips)
    true.

    /* Scryer Prolog 0.9.4-639 */
    ?- time(test).
    % CPU time: 0.365s, 7_517_613 inferences
    true.

    /* Trealla Prolog 2.82.23-3 */
    ?- time(test).
    % Time elapsed 0.868s, 11263917 Inferences, 12.983 MLips
    true.

    Bye

    P.S.: The code uses the hated mathematical mod/2,
    and not the cheaper rem/2 that CPUs usually have:

    test :-
        len(L, 1000),
        primes(L, _).

    primes([], 1).
    primes([J|L], J) :-
        primes(L, I),
        K is I+1,
        search(L, K, J).

    search(L, I, J) :-
        mem(X, L),
        I mod X =:= 0, !,
        K is I+1,
        search(L, K, J).
    search(_, I, I).

    mem(X, [X|_]).
    mem(X, [_|Y]) :-
        mem(X, Y).

    len([], 0) :- !.
    len([_|L], N) :-
        N > 0,
        M is N-1,
        len(L, M).

    Mild Shock schrieb:
    Hi,

    WebPL is already outdated I guess. It doesn't
    show the versions of the other Prolog systems
    it is using. While I had these results for

    the primes example in the WebPL playground:

    /* Trealla Prolog WASM */
    (23568.9ms)

    When I run the example here:

    https://php.energy/trealla.html

    I get better results:

    /* trealla-js 0.27.1 */

    ?- time(test).
    % Time elapsed 9.907s, 11263917 Inferences, 1.137 MLips

    Bye


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Oct 15 16:04:08 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    It seems I am having problems keeping pace with
    all the new fancy toys. I wasn't able to really
    benchmark my NPU on a Desktop AI machine,

    I picked the wrong driver. Need to try again.
    What worked was benchmarking Mobile AI machines.
    I just grabbed Geekbench AI and some devices:

    USA Fab, M4:

                 sANN   hANN    qANN
    iPad CPU     4848   7947    6353
    iPad GPU     9752   11383   10051
    iPad NPU     4873   36544   *51634*

    China Fab, Snapdragon:

                 sANN   hANN   qANN
    Redmi CPU    1044   950    1723
    Redmi GPU    480    905    737
    Redmi NNAPI  205    205    469
    Redmi QNN    226    226    *10221*

    The speed-up via NPU is a factor of 10x. See the column
    qANN, which means quantized artificial neural
    networks, when the NPU or QNN is picked.

    The mobile AI NPUs are optimized for using
    minimal amounts of energy and minimal amounts
    of space, squeezing (distilling) everything

    into INT8 and INT4.

    Bye

    Mild Shock schrieb:
    Hi,

    The change from 378 ms to 286 ms, around 25-30%,
    is insane. But I did both tests on a novel AI CPU,
    to be precise on an AMD Ryzen AI 7 350.

    But somehow I picked up rumors that AI CPUs now
    might do Neural Network Branch Prediction. The
    idea seems to exist in hardware at least since (2012):

    Machine learning and artificial intelligence are
    the current hype (again). In their new Ryzen
    processors, AMD advertises the Neural Net
    Prediction. It turns out this was already
    used in their older (2012) Piledriver architecture
    used for example in the AMD A10-4600M. It is also
    present in recent Samsung processors such as the
    one powering the Galaxy S7. What is it really? https://chasethedevil.github.io/post/the_neural_network_in_your_cpu/

    It can be done with Convolutional Neural Networks (CNNs):

    BranchNet: A Convolutional Neural Network to
    Predict Hard-To-Predict Branches
    To this end, Tarsa et al. proposed using convolutional
    neural networks (CNNs) that are trained at
    compile time to accurately predict branches that
    TAGE cannot. Given enough profiling coverage, CNNs
    learn input-independent branch correlations. https://microarch.org/micro53/papers/738300a118.pdf

    Interestingly, the above showcases PGO-based
    machine learning for branch predictors. No clue
    how they construct the CPU so that they can feed

    it with offline-constructed neural networks for
    its own execution. Maybe an optimizer uses it?
    But I guess a more modern solution would not only

    use CNNs, but also an attention mechanism.

    Bye

    Mild Shock schrieb:
    Hi,

    I spent some time thinking about my primes.pl
    test, and came to the conclusion that it
    mainly tests the Prolog ALU. Things like

    integer successor or integer modulo. Then
    I found that Java has Math.floorMod(), which
    I wasn't using yet. And the results are better:

    /* Dogelog Player 2.1.2 for Java, today */
    ?- time(test).
    % Zeit 286 ms, GC 1 ms, Lips 26302430, Uhr 15.10.2025 02:31
    true.

    Maybe the Java backend picks a CPU instruction
    for Math.floorMod() instead of executing the
    longer code sequence that is needed to correct

    rem/2 into mod/2. Who knows. I also reorganized
    the code a little bit, and eliminated an extra
    method call in all arithmetic functions, by

    inlining the arithmetic function body in the
    evaluable predicate definition code. Comparison
    to old measurements and some measurements of

    other Prolog systems:

    /* Dogelog Player 2.1.2 for Java, weeks ago */
    ?- time(test).
    % Zeit 378 ms, GC 1 ms, Lips 19900780, Uhr 28.08.2025 17:44
    true.

    /* SWI-Prolog 9.0.4 */
    ?- time(test).
    % 7,506,639 inferences, 0.363 CPU in 0.362 seconds
    (100% CPU, 20693560 Lips)
    true.

    /* Scryer Prolog 0.9.4-639 */
    ?- time(test).
    % CPU time: 0.365s, 7_517_613 inferences
    true.

    /* Trealla Prolog 2.82.23-3 */
    ?- time(test).
    % Time elapsed 0.868s, 11263917 Inferences, 12.983 MLips
    true.

    Bye

    P.S.: The code uses the hated mathematical mod/2,
    and not the cheaper rem/2 that CPUs usually have:

    test :-
        len(L, 1000),
        primes(L, _).

    primes([], 1).
    primes([J|L], J) :-
        primes(L, I),
        K is I+1,
        search(L, K, J).

    search(L, I, J) :-
        mem(X, L),
        I mod X =:= 0, !,
        K is I+1,
        search(L, K, J).
    search(_, I, I).

    mem(X, [X|_]).
    mem(X, [_|Y]) :-
        mem(X, Y).

    len([], 0) :- !.
    len([_|L], N) :-
        N > 0,
        M is N-1,
        len(L, M).

    Mild Shock schrieb:
    Hi,

    WebPL is already outdated I guess. It doesn't
    show the versions of the other Prolog systems
    it is using. While I had these results for

    the primes example in the WebPL playground:

    /* Trealla Prolog WASM */
    (23568.9ms)

    When I run the example here:

    https://php.energy/trealla.html

    I get better results:

    /* trealla-js 0.27.1 */

    ?- time(test).
    % Time elapsed 9.907s, 11263917 Inferences, 1.137 MLips

    Bye



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Oct 15 16:10:42 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    But not only Mobile AI and Desktop AI are making
    a broader imprint now. We might also experience
    Workstation AI, with a 3'000.- USD price tag:

    You Can't Buy This... Yet! The NVIDIA GB10 from Dell
    The New Superchip that Terrifies the Cloud! https://www.youtube.com/watch?v=x1qViw4xyVo

    So what's going on? I was asking Phind, which is
    driven by a 70B model tailored towards developers:

    Q: Is there an AI inflection point right now,
    with NPUs in mobile, desktop and workstation?

    A: Evidence of the Inflection Point

    - Mobile Leadership
    NPUs originated in smartphones
    Now becoming ubiquitous across all device types
    Enabling sophisticated AI features at consumer price points

    - Desktop Revolution
    Major manufacturers implementing NPUs across product lines
    Apple's Neural Engine integrated into M-series chips
    Qualcomm, Intel, and AMD incorporating AI accelerators

    - Workstation Transformation
    Professional-grade NPUs in mobile workstations
    Demonstrated superior performance for AI-specific tasks
    Enabling local processing of previously cloud-dependent workloads

    https://www.phind.com/search/cmgs1s6jv00023h67g5z2aaa0

    Bye

    Mild Shock schrieb:
    Hi,

    It seems I am having problems keeping pace with
    all the new fancy toys. I wasn't able to really
    benchmark my NPU from a Desktop AI machine,

    picked the wrong driver. Need to try again.
    What worked was benchmarking Mobile AI machines.
    I just grabbed Geekbench AI and some devices:

    USA Fab, M4:

                 sANN     hANN     qANN
    iPad CPU     4848     7947     6353
    iPad GPU     9752    11383    10051
    iPad NPU     4873    36544  *51634*

    China Fab, Snapdragon:

                   sANN    hANN     qANN
    Redmi CPU      1044     950     1723
    Redmi GPU       480     905      737
    Redmi NNAPI     205     205      469
    Redmi QNN       226     226  *10221*

    The speed-up via NPU is a factor of 10x. See the
    column qANN, which means quantized artificial
    neural networks, when NPU or QNN is picked.

    The mobile AI NPUs are optimized to use
    minimal amounts of energy and minimal amounts
    of space, squeezing (distilling) everything

    into INT8 and INT4.

    Bye

    Mild Shock schrieb:
    Hi,

    The change from 378 ms to 286 ms, around 25-30%,
    is insane. But I did both tests on a novel AI CPU,
    to be precise on an AMD Ryzen AI 7 350.

    But somehow I picked up rumors that AI CPUs now
    might do Neural Network Branch Prediction. The
    idea seems to exist in hardware at least since (2012):

    Machine learning and artificial intelligence are
    the current hype (again). In their new Ryzen
    processors, AMD advertises the Neural Net
    Prediction. It turns out this was already
    used in their older (2012) Piledriver architecture
    used for example in the AMD A10-4600M. It is also
    present in recent Samsung processors such as the
    one powering the Galaxy S7. What is it really?
    https://chasethedevil.github.io/post/the_neural_network_in_your_cpu/

    It can be done with Convolutional Neural Networks (CNNs):

    BranchNet: A Convolutional Neural Network to
    Predict Hard-To-Predict Branches
    To this end, Tarsa et al. proposed using convolutional
    neural networks (CNNs) that are trained at
    compiletime to accurately predict branches that
    TAGE cannot. Given enough profiling coverage, CNNs
    learn input-independent branch correlations.
    https://microarch.org/micro53/papers/738300a118.pdf

    Interestingly the above showcases PGO-based
    Machine Learning for Branch Predictors. No clue
    how they construct the CPU, so that they can feed

    it with offline-constructed neural networks for
    their own execution. Maybe an optimizer uses it?
    But I guess a more modern solution would not only

    use a CNN, but also an Attention Mechanism.

    Bye

    Mild Shock schrieb:
    Hi,

    I spent some time thinking about my primes.pl
    test. And came to the conclusion that it
    mainly tests the Prolog ALU. Things like

    integer successor or integer modulo. Then
    I found that Java has Math.floorMod(), which
    I wasn't using yet. And bang, the results are better:

    /* Dogelog Player 2.1.2 for Java, today */
    ?- time(test).
    % Zeit 286 ms, GC 1 ms, Lips 26302430, Uhr 15.10.2025 02:31
    true.

    Maybe the Java backend picks a CPU instruction
    for Math.floorMod() instead of executing the
    longer code sequence that is needed to correct

    rem/2 into mod/2. Who knows. I also reorganized
    the code a little bit, and eliminated an extra
    method call in all arithmetic functions, by

    inlining the arithmetic function body in the
    evaluable predicate definition code. Comparison
    to old measurements and some measurements of

    other Prolog systems:

    /* Dogelog Player 2.1.2 for Java, weeks ago */
    ?- time(test).
    % Zeit 378 ms, GC 1 ms, Lips 19900780, Uhr 28.08.2025 17:44
    true.

    /* SWI-Prolog 9.0.4 */
    ?- time(test).
    % 7,506,639 inferences, 0.363 CPU in 0.362 seconds
    (100% CPU, 20693560 Lips)
    true.

    /* Scryer Prolog 0.9.4-639 */
    ?- time(test).
    % CPU time: 0.365s, 7_517_613 inferences
    true.

    /* Trealla Prolog 2.82.23-3 */
    ?- time(test).
    % Time elapsed 0.868s, 11263917 Inferences, 12.983 MLips
    true.
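    Without an intrinsic for Math.floorMod(), the "longer code sequence
    that is needed to correct rem/2 into mod/2" could look roughly like
    the following Java sketch (the helper name floorModByHand is mine;
    this is one common correction idiom, not Dogelog's actual code):

```java
// Sketch: correcting the CPU's truncating remainder (rem/2)
// into a flooring modulo (mod/2) by hand.
public class FloorModByHand {
    static long floorModByHand(long i, long x) {
        long r = i % x;              // truncating remainder (rem/2)
        if (r != 0 && (r ^ x) < 0) { // nonzero and signs differ:
            r += x;                  // shift into the divisor's sign range
        }
        return r;
    }

    public static void main(String[] args) {
        System.out.println(floorModByHand(-7, 3)); // 2, same as Math.floorMod(-7, 3)
        System.out.println(floorModByHand(7, 3));  // 1, same as 7 % 3
    }
}
```

    The extra compare-and-add per division is what a dedicated
    CPU instruction, if the JIT picks one, would avoid.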

    Bye

    P.S.: The code uses the hated mathematical mod/2,
    and not the cheaper rem/2 that CPUs usually have:

    test :-
        len(L, 1000),
        primes(L, _).

    primes([], 1).
    primes([J|L], J) :-
        primes(L, I),
        K is I+1,
        search(L, K, J).

    search(L, I, J) :-
        mem(X, L),
        I mod X =:= 0, !,
        K is I+1,
        search(L, K, J).
    search(_, I, I).

    mem(X, [X|_]).
    mem(X, [_|Y]) :-
        mem(X, Y).

    len([], 0) :- !.
    len([_|L], N) :-
        N > 0,
        M is N-1,
        len(L, M).

    Mild Shock schrieb:
    Hi,

    WebPL is already outdated I guess. It doesn't
    show the versions of the other Prolog systems
    it is using. While I had these results for

    the primes example in the WebPL playground:

    /* Trealla Prolog WASM */
    (23568.9ms)

    When I run the example here:

    https://php.energy/trealla.html

    I get better results:

    /* trealla-js 0.27.1 */

    ?- time(test).
    % Time elapsed 9.907s, 11263917 Inferences, 1.137 MLips

    Bye




    --- Synchronet 3.21a-Linux NewsLink 1.2