• The roots of Program Sharing (PS): J Strother Moore II (1973)

    From Mild Shock@janburse@fastmail.fm to sci.logic on Mon Aug 18 18:00:32 2025
    From Newsgroup: sci.logic

    Hi,

    J Strother Moore II is the Original Gangster (OG)
    of program sharing. Interestingly, structure sharing
    always meant program sharing in the theorem
    proving community back then:

    COMPUTATIONAL LOGIC: STRUCTURE SHARING AND
    PROOF OF PROGRAM PROPERTIES
    J Strother Moore II - 1973 https://era.ed.ac.uk/bitstream/handle/1842/2245/Moore-Thesis-1973-OCR.pdf

    Only the WAM community managed to institutionalize
    the term structure sharing, as a reduced form of
    program sharing, namely goal argument sharing:

    no longer using pairs of two pointers, skeleton and
    binding environment, to identify a Prolog term,
    but a simple single pointer per Prolog term.
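    For readers who never saw the pre-WAM representation, here is a toy model of it (my own Python sketch, not code from the thesis): a term instance is a pair of a shared skeleton and a binding environment, so the skeleton is stored once no matter how many instances exist.

```python
# Toy model of structure sharing: a term instance is a pair
# (skeleton, environment) instead of a freshly copied term.
# Compounds are tuples (functor, *args); ('var', n) is the n-th
# clause variable; atoms are plain strings.

def resolve(term, env):
    """Dereference a (skeleton, env) pair into a concrete term."""
    if isinstance(term, tuple) and term[0] == 'var':
        binding = env.get(term[1])
        if binding is None:
            return term                    # unbound variable
        skel, benv = binding               # bindings are themselves pairs
        return resolve(skel, benv)
    if isinstance(term, tuple):            # compound: resolve arguments
        return (term[0],) + tuple(resolve(a, env) for a in term[1:])
    return term                            # atom

# Skeleton of f(X, g(X)) -- stored once, shared by every instance.
skeleton = ('f', ('var', 0), ('g', ('var', 0)))

# Two instances of the same skeleton under different environments.
env1 = {0: ('a', {})}
env2 = {0: (('h', 'b'), {})}

print(resolve(skeleton, env1))   # ('f', 'a', ('g', 'a'))
print(resolve(skeleton, env2))   # ('f', ('h', 'b'), ('g', ('h', 'b')))
```

    The single-pointer WAM representation would instead build each instance as its own heap term, trading the double dereference for simpler term identity.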

    Bye
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to sci.logic on Mon Aug 18 18:02:48 2025
    From Newsgroup: sci.logic

    Hi,

    The PhD thesis below is a nice gem,
    including marvels such as:

    2.3 Avoiding Unnecessary OCCUR Checks
    It is possible to significantly reduce the number
    of calls to OCCUR during a resolution unification
    by the following observation. If two clauses are
    being resolved, they are standardized apart.

    Thus, a variable from the left-hand parent will not
    occur in a term from the right-hand parent unless
    during this unification, there has been a binding of a
    variable from the right to a term from the left.

    A similar statement holds for left-to-right bindings.
    Once again, in structure sharing, this condition
    is easy to check.
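    The bookkeeping behind that observation can be sketched in a few lines of Python (my own illustration, not Moore's code): run the occurs check only when a variable and a term come from the same parent, or when an earlier binding already crossed sides in the opposite direction.

```python
# Toy sketch of thesis section 2.3: after standardizing apart, a
# variable from one parent clause can occur in a term from the other
# parent only if an earlier binding in this unification crossed sides
# in the opposite direction. Track that with two flags.
# Terms: variables are ('var', name); compounds are (functor, *args).

OCCUR_CALLS = 0  # count how often the expensive check actually runs

def occurs(v, t, subst):
    """Full occurs check: does variable v occur in term t under subst?"""
    global OCCUR_CALLS
    OCCUR_CALLS += 1
    while isinstance(t, tuple) and t[0] == 'var' and t in subst:
        t = subst[t]                       # dereference bound variables
    if t == v:
        return True
    if isinstance(t, tuple) and t[0] != 'var':
        return any(occurs(v, a, subst) for a in t[1:])
    return False

def bind(var, var_side, term, term_side, subst, crossed):
    """Bind var to term; run the occurs check only when it can fail."""
    needed = (var_side == term_side) or crossed[(term_side, var_side)]
    if needed and occurs(var, term, subst):
        return False                       # would build an infinite term
    subst[var] = term
    if var_side != term_side:
        crossed[(var_side, term_side)] = True
    return True

subst = {}
crossed = {('L', 'R'): False, ('R', 'L'): False}

# Left variable X bound to right term f(Y): no check needed yet.
bind(('var', 'X'), 'L', ('f', ('var', 'Y')), 'R', subst, crossed)
print(OCCUR_CALLS)   # 0: the check was skipped

# Right variable Y bound to left term g(X): now the check must run,
# because the earlier L-to-R binding lets Y occur in g(X) -- and it
# does (X = f(Y), Y = g(X) would be cyclic), so the binding fails.
ok = bind(('var', 'Y'), 'R', ('g', ('var', 'X')), 'L', subst, crossed)
print(ok)            # False
```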

    The BAROQUE programming language is also
    a genius contraption: it can run functions
    backwards, basically a Prolog-based functional

    language, using the reified IF-THEN-ELSE
    from Ulrich Neumerkel. Only in 1973 there was
    no EMACS yet. Now we have Prologers that know

    EMACS but still know nothing otherwise.

    Bye

  • From Mild Shock@janburse@fastmail.fm to sci.logic on Sun Aug 31 23:59:21 2025
    From Newsgroup: sci.logic

    Hi,

    Whoa! I didn't know that lousy Microsoft
    Copilot-certified laptops are that fast:

    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    % Dogelog Player 2.1.1 for Java

    % AMD Ryzen 5 4500U
    % ?- time(test).
    % % Zeit 756 ms, GC 1 ms, Lips 9950390, Uhr 23.08.2025 02:45
    % true.

    % AMD Ryzen AI 7 350
    % ?- time(test).
    % % Zeit 378 ms, GC 1 ms, Lips 19900780, Uhr 28.08.2025 17:44
    % true.
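    A quick sanity check on those two runs (my arithmetic; Lips = logical inferences per second): inferences = Lips x seconds, so both machines do identical work and the newer one simply halves the wall time.

```python
# Inferences = Lips * seconds; the two benchmark runs above should
# describe the same workload, just executed at different speeds.
runs = [(9950390, 0.756),    # AMD Ryzen 5 4500U
        (19900780, 0.378)]   # AMD Ryzen AI 7 350

inferences = [round(lips * secs) for lips, secs in runs]
print(inferences)   # both about 7.5 million, and exactly equal
```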

    What happened to the Death of Moore's Law?
    But somehow memory speed, CPU-RAM and GPU-RAM,
    tripled. Possibly due to some Artificial

    Intelligence demand. And the bloody thing
    also has an NPU (Neural Processing Unit),
    nicely visible.

    Bye

    About the RAM speed: the L1, L2 and L3
    caches are bigger, so it's harder to poison
    the CPU. Also the CPU shows a revival of

    Hyper-Threading Technology (HTT), which
    AMD gives a different name: they call it
    Simultaneous Multithreading (SMT).

    https://www.cpubenchmark.net/compare/3702vs6397/AMD-Ryzen-5-4500U-vs-AMD-Ryzen-AI-7-350

    BTW: Still ticking along with the primes.pl example:

    test :-
        len(L, 1000),
        primes(L, _).

    primes([], 1).
    primes([J|L], J) :-
        primes(L, I),
        K is I+1,
        search(L, K, J).

    search(L, I, J) :-
        mem(X, L),
        I mod X =:= 0, !,
        K is I+1,
        search(L, K, J).
    search(_, I, I).

    mem(X, [X|_]).
    mem(X, [_|Y]) :-
        mem(X, Y).

    len([], 0) :- !.
    len([_|L], N) :-
        N > 0,
        M is N-1,
        len(L, M).


  • From Mild Shock@janburse@fastmail.fm to sci.logic on Mon Sep 1 00:34:22 2025
    From Newsgroup: sci.logic

    Hi,

    2025 will be the last year we hear of Python.
    This is just a tears-in-your-eyes eulogy:

    Python: The Documentary | An origin story https://www.youtube.com/watch?v=GfH4QL4VqJ0

    The Zen of Python is very different
    from the Zen of Copilot+. The bloody
    Copilot+ laptop doesn't use Python

    in its Artificial Intelligence:

    AI Content Extraction
    - Python involved? None at runtime;
      model runs in ONNX + DirectML on NPU

    AI Image Search
    - Python involved? None at runtime;
      on-device image features, fully compiled

    AI Phi Silica
    - Python involved? None at runtime;
      lightweight Phi model packaged as ONNX

    AI Semantic Analysis
    - Python involved? None at runtime;
      text understanding done via compiled
      ONNX operators

    Bye

  • From Mild Shock@janburse@fastmail.fm to sci.logic on Fri Sep 5 00:37:38 2025
    From Newsgroup: sci.logic

    Hi,

    Swiss AI Apertus
    Model ID: apertus-70b-instruct
    Parameters: 70 billion
    License: Apache 2.0
    Training: 15T tokens across 1,000+ languages
    Availability: Free during Swiss AI Weeks (September 2025)

    https://platform.publicai.co/docs

    Bye

    P.S.: A chat interface is here:

    Try Apertus
    https://publicai.co/

  • From Mild Shock@janburse@fastmail.fm to sci.logic on Fri Sep 5 01:06:45 2025
    From Newsgroup: sci.logic

    Hi,

    Don't try this: don't ask Apertus how
    many holes an Emmentaler cheese has.

    And absolutely don't try this: ask it
    next to please answer in Schwitzerdütsch.

    Bye

    P.S.: ChatGPT can do it.

  • From Mild Shock@janburse@fastmail.fm to sci.logic on Fri Sep 19 10:02:43 2025
    From Newsgroup: sci.logic

    Hi,

    For the LP (Linear Programming) part, it
    might be interesting to recall that SWI-Prolog
    has a corresponding library:

    A.55 library(simplex): Solve linear programming problems https://eu.swi-prolog.org/pldoc/man?section=simplex

    To model the constraint store, it doesn't need
    any native Prolog system support, since it uses
    DCGs for state threading. Linear programming was

    for a long time the pinnacle of mathematical
    problem solving. But some Artificial Intelligence
    methods typically go beyond the linear case and

    might also tackle non-linear problems etc., making
    heavy use of an NPU (Neural Processing Unit). In May
    2025 the first AI laptops arrived with >40 TOPS NPUs.

    Spearheaded by Microsoft, branding it Copilot+.
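    For readers without a Prolog at hand, the kind of problem library(simplex) solves can be sketched with a stdlib-only Python toy (my example, not the library's algorithm: a real solver pivots, while for a 2-variable problem we can just enumerate the vertices of the feasible polygon, where the optimum of a bounded LP must lie).

```python
from itertools import combinations

# Maximize 3x + 2y subject to the constraints below.
# Each constraint is a*x + b*y <= c, including x >= 0 and y >= 0.
cons = [(1, 1, 4),    # x + y <= 4
        (1, 0, 3),    # x     <= 3
        (0, 1, 3),    # y     <= 3
        (-1, 0, 0),   # -x    <= 0  (x >= 0)
        (0, -1, 0)]   # -y    <= 0  (y >= 0)

def intersect(c1, c2):
    """Intersection of the two constraint boundary lines, or None."""
    (a1, b1, d1), (a2, b2, d2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None                       # parallel boundaries
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in cons)

# Candidate vertices: feasible intersections of constraint pairs.
vertices = [p for c1, c2 in combinations(cons, 2)
            if (p := intersect(c1, c2)) and feasible(p)]

best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
print(best)   # (3.0, 1.0), objective value 11
```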

    Bye
  • From Mild Shock@janburse@fastmail.fm to sci.logic on Fri Sep 19 10:14:01 2025
    From Newsgroup: sci.logic

    Hi,

    It seems the LP (Linear Programming)
    library from SWI-Prolog has also been
    ported to Scryer Prolog, using the same DCG
    design as demonstrated in SWI-Prolog:

    Module simplex
    https://www.scryer.pl/simplex

    What it requires from the Prolog system,
    and is not covered by the ISO core standard,
    are rational numbers, i.e. rdiv/2 etc., and if
    you feed it with floating point numbers,

    judging from the source code, it might bark
    that it has no CLP(R) available to solve it. CLP(R)
    could maybe be a good candidate for Copilot+
    machines, but I am currently not aware

    of a Copilot+ Prolog system, so to speak:

    About Microsoft Copilot+ PCs https://www.wired.com/story/what-is-copilot-plus-pc/

    The DCG design could make it easy for a
    solver to somehow hand a problem to an NPU,
    making it transparent for the end-user.
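    The earlier point about rationals can be illustrated outside Prolog too (my example, not from the library): simplex pivoting repeatedly divides by pivot elements and tests entries against zero, which only stays reliable with exact rdiv/2-style arithmetic. Floats drift even on trivial sums.

```python
from fractions import Fraction

# Exact rationals vs. floats: add one tenth ten times.
x_float, x_exact = 0.0, Fraction(0)
for _ in range(10):
    x_float += 0.1
    x_exact += Fraction(1, 10)

print(x_float == 1.0)   # False: accumulated rounding error
print(x_exact == 1)     # True: exact arithmetic
```

    With floats, a pivot entry that should be exactly zero usually isn't, which is presumably why the library defers such inputs to CLP(R).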

    Bye

  • From Mild Shock@janburse@fastmail.fm to sci.logic on Fri Sep 19 14:39:15 2025
    From Newsgroup: sci.logic

    Hi,

    Thank god it was only coffee and not orange juice:

    Ozzy Pours The Perfect O.J.
    https://m.youtube.com/watch?v=ojQUYq21G-o

    Bye

  • From Mild Shock@janburse@fastmail.fm to sci.logic on Fri Sep 19 18:25:09 2025
    From Newsgroup: sci.logic

    Hi,

    I like the expert system description by bauhaus911:

    I used Claude code to help me create a Prolog
    program of a little expert system to manage a
    kitchen that needed to produce different dishes
    with different appliances and to be able to
    maximize revenue. -- bauhaus911

    Instead of maximizing revenue you could also
    maximize energy boost. So instead of having
    a couple of morons on SWI-Prolog discourse,

    like those that have parked their brains in
    the nowhere and are going full throttle Donald
    Trump / Kash Patel Nazi, the system could

    indeed recommend Orange Juice instead of
    coffee. For the following brain benefits:

    - Vitamin C powerhouse: ~50–60 mg per 100 ml,
    giving a solid immune boost.

    - Quick energy: natural sugars (glucose + fructose)
    give your brain and body fast fuel.

    - Hydration: mostly water, which helps maintain
    energy and focus.

    Have Fun! LoL

    Bye

  • From Mild Shock@janburse@fastmail.fm to sci.logic on Fri Sep 19 18:38:00 2025
    From Newsgroup: sci.logic

    Hi,

    You deleted like 10 posts of mine in the last
    48 hours, which tried to explain why patching
    is against "discourse".

    Even Torbjörn Lager agreed. I don't think
    you can continue your forum in this style.
    And then after you deleted a dozen of posts

    of mine, I am not allowed to delete my posts?

    You are simply completely crazy!!!

    Bye

    I got the following nonsense from you:

    Jan, we've asked you to be less combative with
    people here, but you continue to be extremely
    aggressive towards other users of the site.
    You have very helpful things to add, but when
    you then go back and delete everything you post,
    it obviates that helpfulness.

  • From Mild Shock@janburse@fastmail.fm to sci.logic on Fri Sep 19 18:42:56 2025
    From Newsgroup: sci.logic

    Hi,

    I will consult a Lawyer of mine.
    Maybe I can ask for a complete
    tear down of all my content.

    Bye

  • From Mild Shock@janburse@fastmail.fm to sci.logic on Fri Sep 19 18:43:19 2025
    From Newsgroup: sci.logic

    Hi,

    I will consult a Lawyer of mine.
    Maybe I can ask for a complete
    tear down of all my content.

    Bye

    Mild Shock schrieb:
    Hi,

    You deleted like 10 posts of mine in the last
    48 hours, which tried to explain why patching
    is against "discourse".

    Even Torbjörn Lager agreed. I don't think
    you can continue your forum in this style.
    And then after you deleted a dozen of posts

    of mine, I am not allowed to delete my posts?

    You are simply completely crazy!!!

    Bye

    I got the following nonsense from you:

    Jan, we've asked you to be less combative with
    people here, but you continue to be extremely
    aggressive towards other users of the site.
    You have very helpful things to add, but when
    you then go back and delete everything you post,
    it obviates that helpfulness.

    Mild Shock schrieb:
    Hi,

    I like the expert system description by

    I used Claude code to help me create a Prolog
    program of a little expert system to manage a
    kitchen that needed to produce different dishes
    with different appliances and to be able to
    maximize revenue. -- bauhaus911

    Instead of maximizing revenue you could also
    maximize energy boost. So instead of having
    a couple of morons on SWI-Prolog discourse,

    like those that have parked their brain in the
    nowhere and are going full throttle Donald
    Trump / Kesh Patel Nazi, the system could

    indeed recommend Orange Juice instead of
    coffee. For the following brain benefits:

    - Vitamin C powerhouse: ~50–60 mg per 100 ml,
      giving a solid immune boost.

    - Quick energy: natural sugars (glucose + fructose)
      give your brain and body fast fuel.

    - Hydration: mostly water, which helps maintain
      energy and focus.

    Have Fun! LoL

    Bye

    Mild Shock schrieb:
    Hi,

    Thank god it was only coffee and not orange juice:

    Ozzy Pours The Perfect O.J.
    https://m.youtube.com/watch?v=ojQUYq21G-o

    Bye

    Mild Shock schrieb:
    Hi,

    It seems the LP (Linear Programming)
    library by SWI-Prolog has also been
    ported to Scryer Prolog, using the same DCG
    design already demonstrated in SWI-Prolog:

    Module simplex
    https://www.scryer.pl/simplex

    What it requires from the Prolog system,
    and is not covered by the ISO core standard,
    are rational numbers, i.e. rdiv/2 etc., and if
    you feed it with floating point numbers,

    judging from the source code, it might bark
    that it has no CLP(R) available to solve it. CLP(R)
    could maybe be a good candidate for Copilot+
    machines, but I am currently not aware

    of a Copilot+ Prolog system, so to speak:

    About Microsoft Copilot+ PCs
    https://www.wired.com/story/what-is-copilot-plus-pc/

    The DCG design could make it easy for a
    solver to hand a problem to an NPU,
    making it transparent to the end-user.

    Bye

    Mild Shock schrieb:
    Hi,

    For the LP (Linear Programming) part, it
    might be interesting to recall that SWI-Prolog
    has a corresponding library:

    A.55 library(simplex): Solve linear programming problems
    https://eu.swi-prolog.org/pldoc/man?section=simplex

    To model the constraint store, it doesn't need
    any native Prolog system support, since it uses
    DCGs for state threading. Linear programming was

    for a long time the pinnacle of mathematical
    problem solving. But some Artificial Intelligence
    methods typically go beyond the linear case, and might

    tackle non-linear problems etc., making heavy
    use of an NPU (Neural Processing Unit). In May 2025
    the first AI laptops arrived with >40 TOPS NPUs.

    Spearheaded by Microsoft branding it Copilot+.

    Bye





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to sci.logic on Mon Sep 29 22:34:00 2025
    From Newsgroup: sci.logic

    Hi,

    Is there a silver lining to AI democratization?
    With MedGemma I can analyse my own broken ribs.
    If only I had a body scanner. I am currently exploring

    options for LLM models that I could run on
    my new AI-soaked AMD Ryzen AI 7 350 laptop.
    While Qualcomm spearheaded LLM players with

    their LM Studio, there is FastFlowLM
    that can do Ryzen, and would utilize the NPU. For
    example, running a distilled DeepSeek would amount to:

    flm run deepseek-r1:8b

    And yes, there is MedGemma:

    MedGemma:4B (Multimodal) Running Exclusively on AMD Ryzen™ AI NPU
    https://www.youtube.com/watch?v=KWzXZEOcgK4

    Bye


    Mild Shock schrieb:
    Hi,

    2025 will be the last year we hear of Python.
    This is just a tears-in-your-eyes eulogy:

    Python: The Documentary | An origin story https://www.youtube.com/watch?v=GfH4QL4VqJ0

    The Zen of Python is very different
    from the Zen of Copilot+ . The bloody
    Copilot+ Laptop doesn't use Python

    in its Artificial Intelligence:

    AI Content Extraction
    - Python involved? None at runtime;
      model runs in ONNX + DirectML on NPU

    AI Image Search
    - Python involved? None at runtime;
      on-device image feature, fully compiled

    AI Phi Silica
    - Python involved? None at runtime;
      lightweight Phi model packaged as ONNX

    AI Semantic Analysis
    - Python involved? None at runtime;
      text understanding done via compiled
      ONNX operators

    Bye

    Mild Shock schrieb:
    Hi,

    Whoa! I didn't know that lousy Microsoft
    Copilot-certified laptops are that fast:

    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    % Dogelog Player 2.1.1 for Java

    % AMD Ryzen 5 4500U
    % ?- time(test).
    % % Zeit 756 ms, GC 1 ms, Lips 9950390, Uhr 23.08.2025 02:45
    % true.

    % AMD Ryzen AI 7 350
    % ?- time(test).
    % % Zeit 378 ms, GC 1 ms, Lips 19900780, Uhr 28.08.2025 17:44
    % true.

    What happened to the death of Moore's Law?
    But somehow memory speed, CPU–RAM and GPU–RAM,
    tripled. Possibly due to some Artificial

    Intelligence demand. And the bloody thing
    also has an NPU (Neural Processing Unit),
    nicely visible.

    Bye

    About the RAM speed: L1, L2 and L3
    caches are bigger, so it's harder to poison
    the CPU. Also the CPU shows a revival of

    Hyper-Threading Technology (HTT), to which
    AMD gives a different name: they call it
    Simultaneous Multithreading (SMT).

    https://www.cpubenchmark.net/compare/3702vs6397/AMD-Ryzen-5-4500U-vs-AMD-Ryzen-AI-7-350


    BTW: Still ticking along with the primes.pl example:

    test :-
        len(L, 1000),
        primes(L, _).

    primes([], 1).
    primes([J|L], J) :-
        primes(L, I),
        K is I+1,
        search(L, K, J).

    search(L, I, J) :-
        mem(X, L),
        I mod X =:= 0, !,
        K is I+1,
        search(L, K, J).
    search(_, I, I).

    mem(X, [X|_]).
    mem(X, [_|Y]) :-
        mem(X, Y).

    len([], 0) :- !.
    len([_|L], N) :-
        N > 0,
        M is N-1,
        len(L, M).


    Mild Shock schrieb:
    Hi,

    J Strother Moore II is the Original Gangster (OG)
    of program sharing. Interestingly, structure sharing
    always meant program sharing in the theorem

    proving community back then:

    COMPUTATIONAL LOGIC: STRUCTURE SHARING AND
    PROOF OF PROGRAM PROPERTIES
    J Strother Moore II - 1973
    https://era.ed.ac.uk/bitstream/handle/1842/2245/Moore-Thesis-1973-OCR.pdf

    Only the WAM community managed to institutionalize
    the term structure sharing, as a reduced form of
    program sharing, namely goal argument sharing:

    no longer using pairs of two pointers, skeleton and
    binding environment, to identify a Prolog term,
    but a simple single pointer per Prolog term.
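    The contrast can be sketched in a few lines of Python (my own illustration with hypothetical names, not taken from the thesis or any WAM implementation): a structure-shared term is a (skeleton, binding environment) pair, while the single-pointer representation is just a directly built term.

```python
# Structure sharing (Boyer-Moore style): a term is a pair
# (skeleton, environment). Variables in the skeleton are
# looked up in the environment instead of being copied.

def resolve(term, env):
    """Expand a (skeleton, env) pair into a plain term."""
    if isinstance(term, str) and term.startswith("_"):  # variable
        if term in env:
            skel, sub_env = env[term]
            return resolve(skel, sub_env)
        return term  # unbound variable stays as-is
    if isinstance(term, tuple):  # compound: (functor, arg1, ...)
        return (term[0],) + tuple(resolve(a, env) for a in term[1:])
    return term  # atom / number

# f(_X, b) with _X bound to g(a) in some environment:
skeleton = ("f", "_X", "b")
env = {"_X": (("g", "a"), {})}

# Single-pointer style: just build the term directly.
wam_term = ("f", ("g", "a"), "b")

print(resolve(skeleton, env))  # same term, two representations
```

    Both denote f(g(a), b); the pair representation avoids copying the skeleton, at the cost of dereferencing through environments on every access.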

    Bye



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to sci.logic on Mon Sep 29 22:56:50 2025
    From Newsgroup: sci.logic

    Hi,

    I hope it doesn't turn my laptop into a
    frying pan. The thingy had a few hiccups
    recently, like using 100% CPU and doing

    nothing. Maybe IntelliJ's fork-join framework
    was overdoing it. But NPUs are physically
    designed for efficient AI math: more

    computations per watt, less heat generated.
    Let's see. But what's the technology behind
    FastFlowLM? It might be a result of:

    GAIA: An Open-Source Project from AMD for
    Running Local LLMs on Ryzen™ AI. GAIA seems
    to be an important piece of the Ryzen AI story.

    Initially GAIA wanted to provide a unified
    software stack for Ryzen AI NPUs. But
    AMD shifted focus to DirectML integration

    with Windows. GAIA was absorbed into AMD's ROCm
    ecosystem; on the other hand XDNA (2024),
    AMD's commercial NPU architecture, goes

    full circle back to Niklaus Wirth:

    Hades: fast hardware synthesis tools and a reconfigurable coprocessor
    https://www.research-collection.ethz.ch/entities/publication/23b3a0e4-e5e7-44fe-9b5d-ab43e21859b2

    It has FPGA-inspired reconfigurable fabric!

    Bye

    P.S.: Shit, I should have such a little toy
    compiler as well, somewhere in the notes I took
    during a lecture. An array with a for loop to

    model a hardware bus is really funny.

    Mild Shock schrieb:
    Hi,

    Is there a silver lining to AI democratization?
    With MedGemma I can analyse my own broken ribs.
    If only I had a body scanner. I am currently exploring

    options for LLM models that I could run on
    my new AI-soaked AMD Ryzen AI 7 350 laptop.
    While Qualcomm spearheaded LLM players with

    their LM Studio, there is FastFlowLM
    that can do Ryzen, and would utilize the NPU. For
    example, running a distilled DeepSeek would amount to:

    flm run deepseek-r1:8b

    And yes, there is MedGemma:

    MedGemma:4B (Multimodal) Running Exclusively on AMD Ryzen™ AI NPU
    https://www.youtube.com/watch?v=KWzXZEOcgK4

    Bye


    Mild Shock schrieb:
    Hi,

    2025 will be the last year we hear of Python.
    This is just a tears-in-your-eyes eulogy:

    Python: The Documentary | An origin story
    https://www.youtube.com/watch?v=GfH4QL4VqJ0

    The Zen of Python is very different
    from the Zen of Copilot+ . The bloody
    Copilot+ Laptop doesn't use Python

    in its Artificial Intelligence:

    AI Content Extraction
    - Python involved? None at runtime;
      model runs in ONNX + DirectML on NPU

    AI Image Search
    - Python involved? None at runtime;
      on-device image feature, fully compiled

    AI Phi Silica
    - Python involved? None at runtime;
      lightweight Phi model packaged as ONNX

    AI Semantic Analysis
    - Python involved? None at runtime;
      text understanding done via compiled
      ONNX operators

    Bye

    Mild Shock schrieb:
    Hi,

    Whoa! I didn't know that lousy Microsoft
    Copilot-certified laptops are that fast:

    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    % Dogelog Player 2.1.1 for Java

    % AMD Ryzen 5 4500U
    % ?- time(test).
    % % Zeit 756 ms, GC 1 ms, Lips 9950390, Uhr 23.08.2025 02:45
    % true.

    % AMD Ryzen AI 7 350
    % ?- time(test).
    % % Zeit 378 ms, GC 1 ms, Lips 19900780, Uhr 28.08.2025 17:44
    % true.

    What happened to the death of Moore's Law?
    But somehow memory speed, CPU–RAM and GPU–RAM,
    tripled. Possibly due to some Artificial

    Intelligence demand. And the bloody thing
    also has an NPU (Neural Processing Unit),
    nicely visible.

    Bye

    About the RAM speed: L1, L2 and L3
    caches are bigger, so it's harder to poison
    the CPU. Also the CPU shows a revival of

    Hyper-Threading Technology (HTT), to which
    AMD gives a different name: they call it
    Simultaneous Multithreading (SMT).

    https://www.cpubenchmark.net/compare/3702vs6397/AMD-Ryzen-5-4500U-vs-AMD-Ryzen-AI-7-350


    BTW: Still ticking along with the primes.pl example:

    test :-
        len(L, 1000),
        primes(L, _).

    primes([], 1).
    primes([J|L], J) :-
        primes(L, I),
        K is I+1,
        search(L, K, J).

    search(L, I, J) :-
        mem(X, L),
        I mod X =:= 0, !,
        K is I+1,
        search(L, K, J).
    search(_, I, I).

    mem(X, [X|_]).
    mem(X, [_|Y]) :-
        mem(X, Y).

    len([], 0) :- !.
    len([_|L], N) :-
        N > 0,
        M is N-1,
        len(L, M).


    Mild Shock schrieb:
    Hi,

    J Strother Moore II is the Original Gangster (OG)
    of program sharing. Interestingly, structure sharing
    always meant program sharing in the theorem

    proving community back then:

    COMPUTATIONAL LOGIC: STRUCTURE SHARING AND
    PROOF OF PROGRAM PROPERTIES
    J Strother Moore II - 1973
    https://era.ed.ac.uk/bitstream/handle/1842/2245/Moore-Thesis-1973-OCR.pdf

    Only the WAM community managed to institutionalize
    the term structure sharing, as a reduced form of
    program sharing, namely goal argument sharing:

    no longer using pairs of two pointers, skeleton and
    binding environment, to identify a Prolog term,
    but a simple single pointer per Prolog term.

    Bye




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to sci.logic on Tue Sep 30 08:41:23 2025
    From Newsgroup: sci.logic

    Hi,

    Was Linus Torvalds cautious or clueless?

    "I think AI is really interesting and I think it
    is going to change the world. At the same time,
    I hate the hype cycle so much that I really don't
    want to go there. So, my approach to AI right now
    is I will basically ignore it because I think
    the whole tech industry around AI is in a
    very bad position, and its 90% marketing and
    10% reality. And, in 5 years, things will change
    and at that point, we will see what of the AI
    is getting used for real workloads".

    https://www.tweaktown.com/news/101381/linux-creator-linus-torvalds-ai-is-useless-its-90-marketing-while-he-ignores-for-now/index.html

    I think his fallacy is to judge AI as hype.
    So his timeline of 2030 might have received a
    sucker punch from Copilot+ already, now in late

    2025. Even before, in 2024, when he made his
    statement, AI was already not hype at all:

    2009–2012 (Deep Learning Wave): GPUs began being
    used for deep learning research, thanks to frameworks
    like Caffe and Theano. This was when convolutional
    networks for vision really took off.

    2012–2015 (Big Data + Deep Learning): Data centers
    started leveraging clusters of GPUs for large-scale
    training, using distributed frameworks like
    TensorFlow and PyTorch (from 2016). Text analysis
    and recommendation systems were already benefiting from this.

    2015–2020 (Specialized Accelerators): Companies
    like Google (TPU), Nvidia (A100), and Qualcomm
    (Hexagon DSP) developed purpose-built hardware
    for AI inference and training. Large-scale NLP
    models like BERT were trained in these environments.

    2020–2024 (Commercial AI Explosion): On-device AI,
    cloud AI services, Copilot+, Claude integrations –
    all of these are the practical realization of what
    had been quietly powering research and enterprise
    workloads for over a decade.

    Bye


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to sci.logic on Fri Oct 3 21:48:38 2025
    From Newsgroup: sci.logic

    On 9/30/2025 1:41 AM, Mild Shock wrote:
    Hi,

    Was Linus Torvalds cautious or clueless?

    "I think AI is really interesting and I think it
    is going to change the world. At the same time,
    I hate the hype cycle so much that I really don't
    want to go there. So, my approach to AI right now
    is I will basically ignore it because I think
    the whole tech industry around AI is in a
    very bad position, and its 90% marketing and
    10% reality. And, in 5 years, things will change
    and at that point, we will see what of the AI
    is getting used for real workloads".


    The key thing to grasp is that when an LLM system is
    self-correcting and can mostly learn entirely on its
    own, the typical human technological development cycle
    is blown away. LLMs may get ten-fold more powerful
    every year. Linus Torvalds does not seem to factor
    that into his analysis.

    The key element of this increase in power is simply
    the maximum number of tokens that it can keep track of.
    ChatGPT used to act like it had Alzheimer's when you
    exceeded its 4000-token limit.


    https://www.tweaktown.com/news/101381/linux-creator-linus-torvalds-ai-is-useless-its-90-marketing-while-he-ignores-for-now/index.html

    I think his fallacy is to judge AI as hype.
    So his timeline of 2030 might have received a
    sucker punch from Copilot+ already, now in late

    2025. Even before, in 2024, when he made his
    statement, AI was already not hype at all:

    2009–2012 (Deep Learning Wave): GPUs began being
    used for deep learning research, thanks to frameworks
    like Caffe and Theano. This was when convolutional
    networks for vision really took off.

    2012–2015 (Big Data + Deep Learning): Data centers
    started leveraging clusters of GPUs for large-scale
    training, using distributed frameworks like
    TensorFlow and PyTorch (from 2016). Text analysis
    and recommendation systems were already benefiting from this.

    2015–2020 (Specialized Accelerators): Companies
    like Google (TPU), Nvidia (A100), and Qualcomm
    (Hexagon DSP) developed purpose-built hardware
    for AI inference and training. Large-scale NLP
    models like BERT were trained in these environments.

    2020–2024 (Commercial AI Explosion): On-device AI,
    cloud AI services, Copilot+, Claude integrations –
    all of these are the practical realization of what
    had been quietly powering research and enterprise
    workloads for over a decade.

    Bye


    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2