2.3 Avoiding Unnecessary OCCUR Checks
It is possible to significantly reduce the number
of calls to OCCUR during a resolution unification
by the following observation. If two clauses are
being resolved, they are standardized apart.
Thus, a variable from the left-hand parent will not
occur in a term from the right-hand parent unless
during this unification, there has been a binding of a
variable from the right to a term from the left.
A similar statement holds for left-to-right bindings.
Once again, in structure sharing, this condition
is easy to check.
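Here is a minimal Prolog sketch of that bookkeeping; the v/2 and t/2 term encoding, the predicate names, and the conservative per-direction flags are my own illustration, not the quoted author's code. The idea: the occurs check for a binding in one direction is only run once a binding in the opposite direction has been recorded.

% Terms: v(Side, Name) with Side one of l/r for variables from the
% left/right parent, and t(F, Args) for everything else. The substitution
% is a list of Var-Term pairs; flags(L2R, R2L) records whether any
% left-to-right or right-to-left binding has happened yet.

unify(T1, T2, Sub) :-
    unify(T1, T2, [], Sub, flags(false, false), _).

unify(X0, Y0, Sub0, Sub, Fs0, Fs) :-
    deref(X0, Sub0, X),
    deref(Y0, Sub0, Y),
    unify_(X, Y, Sub0, Sub, Fs0, Fs).

unify_(v(S, N), v(S, N), Sub, Sub, Fs, Fs) :- !.
unify_(v(S, N), T, Sub0, Sub, Fs0, Fs) :- !,
    bind(v(S, N), T, Sub0, Sub, Fs0, Fs).
unify_(T, v(S, N), Sub0, Sub, Fs0, Fs) :- !,
    bind(v(S, N), T, Sub0, Sub, Fs0, Fs).
unify_(t(F, As), t(F, Bs), Sub0, Sub, Fs0, Fs) :-
    unify_args(As, Bs, Sub0, Sub, Fs0, Fs).

unify_args([], [], Sub, Sub, Fs, Fs).
unify_args([A|As], [B|Bs], Sub0, Sub, Fs0, Fs) :-
    unify(A, B, Sub0, Sub1, Fs0, Fs1),
    unify_args(As, Bs, Sub1, Sub, Fs1, Fs).

% Binding a left variable needs the occurs check only after some
% right-to-left binding has been made, and vice versa.
bind(v(l, N), T, Sub, [v(l, N)-T|Sub], flags(_, R2L), flags(true, R2L)) :-
    (   R2L == true -> \+ occurs(v(l, N), T, Sub) ; true   ).
bind(v(r, N), T, Sub, [v(r, N)-T|Sub], flags(L2R, _), flags(L2R, true)) :-
    (   L2R == true -> \+ occurs(v(r, N), T, Sub) ; true   ).

deref(v(S, N), Sub, T) :-
    lookup(v(S, N), Sub, B), !,
    deref(B, Sub, T).
deref(T, _, T).

lookup(K, [K-V|_], V) :- !.
lookup(K, [_|Ps], V) :- lookup(K, Ps, V).

occurs(V, T0, Sub) :-
    deref(T0, Sub, T),
    (   T == V -> true
    ;   T = t(_, Args), occurs_list(V, Args, Sub)
    ).

occurs_list(V, [A|_], Sub) :- occurs(V, A, Sub), !.
occurs_list(V, [_|As], Sub) :- occurs_list(V, As, Sub).

% ?- unify(t(f, [v(l, x), v(l, x)]), t(f, [v(r, y), t(a, [])]), Sub).
% Sub = [v(r, y)-t(a, []), v(l, x)-v(r, y)]
% (the second binding is occurs-checked, the first one is not)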
Hi,
J Strother Moore II is the Original Gangster (OG)
of program sharing. Interestingly, structure sharing
always meant program sharing in the theorem-proving
community back then:
COMPUTATIONAL LOGIC: STRUCTURE SHARING AND
PROOF OF PROGRAM PROPERTIES
J Strother Moore II - 1973
https://era.ed.ac.uk/bitstream/handle/1842/2245/Moore-Thesis-1973-OCR.pdf
Only the WAM community managed to institutionalize
the term structure sharing as a reduced form of
program sharing, namely goal argument sharing:
no longer using a pair of two pointers, skeleton plus
binding environment, to identify a Prolog term,
but a simple single pointer per Prolog term.
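A rough illustration of the contrast, with notation invented for this post (this is not code from Moore's thesis or from any actual WAM): a structure-shared term is a pair of a skeleton and a binding environment, and resolve/3 below materializes the plain term that a WAM-style single pointer would reference directly.

% Skeleton variables are written #(I); an environment is a list whose I-th
% entry is either the atom unbound or another pair mol(Skeleton, Env).
resolve(#(I), Env, Term) :- !,
    env_ref(I, Env, Binding),
    (   Binding == unbound -> Term = #(I)
    ;   Binding = mol(S, E), resolve(S, E, Term)
    ).
resolve(Skel, Env, Term) :-
    Skel =.. [F|Args],
    resolve_list(Args, Env, Args2),
    Term =.. [F|Args2].

resolve_list([], _, []).
resolve_list([A|As], Env, [T|Ts]) :-
    resolve(A, Env, T),
    resolve_list(As, Env, Ts).

env_ref(1, [B|_], B) :- !.
env_ref(I, [_|Bs], B) :- I > 1, J is I - 1, env_ref(J, Bs, B).

% ?- resolve(f(#(1), g(#(2))), [mol(a, []), unbound], T).
% T = f(a, g(#(2))).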
Bye
Hi,
Whoa! I didn't know that lousy Microsoft
Copilot certified laptops are that fast:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Dogelog Player 2.1.1 for Java
% AMD Ryzen 5 4500U
% ?- time(test).
% % Zeit 756 ms, GC 1 ms, Lips 9950390, Uhr 23.08.2025 02:45
% true.
% AMD Ryzen AI 7 350
% ?- time(test).
% % Zeit 378 ms, GC 1 ms, Lips 19900780, Uhr 28.08.2025 17:44
% true.
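For scale, my arithmetic from the two log lines above: 9,950,390 Lips × 0.756 s and 19,900,780 Lips × 0.378 s both come to roughly 7.52 million inferences, so the newer chip runs the identical workload in exactly half the wall-clock time.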
What happened to the Death of Moore's Law?
But somehow memory speed, both CPU–RAM and GPU–RAM,
has tripled. Possibly due to Artificial
Intelligence demand. And the bloody thing
also has an NPU (Neural Processing Unit),
nicely visible.
Bye
About the RAM speed: the L1, L2 and L3
caches are bigger, so it's harder to poison
the CPU. The CPU also shows a revival of
Hyper-Threading Technology (HTT), which
AMD gives a different name: they call it
Simultaneous Multithreading (SMT).
https://www.cpubenchmark.net/compare/3702vs6397/AMD-Ryzen-5-4500U-vs-AMD-Ryzen-AI-7-350
BTW: Still ticking along with the primes.pl example:
test :-
    len(L, 1000),
    primes(L, _).

primes([], 1).
primes([J|L], J) :-
    primes(L, I),
    K is I+1,
    search(L, K, J).

search(L, I, J) :-
    mem(X, L),
    I mod X =:= 0, !,
    K is I+1,
    search(L, K, J).
search(_, I, I).

mem(X, [X|_]).
mem(X, [_|Y]) :-
    mem(X, Y).

len([], 0) :- !.
len([_|L], N) :-
    N > 0,
    M is N-1,
    len(L, M).
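My reading of the example (this gloss is not from the original post): len/2 builds a list of N unbound variables, and primes/2 fills it with the first N primes in descending order; each step takes the previously found prime I and searches upward from I+1 for the first integer that none of the already found primes divides. For instance, with a small N:

% ?- len(L, 5), primes(L, P).
% L = [11, 7, 5, 3, 2], P = 11.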
Hi,
2025 will be the last year we hear of Python.
This is just a tears-in-your-eyes eulogy:
Python: The Documentary | An origin story
https://www.youtube.com/watch?v=GfH4QL4VqJ0
The Zen of Python is very different
from the Zen of Copilot+. The bloody
Copilot+ laptop doesn't use Python
in its Artificial Intelligence:
AI Content Extraction
- Python involved? None at runtime;
  model runs in ONNX + DirectML on the NPU.
AI Image Search
- Python involved? None at runtime;
  on-device image feature, fully compiled.
AI Phi Silica
- Python involved? None at runtime;
  lightweight Phi model packaged as ONNX.
AI Semantic Analysis
- Python involved? None at runtime;
  text understanding done via compiled ONNX operators.
Bye
Hi,
Swiss AI Apertus
Model ID: apertus-70b-instruct
Parameters: 70 billion
License: Apache 2.0
Training: 15T tokens across 1,000+ languages
Availability: Free during Swiss AI Weeks (September 2025)
https://platform.publicai.co/docs
Bye
P.S.: A chat interface is here:
Try Apertus
https://publicai.co/
Hi,
For the LP (Linear Programming) part, it
might be interesting to recall that SWI-Prolog
has a corresponding library:
A.55 library(simplex): Solve linear programming problems
https://eu.swi-prolog.org/pldoc/man?section=simplex
To model the constraint store, it doesn't need
any native Prolog system support, since it uses
DCGs for state threading. Linear programming was
for a long time the pinnacle of mathematical problem
solving. But some Artificial Intelligence methods
typically go beyond the linear case and might also
tackle non-linear problems etc., making heavy
use of an NPU (Neural Processing Unit). In May 2025
the first AI laptops arrived with >40 TOPS NPUs,
spearheaded by Microsoft branding them Copilot+.
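A small sketch of that DCG-threaded style, along the lines of the library documentation; the toy objective and constraints are mine, and I am going from memory of the library(simplex) predicates gen_state/1, constraint/3, maximize/3, variable_value/3 and objective/2, so treat the details as assumptions rather than a definitive example:

:- use_module(library(simplex)).

% Maximize 3x + 2y subject to x + y =< 4, x + 3y =< 6, x >= 0, y >= 0.
toy_constraints -->
    constraint([1*x, 1*y] =< 4),
    constraint([1*x, 3*y] =< 6),
    constraint([1*x] >= 0),
    constraint([1*y] >= 0).

toy(X, Y, Obj) :-
    gen_state(S0),
    toy_constraints(S0, S1),    % plain DCG state threading, no global store
    maximize([3*x, 2*y], S1, S),
    variable_value(S, x, X),
    variable_value(S, y, Y),
    objective(S, Obj).

% ?- toy(X, Y, Obj).
% X = 4, Y = 0, Obj = 12   (possibly printed as rationals)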
Bye
Hi,
It seems the LP (Linear Programming)
library by SWI-Prolog has also been
ported to Scryer Prolog, using the same DCG
design as demonstrated in SWI-Prolog:
Module simplex
https://www.scryer.pl/simplex
What it requires from the Prolog system,
and is not covered by the ISO core standard,
are rational numbers, i.e. rdiv/2 etc., and if
you feed it floating point numbers,
judging from the source code, it might bark
that it has no CLP(R) available to solve them. CLP(R)
could maybe be a good candidate for Copilot+
machines, but I am currently not aware
of a Copilot+ Prolog system, so to speak:
About Microsoft Copilot+ PCs
https://www.wired.com/story/what-is-copilot-plus-pc/
The DCG design could make it easy for a
solver to hand a problem off to an NPU,
making it transparent for the end user.
Bye
I used Claude code to help me create a Prolog
program of a little expert system to manage a
kitchen that needed to produce different dishes
with different appliances and to be able to
maximize revenue. -- bauhaus911
Hi,
Thank god it was only coffee and not orange juice:
Ozzy Pours The Perfect O.J.
https://m.youtube.com/watch?v=ojQUYq21G-o
Bye
Hi,
I like the expert system description by bauhaus911:
I used Claude code to help me create a Prolog
program of a little expert system to manage a
kitchen that needed to produce different dishes
with different appliances and to be able to
maximize revenue. -- bauhaus911
Instead of maximizing revenue you could also
maximize energy boost. So instead of having
a couple of morons on SWI-Prolog discourse,
like those that have parked their brains in
nowhere and are going full throttle Donald
Trump / Kesh Patel Nazi, the system could
indeed recommend orange juice instead of
coffee, for the following brain benefits:
- Vitamin C powerhouse: ~50–60 mg per 100 ml,
  giving a solid immune boost.
- Quick energy: natural sugars (glucose + fructose)
  give your brain and body fast fuel.
- Hydration: mostly water, which helps maintain
  energy and focus.
Have Fun! LoL
Bye
Hi,
You deleted like 10 posts of mine in the last
48 hours, which tried to explain why patching
is against "discourse".
Even Torbjörn Lager agreed. I don't think
you can continue your forum in this style.
And then, after you deleted a dozen posts
of mine, I am not allowed to delete my posts?
You are simply completely crazy!!!
Bye
I got the following nonsense from you:
Jan, we've asked you to be less combative with
people here, but you continue to be extremely
aggressive towards other users of the site.
You have very helpful things to add, but when
you then go back and delete everything you post,
it obviates that helpfulness.
Hi,
Is there a silver lining to AI democratization?
With MedGemma I can analyse my own broken ribs,
if only I had a body scanner. I am currently exploring
options for LLM models that I could run on
my new AI-soaked AMD Ryzen AI 7 350 laptop.
While Qualcomm spearheaded LLM players with
their LM Studio, there is FastFlowLM
that can do Ryzen and would utilize the NPU. For
example, running a distilled DeepSeek would amount to:
flm run deepseek-r1:8b
And yes, there is MedGemma:
MedGemma:4B (Multimodal) Running Exclusively on AMD Ryzen AI NPU
https://www.youtube.com/watch?v=KWzXZEOcgK4
Bye
Hi,
Was Linus Torvalds cautious or clueless?
"I think AI is really interesting and I think it
is going to change the world. At the same time,
I hate the hype cycle so much that I really don't
want to go there. So, my approach to AI right now
is I will basically ignore it because I think
the whole tech industry around AI is in a
very bad position, and it's 90% marketing and
10% reality. And, in 5 years, things will change
and at that point, we will see what of the AI
is getting used for real workloads".
https://www.tweaktown.com/news/101381/linux-creator-linus-torvalds-ai-is-useless-its-90-marketing-while-he-ignores-for-now/index.html
I think his fallacy is to judge AI as hype.
So his 2030 timeline might already have received
a sucker punch from Copilot+ now, in late
2025. Already in 2024, when he made his statement,
AI was not hype at all:
2009–2012 (Deep Learning Wave): GPUs began being
used for deep learning research, thanks to frameworks
like Caffe and Theano. This was when convolutional
networks for vision really took off.
2012–2015 (Big Data + Deep Learning): Data centers
started leveraging clusters of GPUs for large-scale
training, using distributed frameworks like
TensorFlow and PyTorch (from 2016). Text analysis
and recommendation systems were already benefiting from this.
2015–2020 (Specialized Accelerators): Companies
like Google (TPU), Nvidia (A100), and Qualcomm
(Hexagon DSP) developed purpose-built hardware
for AI inference and training. Large-scale NLP
models like BERT were trained in these environments.
2020–2024 (Commercial AI Explosion): On-device AI,
cloud AI services, Copilot+, Claude integrations:
all of these are the practical realization of what
had been quietly powering research and enterprise
workloads for over a decade.
Bye