That's a funny quote:
"Once you have a truly massive amount of information
integrated as knowledge, then the human-software
system will be superhuman, in the same sense that
mankind with writing is superhuman compared to
mankind before writing."
https://en.wikipedia.org/wiki/Douglas_Lenat#Quotes
Hi,
Just a quick take on the Prolog pulse:
- SWI-Prolog: Dead Horse, LISP / MeTTa is it.
- Scryer: Dead Horse, no core commits
- Ciao Prolog: Dead Horse, no core commits
- Who else?
So I guess it's back to Strawberry Prolog for teaching!
They are also building AGI:
https://dobrev.com/AI/Think_in_mind.pdf
Bye
Hi,
How it started:
TPTP World Anthem [The TPTP World Needs Money] https://drive.google.com/file/d/1otJ8FxCiwVTIMoEDMW98puuTKIUH7BYF/view?pli=1
How it's going:
ANTHEM 2.0: Automated Reasoning for Answer Set Programming https://www.cambridge.org/core/journals/theory-and-practice-of-logic-programming/article/anthem-20-automated-reasoning-for-answer-set-programming/69D4B6430617A334906A99F787FA2727
Bye
P.S.: It would be fine if the TPTP sublanguages were
lean, just some operator definitions, and not some
bloated nonsense. Going such a lean path would
allow staying in the ISO core standard, instead of
inventing one new language after the other, Gringo,
who knows what, with horrible eval semantics of
(=)/2 and silly (=>)/2 rules for example in Picat.
Also, translating ASP to FOL, when there are no
cardinality heads, is rather trivial.
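For a tight program without cardinality heads, the translation is essentially Clark completion; a minimal sketch (the program and atom names here are made up for illustration):

```latex
% Hypothetical ASP rules defining p/0:
%   p :- q, not r.
%   p :- s.
% Completion collects all bodies of p into one equivalence:
\[
  p \leftrightarrow (q \land \lnot r) \lor s
\]
% An atom with no defining rules completes to falsity,
% e.g. if there is no rule for t:
\[
  t \leftrightarrow \bot
\]
```

For tight programs, Fages' theorem says the stable models coincide with the classical models of the completion, which is what makes this case easy; loops and cardinality heads are where the real work starts.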
Hi,
Just a quick take on the golang pulse:
How it started:
A Comparative Study of Language Implementations https://www.arxiv.org/abs/2502.01651
How it's going:
Co-Creator of Go Language is Rightly Furious https://itsfoss.com/news/rob-pike-furious/
Rob Pike going bonkers over AI.
Bye
See especially, Go ca. a factor of 4x-5x behind:
Go < Julia < Rust < C < Zig < Mojo < C++
(a): Average tokens per second for stories15M.bin model
Go < Julia < Rust < Zig < C < C++ < Mojo
(c): Average tokens per second for stories42M.bin model
So I guess it's back to Fortran.
Hi,
Geoffrey E. Hinton, the Nobel Prize winner
for AI, was already beating the drums
for ReLU in 2010:
Rectified Linear Units Improve Restricted Boltzmann Machines
Vinod Nair & Geoffrey E. Hinton - 2010 https://www.cs.toronto.edu/~fritz/absps/reluICML.pdf
Because ANNs (Artificial Neural Networks) were originally
designed with other functions, e.g. with the logistic function:
An artificial neuron is a mathematical function conceived
as a model of a biological neuron in a neural network. https://en.wikipedia.org/wiki/Artificial_neuron
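The two activations mentioned above can be contrasted in a few lines (plain Python, just to show the shapes, nothing from the paper):

```python
import math

def logistic(x):
    # Classic sigmoid: smooth, saturates toward 0 and 1.
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # Rectified Linear Unit: identity for positive inputs, zero otherwise.
    return max(0.0, x)

# The sigmoid squashes large inputs, ReLU passes them through:
print(logistic(5.0))  # close to 1
print(relu(5.0))      # 5.0
print(relu(-3.0))     # 0.0
```

The non-saturating positive side of ReLU is exactly what the 2010 paper argues helps training.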
If you populate additive factor graphs with log P,
you basically get multiplicative factor graphs.
So an ANN can express belief networks, right?
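The log trick above can be checked numerically: summing log-factors and exponentiating gives the same number as multiplying the factors (a minimal sketch with made-up factor values):

```python
import math

# Hypothetical factors of a small belief network, e.g.
# P(a) * P(b|a) * P(c|b) for one joint assignment.
factors = [0.3, 0.5, 0.8]

# Multiplicative reading: product of the factors.
product = 1.0
for f in factors:
    product *= f

# Additive reading: sum of log-factors, as an additive
# factor graph (or an ANN with log-weights) would compute it.
log_sum = sum(math.log(f) for f in factors)

# Exponentiating the sum recovers the product.
assert abs(math.exp(log_sum) - product) < 1e-12
print(product)  # approximately 0.12
```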
Bye
P.S.: What is all the hype about Causal AI, and
the Ladder of Causation à la Judea Pearl?
Causal AI - the next gen AI
Prof. Sotirios A. Tsaftaris - 2025 https://www.youtube.com/watch?v=IelslFzdsYw
Hi,
Large Reasoning Models (LRM) seem to move from
Foundations of Mathematics (FOM) to Theoretical
Computer Science (TCS). FOM typically gives
you "white science" mathematics, with sets and
infinity, and if you are lucky a little recursion
theory. Fun fact: TCS is even more "white".
An interesting paper in this regard:
Lean Meets Theoretical Computer Science:
Scalable Synthesis of Theorem Proving Challenges
in Formal-Informal Pairs
Terry Jingchen Zhang et. al. - 2025
https://arxiv.org/abs/2508.15878v1
One swallow does not make a summer?
But it's probably a necessary step. The above
paper uses Busy Beaver and Integer Constraints
as examples. Which logical frameworks even
apply? Is it enough to have a "total function"
theory layer, or does TCS need more? TCS can
be heavy on all sorts of discrete and
non-discrete mathematics.
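To make the Busy Beaver example concrete: the 2-state, 2-symbol champion machine is small enough to simulate directly (a throwaway sketch, unrelated to the paper's Lean formalization):

```python
from collections import defaultdict

# Transition table of the 2-state, 2-symbol Busy Beaver champion:
# (state, read symbol) -> (write symbol, head move, next state)
DELTA = {
    ('A', 0): (1, +1, 'B'),
    ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'),
    ('B', 1): (1, +1, 'HALT'),
}

def run():
    tape = defaultdict(int)   # blank tape of 0s, unbounded both ways
    pos, state, steps = 0, 'A', 0
    while state != 'HALT':
        write, move, state = DELTA[(state, tape[pos])]
        tape[pos] = write
        pos += move
        steps += 1
    return steps, sum(tape.values())

steps, ones = run()
print(steps, ones)  # halts after 6 steps with 4 ones on the tape
```

Proving that no 2-state machine does better is the hard, and very TCS-flavored, part; the simulation only certifies the lower bound.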
Bye