Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still, they show this example:
Fig. 1: ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we were to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into a transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well, ILP might have its merits; maybe we should not ask
for a marriage of LLMs and Prolog, but of autoencoders and ILP.
But it's tricky: I am still trying to decode the da Vinci code of
things like stacked tensors. Are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg
Hi,
Will the world build on American Stacks?
Or is the American dream over?
How it started, one month ago:
Nvidia CEO Jensen Huang on AI, Musk and Trump
https://www.youtube.com/watch?v=c-XAL2oYelI
How it's going, now:
Are you still talking about Jeffrey Epstein?
https://www.bbc.com/news/articles/cm2m879neljo
Bye
Hi,
Rota often celebrated symbolic, analogical, and
conceptual understanding over brute calculation.
This philosophy has come full circle in modern AI:
- Large Language Models (LLMs) like GPT-4 don't
  just store facts; they recognize patterns,
  make analogies, and generate new structures
  from old ones.
- Rota's work in combinatorics, symbolic logic, and
  operator theory is essentially pattern-based
  manipulation, exactly the kind of reasoning LLMs
  aim to emulate at scale.
Rota had a clear aesthetic. He valued clean formalisms,
symbolic beauty, and well-defined structures. Rota wanted
mathematics to mean something: to be not just correct,
but intelligible and expressive.
In contrast, modern AI (especially LLMs like GPT) thrives
on the messy, including: noisy data, inconsistency,
uncertainty, contradiction. AI engineers today are mining
meaning from noise.
What counts as "structure" is often just the best
pragmatic/effective description available at that moment.
Bye
Hi,
An example of human intelligence is, of course, the
name "Rational Term" for cyclic terms, set forth by
Alain Colmerauer, since it plays on "Rational Numbers".
A subset of cyclic terms can indeed represent
rational numbers, and they give a nice
counterexample to transitivity:
?- problem(X,Y,Z).
X = _S1-7-9-1, % where
    _S1 = _S1-6-8-0-6-2-8,
Y = _S2-1-6-1-5-4-6-1, % where
    _S2 = _S2-0-9-2,
Z = _S3-3-0, % where
    _S3 = _S3-8-1
The Fuzzer 2 from 2025 does just what I did in 2023,
expanding rational numbers into rational terms:
% fuzzy(-Term)
fuzzy(X) :-
   random_between(1, 100, A),
   random_between(1, 100, B),
   random_between(1, 10, M),
   fuzzy_chunk(M, A, B, C, X, Y),
   random_between(1, 10, L),
   fuzzy_chunk(L, C, B, _, Y, Z),
   Z = Y.

% fuzzy_chunk(+Integer, +Integer, +Integer, -Integer, +Term, -Term)
fuzzy_chunk(0, A, _, A, X, X) :- !.
fuzzy_chunk(N, A, B, C, Y-D, X) :-
   M is N-1,
   D is A // B,
   H is 10*(A - B*D),
   fuzzy_chunk(M, H, B, C, Y, X).
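For illustration only, the long-division step that fuzzy_chunk/6 performs (digit D is A // B, carry H is 10*(A - B*D)) can be sketched in Python; the helper name expand is my own, not from the fuzzer:

```python
# Long-division digit expansion of a rational a/b, mirroring the
# arithmetic in fuzzy_chunk/6: emit digit d = a // b, then carry
# the remainder forward as 10*(a - b*d).
def expand(a, b, n):
    """Return the first n long-division digits of a/b."""
    digits = []
    for _ in range(n):
        d = a // b
        digits.append(d)
        a = 10 * (a - b * d)
    return digits

# 1/7 = 0.142857142857...; the carry eventually revisits an earlier
# value, so the digit stream becomes periodic.
print(expand(1, 7, 7))  # → [0, 1, 4, 2, 8, 5, 7]
```

Since the carry can only take finitely many values, the digit sequence of any rational is eventually periodic, which is why a cyclic term can represent it finitely.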
Bye
Hi,
Ok, I have to correct myself: "Rational Term" was less
common; what was more in use was "Rational Trees",
but they might have also talked about finitely
represented infinite trees. Rational trees are themselves
probably an echo of Dmitry Mirimanoff's
(1861–1945) "extraordinaire" sets.
Dmitry Semionovitch Mirimanoff (Russian:
Дмитрий Семёнович Мириманов; 13 September 1861,
Pereslavl-Zalessky, Russia – 5 January 1945, Geneva,
Switzerland) was a member of the Moscow Mathematical
Society in 1897.[1] He later became a doctor of
mathematical sciences in 1900, in Geneva, and
taught at the universities of Geneva and Lausanne.
https://en.wikipedia.org/wiki/Dmitry_Mirimanoff
This year we can again celebrate another researcher,
who died in 2023: Peter Aczel, R.I.P., who likewise
made some thoughtful deviations from orthodoxy.
The Peter Aczel Memorial Conference is on 10th September 2025;
the Logic Colloquium will take place at the University
of Manchester (UK) from 11th to 12th September 2025.
https://sites.google.com/view/blc2025/home
Have Fun!
Bye
Hi,
I am trying to verify my hypothesis
that Rocq is a dead horse. Dead
horses can come in different forms;
for example, a project that just
imitates what was already done by
its precursor is most likely a
dead horse. For example, MetaRocq:
verifying a logic framework inside
some strong enough set theory
is not novel. Maybe they get more
out of doing MetaRocq:
MetaRocq is a project formalizing Rocq in Rocq
https://github.com/MetaRocq/metarocq#papers
#50 Nicolas Tabareau
https://www.youtube.com/watch?v=8kwe24gvigk
Bye
You do need a theory of terms, and a specific one.