Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still, they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we were to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
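To make the noise point concrete, here is a toy acceptance criterion, entirely my own sketch and not from the paper: accept a candidate program if its rendered bitmap disagrees with each example on at most a few pixels. It only covers flipped pixels; the shifted and elongated letters of Fig. 1 would need an invariance that plain pixel disagreement does not give.

```python
import numpy as np

# Toy acceptance test (my own sketch, not from the paper): treat the
# learned program's output as a bitmap and accept it if each noisy
# example disagrees on at most `tolerance` pixels.

def pixel_disagreement(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.sum(a != b))

def accepts(rendering: np.ndarray, examples, tolerance: int = 3) -> bool:
    return all(pixel_disagreement(rendering, ex) <= tolerance
               for ex in examples)

clean = np.zeros((8, 16), dtype=np.uint8)     # stand-in for a clean "ILP" bitmap
noisy = clean.copy()
noisy[2, 5] = noisy[6, 11] = noisy[0, 3] = 1  # the "3 noisy pixels" case

print(accepts(clean, [noisy]))  # True: 3 flipped pixels are within tolerance
```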
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986 https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well, ILP might have its merits; maybe we should not ask
for a marriage of LLMs and Prolog, but of autoencoders and ILP.
But it's tricky. I am still trying to decode the da Vinci code of
things like stacked tensors: are they related to k-literal clauses?
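On the stacked-tensor question there is at least one concrete bridge I know of: differentiable ILP (Evans & Grefenstette 2018, "Learning Explanatory Rules from Noisy Data") encodes each predicate over n constants as a valuation tensor, and the body of a k-literal clause becomes a stacked tensor operation. A minimal numpy sketch of that idea, with a 4-constant toy domain and predicate names of my own choosing:

```python
import numpy as np

# Sketch of the dILP-style encoding (Evans & Grefenstette, 2018),
# with my own toy domain of 4 constants. A binary predicate is a 4x4
# valuation tensor; the 2-literal clause
#     path(X,Y) :- edge(X,Z), path(Z,Y)
# is evaluated by stacking: broadcast over Z, fuzzy-AND with min,
# then existentially quantify Z with max.

n = 4
edge = np.zeros((n, n))
edge[0, 1] = edge[1, 2] = edge[2, 3] = 1.0    # chain 0 -> 1 -> 2 -> 3

path = edge.copy()                            # base case: path(X,Y) :- edge(X,Y)
for _ in range(n):                            # forward chaining to a fixpoint
    body = np.minimum(edge[:, :, None],       # edge(X,Z), shape (X, Z, 1)
                      path[None, :, :])       # path(Z,Y), shape (1, Z, Y)
    derived = body.max(axis=1)                # exists Z: shape (X, Y)
    path = np.maximum(path, derived)          # fuzzy OR with what we had

print(path[0, 3])  # 1.0 -- path(0,3) derived through two chaining steps
```

Fittingly, robustness to noisy examples is exactly what that paper was after, which is the complaint of Fig. 1 above.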
The Jordan paper I referenced is covered in this excellent video:
The Making of ChatGPT (35 Year History) https://www.youtube.com/watch?v=OFS90-FX6pg
Henri Poincaré believed that mathematical
and scientific creativity came from a deep,
unconscious intuition that could not be
captured by mechanical reasoning or formal
systems. He famously wrote about how insights
came not from plodding logic but from sudden
illuminations: leaps of creative synthesis.
But now we have generative AI (models like GPT) that:
- produce poetry, proofs, stories, and code,
- combine ideas in novel ways,
- and do so by processing patterns in massive
  datasets, without conscious understanding.
And that does seem to contradict Poincaré's belief
that true invention cannot come from automation.