Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still, they show this example:
Fig. 1: ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we were to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into a transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well, ILP might have its merits; maybe we should not ask
for a marriage of LLMs and Prolog, but of autoencoders and ILP.
But it's tricky: I am still trying to decode the da Vinci code of
things like stacked tensors. Are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History) https://www.youtube.com/watch?v=OFS90-FX6pg
Looks like sorting of rational trees
needs an existential type, if we go full "logical".
If I use my old code from 2023 which computes
a finest (*), i.e. non-monster, bisimulation
pre-quotient (**) in prefix order:
% factorize(T, Context, V)//: emits an equation V = Skeleton for each
% distinct compound subterm of T, reusing the variable of a compound
% already present in Context (checked with ==), so cyclic terms yield
% a finite list of equations.
factorize(T, _, T) --> {var(T)}, !.
factorize(T, C, V) --> {compound(T), member(S-V, C), S == T}, !.
factorize(T, C, V) --> {compound(T)}, !,
    [V = S],
    {T =.. [F|L]},
    factorize_list(L, [T-V|C], R),
    {S =.. [F|R]}.
factorize(T, _, T) --> [].
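The helper factorize_list//3 is not shown in the snippet; a minimal
sketch (my reading, not the original code) that simply threads the same
context through all argument positions, and that is consistent with the
queries below, could be:
% Sketch only: factorize each argument against the same context,
% collecting the replacement terms.
factorize_list([], _, []) --> [].
factorize_list([T|L], C, [V|R]) -->
    factorize(T, C, V),
    factorize_list(L, C, R).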
I see that it always generates new
intermediate variables:
?- X = f(f(X)), factorize(X, [], T, L, []), write(L-T), nl.
[_8066=f(_8066)]-_8066
?- X = f(f(X)), factorize(X, [], T, L, []), write(L-T), nl.
[_10984=f(_10984)]-_10984
It would be swell if it generated an
existential quantifier, something like T^([T = f(T)]-T)
in the above case. Then, using alpha conversion,
different factorization runs would be equal
when they differ only in the introduced
intermediate variables. But Prolog has no alpha
conversion; only λProlog has such things.
So what can we do? How can we produce a
representation that can be used for sorting?
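One workaround, purely a sketch and not a real existential quantifier:
copy the result and give the fresh variables canonical names in order
of appearance with numbervars/3 (assuming it is available), so that two
runs that differ only in the intermediate variables produce literally
identical ground keys. The predicate name canonical_factorization is
just made up for illustration:
% Sketch only: number the intermediate variables in order of
% appearance; copy_term/2 keeps the original answer untouched.
canonical_factorization(X, Key) :-
    factorize(X, [], T, L, []),
    copy_term(L-T, Key),
    numbervars(Key, 0, _).
Both queries above would then yield the same key, so the keys compare
equal under compare/3 and can be sorted with msort/2.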
(*) Why finest and not coarsest? Because it uses
non-monster instructions and not monster
instructions.
(**) Why only a pre-quotient? Because a
XXX_with_stack algorithm does not fully
deduplicate the equations; one would
probably need a XXX_with_memo algorithm.
Hi,
So do we see a new wave of interest in bisimulation,
especially in computing existential types for all
kinds of things? It seems so; quite a fascinating find:
BQ-NCO: Bisimulation Quotienting for Efficient
Neural Combinatorial Optimization
https://arxiv.org/abs/2301.03313
It has none other than Jean-Marc Andreoli on the
author list. Possibly the same guy from the earlier
work on focusing and linear logic, who was associated with
ECRC Munich in the 1990s, but is now working for naverlabs.com.
Bye
Hi,
You do need a theory of terms, and a specific one.
You could pull an anti-Ackermann: negate the
infinity axiom like Ackermann did here, where
he also kept the regularity axiom:
Die Widerspruchsfreiheit der allgemeinen Mengenlehre
(The Consistency of General Set Theory)
Ackermann, Wilhelm - 1937
https://www.digizeitschriften.de/id/235181684_0114%7Clog23
But instead of Ackermann, you get an anti-(foundation)
Ackermann if you drop the regularity axiom. The result: you
get a lot of exotic sets, among which are also the
famous Quine atoms:
x = {x}
Funny that in the setting I just described, where
the infinity axiom is negated, i.e.
all sets are finite, contrary to the usual vulgar
view, x = {x} is a finite object. Just like in Prolog,
X = f(X) is in principle a finite object; it has
only one subtree, or as Alain Colmerauer
already postulated:
Definition: a "rational" tree is a tree which
has a finite set of subtrees.
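To illustrate Colmerauer's definition, here is a small sketch (not
from the original posts, predicate names made up) that collects the
distinct subtrees of a possibly cyclic term, stopping when a subtree
has already been visited; this assumes ==/2 terminates on rational
trees, as the queries further up suggest it does:
% Sketch only: subtrees(Term, Set) collects the distinct subtrees
% of a (possibly cyclic) term; the member/2 plus == check stops the
% descent once a subtree has been seen, so it terminates on
% rational trees.
subtrees(T, Set) :-
    subtrees(T, [], Set).

subtrees(T, Seen, Seen) :-
    member(S, Seen), S == T, !.
subtrees(T, Seen, Set) :-
    compound(T), !,
    T =.. [_|Args],
    subtrees_list(Args, [T|Seen], Set).
subtrees(T, Seen, [T|Seen]).

subtrees_list([], Set, Set).
subtrees_list([A|As], Seen, Set) :-
    subtrees(A, Seen, Set0),
    subtrees_list(As, Set0, Set).
A query like ?- X = f(X), subtrees(X, S), length(S, N) should then
report N = 1, matching the "only one subtree" remark above.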
Bye