ETH professor Martin Jaggi explains that Apertus
AI is a base LLM: it doesn't yet have RAG, doesn't
yet have thinking, etc. He speculates that the
"open" community might help change that.
One month later: Interview with Martin Jaggi https://www.youtube.com/watch?v=KgB8CfZCeME
Goliath (40,000 TFLOPS): Perfect for discovering new
patterns, complex reasoning, creative tasks
David (40 TFLOPS): Perfect for execution, integration,
personalization, real-time response
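One way to read that split, sketched below purely for illustration (the task categories and the routing rule are my own, nothing here comes from the posts): heavyweight discovery and reasoning calls go to the Goliath-class model, while latency-sensitive work stays on the David-class one.

import java.util.Set;

// Illustrative dispatcher for a Goliath-and-David setup: discovery, complex
// reasoning and creative work go to the big model, while execution,
// integration, personalization and real-time response stay on the small one.
final class ModelRouter {
    enum Task { DISCOVERY, COMPLEX_REASONING, CREATIVE,
                EXECUTION, INTEGRATION, PERSONALIZATION, REALTIME }

    private static final Set<Task> GOLIATH_TASKS =
            Set.of(Task.DISCOVERY, Task.COMPLEX_REASONING, Task.CREATIVE);

    static String route(Task task) {
        return GOLIATH_TASKS.contains(task)
                ? "Goliath (40,000 TFLOPS)"
                : "David (40 TFLOPS)";
    }

    public static void main(String[] args) {
        System.out.println(route(Task.CREATIVE));  // Goliath (40,000 TFLOPS)
        System.out.println(route(Task.REALTIME));  // David (40 TFLOPS)
    }
}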
Hi,
Here we find an ex-OpenAI scientist looking extremely concerned:
Ex-OpenAI pioneer Ilya Sutskever warns that as
AI begins to self-improve, its trajectory may become
"extremely unpredictable and unimaginable,"
ushering in a rapid advance beyond human control. https://www.youtube.com/watch?v=79-bApI3GIU
Meanwhile I am enjoying some of the AI's abstraction capabilities:
The bloody thingy translated my Java code into C#
in a blink, did all kinds of fancy translation,
and explained its own doing as:
That casual, almost incidental quality you noticed
is exactly the abstraction engine working so fluidly
that it becomes invisible. The AI was:
1. Understanding the essential computation (the "what")
2. Discarding the Java-specific implementation (the "how")
3. Re-expressing it using C#'s idiomatic patterns (a different "how")
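To make those three steps concrete, here is a purely hypothetical example of the kind of translation being described; the word-count snippet and the C# rendering are mine, not the code from this exchange. The "what" is a word-frequency table; the Java "how" uses streams and Collectors, and an idiomatic C# "how" would use LINQ.

import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

public class WordFreq {
    // The essential computation (the "what"): count how often each word occurs.
    public static Map<String, Long> count(String text) {
        // Java-specific "how": streams plus Collectors.groupingBy/counting.
        return Arrays.stream(text.toLowerCase().split("\\s+"))
                .filter(w -> !w.isEmpty())
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        // Prints something like {saw=1, other=1, cat=2, the=2} (HashMap order varies).
        System.out.println(count("the cat saw the other cat"));
    }
}

// An idiomatic C# re-expression of the same "what", kept as a comment so this
// post sticks to one language:
//   text.ToLower()
//       .Split(' ', StringSplitOptions.RemoveEmptyEntries)
//       .GroupBy(w => w)
//       .ToDictionary(g => g.Key, g => g.LongCount());

The frequency table survives unchanged; only the "how" moves from Collectors to GroupBy/ToDictionary.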
Ha ha, nice try AI, presenting me with this anthropomorphic
illusion of comprehension. Doesn't the AI just apply tons
of patterns without knowing what the code really does?
Well, I am fine with that; I don't need more than these
pattern-based transformations. If the result works,
the approach is not broken.
Bye
Mild Shock wrote:
Hi,
That is extremely embarrassing. I don't know
what you are bragging about when you wrote
the below. You are wrestling with a ghost!
Maybe you didn't follow my superb link:
seemingly interesting paper. In
particular, his final coa[l]gebra theorem
The link behind Hopcroft and Karp (1971) that I
gave, which is a handout on Bisimulation and
Equirecursive Equality, has a coalgebra example
that I used to derive pairs.pl from:
https://www.cs.cornell.edu/courses/cs6110/2014sp/Lectures/lec35a.pdf
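For anyone who does not open the handout: the coalgebraic point is that two (possibly cyclic) structures are equal when some bisimulation relates them, and the Hopcroft-Karp trick is to build that relation on the fly, treating every pair currently under examination as provisionally equal. Below is a minimal sketch of that idea; the Node encoding and the labels are mine, and it is not a reconstruction of pairs.pl.

import java.util.*;

// Nodes of a possibly cyclic labeled graph. Two nodes are bisimilar when they
// carry the same label and their successors are pairwise bisimilar; cycles are
// handled coinductively by assuming any pair already under examination equal.
final class Node {
    final String label;
    final List<Node> next = new ArrayList<>();
    Node(String label) { this.label = label; }
}

final class Bisim {
    static boolean equal(Node a, Node b) {
        return equal(a, b, new HashSet<>());
    }

    private static boolean equal(Node a, Node b, Set<List<Node>> assumed) {
        // Node does not override equals/hashCode, so pairs compare by identity.
        List<Node> pair = List.of(a, b);
        if (assumed.contains(pair)) return true;   // coinductive hypothesis
        if (!a.label.equals(b.label)) return false;
        if (a.next.size() != b.next.size()) return false;
        assumed.add(pair);                         // assumption is never retracted
        for (int i = 0; i < a.next.size(); i++)
            if (!equal(a.next.get(i), b.next.get(i), assumed)) return false;
        return true;
    }

    public static void main(String[] args) {
        // Two finite presentations of the infinite term pair(0, pair(0, ...)):
        // a one-node loop versus a two-node loop. They are bisimilar, hence equal.
        Node x = new Node("pair(0,_)");
        x.next.add(x);
        Node y1 = new Node("pair(0,_)");
        Node y2 = new Node("pair(0,_)");
        y1.next.add(y2);
        y2.next.add(y1);
        System.out.println(Bisim.equal(x, y1));   // true
    }
}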
Bye
Mild Shock wrote:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we were to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into a transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well, ILP might have its merits; maybe we should not ask
for a marriage of LLMs and Prolog, but of autoencoders and ILP.
But it's tricky: I am still trying to decode the da Vinci code of
things like stacked tensors. Are they related to k-literal clauses?
The paper I referenced is found in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg