ETH professor Martin Jaggi explains that Apertus
AI is a base LLM: it doesn't yet have RAG, doesn't
yet have thinking, and so on. He speculates that
the "open" community might help change that.
One month later, an interview with Martin Jaggi:
https://www.youtube.com/watch?v=KgB8CfZCeME
Goliath (40,000 TFLOPS): Perfect for discovering new
patterns, complex reasoning, creative tasks
David (40 TFLOPS): Perfect for execution, integration,
personalization, real-time response
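A hypothetical sketch of that division of labor,
just to make the split concrete; the model names,
the task taxonomy, and the routing rule below are
all my own assumptions:

import java.util.Set;

public class ModelRouter {

    enum Task { DISCOVERY, REASONING, CREATIVE,
                EXECUTION, INTEGRATION,
                PERSONALIZATION, REALTIME }

    // open-ended work that is worth the latency
    // and cost of the 40,000 TFLOPS model
    static final Set<Task> HEAVY =
        Set.of(Task.DISCOVERY, Task.REASONING, Task.CREATIVE);

    static String route(Task task) {
        return HEAVY.contains(task)
            ? "Goliath (40,000 TFLOPS)"
            : "David (40 TFLOPS)";
    }

    public static void main(String[] args) {
        System.out.println(route(Task.REASONING)); // Goliath
        System.out.println(route(Task.REALTIME));  // David
    }
}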
Hi,
Here we find an ex-OpenAI scientist looking
extremely concerned:
Ex-OpenAI pioneer Ilya Sutskever warns that as
AI begins to self-improve, its trajectory may become
"extremely unpredictable and unimaginable,"
ushering in a rapid advance beyond human control.
https://www.youtube.com/watch?v=79-bApI3GIU
Meanwhile I am enjoying some of the AI's
abstraction capabilities:
The bloody thingy translated my Java code into C#
in a blink, did all kinds of fancy conversion,
and explains its own doing as:
That casual, almost incidental quality you noticed
is exactly the abstraction engine working so fluidly
that it becomes invisible. The AI was:
1. Understanding the essential computation (the "what")
2. Discarding the Java-specific implementation (the "how")
2. Re-expressing it using C#'s idiomatic patterns (a different "how")
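To make that concrete, here is a toy illustration
of such a rewrite; the code is my own example, not
what it actually translated. The same "what"
(summing the squares of the even numbers) is
expressed in Java's Stream idiom, with the
idiomatic C# LINQ counterpart in comments:

import java.util.List;

public class SumOfEvenSquares {

    // Java "how": a Stream pipeline
    static int sumOfEvenSquares(List<Integer> xs) {
        return xs.stream()
                 .filter(x -> x % 2 == 0)
                 .map(x -> x * x)
                 .reduce(0, Integer::sum);
    }

    // C# "how" (the same "what"), as LINQ:
    //   static int SumOfEvenSquares(IEnumerable<int> xs) =>
    //       xs.Where(x => x % 2 == 0)
    //         .Select(x => x * x)
    //         .Sum();

    public static void main(String[] args) {
        // evens 2 and 4, squares 4 and 16, sum 20
        System.out.println(sumOfEvenSquares(List.of(1, 2, 3, 4)));
    }
}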
Ha ha, nice try AI, presenting me with this
anthropomorphic illusion of comprehension. Doesn't
the AI just apply tons of patterns without knowing
what the code really does?
Well, I am fine with that; I don't need more than
these pattern-based transformations. If the result
works, the approach is not broken.
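For contrast, here is what pattern application at
its crudest could look like: a lookup table of
surface rewrites with no model at all of what the
code computes. A purely hypothetical toy; the real
systems are obviously far richer:

import java.util.Map;

public class NaiveRewriter {

    // surface-level Java -> C# substitutions,
    // no semantics involved anywhere
    static final Map<String, String> RULES = Map.of(
        "System.out.println", "Console.WriteLine",
        "boolean", "bool",
        "String", "string"
    );

    static String rewrite(String source) {
        for (var rule : RULES.entrySet()) {
            source = source.replace(rule.getKey(), rule.getValue());
        }
        return source;
    }

    public static void main(String[] args) {
        // prints: Console.WriteLine("hi");
        System.out.println(rewrite("System.out.println(\"hi\");"));
    }
}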
Bye
Mild Shock wrote:
Hi,
Prologers, with their pipe dream of Ontologies
with Axioms, are hurt most by LLMs, which work
more on the basis of Fuzzy Logic.
Even good old "hardmath" is not immune to
this coping mechanism:
"I've cast one of my rare votes-to-delete. It is
a self-answer to the OP's off-topic "question".
Rather than improve the original post, the effort
has been made to "promote" some so-called RETRO
Project by linking YouTube and arxiv.org URLs.
Not worth retaining IMHO.
-- hardmath
https://math.meta.stackexchange.com/a/38051/1482376
Bye