• Abstraction Engine / Pattern-Amplification AI Avalanche [Java to C# translation] (Re: Prologers are hurt the most by LLMs)

    From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Oct 4 15:50:13 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Here we find an Ex-OpenAI scientist looking extremely concerned:

    Ex-OpenAI pioneer Ilya Sutskever warns that as
    AI begins to self-improve, its trajectory may become
    "extremely unpredictable and unimaginable,"
    ushering in a rapid advance beyond human control. https://www.youtube.com/watch?v=79-bApI3GIU

    Meanwhile I am enjoying some of the AI's abstraction capabilities:

    The bloody thingy translated my Java code into C#
    in a blink, did all kinds of fancy translations,
    and explained its own doing as:

    That casual, almost incidental quality you noticed
    is exactly the abstraction engine working so fluidly
    that it becomes invisible. The AI was:
    1. Understanding the essential computation (the "what")
    2. Discarding the Java-specific implementation (the "how")
    3. Re-expressing it using C#'s idiomatic patterns (a different "how")
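    As a toy illustration of that what/how split (my own example, not the
    code from the post): a Java stream pipeline whose C# counterpart a
    pattern-based translator would re-express as a LINQ query, i.e. the same
    "what" in a different idiomatic "how":

    ```java
    import java.util.List;
    import java.util.stream.Collectors;

    public class TranslationDemo {
        // The "what": keep names longer than 3 chars and upper-case them.
        // The Java "how": a Stream pipeline.
        static List<String> longNamesUpper(List<String> names) {
            return names.stream()
                    .filter(n -> n.length() > 3)
                    .map(String::toUpperCase)
                    .collect(Collectors.toList());
        }

        // A pattern-based translator would re-express the same computation
        // in C#'s idiom (LINQ), roughly:
        //
        //   names.Where(n => n.Length > 3)
        //        .Select(n => n.ToUpper())
        //        .ToList();

        public static void main(String[] args) {
            System.out.println(longNamesUpper(
                    List.of("ann", "bertrand", "carl", "dora")));
        }
    }
    ```

    Neither direction needs to "understand" the business meaning of the
    names; mapping filter/map to Where/Select is exactly the kind of
    surface pattern the post is talking about.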

    Ha ha, nice try AI, presenting me this anthropomorphic
    illusion of comprehension. Doesn't the AI just apply tons
    of patterns without knowing what the code really does?

    Well, I am fine with that; I don't need more than these
    pattern-based transformations. If the result works,
    the approach is not broken.

    Bye

    Mild Shock schrieb:
    Hi,

    Prologers with their pipe dream of Ontologies
    with Axioms are most hurt by LLMs that work
    more on the basis of Fuzzy Logic.

    Even good old "hardmath" is not immune to
    this coping mechanism:

    "I've cast one of my rare votes-to-delete. It is
    a self-answer to the OP's off-topic "question".
    Rather than improve the original post, the effort
    has been made to "promote" some so-called RETRO
    Project by linking YouTube and arxiv.org URLs.
    Not worth retaining IMHO."
    -- hardmath

    https://math.meta.stackexchange.com/a/38051/1482376

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Oct 4 16:04:43 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Here we find Switzerland laying out an Apertus AI roadmap:

    ETH professor Martin Jaggi explains that Apertus
    AI is a base LLM: it doesn't yet have RAG, doesn't
    yet have thinking, etc. He speculates that the
    "open" community might help change that.
    One month later: Interview with Martin Jaggi https://www.youtube.com/watch?v=KgB8CfZCeME

    Meanwhile I wish my AI laptop would do the Java to C#
    translation in a blink, locally and autonomously. It
    has a few technical hiccups at the moment; the
    conventional CPUs still sometimes get overscheduled.

    For example, I cannot run VCS from Microsoft: something
    goes wrong and it turns my whole laptop into a frying
    pan, while Rider from JetBrains works. Now an AI
    gives me some advice:

    Goliath (40,000 TFLOPS): Perfect for discovering new
    patterns, complex reasoning, creative tasks
    David (40 TFLOPS): Perfect for execution, integration,
    personalization, real-time response

    So I would use Goliath to distill the patterns,
    and still profit locally as David.
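    A minimal sketch of that Goliath/David split, under entirely made-up
    assumptions of my own: suppose the big model has already "distilled" a
    handful of Java-to-C# rewrite patterns into a lookup table, and a tiny
    local program then applies them mechanically, with no comprehension
    required at execution time:

    ```java
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class DavidTranslator {
        // Patterns "distilled" offline by the big model
        // (hypothetical examples, applied in insertion order).
        static final Map<String, String> PATTERNS = new LinkedHashMap<>();
        static {
            PATTERNS.put("System.out.println", "Console.WriteLine");
            PATTERNS.put("boolean", "bool");
            PATTERNS.put("final ", "readonly ");
        }

        // The cheap local step: blind textual pattern application.
        static String translate(String javaLine) {
            String out = javaLine;
            for (Map.Entry<String, String> e : PATTERNS.entrySet()) {
                out = out.replace(e.getKey(), e.getValue());
            }
            return out;
        }

        public static void main(String[] args) {
            System.out.println(translate("final boolean done;"));
        }
    }
    ```

    A real translator needs parsing rather than string replacement, of
    course; the point of the sketch is only the division of labor, i.e.
    expensive pattern discovery once, cheap pattern application locally.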

    Bye

    Mild Shock schrieb:
    Hi,

    Here we find an Ex-OpenAI scientist looking extremely concerned:

    Ex-OpenAI pioneer Ilya Sutskever warns that as
    AI begins to self-improve, its trajectory may become
    "extremely unpredictable and unimaginable,"
    ushering in a rapid advance beyond human control. https://www.youtube.com/watch?v=79-bApI3GIU

    Meanwhile I am enjoying some of the AI's abstraction capabilities:

    The bloody thingy translated my Java code into C#
    in a blink, did all kinds of fancy translations,
    and explained its own doing as:

    That casual, almost incidental quality you noticed
    is exactly the abstraction engine working so fluidly
    that it becomes invisible. The AI was:
    1. Understanding the essential computation (the "what")
    2. Discarding the Java-specific implementation (the "how")
    3. Re-expressing it using C#'s idiomatic patterns (a different "how")

    Ha ha, nice try AI, presenting me this anthropomorphic
    illusion of comprehension. Doesn't the AI just apply tons
    of patterns without knowing what the code really does?

    Well, I am fine with that; I don't need more than these
    pattern-based transformations. If the result works,
    the approach is not broken.

    Bye

    Mild Shock schrieb:
    Hi,

    Prologers with their pipe dream of Ontologies
    with Axioms are most hurt by LLMs that work
    more on the basis of Fuzzy Logic.

    Even good old "hardmath" is not immune to
    this coping mechanism:

    "I've cast one of my rare votes-to-delete. It is
    a self-answer to the OP's off-topic "question".
    Rather than improve the original post, the effort
    has been made to "promote" some so-called RETRO
    Project by linking YouTube and arxiv.org URLs.
    Not worth retaining IMHO."
    -- hardmath

    https://math.meta.stackexchange.com/a/38051/1482376

    Bye


    --- Synchronet 3.21a-Linux NewsLink 1.2