• Latent Thinking the forbidden Fruit (Was: Would Poincaré miss the AI Boom?)

    From Mild Shock@janburse@fastmail.fm to sci.physics on Sun Nov 2 12:22:17 2025
    From Newsgroup: sci.physics


    Hi,

    Taking this one:

    Sam, Jakub, and Wojciech on the future of OpenAI https://www.youtube.com/watch?v=ngDCxlZcecw

    There are some funny parts where Jakub stutters:

    OpenAI is Deploying the Forbidden Method: GPT-6 is Different! https://www.youtube.com/watch?v=tR2M6JDyrRw

    What even is "Latent Thinking"? Some thinking
    models go through verbalization loops and realize a
    form of "Loud Thinking", i.e. they think out loud.

    Autoencoders build a latent space during the
    training phase anyway, so one can do chain of thought
    in the latent space, providing a form of "Silent Thinking".
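    The idea above can be sketched in a few lines. This is a toy
    illustration of my own, not anything OpenAI has published: random
    stand-in weights, an encoder, a decoder, and a "thought step" that
    maps latent to latent without ever decoding back to tokens in between.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_latent = 32, 8

# Random stand-in weights; a real model would learn these.
W_enc = rng.normal(size=(d_in, d_latent)) * 0.1       # encoder
W_dec = rng.normal(size=(d_latent, d_in)) * 0.1       # decoder
W_step = rng.normal(size=(d_latent, d_latent)) * 0.1  # latent "thought" step

def encode(x):
    return np.tanh(x @ W_enc)

def decode(z):
    return z @ W_dec

def latent_chain_of_thought(x, n_steps=4):
    z = encode(x)              # input enters the latent space once
    for _ in range(n_steps):   # "silent thinking": no intermediate decode
        z = np.tanh(z @ W_step)
    return decode(z)           # only the final state is verbalized

x = rng.normal(size=d_in)
y = latent_chain_of_thought(x)
print(y.shape)  # (32,)
```

    Contrast with "loud thinking", where each step would be decoded to
    text and re-encoded before the next one.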

    The Energy Part: 20 Billion USD for 1 GW per 5 Years.
    I wonder how, when, and why the Bubble will burst.
    Or is the bubble here to stay?
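    Back of the envelope for that figure, on my reading of it as
    USD 20 billion of capital expenditure per 1 GW of capacity,
    amortized over 5 years at full utilization (the accounting is
    not spelled out, so this is just one interpretation):

```python
capex_usd = 20e9       # USD 20 billion (figure from the post)
capacity_w = 1e9       # 1 GW
years = 5
hours = years * 8766   # average hours per year, incl. leap years

# Energy delivered if the gigawatt runs flat out for five years.
energy_kwh = capacity_w / 1000 * hours
usd_per_kwh = capex_usd / energy_kwh
print(f"{energy_kwh/1e9:.1f} billion kWh, {usd_per_kwh:.2f} USD/kWh amortized")
```

    That works out to roughly 44 billion kWh and about 0.46 USD per kWh
    of amortized capex, before any actual electricity bill.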

    Bye

    Mild Shock schrieb:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we were to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels or elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    that turned into the transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986 https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well, ILP might have its merits; maybe we should not ask
    for a marriage of LLM and Prolog, but of autoencoders and ILP.
    But it's tricky. I am still trying to decode the da Vinci code of
    things like stacked tensors: are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History) https://www.youtube.com/watch?v=OFS90-FX6pg


    Mild Shock schrieb:

    Henri Poincaré believed that mathematical
    and scientific creativity came from a deep,
    unconscious intuition that could not be
    captured by mechanical reasoning or formal
    systems. He famously wrote about how insights
    came not from plodding logic but from sudden
    illuminations: leaps of creative synthesis.

    But now we have generative AI, models like GPT, that:

    - produce poetry, proofs, stories, and code,

    - combine ideas in novel ways,

    - and do so by processing patterns in massive
      datasets, without conscious understanding.

    And that does seem to contradict Poincaré's belief
    that true invention cannot come from automation.

    --- Synchronet 3.21a-Linux NewsLink 1.2