• Latent Thinking, the Forbidden Fruit (Was: Prolog totally missed the AI Boom)

    From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Nov 2 11:58:55 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Taking this one:

    Sam, Jakub, and Wojciech on the future of OpenAI https://www.youtube.com/watch?v=ngDCxlZcecw

    There are some funny parts where Jakub stutters:

    OpenAI is Deploying the Forbidden Method: GPT-6 is Different! https://www.youtube.com/watch?v=tR2M6JDyrRw

    The energy part: 20 billion USD for 1 GW, over 5 years.
    I wonder how, when, and why the bubble will burst.
    Or is the bubble here to stay?
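
    A quick back-of-the-envelope check, assuming the figure
    means 20 billion USD of capex per GW of capacity,
    amortized over 5 years (my reading of the numbers, not
    a claim from the talk):

    % Back-of-the-envelope sketch; the interpretation of
    % the figures is my own assumption, not from the talk.
    cost_per_kwh(USDPerKWh) :-
        Capex = 2.0e10,               % 20 billion USD
        Power = 1.0e9,                % 1 GW in watts
        Hours is 5 * 365.25 * 24,     % 5 years in hours
        KWh is Power * Hours / 1000,  % ~4.38e10 kWh
        USDPerKWh is Capex / KWh.     % ~0.46 USD per kWh

    So the figure implies roughly 0.46 USD per kWh, all-in,
    which gives a feeling for the scale of the bet.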

    Bye

    Mild Shock wrote:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still, they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we were to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.
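
    To make the noise problem concrete, here is a minimal
    sketch in Prolog (my own toy encoding, not from the
    paper): under exact coverage semantics every pixel of
    an example must be entailed by the hypothesis, so a
    single noisy pixel is enough to reject the intended
    program.

    % Toy pixel grid; pixel(Example, X, Y) means the cell is on.
    pixel(e1, 1, 1).  pixel(e1, 1, 2).  pixel(e1, 1, 3).  % a letter stroke
    pixel(e1, 7, 7).                                      % one noisy pixel

    % Intended hypothesis: only the stroke in column 1 is on.
    hypothesis(X, Y) :- X =:= 1, between(1, 3, Y).

    % Exact coverage: every pixel of the example must be
    % derivable, so the noisy pixel at (7,7) kills it.
    covers(E) :- forall(pixel(E, X, Y), hypothesis(X, Y)).

    % ?- covers(e1).   fails, solely because of pixel(e1,7,7).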

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into a transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986 https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well, ILP might have its merits; maybe we should not ask
    for a marriage of LLM and Prolog, but of autoencoders and ILP.
    But it's tricky: I am still trying to decode the da Vinci code
    of things like stacked tensors. Are they related to k-literal
    clauses?
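
    One way to see why the analogy is tempting (a toy
    encoding of my own, nothing authoritative): a clause
    with at most K body literals fits into a fixed-shape
    term, padded with true/0, the same trick that makes
    ragged data fit a stacked tensor of fixed dimensions.

    % Pad a clause with at most K body literals to exactly
    % K slots, so all clauses share one fixed shape.
    clause_slots(K, clause(Head, Body), clause(Head, Padded)) :-
        length(Body, N), N =< K,
        Pad is K - N,
        length(Extra, Pad),
        maplist(=(true), Extra),
        append(Body, Extra, Padded).

    % ?- clause_slots(3, clause(p(X), [q(X)]), C).
    % C = clause(p(X), [q(X), true, true]).
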
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History) https://www.youtube.com/watch?v=OFS90-FX6pg


    --- Synchronet 3.21a-Linux NewsLink 1.2