• OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws

    From Lawrence D’Oliveiro@ldo@nz.invalid to comp.misc on Fri Sep 19 09:20:45 2025
    From Newsgroup: comp.misc

    Researchers at OpenAI have come to the conclusion, after a careful
    mathematical analysis of the nature of those “large-language models”
    that are all the rage nowadays, that the risk of hallucinations is a
    fundamental, unavoidable characteristic of those models
    <https://www.computerworld.com/article/4059383/openai-admits-ai-hallucinations-are-mathematically-inevitable-not-just-engineering-flaws.html>:

    The researchers demonstrated that hallucinations stemmed from
    statistical properties of language model training rather than
    implementation flaws. The study established that “the generative
    error rate is at least twice the IIV misclassification rate,”
    where IIV referred to “Is-It-Valid”, and demonstrated mathematical
    lower bounds that prove AI systems will always make a certain
    percentage of mistakes, no matter how much the technology
    improves.
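
    For what it’s worth, the quoted bound is just an inequality between
    two error rates. A minimal restatement in LaTeX, assuming err_gen
    denotes the rate at which the model generates invalid outputs and
    err_IIV its misclassification rate on the binary “Is-It-Valid” task
    (the symbol names are mine, not necessarily the paper’s):

        \mathrm{err}_{\mathrm{gen}} \;\ge\; 2 \cdot \mathrm{err}_{\mathrm{IIV}}

    The intuition, as I read it, is that generating a valid answer is
    at least as hard as recognising one, so any residual confusion
    about validity shows up, doubled, as generation errors.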

    Examples of these problems can be quite embarrassing:

    The researchers demonstrated their findings using state-of-the-art
    models, including those from OpenAI’s competitors. When asked “How
    many Ds are in DEEPSEEK?” the DeepSeek-V3 model with 600 billion
    parameters “returned ‘2’ or ‘3’ in ten independent trials” while
    Meta AI and Claude 3.7 Sonnet performed similarly, “including
    answers as large as ‘6’ and ‘7.’”
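
    The question itself is trivially checkable, of course. A one-liner
    in Python (the snippet is mine, not from the article) returns the
    only correct answer:

        # Exact character-level count: str.count() scans the string
        # directly, so it cannot be thrown off the way a token-based
        # language model can.
        word = "DEEPSEEK"
        print(word.count("D"))  # -> 1

    One commonly offered explanation for failures like this is that
    LLMs see subword tokens rather than individual characters, so a
    letter count is never directly observable to them.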

    I can’t believe they were serious about this, though:

    “Unlike human intelligence, it lacks the humility to acknowledge
    uncertainty,” said Neil Shah, VP for research and partner at
    Counterpoint Technologies.

    As we all know, there are *plenty* of humans who lack such humility!
    That’s where concepts like “ideology” and “religion” come in ...
    --- Synchronet 3.21a-Linux NewsLink 1.2