• Updated input to LLM systems proving HHH(DD)==0 within assumptions

    From olcott@polcott333@gmail.com to comp.theory,comp.lang.c++,comp.lang.c,comp.ai.philosophy on Sun Oct 12 08:50:05 2025
    From Newsgroup: comp.theory

    Please think this all the way through without making any guesses

    <Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    typedef int (*ptr)();
    int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  HHH(DD);
}

    What value should HHH(DD) correctly return?
    </Input to LLM systems>
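
For concreteness, here is one way conditions (a)-(c) could be realized in
plain C. This is a hypothetical sketch, not the thread's actual HHH: it
"simulates" by direct execution and treats any nested re-entry into HHH
as the non-terminating pattern of condition (a), standing in for whatever
instruction-level detection the real HHH performs. Under exactly these
assumptions, HHH(DD) returns 0:

#include <setjmp.h>
#include <stdio.h>

typedef int (*ptr)();

static jmp_buf abort_point;  /* where the outermost HHH resumes      */
static int simulating = 0;   /* nonzero while an input is under test */

int HHH(ptr P)
{
  if (simulating)             /* nested HHH(P): the repeating      */
    longjmp(abort_point, 1);  /* pattern of condition (a)          */
  simulating = 1;
  if (setjmp(abort_point)) {  /* resumed here after detection      */
    simulating = 0;
    return 0;                 /* (a)+(c): abort and return 0       */
  }
  P();                        /* stand-in for the simulation       */
  simulating = 0;
  return 1;                   /* (b): input reached its "return"   */
}

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  printf("HHH(DD) == %d\n", HHH(DD));  /* prints 0 under this sketch */
}

This only models the premises as stated; whether 0 is the right answer
for the directly executed DD() is exactly what the rest of the thread
disputes.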
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mr Flibble@flibble@red-dwarf.jmc.corp to comp.theory on Sun Oct 12 14:10:45 2025
    From Newsgroup: comp.theory

    On Sun, 12 Oct 2025 08:50:05 -0500, olcott wrote:

    Please think this all the way through without making any guesses

    <Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    typedef int (*ptr)();
    int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  HHH(DD);
}

    What value should HHH(DD) correctly return?
    </Input to LLM systems>

    Change the input substituting the words "Termination Analyzer" with the
    words "Halting Decider" and try again.

    /Flibble
    --
    meet ever shorter deadlines, known as "beat the clock"
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory on Sun Oct 12 10:19:59 2025
    From Newsgroup: comp.theory

    On 10/12/2025 9:50 AM, olcott wrote:
    Please think this all the way through without making any guesses

    <Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    These conditions make HHH not a halt decider because they are
    incompatible with the requirements:


    Given any algorithm (i.e. a fixed immutable sequence of instructions) X described as <X> with input Y:

    A solution to the halting problem is an algorithm H that computes the following mapping:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly
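
In symbols, the same mapping (a direct LaTeX transcription of the two
lines above, nothing added):

    H(\langle X \rangle, Y) =
    \begin{cases}
      1 & \text{if } X(Y) \text{ halts when executed directly} \\
      0 & \text{if } X(Y) \text{ does not halt when executed directly}
    \end{cases}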




    typedef int (*ptr)();
    int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  HHH(DD);
}

    What value should HHH(DD) correctly return?
    </Input to LLM systems>


    Error: assumes it's possible to design HHH to get a correct answer.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Sun Oct 12 10:42:47 2025
    From Newsgroup: comp.theory

    On 10/12/2025 9:10 AM, Mr Flibble wrote:
    On Sun, 12 Oct 2025 08:50:05 -0500, olcott wrote:

    Please think this all the way through without making any guesses

    <Input to LLM systems>
    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
    return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    typedef int (*ptr)();
    int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  HHH(DD);
}

    What value should HHH(DD) correctly return?
    </Input to LLM systems>

    Change the input substituting the words "Termination Analyzer" with the
    words "Halting Decider" and try again.

    /Flibble



    "Partial halt decider" because I do not claim
    to solve the halting problem, only correctly
    determine the halt status of the counter-example input.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Sun Oct 12 10:47:43 2025
    From Newsgroup: comp.theory

    On 10/12/2025 9:19 AM, dbush wrote:
    On 10/12/2025 9:50 AM, olcott wrote:
    Please think this all the way through without making any guesses

    <Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    These conditions make HHH not a halt decider because they are
    incompatible with the requirements:


    It is perfectly compatible with those requirements
    except in the case where the input calls its own
    simulating halt decider.


    Given any algorithm (i.e. a fixed immutable sequence of instructions) X described as <X> with input Y:

    A solution to the halting problem is an algorithm H that computes the following mapping:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly




    typedef int (*ptr)();
    int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  HHH(DD);
}

    What value should HHH(DD) correctly return?
    </Input to LLM systems>


    Error: assumes it's possible to design HHH to get a correct answer.


    HHH(DD) gets the correct answer within its set
    of assumptions / premises
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Bonita Montero@Bonita.Montero@gmail.com to comp.theory on Sun Oct 12 17:53:17 2025
    From Newsgroup: comp.theory

    Sorry, that's silly. You spend half your life discussing the
    same problem over and over again and never get to the end.

On 12.10.2025 at 15:50, olcott wrote:
    Please think this all the way through without making any guesses

    <Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    typedef int (*ptr)();
    int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  HHH(DD);
}

    What value should HHH(DD) correctly return?
    </Input to LLM systems>


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Sun Oct 12 11:00:46 2025
    From Newsgroup: comp.theory

    On 10/12/2025 10:53 AM, Bonita Montero wrote:
    Sorry, that's silly. You spend half your life discussing the
    same problem over and over again and never get to the end.

On 12.10.2025 at 15:50, olcott wrote:
    Please think this all the way through without making any guesses

    <Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    typedef int (*ptr)();
    int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  HHH(DD);
}

    What value should HHH(DD) correctly return?
    </Input to LLM systems>



I am getting to the end.
I needed feedback to make my words clearer and LLM
systems are giving me this feedback. They provided
more help in a few dozen messages than tens of
thousands of dialogues with humans. LLM systems
became 67-fold more powerful in the last year.

Their context window increased from 3000 words
to 200,000 words: basically, how much of the
conversation they can keep in their head
at the same time. Last year ChatGPT acted like
it had Alzheimer's when I exceeded its 3000
word limit.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Sun Oct 12 11:04:28 2025
    From Newsgroup: comp.theory

    On 10/12/2025 11:00 AM, olcott wrote:
    On 10/12/2025 10:53 AM, Bonita Montero wrote:
    Sorry, that's silly. You spend half your life discussing the
    same problem over and over again and never get to the end.

On 12.10.2025 at 15:50, olcott wrote:
    Please think this all the way through without making any guesses

    <Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    typedef int (*ptr)();
    int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  HHH(DD);
}

    What value should HHH(DD) correctly return?
    </Input to LLM systems>



I am getting to the end.
I needed feedback to make my words clearer and LLM
systems are giving me this feedback. They provided
more help in a few dozen messages than tens of
thousands of dialogues with humans. LLM systems
became 67-fold more powerful in the last year.

Their context window increased from 3000 words
to 200,000 words: basically, how much of the
conversation they can keep in their head
at the same time. Last year ChatGPT acted like
it had Alzheimer's when I exceeded its 3000
word limit.


    Also very important is that there is no chance of
    AI hallucination when they are only reasoning
    within a set of premises. Some systems must be told:

    Please think this all the way through without making any guesses
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From joes@noreply@example.org to comp.theory on Sun Oct 12 16:05:01 2025
    From Newsgroup: comp.theory

On Sun, 12 Oct 2025 10:47:43 -0500, olcott wrote:
    On 10/12/2025 9:19 AM, dbush wrote:
    On 10/12/2025 9:50 AM, olcott wrote:

Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    These conditions make HHH not a halt decider because they are
    incompatible with the requirements:

    It is perfectly compatible with those requirements except in the case
    where the input calls its own simulating halt decider.

    Yes, it is not compatible with the requirements in that case.
    --
On Sat, 20 Jul 2024 12:35:31 +0000, WM wrote in sci.math:
    It is not guaranteed that n+1 exists for every n.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Sun Oct 12 11:13:32 2025
    From Newsgroup: comp.theory

    On 10/12/2025 11:05 AM, joes wrote:
On Sun, 12 Oct 2025 10:47:43 -0500, olcott wrote:
    On 10/12/2025 9:19 AM, dbush wrote:
    On 10/12/2025 9:50 AM, olcott wrote:

Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    These conditions make HHH not a halt decider because they are
    incompatible with the requirements:

    It is perfectly compatible with those requirements except in the case
    where the input calls its own simulating halt decider.

    Yes, it is not compatible with the requirements in that case.


It also does get the correct answer within
its premises. HHH(DD) is correct to reject
its input within its premises.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mike Terry@news.dead.person.stones@darjeeling.plus.com to comp.theory on Sun Oct 12 18:06:10 2025
    From Newsgroup: comp.theory

    On 12/10/2025 16:53, Bonita Montero wrote:
    Sorry, that's silly. You spend half your life discussing the
    same problem over and over again and never get to the end.

    This gives PO a narrative he can hold on to which gives his life a meaning: he is the heroic
    world-saving unrecognised genius, constantly struggling against "the system" right up to his final
    breath! If he were to suddenly realise he was just a deluded dumbo who had wasted most of his life
    arguing over a succession of mistakes and misunderstandings on his part, and had never contributed a
    single idea of any academic value, would his life be better? I think not.

    Thankfully he has recently discovered chatbots who can give him the uncritical approval he craves,
    so there is next to no chance of that happening now. [Assuming they don't suddenly get better, to
    the point where they can genuinely analyse and criticise his claims in the way we do... Given how
    they currently work, I don't see that happening any time soon.]

    Would the lives of other posters here be better? That's a trickier question.


    Mike.


On 12.10.2025 at 15:50, olcott wrote:
    Please think this all the way through without making any guesses

    <Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    typedef int (*ptr)();
    int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  HHH(DD);
}

    What value should HHH(DD) correctly return?
    </Input to LLM systems>


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mr Flibble@flibble@red-dwarf.jmc.corp to comp.theory on Sun Oct 12 17:22:16 2025
    From Newsgroup: comp.theory

    On Sun, 12 Oct 2025 10:42:47 -0500, olcott wrote:

    On 10/12/2025 9:10 AM, Mr Flibble wrote:
    On Sun, 12 Oct 2025 08:50:05 -0500, olcott wrote:

    Please think this all the way through without making any guesses

    <Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    typedef int (*ptr)();
    int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  HHH(DD);
}

    What value should HHH(DD) correctly return?
    </Input to LLM systems>

    Change the input substituting the words "Termination Analyzer" with the
    words "Halting Decider" and try again.

    /Flibble



    "Partial halt decider" because I do not claim to solve the halting
    problem, only correctly determine the halt status of the counter-example input.

    The Halting Problem proofs you are attempting to refute are NOT predicated
    on partial halt deciders.

    /Flibble
    --
    meet ever shorter deadlines, known as "beat the clock"
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory on Sun Oct 12 11:27:48 2025
    From Newsgroup: comp.theory

    On 10/12/2025 6:50 AM, olcott wrote:
    Please think this all the way through without making any guesses

    <Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    typedef int (*ptr)();
    int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  ^^^^^^^^^^^^^^^^^^^^^^^^^^

  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

    HHH is now an integral part of DD. So, you can make it return anything
    you want, blah, blah, blah. DD is dependent on the result of HHH(DD).
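
To make that dependency concrete, here is a hypothetical stand-in:
give HHH any fixed verdict and DD's actual behavior contradicts it.

#include <stdio.h>

typedef int (*ptr)();

/* Hypothetical stub, not the thread's HHH: it returns a fixed verdict.
   Verdict 1 ("halts") makes DD loop forever; verdict 0 ("does not
   halt") makes DD return immediately. Either way DD does the opposite
   of whatever is reported, which is the point being made above. */
int HHH(ptr P)
{
  (void)P;
  return 0;   /* change to 1 to see the other branch */
}

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  printf("HHH said 0 (does not halt), yet DD() halted with %d\n", DD());
}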



int main()
{
  HHH(DD);
}

    What value should HHH(DD) correctly return?
    </Input to LLM systems>


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Andrew Church@church-usenet@autistici.org to comp.theory on Sun Oct 12 14:49:51 2025
    From Newsgroup: comp.theory

    On 10/12/25 12:04 PM, olcott wrote:
    Also very important is that there is no chance of
    AI hallucination when they are only reasoning
within a set of premises. Some systems must be told:

    Please think this all the way through without making any guesses

    I don't mean to be rude, but that is a completely insane assertion to
    me. There is always a non-zero chance for an LLM to roll a bad token
    during inference and spit out garbage. Sure, the top-p decoding strategy
    can help minimize such mistakes by pruning the token pool of the worst
    of the bad apples, but such models will never *ever* be foolproof. The
    price you pay for convincingly generating natural language is
    bulletproof reasoning.

    If you're interested in formalizing your ideas using cutting-edge tech,
    I encourage you to look at Lean 4. Once you provide a machine-checked
    proof in Lean 4 with no `sorry`/`axiom`/other cheats, come back. People
    might adopt a very different tone.
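
As a taste of what that looks like, here is a minimal machine-checked
diagonal argument in Lean 4 (an illustration of the suggestion, not a
formalization of anyone's claims in this thread; it needs no imports
and contains no `sorry` and no added axioms):

-- No enumeration f of functions Nat → Bool contains the function that
-- flips every diagonal value f n n: the core of the Cantor and halting
-- arguments.
theorem diagonal (f : Nat → (Nat → Bool)) :
    ∃ g : Nat → Bool, ∀ n, f n ≠ g := by
  refine ⟨fun n => !(f n n), fun n h => ?_⟩
  -- evaluate the alleged equality at the diagonal point n
  have h' : f n n = !(f n n) := congrFun h n
  cases hb : f n n <;> rw [hb] at h' <;> simp at h'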

    Best of luck, you will need it.
    --
    garrick "andrew" church
    they/he
    please address all complaints to /dev/null
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory on Sun Oct 12 16:11:17 2025
    From Newsgroup: comp.theory

    On 10/12/2025 11:47 AM, olcott wrote:
    On 10/12/2025 9:19 AM, dbush wrote:
    On 10/12/2025 9:50 AM, olcott wrote:
    Please think this all the way through without making any guesses

    <Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    These conditions make HHH not a halt decider because they are
    incompatible with the requirements:


    It is perfectly compatible with those requirements
    except in the case where the input calls its own
    simulating halt decider.

    In other words, not compatible. No "except".



    Given any algorithm (i.e. a fixed immutable sequence of instructions)
    X described as <X> with input Y:

    A solution to the halting problem is an algorithm H that computes the
    following mapping:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
    directly




    typedef int (*ptr)();
    int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  HHH(DD);
}

    What value should HHH(DD) correctly return?
    </Input to LLM systems>


    Error: assumes it's possible to design HHH to get a correct answer.


    HHH(DD) gets the correct answer within its set
    of assumptions / premises


    Which is incompatible with the requirements for a halt decider:


    Given any algorithm (i.e. a fixed immutable sequence of instructions) X described as <X> with input Y:

    A solution to the halting problem is an algorithm H that computes the following mapping:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Sun Oct 12 19:44:46 2025
    From Newsgroup: comp.theory

    On 10/12/2025 12:22 PM, Mr Flibble wrote:
    On Sun, 12 Oct 2025 10:42:47 -0500, olcott wrote:

    On 10/12/2025 9:10 AM, Mr Flibble wrote:
    On Sun, 12 Oct 2025 08:50:05 -0500, olcott wrote:

    Please think this all the way through without making any guesses

    <Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    typedef int (*ptr)();
    int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  HHH(DD);
}

    What value should HHH(DD) correctly return?
    </Input to LLM systems>

    Change the input substituting the words "Termination Analyzer" with the
    words "Halting Decider" and try again.

    /Flibble



    "Partial halt decider" because I do not claim to solve the halting
    problem, only correctly determine the halt status of the counter-example
    input.

    The Halting Problem proofs you are attempting to refute are NOT predicated
    on partial halt deciders.

    /Flibble




They are all anchored in the "proof" that
a specific partial halt decider cannot exist.
When I refute that, these proofs fail, yet
it may still be true that no universal
halt decider exists.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Sun Oct 12 20:20:45 2025
    From Newsgroup: comp.theory

    On 10/12/2025 3:11 PM, dbush wrote:
    On 10/12/2025 11:47 AM, olcott wrote:
    On 10/12/2025 9:19 AM, dbush wrote:
    On 10/12/2025 9:50 AM, olcott wrote:
    Please think this all the way through without making any guesses

    <Input to LLM systems>
    Simulating Termination Analyzer HHH correctly simulates its input
    until:
    (a) Detects a non-terminating behavior pattern:
    -a-a-a-a abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
    -a-a-a-a return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination >>>> -a-a-a-a then HHH is correct to abort this simulation and return 0.

    These conditions make HHH not a halt decider because they are
    incompatible with the requirements:


    It is perfectly compatible with those requirements
    except in the case where the input calls its own
    simulating halt decider.

In other words, not compatible. No "except".



    Given any algorithm (i.e. a fixed immutable sequence of instructions)
    X described as <X> with input Y:

    A solution to the halting problem is an algorithm H that computes the
    following mapping:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
    directly




    typedef int (*ptr)();
    int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  HHH(DD);
}

    What value should HHH(DD) correctly return?
    </Input to LLM systems>


    Error: assumes it's possible to design HHH to get a correct answer.


    HHH(DD) gets the correct answer within its set
    of assumptions / premises


    Which is incompatible with the requirements for a halt decider:



    Yes, but the requirements for a halt decider are inconsistent
    with reality.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory on Sun Oct 12 21:22:12 2025
    From Newsgroup: comp.theory

    On 10/12/2025 9:20 PM, olcott wrote:
    On 10/12/2025 3:11 PM, dbush wrote:
    On 10/12/2025 11:47 AM, olcott wrote:
    On 10/12/2025 9:19 AM, dbush wrote:
    On 10/12/2025 9:50 AM, olcott wrote:
    Please think this all the way through without making any guesses

    <Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    These conditions make HHH not a halt decider because they are
    incompatible with the requirements:


    It is perfectly compatible with those requirements
    except in the case where the input calls its own
    simulating halt decider.

In other words, not compatible. No "except".



    Given any algorithm (i.e. a fixed immutable sequence of
    instructions) X described as <X> with input Y:

    A solution to the halting problem is an algorithm H that computes
    the following mapping:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
    directly




    typedef int (*ptr)();
    int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  HHH(DD);
}

    What value should HHH(DD) correctly return?
    </Input to LLM systems>


    Error: assumes it's possible to design HHH to get a correct answer.


    HHH(DD) gets the correct answer within its set
    of assumptions / premises


    Which is incompatible with the requirements for a halt decider:



    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz that the following
    requirements cannot be satisfied:


    Given any algorithm (i.e. a fixed immutable sequence of instructions) X described as <X> with input Y:

    A solution to the halting problem is an algorithm H that computes the following mapping:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy on Sun Oct 12 20:36:11 2025
    From Newsgroup: comp.theory

    On 10/12/2025 1:49 PM, Andrew Church wrote:
    On 10/12/25 12:04 PM, olcott wrote:
    Also very important is that there is no chance of
    AI hallucination when they are only reasoning
within a set of premises. Some systems must be told:

    Please think this all the way through without making any guesses

    I don't mean to be rude, but that is a completely insane assertion to
    me. There is always a non-zero chance for an LLM to roll a bad token
    during inference and spit out garbage.

    If it is provided the entire basis for reasoning
then it cannot simply make stuff up about this basis.

    Sure, the top-p decoding strategy
    can help minimize such mistakes by pruning the token pool of the worst
    of the bad apples, but such models will never *ever* be foolproof. The
    price you pay for convincingly generating natural language is
    bulletproof reasoning.


    LLM systems have gotten 67-fold more powerful in that
    their context window increased from 3000 words to
    200,000 words in the last year.

    They seem to be very reliable at applying semantic
    logical entailment to a set of premises. This does
seem to totally prevent any hallucination.

It's like talking to a guy with a 160 IQ who knows
the subject of computer theory and practice like a PhD.

    If you're interested in formalizing your ideas using cutting-edge tech,
    I encourage you to look at Lean 4. Once you provide a machine-checked
    proof in Lean 4 with no `sorry`/`axiom`/other cheats, come back. People might adopt a very different tone.

    Best of luck, you will need it.


    https://leodemoura.github.io/files/CAV2024.pdf
LLMs can do the same thing with very carefully
    crafted English. My initial post provided an
    example of this.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy on Sun Oct 12 20:49:43 2025
    From Newsgroup: comp.theory

    On 10/12/2025 12:06 PM, Mike Terry wrote:
    On 12/10/2025 16:53, Bonita Montero wrote:
    Sorry, that's silly. You spend half your life discussing the
    same problem over and over again and never get to the end.

This gives PO a narrative he can hold on to which gives his life a
meaning: he is the heroic world-saving unrecognised genius, constantly
struggling against "the system" right up to his final breath! If he
were to suddenly realise he was just a deluded dumbo who had wasted most
of his life arguing over a succession of mistakes and misunderstandings
on his part, and had never contributed a single idea of any academic
value, would his life be better? I think not.

    Thankfully he has recently discovered chatbots who can give him the uncritical approval he craves,

    Clearly you have not kept up with the current state
    of the technology.

    LLM systems have gotten 67-fold more powerful in that
    their context window increased from 3000 words to
200,000 words in the last year.

    They seem to be very reliable at applying semantic
    logical entailment to a set of premises. This does
seem to totally prevent any hallucination.

It's like talking to a guy with a 160 IQ who knows
the subject of computer theory and practice like a PhD.

It went from barely understanding my most basic proof
to being able to accurately critique all of my work on how
I apply an extension of Kripke

    https://files.commons.gc.cuny.edu/wp-content/blogs.dir/1358/files/2019/04/Outline-of-a-Theory-of-Truth.pdf

to Gödel, Tarski, the Liar Paradox and the Halting
    problem in a single conversation. I now have Kripke
    as the anchor of my ideas.

so there is next to no chance of that
happening now. [Assuming they don't suddenly get better, to the point
where they can genuinely analyse and criticise his claims in the way we
do... Given how they currently work, I don't see that happening any
time soon.]

Would the lives of other posters here be better? That's a trickier question.


    Mike.


On 12.10.2025 at 15:50, olcott wrote:
    Please think this all the way through without making any guesses

    <Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    typedef int (*ptr)();
    int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  HHH(DD);
}

    What value should HHH(DD) correctly return?
    </Input to LLM systems>


    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Sun Oct 12 20:56:38 2025
    From Newsgroup: comp.theory

    On 10/12/2025 8:22 PM, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:
    On 10/12/2025 3:11 PM, dbush wrote:
    On 10/12/2025 11:47 AM, olcott wrote:
    On 10/12/2025 9:19 AM, dbush wrote:
    On 10/12/2025 9:50 AM, olcott wrote:
    Please think this all the way through without making any guesses

    <Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
    These conditions make HHH not a halt decider because they are
    incompatible with the requirements:


    It is perfectly compatible with those requirements
    except in the case where the input calls its own
    simulating halt decider.

In other words, not compatible. No "except".



    Given any algorithm (i.e. a fixed immutable sequence of
    instructions) X described as <X> with input Y:

    A solution to the halting problem is an algorithm H that computes
    the following mapping:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
    directly




    typedef int (*ptr)();
    int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  HHH(DD);
}

    What value should HHH(DD) correctly return?
    </Input to LLM systems>


    Error: assumes it's possible to design HHH to get a correct answer.


    HHH(DD) gets the correct answer within its set
    of assumptions / premises


    Which is incompatible with the requirements for a halt decider:



    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz that the following requirements cannot be satisfied:



Sure, and likewise no Turing machine can
give birth to a real live fifteen-story
office building. All logical impossibilities
are exactly equally logically impossible.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@rjh@cpax.org.uk to comp.theory on Mon Oct 13 03:06:26 2025
    From Newsgroup: comp.theory

    On 13/10/2025 02:22, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:

    <snip>

    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz


    He does. That's pretty much Game Over, I think.
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory on Sun Oct 12 22:15:26 2025
    From Newsgroup: comp.theory

    On 10/12/2025 9:56 PM, olcott wrote:
    On 10/12/2025 8:22 PM, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:
    On 10/12/2025 3:11 PM, dbush wrote:
    On 10/12/2025 11:47 AM, olcott wrote:
    On 10/12/2025 9:19 AM, dbush wrote:
    On 10/12/2025 9:50 AM, olcott wrote:
Please think this all the way through without making any guesses

<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
    These conditions make HHH not a halt decider because they are
    incompatible with the requirements:


    It is perfectly compatible with those requirements
    except in the case where the input calls its own
    simulating halt decider.

In other words, not compatible. No "except".



    Given any algorithm (i.e. a fixed immutable sequence of
    instructions) X described as <X> with input Y:

A solution to the halting problem is an algorithm H that computes
the following mapping:

(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly




    typedef int (*ptr)();
    int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  HHH(DD);
}

    What value should HHH(DD) correctly return?
    </Input to LLM systems>


Error: assumes it's possible to design HHH to get a correct answer.

    HHH(DD) gets the correct answer within its set
    of assumptions / premises


    Which is incompatible with the requirements for a halt decider:



    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz that the following
    requirements cannot be satisfied:



    Sure and likewise no Turing machine can
    give birth to a real live fifteen story
    office building. All logical impossibilities
    are exactly equally logical impossible.


    So we're in agreement: no algorithm exists that can tell us if any
    arbitrary algorithm X with input Y will halt when executed directly, as
proven by Turing and Linz.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory on Sun Oct 12 22:17:40 2025
    From Newsgroup: comp.theory

    On 10/12/2025 10:06 PM, Richard Heathfield wrote:
    On 13/10/2025 02:22, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:

    <snip>

    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz


    He does. That's pretty much Game Over, I think.


    And this isn't the first time.


    On 3/24/2025 10:07 PM, olcott wrote:
    A halt decider cannot exist

    On 4/28/2025 2:47 PM, olcott wrote:
    On 4/28/2025 11:54 AM, dbush wrote:
    And the halting function below is not a computable function:


    It is NEVER a computable function

    Given any algorithm (i.e. a fixed immutable sequence of
    instructions) X described as <X> with input Y:

    A solution to the halting problem is an algorithm H that computes
    the following mapping:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
    directly

    On 3/14/2025 1:19 PM, olcott wrote:
    When we define the HP as having H return a value
    corresponding to the halting behavior of input D
and input D actually does the opposite of whatever
    value that H returns, then we have boxed ourselves
    in to a problem having no solution.

    On 6/21/2024 1:22 PM, olcott wrote:
    the logical impossibility of specifying a halt decider H
    that correctly reports the halt status of input D that is
    defined to do the opposite of whatever value that H reports.
    Of course this is impossible.

    On 7/4/2023 12:57 AM, olcott wrote:
If you frame the problem in that a halt decider must divide up finite string pairs into those that halt when directly executed and those that
    do not, then no single program can do this.

    On 5/5/2025 5:39 PM, olcott wrote:
    On 5/5/2025 4:31 PM, dbush wrote:
    Strawman. The square root of a dead rabbit does not exist, but the
    question of whether any arbitrary algorithm X with input Y halts when
    executed directly has a correct answer in all cases.


    It has a correct answer that cannot ever be computed

    On 5/13/2025 5:16 PM, olcott wrote:
    There is no time that we are ever going to directly
    encode omniscience into a computer program. The
    screwy idea of a universal halt decider that is
    literally ALL KNOWING is just a screwy idea.

    On 10/12/2025 9:20 PM, olcott wrote:
    Yes, but the requirements for a halt decider are inconsistent
    with reality.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Sun Oct 12 21:17:58 2025
    From Newsgroup: comp.theory

    On 10/12/2025 9:06 PM, Richard Heathfield wrote:
    On 13/10/2025 02:22, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:

    <snip>

    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz


    He does. That's pretty much Game Over, I think.


I agreed with that 21 years ago, dummy.
You are not very good at paying attention, are you?
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Sun Oct 12 21:20:51 2025
    From Newsgroup: comp.theory

    On 10/12/2025 9:15 PM, dbush wrote:
    On 10/12/2025 9:56 PM, olcott wrote:
    On 10/12/2025 8:22 PM, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:
    On 10/12/2025 3:11 PM, dbush wrote:
    On 10/12/2025 11:47 AM, olcott wrote:
    On 10/12/2025 9:19 AM, dbush wrote:
    On 10/12/2025 9:50 AM, olcott wrote:
Please think this all the way through without making any guesses

<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
    These conditions make HHH not a halt decider because they are
    incompatible with the requirements:


    It is perfectly compatible with those requirements
    except in the case where the input calls its own
    simulating halt decider.

In other words, not compatible. No "except".



    Given any algorithm (i.e. a fixed immutable sequence of
    instructions) X described as <X> with input Y:

A solution to the halting problem is an algorithm H that computes
the following mapping:

(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly




    typedef int (*ptr)();
    int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  HHH(DD);
}

    What value should HHH(DD) correctly return?
    </Input to LLM systems>


Error: assumes it's possible to design HHH to get a correct answer.

    HHH(DD) gets the correct answer within its set
    of assumptions / premises


    Which is incompatible with the requirements for a halt decider:



    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz that the following
    requirements cannot be satisfied:



Sure, and likewise no Turing machine can
give birth to a real live fifteen-story
office building. All logical impossibilities
are exactly equally logically impossible.


    So we're in agreement: no algorithm exists that can tell us if any
arbitrary algorithm X with input Y will halt when executed directly, as proven by Turing and Linz.

    In exactly the same way that: "this sentence is not true"
    cannot be proven true or false. It is a bogus decision
    problem anchored in a fundamentally incorrect notion of truth.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Sun Oct 12 21:23:34 2025
    From Newsgroup: comp.theory

    On 10/12/2025 9:17 PM, dbush wrote:
    On 10/12/2025 10:06 PM, Richard Heathfield wrote:
    On 13/10/2025 02:22, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:

    <snip>

    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz


    He does. That's pretty much Game Over, I think.


    And this isn't the first time.


*The first time was back in 2004*
    Is it possible for you to actually pay attention?

    As I first published here back in 2004:
    On 6/23/2004 9:34 PM, Olcott wrote:

function LoopIfYouSayItHalts (bool YouSayItHalts):
  if YouSayItHalts () then
    while true do {}
  else
    return false;

    Does this program Halt?

    (Your (YES or NO) answer is to be considered
    translated to Boolean as the function's input
    parameter)

    Please ONLY PROVIDE CORRECT ANSWERS!
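
For anyone who wants to compile it, here is a direct C rendering of that
2004 pseudocode (the function names are kept from the original; the
oracle is passed in as a function pointer):

#include <stdbool.h>

/* Loops forever iff the supplied oracle answers true ("it halts"),
   so whichever answer the oracle gives is wrong about this call. */
bool LoopIfYouSayItHalts(bool (*YouSayItHalts)(void))
{
  if (YouSayItHalts())
    while (true) { }   /* answered YES: never halts */
  return false;        /* answered NO: halts        */
}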
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@rjh@cpax.org.uk to comp.theory on Mon Oct 13 03:25:54 2025
    From Newsgroup: comp.theory

    On 13/10/2025 03:17, dbush wrote:
    On 10/12/2025 10:06 PM, Richard Heathfield wrote:
    On 13/10/2025 02:22, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:

    <snip>

    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz


    He does. That's pretty much Game Over, I think.


    And this isn't the first time.

    In which case it is reasonable to conclude that, modulo
    mortality, there will never be a last time.
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory on Sun Oct 12 22:29:02 2025
    From Newsgroup: comp.theory

    On 10/12/2025 10:20 PM, olcott wrote:
    On 10/12/2025 9:15 PM, dbush wrote:
    On 10/12/2025 9:56 PM, olcott wrote:
    On 10/12/2025 8:22 PM, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:
    On 10/12/2025 3:11 PM, dbush wrote:
    On 10/12/2025 11:47 AM, olcott wrote:
    On 10/12/2025 9:19 AM, dbush wrote:
    On 10/12/2025 9:50 AM, olcott wrote:
Please think this all the way through without making any guesses

<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
These conditions make HHH not a halt decider because they are
incompatible with the requirements:


    It is perfectly compatible with those requirements
    except in the case where the input calls its own
    simulating halt decider.

In other words, not compatible. No "except".



    Given any algorithm (i.e. a fixed immutable sequence of
    instructions) X described as <X> with input Y:

    A solution to the halting problem is an algorithm H that
    computes the following mapping:

(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly




    typedef int (*ptr)();
    int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  HHH(DD);
}

    What value should HHH(DD) correctly return?
    </Input to LLM systems>


Error: assumes it's possible to design HHH to get a correct answer.

    HHH(DD) gets the correct answer within its set
    of assumptions / premises


    Which is incompatible with the requirements for a halt decider:



    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz that the following
    requirements cannot be satisfied:



Sure, and likewise no Turing machine can
give birth to a real live fifteen-story
office building. All logical impossibilities
are exactly equally logically impossible.


    So we're in agreement: no algorithm exists that can tell us if any
    arbitrary algorithm X with input Y will halt when executed directly,
    as proven by Turning and Linz.

    In exactly the same way that: "this sentence is not true"
    cannot be proven true or false. It is a bogus decision
    problem anchored in
    The false assumption that such an algorithm *does* exist.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory on Sun Oct 12 22:29:54 2025
    From Newsgroup: comp.theory

    On 10/12/2025 10:23 PM, olcott wrote:
    On 10/12/2025 9:17 PM, dbush wrote:
    On 10/12/2025 10:06 PM, Richard Heathfield wrote:
    On 13/10/2025 02:22, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:

    <snip>

    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz


    He does. That's pretty much Game Over, I think.


    And this isn't the first time.


*The first time was back in 2004*

You admitted that Turing was right in 2004? Because that's what we're talking about.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory on Sun Oct 12 22:40:31 2025
    From Newsgroup: comp.theory

    On 10/12/2025 10:34 PM, olcott wrote:
    On 10/12/2025 9:29 PM, dbush wrote:
    On 10/12/2025 10:20 PM, olcott wrote:
    On 10/12/2025 9:15 PM, dbush wrote:
    On 10/12/2025 9:56 PM, olcott wrote:
    On 10/12/2025 8:22 PM, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:
    On 10/12/2025 3:11 PM, dbush wrote:
    On 10/12/2025 11:47 AM, olcott wrote:
    On 10/12/2025 9:19 AM, dbush wrote:
    On 10/12/2025 9:50 AM, olcott wrote:
Please think this all the way through without making any guesses

<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
These conditions make HHH not a halt decider because they are
incompatible with the requirements:


    It is perfectly compatible with those requirements
    except in the case where the input calls its own
    simulating halt decider.

In other words, not compatible. No "except".



    Given any algorithm (i.e. a fixed immutable sequence of
    instructions) X described as <X> with input Y:

    A solution to the halting problem is an algorithm H that
    computes the following mapping:

(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly




    typedef int (*ptr)();
    int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  HHH(DD);
}

    What value should HHH(DD) correctly return?
    </Input to LLM systems>


Error: assumes it's possible to design HHH to get a correct answer.


    HHH(DD) gets the correct answer within its set
    of assumptions / premises


Which is incompatible with the requirements for a halt decider:


    Yes, but the requirements for a halt decider are inconsistent
    with reality.


In other words, you agree with Turing and Linz that the following
requirements cannot be satisfied:



    Sure and likewise no Turing machine can
    give birth to a real live fifteen story
    office building. All logical impossibilities
    are exactly equally logically impossible.


    So we're in agreement: no algorithm exists that can tell us if any
    arbitrary algorithm X with input Y will halt when executed directly,
    as proven by Turing and Linz.

    In exactly the same way that: "this sentence is not true"
    cannot be proven true or false. It is a bogus decision
    problem anchored in

    a fundamentally incorrect notion of truth.
    The false assumption that such an algorithm *does* exist.

    Can we correctly say that the color of your car is fifteen feet long?
    For the body of analytical truth coherence is the key and
    incoherence rules out truth.


    There is nothing incoherent about wanting to know if any arbitrary
    algorithm X with input Y will halt when executed directly.

    And until Turing's proof, no one knew whether or not an algorithm
    existed that can determine that in *all* possible cases.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Sun Oct 12 21:31:03 2025
    From Newsgroup: comp.theory

    On 10/12/2025 9:25 PM, Richard Heathfield wrote:
    On 13/10/2025 03:17, dbush wrote:
    On 10/12/2025 10:06 PM, Richard Heathfield wrote:
    On 13/10/2025 02:22, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:

    <snip>

    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz
    In exactly the same way that: "this sentence is not true"
    cannot be proven true or false. It is a bogus decision
    problem anchored in a fundamentally incorrect notion of truth.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Sun Oct 12 21:34:25 2025
    From Newsgroup: comp.theory

    On 10/12/2025 9:29 PM, dbush wrote:
    On 10/12/2025 10:20 PM, olcott wrote:
    On 10/12/2025 9:15 PM, dbush wrote:
    On 10/12/2025 9:56 PM, olcott wrote:
    On 10/12/2025 8:22 PM, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:
    On 10/12/2025 3:11 PM, dbush wrote:
    On 10/12/2025 11:47 AM, olcott wrote:
    On 10/12/2025 9:19 AM, dbush wrote:
    On 10/12/2025 9:50 AM, olcott wrote:
    Please think this all the way through without making any guesses

    <Input to LLM systems>
    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
        abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
        return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
        then HHH is correct to abort this simulation and return 0.

    These conditions make HHH not a halt decider because they are
    incompatible with the requirements:


    It is perfectly compatible with those requirements
    except in the case where the input calls its own
    simulating halt decider.

    In other words, not compatible. No "except".



    Given any algorithm (i.e. a fixed immutable sequence of
    instructions) X described as <X> with input Y:

    A solution to the halting problem is an algorithm H that
    computes the following mapping:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly




    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
      int Halt_Status = HHH(DD);
      if (Halt_Status)
        HERE: goto HERE;
      return Halt_Status;
    }

    int main()
    {
      HHH(DD);
    }

    What value should HHH(DD) correctly return?
    </Input to LLM systems>


    Error: assumes it's possible to design HHH to get a correct answer.


    HHH(DD) gets the correct answer within its set
    of assumptions / premises


    Which is incompatible with the requirements for a halt decider:



    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz that the following
    requirements cannot be satisfied:



    Sure and likewise no Turing machine can
    give birth to a real live fifteen story
    office building. All logical impossibilities
    are exactly equally logically impossible.


    So we're in agreement: no algorithm exists that can tell us if any
    arbitrary algorithm X with input Y will halt when executed directly,
    as proven by Turing and Linz.

    In exactly the same way that: "this sentence is not true"
    cannot be proven true or false. It is a bogus decision
    problem anchored in

    a fundamentally incorrect notion of truth.
    The false assumption that such an algorithm *does* exist.

    Can we correctly say that the color of your car is fifteen feet long?
    For the body of analytical truth coherence is the key and
    incoherence rules out truth.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Sun Oct 12 21:35:41 2025
    From Newsgroup: comp.theory

    On 10/12/2025 9:29 PM, dbush wrote:
    On 10/12/2025 10:23 PM, olcott wrote:
    On 10/12/2025 9:17 PM, dbush wrote:
    On 10/12/2025 10:06 PM, Richard Heathfield wrote:
    On 13/10/2025 02:22, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:

    <snip>

    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz


    He does. That's pretty much Game Over, I think.


    And this isn't the first time.


    *The first time was back in 2004*

    You admitted that Turing was right in 2004? Because that's what we're talking about.


    Go back and read and reread my 2004 words
    again and again until you understand exactly
    what they mean.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory on Sun Oct 12 22:38:57 2025
    From Newsgroup: comp.theory

    On 10/12/2025 10:35 PM, olcott wrote:
    On 10/12/2025 9:29 PM, dbush wrote:
    On 10/12/2025 10:23 PM, olcott wrote:
    On 10/12/2025 9:17 PM, dbush wrote:
    On 10/12/2025 10:06 PM, Richard Heathfield wrote:
    On 13/10/2025 02:22, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:

    <snip>

    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz


    He does. That's pretty much Game Over, I think.


    And this isn't the first time.


    *The first time was back in 2004*

    You admitted that Turing was right in 2004? Because that's what
    we're talking about.


    Go back and read and reread my 2004 words
    again and again until you understand exactly
    what they mean.


    So if you agreed that Turing was right back in 2004, what have you been
    doing the last 21 years?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Sun Oct 12 21:55:20 2025
    From Newsgroup: comp.theory

    On 10/12/2025 9:38 PM, dbush wrote:
    On 10/12/2025 10:35 PM, olcott wrote:
    On 10/12/2025 9:29 PM, dbush wrote:
    On 10/12/2025 10:23 PM, olcott wrote:
    On 10/12/2025 9:17 PM, dbush wrote:
    On 10/12/2025 10:06 PM, Richard Heathfield wrote:
    On 13/10/2025 02:22, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:

    <snip>

    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz


    He does. That's pretty much Game Over, I think.


    And this isn't the first time.


    *The first time was back in 2004*

    You admitted that Turing was right in 2004? Because that's what
    we're talking about.


    Go back and read and reread my 2004 words
    again and again until you understand exactly
    what they mean.


    So if you agreed that Turing was right back in 2004, what have you been doing the last 21 years?

    Read and reread the exact context of what
    I said, or is your LLM model not capable
    of doing this?

    You don't seem to be quite as bright as ChatGPT
    when exceeding 3000 words caused it to act like
    it had Alzheimer's. (That was a year ago).
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory on Sun Oct 12 22:57:28 2025
    From Newsgroup: comp.theory

    On 10/12/2025 10:55 PM, olcott wrote:
    On 10/12/2025 9:38 PM, dbush wrote:
    On 10/12/2025 10:35 PM, olcott wrote:
    On 10/12/2025 9:29 PM, dbush wrote:
    On 10/12/2025 10:23 PM, olcott wrote:
    On 10/12/2025 9:17 PM, dbush wrote:
    On 10/12/2025 10:06 PM, Richard Heathfield wrote:
    On 13/10/2025 02:22, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:

    <snip>

    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz


    He does. That's pretty much Game Over, I think.


    And this isn't the first time.


    *The first time was back in 2004*

    You admitted that Turing was right in 2004? Because that's what
    we're talking about.


    Go back and read and reread my 2004 words
    again and again until you understand exactly
    what they mean.


    So if you agreed that Turing was right back in 2004, what have you
    been doing the last 21 years?

    Read and reread the exact context of what
    I said.

    So now you're saying you *didn't* admit Turing was right in 2004?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Sun Oct 12 21:57:31 2025
    From Newsgroup: comp.theory

    On 10/12/2025 9:40 PM, dbush wrote:
    On 10/12/2025 10:34 PM, olcott wrote:
    On 10/12/2025 9:29 PM, dbush wrote:
    On 10/12/2025 10:20 PM, olcott wrote:
    On 10/12/2025 9:15 PM, dbush wrote:
    On 10/12/2025 9:56 PM, olcott wrote:
    On 10/12/2025 8:22 PM, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:
    On 10/12/2025 3:11 PM, dbush wrote:
    On 10/12/2025 11:47 AM, olcott wrote:
    On 10/12/2025 9:19 AM, dbush wrote:
    On 10/12/2025 9:50 AM, olcott wrote:
    Please think this all the way through without making any guesses

    <Input to LLM systems>
    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
        abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
        return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
        then HHH is correct to abort this simulation and return 0.

    These conditions make HHH not a halt decider because they are
    incompatible with the requirements:


    It is perfectly compatible with those requirements
    except in the case where the input calls its own
    simulating halt decider.

    In other words, not compatible. No "except".



    Given any algorithm (i.e. a fixed immutable sequence of
    instructions) X described as <X> with input Y:

    A solution to the halting problem is an algorithm H that
    computes the following mapping:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly




    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
      int Halt_Status = HHH(DD);
      if (Halt_Status)
        HERE: goto HERE;
      return Halt_Status;
    }

    int main()
    {
      HHH(DD);
    }

    What value should HHH(DD) correctly return?
    </Input to LLM systems>


    Error: assumes it's possible to design HHH to get a correct answer.


    HHH(DD) gets the correct answer within its set
    of assumptions / premises


    Which is incompatible with the requirements for a halt decider:


    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz that the following
    requirements cannot be satisfied:



    Sure and likewise no Turing machine can
    give birth to a real live fifteen story
    office building. All logical impossibilities
    are exactly equally logically impossible.


    So we're in agreement: no algorithm exists that can tell us if any
    arbitrary algorithm X with input Y will halt when executed
    directly, as proven by Turing and Linz.

    In exactly the same way that: "this sentence is not true"
    cannot be proven true or false. It is a bogus decision
    problem anchored in

    a fundamentally incorrect notion of truth.
    The false assumption that such an algorithm *does* exist.

    Can we correctly say that the color of your car is fifteen feet long?
    For the body of analytical truth coherence is the key and
    incoherence rules out truth.


    There is nothing incoherent about wanting to know if any arbitrary
    algorithm X with input Y will halt when executed directly.


    Tarski stupidly thought this exact same sort of thing.
    If a truth predicate exists then it could tell if the
    Liar Paradox is true or false. Since it cannot then
    there must be no truth predicate.

    And until Turing's proof, no one knew whether or not an algorithm
    existed that can determine that in *all* possible cases.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Sun Oct 12 21:58:58 2025
    From Newsgroup: comp.theory

    On 10/12/2025 9:57 PM, dbush wrote:
    On 10/12/2025 10:55 PM, olcott wrote:
    On 10/12/2025 9:38 PM, dbush wrote:
    On 10/12/2025 10:35 PM, olcott wrote:
    On 10/12/2025 9:29 PM, dbush wrote:
    On 10/12/2025 10:23 PM, olcott wrote:
    On 10/12/2025 9:17 PM, dbush wrote:
    On 10/12/2025 10:06 PM, Richard Heathfield wrote:
    On 13/10/2025 02:22, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:

    <snip>

    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz


    He does. That's pretty much Game Over, I think.


    And this isn't the first time.


    *The first time was back in 2004*

    You admitted that Turing was right in 2004? Because that's what
    we're talking about.


    Go back and read and reread my 2004 words
    again and again until you understand exactly
    what they mean.


    So if you agreed that Turing was right back in 2004, what have you
    been doing the last 21 years?

    Read and reread the exact context of what
    I said.

    So now you're saying you *didn't* admit Turing was right in 2004?

    Accurately paraphrase my exact words unless
    this is over your intellectual capacity.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory on Sun Oct 12 22:59:51 2025
    From Newsgroup: comp.theory

    On 10/12/2025 10:57 PM, olcott wrote:
    On 10/12/2025 9:40 PM, dbush wrote:
    On 10/12/2025 10:34 PM, olcott wrote:
    On 10/12/2025 9:29 PM, dbush wrote:
    On 10/12/2025 10:20 PM, olcott wrote:
    On 10/12/2025 9:15 PM, dbush wrote:
    On 10/12/2025 9:56 PM, olcott wrote:
    On 10/12/2025 8:22 PM, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:
    On 10/12/2025 3:11 PM, dbush wrote:
    On 10/12/2025 11:47 AM, olcott wrote:
    On 10/12/2025 9:19 AM, dbush wrote:
    On 10/12/2025 9:50 AM, olcott wrote:
    Please think this all the way through without making any guesses

    <Input to LLM systems>
    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
        abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
        return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
        then HHH is correct to abort this simulation and return 0.

    These conditions make HHH not a halt decider because they are
    incompatible with the requirements:


    It is perfectly compatible with those requirements
    except in the case where the input calls its own
    simulating halt decider.

    In other words, not compatible. No "except".



    Given any algorithm (i.e. a fixed immutable sequence of
    instructions) X described as <X> with input Y:

    A solution to the halting problem is an algorithm H that
    computes the following mapping:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly




    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
      int Halt_Status = HHH(DD);
      if (Halt_Status)
        HERE: goto HERE;
      return Halt_Status;
    }

    int main()
    {
      HHH(DD);
    }

    What value should HHH(DD) correctly return?
    </Input to LLM systems>


    Error: assumes it's possible to design HHH to get a correct answer.


    HHH(DD) gets the correct answer within its set
    of assumptions / premises


    Which is incompatible with the requirements for a halt decider:


    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz that the
    following requirements cannot be satisfied:



    Sure and likewise no Turing machine can
    give birth to a real live fifteen story
    office building. All logical impossibilities
    are exactly equally logically impossible.


    So we're in agreement: no algorithm exists that can tell us if any
    arbitrary algorithm X with input Y will halt when executed
    directly, as proven by Turing and Linz.

    In exactly the same way that: "this sentence is not true"
    cannot be proven true or false. It is a bogus decision
    problem anchored in

    a fundamentally incorrect notion of truth.
    The false assumption that such an algorithm *does* exist.

    Can we correctly say that the color of your car is fifteen feet long?
    For the body of analytical truth coherence is the key and
    incoherence rules out truth.


    There is nothing incoherent about wanting to know if any arbitrary
    algorithm X with input Y will halt when executed directly.


    Tarski stupidly thought this exact same sort of thing.
    If a truth predicate exists then it could tell if the
    Liar Paradox is true or false. Since it cannot then
    there must be no truth predicate.

    Correct. If you understood proof by contradiction you wouldn't be
    questioning that.


    And until Turing's proof, no one knew whether or not an algorithm
    existed that can determine that in *all* possible cases.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory on Sun Oct 12 23:01:28 2025
    From Newsgroup: comp.theory

    On 10/12/2025 10:58 PM, olcott wrote:
    On 10/12/2025 9:57 PM, dbush wrote:
    On 10/12/2025 10:55 PM, olcott wrote:
    On 10/12/2025 9:38 PM, dbush wrote:
    On 10/12/2025 10:35 PM, olcott wrote:
    On 10/12/2025 9:29 PM, dbush wrote:
    On 10/12/2025 10:23 PM, olcott wrote:
    On 10/12/2025 9:17 PM, dbush wrote:
    On 10/12/2025 10:06 PM, Richard Heathfield wrote:
    On 13/10/2025 02:22, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:

    <snip>

    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz


    He does. That's pretty much Game Over, I think.


    And this isn't the first time.


    *The first time was back in 2004*

    You admitted that Turing was right in 2004? Because that's what
    we're talking about.


    Go back and read and reread my 2004 words
    again and again until you understand exactly
    what they mean.


    So if you agreed that Turing was right back in 2004, what have you
    been doing the last 21 years?

    Read and reread the exact context of what
    I said.

    So now you're saying you *didn't* admit Turing was right in 2004?

    Accurately paraphrase my exact words unless
    this is over your intellectual capacity.


    So now you're saying you *did* admit Turing was right in 2004? If
    that's the case, you're admitting you wasted the last 21 years trying to overturn a proof whose conclusion you agreed with.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Sun Oct 12 22:43:16 2025
    From Newsgroup: comp.theory

    On 10/12/2025 9:59 PM, dbush wrote:
    On 10/12/2025 10:57 PM, olcott wrote:
    On 10/12/2025 9:40 PM, dbush wrote:
    On 10/12/2025 10:34 PM, olcott wrote:
    On 10/12/2025 9:29 PM, dbush wrote:
    On 10/12/2025 10:20 PM, olcott wrote:
    On 10/12/2025 9:15 PM, dbush wrote:
    On 10/12/2025 9:56 PM, olcott wrote:
    On 10/12/2025 8:22 PM, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:
    On 10/12/2025 3:11 PM, dbush wrote:
    On 10/12/2025 11:47 AM, olcott wrote:
    On 10/12/2025 9:19 AM, dbush wrote:
    On 10/12/2025 9:50 AM, olcott wrote:
    Please think this all the way through without making any guesses

    <Input to LLM systems>
    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
        abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
        return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
        then HHH is correct to abort this simulation and return 0.

    These conditions make HHH not a halt decider because they are
    incompatible with the requirements:


    It is perfectly compatible with those requirements
    except in the case where the input calls its own
    simulating halt decider.

    In other words, not compatible. No "except".



    Given any algorithm (i.e. a fixed immutable sequence of
    instructions) X described as <X> with input Y:

    A solution to the halting problem is an algorithm H that
    computes the following mapping:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly




    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
      int Halt_Status = HHH(DD);
      if (Halt_Status)
        HERE: goto HERE;
      return Halt_Status;
    }

    int main()
    {
      HHH(DD);
    }

    What value should HHH(DD) correctly return?
    </Input to LLM systems>


    Error: assumes it's possible to design HHH to get a correct answer.


    HHH(DD) gets the correct answer within its set
    of assumptions / premises


    Which is incompatible with the requirements for a halt decider:


    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz that the
    following requirements cannot be satisfied:



    Sure and likewise no Turing machine can
    give birth to a real live fifteen story
    office building. All logical impossibilities
    are exactly equally logically impossible.


    So we're in agreement: no algorithm exists that can tell us if
    any arbitrary algorithm X with input Y will halt when executed
    directly, as proven by Turing and Linz.

    In exactly the same way that: "this sentence is not true"
    cannot be proven true or false. It is a bogus decision
    problem anchored in

    a fundamentally incorrect notion of truth.
    The false assumption that such an algorithm *does* exist.

    Can we correctly say that the color of your car is fifteen feet long?
    For the body of analytical truth coherence is the key and
    incoherence rules out truth.


    There is nothing incoherent about wanting to know if any arbitrary
    algorithm X with input Y will halt when executed directly.


    Tarski stupidly thought this exact same sort of thing.
    If a truth predicate exists then it could tell if the
    Liar Paradox is true or false. Since it cannot then
    there must be no truth predicate.

    Correct. If you understood proof by contradiction you wouldn't be questioning that.


    It looks like ChatGPT 5.0 is the winner here.
    It understood that requiring HHH to report on
    the behavior of the direct execution of DD()
    is requiring a function to report on something
    outside of its domain.

    Do you understand all those words?

    Do you understand that requiring a
    Turing machine to compute the square
    root of a dead chicken is also requiring
    the TM to compute a function outside of
    its domain?


    And until Turing's proof, no one knew whether or not an algorithm
    existed that can determine that in *all* possible cases.



    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory on Sun Oct 12 23:49:10 2025
    From Newsgroup: comp.theory

    On 10/12/2025 11:43 PM, olcott wrote:
    On 10/12/2025 9:59 PM, dbush wrote:
    On 10/12/2025 10:57 PM, olcott wrote:
    On 10/12/2025 9:40 PM, dbush wrote:
    On 10/12/2025 10:34 PM, olcott wrote:
    On 10/12/2025 9:29 PM, dbush wrote:
    On 10/12/2025 10:20 PM, olcott wrote:
    On 10/12/2025 9:15 PM, dbush wrote:
    On 10/12/2025 9:56 PM, olcott wrote:
    On 10/12/2025 8:22 PM, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:
    On 10/12/2025 3:11 PM, dbush wrote:
    On 10/12/2025 11:47 AM, olcott wrote:
    On 10/12/2025 9:19 AM, dbush wrote:
    On 10/12/2025 9:50 AM, olcott wrote:
    Please think this all the way through without making any guesses

    <Input to LLM systems>
    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
        abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
        return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
        then HHH is correct to abort this simulation and return 0.

    These conditions make HHH not a halt decider because they are
    incompatible with the requirements:


    It is perfectly compatible with those requirements
    except in the case where the input calls its own
    simulating halt decider.

    In other words, not compatible. No "except".



    Given any algorithm (i.e. a fixed immutable sequence of
    instructions) X described as <X> with input Y:

    A solution to the halting problem is an algorithm H that
    computes the following mapping:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly




    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
      int Halt_Status = HHH(DD);
      if (Halt_Status)
        HERE: goto HERE;
      return Halt_Status;
    }

    int main()
    {
      HHH(DD);
    }

    What value should HHH(DD) correctly return?
    </Input to LLM systems>


    Error: assumes it's possible to design HHH to get a correct answer.


    HHH(DD) gets the correct answer within its set
    of assumptions / premises


    Which is incompatible with the requirements for a halt decider:


    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz that the
    following requirements cannot be satisfied:



    Sure and likewise no Turing machine can
    give birth to a real live fifteen story
    office building. All logical impossibilities
    are exactly equally logically impossible.


    So we're in agreement: no algorithm exists that can tell us if
    any arbitrary algorithm X with input Y will halt when executed
    directly, as proven by Turing and Linz.

    In exactly the same way that: "this sentence is not true"
    cannot be proven true or false. It is a bogus decision
    problem anchored in

    a fundamentally incorrect notion of truth.
    The false assumption that such an algorithm *does* exist.

    Can we correctly say that the color of your car is fifteen feet long?
    For the body of analytical truth coherence is the key and
    incoherence rules out truth.


    There is nothing incoherent about wanting to know if any arbitrary
    algorithm X with input Y will halt when executed directly.


    Tarski stupidly thought this exact same sort of thing.
    If a truth predicate exists then it could tell if the
    Liar Paradox is true or false. Since it cannot then
    there must be no truth predicate.

    Correct. If you understood proof by contradiction you wouldn't be
    questioning that.


    It looks like ChatGPT 5.0 is the winner here.
    It understood that requiring HHH to report on
    the behavior of the direct execution of DD()
    is requiring a function to report on something
    outside of its domain.

    False. It is proven true by the meaning of the words that a finite
    string description of a Turing machine specifies all semantic properties
    of the machine it describes, including whether that machine halts when executed directly.

    Therefore it is not outside the domain.


    Do you understand all those words?

    Do you understand that requiring a
    Turing machine to compute the square
    root of a dead chicken is also requiring
    the TM to compute a function outside of
    its domain?

    Repeat of previously refuted point:

    On 5/5/2025 4:31 PM, dbush wrote:
    Strawman. The square root of a dead rabbit does not exist, but the
    question of whether any arbitrary algorithm X with input Y halts when executed directly has a correct answer in all cases.

    This constitutes your admission that you don't understand proof by contradiction and admit that Tarski is correct.



    And until Turing's proof, no one knew whether or not an algorithm
    existed that can determine that in *all* possible cases.






    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Sun Oct 12 23:12:19 2025
    From Newsgroup: comp.theory

    On 10/12/2025 10:49 PM, dbush wrote:
    On 10/12/2025 11:43 PM, olcott wrote:
    On 10/12/2025 9:59 PM, dbush wrote:
    On 10/12/2025 10:57 PM, olcott wrote:
    On 10/12/2025 9:40 PM, dbush wrote:
    On 10/12/2025 10:34 PM, olcott wrote:
    On 10/12/2025 9:29 PM, dbush wrote:
    On 10/12/2025 10:20 PM, olcott wrote:
    On 10/12/2025 9:15 PM, dbush wrote:
    On 10/12/2025 9:56 PM, olcott wrote:
    On 10/12/2025 8:22 PM, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:
    On 10/12/2025 3:11 PM, dbush wrote:
    On 10/12/2025 11:47 AM, olcott wrote:
    On 10/12/2025 9:19 AM, dbush wrote:
    On 10/12/2025 9:50 AM, olcott wrote:
    Please think this all the way through without making any guesses

    <Input to LLM systems>
    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
        abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
        return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
        then HHH is correct to abort this simulation and return 0.

    These conditions make HHH not a halt decider because they are
    incompatible with the requirements:


    It is perfectly compatible with those requirements
    except in the case where the input calls its own
    simulating halt decider.

    In other words, not compatible. No "except".



    Given any algorithm (i.e. a fixed immutable sequence of
    instructions) X described as <X> with input Y:

    A solution to the halting problem is an algorithm H that
    computes the following mapping:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly




    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
      int Halt_Status = HHH(DD);
      if (Halt_Status)
        HERE: goto HERE;
      return Halt_Status;
    }

    int main()
    {
      HHH(DD);
    }

    What value should HHH(DD) correctly return?
    </Input to LLM systems>


    Error: assumes it's possible to design HHH to get a correct answer.


    HHH(DD) gets the correct answer within its set
    of assumptions / premises


    Which is incompatible with the requirements for a halt decider:



    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz that the
    following requirements cannot be satisfied:



    Sure and likewise no Turing machine can
    give birth to a real live fifteen story
    office building. All logical impossibilities
    are exactly equally logically impossible.


    So we're in agreement: no algorithm exists that can tell us if
    any arbitrary algorithm X with input Y will halt when executed
    directly, as proven by Turing and Linz.

    In exactly the same way that: "this sentence is not true"
    cannot be proven true or false. It is a bogus decision
    problem anchored in

    a fundamentally incorrect notion of truth.
    The false assumption that such an algorithm *does* exist.

    Can we correctly say that the color of your car is fifteen feet long?
    For the body of analytical truth coherence is the key and
    incoherence rules out truth.


    There is nothing incoherent about wanting to know if any arbitrary
    algorithm X with input Y will halt when executed directly.


    Tarski stupidly thought this exact same sort of thing.
    If a truth predicate exists then it could tell if the
    Liar Paradox is true or false. Since it cannot then
    there must be no truth predicate.

    Correct. If you understood proof by contradiction you wouldn't be
    questioning that.


    It looks like ChatGPT 5.0 is the winner here.
    It understood that requiring HHH to report on
    the behavior of the direct execution of DD()
    is requiring a function to report on something
    outside of its domain.

    False. It is proven true by the meaning of the words that a finite
    string description of a Turing machine specifies all semantic properties
    of the machine it describes, including whether that machine halts when executed directly.


    ChatGPT 5.0 was the first LLM to be able to prove
    that is counter-factual. It is 67-fold more powerful
    than one year ago. It's about like talking with a
    guy who has a 160 IQ and a PhD in computation
    theory and practice. It can really handle philosophy
    of computation quite well.

    Therefore it is not outside the domain.


    Do you understand all those words?

    Do you understand that requiring a
    Turing machine to compute the square
    root of a dead chicken is also requiring
    the TM to compute a function outside of
    its domain?

    Repeat of previously refuted point:

    On 5/5/2025 4:31 PM, dbush wrote:
    Strawman. The square root of a dead rabbit does not exist, but the question of whether any arbitrary algorithm X with input Y halts when executed directly has a correct answer in all cases.

    This constitutes your admission that you don't understand proof by contradiction and admit that Tarski is correct.



    And until Turing's proof, no one knew whether or not an algorithm
    existed that can determine that in *all* possible cases.






    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dbush@dbush.mobile@gmail.com to comp.theory on Mon Oct 13 00:22:19 2025
    From Newsgroup: comp.theory

    On 10/13/2025 12:12 AM, olcott wrote:
    On 10/12/2025 10:49 PM, dbush wrote:
    On 10/12/2025 11:43 PM, olcott wrote:
    On 10/12/2025 9:59 PM, dbush wrote:
    On 10/12/2025 10:57 PM, olcott wrote:
    On 10/12/2025 9:40 PM, dbush wrote:
    On 10/12/2025 10:34 PM, olcott wrote:
    On 10/12/2025 9:29 PM, dbush wrote:
    On 10/12/2025 10:20 PM, olcott wrote:
    On 10/12/2025 9:15 PM, dbush wrote:
    On 10/12/2025 9:56 PM, olcott wrote:
    On 10/12/2025 8:22 PM, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:
    On 10/12/2025 3:11 PM, dbush wrote:
    On 10/12/2025 11:47 AM, olcott wrote:
    On 10/12/2025 9:19 AM, dbush wrote:
    On 10/12/2025 9:50 AM, olcott wrote:
    Please think this all the way through without making any guesses

    <Input to LLM systems>
    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
        abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
        return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
        then HHH is correct to abort this simulation and return 0.

    These conditions make HHH not a halt decider because they are
    incompatible with the requirements:


    It is perfectly compatible with those requirements
    except in the case where the input calls its own
    simulating halt decider.

    In other words, not compatible. No "except".



    Given any algorithm (i.e. a fixed immutable sequence of
    instructions) X described as <X> with input Y:

    A solution to the halting problem is an algorithm H that
    computes the following mapping:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly




    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
      int Halt_Status = HHH(DD);
      if (Halt_Status)
        HERE: goto HERE;
      return Halt_Status;
    }

    int main()
    {
      HHH(DD);
    }

    What value should HHH(DD) correctly return?
    </Input to LLM systems>


    Error: assumes it's possible to design HHH to get a correct answer.


    HHH(DD) gets the correct answer within its set
    of assumptions / premises


    Which is incompatible with the requirements for a halt decider:



    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz that the
    following requirements cannot be satisfied:



    Sure and likewise no Turing machine can
    give birth to a real live fifteen story
    office building. All logical impossibilities
    are exactly equally logically impossible.


    So we're in agreement: no algorithm exists that can tell us if
    any arbitrary algorithm X with input Y will halt when executed
    directly, as proven by Turing and Linz.

    In exactly the same way that: "this sentence is not true"
    cannot be proven true or false. It is a bogus decision
    problem anchored in

    a fundamentally incorrect notion of truth.
    The false assumption that such an algorithm *does* exist.

    Can we correctly say that the color of your car is fifteen feet long?
    For the body of analytical truth coherence is the key and
    incoherence rules out truth.


    There is nothing incoherent about wanting to know if any arbitrary
    algorithm X with input Y will halt when executed directly.


    Tarski stupidly thought this exact same sort of thing.
    If a truth predicate exists then it could tell if the
    Liar Paradox is true or false. Since it cannot then
    there must be no truth predicate.

    Correct. If you understood proof by contradiction you wouldn't be
    questioning that.


    It looks like ChatGPT 5.0 is the winner here.
    It understood that requiring HHH to report on
    the behavior of the direct execution of DD()
    is requiring a function to report on something
    outside of its domain.

    False. It is proven true by the meaning of the words that a finite
    string description of a Turing machine specifies all semantic
    properties of the machine it describes, including whether that machine
    halts when executed directly.


    ChatGPT 5.0 was the first LLM to be able to prove
    that is counter-factual.

    Ah, so you don't believe in semantic tautologies?

    Therefore it is not outside the domain.


    Do you understand all those words?

    Do you understand that requiring a
    Turing machine to compute the square
    root of a dead chicken is also requiring
    the TM to compute a function outside of
    its domain?

    Repeat of previously refuted point:

    On 5/5/2025 4:31 PM, dbush wrote:
    Strawman. The square root of a dead rabbit does not exist, but the
    question of whether any arbitrary algorithm X with input Y halts when
    executed directly has a correct answer in all cases.

    This constitutes your admission that you don't understand proof by
    contradiction and admit that Tarski is correct.



    And until Turing's proof, no one knew whether or not an algorithm
    existed that can determine that in *all* possible cases.









    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@rjh@cpax.org.uk to comp.theory on Mon Oct 13 07:42:31 2025
    From Newsgroup: comp.theory

    On 13/10/2025 03:57, dbush wrote:
    On 10/12/2025 10:55 PM, olcott wrote:
    On 10/12/2025 9:38 PM, dbush wrote:
    On 10/12/2025 10:35 PM, olcott wrote:
    On 10/12/2025 9:29 PM, dbush wrote:
    On 10/12/2025 10:23 PM, olcott wrote:
    On 10/12/2025 9:17 PM, dbush wrote:
    On 10/12/2025 10:06 PM, Richard Heathfield wrote:
    On 13/10/2025 02:22, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:

    <snip>

    Yes, but the requirements for a halt decider are
    inconsistent
    with reality.


    In other words, you agree with Turing and Linz


    He does. That's pretty much Game Over, I think.


    And this isn't the first time.


    *The first time was back in 2004*

    You admitted that Turing was right in 2004? Because that's
    what we're talking about.


    Go back and read and reread my 2004 words
    again and again until you understand exactly
    what they mean.


    So if you agreed that Turing was right back in 2004, what
    have you been doing the last 21 years?

    Read and reread the exact context of what
    I said.

    So now you're saying you *didn't* admit Turing was right in 2004?

    It's both, I guess.
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory on Mon Oct 13 11:58:30 2025
    From Newsgroup: comp.theory

    On 2025-10-12 13:50:05 +0000, olcott said:

    Please think this all the way through without making any guesses

    <Input to LLM systems>
    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
    return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
    int Halt_Status = HHH(DD);
    if (Halt_Status)
    HERE: goto HERE;
    return Halt_Status;
    }

    int main()
    {
    HHH(DD);
    }

    What value should HHH(DD) correctly return?
    </Input to LLM systems>

    There is no need to prove that HHH(DD) returns 0. It is sufficient
    to run it and see what it returns. Just add to the above main an
    output that tells what HHH(DD) returned.
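
    A minimal sketch of that suggestion, assuming some concrete
    implementation of HHH is linked in (the thread never shows HHH's
    body, so the prototype below is only a placeholder):

    #include <stdio.h>

    typedef int (*ptr)();
    int HHH(ptr P);   /* assumed: provided elsewhere, body not shown in the thread */

    int DD()
    {
      int Halt_Status = HHH(DD);
      if (Halt_Status)
        HERE: goto HERE;
      return Halt_Status;
    }

    int main()
    {
      printf("HHH(DD) returned %d\n", HHH(DD));  /* report the actual value */
      return 0;
    }
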
    --
    Mikko

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Mon Oct 13 10:53:25 2025
    From Newsgroup: comp.theory

    On 10/13/2025 3:58 AM, Mikko wrote:
    On 2025-10-12 13:50:05 +0000, olcott said:

    Please think this all the way through without making any guesses

    <Input to LLM systems>
    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
        abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
        return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
        then HHH is correct to abort this simulation and return 0.

    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
      int Halt_Status = HHH(DD);
      if (Halt_Status)
        HERE: goto HERE;
      return Halt_Status;
    }

    int main()
    {
      HHH(DD);
    }

    What value should HHH(DD) correctly return?
    </Input to LLM systems>

    There is no need to prove that HHH(DD) returns 0. It is sufficient
    to run it and see what it returns. Just add to the above main an
    output that tells what HHH(DD) returned.


    What value should HHH(DD) correctly return? (within its premises)
    This is not at all the same thing as what value does HHH(DD) return.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.lang.c++,comp.lang.c,comp.ai.philosophy on Mon Oct 13 11:26:49 2025
    From Newsgroup: comp.theory

    On 10/12/2025 11:22 PM, dbush wrote:
    On 10/13/2025 12:12 AM, olcott wrote:
    On 10/12/2025 10:49 PM, dbush wrote:
    On 10/12/2025 11:43 PM, olcott wrote:
    On 10/12/2025 9:59 PM, dbush wrote:
    On 10/12/2025 10:57 PM, olcott wrote:
    On 10/12/2025 9:40 PM, dbush wrote:
    On 10/12/2025 10:34 PM, olcott wrote:
    On 10/12/2025 9:29 PM, dbush wrote:
    On 10/12/2025 10:20 PM, olcott wrote:
    On 10/12/2025 9:15 PM, dbush wrote:
    On 10/12/2025 9:56 PM, olcott wrote:
    On 10/12/2025 8:22 PM, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:
    On 10/12/2025 3:11 PM, dbush wrote:
    On 10/12/2025 11:47 AM, olcott wrote:
    On 10/12/2025 9:19 AM, dbush wrote:
    On 10/12/2025 9:50 AM, olcott wrote:
    Please think this all the way through without making any guesses

    <Input to LLM systems>
    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
        abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
        return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
        then HHH is correct to abort this simulation and return 0.

    These conditions make HHH not a halt decider because they are
    incompatible with the requirements:

    It is perfectly compatible with those requirements
    except in the case where the input calls its own
    simulating halt decider.

    In other words, not compatible. No "except".



    Given any algorithm (i.e. a fixed immutable sequence of
    instructions) X described as <X> with input Y:

    A solution to the halting problem is an algorithm H that
    computes the following mapping:

    (<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
    (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly




    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
      int Halt_Status = HHH(DD);
      if (Halt_Status)
        HERE: goto HERE;
      return Halt_Status;
    }

    int main()
    {
      HHH(DD);
    }

    What value should HHH(DD) correctly return?
    </Input to LLM systems>


    Error: assumes it's possible to design HHH to get a correct answer.


    HHH(DD) gets the correct answer within its set
    of assumptions / premises


    Which is incompatible with the requirements for a halt decider:



    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz that the
    following requirements cannot be satisfied:



    Sure and likewise no Turing machine can
    give birth to a real live fifteen story
    office building. All logical impossibilities
    are exactly equally logically impossible.


    So we're in agreement: no algorithm exists that can tell us
    if any arbitrary algorithm X with input Y will halt when
    executed directly, as proven by Turing and Linz.

    In exactly the same way that: "this sentence is not true"
    cannot be proven true or false. It is a bogus decision
    problem anchored in

    a fundamentally incorrect notion of truth.
    The false assumption that such an algorithm *does* exist.

    Can we correctly say that the color of your car is fifteen feet long?
    For the body of analytical truth coherence is the key and
    incoherence rules out truth.


    There is nothing incoherent about wanting to know if any
    arbitrary algorithm X with input Y will halt when executed directly.

    Tarski stupidly thought this exact same sort of thing.
    If a truth predicate exists then it could tell if the
    Liar Paradox is true or false. Since it cannot then
    there must be no truth predicate.

    Correct. If you understood proof by contradiction you wouldn't be
    questioning that.


    It looks like ChatGPT 5.0 is the winner here.
    It understood that requiring HHH to report on
    the behavior of the direct execution of DD()
    is requiring a function to report on something
    outside of its domain.

    False. It is proven true by the meaning of the words that a finite
    string description of a Turing machine specifies all semantic
    properties of the machine it describes, including whether that
    machine halts when executed directly.


    ChatGPT 5.0 was the first LLM to be able to prove
    that is counter-factual.

    Ah, so you don't believe in semantic tautologies?


    *They are the foundation of this whole system*
    Any system of reasoning that begins with a consistent
    system of stipulated truths and only applies the truth
    preserving operation of semantic logical entailment to
    this finite set of basic facts inherently derives a
    truth predicate that works consistently and correctly
    for this entire body of knowledge that can be expressed
    in language.

    The above system is explained in depth to Claude AI: https://claude.ai/share/d371aaa1-63fe-4ebb-87bf-db8cf152927f
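
    A minimal sketch of the kind of system described above (the names
    and facts here are illustrative assumptions, not olcott's actual
    implementation): stipulated truths plus one truth-preserving
    inference rule, closed to a fixpoint, with the truth predicate
    defined as membership in that closure.

    #include <stdio.h>
    #include <string.h>

    #define MAX_FACTS 32

    /* stipulated basic facts */
    static const char *facts[MAX_FACTS] = { "socrates_is_a_man" };
    static int fact_count = 1;

    /* truth-preserving rules: antecedent entails consequent */
    struct rule { const char *antecedent, *consequent; };
    static const struct rule rules[] = {
        { "socrates_is_a_man", "socrates_is_mortal" },
    };

    static int known(const char *s)
    {
        for (int i = 0; i < fact_count; i++)
            if (strcmp(facts[i], s) == 0)
                return 1;
        return 0;
    }

    /* forward-chain until nothing new is derivable (the entailment closure) */
    static void derive(void)
    {
        int changed = 1;
        while (changed) {
            changed = 0;
            for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
                if (known(rules[i].antecedent) && !known(rules[i].consequent)
                    && fact_count < MAX_FACTS) {
                    facts[fact_count++] = rules[i].consequent;
                    changed = 1;
                }
        }
    }

    /* the truth predicate for this closed body of knowledge */
    static int True(const char *s) { return known(s); }

    int main(void)
    {
        derive();
        printf("True(socrates_is_mortal) = %d\n", True("socrates_is_mortal"));
        /* a Liar-style sentence is never derived, so it is not true here */
        printf("True(this_sentence_is_not_true) = %d\n",
               True("this_sentence_is_not_true"));
        return 0;
    }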



    LLM systems are 67-fold more powerful than they were
    a year ago because their context window increased from
    3,000 words to 200,000 words. This is how much stuff
    they can simultaneously keep "in their head".

    It is also very valuable to know that these systems are
    extremely reliable when their reasoning is limited to
    semantic entailment for a well defined set of premises.
    In this case AI hallucination cannot possibly occur.

    https://chatgpt.com/share/68ec6e96-7eb8-8011-90c7-86248034d475

    It has verified the details of the reasoning that proves
    the behavior of the directly executed DD() is outside of
    the domain of the function computed by HHH(DD). It also
    verified that HHH(DD) is correct to reject its input and
    provided all of the reasoning proving that this is correct.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory on Mon Oct 13 12:20:13 2025
    From Newsgroup: comp.theory

    On 10/13/2025 8:53 AM, olcott wrote:
    On 10/13/2025 3:58 AM, Mikko wrote:
    On 2025-10-12 13:50:05 +0000, olcott said:

    Please think this all the way through without making any guesses

    <Input to LLM systems>
    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
        abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
        return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
        then HHH is correct to abort this simulation and return 0.

    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
      int Halt_Status = HHH(DD);
      if (Halt_Status)
        HERE: goto HERE;
      return Halt_Status;
    }

    int main()
    {
      HHH(DD);
    }

    What value should HHH(DD) correctly return?
    </Input to LLM systems>

    There is no need to prove that HHH(DD) returns 0. It is sufficient
    to run it at and see what it returns. Just add to the above main an
    output that tells what HHH(DD) returned.


    What value should HHH(DD) correctly return? (within its premises)
    This is not at all the same thing as what value does HHH(DD) return.


    HHH can return anything it wants to. You infected DD with it; they are
    now one. Now DD is dependent on HHH. Show us the pseudo code for HHH.
    Can HHH be replaced with a function that randomly returns 1 or 0? It
    seems so. You are not really simulating anything.
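
    A sketch of the replacement being described here, with a
    hypothetical HHH that flips a coin instead of simulating anything
    (nothing about the return value depends on DD):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    typedef int (*ptr)();

    /* hypothetical stand-in: no simulation at all, just a coin flip */
    int HHH(ptr P)
    {
      (void)P;            /* the input is never examined */
      return rand() % 2;  /* randomly 0 or 1 */
    }

    int DD()
    {
      int Halt_Status = HHH(DD);
      if (Halt_Status)
        HERE: goto HERE;
      return Halt_Status;
    }

    int main()
    {
      srand((unsigned)time(NULL));
      printf("HHH(DD) = %d\n", HHH(DD));
      return 0;
    }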
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory on Mon Oct 13 12:22:44 2025
    From Newsgroup: comp.theory

    On 10/12/2025 7:31 PM, olcott wrote:
    On 10/12/2025 9:25 PM, Richard Heathfield wrote:
    On 13/10/2025 03:17, dbush wrote:
    On 10/12/2025 10:06 PM, Richard Heathfield wrote:
    On 13/10/2025 02:22, dbush wrote:
    On 10/12/2025 9:20 PM, olcott wrote:

    <snip>

    Yes, but the requirements for a halt decider are inconsistent
    with reality.


    In other words, you agree with Turing and Linz
    In exactly the same way that: "this sentence is not true"
    cannot be proven true or false. It is a bogus decision
    problem anchored in a fundamentally incorrect notion of truth.


    Oh I forgot. You are the truth. You are the one, true god. Aliens around
    the universe worship at your dirty feet.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory on Tue Oct 14 12:33:47 2025
    From Newsgroup: comp.theory

    On 2025-10-13 15:53:25 +0000, olcott said:

    On 10/13/2025 3:58 AM, Mikko wrote:
    On 2025-10-12 13:50:05 +0000, olcott said:

    Please think this all the way through without making any guesses

    <Input to LLM systems>
    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
        abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
        return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
        then HHH is correct to abort this simulation and return 0.

    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
      int Halt_Status = HHH(DD);
      if (Halt_Status)
        HERE: goto HERE;
      return Halt_Status;
    }

    int main()
    {
      HHH(DD);
    }

    What value should HHH(DD) correctly return?
    </Input to LLM systems>

    There is no need to prove that HHH(DD) returns 0. It is sufficient
    to run it and see what it returns. Just add to the above main an
    output that tells what HHH(DD) returned.

    What value should HHH(DD) correctly return? (within its premises)
    This is not at all the same thing as what value does HHH(DD) return.

    The behaviour of HHH is already fully determined when DD is presented
    to it, so at that time it is too late to ask the question. But the
    answer is that whatever value HHH(DD) does not return is the correct
    value to return.
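
    One way to see that is to pin down the two possible fixed answers
    directly; the deciders in the sketch below are hypothetical
    stand-ins, each committed to one verdict:

    typedef int (*ptr)();

    int HHH_says_0(ptr P) { (void)P; return 0; }  /* always "non-halting" */
    int HHH_says_1(ptr P) { (void)P; return 1; }  /* always "halting"     */

    /* DD built against HHH_says_0: told 0, it skips the loop and
       halts, so 0 was the wrong answer. */
    int DD0()
    {
      int Halt_Status = HHH_says_0(DD0);
      if (Halt_Status)
        HERE0: goto HERE0;
      return Halt_Status;
    }

    /* DD built against HHH_says_1: told 1, it enters the infinite
       loop and never halts, so 1 was the wrong answer. */
    int DD1()
    {
      int Halt_Status = HHH_says_1(DD1);
      if (Halt_Status)
        HERE1: goto HERE1;
      return Halt_Status;
    }

    Either way DD does the opposite of the verdict it was handed, which
    is why whichever value HHH(DD) actually returns, the other value was
    the correct one.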
    --
    Mikko

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Tue Oct 14 11:17:22 2025
    From Newsgroup: comp.theory

    On 10/14/2025 4:33 AM, Mikko wrote:
    On 2025-10-13 15:53:25 +0000, olcott said:

    On 10/13/2025 3:58 AM, Mikko wrote:
    On 2025-10-12 13:50:05 +0000, olcott said:

    Please think this all the way through without making any guesses

    <Input to LLM systems>
    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
        abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
        return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
        then HHH is correct to abort this simulation and return 0.

    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
      int Halt_Status = HHH(DD);
      if (Halt_Status)
        HERE: goto HERE;
      return Halt_Status;
    }

    int main()
    {
      HHH(DD);
    }

    What value should HHH(DD) correctly return?
    </Input to LLM systems>

    There is no need to prove that HHH(DD) returns 0. It is sufficient
    to run it and see what it returns. Just add to the above main an
    output that tells what HHH(DD) returned.

    What value should HHH(DD) correctly return? (within its premises)
    This is not at all the same thing as what value does HHH(DD) return.

    The behaviour of HHH is already fully determined when DD is presented
    to it, so at that time it is too late to ask the question. But the
    answer is that whatever value HHH(DD) does not return is the correct
    value to return.


    <Input to LLM systems>
    Please think this all the way through without making any guesses.

    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
    return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    See also
    [HHH(DD)==0 and the directly executed DD()
    proven not in the domain of HHH]
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory on Wed Oct 15 11:43:43 2025
    From Newsgroup: comp.theory

    On 2025-10-14 16:17:22 +0000, olcott said:

    On 10/14/2025 4:33 AM, Mikko wrote:
    On 2025-10-13 15:53:25 +0000, olcott said:

    On 10/13/2025 3:58 AM, Mikko wrote:
    On 2025-10-12 13:50:05 +0000, olcott said:

    Please think this all the way through without making any guesses

    <Input to LLM systems>
    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
        abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
        return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
        then HHH is correct to abort this simulation and return 0.

    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
      int Halt_Status = HHH(DD);
      if (Halt_Status)
        HERE: goto HERE;
      return Halt_Status;
    }

    int main()
    {
      HHH(DD);
    }

    What value should HHH(DD) correctly return?
    </Input to LLM systems>

    There is no need to prove that HHH(DD) returns 0. It is sufficient
    to run it and see what it returns. Just add to the above main an
    output that tells what HHH(DD) returned.

    What value should HHH(DD) correctly return? (within its premises)
    This is not at all the same thing as what value does HHH(DD) return.

    The behaviour of HHH is already fully determined when DD is presented
    to it, so at that time it is too late to ask the question. But the
    answer is that whatever value HHH(DD) does not return is the correct
    value to return.

    <Input to LLM systems>
    Please think this all the way through without making any guesses.

    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
    return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    See also
    [HHH(DD)==0 and the directly executed DD()
    proven not in the domain of HHH]

    Reminds me of Asimov's "Liar!".
    --
    Mikko

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Oct 15 18:36:03 2025
    From Newsgroup: comp.theory

    On 10/15/2025 3:43 AM, Mikko wrote:
    On 2025-10-14 16:17:22 +0000, olcott said:

    On 10/14/2025 4:33 AM, Mikko wrote:
    On 2025-10-13 15:53:25 +0000, olcott said:

    On 10/13/2025 3:58 AM, Mikko wrote:
    On 2025-10-12 13:50:05 +0000, olcott said:

    Please think this all the way through without making any guesses

    <Input to LLM systems>
    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
        abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
        return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
        then HHH is correct to abort this simulation and return 0.
    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
      int Halt_Status = HHH(DD);
      if (Halt_Status)
        HERE: goto HERE;
      return Halt_Status;
    }

    int main()
    {
      HHH(DD);
    }

    What value should HHH(DD) correctly return?
    </Input to LLM systems>

    There is no need to prove that HHH(DD) returns 0. It is sufficient
    to run it and see what it returns. Just add to the above main an
    output that tells what HHH(DD) returned.

    What value should HHH(DD) correctly return? (within its premises)
    This is not at all the same thing as what value does HHH(DD) return.

    The behaviour of HHH is already fully determined when DD is presented
    to it, so at that time it is too late to ask the question. But the
    answer is that whatever value HHH(DD) does not return is the correct
    value to return.

    <Input to LLM systems>
    Please think this all the way through without making any guesses.

    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
        abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
        return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
        then HHH is correct to abort this simulation and return 0.

    See also
    [HHH(DD)==0 and the directly executed DD()
     proven not in the domain of HHH]

    Reminds me of Asimov's "Liar!".


    https://en.wikipedia.org/wiki/Liar!_(short_story)
    I think that there was an episode of Star Trek with the same plot.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory on Thu Oct 16 11:42:33 2025
    From Newsgroup: comp.theory

    On 2025-10-15 23:36:03 +0000, olcott said:

    On 10/15/2025 3:43 AM, Mikko wrote:
    On 2025-10-14 16:17:22 +0000, olcott said:

    On 10/14/2025 4:33 AM, Mikko wrote:
    On 2025-10-13 15:53:25 +0000, olcott said:

    On 10/13/2025 3:58 AM, Mikko wrote:
    On 2025-10-12 13:50:05 +0000, olcott said:

    Please think this all the way through without making any guesses
    <Input to LLM systems>
    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
        abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
        return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
        then HHH is correct to abort this simulation and return 0.
    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
      int Halt_Status = HHH(DD);
      if (Halt_Status)
        HERE: goto HERE;
      return Halt_Status;
    }

    int main()
    {
      HHH(DD);
    }

    What value should HHH(DD) correctly return?
    </Input to LLM systems>

    There is no need to prove that HHH(DD) returns 0. It is sufficient
    to run it and see what it returns. Just add to the above main an
    output that tells what HHH(DD) returned.

    What value should HHH(DD) correctly return? (within its premises)
    This is not at all the same thing as what value does HHH(DD) return.

    The behaviour of HHH is already fully determined when DD is presented
    to it, so at that time it is too late to ask the question. But the
    answer is that whatever value HHH(DD) does not return is the correct
    value to return.

    <Input to LLM systems>
    Please think this all the way through without making any guesses.

    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
        abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
        return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
        then HHH is correct to abort this simulation and return 0.

    See also
    [HHH(DD)==0 and the directly executed DD()
     proven not in the domain of HHH]

    Reminds me of Asimov's "Liar!".

    https://en.wikipedia.org/wiki/Liar!_(short_story)
    I think that there was an episode of Star Trek with the same plot.

    Better to read the original, but the Wikipedia article is enough to
    understand why the above reminds me of it.
    --
    Mikko

    --- Synchronet 3.21a-Linux NewsLink 1.2