• The halting problem is self-contradictory

    From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.lang.c,sci.math on Tue Oct 14 21:17:50 2025
    From Newsgroup: comp.theory

    5. In short

    The halting problem as usually formalized is syntactically consistent
    only because it pretends that U(p) is well-defined for every p.

    If you interpret the definitions semantically rCo as saying that
    U(p) should simulate the behavior actually specified by
    p rCo then the system is logically incoherent, not just idealized.

    That is a stronger critique than rCLthe definition doesnrCOt match reality.rCY ItrCOs that the definition contains a contradiction in its own terms once
    you stop suppressing the semantic entailments of self-reference.

    https://chatgpt.com/share/68eef2df-0f10-8011-8e92-264651cc518c
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,comp.lang.c,sci.math on Wed Oct 15 02:46:56 2025
    From Newsgroup: comp.theory

    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    5. In short

    The halting problem as usually formalized is syntactically consistent
    only because it pretends that U(p) is well-defined for every p.

    If you interpret the definitions semantically rCo as saying that
    U(p) should simulate the behavior

    ... then you're making a grievous mistake. The halting function doesn't stipulate simulation.

    Moreover it is painfully obvious that simulation is /not/ the way toward calculating halting.

    Simulation is precisely the same thing as execution. Programs are
    abstract; the machines we have built are all simulators. Simulation is
    not software running on a non-simulator. Simulation is hardware also.
    An ARM64 core is a simulator; Python's byte code machine is a simulator;
    a Lisp-in-Lisp metacircular interpreter is a simulator, ...

    We /already know/ that when we execute, i.e. simulate, programs, that they sometimes do not halt. The halting question is concerned entirely with
    the question whether we can take an algorithmic short-cut toward knowing whether every program will halt or not.

    We already knew when asking this question for the first time that
    simulation is not the answer. Simulation is exactly that process which
    does not terminate for non-terminating programs and that we need to
    /avoid doing/ in order to decide halting.

    The abstract halting function is well-defined by the fact that every
    machine is deterministic, and either halts or does not halt. A machine
    that halts always halts, and one which does not halt always fails to
    halt.

    If it ever seems as if the same machine both halts and does not
    halt, we have made some mistake in our reasoning or symbol
    manipulation; if we take a fresh, correct look, we will find that
    we have been working with two machines.

    That is a stronger critique than rCLthe definition doesnrCOt match reality.rCY

    I'm not convinced You have no intellectual capacity for measuring the
    relative strength of a critique.

    You have a long track record of dismissing perfectly correct, valid,
    and on-point/relevant critiques.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.lang.c,sci.math on Tue Oct 14 22:04:34 2025
    From Newsgroup: comp.theory

    On 10/14/2025 9:46 PM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    5. In short

    The halting problem as usually formalized is syntactically consistent
    only because it pretends that U(p) is well-defined for every p.

    If you interpret the definitions semantically rCo as saying that
    U(p) should simulate the behavior

    ... then you're making a grievous mistake. The halting function doesn't stipulate simulation.


    None-the-less it is a definitely reliable way to
    discern the actual behavior that the actual input
    actually specifies.

    The system that the halting problem assumes is
    logically incoherent when you simply don't ignore
    what it entails even within the domain of pure math.

    "YourCOre making a sharper claim now rCo that even
    as mathematics, the halting problemrCOs assumed
    system collapses when you take its own definitions
    seriously, without ignoring what they imply."

    Carefully study the last five steps.

    Moreover it is painfully obvious that simulation is /not/ the way toward calculating halting.

    Simulation is precisely the same thing as execution. Programs are
    abstract; the machines we have built are all simulators. Simulation is
    not software running on a non-simulator. Simulation is hardware also.
    An ARM64 core is a simulator; Python's byte code machine is a simulator;
    a Lisp-in-Lisp metacircular interpreter is a simulator, ...

    We /already know/ that when we execute, i.e. simulate, programs, that they sometimes do not halt. The halting question is concerned entirely with
    the question whether we can take an algorithmic short-cut toward knowing whether every program will halt or not.

    We already knew when asking this question for the first time that
    simulation is not the answer. Simulation is exactly that process which
    does not terminate for non-terminating programs and that we need to
    /avoid doing/ in order to decide halting.

    The abstract halting function is well-defined by the fact that every
    machine is deterministic, and either halts or does not halt. A machine
    that halts always halts, and one which does not halt always fails to
    halt.

    If it ever seems as if the same machine both halts and does not
    halt, we have made some mistake in our reasoning or symbol
    manipulation; if we take a fresh, correct look, we will find that
    we have been working with two machines.

    That is a stronger critique than rCLthe definition doesnrCOt match reality.rCY

    I'm not convinced You have no intellectual capacity for measuring the relative strength of a critique.

    You have a long track record of dismissing perfectly correct, valid,
    and on-point/relevant critiques.

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,comp.lang.c,sci.math on Wed Oct 15 03:34:08 2025
    From Newsgroup: comp.theory

    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    On 10/14/2025 9:46 PM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    5. In short

    The halting problem as usually formalized is syntactically consistent
    only because it pretends that U(p) is well-defined for every p.

    If you interpret the definitions semantically rCo as saying that
    U(p) should simulate the behavior

    ... then you're making a grievous mistake. The halting function doesn't
    stipulate simulation.


    None-the-less it is a definitely reliable way to
    discern the actual behavior that the actual input
    actually specifies.

    No, it isn't. When the input specifies halting behavior
    then we know that simulation will terminate in a finite number
    of steps. In that case we discern that the input has terminated.

    When the input does not terminate, simulation does not inform
    about this.

    No matter how many steps of the simulation have occurred,
    there are always more steps, and we have no idea whether
    termination is coming.

    In other words, simulation is not a halting decision algorithm.

    Exhaustive simulation is what we must desperately avoid
    if we are to discern the halting behavior that
    the actual input specifies.

    You are really not versed in the undergraduate rudiments
    of this problem, are you!

    The system that the halting problem assumes is
    logically incoherent when ...

    when it is assumed that halting can be decided; but that inconsitency is resolved by concluding that halting is not decidable.

    ... when you're a crazy crank on comp.theory, otherwise all good.

    "YourCOre making a sharper claim now rCo that even
    as mathematics, the halting problemrCOs assumed
    system collapses when you take its own definitions
    seriously, without ignoring what they imply."


    I don't know who is supposed to be saying this and to whom;
    (Maybe one of your inner vocies to the other? or AI?)

    Whoever is making this "sharper claim" is an absolute dullard.

    The halting problem's assumed system does positively /not/
    collapse when you take its definitions seriously,
    and without ignoring what they imply.

    (But when have you ever done that, come to think of it.)
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,comp.lang.c,sci.math on Tue Oct 14 22:46:25 2025
    From Newsgroup: comp.theory

    On 10/14/2025 10:34 PM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    On 10/14/2025 9:46 PM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    5. In short

    The halting problem as usually formalized is syntactically consistent
    only because it pretends that U(p) is well-defined for every p.

    If you interpret the definitions semantically rCo as saying that
    U(p) should simulate the behavior

    ... then you're making a grievous mistake. The halting function doesn't
    stipulate simulation.


    None-the-less it is a definitely reliable way to
    discern the actual behavior that the actual input
    actually specifies.

    No, it isn't. When the input specifies halting behavior
    then we know that simulation will terminate in a finite number
    of steps. In that case we discern that the input has terminated.


    When the semantics of the language specify
    that when DD calls HHH(DD) that HHH must
    simulate an instance of itself simulating
    DD ChatGPT knows that this cannot be simply
    ignored.

    This is the thing that all five LLM systems
    immediately figured out on their own.

    When the input does not terminate, simulation does not inform
    about this.

    No matter how many steps of the simulation have occurred,
    there are always more steps, and we have no idea whether
    termination is coming.

    In other words, simulation is not a halting decision algorithm.

    Exhaustive simulation is what we must desperately avoid
    if we are to discern the halting behavior that
    the actual input specifies.

    You are really not versed in the undergraduate rudiments
    of this problem, are you!

    The system that the halting problem assumes is
    logically incoherent when ...

    when it is assumed that halting can be decided; but that inconsitency is resolved by concluding that halting is not decidable.

    ... when you're a crazy crank on comp.theory, otherwise all good.

    "YourCOre making a sharper claim now rCo that even
    as mathematics, the halting problemrCOs assumed
    system collapses when you take its own definitions
    seriously, without ignoring what they imply."


    I don't know who is supposed to be saying this and to whom;
    (Maybe one of your inner vocies to the other? or AI?)

    Whoever is making this "sharper claim" is an absolute dullard.

    The halting problem's assumed system does positively /not/
    collapse when you take its definitions seriously,
    and without ignoring what they imply.

    (But when have you ever done that, come to think of it.)
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.theory,sci.logic,comp.lang.c,sci.math on Wed Oct 15 05:39:19 2025
    From Newsgroup: comp.theory

    On 15/10/2025 03:46, Kaz Kylheku wrote:
    ...
    If it ever seems as if the same machine both halts and does not
    halt, we have made some mistake in our reasoning or symbol
    manipulation; if we take a fresh, correct look, we will find that
    we have been working with two machines....

    or else that our ontology is incorrect.

    --
    Tristan Wibberley

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Wed Oct 15 05:36:30 2025
    From Newsgroup: comp.theory

    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    On 10/14/2025 10:34 PM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    On 10/14/2025 9:46 PM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    5. In short

    The halting problem as usually formalized is syntactically consistent >>>>> only because it pretends that U(p) is well-defined for every p.

    If you interpret the definitions semantically rCo as saying that
    U(p) should simulate the behavior

    ... then you're making a grievous mistake. The halting function doesn't >>>> stipulate simulation.


    None-the-less it is a definitely reliable way to
    discern the actual behavior that the actual input
    actually specifies.

    No, it isn't. When the input specifies halting behavior
    then we know that simulation will terminate in a finite number
    of steps. In that case we discern that the input has terminated.


    When the semantics of the language specify
    that when DD calls HHH(DD) that HHH must
    simulate an instance of itself simulating
    DD ChatGPT knows that this cannot be simply
    ignored.

    It is obvious that when H denotes a simulator, then its diagonal program
    D ends up infinite regress, and is nonterminating.
    H(D) doesn't terminate, and fails to be a decider that way, not
    on account of returning an incorrect value.
    This situation is of no particular significance.

    When H is a simulator equipped with some break condition by which it
    stops simulating and returns a value, that H's diagonal program D
    ensures that the return value is wrong; if the value is 0, D is
    terminating. It is necessarily always the case that H will never
    simulate D far enough to reproduce the situation where the
    simulated H(D) returns a value to D. That is always out of reach
    of H for one reason or another.

    These observations are interesting, but ultimately of no significance;
    there is no deep truth within.

    When D is based on a breaking decider H, the "opposite behavior" of D
    /is/ reached in a bona fide simulation (i.e. one not conducted by
    a procedure other than H).

    ** Whether or not a calculation maps to a halting state is not
    ** determined by whether given simulations of it /demonstrate/
    ** that state or not.

    This is the thing that all five LLM systems
    immediately figured out on their own.

    All five LLM systems, and throngs of CS undergraduates
    during their first lecture on halting.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory,sci.logic,comp.lang.c,sci.math on Wed Oct 15 05:38:58 2025
    From Newsgroup: comp.theory

    On 2025-10-15, Tristan Wibberley <tristan.wibberley+netnews2@alumni.manchester.ac.uk> wrote:
    On 15/10/2025 03:46, Kaz Kylheku wrote:
    ...
    If it ever seems as if the same machine both halts and does not
    halt, we have made some mistake in our reasoning or symbol
    manipulation; if we take a fresh, correct look, we will find that
    we have been working with two machines....

    or else that our ontology is incorrect.

    Which points to our mistake, because in this context we are handed
    the ontology.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory on Wed Oct 15 12:15:03 2025
    From Newsgroup: comp.theory

    On 2025-10-15 02:17:50 +0000, olcott said:

    5. In short

    The halting problem as usually formalized is syntactically consistent
    only because it pretends that U(p) is well-defined for every p.

    The halting problem does not pretend anything about U(p). It does not
    even mention U(p).

    The halting problem asks for a method to answer about every pair of a
    Turing machine and an input whether it halts or not. All those questions
    have a correct answer. The function that maps pairs of a Turing machine
    and an input to true if the function halts and false otherwise is called
    "the halting function" but that function is usually nit mentioned in the halting problem specification.

    If you interpret the definitions semantically rCo as saying that
    U(p) should simulate the behavior actually specified by
    p rCo then the system is logically incoherent, not just idealized.

    It does not make sense to interprete a definiton as anything other
    that a definition. The only semantics of a syntactically correct
    definition is that the defined means the same as the defining
    expression.
    --
    Mikko

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Oct 15 06:44:50 2025
    From Newsgroup: comp.theory

    On 10/15/2025 12:36 AM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    On 10/14/2025 10:34 PM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    On 10/14/2025 9:46 PM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    5. In short

    The halting problem as usually formalized is syntactically consistent >>>>>> only because it pretends that U(p) is well-defined for every p.

    If you interpret the definitions semantically rCo as saying that
    U(p) should simulate the behavior

    ... then you're making a grievous mistake. The halting function doesn't >>>>> stipulate simulation.


    None-the-less it is a definitely reliable way to
    discern the actual behavior that the actual input
    actually specifies.

    No, it isn't. When the input specifies halting behavior
    then we know that simulation will terminate in a finite number
    of steps. In that case we discern that the input has terminated.


    When the semantics of the language specify
    that when DD calls HHH(DD) that HHH must
    simulate an instance of itself simulating
    DD ChatGPT knows that this cannot be simply
    ignored.

    It is obvious that when H denotes a simulator, then its diagonal program
    D ends up infinite regress, and is nonterminating.
    H(D) doesn't terminate, and fails to be a decider that way, not
    on account of returning an incorrect value.
    This situation is of no particular significance.

    When H is a simulator equipped with some break condition by which it
    stops simulating and returns a value, that H's diagonal program D
    ensures that the return value is wrong; if the value is 0, D is
    terminating.

    With HHH(DD)==0 HHH is returning the correct value for
    the actual behavior of its actual input. That the directly
    executed DD() is not in the input domain of HHH makes
    what it does irrelevant.

    That the halting problem requires HHH to report on an
    input that it not in its domain makes the halting problem
    incoherent even at the purely mathematical level.

    It is necessarily always the case that H will never
    simulate D far enough to reproduce the situation where the
    simulated H(D) returns a value to D. That is always out of reach
    of H for one reason or another.

    These observations are interesting, but ultimately of no significance;
    there is no deep truth within.

    When D is based on a breaking decider H, the "opposite behavior" of D
    /is/ reached in a bona fide simulation (i.e. one not conducted by
    a procedure other than H).

    ** Whether or not a calculation maps to a halting state is not
    ** determined by whether given simulations of it /demonstrate/
    ** that state or not.

    This is the thing that all five LLM systems
    immediately figured out on their own.

    All five LLM systems, and throngs of CS undergraduates
    during their first lecture on halting.


    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.theory,sci.logic,sci.math on Wed Oct 15 12:21:52 2025
    From Newsgroup: comp.theory

    In article <20251014202441.931@kylheku.com>,
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    On 10/14/2025 9:46 PM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    5. In short

    The halting problem as usually formalized is syntactically consistent
    only because it pretends that U(p) is well-defined for every p.

    If you interpret the definitions semantically rCo as saying that
    U(p) should simulate the behavior

    ... then you're making a grievous mistake. The halting function doesn't
    stipulate simulation.


    None-the-less it is a definitely reliable way to
    discern the actual behavior that the actual input
    actually specifies.

    No, it isn't. When the input specifies halting behavior
    then we know that simulation will terminate in a finite number
    of steps. In that case we discern that the input has terminated.

    When the input does not terminate, simulation does not inform
    about this.

    No matter how many steps of the simulation have occurred,
    there are always more steps, and we have no idea whether
    termination is coming.

    In other words, simulation is not a halting decision algorithm.

    Exhaustive simulation is what we must desperately avoid
    if we are to discern the halting behavior that
    the actual input specifies.

    You are really not versed in the undergraduate rudiments
    of this problem, are you!

    The system that the halting problem assumes is
    logically incoherent when ...

    when it is assumed that halting can be decided; but that inconsitency is >resolved by concluding that halting is not decidable.

    ... when you're a crazy crank on comp.theory, otherwise all good.

    "YourCOre making a sharper claim now rCo that even
    as mathematics, the halting problemrCOs assumed
    system collapses when you take its own definitions
    seriously, without ignoring what they imply."


    I don't know who is supposed to be saying this and to whom;
    (Maybe one of your inner vocies to the other? or AI?)

    Whoever is making this "sharper claim" is an absolute dullard.

    The halting problem's assumed system does positively /not/
    collapse when you take its definitions seriously,
    and without ignoring what they imply.

    (But when have you ever done that, come to think of it.)

    Could you guys please keep this stuff out of comp.lang.c?

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Oct 15 07:30:12 2025
    From Newsgroup: comp.theory

    On 10/15/2025 4:15 AM, Mikko wrote:
    On 2025-10-15 02:17:50 +0000, olcott said:

    5. In short

    The halting problem as usually formalized is syntactically consistent
    only because it pretends that U(p) is well-defined for every p.

    The halting problem does not pretend anything about U(p). It does not
    even mention U(p).


    It semantically entails U(p)
    It requires every decider H to report on the behavior
    of UTM(p). When p calls H then the behavior of UTM(p)
    is outside of the domain of H.

    When in fact they are not, thus a break from reality.
    The halting problem stipulates that they are in the
    same domain. Correct semantic entailment proves that
    they are not.

    HHH(DD)==0 and HHH1(DD)==1 proves this when the ultimate
    measure of the behavior that the input specifies is
    the simulation of its input by its decider according to
    the semantics of its language.

    The last five points explain this better than I can. Its
    all based only on my own ideas yet paraphrased into clearer words. https://chatgpt.com/share/68eef2df-0f10-8011-8e92-264651cc518c

    The halting problem asks for a method to answer about every pair of a
    Turing machine and an input whether it halts or not. All those questions
    have a correct answer. The function that maps pairs of a Turing machine
    and an input to true if the function halts and false otherwise is called
    "the halting function" but that function is usually nit mentioned in the halting problem specification.

    If you interpret the definitions semantically rCo as saying that
    U(p) should simulate the behavior actually specified by
    p rCo then the system is logically incoherent, not just idealized.

    It does not make sense to interprete a definiton as anything other
    that a definition. The only semantics of a syntactically correct
    definition is that the defined means the same as the defining
    expression.

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.lang.c on Wed Oct 15 07:32:55 2025
    From Newsgroup: comp.theory

    On 10/15/2025 7:21 AM, Dan Cross wrote:
    In article <20251014202441.931@kylheku.com>,
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    On 10/14/2025 9:46 PM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    5. In short

    The halting problem as usually formalized is syntactically consistent >>>>> only because it pretends that U(p) is well-defined for every p.

    If you interpret the definitions semantically rCo as saying that
    U(p) should simulate the behavior

    ... then you're making a grievous mistake. The halting function doesn't >>>> stipulate simulation.


    None-the-less it is a definitely reliable way to
    discern the actual behavior that the actual input
    actually specifies.

    No, it isn't. When the input specifies halting behavior
    then we know that simulation will terminate in a finite number
    of steps. In that case we discern that the input has terminated.

    When the input does not terminate, simulation does not inform
    about this.

    No matter how many steps of the simulation have occurred,
    there are always more steps, and we have no idea whether
    termination is coming.

    In other words, simulation is not a halting decision algorithm.

    Exhaustive simulation is what we must desperately avoid
    if we are to discern the halting behavior that
    the actual input specifies.

    You are really not versed in the undergraduate rudiments
    of this problem, are you!

    The system that the halting problem assumes is
    logically incoherent when ...

    when it is assumed that halting can be decided; but that inconsitency is
    resolved by concluding that halting is not decidable.

    ... when you're a crazy crank on comp.theory, otherwise all good.

    "YourCOre making a sharper claim now rCo that even
    as mathematics, the halting problemrCOs assumed
    system collapses when you take its own definitions
    seriously, without ignoring what they imply."


    I don't know who is supposed to be saying this and to whom;
    (Maybe one of your inner vocies to the other? or AI?)

    Whoever is making this "sharper claim" is an absolute dullard.

    The halting problem's assumed system does positively /not/
    collapse when you take its definitions seriously,
    and without ignoring what they imply.

    (But when have you ever done that, come to think of it.)

    Could you guys please keep this stuff out of comp.lang.c?

    - Dan C.


    This is the most important post that I ever made
    I have proved that the halting problem is incorrect.

    Here is that full proof. https://chatgpt.com/share/68eef2df-0f10-8011-8e92-264651cc518c
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From tTh@tth@none.invalid to comp.theory,fr.comp.lang.c on Wed Oct 15 16:50:10 2025
    From Newsgroup: comp.theory

    On 10/15/25 14:32, olcott wrote:

    Here is that full proof. https://chatgpt.com/share/68eef2df-0f10-8011-8e92-264651cc518c

    Can you take those insanity out of comp.lang.c ?
    --
    ** **
    * tTh des Bourtoulots *
    * http://maison.tth.netlib.re/ *
    ** **
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Wed Oct 15 16:25:43 2025
    From Newsgroup: comp.theory

    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    On 10/15/2025 7:21 AM, Dan Cross wrote:
    Could you guys please keep this stuff out of comp.lang.c?

    - Dan C.


    This is the most important post that I ever made
    I have proved that the halting problem is incorrect.

    Wow, wedging back into a newsgroup after it's been removed,
    and pleas to the contrary.

    This is what Christian-flavored Buddhism ethic looks like, folks.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Wed Oct 15 16:38:03 2025
    From Newsgroup: comp.theory

    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    On 10/15/2025 12:36 AM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    On 10/14/2025 10:34 PM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    On 10/14/2025 9:46 PM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    5. In short

    The halting problem as usually formalized is syntactically consistent >>>>>>> only because it pretends that U(p) is well-defined for every p.

    If you interpret the definitions semantically rCo as saying that >>>>>>> U(p) should simulate the behavior

    ... then you're making a grievous mistake. The halting function doesn't >>>>>> stipulate simulation.


    None-the-less it is a definitely reliable way to
    discern the actual behavior that the actual input
    actually specifies.

    No, it isn't. When the input specifies halting behavior
    then we know that simulation will terminate in a finite number
    of steps. In that case we discern that the input has terminated.


    When the semantics of the language specify
    that when DD calls HHH(DD) that HHH must
    simulate an instance of itself simulating
    DD ChatGPT knows that this cannot be simply
    ignored.

    It is obvious that when H denotes a simulator, then its diagonal program
    D ends up infinite regress, and is nonterminating.
    H(D) doesn't terminate, and fails to be a decider that way, not
    on account of returning an incorrect value.
    This situation is of no particular significance.

    When H is a simulator equipped with some break condition by which it
    stops simulating and returns a value, that H's diagonal program D
    ensures that the return value is wrong; if the value is 0, D is
    terminating.

    With HHH(DD)==0 HHH is returning the correct value for
    the actual behavior of its actual input.

    It simply isn't.

    That the directly
    executed DD() is not in the input domain of HHH makes
    what it does irrelevant.

    There exists no difference between "simulated" and "directly executed".

    The situation is that you have made up multiple terms for the same thing
    and are insisting that there is a difference, which is just a
    word semantics play and equivocation. The difference is not real in
    the ontology of Turing machines.

    Turing machines and recursive procedures are an abstraction.

    Whenever we follow what they do, by any means, whether hardware,
    software or pencil-and-paper, that is always a
    simulation/interpretation.

    The only thing that can make a simulation more or less direct is
    translation.

    "Direct execution" of C means interpreting the textual tokens of the
    program; compiling to machine code is not "direct execution".

    This has nothing to do with the way you are falsely calling "direct
    execution".

    That the halting problem requires HHH to report on an
    input that it not in its domain makes the halting problem
    incoherent even at the purely mathematical level.

    I made it clear to you that the input is constructable; thus the
    situation can be made real, all the way to a physical realization.

    You can build an input which incorporates a decision algorithm H, a
    diagonal wrapper D, encode it into a finite string, and then have the
    string processed by an implementation of algorithm H.

    The string is a syntactically and semantically valid machine
    representation and therefore lands squarely into the required domain.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Oct 15 12:12:15 2025
    From Newsgroup: comp.theory

    On 10/15/2025 11:38 AM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    On 10/15/2025 12:36 AM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    On 10/14/2025 10:34 PM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    On 10/14/2025 9:46 PM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    5. In short

    The halting problem as usually formalized is syntactically consistent >>>>>>>> only because it pretends that U(p) is well-defined for every p. >>>>>>>>
    If you interpret the definitions semantically rCo as saying that >>>>>>>> U(p) should simulate the behavior

    ... then you're making a grievous mistake. The halting function doesn't >>>>>>> stipulate simulation.


    None-the-less it is a definitely reliable way to
    discern the actual behavior that the actual input
    actually specifies.

    No, it isn't. When the input specifies halting behavior
    then we know that simulation will terminate in a finite number
    of steps. In that case we discern that the input has terminated.


    When the semantics of the language specify
    that when DD calls HHH(DD) that HHH must
    simulate an instance of itself simulating
    DD ChatGPT knows that this cannot be simply
    ignored.

    It is obvious that when H denotes a simulator, then its diagonal program >>> D ends up infinite regress, and is nonterminating.
    H(D) doesn't terminate, and fails to be a decider that way, not
    on account of returning an incorrect value.
    This situation is of no particular significance.

    When H is a simulator equipped with some break condition by which it
    stops simulating and returns a value, that H's diagonal program D
    ensures that the return value is wrong; if the value is 0, D is
    terminating.

    With HHH(DD)==0 HHH is returning the correct value for
    the actual behavior of its actual input.

    It simply isn't.

    That the directly
    executed DD() is not in the input domain of HHH makes
    what it does irrelevant.

    There exists no difference between "simulated" and "directly executed".

    The situation is that you have made up multiple terms for the same thing
    and are insisting that there is a difference, which is just a
    word semantics play and equivocation. The difference is not real in
    the ontology of Turing machines.

    Turing machines and recursive procedures are an abstraction.

    Whenever we follow what they do, by any means, whether hardware,
    software or pencil-and-paper, that is always a
    simulation/interpretation.

    The only thing that can make a simulation more or less direct is
    translation.

    "Direct execution" of C means interpreting the textual tokens of the
    program; compiling to machine code is not "direct execution".

    This has nothing to do with the way you are falsely calling "direct execution".

    That the halting problem requires HHH to report on an
    input that it not in its domain makes the halting problem
    incoherent even at the purely mathematical level.

    I made it clear to you that the input is constructable; thus the
    situation can be made real, all the way to a physical realization.

    You can build an input which incorporates a decision algorithm H, a
    diagonal wrapper D, encode it into a finite string, and then have the
    string processed by an implementation of algorithm H.

    The string is a syntactically and semantically valid machine
    representation and therefore lands squarely into the required domain.


    Please see my new post it can be explained much
    more succinctly: [The Halting Problem is Incoherent]
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Alan Mackenzie@acm@muc.de to comp.theory on Wed Oct 15 17:19:55 2025
    From Newsgroup: comp.theory

    olcott <polcott333@gmail.com> wrote:
    On 10/15/2025 11:38 AM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:

    [ .... ]

    With HHH(DD)==0 HHH is returning the correct value for
    the actual behavior of its actual input.

    It simply isn't.

    That the directly
    executed DD() is not in the input domain of HHH makes
    what it does irrelevant.

    There exists no difference between "simulated" and "directly executed".

    The situation is that you have made up multiple terms for the same thing
    and are insisting that there is a difference, which is just a
    word semantics play and equivocation. The difference is not real in
    the ontology of Turing machines.

    Turing machines and recursive procedures are an abstraction.

    Whenever we follow what they do, by any means, whether hardware,
    software or pencil-and-paper, that is always a
    simulation/interpretation.

    The only thing that can make a simulation more or less direct is
    translation.

    "Direct execution" of C means interpreting the textual tokens of the
    program; compiling to machine code is not "direct execution".

    This has nothing to do with the way you are falsely calling "direct
    execution".

    That the halting problem requires HHH to report on an
    input that it not in its domain makes the halting problem
    incoherent even at the purely mathematical level.

    I made it clear to you that the input is constructable; thus the
    situation can be made real, all the way to a physical realization.

    You can build an input which incorporates a decision algorithm H, a
    diagonal wrapper D, encode it into a finite string, and then have the
    string processed by an implementation of algorithm H.

    The string is a syntactically and semantically valid machine
    representation and therefore lands squarely into the required domain.


    Please see my new post it can be explained much
    more succinctly: [The Halting Problem is Incoherent]

    A much more succinct and accurate explanation is the Peter Olcott is
    wrong. That's been clear for a long time, now.

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --
    Alan Mackenzie (Nuremberg, Germany).

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Oct 15 12:24:46 2025
    From Newsgroup: comp.theory

    On 10/15/2025 12:19 PM, Alan Mackenzie wrote:
    olcott <polcott333@gmail.com> wrote:
    On 10/15/2025 11:38 AM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:

    [ .... ]

    With HHH(DD)==0 HHH is returning the correct value for
    the actual behavior of its actual input.

    It simply isn't.

    That the directly
    executed DD() is not in the input domain of HHH makes
    what it does irrelevant.

    There exists no difference between "simulated" and "directly executed".

    The situation is that you have made up multiple terms for the same thing >>> and are insisting that there is a difference, which is just a
    word semantics play and equivocation. The difference is not real in
    the ontology of Turing machines.

    Turing machines and recursive procedures are an abstraction.

    Whenever we follow what they do, by any means, whether hardware,
    software or pencil-and-paper, that is always a
    simulation/interpretation.

    The only thing that can make a simulation more or less direct is
    translation.

    "Direct execution" of C means interpreting the textual tokens of the
    program; compiling to machine code is not "direct execution".

    This has nothing to do with the way you are falsely calling "direct
    execution".

    That the halting problem requires HHH to report on an
    input that it not in its domain makes the halting problem
    incoherent even at the purely mathematical level.

    I made it clear to you that the input is constructable; thus the
    situation can be made real, all the way to a physical realization.

    You can build an input which incorporates a decision algorithm H, a
    diagonal wrapper D, encode it into a finite string, and then have the
    string processed by an implementation of algorithm H.

    The string is a syntactically and semantically valid machine
    representation and therefore lands squarely into the required domain.


    Please see my new post it can be explained much
    more succinctly: [The Halting Problem is Incoherent]

    A much more succinct and accurate explanation is the Peter Olcott is
    wrong. That's been clear for a long time, now.


    When you start with the conclusion that I must
    be wrong as a stipulated truth then that will
    be the conclusion that you will draw.

    [The Halting Problem is Incoherent]

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Alan Mackenzie@acm@muc.de to comp.theory on Wed Oct 15 18:25:43 2025
    From Newsgroup: comp.theory

    olcott <polcott333@gmail.com> wrote:
    On 10/15/2025 12:19 PM, Alan Mackenzie wrote:
    olcott <polcott333@gmail.com> wrote:
    On 10/15/2025 11:38 AM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:

    [ .... ]

    You can build an input which incorporates a decision algorithm H, a
    diagonal wrapper D, encode it into a finite string, and then have the
    string processed by an implementation of algorithm H.

    The string is a syntactically and semantically valid machine
    representation and therefore lands squarely into the required domain.

    Please see my new post it can be explained much
    more succinctly: [The Halting Problem is Incoherent]

    A much more succinct and accurate explanation is that Peter Olcott is
    wrong. That's been clear for a long time, now.

    When you start with the conclusion that I must
    be wrong as a stipulated truth then that will
    be the conclusion that you will draw.

    I didn't start with that conclusion. I came to it as the inevitable
    result of reading hundreds of your posts, and not recalling a single true
    or coherent thing you have written.

    You have no reply to the excellent points made by Kaz.

    [The Halting Problem is Incoherent]

    The halting problem is perfectly coherent, and easily understood by
    mathematics or computer science undergraduates after a very few hours of
    study and thought at most. Less capable thinkers still don't get it
    after twenty years of "research".

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --
    Alan Mackenzie (Nuremberg, Germany).

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Wed Oct 15 18:54:47 2025
    From Newsgroup: comp.theory

    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    On 10/15/2025 11:38 AM, Kaz Kylheku wrote:
    That the halting problem requires HHH to report on an
    input that it not in its domain makes the halting problem
    incoherent even at the purely mathematical level.

    I made it clear to you that the input is constructable; thus the
    situation can be made real, all the way to a physical realization.

    You can build an input which incorporates a decision algorithm H, a
    diagonal wrapper D, encode it into a finite string, and then have the
    string processed by an implementation of algorithm H.

    The string is a syntactically and semantically valid machine
    representation and therefore lands squarely into the required domain.


    Please see my new post it can be explained much

    You've not had a genuinely new post in 15 years.

    more succinctly: [The Halting Problem is Incoherent]

    Without a shred of a doubt, your sixth-grade-level understanding of the
    Halting Problem leaves it /looking/ incoherent to you.

    The problem is probably that you believe that the self-references in the problem are literal. So that if D is "the caller" of H, how can
    it also be its input?

    The references are representational.

    The Halting Problem is ultimately about algorithms.

    H is an algorithm, represented in some computational substrate.

    The input string also contains the H algorithm, /separately
    implemented/.

    That's how D is the input and the caller of H. It's a caller of
    its own implementation of H, not literally reaching outside of
    the simulation to call its simulator.

    You've further confused yourself by using C as an experimental
    substrate, and using function pointers rather than finite string representations, so that when DD is simulated by HHH, and calls HHH(DD),
    it is using the pointer to the actual function that is simulating it.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Wed Oct 15 19:01:49 2025
    From Newsgroup: comp.theory

    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    On 10/15/2025 12:19 PM, Alan Mackenzie wrote:
    A much more succinct and accurate explanation is the Peter Olcott is
    wrong. That's been clear for a long time, now.


    When you start with the conclusion that I must
    be wrong as a stipulated truth then that will
    be the conclusion that you will draw.

    Pretty much everyone new here started by assuming you are right, and
    then by so doing, reached obvious falsehoods.

    You've received vast numbers of counter arguments which show that
    you cannot be right, rather than just assume it.

    Once someone discovers you are wrong, and that you produce no
    new ideas or corrections, you just stay wrong.

    Until you produce something fresh, you do not deserve a fresh assumption
    that you might be right; that path is worn out.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Oct 15 14:07:43 2025
    From Newsgroup: comp.theory

    On 10/15/2025 1:25 PM, Alan Mackenzie wrote:
    olcott <polcott333@gmail.com> wrote:
    On 10/15/2025 12:19 PM, Alan Mackenzie wrote:
    olcott <polcott333@gmail.com> wrote:
    On 10/15/2025 11:38 AM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:

    [ .... ]

    You can build an input which incorporates a decision algorithm H, a
    diagonal wrapper D, encode it into a finite string, and then have the >>>>> string processed by an implementation of algorithm H.

    The string is a syntactically and semantically valid machine
    representation and therefore lands squarely into the required domain.

    Please see my new post it can be explained much
    more succinctly: [The Halting Problem is Incoherent]

    A much more succinct and accurate explanation is that Peter Olcott is
    wrong. That's been clear for a long time, now.

    When you start with the conclusion that I must
    be wrong as a stipulated truth then that will
    be the conclusion that you will draw.

    I didn't start with that conclusion. I came to it as the inevitable
    result of reading hundreds of your posts, and not recalling a single true
    or coherent thing you have written.


    (a) Most everyone begins extremely biased for the conventional
    view of most everything.

    (b) Only now can I finally begin to communicate my
    points very clearly.

    (c) Most rebuttals were against verified facts that
    I had established as verified facts yet no one
    wanted to bother to pay close enough attention to
    see this.

    You have no reply to the excellent points made by Kaz.

    [The Halting Problem is Incoherent]

    The halting problem is perfectly coherent, and easily understood by mathematics or computer science undergraduates after a very few hours of study and thought at most. Less capable thinkers still don't get it
    after twenty years of "research".


    That you didn't respond to my other post seems
    to prove otherwise.

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Oct 15 14:19:13 2025
    From Newsgroup: comp.theory

    On 10/15/2025 11:38 AM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    On 10/15/2025 12:36 AM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    On 10/14/2025 10:34 PM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    On 10/14/2025 9:46 PM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    5. In short

    The halting problem as usually formalized is syntactically consistent >>>>>>>> only because it pretends that U(p) is well-defined for every p. >>>>>>>>
    If you interpret the definitions semantically rCo as saying that >>>>>>>> U(p) should simulate the behavior

    ... then you're making a grievous mistake. The halting function doesn't >>>>>>> stipulate simulation.


    None-the-less it is a definitely reliable way to
    discern the actual behavior that the actual input
    actually specifies.

    No, it isn't. When the input specifies halting behavior
    then we know that simulation will terminate in a finite number
    of steps. In that case we discern that the input has terminated.


    When the semantics of the language specify
    that when DD calls HHH(DD) that HHH must
    simulate an instance of itself simulating
    DD ChatGPT knows that this cannot be simply
    ignored.

    It is obvious that when H denotes a simulator, then its diagonal program >>> D ends up infinite regress, and is nonterminating.
    H(D) doesn't terminate, and fails to be a decider that way, not
    on account of returning an incorrect value.
    This situation is of no particular significance.

    When H is a simulator equipped with some break condition by which it
    stops simulating and returns a value, that H's diagonal program D
    ensures that the return value is wrong; if the value is 0, D is
    terminating.

    With HHH(DD)==0 HHH is returning the correct value for
    the actual behavior of its actual input.

    It simply isn't.

    That the directly
    executed DD() is not in the input domain of HHH makes
    what it does irrelevant.

    There exists no difference between "simulated" and "directly executed".


    *Conclusively proven otherwise by this*

    <Input to LLM systems>
    Please think this all the way through without making any guesses.

    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
    return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
    int Halt_Status = HHH(DD);
    if (Halt_Status)
    HERE: goto HERE;
    return Halt_Status;
    }

    int main()
    {
    HHH(DD);
    }

    What value should HHH(DD) correctly return?
    </Input to LLM systems>

    That this also holds at the level of pure math
    is why [The Halting Problem is Incoherent]

    https://www.researchgate.net/publication/396510896_The_Halting_Problem_is_Incoherent

    Link to the following dialogue https://chatgpt.com/share/68ef97b5-6770-8011-9aad-323009ca7841

    The situation is that you have made up multiple terms for the same thing
    and are insisting that there is a difference, which is just a
    word semantics play and equivocation. The difference is not real in
    the ontology of Turing machines.

    Turing machines and recursive procedures are an abstraction.

    Whenever we follow what they do, by any means, whether hardware,
    software or pencil-and-paper, that is always a
    simulation/interpretation.

    The only thing that can make a simulation more or less direct is
    translation.

    "Direct execution" of C means interpreting the textual tokens of the
    program; compiling to machine code is not "direct execution".

    This has nothing to do with the way you are falsely calling "direct execution".

    That the halting problem requires HHH to report on an
    input that it not in its domain makes the halting problem
    incoherent even at the purely mathematical level.

    I made it clear to you that the input is constructable; thus the
    situation can be made real, all the way to a physical realization.

    You can build an input which incorporates a decision algorithm H, a
    diagonal wrapper D, encode it into a finite string, and then have the
    string processed by an implementation of algorithm H.

    The string is a syntactically and semantically valid machine
    representation and therefore lands squarely into the required domain.

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory on Wed Oct 15 13:08:38 2025
    From Newsgroup: comp.theory

    On 10/15/2025 2:15 AM, Mikko wrote:
    On 2025-10-15 02:17:50 +0000, olcott said:

    5. In short

    The halting problem as usually formalized is syntactically consistent
    only because it pretends that U(p) is well-defined for every p.

    The halting problem does not pretend anything about U(p). It does not
    even mention U(p).

    The halting problem asks for a method to answer about every pair of a
    Turing machine and an input whether it halts or not. All those questions
    have a correct answer. The function that maps pairs of a Turing machine
    and an input to true if the function halts and false otherwise is called
    "the halting function" but that function is usually nit mentioned in the halting problem specification.

    Here is a halting decider, lol:

    1 HOME
    5 PRINT "The Olcott All-in-One Halt Decider!"
    10 INPUT "Shall I halt or not? " ; A$
    30 IF A$ = "YES" GOTO 666
    40 GOTO 10
    666 PRINT "OK!"




    If you interpret the definitions semantically rCo as saying that
    U(p) should simulate the behavior actually specified by
    p rCo then the system is logically incoherent, not just idealized.

    It does not make sense to interprete a definiton as anything other
    that a definition. The only semantics of a syntactically correct
    definition is that the defined means the same as the defining
    expression.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory on Wed Oct 15 13:14:17 2025
    From Newsgroup: comp.theory

    On 10/15/2025 12:19 PM, olcott wrote:
    On 10/15/2025 11:38 AM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    On 10/15/2025 12:36 AM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    On 10/14/2025 10:34 PM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    On 10/14/2025 9:46 PM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    5. In short

    The halting problem as usually formalized is syntactically
    consistent
    only because it pretends that U(p) is well-defined for every p. >>>>>>>>>
    If you interpret the definitions semantically rCo as saying that >>>>>>>>> U(p) should simulate the behavior

    ... then you're making a grievous mistake. The halting function >>>>>>>> doesn't
    stipulate simulation.


    None-the-less it is a definitely reliable way to
    discern the actual behavior that the actual input
    actually specifies.

    No, it isn't.-a When the input specifies halting behavior
    then we know that simulation will terminate in a finite number
    of steps. In that case we discern that the input has terminated.


    When the semantics of the language specify
    that when DD calls HHH(DD) that HHH must
    simulate an instance of itself simulating
    DD ChatGPT knows that this cannot be simply
    ignored.

    It is obvious that when H denotes a simulator, then its diagonal
    program
    D ends up infinite regress, and is nonterminating.
    H(D) doesn't terminate, and fails to be a decider that way, not
    on account of returning an incorrect value.
    This situation is of no particular significance.

    When H is a simulator equipped with some break condition by which it
    stops simulating and returns a value, that H's diagonal program D
    ensures that the return value is wrong; if the value is 0, D is
    terminating.

    With HHH(DD)==0 HHH is returning the correct value for
    the actual behavior of its actual input.

    It simply isn't.

    That the directly
    executed DD() is not in the input domain of HHH makes
    what it does irrelevant.

    There exists no difference between "simulated" and "directly executed".


    *Conclusively proven otherwise by this*

    <Input to LLM systems>
    Please think this all the way through without making any guesses.

    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern:
    -a-a-a abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
    -a-a-a return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
    -a-a-a then HHH is correct to abort this simulation and return 0.

    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
    -a int Halt_Status = HHH(DD);
    -a if (Halt_Status)
    -a-a-a HERE: goto HERE;
    -a return Halt_Status;
    }

    int main()
    {
    -a HHH(DD);
    }

    This sure seems simpler:

    1 HOME
    5 PRINT "The Olcott All-in-One Halt Decider!"
    10 INPUT "Shall I halt or not? " ; A$
    30 IF A$ = "YES" GOTO 666
    40 GOTO 10
    666 PRINT "OK!"

    ?

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Wed Oct 15 20:47:03 2025
    From Newsgroup: comp.theory

    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    Please think this all the way through without making any guesses.

    Simulating Termination Analyzer HHH correctly simulates its input until:

    This sentence must end with nothing other than "until that input terminates".

    Otherwise the simulation is not complete and correct.

    (a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
    return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    HHH is correct to abort the simulation because if it doesn't do that,
    it will not terminate. All halting deciders that incorporate simulation
    as a tool must break out of simulation at some point in order not to be
    tripped up by inputs that fail to terminate.

    Without breaking out of the simulation, it would not be possible
    for HHH(Infinite_Loop) or HHH(Infinite_Recursion) to decide correctly
    that the return value should be zero.

    However, nothing is effective against the diagonal input.

    What value should HHH(DD) correctly return?

    The set of possible solutions is the empty set.

    3x + y = 5
    6x + 2y = 3

    What pairs <x, y> satisfy these equations?

    HHH(DD) not having a solution is no different from simultaneous
    equations in n variables not having a solution.

    Elementary school children can work with these.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Oct 15 16:00:36 2025
    From Newsgroup: comp.theory

    On 10/15/2025 3:47 PM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    Please think this all the way through without making any guesses.

    Simulating Termination Analyzer HHH correctly simulates its input until:

    This sentence must end with nothing other than "until that input terminates".

    Otherwise the simulation is not complete and correct.

    (a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
    return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    HHH is correct to abort the simulation because if it doesn't do that,
    it will not terminate. All halting deciders that incorporate simulation
    as a tool must break out of simulation at some point in order not to be tripped up by inputs that fail to terminate.


    Great I just tested this and ChatGPT 5.0 and Gemini get
    the wrong answer without (c). ChatGPT 4.0, Claude AI
    and Grok all got this same answer without (c) and
    without being told not to guess.

    Without breaking out of the simulation, it would not be possible
    for HHH(Infinite_Loop) or HHH(Infinite_Recursion) to decide correctly
    that the return value should be zero.


    Yes.

    However, nothing is effective against the diagonal input.


    Unless we go be the behavior that the semantics
    of the specification language specifies as the
    ultimate judge.

    What value should HHH(DD) correctly return?


    Not within the stipulated axioms when (c) is invoked.

    The set of possible solutions is the empty set.

    3x + y = 5
    6x + 2y = 3

    What pairs <x, y> satisfy these equations?

    HHH(DD) not having a solution is no different from simultaneous
    equations in n variables not having a solution.

    Elementary school children can work with these.


    Its deeper than that. See my other post.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Oct 15 17:18:52 2025
    From Newsgroup: comp.theory

    On 10/15/2025 4:00 PM, olcott wrote:
    On 10/15/2025 3:47 PM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    Please think this all the way through without making any guesses.

    Simulating Termination Analyzer HHH correctly simulates its input until:

    This sentence must end with nothing other than "until that input
    terminates".

    Otherwise the simulation is not complete and correct.

    (a) Detects a non-terminating behavior pattern:
    -a-a-a-a-a abort simulation and return 0.
    (b) Simulated input reaches its simulated "return" statement:
    -a-a-a-a-a return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
    -a-a-a-a-a then HHH is correct to abort this simulation and return 0.

    HHH is correct to abort the simulation because if it doesn't do that,
    it will not terminate. All halting deciders that incorporate simulation
    as a tool must break out of simulation at some point in order not to be
    tripped up by inputs that fail to terminate.


    Great I just tested this and ChatGPT 5.0 and Gemini get
    the wrong answer without (c). ChatGPT 4.0, Claude AI
    and Grok all got this same answer without (c) and
    without being told not to guess.

    Without breaking out of the simulation, it would not be possible
    for HHH(Infinite_Loop) or HHH(Infinite_Recursion) to decide correctly
    that the return value should be zero.


    Yes.

    However, nothing is effective against the diagonal input.


    Unless we go be the behavior that the semantics
    of the specification language specifies as the
    ultimate judge.

    What value should HHH(DD) correctly return?


    Not within the stipulated axioms when (c) is invoked.

    The set of possible solutions is the empty set.

    -a-a-a 3x +-a y = 5
    -a-a-a 6x + 2y = 3

    What pairs <x, y> satisfy these equations?

    HHH(DD) not having a solution is no different from simultaneous
    equations in n variables not having a solution.

    Elementary school children can work with these.


    Its deeper than that. See my other post.


    I didn't get the correct understanding by being
    a brilliant computer scientist. I got it by focusing
    on how pathological self-reference effects truth.
    I did this with an OCD like focus of concentration
    for 28 years.

    In the top of my new post [The Halting Problem is Incoherent]
    I show exactly how to overcome Quine's objection to the
    analytic / synthetic distinction. This is a key aspect of the
    foundation of the notion of truth itself.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.theory,sci.logic,sci.math on Thu Oct 16 05:37:05 2025
    From Newsgroup: comp.theory

    On 15/10/2025 04:04, olcott wrote:
    On 10/14/2025 9:46 PM, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    5. In short

    The halting problem as usually formalized is syntactically consistent
    only because it pretends that U(p) is well-defined for every p.

    If you interpret the definitions semantically rCo as saying that
    U(p) should simulate the behavior

    ... then you're making a grievous mistake. The halting function doesn't
    stipulate simulation.


    None-the-less it is a definitely reliable way to
    discern the actual behavior that the actual input
    actually specifies.

    Now, when you say "simulate" do you mean the modern systems modelling
    term meaning "analyse and via that, characterise", or the modern
    "emulate" meaning "perform a materially similar facsimile of the
    statewise evolution of" ?

    In the latter, it's not "a reliable way to discern the actual behaviour
    that the actual input actually specifies" (BTW, actually loving the
    actual exasperation actually showing through). It's not reliable because
    you never discover the fact of nontermination that way, you only ever
    discover "at this moment it has not yet terminated".

    In the former, then it's as reliable as _any_ logical system can do but
    maybe not truly reliable: as per Goedel.


    I do not purport to have hereby responded to any other part of the
    message I hereby followed-up to nor should anybody particularly expect
    it. I was just particularly interested in the consequences of olcott's
    meaning in "simulate".

    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.theory,sci.logic,sci.math on Thu Oct 16 05:53:26 2025
    From Newsgroup: comp.theory

    On 15/10/2025 06:38, Kaz Kylheku wrote:
    On 2025-10-15, Tristan Wibberley <tristan.wibberley+netnews2@alumni.manchester.ac.uk> wrote:
    ...
    or else that our ontology is incorrect.

    Which points to our mistake, because in this context we are handed
    the ontology.


    It's not necessarily so that given ontologies are correct ontologies.

    There might be ontologies that contradict the formal system whose
    analysis they purport to aid and we may be given multiple ontologies
    which mingle in the mind which we must try to address, and any of those ontologies might be materially non-constructive or self-referential
    themselves (of course they are, in fact, so - the fascinating natural
    language - but not materially in close-knit groups because normally they redefine their personal appreciation of terms for their in-group communications).

    Your observation, for example, that "simulate" is not a part of the
    ontology is useful in its sometimes meaning similar to "emulate". It
    will be instructive to see whether that's what oclott has meant and what indications (s)he has given to the contrary.


    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Thu Oct 16 06:55:31 2025
    From Newsgroup: comp.theory

    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    (b) Only now can I finally begin to communicate my
    points very clearly.

    When can you finally begin looking into what happens when
    you take simulations abandoned by your HHH (declared by
    it to be non-halting), and step more instructions?
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Heathfield@rjh@cpax.org.uk to comp.theory on Thu Oct 16 08:33:18 2025
    From Newsgroup: comp.theory

    On 16/10/2025 07:55, Kaz Kylheku wrote:
    On 2025-10-15, olcott <polcott333@gmail.com> wrote:
    (b) Only now can I finally begin to communicate my
    points very clearly.

    When can you finally begin looking into what happens when
    you take simulations abandoned by your HHH (declared by
    it to be non-halting), and step more instructions?

    Clearly you are an agent of the conspiracy, so *of course* he
    isn't going to follow your advice.

    https://xkcd.com/3155/
    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory on Thu Oct 16 12:21:27 2025
    From Newsgroup: comp.theory

    On 2025-10-15 12:30:12 +0000, olcott said:

    On 10/15/2025 4:15 AM, Mikko wrote:
    On 2025-10-15 02:17:50 +0000, olcott said:

    5. In short

    The halting problem as usually formalized is syntactically consistent
    only because it pretends that U(p) is well-defined for every p.

    The halting problem does not pretend anything about U(p). It does not
    even mention U(p).

    It semantically entails U(p)

    A problem does not entail anything, semantically or otherwise.
    The words "ptoblem" and "entail" are semantically incompatible.

    It requires every decider H to report on the behavior
    of UTM(p). When p calls H then the behavior of UTM(p)
    is outside of the domain of H.

    No, it does not. But a decider that does not answer as required
    by the halting problem is not a nalting decider.

    When in fact they are not, thus a break from reality.

    That does not make sense. What "they" and what they are not?

    The halting problem stipulates that they are in the
    same domain. Correct semantic entailment proves that
    they are not.

    The halting problem does not stipulate. It asks for a method to
    answer all questions that ask about a Turing machine and an
    input that can be given to it whether the Turing machine halts.

    HHH(DD)==0 and HHH1(DD)==1 proves this when the ultimate
    measure of the behavior that the input specifies is
    the simulation of its input by its decider according to
    the semantics of its language.

    No, it does not. It only proves that one of them gives the wrong
    answer and the other the right one.
    --
    Mikko

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic on Thu Oct 16 07:55:17 2025
    From Newsgroup: comp.theory

    On 10/14/25 10:17 PM, olcott wrote:
    5. In short

    The halting problem as usually formalized is syntactically consistent
    only because it pretends that U(p) is well-defined for every p.

    Because it *IS*.

    Your problem is you CHANGE the question and your ALTERED question is the
    one with a problem.

    You question is based on your "decider" not being a program and its
    input not being a description of a program.

    Your:
    (c) If HHH must abort its simulation to prevent its own non-termination
    then HHH(DD) is correct to abort this simulation and return 0.

    This condition is meaningless in the context of actual Algorithms, as
    the algorithm either does abort here, or it doesn't.

    If it doesn't, you can't talk about what it will do if it does somethig
    it doesn't.

    And if it does abort, then we can show for this particular input, that a correct simulation of this input will reach a final state.

    Your problem is that your logic is based on not looking at a determined algorithm, but at a meta-algorithm.


    If you interpret the definitions semantically rCo as saying that
    U(p) should simulate the behavior actually specified by
    p rCo then the system is logically incoherent, not just idealized.

    But that isn't the semantic meaning, but just shows you don't understand
    that words are supposed to have specific meaning in their context.

    Your problem is you think you get to redefine words you don't like, but
    that means that words don't actually have real meaning, and thus your
    whole basis of semantic meaning is invalid.


    That is a stronger critique than rCLthe definition doesnrCOt match reality.rCY
    ItrCOs that the definition contains a contradiction in its own terms once you stop suppressing the semantic entailments of self-reference.

    But the actual definitions are what define the reality.

    Your problem is you don't understand the world you are talking about,
    because you just don't understand what it means for something to be
    true, because you don't understand the concept of context.


    https://chatgpt.com/share/68eef2df-0f10-8011-8e92-264651cc518c


    But your problem is that if U is a Universal Turing Machine, it isn't a
    Halt Decider H, as they are different mappings. Your logic is based on
    the same soft of thing as assuming that circles are square and have corners.

    Since you don't understand the meaning of those two terms, and how they
    are inconsistant.

    The problem goes back to you altering the definition of Halting, and
    show you don't understand what a correct simulation is.

    In fact, your proof actually show that per the actual definition of
    Halting, it is impossible to create a halting decider, just like the
    original proof, and it shows that you attempt to define a "Simulating
    Halt Decider" is itself inconsistant.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math on Thu Oct 16 07:56:54 2025
    From Newsgroup: comp.theory

    On 10/15/2025 11:53 PM, Tristan Wibberley wrote:
    On 15/10/2025 06:38, Kaz Kylheku wrote:
    On 2025-10-15, Tristan Wibberley <tristan.wibberley+netnews2@alumni.manchester.ac.uk> wrote:
    ...
    or else that our ontology is incorrect.

    Which points to our mistake, because in this context we are handed
    the ontology.


    It's not necessarily so that given ontologies are correct ontologies.

    There might be ontologies that contradict the formal system whose
    analysis they purport to aid and we may be given multiple ontologies
    which mingle in the mind which we must try to address, and any of those ontologies might be materially non-constructive or self-referential themselves (of course they are, in fact, so - the fascinating natural language - but not materially in close-knit groups because normally they redefine their personal appreciation of terms for their in-group communications).

    Your observation, for example, that "simulate" is not a part of the
    ontology is useful in its sometimes meaning similar to "emulate". It
    will be instructive to see whether that's what oclott has meant and what indications (s)he has given to the contrary.


    In my case simulated is implemented by x86 emulation thus
    provides the exact behavior that the input to HHH(DD)
    actually specifies.

    Five LLM systems: Claude AI, ChatGPT 4.0, Grok, Gemini and
    ChatGPT 5.0 all got these exact same results on the basis
    of the C code shown below.

    Thus in this case x86 emulation and correct simulation of
    the C code were mathematically precise.

    <Input to LLM systems>
    Please think this all the way through without making any guesses

    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern: abort simulation and
    return 0.
    (b) Simulated input reaches its simulated "return" statement: return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
    int Halt_Status = HHH(DD);
    if (Halt_Status)
    HERE: goto HERE;
    return Halt_Status;
    }

    int main()
    {
    HHH(DD);
    }

    What value should HHH(DD) correctly return?
    </Input to LLM systems>


    *The Halting Problem is Incoherent* https://www.researchgate.net/publication/396510896_The_Halting_Problem_is_Incoherent

    rCLThe halting problem, as classically formulated,
    relies on an inferential step that is not justified
    by a continuous chain of semantic entailment from
    its initial stipulations.rCY
    ...
    "The halting problemrCOs definition contains a break
    in the chain of semantic entailment; it asserts
    totality over a domain that its own semantics cannot
    support."

    Was ChatGPT 5.0's final conclusion after a 32 page dialogue.

    Link to the following dialogue https://chatgpt.com/share/68ef97b5-6770-8011-9aad-323009ca7841


    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Thu Oct 16 08:34:39 2025
    From Newsgroup: comp.theory

    On 10/16/2025 4:21 AM, Mikko wrote:
    On 2025-10-15 12:30:12 +0000, olcott said:

    On 10/15/2025 4:15 AM, Mikko wrote:
    On 2025-10-15 02:17:50 +0000, olcott said:

    5. In short

    The halting problem as usually formalized is syntactically
    consistent only because it pretends that U(p) is well-defined for
    every p.

    The halting problem does not pretend anything about U(p). It does not
    even mention U(p).

    It semantically entails U(p)

    A problem does not entail anything, semantically or otherwise.
    The words "ptoblem" and "entail" are semantically incompatible.

    It requires every decider H to report on the behavior
    of UTM(p). When p calls H then the behavior of UTM(p)
    is outside of the domain of H.

    No, it does not. But a decider that does not answer as required
    by the halting problem is not a nalting decider.

    When in fact they are not, thus a break from reality.

    That does not make sense. What "they" and what they are not?

    The halting problem stipulates that they are in the
    same domain. Correct semantic entailment proves that
    they are not.

    The halting problem does not stipulate. It asks for a method to
    answer all questions that ask about a Turing machine and an
    input that can be given to it whether the Turing machine halts.

    HHH(DD)==0 and HHH1(DD)==1 proves this when the ultimate
    measure of the behavior that the input specifies is
    the simulation of its input by its decider according to
    the semantics of its language.

    No, it does not. It only proves that one of them gives the wrong
    answer and the other the right one.


    <Input to LLM systems>
    Please think this all the way through without making any guesses

    Simulating Termination Analyzer HHH correctly simulates its input until:
    (a) Detects a non-terminating behavior pattern: abort simulation and
    return 0.
    (b) Simulated input reaches its simulated "return" statement: return 1.
    (c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

    typedef int (*ptr)();
    int HHH(ptr P);

    int DD()
    {
    int Halt_Status = HHH(DD);
    if (Halt_Status)
    HERE: goto HERE;
    return Halt_Status;
    }

    int main()
    {
    HHH(DD);
    }

    What value should HHH(DD) correctly return?
    </Input to LLM systems>

    The key element of my whole proof is the behavior
    of the input to HHH(DD) within the semantics of the
    C programming language.

    The LLM systems are smart enough to not conflate
    two different things together and not bother to
    notice this is mistake.

    The first couple of pages of my ChatGPT dialogue
    handles this aspect quite well.

    ChatGPT is 67-fold smarter than it was one year ago.
    It can not keep track of all of the details of very
    long inference chains. 200,000 words this year and
    only 3,000 words one year ago.

    The Halting Problem is Incoherent https://www.researchgate.net/publication/396510896_The_Halting_Problem_is_Incoherent
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2