Sysop:       Amessyroom
Location:    Fayetteville, NC
Users:       27
Nodes:       6 (0 / 6)
Uptime:      38:01:58
Calls:       631
Calls today: 2
Files:       1,187
D/L today:   22 files (29,767K bytes)
Messages:    173,681
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
An indirect reference that does not really exist.
It is the job of HHH to report on the behavior
that its actual input actually specifies.
Yes; everyone agrees with this.
They keep ignoring the verified fact that when the
counter-example input is simulated by the same halt
decider that it calls that the "do-the-opposite"
code is unreachable and this input remains stuck
in recursive simulation.
On 10/7/2025 1:49 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
An indirect reference that does not really exist.
It is the job of HHH to report on the behavior
that its actual input actually specifies.
Yes; everyone agrees with this.
They keep ignoring the verified fact that when the
counter-example input is simulated by the same halt
decider that it calls that the "do-the-opposite"
code is unreachable and this input remains stuck
in recursive simulation.
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 1:49 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
An indirect reference that does not really exist.
It is the job of HHH to report on the behavior
that its actual input actually specifies.
Yes; everyone agrees with this.
They keep ignoring the verified fact that when the
You've not independently produced any useful verified fact since you
started on this in 2004.
counter-example input is simulated by the same halt
decider that it calls that the "do-the-opposite"
code is unreachable and this input remains stuck
in recursive simulation.
This is false. It depends on /what kind of decider/
is targeted by the diagonal case.
A decider which simply simulates its input for as many steps
as it takes to finish it will indeed not return to its diagonal
function. BY NOT RETURNING, IT FAILS TO DECIDE AND IS DISQUALIFIED
just as if it returned a wrong answer.
A decider which aborts its input and returns 0 to reject it
will cause its diagonal function to terminate.
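The two decider types above can be sketched in C. This is a toy model, not anyone's actual H or HHH: the names H, D, and sim_depth are illustrative, and "simulation" is modeled as a guarded direct call with a nesting counter.

```c
static int H(int prog);

static int sim_depth = 0;  /* nesting level of H simulating its own case */

/* D: the diagonal "do the opposite" program built on H.
   Returns 1 when it halts; the for(;;) models non-halting. */
static int D(void)
{
    if (H(0))       /* if H says "D halts" ... */
        for (;;) ;  /* ... D loops forever */
    return 1;       /* if H says "D loops", D halts immediately */
}

/* H: a simulating decider that aborts a nested simulation
   and returns 0 ("does not halt"). */
static int H(int prog)
{
    (void)prog;      /* only one program in this toy model */
    if (sim_depth > 0)
        return 0;    /* abort the nested simulation: reject */
    sim_depth++;
    int d = D();     /* "simulate" D; the inner H(0) takes the abort path */
    sim_depth--;
    (void)d;         /* H committed to "loops" when it aborted */
    return 0;
}
```

Calling H(0) yields 0 ("loops"), yet running D() directly returns 1: the aborting decider does cause its diagonal function to terminate, and by that very fact its verdict about D is wrong, while a never-aborting simulator would never return at all.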
On 10/7/2025 3:11 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 1:49 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
An indirect reference that does not really exist.
It is the job of HHH to report on the behavior
that its actual input actually specifies.
Yes; everyone agrees with this.
They keep ignoring the verified fact that when the
You've not independently produced any useful verified fact since you
started on this in 2004.
counter-example input is simulated by the same halt
decider that it calls that the "do-the-opposite"
code is unreachable and this input remains stuck
in recursive simulation.
This is false. It depends on /what kind of decider/
is targeted by the diagonal case.
A decider which simply simulates its input for as many steps
as it takes to finish it will indeed not return to its diagonal
function. BY NOT RETURNING, IT FAILS TO DECIDE AND IS DISQUALIFIED
just as if it returned a wrong answer.
A decider which aborts its input and returns 0 to reject it
will cause its diagonal function to terminate.
The decider is only deciding on the behavior of the
code it is simulating. I know that you know that this
correctly simulated code cannot possibly terminate
normally by reaching its simulated final halt state.
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 3:11 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 1:49 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
An indirect reference that does not really exist.
It is the job of HHH to report on the behavior
that its actual input actually specifies.
Yes; everyone agrees with this.
They keep ignoring the verified fact that when the
You've not independently produced any useful verified fact since you
started on this in 2004.
counter-example input is simulated by the same halt
decider that it calls that the "do-the-opposite"
code is unreachable and this input remains stuck
in recursive simulation.
This is false. It depends on /what kind of decider/
is targeted by the diagonal case.
A decider which simply simulates its input for as many steps
as it takes to finish it will indeed not return to its diagonal
function. BY NOT RETURNING, IT FAILS TO DECIDE AND IS DISQUALIFIED
just as if it returned a wrong answer.
A decider which aborts its input and returns 0 to reject it
will cause its diagonal function to terminate.
The decider is only deciding on the behavior of the
code it is simulating. I know that you know that this
correctly simulated code cannot possibly terminate
normally by reaching its simulated final halt state.
I specifically know that to be false.
I mean, right there you are acknowledging that it /has/ a final
halt state. There is a final halt state which is /its/
final halt state.
"To have a halt state" is synonymous with "to terminate"!!!
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 10:31 AM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 4:15 AM, joes wrote:
Am Mon, 06 Oct 2025 21:35:19 -0500 schrieb olcott:
On 10/6/2025 9:00 PM, Kaz Kylheku wrote:
Yeah, because you stop simulating it. That doesn't make it not halt.
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
IT IS NEVER EVER AN ACTUAL INPUT IT IS ALWAYS SOME MACHINE SOMEWHERE
ELSE THAT IS NOT THE ACTUAL INPUT
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy // accept state
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn // reject state
*These steps keep repeating unless embedded_H aborts*
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
Those steps keep repeating unless embedded_H is //redefined//
So a pair of mutually exclusive hypothetical possibilities
is well beyond your capacity to understand.
It is beyond your own capacity to understand that these
possibilities cannot exist simultaneously //under the same symbols//.
On 10/7/2025 3:32 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 3:11 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 1:49 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
An indirect reference that does not really exist.
It is the job of HHH to report on the behavior
that its actual input actually specifies.
Yes; everyone agrees with this.
They keep ignoring the verified fact that when the
You've not independently produced any useful verified fact since you
started on this in 2004.
counter-example input is simulated by the same halt
decider that it calls that the "do-the-opposite"
code is unreachable and this input remains stuck
in recursive simulation.
This is false. It depends on /what kind of decider/
is targeted by the diagonal case.
A decider which simply simulates its input for as many steps
as it takes to finish it will indeed not return to its diagonal
function. BY NOT RETURNING, IT FAILS TO DECIDE AND IS DISQUALIFIED
just as if it returned a wrong answer.
A decider which aborts its input and returns 0 to reject it
will cause its diagonal function to terminate.
The decider is only deciding on the behavior of the
code it is simulating. I know that you know that this
correctly simulated code cannot possibly terminate
normally by reaching its simulated final halt state.
I specifically know that to be false.
I mean, right there you are acknowledging that it /has/ a final
halt state. There is a final halt state which is /its/
final halt state.
"To have a halt state" is synonymous with "to terminate"!!!
void Infinite_Loop()
{
  HERE: goto HERE;
  return;
}
You already lied about this once.
This time could be a miscommunication.
Infinite_Loop() has an unreachable final
halt state of its "return" statement.
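The "unreachable final halt state" claim about Infinite_Loop can be made concrete with a toy step-by-step trace (a sketch added here, not anyone's simulator): number the goto as instruction 0 and the return as instruction 1, and check where the program counter sits after n steps.

```c
/* Instruction 0: "HERE: goto HERE;"   Instruction 1: "return;"
   Returns the program counter after executing n steps from pc 0. */
static int pc_after(long n)
{
    int pc = 0;
    for (long step = 0; step < n; step++) {
        if (pc == 0)
            pc = 0;   /* the goto jumps back to itself */
        else
            break;    /* reaching pc 1 would execute the return */
    }
    return pc;
}
```

No finite number of steps ever moves the counter to instruction 1, so the return (the function's only exit) is never executed; the disagreement in the thread is over whether such an unreachable exit still counts as a "final halt state".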
On 10/7/2025 1:49 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
An indirect reference that does not really exist.
It is the job of HHH to report on the behavior
that its actual input actually specifies.
Yes; everyone agrees with this.
They keep ignoring the verified fact that when the
counter-example input is simulated by the same halt
decider that it calls that the "do-the-opposite"
code is unreachable and this input remains stuck
in recursive simulation.
On 10/7/2025 3:32 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 3:11 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 1:49 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
An indirect reference that does not really exist.
It is the job of HHH to report on the behavior
that its actual input actually specifies.
Yes; everyone agrees with this.
They keep ignoring the verified fact that when the
You've not independently produced any useful verified fact since you
started on this in 2004.
counter-example input is simulated by the same halt
decider that it calls that the "do-the-opposite"
code is unreachable and this input remains stuck
in recursive simulation.
This is false. It depends on /what kind of decider/
is targeted by the diagonal case.
A decider which simply simulates its input for as many steps
as it takes to finish it will indeed not return to its diagonal
function. BY NOT RETURNING, IT FAILS TO DECIDE AND IS DISQUALIFIED
just as if it returned a wrong answer.
A decider which aborts its input and returns 0 to reject it
will cause its diagonal function to terminate.
The decider is only deciding on the behavior of the
code it is simulating. I know that you know that this
correctly simulated code cannot possibly terminate
normally by reaching its simulated final halt state.
I specifically know that to be false.
I mean, right there you are acknowledging that it /has/ a final
halt state. There is a final halt state which is /its/
final halt state.
"To have a halt state" is synonymous with "to terminate"!!!
void Infinite_Loop()
{
HERE: goto HERE;
return;
}
Infinite_Loop() has an unreachable final
halt state of its "return" statement.
(defun infinite-loop () (while t))
(compile 'infinite-loop)
#<vm fun: 0 param>
(disassemble *2)
data:
(defun return-42 () 42)
return-42
(disassemble (compile 'return-42))
data:
(defun infinite-recursion () (infinite-recursion))
infinite-recursion
(compile 'infinite-recursion)
#<vm fun: 0 param>
(disassemble *2)
data:
(compile-toplevel '(defun infinite-recursion () (infinite-recursion)))
#<sys:vm-desc: 9cb6bc0>
(disassemble *1)
data:
On 10/7/2025 1:52 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 10:31 AM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 4:15 AM, joes wrote:
Am Mon, 06 Oct 2025 21:35:19 -0500 schrieb olcott:
On 10/6/2025 9:00 PM, Kaz Kylheku wrote:
Yeah, because you stop simulating it. That doesn't make it not halt.
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
IT IS NEVER EVER AN ACTUAL INPUT IT IS ALWAYS SOME MACHINE SOMEWHERE
ELSE THAT IS NOT THE ACTUAL INPUT
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy // accept state
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn // reject state
*These steps keep repeating unless embedded_H aborts*
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
Those steps keep repeating unless embedded_H is //redefined//
So a pair of mutually exclusive hypothetical possibilities
is well beyond your capacity to understand.
It is beyond your own capacity to understand that these
possibilities cannot exist simultaneously //under the same symbols//.
That is true, but, irrelevant.
We could say that we cannot tell whether this sentence
is true or false: "This sentence is not true" and never
get to the point that it is semantically malformed.
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 1:49 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
An indirect reference that does not really exist.
It is the job of HHH to report on the behavior
that its actual input actually specifies.
Yes; everyone agrees with this.
They keep ignoring the verified fact that when the
counter-example input is simulated by the same halt
decider that it calls that the "do-the-opposite"
code is unreachable and this input remains stuck
in recursive simulation.
A subtle correction: it is simulated by /an equivalent/ halt decider.
Since the test case is a finite string of symbols, it cannot literally contain that decider which is operating on it. (You are right
to call that out.)
It contains an implementation of exactly the same algorithm.
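The correction above (the input contains an equivalent implementation, not the decider itself) is easy to illustrate: two functions can be distinct code objects while implementing exactly the same algorithm. The names below are hypothetical, added purely for illustration.

```c
/* Two separately written functions implementing the same algorithm. */
static int double_it(int x)       { return 2 * x; }
static int double_it_clone(int x) { return 2 * x; }

/* 1 iff the two implementations agree on the given input */
static int agree_on(int x)
{
    return double_it(x) == double_it_clone(x);
}
```

A conforming C implementation also gives the two functions distinct addresses ((void *)double_it != (void *)double_it_clone, barring nonstandard identical-code folding), which is the sense in which a finite string can contain "the same algorithm" without literally containing the decider that operates on it.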
On 2025-10-08, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 3:32 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 3:11 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 1:49 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
An indirect reference that does not really exist.
It is the job of HHH to report on the behavior
that its actual input actually specifies.
Yes; everyone agrees with this.
They keep ignoring the verified fact that when the
You've not independently produced any useful verified fact since you
started on this in 2004.
counter-example input is simulated by the same halt
decider that it calls that the "do-the-opposite"
code is unreachable and this input remains stuck
in recursive simulation.
This is false. It depends on /what kind of decider/
is targeted by the diagonal case.
A decider which simply simulates its input for as many steps
as it takes to finish it will indeed not return to its diagonal
function. BY NOT RETURNING, IT FAILS TO DECIDE AND IS DISQUALIFIED
just as if it returned a wrong answer.
A decider which aborts its input and returns 0 to reject it
will cause its diagonal function to terminate.
The decider is only deciding on the behavior of the
code it is simulating. I know that you know that this
correctly simulated code cannot possibly terminate
normally by reaching its simulated final halt state.
I specifically know that to be false.
I mean, right there you are acknowledging that it /has/ a final
halt state. There is a final halt state which is /its/
final halt state.
"To have a halt state" is synonymous with "to terminate"!!!
void Infinite_Loop()
{
HERE: goto HERE;
return;
}
This one doesn't have a halt state.
Infinite_Loop() has an unreachable final
halt state of its "return" statement.
Positively does not.
Unreachable code does not exist.
On 10/7/2025 9:30 PM, Kaz Kylheku wrote:
On 2025-10-08, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 3:32 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 3:11 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 1:49 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
An indirect reference that does not really exist.
It is the job of HHH to report on the behavior
that its actual input actually specifies.
Yes; everyone agrees with this.
They keep ignoring the verified fact that when the
You've not independently produced any useful verified fact since you
started on this in 2004.
counter-example input is simulated by the same halt
decider that it calls that the "do-the-opposite"
code is unreachable and this input remains stuck
in recursive simulation.
This is false. It depends on /what kind of decider/
is targeted by the diagonal case.
A decider which simply simulates its input for as many steps
as it takes to finish it will indeed not return to its diagonal
function. BY NOT RETURNING, IT FAILS TO DECIDE AND IS DISQUALIFIED
just as if it returned a wrong answer.
A decider which aborts its input and returns 0 to reject it
will cause its diagonal function to terminate.
The decider is only deciding on the behavior of the
code it is simulating. I know that you know that this
correctly simulated code cannot possibly terminate
normally by reaching its simulated final halt state.
I specifically know that to be false.
I mean, right there you are acknowledging that it /has/ a final
halt state. There is a final halt state which is /its/
final halt state.
"To have a halt state" is synonymous with "to terminate"!!!
void Infinite_Loop()
{
HERE: goto HERE;
return;
}
This one doesn't have a halt state.
Infinite_Loop() has an unreachable final
halt state of its "return" statement.
Positively does not.
Unreachable code does not exist.
A very stupid thing to say and you know it.
void Infinite_Loop()
{
HERE: goto HERE;
printf("Kaz is too dumb to know this code is unreachable!\n");
return;
}
On 10/7/2025 9:30 PM, Kaz Kylheku wrote:
On 2025-10-08, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 3:32 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 3:11 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 1:49 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
An indirect reference that does not really exist.
It is the job of HHH to report on the behavior
that its actual input actually specifies.
Yes; everyone agrees with this.
They keep ignoring the verified fact that when the
You've not independently produced any useful verified fact
since you
started on this in 2004.
counter-example input is simulated by the same halt
decider that it calls that the "do-the-opposite"
code is unreachable and this input remains stuck
in recursive simulation.
This is false. It depends on /what kind of decider/
is targeted by the diagonal case.
A decider which simply simulates its input for as many steps
as it takes to finish it will indeed not return to its
diagonal
function. BY NOT RETURNING, IT FAILS TO DECIDE AND IS
DISQUALIFIED
just as if it returned a wrong answer.
A decider which aborts its input and returns 0 to reject it
will cause its diagonal function to terminate.
The decider is only deciding on the behavior of the
code it is simulating. I know that you know that this
correctly simulated code cannot possibly terminate
normally by reaching its simulated final halt state.
I specifically know that to be false.
I mean, right there you are acknowledging that it /has/ a final
halt state. There is a final halt state which is /its/
final halt state.
"To have a halt state" is synonymous with "to terminate"!!!
void Infinite_Loop()
{
    HERE: goto HERE;
    return;
}
This one doesn't have a halt state.
Infinite_Loop() has an unreachable final
halt state of its "return" statement.
Positively does not.
Unreachable code does not exist.
A very stupid thing to say and you know it.
void Infinite_Loop()
{
  HERE: goto HERE;
  printf("Kaz is too dumb to know this code is unreachable!\n");
  return;
}
On 10/7/2025 4:18 AM, Mikko wrote:
On 2025-10-07 00:35:42 +0000, olcott said:
On 10/6/2025 5:01 AM, Mikko wrote:
On 2025-10-05 14:33:37 +0000, olcott said:
On 10/5/2025 4:40 AM, Mikko wrote:
On 2025-10-05 03:50:44 +0000, olcott said:
On 10/4/2025 3:07 AM, Mikko wrote:
On 2025-10-03 14:37:12 +0000, olcott said:
On 10/3/2025 4:14 AM, Mikko wrote:
On 2025-10-01 13:27:16 +0000, olcott said:
On 10/1/2025 3:38 AM, Mikko wrote:
On 2025-09-30 18:25:07 +0000, olcott said:
On 9/30/2025 5:35 AM, Mikko wrote:
On 2025-09-29 15:16:33 +0000, olcott said:
That is the decision problem. I am not talking about the
decision problem. I am not talking about the decision problem
instance of H and D where D halts when H says loops and
loops when H says halts.
It is quite obvious that you are not talking to them or anything
relevant to them. Consequently anything you conclude is not
relevant to them.
Every polar (yes/no) question having no correct
yes/no answer is an incorrect question.
True but not relevant to the Halting problem, which is about questions
that do have a correct answer that is "yes" or "no".
There are two hypothetical possibilities (a) and (b)
within my system of categorically exhaustive reasoning.
Hypothetical possibilities are not concrete machines.
The proof that halting is not Turing-computable is about concrete Turing
machines.
For decider H and input P
input P halts when H says loops
input P loops when H says halts
making this specific HP decision problem
instance unsatisfiable.
(a) H(P)-->HALTS --- Wrong Answer
(b) H(P)-->LOOPS --- Wrong Answer
You should use "if" instead of "when" as what H says does not depend
on when it says and consequently P either always halts or always loops.
Reading "when" as "if" the above is a proof that H is not a halt decider.
Quantify H universally over all Turing machine deciders and the proof
that no Turing machine is a halt decider is complete.
Here is the corrected version after much feedback.
void P()
{
  if (H(P))  // returns 1 for halts, 0 for loops
    HERE: goto HERE;
}
For the set of H/P pairs of
decider H and input P:
If H says halts then P loops
If H says loops then P halts
making H(P) always incorrect.
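The "always incorrect" claim about each H/P pair can be checked exhaustively, since H's answer is a single bit. In the sketch below, h_answer models what a particular fixed H returns about its own P; the helper names are hypothetical, not the thread's actual code.

```c
/* Returns 1 iff P halts, where P does the opposite of H's verdict
   (1 = halts, 0 = loops). */
static int p_halts_given(int h_answer)
{
    if (h_answer)   /* H said "halts" ... */
        return 0;   /* ... so P runs HERE: goto HERE; and never halts */
    return 1;       /* H said "loops", so P falls through and halts */
}

/* 1 iff H's verdict disagrees with P's actual behavior */
static int h_is_wrong(int h_answer)
{
    return p_halts_given(h_answer) != h_answer;
}
```

Both possible answers make H wrong about its own P; quantifying over all deciders then yields the standard uncomputability proof, as Mikko notes.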
That can be used as the core part of the proof that halting is
uncomputable. It is fairly easy to expand that to a conclusive
proof.
My purpose is to prove that the conventional
halting problem is defined to be unsolvable.
Depends on what exactly you mean by "to be unsolvable". If you
mean that the definition says that the problem is unsolvable then
that is false. If you mean that the definition is intended to define
an unsolvable problem then that is not provable without real world
knowledge that you haven't. If you mean that the unsolvability is
a consequence of the definition then that is provably true.
But your purpose is irrelevant to the fact that the halting problem
is well-posed and provably unsolvable.
An alternative definition does correctly determine
the halt status of the above counter-example input.
An alternative definition defines an alternative problem that is
not shown to have any practical or theoretical interest.
Halt deciders cannot report on non-inputs in the
same way that birthday cakes cannot land airliners,
both are merely logically impossible.
Birthday cakes at least exist. Whether they can land airliners
is irrelevant to their primary purpose.
No decider has psychic ability and that is
the only reason why the misconstrued halting
problem proof counter-example cannot be decided.
The important point is that halting is not decidable.
Only because they are asking the wrong question.
The why part is not that important.
If I ask you to correctly answer this question
to get a high paying job and you do not provide
a correct yes or no answer you don't get the
job then the fact that the question itself is
incorrect is most relevant.
What time is it (yes or no)?
Perhaps it can help to work out what is possible
instead but there are enough other ways to find out such possibilities.
<MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
If simulating halt decider H correctly simulates its
input D until *H correctly determines that its simulated D*
*would never stop running unless aborted* then
On 10/14/2022 7:44 PM, Ben Bacarisse wrote:
I don't think that is the shell game. PO really /has/ an H
(it's trivial to do for this one case) that correctly determines
that P(P) *would* never stop running *unless* aborted.
HHH correctly determines that its simulated DD
would never stop running unless aborted
On 10/7/2025 10:17 AM, joes wrote:
Am Tue, 07 Oct 2025 09:50:43 -0500 schrieb olcott:
On 10/7/2025 4:18 AM, Mikko wrote:
The important point is that halting is not decidable.
Only because they are asking the wrong question.
Machines halt or not. Nothing wrong about that.
As I have proven back in 2004 the halting problem
is simply the Liar Paradox disguised.
On 2025-10-07 14:50:43 +0000, olcott said:
On 10/7/2025 4:18 AM, Mikko wrote:
On 2025-10-07 00:35:42 +0000, olcott said:
On 10/6/2025 5:01 AM, Mikko wrote:
On 2025-10-05 14:33:37 +0000, olcott said:
On 10/5/2025 4:40 AM, Mikko wrote:
On 2025-10-05 03:50:44 +0000, olcott said:
On 10/4/2025 3:07 AM, Mikko wrote:
On 2025-10-03 14:37:12 +0000, olcott said:
On 10/3/2025 4:14 AM, Mikko wrote:
On 2025-10-01 13:27:16 +0000, olcott said:
On 10/1/2025 3:38 AM, Mikko wrote:
On 2025-09-30 18:25:07 +0000, olcott said:
On 9/30/2025 5:35 AM, Mikko wrote:
On 2025-09-29 15:16:33 +0000, olcott said:
That is the decision problem. I am not talking about the
decision problem. I am not talking about the decision problem
instance of H and D where D halts when H says loops and
loops when H says halts.
It is quite obvious that you are not talking to them or anything
relevant to them. Consequently anything you conclude is not
relevant to them.
Every polar (yes/no) question having no correct
yes/no answer is an incorrect question.
True but not relevant to the Halting problem, which is about questions
that do have a correct answer that is "yes" or "no".
There are two hypothetical possibilities (a) and (b)
within my system of categorically exhaustive reasoning.
Hypothetical possibilities are not concrete machines.
The proof that halting is not Turing-computable is about concrete Turing
machines.
For decider H and input P
input P halts when H says loops
input P loops when H says halts
making this specific HP decision problem
instance unsatisfiable.
(a) H(P)-->HALTS --- Wrong Answer
(b) H(P)-->LOOPS --- Wrong Answer
You should use "if" instead of "when" as what H says does not depend
on when it says and consequently P either always halts or always loops.
Reading "when" as "if" the above is a proof that H is not a halt decider.
Quantify H universally over all Turing machine deciders and the proof
that no Turing machine is a halt decider is complete.
Here is the corrected version after much feedback.
void P()
{
  if (H(P))  // returns 1 for halts, 0 for loops
    HERE: goto HERE;
}
For the set of H/P pairs of
decider H and input P:
If H says halts then P loops
If H says loops then P halts
making H(P) always incorrect.
That can be used as the core part of the proof that halting is
uncomputable. It is fairly easy to expand that to a conclusive
proof.
My purpose is to prove that the conventional
halting problem is defined to be unsolvable.
Depends on what exactly you mean by "to be unsolvable". If you
mean that the definition says that the problem is unsolvable then
that is false. If you mean that the definition is intended to define
an unsolvable problem then that is not provable without real world
knowledge that you haven't. If you mean that the unsolvability is
a consequence of the definition then that is provably true.
But your purpose is irrelevant to the fact that the halting problem
is well-posed and provably unsolvable.
An alternative definition does correctly determine
the halt status of the above counter-example input.
An alternative definition defines an alternative problem that is
not shown to have any practical or theoretical interest.
Halt deciders cannot report on non-inputs in the
same way that birthday cakes cannot land airliners,
both are merely logically impossible.
Birthday cakes at least exist. Whether they can land airliners
is irrelevant to their primary purpose.
No decider has psychic ability and that is
the only reason why the misconstrued halting
problem proof counter-example cannot be decided.
The important point is that halting is not decidable.
Only because they are asking the wrong question.
The why part is not that important.
If I ask you to correctly answer this question
to get a high paying job and you do not provide
a correct yes or no answer you don't get the
job then the fact that the question itself is
incorrect is most relevant.
No, it is not. What matters is whether asking that question is
permitted by the empoyer's policy or law or other applicable
norms, and even then it may be unimportant. I might ask how it
relates to the job we are talking about and recosider whether
I want that job.
What time is it (yes or no)?
The correct answer is "neither". If you want something more specific
you should ask a differently formulated question.
Perhaps it can help to work out what is possible
instead but there are enough other ways to find out such possibilities.
<MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
    If simulating halt decider H correctly simulates its
    input D until *H correctly determines that its simulated D*
    *would never stop running unless aborted* then
Professor Sipser never agreed that any of your examples satisfy
the above stated conditions. In particular, your examples of D
typically do halt when simulated by a simulator other than H.
On 10/14/2022 7:44 PM, Ben Bacarisse wrote:
I don't think that is the shell game. PO really /has/ an H
(it's trivial to do for this one case) that correctly determines
that P(P) *would* never stop running *unless* aborted.
HHH correctly determines that its simulated DD
would never stop running unless aborted
Not correctly, as the simulated DD stops without being aborted
when simulated by HHH1.
On 2025-10-07 15:30:11 +0000, olcott said:
On 10/7/2025 10:17 AM, joes wrote:
Am Tue, 07 Oct 2025 09:50:43 -0500 schrieb olcott:
On 10/7/2025 4:18 AM, Mikko wrote:
The important point is that halting is not decidable.
Only because they are asking the wrong question.
Machines halt or not. Nothing wrong about that.
As I have proven back in 2004 the halting problem
is simply the Liar Paradox disguised.
Can't be, as the Liar Paradox is not a problem but a paradox.
The halting problem is a problem, not a paradox. There would
be a paradox if there were a solution to the halting problem
but there isn't any.
On 10/8/2025 5:30 AM, Mikko wrote:
On 2025-10-07 14:50:43 +0000, olcott said:
On 10/7/2025 4:18 AM, Mikko wrote:
On 2025-10-07 00:35:42 +0000, olcott said:
On 10/6/2025 5:01 AM, Mikko wrote:
On 2025-10-05 14:33:37 +0000, olcott said:
On 10/5/2025 4:40 AM, Mikko wrote:
On 2025-10-05 03:50:44 +0000, olcott said:
On 10/4/2025 3:07 AM, Mikko wrote:
On 2025-10-03 14:37:12 +0000, olcott said:
On 10/3/2025 4:14 AM, Mikko wrote:
On 2025-10-01 13:27:16 +0000, olcott said:
On 10/1/2025 3:38 AM, Mikko wrote:
On 2025-09-30 18:25:07 +0000, olcott said:
On 9/30/2025 5:35 AM, Mikko wrote:
On 2025-09-29 15:16:33 +0000, olcott said:
That is the decision problem. I am not talking about the
decision problem. I am not talking about the decision problem
instance of H and D where D halts when H says loops and
loops when H says halts.
It is quite obvious that you are not talking to them or anything
relevant to them. Consequently anything you conclude is not
relevant to them.
Every polar (yes/no) question having no correct
yes/no answer is an incorrect question.
True but not relevant to the Halting problem, which is about questions
that do have a correct answer that is "yes" or "no".
There are two hypothetical possibilities (a) and (b)
within my system of categorically exhaustive reasoning.
Hypothetical possibilities are not concrete machines.
The proof that halting is not Turing-computable is about concrete Turing
machines.
For decider H and input P
input P halts when H says loops
input P loops when H says halts
making this specific HP decision problem
instance unsatisfiable.
(a) H(P)-->HALTS --- Wrong Answer
(b) H(P)-->LOOPS --- Wrong Answer
You should use "if" instead of "when" as what H says does not depend
on when it says and consequently P either always halts or always loops.
Reading "when" as "if" the above is a proof that H is not a halt decider.
Quantify H universally over all Turing machine deciders and the proof
that no Turing machine is a halt decider is complete.
Here is the corrected version after much feedback.
void P()
{
    if (H(P))  // returns 1 for halts, 0 for loops
        HERE: goto HERE;
}
For the set of H/P pairs of
decider H and input P:
If H says halts then P loops
If H says loops then P halts
making H(P) always incorrect.
That can be used as the core part of the proof that halting is
uncomputable. It is fairly easy to expand that to a conclusive
proof.
My purpose is to prove that the conventional
halting problem is defined to be unsolvable.
Depends on what exactly you mean by "to be unsolvable". If you
mean that the definition says that the problem is unsolvable then
that is false. If you mean that the definition is intended to define
an unsolvable problem then that is not provable without real world
knowledge that you haven't. If you mean that the unsolvability is
a consequence of the definition then that is provably true.
But your purpose is irrelevant to the fact that the halting problem
is well-posed and provably unsolvable.
An alternative definition does correctly determine
the halt status of the above counter-example input.
An alternative definition defines an alternative problem that is
not shown to have any practical or theoretical interest.
Halt deciders cannot report on non-inputs in the
same way that birthday cakes cannot land airliners,
both are merely logically impossible.
Birthday cakes at least exist. Whether they can land airliners
is irrelevant to their primary purpose.
No decider has psychic ability and that is
the only reason why the misconstrued halting
problem proof counter-example cannot be decided.
The important point is that halting is not decidable.
Only because they are asking the wrong question.
The why part is not that important.
If I ask you to correctly answer this question
to get a high paying job and you do not provide
a correct yes or no answer you don't get the
job then the fact that the question itself is
incorrect is most relevant.
No, it is not. What matters is whether asking that question is
permitted by the employer's policy or law or other applicable
norms, and even then it may be unimportant. I might ask how it
relates to the job we are talking about and reconsider whether
I want that job.
What time is it (yes or no)?
The correct answer is "neither". If you want something more specific
you should ask a differently formulated question.
Neither is the correct answer yet not allowed.
Perhaps it can help to work out what is possible
instead but there are enough other ways to find out such possibilities.
<MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
    If simulating halt decider H correctly simulates its
    input D until *H correctly determines that its simulated D*
    *would never stop running unless aborted* then
Professor Sipser never agreed that any of your examples satisfy
the above stated conditions. In particular, your examples of D
typically do halt when simulated by a simulator other than H.
There is no other meaning for my words than what
they already directly say.
On 2025-10-09 04:06:08 +0000, olcott said:
<MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
    If simulating halt decider H correctly simulates its
    input D until *H correctly determines that its simulated D*
    *would never stop running unless aborted* then
Professor Sipser never agreed that any of your examples satisfy
the above stated conditions. In particular, your examples of D
typically do halt when simulated by a simulator other than H.
There is no other meaning for my words than what
they already directly say.
Maybe he assumed that "its simulated D" means D, as it means in the
Common Language, and therefore its simulated D halts if and only if
D halts, and consequently it never happens that D halts and H
correctly determines that "D would never halt unless aborted".
I don't think that is the shell game. PO really /has/ an H
(it's trivial to do for this one case) that correctly determines
that P(P) *would* never stop running *unless* aborted.
Then his answer is correct for his interpretation of your words
but mistaken for your interpretation of the same words.
On 10/8/2025 5:34 AM, Mikko wrote:
On 2025-10-07 15:30:11 +0000, olcott said:
On 10/7/2025 10:17 AM, joes wrote:
Am Tue, 07 Oct 2025 09:50:43 -0500 schrieb olcott:
On 10/7/2025 4:18 AM, Mikko wrote:
The important point is that halting is not decidable.
Only because they are asking the wrong question.
Machines halt or not. Nothing wrong about that.
As I have proven back in 2004 the halting problem is simply the Liar
Paradox disguised.
Can't be, as the Liar Paradox is not a problem but a paradox.
The halting problem is a problem, not a paradox. There would be a
paradox if there were a solution to the halting problem but there isn't
any.
Every decision problem decider/input pair such that self-contradiction
prevents a correct answer is bogus.
Am Wed, 08 Oct 2025 23:08:10 -0500 schrieb olcott:
Every decision problem decider/input pair such that self-contradiction
prevents a correct answer is bogus.
Cool, it's bogus to ask whether a machine halts.
On 10/9/2025 2:47 PM, joes wrote:
Am Wed, 08 Oct 2025 23:08:10 -0500 schrieb olcott:
Every decision problem decider/input pair such that self-contradiction
prevents a correct answer is bogus.
Cool, it's bogus to ask whether a machine halts.
It is only the self-contradictory decider/input pair that is bogus.
Am Thu, 09 Oct 2025 15:27:26 -0500 schrieb olcott:
On 10/9/2025 2:47 PM, joes wrote:
Cool, it's bogus to ask whether a machine halts.
It is only the self-contradictory decider/input pair that is bogus.
Which pair? There are multiple so-called deciders. Or do you mean the
template P?
On 10/9/2025 3:31 PM, joes wrote:
Am Thu, 09 Oct 2025 15:27:26 -0500 schrieb olcott:
It is only the self-contradictory decider/input pair that is bogus.
Which pair? There are multiple so-called deciders. Or do you mean the
template P?
I have only said this a few dozen times now.
For the set of H/P pairs of decider H and input P:
If H says halts then P loops. If H says loops then P halts, making each
H(P) always incorrect.
Am Thu, 09 Oct 2025 16:30:51 -0500 schrieb olcott:
On 10/9/2025 3:31 PM, joes wrote:
Which pair? There are multiple so-called deciders. Or do you mean the
template P?
I have only said this a few dozen times now.
For the set of H/P pairs of decider H and input P:
If H says halts then P loops. If H says loops then P halts, making each
H(P) always incorrect.
Ok, the infinite set given by the template P, not a single pair.
What is bogus about P? It is constructible and every instantiation
with a given H has a definite halting status, as every H can only
return one value.
On 10/9/2025 2:47 PM, joes wrote:
Am Wed, 08 Oct 2025 23:08:10 -0500 schrieb olcott:
Every decision problem decider/input pair such that self-contradiction
prevents a correct answer is bogus.
Cool, it's bogus to ask whether a machine halts.
It is only the self-contradictory decider/input
pair that is bogus.
On 2025-10-09, olcott <polcott333@gmail.com> wrote:
It is only the self-contradictory decider/input
pair that is bogus.
Cool, so how do you decide that?
On 10/9/2025 7:40 PM, Kaz Kylheku wrote:
On 2025-10-09, olcott <polcott333@gmail.com> wrote:
It is only the self-contradictory decider/input
pair that is bogus.
Cool, so how do you decide that?
Did you understand the C program that maps the bogus
halting problem H/P pairs to the Liar Paradox or do
you not understand C?
printf("Does this program Halt?\n");
On 2025-10-10, olcott <polcott333@gmail.com> wrote:
Did you understand the C program that maps the bogus
halting problem H/P pairs to the Liar Paradox or do
you not understand C?
Yes.
printf("Does this program Halt?\n");
Which program?
On 10/9/2025 4:47 PM, joes wrote:
Ok, the infinite set given by the template P, not a single pair. What
is bogus about P? It is constructible and every instantiation with a
given H has a definite halting status, as every H can only return one
value.
If you totally understand C I can show how it precisely maps to the
Liar Paradox in a program that I wrote today and designed back in 2004.
Am Thu, 09 Oct 2025 18:38:05 -0500 schrieb olcott:
If you totally understand C I can show how it precisely maps to the
Liar Paradox in a program that I wrote today and designed back in 2004.
I have seen it in your other posts. Both possible instantiations of P
have a single halting status.
On 10/10/2025 2:23 AM, joes wrote:
I have seen it in your other posts. Both possible instantiations of P
have a single halting status.
That is the same as saying that the Liar Paradox has a single correct
truth value. Compile the program and run it and see if you can give it
a correct Y or N answer.
Am Fri, 10 Oct 2025 10:04:53 -0500 schrieb olcott:
That is the same as saying that the Liar Paradox has a single correct
truth value. Compile the program and run it and see if you can give it
a correct Y or N answer.
I am not a decider. A program can only give one answer. A version of
that program that gives the other answer is a different program with a
different diagonal input. A truth predicate could only assign one value
to the Liar sentence.
On 2025-10-09 04:08:10 +0000, olcott said:
Every decision problem decider/input pair
such that self-contradiction prevents a
correct answer is bogus.
Is it bogus to ask how to construct with a straightedge and
a compass a square that has the same area as a given circle?
That is the same as saying that there are two Carols
in two parallel universes, one that says yes and the
other that says no.
A deliberate attempt to simply ignore the faulty
structure of the actual problem.
On 2025-10-10, olcott <polcott333@gmail.com> wrote:
That is the same as saying that there are two Carol's
in two parallel universes, one that says yes and the
other that says no.
A deliberate attempt to simply ignore the faulty
structure of the actual problem.
You are trying your hardest to ignore the ways in which
that is not similar to the halting problem.
By doing that, you have falsely concluded that it exactly
coincides.
If you ignore all that is different, what you are left with is
whatever is the same, and if that set is nonempty you can conclude
"it's exactly the same".
There is something in common between a bicycle and a fish; so if you
ignore all else, ... bling! ... they are equivalent and interchangeable.
You are simply a pseudointellectual, nothing more.
You use argumentation that would embarrass the dumbest teaching
assistant from a Liberal Arts college.
I give you until Christmas to get off this Caroling stuff.
On 10/10/2025 12:10 PM, Kaz Kylheku wrote:
On 2025-10-10, olcott <polcott333@gmail.com> wrote:
That is the same as saying that there are two Carol's
in two parallel universes, one that says yes and the
other that says no.
A deliberate attempt to simply ignore the faulty
structure of the actual problem.
You are trying yur hardest to ignore the ways in which
that is not similar to the halting problem.
By doing that, you have falsely concluded that it exactly
coincides.
The halting problem requires halt deciders to do
something that Turing machine deciders cannot do.
Deciders can only compute the mapping from their
inputs to the behavior that this input actually specifies.
The actual input to HHH(DD) specifies that it calls
HHH(DD) in recursive simulation that cannot possibly
stop running unless aborted.
When the halting problem does this differently
then it is the HP itself that is incorrect.
On 2025-10-10, olcott <polcott333@gmail.com> wrote:
On 10/10/2025 12:10 PM, Kaz Kylheku wrote:
On 2025-10-10, olcott <polcott333@gmail.com> wrote:
That is the same as saying that there are two Carol's
in two parallel universes, one that says yes and the
other that says no.
A deliberate attempt to simply ignore the faulty
structure of the actual problem.
You are trying yur hardest to ignore the ways in which
that is not similar to the halting problem.
By doing that, you have falsely concluded that it exactly
coincides.
The halting problem requires halt deciders to do
something that Turing machine deciders cannot do.
Right, except when Peter Olcott, Certified Genius, is at the keyboard, writing C++, but go on ...
Deciders can only compute the mapping from their
inputs to the behavior that this input actually specifies.
But that's not why they cannot do what is described.
The actual input to HHH(DD) specifies that it calls
HHH(DD) in recursive simulation that cannot possibly
step running unless aborted.
When the halting problem does this differently
then it is the HP itself that is incorrect.
Nothing in the halting problem specification states that halting
deciders must naively follow a simulation of their input.
In fact, it is bleeping obvious that such a function is not a total
decider: it will correctly decide precisely those programs which
terminate. And for all the rest, those which do not terminate, the
function will itself fail to terminate, and thus not decide.
A supposed decider which just simulates the input is no different than
the operator running the input and waiting for it to terminate; the
simulator provides no better answer than just the original computer
which the program targets. It might even be /slower/ in deciding
terminating programs. Whereas the native machine could tell you that
some program halts in one hour, the simulator might take ten hours.
And for non-terminating programs, neither one tells you.
The halting problem concerns itself with /predicting/ whether an input
will terminate without just waiting for it, which could be forever.
Thus simulation without analysis is a complete non-answer to halting.
And yes, whenever we construct a diagonal case against a useless,
blindly simulating decider, we get non-termination.
Congratulations, Genius, for figuring that out all by yourself!
And no, you cannot do the following:
- Propose a different simulating decider which analyzes and aborts.
- Construct a diagonal test case against /that/ decider.
- Claim that when this new decider is deciding its own diagonal
test case, and rejects it as non-halting it is actually deciding
the original diagonal test case against the original decider.
A decider always remarks about the input it is given, not some other
input from which that input was derived by editing.
On 10/10/2025 2:54 PM, Kaz Kylheku wrote:
On 2025-10-10, olcott <polcott333@gmail.com> wrote:
The halting problem requires halt deciders to do
something that Turing machine deciders cannot do.
When the halting problem does this differently
then it is the HP itself that is incorrect.
Nothing in the halting problem specification states that halting
deciders must naively follow a simulation of their input.
It is not naive simulation. It is the correct simulation
according to the semantics of the language. Only a correct
simulation by the simulating halt decider correctly measures
the actual behavior that the input actually specifies.
That some other non-input has different behavior is
irrelevant.
In facst, it is bleeping obvious that such a function is not a total
decider: it will correctly decide precisely those programs which
terminate. And for all the rest, those which do not terminate, the
function will itself fail to terminate, and thus not decide.
A supposed decider which just simulates the input is no different than
the operator running the input and waiting for it to terminate; the
simulator provides no better answer than just the original computer
which the program targets. It might even be /slower/ in deciding
terminating programs. Whereas the native machine could tell you that
osme program halts in one hour, the simulator might take ten hours.
And for non-terminating programs, neither one tells you.
The halting problem concerns itself with /predicting/ whether an input
will terminate without just waiting for it, which could be forever.
Thus simulation without analysis is a complete non-answer to halting.
And yes, whenever we construct a diagonal case against a useless,
blindly simulating decider, we get non-termination.
Congratulations, Genius, for figuring that out all by yourself!
And no, you cannot do the following:
- Propose a different simulating decider which analyzes and aborts.
- Construct a diagonal test case against /that/ decider.
- Claim that when this new decider is deciding its own diagonal
  test case, and rejects it as non-halting, it is actually deciding
  the original diagonal test case against the original decider.
A decider always remarks about the input it is given, not some other
input from which that input was derived by editing.
On 10/10/2025 4:23 PM, olcott wrote:
On 10/10/2025 2:54 PM, Kaz Kylheku wrote:
On 2025-10-10, olcott <polcott333@gmail.com> wrote:
On 10/10/2025 12:10 PM, Kaz Kylheku wrote:
On 2025-10-10, olcott <polcott333@gmail.com> wrote:
That is the same as saying that there are two Carols
in two parallel universes, one that says yes and the
other that says no.
A deliberate attempt to simply ignore the faulty
structure of the actual problem.
You are trying your hardest to ignore the ways in which
that is not similar to the halting problem.
By doing that, you have falsely concluded that it exactly
coincides.
The halting problem requires halt deciders to do
something that Turing machine deciders cannot do.
Right, except when Peter Olcott, Certified Genius, is at the keyboard,
writing C++, but go on ...
Deciders can only compute the mapping from their
inputs to the behavior that this input actually specifies.
But that's not why they cannot do what is described.
The actual input to HHH(DD) specifies that it calls
HHH(DD) in recursive simulation that cannot possibly
stop running unless aborted.
When the halting problem does this differently
then it is the HP itself that is incorrect.
Nothing in the halting problem specification states that halting
deciders must naively follow a simulation of their input.
It is not naive simulation. It is the correct simulation
according to the semantics of the language. Only a correct
simulation by the simulating halt decider correctly measures
the actual behavior that the input actually specifies.
And since HHH aborts the simulation of DD, it doesn't do a correct simulation and therefore there is no basis for a decision.
The actual correct basis is what the machine described by the input will
do, or equivalently when the input is simulated by UTM.
That some other non-input has different behavior is
irrelevant.
In fact, it is bleeping obvious that such a function is not a total
decider: it will correctly decide precisely those programs which
terminate. And for all the rest, those which do not terminate, the
function will itself fail to terminate, and thus not decide.
A supposed decider which just simulates the input is no different than
the operator running the input and waiting for it to terminate; the
simulator provides no better answer than just the original computer
which the program targets. It might even be /slower/ in deciding
terminating programs. Whereas the native machine could tell you that
some program halts in one hour, the simulator might take ten hours.
And for non-terminating programs, neither one tells you.
The halting problem concerns itself with /predicting/ whether an input
will terminate without just waiting for it, which could be forever.
Thus simulation without analysis is a complete non-answer to halting.
And yes, whenever we construct a diagonal case against a useless,
blindly simulating decider, we get non-termination.
Congratulations, Genius, for figuring that out all by yourself!
And no, you cannot do the following:
- Propose a different simulating decider which analyzes and aborts.
- Construct a diagonal test case against /that/ decider.
- Claim that when this new decider is deciding its own diagonal
  test case, and rejects it as non-halting, it is actually deciding
  the original diagonal test case against the original decider.
A decider always remarks about the input it is given, not some other
input from which that input was derived by editing.
On 10/10/2025 3:28 PM, dbush wrote:
On 10/10/2025 4:23 PM, olcott wrote:
On 10/10/2025 2:54 PM, Kaz Kylheku wrote:
On 2025-10-10, olcott <polcott333@gmail.com> wrote:
On 10/10/2025 12:10 PM, Kaz Kylheku wrote:
On 2025-10-10, olcott <polcott333@gmail.com> wrote:
That is the same as saying that there are two Carols
in two parallel universes, one that says yes and the
other that says no.
A deliberate attempt to simply ignore the faulty
structure of the actual problem.
You are trying your hardest to ignore the ways in which
that is not similar to the halting problem.
By doing that, you have falsely concluded that it exactly
coincides.
The halting problem requires halt deciders to do
something that Turing machine deciders cannot do.
Right, except when Peter Olcott, Certified Genius, is at the keyboard,
writing C++, but go on ...
Deciders can only compute the mapping from their
inputs to the behavior that this input actually specifies.
But that's not why they cannot do what is described.
The actual input to HHH(DD) specifies that it calls
HHH(DD) in recursive simulation that cannot possibly
stop running unless aborted.
When the halting problem does this differently
then it is the HP itself that is incorrect.
Nothing in the halting problem specification states that halting
deciders must naively follow a simulation of their input.
It is not naive simulation. It is the correct simulation
according to the semantics of the language. Only a correct
simulation by the simulating halt decider correctly measures
the actual behavior that the input actually specifies.
And since HHH aborts the simulation of DD, it doesn't do a correct
simulation and therefore there is no basis for a decision.
That is a foolishly stupid thing to say within
the context that unless HHH(DD) does abort the
simulation of its input itself would never halt.
The actual correct basis is what the machine described by the input
will do, or equivalently when the input is simulated by UTM.
That some other non-input has different behavior is
irrelevant.
In fact, it is bleeping obvious that such a function is not a total
decider: it will correctly decide precisely those programs which
terminate. And for all the rest, those which do not terminate, the
function will itself fail to terminate, and thus not decide.
A supposed decider which just simulates the input is no different than
the operator running the input and waiting for it to terminate; the
simulator provides no better answer than just the original computer
which the program targets. It might even be /slower/ in deciding
terminating programs. Whereas the native machine could tell you that
some program halts in one hour, the simulator might take ten hours.
And for non-terminating programs, neither one tells you.
The halting problem concerns itself with /predicting/ whether an input
will terminate without just waiting for it, which could be forever.
Thus simulation without analysis is a complete non-answer to halting.
And yes, whenever we construct a diagonal case against a useless,
blindly simulating decider, we get non-termination.
Congratulations, Genius, for figuring that out all by yourself!
And no, you cannot do the following:
- Propose a different simulating decider which analyzes and aborts.
- Construct a diagonal test case against /that/ decider.
- Claim that when this new decider is deciding its own diagonal
  test case, and rejects it as non-halting, it is actually deciding
  the original diagonal test case against the original decider.
A decider always remarks about the input it is given, not some other
input from which that input was derived by editing.
On 10/10/2025 4:34 PM, olcott wrote:
On 10/10/2025 3:28 PM, dbush wrote:
On 10/10/2025 4:23 PM, olcott wrote:
On 10/10/2025 2:54 PM, Kaz Kylheku wrote:
On 2025-10-10, olcott <polcott333@gmail.com> wrote:
On 10/10/2025 12:10 PM, Kaz Kylheku wrote:
On 2025-10-10, olcott <polcott333@gmail.com> wrote:
That is the same as saying that there are two Carols
in two parallel universes, one that says yes and the
other that says no.
A deliberate attempt to simply ignore the faulty
structure of the actual problem.
You are trying your hardest to ignore the ways in which
that is not similar to the halting problem.
By doing that, you have falsely concluded that it exactly
coincides.
The halting problem requires halt deciders to do
something that Turing machine deciders cannot do.
Right, except when Peter Olcott, Certified Genius, is at the keyboard,
writing C++, but go on ...
Deciders can only compute the mapping from their
inputs to the behavior that this input actually specifies.
But that's not why they cannot do what is described.
The actual input to HHH(DD) specifies that it calls
HHH(DD) in recursive simulation that cannot possibly
step running unless aborted.
When the halting problem does this differently
then it is the HP itself that is incorrect.
Nothing in the halting problem specification states that halting
deciders must naively follow a simulation of their input.
It is not naive simulation. It is the correct simulation
according to the semantics of the language. Only a correct
simulation by the simulating halt decider correctly measures
the actual behavior that the input actually specifies.
And since HHH aborts the simulation of DD, it doesn't do a correct
simulation and therefore there is no basis for a decision.
That is a foolishly stupid thing to say within
the context that unless HHH(DD) does abort the
simulation of its input itself would never halt.
Changing the subject from DD and HHH to DDn and HHHn is the dishonest
dodge of the strawman deception.
On 10/10/2025 3:36 PM, dbush wrote:
On 10/10/2025 4:34 PM, olcott wrote:
On 10/10/2025 3:28 PM, dbush wrote:
On 10/10/2025 4:23 PM, olcott wrote:
It is not naive simulation. It is the correct simulation
according to the semantics of the language. Only a correct
simulation by the simulating halt decider correctly measures
the actual behavior that the input actually specifies.
And since HHH aborts the simulation of DD, it doesn't do a correct
simulation and therefore there is no basis for a decision.
That is a foolishly stupid thing to say within
the context that unless HHH(DD) does abort the
simulation of its input itself would never halt.
Changing the subject from DD and HHH to DDn and HHHn is the dishonest
dodge of the strawman deception.
Unless the one and only directly executed HHH(DD)
aborts the simulation of its one and only actual
input this one and only directly executed HHH(DD)
would never stop running.
On Fri, 2025-09-26 at 14:09 -0700, Chris M. Thomasson wrote:
On 9/25/2025 9:50 PM, olcott wrote:
On 9/25/2025 11:34 PM, Kaz Kylheku wrote:
On 2025-09-26, olcott <polcott333@gmail.com> wrote:
On 9/25/2025 8:54 PM, Kaz Kylheku wrote:
The Halting Theorem just says that because, regrettably, the problem
involves self-reference, it is not computable.
In the same way that a CAD system cannot represent a
single object of a square circle (a round thing that
is not round because it has four equal length sides).
Then why don't you object to that, too?
"It is just rote-learned ignorance that says that a CAD system cannot
represent a square circle.-a Behold my x86_UTM_CAD.exe, with its
SquareCircle.o plugin ..."
It is incorrect to include logical impossibilities
as undecidable instances of decision problems.
I.e. it is incorrect to include the truth that a CAD system cannot
represent a square circle. See, here we go ...
You are playing games. I am dead serious
because humans
Huh? Are not a human? ;^o
No, olcott's brain seems infected by zombie-fungus, amen.
He now thinks he is god.
https://www.nationalgeographic.com/animals/article/cordyceps-zombie-fungus-takes-over-ants
do not understand how truth
works, we are seeing the rise of the fourth
Reich and human civilization has little
time left before climate kills off most
of us.
On 10/10/2025 5:34 PM, dbush wrote:
On 10/10/2025 6:14 PM, olcott wrote:
On 10/10/2025 3:36 PM, dbush wrote:
On 10/10/2025 4:34 PM, olcott wrote:
On 10/10/2025 3:28 PM, dbush wrote:
On 10/10/2025 4:23 PM, olcott wrote:
It is not naive simulation. It is the correct simulation
according to the semantics of the language. Only a correct
simulation by the simulating halt decider correctly measures
the actual behavior that the input actually specifies.
And since HHH aborts the simulation of DD, it doesn't do a correct
simulation and therefore there is no basis for a decision.
That is a foolishly stupid thing to say within
the context that unless HHH(DD) does abort the
simulation of its input itself would never halt.
Changing the subject from DD and HHH to DDn and HHHn is the
dishonest dodge of the strawman deception.
Unless the one and only directly executed HHH(DD)
i.e. the fixed immutable code of the function DD, the fixed immutable
code of the function HHH, and the fixed immutable code of everything
HHH calls down to the OS level
aborts the simulation of its one and only actual
input this one and only directly executed HHH(DD)
would never stop running.
In other words, when you change HHH and DD to HHHn and DDn, HHHn(DDn)
never stops running.
Changing the input and reporting on that non-input is not allowed.
<MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
    If simulating halt decider H correctly simulates its
    input D until H correctly determines that its simulated D
    would never stop running unless aborted then
This part of the criteria has been validated as only meaning
exactly one thing. This exactly one thing that it means seems
impossibly over your head.
Claude AI figured that the next part of the words could
mean exactly two things; one of them is incorrect and my
meaning for it is correct.
On 10/10/2025 6:14 PM, olcott wrote:
On 10/10/2025 3:36 PM, dbush wrote:
On 10/10/2025 4:34 PM, olcott wrote:
On 10/10/2025 3:28 PM, dbush wrote:
On 10/10/2025 4:23 PM, olcott wrote:
It is not naive simulation. It is the correct simulation
according to the semantics of the language. Only a correct
simulation by the simulating halt decider correctly measures
the actual behavior that the input actually specifies.
And since HHH aborts the simulation of DD, it doesn't do a correct
simulation and therefore there is no basis for a decision.
That is a foolishly stupid thing to say within
the context that unless HHH(DD) does abort the
simulation of its input itself would never halt.
Changing the subject from DD and HHH to DDn and HHHn is the dishonest
dodge of the strawman deception.
Unless the one and only directly executed HHH(DD)
i.e. the fixed immutable code of the function DD, the fixed immutable
code of the function HHH, and the fixed immutable code of everything HHH calls down to the OS level
aborts the simulation of its one and only actual
input this one and only directly executed HHH(DD)
would never stop running.
In other words, when you change HHH and DD to HHHn and DDn, HHHn(DDn)
never stops running.
Changing the input and reporting on that non-input is not allowed.
On 10/10/2025 2:54 PM, Kaz Kylheku wrote:
Nothing in the halting problem specification states that halting
deciders must naively follow a simulation of their input.
It is not naive simulation.
It is the correct simulation
according to the semantics of the language.
Only a correct
simulation by the simulating halt decider correctly measures
the actual behavior that the input actually specifies.
On 10/10/25 5:05 PM, olcott wrote:
On 10/10/2025 5:34 PM, dbush wrote:
On 10/10/2025 6:14 PM, olcott wrote:
On 10/10/2025 3:36 PM, dbush wrote:
On 10/10/2025 4:34 PM, olcott wrote:
On 10/10/2025 3:28 PM, dbush wrote:
On 10/10/2025 4:23 PM, olcott wrote:
It is not naive simulation. It is the correct simulation
according to the semantics of the language. Only a correct
simulation by the simulating halt decider correctly measures
the actual behavior that the input actually specifies.
And since HHH aborts the simulation of DD, it doesn't do a
correct simulation and therefore there is no basis for a decision.
That is a foolishly stupid thing to say within
the context that unless HHH(DD) does abort the
simulation of its input itself would never halt.
Changing the subject from DD and HHH to DDn and HHHn is the
dishonest dodge of the strawman deception.
Unless the one and only directly executed HHH(DD)
i.e. the fixed immutable code of the function DD, the fixed immutable
code of the function HHH, and the fixed immutable code of everything
HHH calls down to the OS level
aborts the simulation of its one and only actual
input this one and only directly executed HHH(DD)
would never stop running.
In other words, when you change HHH and DD to HHHn and DDn, HHHn(DDn)
never stops running.
Changing the input and reporting on that non-input is not allowed.
<MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
    If simulating halt decider H correctly simulates its
    input D until H correctly determines that its simulated D
    would never stop running unless aborted then
This part of the criteria has been validated as only meaning
exactly one thing. This exactly one thing that it means seems
impossibly over your head.
Claude AI figured that the next part of the words could
mean exactly two things; one of them is incorrect and my
meaning for it is correct.
lol, can't you make claude AI tell you whats wrong about your theory too???
someone wanna do that and shove it in his face instead of wasting hours writing paragraphs back???
useless fucks the lot of u
On 10/10/2025 8:05 PM, olcott wrote:
On 10/10/2025 5:34 PM, dbush wrote:
On 10/10/2025 6:14 PM, olcott wrote:
On 10/10/2025 3:36 PM, dbush wrote:
On 10/10/2025 4:34 PM, olcott wrote:
On 10/10/2025 3:28 PM, dbush wrote:
On 10/10/2025 4:23 PM, olcott wrote:
It is not naive simulation. It is the correct simulation
according to the semantics of the language. Only a correct
simulation by the simulating halt decider correctly measures
the actual behavior that the input actually specifies.
And since HHH aborts the simulation of DD, it doesn't do a
correct simulation and therefore there is no basis for a decision.
That is a foolishly stupid thing to say within
the context that unless HHH(DD) does abort the
simulation of its input itself would never halt.
Changing the subject from DD and HHH to DDn and HHHn is the
dishonest dodge of the strawman deception.
Unless the one and only directly executed HHH(DD)
i.e. the fixed immutable code of the function DD, the fixed immutable
code of the function HHH, and the fixed immutable code of everything
HHH calls down to the OS level
aborts the simulation of its one and only actual
input this one and only directly executed HHH(DD)
would never stop running.
In other words, when you change HHH and DD to HHHn and DDn, HHHn(DDn)
never stops running.
Changing the input and reporting on that non-input is not allowed.
<MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
    If simulating halt decider H correctly simulates its
    input D until H correctly determines that its simulated D
    would never stop running unless aborted then
This part of the criteria has been validated as only meaning
exactly one thing. This exactly one thing that it means seems
impossibly over your head.
Claude AI figured that the next part of the words could
mean exactly two things; one of them is incorrect and my
meaning for it is correct.
None of what you said refutes what it replies to.
This constitutes your admission that HHH(DD) decides on a non-input.
On 10/10/2025 7:13 PM, dbush wrote:
On 10/10/2025 8:05 PM, olcott wrote:
On 10/10/2025 5:34 PM, dbush wrote:
On 10/10/2025 6:14 PM, olcott wrote:
On 10/10/2025 3:36 PM, dbush wrote:
On 10/10/2025 4:34 PM, olcott wrote:
On 10/10/2025 3:28 PM, dbush wrote:
On 10/10/2025 4:23 PM, olcott wrote:
It is not naive simulation. It is the correct simulation
according to the semantics of the language. Only a correct
simulation by the simulating halt decider correctly measures
the actual behavior that the input actually specifies.
And since HHH aborts the simulation of DD, it doesn't do a
correct simulation and therefore there is no basis for a decision.
That is a foolishly stupid thing to say within
the context that unless HHH(DD) does abort the
simulation of its input itself would never halt.
Changing the subject from DD and HHH to DDn and HHHn is the
dishonest dodge of the strawman deception.
Unless the one and only directly executed HHH(DD)
i.e. the fixed immutable code of the function DD, the fixed
immutable code of the function HHH, and the fixed immutable code of
everything HHH calls down to the OS level
aborts the simulation of its one and only actual
input this one and only directly executed HHH(DD)
would never stop running.
In other words, when you change HHH and DD to HHHn and DDn,
HHHn(DDn) never stops running.
Changing the input and reporting on that non-input is not allowed.
<MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
    If simulating halt decider H correctly simulates its
    input D until H correctly determines that its simulated D
    would never stop running unless aborted then
This part of the criteria has been validated as only meaning
exactly one thing. This exactly one thing that it means seems
impossibly over your head.
Claude AI figured that the next part of the words could
mean exactly two things; one of them is incorrect and my
meaning for it is correct.
None of what you said refutes what it replies to.
This constitutes your admission that HHH(DD) decides on a non-input.
That HHH does simulate its input according to the
semantics of the language
On 10/10/2025 9:05 PM, olcott wrote:
That HHH does simulate its input according to the
semantics of the language
False, because it aborts in violation of those semantics.
On 10/10/2025 8:33 PM, dbush wrote:
On 10/10/2025 9:05 PM, olcott wrote:
That HHH does simulate its input according to the
semantics of the language
False, because it aborts in violation of those semantics.
So we are just back to your lack of programming
ability. This rubric conclusively proves beyond
all possible doubt that I am correct about this.
On 10/10/2025 7:13 PM, dbush wrote:
On 10/10/2025 8:05 PM, olcott wrote:
On 10/10/2025 5:34 PM, dbush wrote:
On 10/10/2025 6:14 PM, olcott wrote:
On 10/10/2025 3:36 PM, dbush wrote:
On 10/10/2025 4:34 PM, olcott wrote:
On 10/10/2025 3:28 PM, dbush wrote:
On 10/10/2025 4:23 PM, olcott wrote:
It is not naive simulation. It is the correct simulation
according to the semantics of the language. Only a correct
simulation by the simulating halt decider correctly measures
the actual behavior that the input actually specifies.
And since HHH aborts the simulation of DD, it doesn't do a
correct simulation and therefore there is no basis for a decision.
That is a foolishly stupid thing to say within
the context that unless HHH(DD) does abort the
simulation of its input itself would never halt.
Changing the subject from DD and HHH to DDn and HHHn is the
dishonest dodge of the strawman deception.
Unless the one and only directly executed HHH(DD)
i.e. the fixed immutable code of the function DD, the fixed immutable
code of the function HHH, and the fixed immutable code of everything
HHH calls down to the OS level
aborts the simulation of its one and only actual
input this one and only directly executed HHH(DD)
would never stop running.
In other words, when you change HHH and DD to HHHn and DDn, HHHn(DDn)
never stops running.
Changing the input and reporting on that non-input is not allowed.
<MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
    If simulating halt decider H correctly simulates its
    input D until H correctly determines that its simulated D
    would never stop running unless aborted then
This part of the criteria has been validated as only meaning
exactly one thing. This exactly one thing that it means seems
impossibly over your head.
Claude AI figured that the next part of the words could
mean exactly two things; one of them is incorrect and my
meaning for it is correct.
None of what you said refutes what it replies to.
This constitutes your admission that HHH(DD) decides on a non-input.
That HHH does simulate its input according to the
semantics of the language conclusively proves that
this simulation is correct.
That this input calls
an instance of itself requires HHH to simulate an
instance of itself.
On 10/10/2025 3:08 AM, Mikko wrote:
On 2025-10-09 04:08:10 +0000, olcott said:
On 10/8/2025 5:34 AM, Mikko wrote:
On 2025-10-07 15:30:11 +0000, olcott said:
On 10/7/2025 10:17 AM, joes wrote:
On Tue, 07 Oct 2025 09:50:43 -0500, olcott wrote:
On 10/7/2025 4:18 AM, Mikko wrote:
The important point is that halting is not decidable.
Only because they are asking the wrong question.
Machines halt or not. Nothing wrong about that.
As I have proven back in 2004 the halting problem
is simply the Liar Paradox disguised.
Can't be, as the Liar Paradox is not a problem but a paradox.
The halting problem is a problem, not a paradox. There would
be a paradox if there were a solution to the halting problem
but there isn't any.
Every decision problem decider/input pair
such that self-contradiction prevents a
correct answer is bogus.
Is it bogus to ask how to construct with a straightedge and
a compass a square that has the same area as a given circle?
What do you get when you cross an elephant with a rhinoceros?
eleph ino! Just get a CAD system to do this for you.
r = side_length / √π
On 2025-10-10, olcott <polcott333@gmail.com> wrote:
On 10/10/2025 2:54 PM, Kaz Kylheku wrote:
Nothing in the halting problem specification states that halting
deciders must naively follow a simulation of their input.
It is not naive simulation.
It is the correct simulation
according to the semantics of the language.
What is naive is expecting simulation to decide halting.
(But nobody in their right mind does that; they want to know
whether there is a short-cut to know the halting status without
just running a program.)
Only a correct
simulation by the simulating halt decider correctly measures
the actual behavior that the input actually specifies.
But ... that's ... exactly your "HHH1" !!
HHH1 is a purely simulating function that starts a simulation
in which it steps through DD using the semantics of the language.
Here you are literally saying that the HHH1(DD) == 1 result (that you
are familiar with and often cite yourself in your postings) is a
"correct measure" of the "actual behavior" that the
"input actually specifies"!
On 10/10/2025 7:32 PM, Kaz Kylheku wrote:
On 2025-10-10, olcott <polcott333@gmail.com> wrote:
On 10/10/2025 2:54 PM, Kaz Kylheku wrote:
Nothing in the halting problem specification states that halting
deciders must naively follow a simulation of their input.
It is not naive simulation.
It is the correct simulation
according to the semantics of the language.
What is naive is expecting simulation to decide halting.
(But nobody in their right mind does that; they want to know
whether there is a short-cut to know the halting status without
just running a program.)
Only a correct
simulation by the simulating halt decider correctly measures
the actual behavior that the input actually specifies.
But ... that's ... exactly your "HHH1" !!
HHH1 is a purely simulating function that starts a simulation
in which it steps through DD using the semantics of the language.
Here you are literally saying that the HHH1(DD) == 1 result (that you
are familiar with and often cite yourself in your postings) is a
"correct measure" of the "actual behavior" that the
"input actually specifies"!
Turing machine deciders only compute the mapping
from their finite string inputs to an accept state
or reject state on the basis that this input finite
string specifies a semantic or syntactic property.
The only way to correctly determine the actual behavior
that an actual input actually specifies is for simulating
halt decider H to simulate its input D.
The input to HHH(DD) specifies that DD calls HHH(DD)
in recursive simulation, such that the call from the
simulated DD to the simulated HHH(DD) cannot possibly
return. *This cannot be correctly ignored*
The input to HHH1(DD) specifies that the call from the
simulated DD to the simulated HHH(DD) does return.
On 2025-10-11, olcott <polcott333@gmail.com> wrote:
On 10/10/2025 7:32 PM, Kaz Kylheku wrote:
On 2025-10-10, olcott <polcott333@gmail.com> wrote:
On 10/10/2025 2:54 PM, Kaz Kylheku wrote:
Nothing in the halting problem specification states that halting
deciders must naively follow a simulation of their input.
It is not naive simulation.
It is the correct simulation
according to the semantics of the language.
What is naive is expecting simulation to decide halting.
(But nobody in their right mind does that; they want to know
whether there is a short-cut to know the halting status without
just running a program.)
Only a correct
simulation by the simulating halt decider correctly measures
the actual behavior that the input actually specifies.
But ... that's ... exactly your "HHH1" !!
HHH1 is a purely simulating function that starts a simulation
in which it steps through DD using the semantics of the language.
Here you are literally saying that the HHH1(DD) == 1 result (that you
are familiar with and often cite yourself in your postings) is a
"correct measure" of the "actual behavior" that the
"input actually specifies"!
Turing machine deciders only compute the mapping
from their finite string inputs to an accept state
or reject state on the basis that this input finite
string specifies a semantic or syntactic property.
The only way to correctly determine the actual behavior
that an actual input actually specifies is for simulating
halt decider H to simulate its input D.
The input to HHH(DD) specifies that DD calls HHH(DD)
in recursive simulation, such that the call from the
simulated DD to the simulated HHH(DD) cannot possibly
return. *This cannot be correctly ignored*
The input to HHH1(DD) specifies that the call from the
simulated DD to the simulated HHH(DD) does return.
The input to HHH and HHH1 is exactly the same thing which unambiguously denotes one computation: a function call DD() that returns.
You are abusing the word "specifies" above; you are describing
how two functions /react/ differently to the input, not what the
input specifies. (For one of the two functions, its reaction happens
to agree with what the input specifies, which points toward correctness.)
How a function reacts to the input has no bearing on what the
input specifies.
If I tell you "clean your room" and you play video games, that does not
mean "clean your room" specifies video-game-playing behavior!
Or, where do you stand on that one?
Yes or no, do you agree with the idea that "clean your room" specifies video-game-playing behavior in situations in which that is what the kid
does, /and/ that it also specifies room-cleaning behavior in situations
in which /that/ behavior occurs?
On 10/11/2025 11:46 AM, Kaz Kylheku wrote:
On 2025-10-11, olcott <polcott333@gmail.com> wrote:
On 10/10/2025 7:32 PM, Kaz Kylheku wrote:
On 2025-10-10, olcott <polcott333@gmail.com> wrote:
On 10/10/2025 2:54 PM, Kaz Kylheku wrote:
Nothing in the halting problem specification states that halting
deciders must naively follow a simulation of their input.
It is not naive simulation.
It is the correct simulation
according to the semantics of the language.
What is naive is expecting simulation to decide halting.
(But nobody in their right mind does that; they want to know
whether there is a short-cut to know the halting status without
just running a program.)
Only a correct
simulation by the simulating halt decider correctly measures
the actual behavior that the input actually specifies.
But ... that's ... exactly your "HHH1" !!
HHH1 is a purely simulating function that starts a simulation
in which it steps through DD using the semantics of the language.
Here you are literally saying that the HHH1(DD) == 1 result (that you
are familiar with and often cite yourself in your postings) is a
"correct measure" of the "actual behavior" that the
"input actually specifies"!
Turing machine deciders only compute the mapping
from their finite string inputs to an accept state
or reject state on the basis that this input finite
string specifies a semantic or syntactic property.
The only way to correctly determine the actual behavior
that an actual input actually specifies is for simulating
halt decider H to simulate its input D.
The input to HHH(DD) specifies that DD calls HHH(DD)
in recursive simulation, such that the call from the
simulated DD to the simulated HHH(DD) cannot possibly
return. *This cannot be correctly ignored*
The input to HHH1(DD) specifies that the call from the
simulated DD to the simulated HHH(DD) does return.
The input to HHH and HHH1 is exactly the same thing which unambiguously
denotes one computation: a function call DD() that returns.
You are abusing the word "specifies" above; you are describing
how two functions /react/ differently to the input, not what the
input specifies. (For one of the two functions, its reaction happens
to agree with what the input specifies, which points toward correctness.)
How a function reacts to the input has no bearing on what the
input specifies.
If I tell you "clean your room" and you play video games, that does not
mean "clean your room" specifies video-game-playing behavior!
Or, where do you stand on that one?
Yes or no, do you agree with the idea that "clean your room" specifies
video-game-playing behavior in situations in which that is what the kid
does, /and/ that it also specifies room-cleaning behavior in situations
in which /that/ behavior occurs?
The input to HHH(DD) specifies that DD calls HHH(DD)
in recursive simulation, such that the call from the
simulated DD to the simulated HHH(DD) cannot possibly
return. *This cannot be correctly ignored*
The input to HHH1(DD) specifies that the call from the
simulated DD to the simulated HHH(DD) does return.
On 10/11/2025 11:46 AM, Kaz Kylheku wrote:
How a function reacts to the input has no bearing on what the
input specifies.
If I tell you "clean your room" and you play video games, that does not
mean "clean your room" specifies video-game-playing behavior!
Or, where do you stand on that one?
Yes or no, do you agree with the idea that "clean your room" specifies
video-game-playing behavior in situations in which that is what the kid
does, /and/ that it also specifies room-cleaning behavior in situations
in which /that/ behavior occurs?
The input to HHH(DD) specifies that DD calls HHH(DD)
On 2025-10-11, olcott <polcott333@gmail.com> wrote:
On 10/11/2025 11:46 AM, Kaz Kylheku wrote:
On 2025-10-11, olcott <polcott333@gmail.com> wrote:
On 10/10/2025 7:32 PM, Kaz Kylheku wrote:
On 2025-10-10, olcott <polcott333@gmail.com> wrote:
On 10/10/2025 2:54 PM, Kaz Kylheku wrote:
Nothing in the halting problem specification states that halting
deciders must naively follow a simulation of their input.
It is not naive simulation.
It is the correct simulation
according to the semantics of the language.
What is naive is expecting simulation to decide halting.
(But nobody in their right mind does that; they want to know
whether there is a short-cut to know the halting status without
just running a program.)
Only a correct
simulation by the simulating halt decider correctly measures
the actual behavior that the input actually specifies.
But ... that's ... exactly your "HHH1" !!
HHH1 is a purely simulating function that starts a simulation
in which it steps through DD using the semantics of the language.
Here you are literally saying that the HHH1(DD) == 1 result (that you
are familiar with and often cite yourself in your postings) is a
"correct measure" of the "actual behavior" that the
"input actually specifies"!
Turing machine deciders only compute the mapping
from their finite string inputs to an accept state
or reject state on the basis that this input finite
string specifies a semantic or syntactic property.
The only way to correctly determine the actual behavior
that an actual input actually specifies is for simulating
halt decider H to simulate its input D.
The input to HHH(DD) specifies that DD calls HHH(DD)
in recursive simulation, such that the call from the
simulated DD to the simulated HHH(DD) cannot possibly
return. *This cannot be correctly ignored*
The input to HHH1(DD) specifies that the call from the
simulated DD to the simulated HHH(DD) does return.
The input to HHH and HHH1 is exactly the same thing which unambiguously
denotes one computation: a function call DD() that returns.
You are abusing the word "specifies" above; you are describing
how two functions /react/ differently to the input, not what the
input specifies. (For one of the two functions, its reaction happens
to agree with what the input specifies, which points toward correctness.)
How a function reacts to the input has no bearing on what the
input specifies.
If I tell you "clean your room" and you play video games, that does not
mean "clean your room" specifies video-game-playing behavior!
Or, where do you stand on that one?
Yes or no, do you agree with the idea that "clean your room" specifies
video-game-playing behavior in situations in which that is what the kid
does, /and/ that it also specifies room-cleaning behavior in situations
in which /that/ behavior occurs?
The input to HHH(DD) specifies that DD calls HHH(DD)
Yes it does; because DD calls HHH(DD), DD does specify
that.
in recursive simulation, such that the call from the
Yes; DD contains HHH which perpetrates a nested simulation
tower when invoked as HHH(DD).
simulated DD to the simulated HHH(DD) cannot possibly
return. *This cannot be correctly ignored*
This is incorrect; the simulated HHH(DD) is not prevented from returning
by any calculation that it performs, but rather because the machine which
simulates it suddenly deviates from the x86 semantics and decides not to
fetch the next instruction.
The first level simulation of HHH(DD) doesn't have the /opportunity/ to
reach the point where it detects the abort criteria of the second
level simulation, and returns 0.
The input to HHH1(DD) specifies that the call from the
simulated DD to the simulated HHH(DD) does return.
DD alone specifies this whether or not it is the input to HHH1 or HHH or
any other function.
HHH(DD) is unconditionally a terminating calculation which returns 0,
and does that in any context whatsoever which completely and correctly
steps its instructions from beginning to end.
Right until the point that the aborted DD is abruptly suspended, its
execution is absolutely identical to that of a DD which isn't suspended;
the indefinite suspension is the only difference. It is not caused by
anything in DD.