On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote:
On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my exact words*
Vague again, I see. But I expect you meant to say that your exact
words mean that HHH has an excuse for getting it wrong.
And so it does, but the excuse you claim is nowhere near as
convincing as the real reason (for which we have a proof).
*Anyone that totally understands these exact words*
*understands that I proved the halting problem is incorrect*
<repeat of previously refuted point>
Repeating a previously refuted point is less than no rebuttal.
*You never even noticed ALL the words yet*
All Turing machine deciders only compute the mapping
from their finite string inputs to an accept state or reject
state on the basis that this input finite string specifies a
semantic or syntactic property.
You have therefore agreed that it is true by the meaning of the words
that a finite string description of a Turing machine specifies all
semantic properties of the machine it describes, including whether
the machine it describes halts when executed directly.
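The claim repeated throughout this exchange, that a decider does nothing more than compute a total mapping from a finite-string input to an accept or reject state on the basis of some property of that string, can be illustrated with a minimal C sketch. The decider below is a hypothetical editorial example for a purely syntactic property (even length); it is not code from this thread.

#include <stdio.h>
#include <string.h>

/* A decider in the sense used above: it always terminates and always
 * answers, mapping every finite string input to accept (1) or reject (0)
 * on the basis of a purely syntactic property of that string. */
static int even_length_decider(const char *input)
{
    return strlen(input) % 2 == 0;
}

int main(void)
{
    printf("%d\n", even_length_decider("abba"));  /* accepts: length 4 */
    printf("%d\n", even_length_decider("abc"));   /* rejects: length 3 */
    return 0;
}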
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote:
On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my exact words*
Vague again, I see. But I expect you meant to say that your exact
words mean that HHH has an excuse for getting it wrong.
And so it does, but the excuse you claim is nowhere near as
convincing as the real reason (for which we have a proof).
*Anyone that totally understands these exact words*
*understands that I proved the halting problem is incorrect*
<repeat of previously refuted point>
Repeating a previously refuted point is less than no rebuttal.
*You never even noticed ALL the words yet*
All Turing machine deciders only compute the mapping
from their finite string inputs to an accept state or reject
state on the basis that this input finite string specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that a finite string description of a Turing machine specifies all semantic properties of
the machine it describes, including whether the machine it describes
halts when executed directly.
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote:
On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my exact words*
Vague again, I see. But I expect you meant to say that your exact words mean that HHH has an excuse for getting it wrong.
And so it does, but the excuse you claim is nowhere near as
convincing as the real reason (for which we have a proof).
*Anyone that totally understands these exact words*
*understands that I proved the halting problem is incorrect*
<repeat of previously refuted point>
Repeating a previously refuted point is less than no rebuttal.
*You never even noticed ALL the words yet*
All Turing machine deciders only compute the mapping
from their finite string inputs to an accept state or reject
state on the basis that this input finite string specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that a finite string
description of a Turing machine specifies all semantic properties of
the machine it describes, including whether the machine it describes
halts when executed directly.
OK, so you are back to being a mere troll again.
Deciders only report on what they see and you are
far too ignorant to understand that they cannot be
correctly required to report on what they cannot see.
Deciders only report on what they see
On 10/7/2025 3:32 PM, olcott wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote:
On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my exact words*
Vague again, I see. But I expect you meant to say that your exact words mean that HHH has an excuse for getting it wrong.
And so it does, but the excuse you claim is nowhere near as
convincing as the real reason (for which we have a proof).
*Anyone that totally understands these exact words*
*understands that I proved the halting problem is incorrect*
<repeat of previously refuted point>
Repeating a previously refuted point is less than no rebuttal.
*You never even noticed ALL the words yet*
All Turing machine deciders only compute the mapping
from their finite string inputs to an accept state or reject
state on the basis that this input finite string specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that a finite
string description of a Turing machine specifies all semantic
properties of the machine it describes, including whether the machine
it describes halts when executed directly.
OK, so you are back to being a mere troll again.
That you don't understand the meaning of the above words is not my error.
Deciders only report on what they see and you are
far too ignorant to understand that they cannot be
correctly required to report on what they cannot see.
In other words, you agree with Turing and Linz that these requirements cannot be satisfied:
On 10/7/2025 2:34 PM, dbush wrote:
On 10/7/2025 3:32 PM, olcott wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote:
On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my exact words*
Vague again, I see. But I expect you meant to say that your
exact words mean that HHH has an excuse for getting it wrong.
And so it does, but the excuse you claim is nowhere near as
convincing as the real reason (for which we have a proof).
*Anyone that totally understands these exact words*
*understands that I proved the halting problem is incorrect*
<repeat of previously refuted point>
Repeating a previously refuted point is less than no rebuttal.
*You never even noticed ALL the words yet*
All Turing machine deciders only compute the mapping
from their finite string inputs to an accept state or reject
state on the basis that this input finite string specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that a finite
string description of a Turing machine specifies all semantic
properties of the machine it describes, including whether the
machine it describes halts when executed directly.
OK, so you are back to being a mere troll again.
That you don't understand the meaning of the above words is not my error.
Deciders only report on what they see and you are
far too ignorant to understand that they cannot be
correctly required to report on what they cannot see.
In other words, you agree with Turing and Linz that these
requirements cannot be satisfied:
And likewise no Turing machine can give birth
to a healthy baby boy. Expecting it to do this
is kind of nuts.
On 10/7/2025 2:34 PM, dbush wrote:
On 10/7/2025 3:32 PM, olcott wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote:
On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my exact
words*
Vague again, I see. But I expect you meant to say that
your exact words mean that HHH has an excuse for getting
it wrong.
And so it does, but the excuse you claim is nowhere near
as convincing as the real reason (for which we have a
proof).
*Anyone that totally understands these exact words*
*understands that I proved the halting problem is incorrect*
<repeat of previously refuted point>
Repeating a previously refuted point is less than no rebuttal.
*You never even noticed ALL the words yet*
All Turing machine deciders only compute the mapping
from their finite string inputs to an accept state or reject
state on the basis that this input finite string specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that a
finite string description of a Turing machine specifies all
semantic properties of the machine it describes, including
whether the machine it describes halts when executed directly.
OK, so you are back to being a mere troll again.
That you don't understand the meaning of the above words is not
my error.
Deciders only report on what they see and you are
far too ignorant to understand that they cannot be
correctly required to report on what they cannot see.
In other words, you agree with Turing and Linz that these
requirements cannot be satisfied:
And likewise no Turing machine can give birth
to a healthy baby boy. Expecting it to do this
is kind of nuts.
On 10/7/2025 3:53 PM, olcott wrote:
On 10/7/2025 2:34 PM, dbush wrote:
On 10/7/2025 3:32 PM, olcott wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote:
On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my exact words*
Vague again, I see. But I expect you meant to say that your exact
words mean that HHH has an excuse for getting it wrong.
And so it does, but the excuse you claim is nowhere near as
convincing as the real reason (for which we have a proof).
*Anyone that totally understands these exact words*
*understands that I proved the halting problem is incorrect*
<repeat of previously refuted point>
Repeating a previously refuted point is less than no rebuttal.
*You never even noticed ALL the words yet*
All Turing machine deciders only compute the mapping
from their finite string inputs to an accept state or reject
state on the basis that this input finite string specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that a finite
string description of a Turing machine specifies all semantic
properties of the machine it describes, including whether the
machine it describes halts when executed directly.
OK, so you are back to being a mere troll again.
That you don't understand the meaning of the above words is not my
error.
Deciders only report on what they see and you are
far too ignorant to understand that they cannot be
correctly required to report on what they cannot see.
In other words, you agree with Turing and Linz that these
requirements cannot be satisfied:
And likewise no Turing machine can give birth
to a healthy baby boy. Expecting it to do this
is kind of nuts.
So we're in agreement: no Turing machine exists that can tell if any arbitrary Turing machine X with input Y will halt when executed directly.
So we're in agreement: no Turing machine exists that can tell if
any arbitrary Turing machine X with input Y will halt when
executed directly.
On 10/7/2025 2:55 PM, dbush wrote:
So we're in agreement: no Turing machine exists that can
tell if any arbitrary Turing machine X with input Y will
halt when executed directly.
For the same reason that it cannot give birth to a Healthy
baby boy.
Requiring the logically impossible is merely an incorrect
requirement.
On 10/7/2025 2:55 PM, dbush wrote:
On 10/7/2025 3:53 PM, olcott wrote:
On 10/7/2025 2:34 PM, dbush wrote:
On 10/7/2025 3:32 PM, olcott wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote:
On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my exact words*
Vague again, I see. But I expect you meant to say that your exact
words mean that HHH has an excuse for getting it wrong.
And so it does, but the excuse you claim is nowhere near as
convincing as the real reason (for which we have a proof).
*Anyone that totally understands these exact words*
*understands that I proved the halting problem is incorrect*
<repeat of previously refuted point>
Repeating a previously refuted point is less than no rebuttal.
*You never even noticed ALL the words yet*
All Turing machine deciders only compute the mapping
from their finite string inputs to an accept state or reject
state on the basis that this input finite string specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that a finite
string description of a Turing machine specifies all semantic
properties of the machine it describes, including whether the
machine it describes halts when executed directly.
OK, so you are back to being a mere troll again.
That you don't understand the meaning of the above words is not my
error.
Deciders only report on what they see and you are
far too ignorant to understand that they cannot be
correctly required to report on what they cannot see.
In other words, you agree with Turing and Linz that these
requirements cannot be satisfied:
And likewise no Turing machine can give birth
to a healthy baby boy. Expecting it to do this
is kind of nuts.
So we're in agreement: no Turing machine exists that can tell if any
arbitrary Turing machine X with input Y will halt when executed directly.
For the same reason that it cannot give birth
to a Healthy baby boy. Requiring the logically
impossible is merely an incorrect requirement.
On 10/7/2025 4:05 PM, olcott wrote:
On 10/7/2025 2:55 PM, dbush wrote:
On 10/7/2025 3:53 PM, olcott wrote:
On 10/7/2025 2:34 PM, dbush wrote:
On 10/7/2025 3:32 PM, olcott wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote:
On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my exact words*
Vague again, I see. But I expect you meant to say that your exact
words mean that HHH has an excuse for getting it wrong.
And so it does, but the excuse you claim is nowhere near as
convincing as the real reason (for which we have a proof).
*Anyone that totally understands these exact words*
*understands that I proved the halting problem is incorrect*
<repeat of previously refuted point>
Repeating a previously refuted point is less than no rebuttal.
*You never even noticed ALL the words yet*
All Turing machine deciders only compute the mapping
from their finite string inputs to an accept state or reject
state on the basis that this input finite string specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that a finite
string description of a Turing machine specifies all semantic
properties of the machine it describes, including whether the
machine it describes halts when executed directly.
OK, so you are back to being a mere troll again.
That you don't understand the meaning of the above words is not my
error.
Deciders only report on what they see and you are
far too ignorant to understand that they cannot be
correctly required to report on what they cannot see.
In other words, you agree with Turing and Linz that these
requirements cannot be satisfied:
And likewise no Turing machine can give birth
to a healthy baby boy. Expecting it to do this
is kind of nuts.
So we're in agreement: no Turing machine exists that can tell if any
arbitrary Turing machine X with input Y will halt when executed
directly.
For the same reason that it cannot give birth
to a Healthy baby boy. Requiring the logically
impossible is merely an incorrect requirement.
So it seems by "incorrect requirement" you
mean "requirement that can't be satisfied".
Let's clarify that term.
Definition: Mythic Number
An integer N such that N > 5 and N < 2
Is the above an incorrect requirement?
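As a concrete reading of that question: the "Mythic Number" requirement is clearly stated yet unsatisfiable, since no integer can be both greater than 5 and less than 2. A trivial C sketch (an editorial illustration only, not code from the thread) makes the point:

#include <stdio.h>

int main(void)
{
    int found = 0;
    for (int n = -1000000; n <= 1000000; n++) {
        if (n > 5 && n < 2)   /* the two conditions can never hold together */
            found = 1;
    }
    printf(found ? "a mythic number was found\n"
                 : "no mythic number exists in the range searched\n");
    return 0;
}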
On 10/7/2025 3:14 PM, dbush wrote:
On 10/7/2025 4:05 PM, olcott wrote:
On 10/7/2025 2:55 PM, dbush wrote:
On 10/7/2025 3:53 PM, olcott wrote:
On 10/7/2025 2:34 PM, dbush wrote:
On 10/7/2025 3:32 PM, olcott wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote:
On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my exact words*
Vague again, I see. But I expect you meant to say that your exact
words mean that HHH has an excuse for getting it wrong.
And so it does, but the excuse you claim is nowhere near as
convincing as the real reason (for which we have a proof).
*Anyone that totally understands these exact words*
*understands that I proved the halting problem is incorrect*
<repeat of previously refuted point>
Repeating a previously refuted point is less than no rebuttal.
*You never even noticed ALL the words yet*
All Turing machine deciders only compute the mapping
from their finite string inputs to an accept state or reject
state on the basis that this input finite string specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that a finite
string description of a Turing machine specifies all semantic
properties of the machine it describes, including whether the
machine it describes halts when executed directly.
OK, so you are back to being a mere troll again.
That you don't understand the meaning of the above words is not my
error.
Deciders only report on what they see and you are
far too ignorant to understand that they cannot be
correctly required to report on what they cannot see.
In other words, you agree with Turing and Linz that these
requirements cannot be satisfied:
And likewise no Turing machine can give birth
to a healthy baby boy. Expecting it to do this
is kind of nuts.
So we're in agreement: no Turing machine exists that can tell if any
arbitrary Turing machine X with input Y will halt when executed
directly.
For the same reason that it cannot give birth
to a Healthy baby boy. Requiring the logically
impossible is merely an incorrect requirement.
So it seems by "incorrect requirement" you mean "requirement that
can't be satisfied".
Let's clarify that term.
Definition: Mythic Number
An integer N such that N > 5 and N < 2
Is the above an incorrect requirement?
Yes that is an incorrect requirement.
It is not the stupid misnomer of "undecidable"
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote:
On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my exact words*
Vague again, I see. But I expect you meant to say that your exact words mean that HHH has an excuse for getting it wrong.
And so it does, but the excuse you claim is nowhere near as
convincing as the real reason (for which we have a proof).
*Anyone that totally understands these exact words*
*understands that I proved the halting problem is incorrect*
<repeat of previously refuted point>
Repeating a previously refuted point is less than no rebuttal.
*You never even noticed ALL the words yet*
All Turing machine deciders only compute the mapping
from their finite string inputs to an accept state or reject
state on the basis that this input finite string specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that a finite string
description of a Turing machine specifies all semantic properties of
the machine it describes, including whether the machine it describes
halts when executed directly.
OK, so you are back to being a mere troll again.
Deciders only report on what they see and you are
far too ignorant to understand that they cannot be
correctly required to report on what they cannot see.
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote:
On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my exact words*
Vague again, I see. But I expect you meant to say that your exact words mean that HHH has an excuse for getting it wrong.
And so it does, but the excuse you claim is nowhere near as
convincing as the real reason (for which we have a proof).
*Anyone that totally understands these exact words*
*understands that I proved the halting problem is incorrect*
<repeat of previously refuted point>
Repeating a previously refuted point is less than no rebuttal.
*You never even noticed ALL the words yet*
All Turing machine deciders only compute the mapping
from their finite string inputs to an accept state or reject
state on the basis that this input finite string specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that a finite string
description of a Turing machine specifies all semantic properties of
the machine it describes, including whether the machine it describes
halts when executed directly.
OK, so you are back to being a mere troll again.
Deciders only report on what they see and you are
far too ignorant to understand that they cannot be
correctly required to report on what they cannot see.
The only reason that the decider of a diagonal test case cannot see
that which it needs to report is that it is always out of reach.
The behavior occurs some instructions /after/ the decision is made.
When a simulating decider follows the simulation, it might
trace 4000 instructions and then make a decision.
But the diagonal test case builds on those 4000 instructions;
its behavior is not played out until, say, 4005 instructions.
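A minimal C sketch of the point being made here (the decider name and its verdict are hypothetical editorial stand-ins, not code from the thread): the diagonal test case consults the decider about itself and then acts contrary to the verdict, so the decisive behavior only plays out after the decider has already committed to an answer.

#include <stdio.h>

typedef int (*test_case)(void);

/* Hypothetical stand-in for a claimed halting decider; here it simply
 * guesses "halts" so that the sketch can be compiled and run. */
static int decides_halting(test_case p)
{
    (void)p;
    return 1;   /* 1 = "halts", 0 = "does not halt" */
}

/* The diagonal test case: ask the decider about itself, then do the
 * opposite.  The behavior that makes the verdict wrong happens only
 * after the verdict has been produced. */
static int diagonal(void)
{
    if (decides_halting(diagonal))
        for (;;) { }   /* verdict "halts": loop forever instead      */
    return 0;          /* verdict "does not halt": halt immediately  */
}

int main(void)
{
    printf("verdict on diagonal: %d\n", decides_halting(diagonal));
    /* Running diagonal() directly would now loop forever,
     * contradicting the verdict printed above. */
    return 0;
}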
On 10/7/2025 4:22 PM, olcott wrote:
On 10/7/2025 3:14 PM, dbush wrote:
On 10/7/2025 4:05 PM, olcott wrote:
On 10/7/2025 2:55 PM, dbush wrote:
On 10/7/2025 3:53 PM, olcott wrote:
On 10/7/2025 2:34 PM, dbush wrote:
On 10/7/2025 3:32 PM, olcott wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote:
On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my exact words*
Vague again, I see. But I expect you meant to say that your exact
words mean that HHH has an excuse for getting it wrong.
And so it does, but the excuse you claim is nowhere near as
convincing as the real reason (for which we have a proof).
*Anyone that totally understands these exact words*
*understands that I proved the halting problem is incorrect*
<repeat of previously refuted point>
Repeating a previously refuted point is less than no rebuttal.
*You never even noticed ALL the words yet*
All Turing machine deciders only compute the mapping
from their finite string inputs to an accept state or reject
state on the basis that this input finite string specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that a finite
string description of a Turing machine specifies all semantic
properties of the machine it describes, including whether the
machine it describes halts when executed directly.
OK, so you are back to being a mere troll again.
That you don't understand the meaning of the above words is not
my error.
Deciders only report on what they see and you are
far too ignorant to understand that they cannot be
correctly required to report on what they cannot see.
In other words, you agree with Turing and Linz that these
requirements cannot be satisfied:
And likewise no Turing machine can give birth
to a healthy baby boy. Expecting it to do this
is kind of nuts.
So we're in agreement: no Turing machine exists that can tell if
any arbitrary Turing machine X with input Y will halt when executed
directly.
For the same reason that it cannot give birth
to a Healthy baby boy. Requiring the logically
impossible is merely an incorrect requirement.
So it seems by "incorrect requirement" you mean "requirement that
can't be satisfied".
Let's clarify that term.
Definition: Mythic Number
An integer N such that N > 5 and N < 2
Is the above an incorrect requirement?
Yes that is an incorrect requirement.
It is not the stupid misnomer of "undecidable"
There is nothing wrong with the above definition. It is clear and concise. It just can't be satisfied.
On 10/7/2025 3:25 PM, dbush wrote:
On 10/7/2025 4:22 PM, olcott wrote:
On 10/7/2025 3:14 PM, dbush wrote:
On 10/7/2025 4:05 PM, olcott wrote:
On 10/7/2025 2:55 PM, dbush wrote:
On 10/7/2025 3:53 PM, olcott wrote:
On 10/7/2025 2:34 PM, dbush wrote:
On 10/7/2025 3:32 PM, olcott wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote:
On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my exact words*
Vague again, I see. But I expect you meant to say that your exact
words mean that HHH has an excuse for getting it wrong.
And so it does, but the excuse you claim is nowhere near as
convincing as the real reason (for which we have a proof).
*Anyone that totally understands these exact words*
*understands that I proved the halting problem is incorrect*
<repeat of previously refuted point>
Repeating a previously refuted point is less than no rebuttal.
*You never even noticed ALL the words yet*
All Turing machine deciders only compute the mapping
from their finite string inputs to an accept state or reject
state on the basis that this input finite string specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that a
finite string description of a Turing machine specifies all
semantic properties of the machine it describes, including
whether the machine it describes halts when executed directly.
OK, so you are back to being a mere troll again.
That you don't understand the meaning of the above words is not
my error.
Deciders only report on what they see and you are
far too ignorant to understand that they cannot be
correctly required to report on what they cannot see.
In other words, you agree with Turing and Linz that these
requirements cannot be satisfied:
And likewise no Turing machine can give birth
to a healthy baby boy. Expecting it to do this
is kind of nuts.
So we're in agreement: no Turing machine exists that can tell if
any arbitrary Turing machine X with input Y will halt when
executed directly.
For the same reason that it cannot give birth
to a Healthy baby boy. Requiring the logically
impossible is merely an incorrect requirement.
So it seems by "incorrect requirement" you mean "requirement that
can't be satisfied".
Let's clarify that term.
Definition: Mythic Number
An integer N such that N > 5 and N < 2
Is the above an incorrect requirement?
Yes that is an incorrect requirement.
It is not the stupid misnomer of "undecidable"
There is nothing wrong with the above definition. It is clear and
concise. It just can't be satisfied.
I am stipulating that any question defined to
have no correct answer is an incorrect question.
On 10/7/2025 9:21 PM, olcott wrote:
On 10/7/2025 3:25 PM, dbush wrote:
On 10/7/2025 4:22 PM, olcott wrote:
On 10/7/2025 3:14 PM, dbush wrote:
On 10/7/2025 4:05 PM, olcott wrote:
On 10/7/2025 2:55 PM, dbush wrote:
On 10/7/2025 3:53 PM, olcott wrote:
On 10/7/2025 2:34 PM, dbush wrote:
On 10/7/2025 3:32 PM, olcott wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote:
On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my exact words*
Vague again, I see. But I expect you meant to say that your exact
words mean that HHH has an excuse for getting it wrong.
And so it does, but the excuse you claim is nowhere near as
convincing as the real reason (for which we have a proof).
*Anyone that totally understands these exact words*
*understands that I proved the halting problem is incorrect*
<repeat of previously refuted point>
Repeating a previously refuted point is less than no rebuttal.
*You never even noticed ALL the words yet*
All Turing machine deciders only compute the mapping
from their finite string inputs to an accept state or reject
state on the basis that this input finite string specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that a
finite string description of a Turing machine specifies all
semantic properties of the machine it describes, including
whether the machine it describes halts when executed directly.
OK, so you are back to being a mere troll again.
That you don't understand the meaning of the above words is not
my error.
Deciders only report on what they see and you are
far too ignorant to understand that they cannot be
correctly required to report on what they cannot see.
In other words, you agree with Turing and Linz that these
requirements cannot be satisfied:
And likewise no Turing machine can give birth
to a healthy baby boy. Expecting it to do this
is kind of nuts.
So we're in agreement: no Turing machine exists that can tell if
any arbitrary Turing machine X with input Y will halt when
executed directly.
For the same reason that it cannot give birth
to a Healthy baby boy. Requiring the logically
impossible is merely an incorrect requirement.
So it seems by "incorrect requirement" you mean "requirement that
can't be satisfied".
Let's clarify that term.
Definition: Mythic Number
An integer N such that N > 5 and N < 2
Is the above an incorrect requirement?
Yes that is an incorrect requirement.
It is not the stupid misnomer of "undecidable"
There is nothing wrong with the above definition. It is clear and
concise. It just can't be satisfied.
I am stipulating that any question defined to
have no correct answer is an incorrect question.
We're not talking about incorrect questions. We're talking about
incorrect requirements, i.e. requirements that cannot be satisfied.
And Turing and Linz proved that the following is (by your definition) an incorrect requirement:
Given any algorithm (i.e. a fixed immutable sequence of instructions) X described as <X> with input Y:
A solution to the halting problem is an algorithm H that computes the following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly
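Restating the quoted requirement as a C interface (an editorial restatement only; the identifiers are hypothetical and nothing here claims to implement it):

#include <stdio.h>

typedef void (*algorithm)(const char *input);  /* X: a fixed, immutable sequence of instructions */

/* The contract under discussion: H must itself always halt, and
 *   H(X, Y) == 1  if and only if  X(Y) halts when executed directly,
 *   H(X, Y) == 0  if and only if  X(Y) does not halt when executed directly.
 * Per the Turing/Linz result cited above, no definition of H can meet
 * this contract for every X and Y; the declaration is a specification only. */
int H(algorithm X, const char *Y);

int main(void)
{
    puts("H above is stated as a requirement, not implemented;");
    puts("the cited proofs show no algorithm satisfies it for all X and Y.");
    return 0;
}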
On 10/7/2025 8:32 PM, dbush wrote:
On 10/7/2025 9:21 PM, olcott wrote:
On 10/7/2025 3:25 PM, dbush wrote:
On 10/7/2025 4:22 PM, olcott wrote:
On 10/7/2025 3:14 PM, dbush wrote:
On 10/7/2025 4:05 PM, olcott wrote:
On 10/7/2025 2:55 PM, dbush wrote:
On 10/7/2025 3:53 PM, olcott wrote:
On 10/7/2025 2:34 PM, dbush wrote:
On 10/7/2025 3:32 PM, olcott wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote:
On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my exact words*
Vague again, I see. But I expect you meant to say that your exact
words mean that HHH has an excuse for getting it wrong.
And so it does, but the excuse you claim is nowhere near as
convincing as the real reason (for which we have a proof).
*Anyone that totally understands these exact words*
*understands that I proved the halting problem is incorrect*
<repeat of previously refuted point>
Repeating a previously refuted point is less than no rebuttal.
*You never even noticed ALL the words yet*
All Turing machine deciders only compute the mapping
from their finite string inputs to an accept state or reject
state on the basis that this input finite string specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that a
finite string description of a Turing machine specifies all
semantic properties of the machine it describes, including
whether the machine it describes halts when executed directly.
OK, so you are back to being a mere troll again.
That you don't understand the meaning of the above words is
not my error.
Deciders only report on what they see and you are
far too ignorant to understand that they cannot be
correctly required to report on what they cannot see.
In other words, you agree with Turing and Linz that these
requirements cannot be satisfied:
And likewise no Turing machine can give birth
to a healthy baby boy. Expecting it to do this
is kind of nuts.
So we're in agreement: no Turing machine exists that can tell if
any arbitrary Turing machine X with input Y will halt when
executed directly.
For the same reason that it cannot give birth
to a Healthy baby boy. Requiring the logically
impossible is merely an incorrect requirement.
So it seems by "incorrect requirement" you mean "requirement that
can't be satisfied".
Let's clarify that term.
Definition: Mythic Number
An integer N such that N > 5 and N < 2
Is the above an incorrect requirement?
Yes that is an incorrect requirement.
It is not the stupid misnomer of "undecidable"
There is nothing wrong with the above definition. It is clear and
concise. It just can't be satisfied.
I am stipulating that any question defined to
have no correct answer is an incorrect question.
We're not talking about incorrect questions. We're talking about
incorrect requirements, i.e. requirements that cannot be satisfied.
And Turing and Linz proved that the following is (by your definition)
an incorrect requirement:
Given any algorithm (i.e. a fixed immutable sequence of instructions)
X described as <X> with input Y:
A solution to the halting problem is an algorithm H that computes the
following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
directly
Sure and: What time is it (yes or no) is
an equally undecidable decision problem.
On 10/7/2025 3:26 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote:
On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my exact words*
Vague again, I see. But I expect you meant to say that your exact words mean that HHH has an excuse for getting it wrong.
And so it does, but the excuse you claim is nowhere near as
convincing as the real reason (for which we have a proof).
*Anyone that totally understands these exact words*
*understands that I proved the halting problem is incorrect*
<repeat of previously refuted point>
Repeating a previously refuted point is less than no rebuttal.
*You never even noticed ALL the words yet*
All Turing machine deciders only compute the mapping
from their finite string inputs to an accept state or reject
state on the basis that this input finite string specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that a finite string
description of a Turing machine specifies all semantic properties of
the machine it describes, including whether the machine it describes
halts when executed directly.
OK, so you are back to being a mere troll again.
Deciders only report on what they see and you are
far too ignorant to understand that they cannot be
correctly required to report on what they cannot see.
The only reason that the decider of a diagonal test case cannot see
that which it needs to report is that it is always out of reach.
The behavior occurs some instructions /after/ the decision is made.
When a simulating decider follows the simulation, it might
trace 4000 instructions and then make a decision.
But the diagonal test case builds on those 4000 instructions;
its behavior is not played out until, say, 4005 instructions.
Counter-factual and you utterly refuse to pay enough
attention to see this.
HHH sees that its own input cannot possibly reach
past its call to HHH(DD) even when infinitely
simulated.
I am beginning to think that you are trying to get
away with playing me.
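The shape of HHH and DD being argued over here can be sketched with a toy model (an assumption on the editor's part about how the names are being used, not olcott's actual code): a "simulating" HHH that runs its input and gives up once the nesting of simulations passes a fixed depth, reporting non-halting, while the directly executed DD then halts.

#include <stdio.h>
#include <setjmp.h>

static jmp_buf abort_simulation;  /* lets the outermost "simulation" give up */
static int depth = 0;             /* current nesting of simulated HHH calls   */

int DD(void);

/* Toy stand-in for a simulating halt decider: it "simulates" its input
 * by calling it, and abandons the whole simulation once the nesting gets
 * too deep, reporting 0 ("does not halt"). */
int HHH(int (*p)(void))
{
    if (depth == 0) {
        if (setjmp(abort_simulation) != 0) {
            depth = 0;
            return 0;             /* simulation abandoned: report non-halting */
        }
    }
    if (depth >= 3)
        longjmp(abort_simulation, 1);
    depth++;
    p();                          /* the simulated DD reaches HHH(DD) again    */
    depth--;
    return 1;                     /* simulated input finished: report halting  */
}

/* The diagonal test case built on HHH. */
int DD(void)
{
    if (HHH(DD))
        for (;;) { }              /* contradict a "halts" verdict              */
    return 0;
}

int main(void)
{
    printf("HHH(DD) reports %d\n", HHH(DD));                   /* 0 in this toy model */
    printf("yet DD() executed directly returns %d\n", DD());   /* 0: it halts         */
    return 0;
}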
On 10/7/2025 9:45 PM, olcott wrote:
On 10/7/2025 8:32 PM, dbush wrote:
On 10/7/2025 9:21 PM, olcott wrote:
On 10/7/2025 3:25 PM, dbush wrote:
On 10/7/2025 4:22 PM, olcott wrote:
On 10/7/2025 3:14 PM, dbush wrote:
On 10/7/2025 4:05 PM, olcott wrote:
On 10/7/2025 2:55 PM, dbush wrote:
On 10/7/2025 3:53 PM, olcott wrote:
On 10/7/2025 2:34 PM, dbush wrote:
On 10/7/2025 3:32 PM, olcott wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote:
On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my exact words*
Vague again, I see. But I expect you meant to say that your exact
words mean that HHH has an excuse for getting it wrong.
And so it does, but the excuse you claim is nowhere near as
convincing as the real reason (for which we have a proof).
*Anyone that totally understands these exact words*
*understands that I proved the halting problem is incorrect*
<repeat of previously refuted point>
Repeating a previously refuted point is less than no rebuttal.
*You never even noticed ALL the words yet*
All Turing machine deciders only compute the mapping
from their finite string inputs to an accept state or reject
state on the basis that this input finite string specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that a
finite string description of a Turing machine specifies all
semantic properties of the machine it describes, including
whether the machine it describes halts when executed directly.
OK, so you are back to being a mere troll again.
That you don't understand the meaning of the above words is
not my error.
Deciders only report on what they see and you are
far too ignorant to understand that they cannot be
correctly required to report on what they cannot see.
In other words, you agree with Turing and Linz that these
requirements cannot be satisfied:
And likewise no Turing machine can give birth
to a healthy baby boy. Expecting it to do this
is kind of nuts.
So we're in agreement: no Turing machine exists that can tell
if any arbitrary Turing machine X with input Y will halt when
executed directly.
For the same reason that it cannot give birth
to a Healthy baby boy. Requiring the logically
impossible is merely an incorrect requirement.
So it seems by "incorrect requirement" you mean "requirement that
can't be satisfied".
Let's clarify that term.
Definition: Mythic Number
An integer N such that N > 5 and N < 2
Is the above an incorrect requirement?
Yes that is an incorrect requirement.
It is not the stupid misnomer of "undecidable"
There is nothing wrong with the above definition. It is clear and
concise. It just can't be satisfied.
I am stipulating that any question defined to
have no correct answer is an incorrect question.
We're not talking about incorrect questions. We're talking about
incorrect requirements, i.e. requirements that cannot be satisfied.
And Turing and Linz proved that the following is (by your definition)
an incorrect requirement:
Given any algorithm (i.e. a fixed immutable sequence of instructions)
X described as <X> with input Y:
A solution to the halting problem is an algorithm H that computes the
following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
directly
Sure and: What time is it (yes or no) is
an equally undecidable decision problem.
The undecidable problem related to the halting problem contains the assumption that the above requirements can be met. The fact that it is
an undecidable problem is what proves that assumption false.
On 10/7/2025 8:53 PM, dbush wrote:
On 10/7/2025 9:45 PM, olcott wrote:
On 10/7/2025 8:32 PM, dbush wrote:
On 10/7/2025 9:21 PM, olcott wrote:
On 10/7/2025 3:25 PM, dbush wrote:
On 10/7/2025 4:22 PM, olcott wrote:
On 10/7/2025 3:14 PM, dbush wrote:
On 10/7/2025 4:05 PM, olcott wrote:
On 10/7/2025 2:55 PM, dbush wrote:
On 10/7/2025 3:53 PM, olcott wrote:
On 10/7/2025 2:34 PM, dbush wrote:
On 10/7/2025 3:32 PM, olcott wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote:
On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my exact words*
Vague again, I see. But I expect you meant to say that your exact
words mean that HHH has an excuse for getting it wrong.
And so it does, but the excuse you claim is nowhere near as
convincing as the real reason (for which we have a proof).
*Anyone that totally understands these exact words*
*understands that I proved the halting problem is incorrect*
<repeat of previously refuted point>
Repeating a previously refuted point is less than no rebuttal.
*You never even noticed ALL the words yet*
All Turing machine deciders only compute the mapping
from their finite string inputs to an accept state or reject
state on the basis that this input finite string specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that a
finite string description of a Turing machine specifies
all semantic properties of the machine it describes,
including whether the machine it describes halts when
executed directly.
OK, so you are back to being a mere troll again.
That you don't understand the meaning of the above words is
not my error.
Deciders only report on what they see and you are
far too ignorant to understand that they cannot be
correctly required to report on what they cannot see.
In other words, you agree with Turing and Linz that these
requirements cannot be satisfied:
And likewise no Turing machine can give birth
to a healthy baby boy. Expecting it to do this
is kind of nuts.
So we're in agreement: no Turing machine exists that can tell
if any arbitrary Turing machine X with input Y will halt when
executed directly.
For the same reason that it cannot give birth
to a Healthy baby boy. Requiring the logically
impossible is merely an incorrect requirement.
So it seems by "incorrect requirement" you mean "requirement
that can't be satisfied".
Let's clarify that term.
Definition: Mythic Number
An integer N such that N > 5 and N < 2
Is the above an incorrect requirement?
Yes that is an incorrect requirement.
It is not the stupid misnomer of "undecidable"
There is nothing wrong with the above definition. It is clear and
concise. It just can't be satisfied.
I am stipulating that any question defined to
have no correct answer is an incorrect question.
We're not talking about incorrect questions. We're talking about
incorrect requirements, i.e. requirements that cannot be satisfied.
And Turing and Linz proved that the following is (by your
definition) an incorrect requirement:
Given any algorithm (i.e. a fixed immutable sequence of
instructions) X described as <X> with input Y:
A solution to the halting problem is an algorithm H that computes
the following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
directly
Sure and: What time is it (yes or no) is
an equally undecidable decision problem.
The undecidable problem related to the halting problem contains the
assumption that the above requirements can be met. The fact that it
is an undecidable problem is what proves that assumption false.
All undecidable decision problems are merely yes/no
questions framed such that a correct solution is not possible.
On 10/7/2025 10:07 PM, olcott wrote:
On 10/7/2025 8:53 PM, dbush wrote:
On 10/7/2025 9:45 PM, olcott wrote:
On 10/7/2025 8:32 PM, dbush wrote:
On 10/7/2025 9:21 PM, olcott wrote:
On 10/7/2025 3:25 PM, dbush wrote:
On 10/7/2025 4:22 PM, olcott wrote:
On 10/7/2025 3:14 PM, dbush wrote:
On 10/7/2025 4:05 PM, olcott wrote:
On 10/7/2025 2:55 PM, dbush wrote:
On 10/7/2025 3:53 PM, olcott wrote:
On 10/7/2025 2:34 PM, dbush wrote:
On 10/7/2025 3:32 PM, olcott wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote:
On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my exact words*
Vague again, I see. But I expect you meant to say that your exact
words mean that HHH has an excuse for getting it wrong.
And so it does, but the excuse you claim is nowhere near as
convincing as the real reason (for which we have a proof).
*Anyone that totally understands these exact words*
*understands that I proved the halting problem is incorrect*
<repeat of previously refuted point>
Repeating a previously refuted point is less than no rebuttal.
*You never even noticed ALL the words yet*
All Turing machine deciders only compute the mapping
from their finite string inputs to an accept state or reject
state on the basis that this input finite string specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that a
finite string description of a Turing machine specifies
all semantic properties of the machine it describes,
including whether the machine it describes halts when
executed directly.
OK, so you are back to being a mere troll again.
That you don't understand the meaning of the above words is
not my error.
Deciders only report on what they see and you are
far too ignorant to understand that they cannot be
correctly required to report on what they cannot see.
In other words, you agree with Turing and Linz that these
requirements cannot be satisfied:
And likewise no Turing machine can give birth
to a healthy baby boy. Expecting it to do this
is kind of nuts.
So we're in agreement: no Turing machine exists that can tell
if any arbitrary Turing machine X with input Y will halt when
executed directly.
For the same reason that it cannot give birth
to a Healthy baby boy. Requiring the logically
impossible is merely an incorrect requirement.
So it seems by "incorrect requirement" you mean "requirement
that can't be satisfied".
Let's clarify that term.
Definition: Mythic Number
An integer N such that N > 5 and N < 2
Is the above an incorrect requirement?
Yes that is an incorrect requirement.
It is not the stupid misnomer of "undecidable"
There is nothing wrong with the above definition. It is clear
and concise. It just can't be satisfied.
I am stipulating that any question defined to
have no correct answer is an incorrect question.
We're not talking about incorrect questions. We're talking about
incorrect requirements, i.e. requirements that cannot be satisfied.
And Turing and Linz proved that the following is (by your
definition) an incorrect requirement:
Given any algorithm (i.e. a fixed immutable sequence of
instructions) X described as <X> with input Y:
A solution to the halting problem is an algorithm H that computes
the following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
directly
Sure and: What time is it (yes or no) is
an equally undecidable decision problem.
The undecidable problem related to the halting problem contains the
assumption that the above requirements can be met. The fact that it
is an undecidable problem is what proves that assumption false.
All undecidable decision problems are merely yes/no
questions framed such that a correct solution is not possible.
And if such questions contain an implicit assumption, for example that
the below requirements can be satisfied, that proves the assumption
false. Which is precisely what Turing and Linz did.
Given any algorithm (i.e. a fixed immutable sequence of instructions) X described as <X> with input Y:
A solution to the halting problem is an algorithm H that computes the following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly
On 10/7/2025 9:09 PM, dbush wrote:
On 10/7/2025 10:07 PM, olcott wrote:
On 10/7/2025 8:53 PM, dbush wrote:
On 10/7/2025 9:45 PM, olcott wrote:
On 10/7/2025 8:32 PM, dbush wrote:
On 10/7/2025 9:21 PM, olcott wrote:
On 10/7/2025 3:25 PM, dbush wrote:
On 10/7/2025 4:22 PM, olcott wrote:
On 10/7/2025 3:14 PM, dbush wrote:
On 10/7/2025 4:05 PM, olcott wrote:
On 10/7/2025 2:55 PM, dbush wrote:
On 10/7/2025 3:53 PM, olcott wrote:
On 10/7/2025 2:34 PM, dbush wrote:
On 10/7/2025 3:32 PM, olcott wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote: >>>>>>>>>>>>>>>>>>>> On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my >>>>>>>>>>>>>>>>>>>>> exact words*
Vague again, I see. But I expect you meant to say >>>>>>>>>>>>>>>>>>>> that your exact words mean that HHH has an excuse >>>>>>>>>>>>>>>>>>>> for getting it wrong.
And so it does, but the excuse you claim is nowhere >>>>>>>>>>>>>>>>>>>> near as convincing as the real reason (for which we >>>>>>>>>>>>>>>>>>>> have a proof).
*Anyone that totally understands these exact words* >>>>>>>>>>>>>>>>>>> *understands that I proved the halting problem is >>>>>>>>>>>>>>>>>>> incorrect*
<repeat of previously refuted point>
Repeating a previously refuted point is less than no >>>>>>>>>>>>>>>>>> rebuttal.
*You never even noticed ALL the words yet*
All Turing machine deciders only compute the mapping >>>>>>>>>>>>>>>>> from their finite string inputs to an accept state or >>>>>>>>>>>>>>>>> reject
state on the basis that this input finite string >>>>>>>>>>>>>>>>> specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that a >>>>>>>>>>>>>>>> finite string description of a Turing machine specifies >>>>>>>>>>>>>>>> all semantics properties of the machine it describes, >>>>>>>>>>>>>>>> including whether the machine it describes halts when >>>>>>>>>>>>>>>> executed directly.
OK, so you are back to being a mere troll again. >>>>>>>>>>>>>>>
That you don't understand the meaning of the above words >>>>>>>>>>>>>> is not my error.
Deciders only report on what they see and you are
far too ignorant to understand that they cannot be
correctly required to report on what they cannot see.
In other words, you agree with Turing and Linz that these
requirements cannot be satisfied:
And likewise no Turing machine can give birth
to a healthy baby boy. Expecting it to do this
is kind of nuts.
So we're in agreement: no Turing machine exists that can >>>>>>>>>>>> tell if any arbitrary Turing machine X with input Y will >>>>>>>>>>>> halt when executed directly.
For the same reason that it cannot give birth
to a healthy baby boy. Requiring the logically
impossible is merely an incorrect requirement.
So it seems by "incorrect requirement" you mean "requirement >>>>>>>>>> that can't be satisfied".
Let's clarify that term.
Definition: Mythic Number
An integer N such that N > 5 and N < 2
Is the above an incorrect requirement?
Yes that is an incorrect requirement.
It is not the stupid misnomer of "undecidable"
There is nothing wrong with the above definition. It is clear
and concise. It just can't be satisfied.
I am stipulating that any question defined to
have no correct answer is an incorrect question.
We're not talking about incorrect questions. We're talking about
incorrect requirements, i.e. requirements that cannot be satisfied.
And Turing and Linz proved that the following is (by your
definition) an incorrect requirement:
Given any algorithm (i.e. a fixed immutable sequence of
instructions) X described as <X> with input Y:
A solution to the halting problem is an algorithm H that computes >>>>>> the following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly >>>>>> (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed >>>>>> directly
Sure and: What time is it (yes or no) is
an equally undecidable decision problem.
The undecidable problem related to the halting problem contains the
assumption that the above requirements can be met.-a The fact that it >>>> is an undecidable problem is what proves that assumption false.
All undecidable decision problems are merely yes/no
questions framed such a correct solution is not possible.
And if such questions contain an implicit assumption, for example that
the below requirements can be satisfied, that proves the assumption
false.-a Which is precisely what Turing and Linz did.
But undecidable decision problems, especially because
of self-contradiction, have no actual value. They are vacuous.
They are all the same class of problems as this question:
What time is it (yes or no)?
Given any algorithm (i.e. a fixed immutable sequence of instructions)
X described as <X> with input Y:
A solution to the halting problem is an algorithm H that computes the
following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
directly
On 10/7/2025 10:49 PM, olcott wrote:
On 10/7/2025 9:09 PM, dbush wrote:
On 10/7/2025 10:07 PM, olcott wrote:
On 10/7/2025 8:53 PM, dbush wrote:
On 10/7/2025 9:45 PM, olcott wrote:
On 10/7/2025 8:32 PM, dbush wrote:
On 10/7/2025 9:21 PM, olcott wrote:
On 10/7/2025 3:25 PM, dbush wrote:
On 10/7/2025 4:22 PM, olcott wrote:
On 10/7/2025 3:14 PM, dbush wrote:
On 10/7/2025 4:05 PM, olcott wrote:
On 10/7/2025 2:55 PM, dbush wrote:
On 10/7/2025 3:53 PM, olcott wrote:
On 10/7/2025 2:34 PM, dbush wrote:
On 10/7/2025 3:32 PM, olcott wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote: >>>>>>>>>>>>>>>>>>>>> On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my >>>>>>>>>>>>>>>>>>>>>> exact words*
Vague again, I see. But I expect you meant to say >>>>>>>>>>>>>>>>>>>>> that your exact words mean that HHH has an excuse >>>>>>>>>>>>>>>>>>>>> for getting it wrong.
And so it does, but the excuse you claim is nowhere >>>>>>>>>>>>>>>>>>>>> near as convincing as the real reason (for which we >>>>>>>>>>>>>>>>>>>>> have a proof).
*Anyone that totally understands these exact words* >>>>>>>>>>>>>>>>>>>> *understands that I proved the halting problem is >>>>>>>>>>>>>>>>>>>> incorrect*
<repeat of previously refuted point>
Repeating a previously refuted point is less than no >>>>>>>>>>>>>>>>>>> rebuttal.
*You never even noticed ALL the words yet* >>>>>>>>>>>>>>>>>>
All Turing machine deciders only compute the mapping >>>>>>>>>>>>>>>>>> from their finite string inputs to an accept state or >>>>>>>>>>>>>>>>>> reject
state on the basis that this input finite string >>>>>>>>>>>>>>>>>> specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that >>>>>>>>>>>>>>>>> a finite string description of a Turing machine >>>>>>>>>>>>>>>>> specifies all semantics properties of the machine it >>>>>>>>>>>>>>>>> describes, including whether the machine it describes >>>>>>>>>>>>>>>>> halts when executed directly.
OK, so you are back to being a mere troll again. >>>>>>>>>>>>>>>>
That you don't understand the meaning of the above words >>>>>>>>>>>>>>> is not my error.
Deciders only report on what they see and you are >>>>>>>>>>>>>>>> far too ignorant to understand that they cannot be >>>>>>>>>>>>>>>> correctly required to report om what they cannot see. >>>>>>>>>>>>>>>>
In other words, you agree with Turning and Linz that >>>>>>>>>>>>>>> these requirements cannot be satisfied:
And likewise no Turing machine can give birth
to a healthy baby boy. Expecting it to do this
is kind of nuts.
So we're in agreement: no Turing machine exists that can >>>>>>>>>>>>> tell if any arbitrary Turing machine X with input Y will >>>>>>>>>>>>> halt when executed directly.
For the same reason that it cannot give birth
to a Healthy baby boy. Requiring the logically
impossible is merely an incorrect requirement.
So it seems by "incorrect requirement" you mean "requirement >>>>>>>>>>> that can't be satisfied".
Let's clarify that term.
Definition: Mythic Number
An integer N such that N > 5 and N < 2
Is the above an incorrect requirement?
Yes that is an incorrect requirement.
It is not the stupid misnomer of "undecidable"
There is nothing wrong with the above definition.-a It is clear >>>>>>>>> and concise.-a It just can't be satisfied.
I am stipulating that any question defined to
have no correct answer is an incorrect question.
We're not talking about incorrect questions.-a We're talking about >>>>>>> incorrect requirements, i.e. requirements that cannot be satisfied. >>>>>>>
And Turing and Linz proved that the following is (by your
definition) an incorrect requirement:
Given any algorithm (i.e. a fixed immutable sequence of
instructions) X described as <X> with input Y:
A solution to the halting problem is an algorithm H that computes >>>>>>> the following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly >>>>>>> (<X>,Y) maps to 0 if and only if X(Y) does not halt when executed >>>>>>> directly
Sure and: What time is it (yes or no) is
an equally undecidable decision problem.
The undecidable problem related to the halting problem contains the >>>>> assumption that the above requirements can be met.-a The fact that
it is an undecidable problem is what proves that assumption false.
All undecidable decision problems are merely yes/no
questions framed such a correct solution is not possible.
And if such questions contain an implicit assumption, for example
that the below requirements can be satisfied, that proves the
assumption false.-a Which is precisely what Turing and Linz did.
But undecidable decision problems especially because
of self-contradiction have no actual value. They are vacuous.
False. If those decision problems contain a false assumption, such as
that a total halt decider exists, it proves that assumption false.
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 3:26 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote:
On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my exact words* >>>>>>>>>Vague again, I see. But I expect you meant to say that your exact >>>>>>>>> words mean that HHH has an excuse for getting it wrong.
And so it does, but the excuse you claim is nowhere near as
convincing as the real reason (for which we have a proof).
*Anyone that totally understands these exact words*
*understands that I proved the halting problem is incorrect*
<repeat of previously refuted point>
Repeating a previously refuted point is less than no rebuttal.
*You never even noticed ALL the words yet*
All Turing machine deciders only compute the mapping
from their finite string inputs to an accept state or reject
state on the basis that this input finite string specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that a finite string >>>>> description of a Turing machine specifies all semantics properties of >>>>> the machine it describes, including whether the machine it describes >>>>> halts when executed directly.
OK, so you are back to being a mere troll again.
Deciders only report on what they see and you are
far too ignorant to understand that they cannot be
correctly required to report on what they cannot see.
The only reason that the decider of a diagonal test case cannot see
that which it needs to report is that it is always out of reach.
The behavior occurs some instructions /after/ the decision is made.
When a simulating decider follows the simulation, it might
trace 4000 instructions and then make a decision.
But the diagonal test case builds on those 4000 instructions;
its behavior is not played out until, say, 4005 instructions.
Counter-factual and you utterly refuse to pay enough
attention to see this.
HHH sees that its own input cannot possibly reach
past its call to HHH(DD) even when infinitely
simulated.
I am beginning to think that you are trying to get
away with playing me.
If you think I'm wrong, it is very simple to prove it.
Take the abandoned simulation, and put it through, say, a million
additional Debug_Step operations, or until it reaches the RET out of
DD(), whichever comes first.
If it reaches the million instructions rather than returning from
DD, you win.
(Do not mess with any counters, like changing how many CALL
instructions you are counting.)
Your x86_utm can keep track of active simulations in
a linked list and you can add a function to x86_utm which,
after finishing the test case (such as Halt7.o) examines those
simulations and tests them to see how far they can go.
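The proposed experiment can be sketched as a small harness. The function names below (Resume_Abandoned_Simulation, Debug_Step, Reached_Ret_Of_DD) and their toy bodies are placeholders invented for illustration; the real x86utm / Halt7.c interface differs, so this is a sketch of the experiment, not working test code for that project.

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical hooks into an x86utm-style emulator.  The toy definitions
 * make the sketch self-contained; the real interface will differ. */
typedef struct { long pc; } Simulation;

static Simulation *Resume_Abandoned_Simulation(void) {
    static Simulation s = { 0 };      /* toy state standing in for the   */
    return &s;                        /* simulation that HHH abandoned   */
}
static void Debug_Step(Simulation *s)              { s->pc++; }
static bool Reached_Ret_Of_DD(const Simulation *s) { return s->pc >= 4005; }

enum { STEP_LIMIT = 1000000 };

/* Keep stepping the abandoned simulation until it returns out of DD or
 * the step budget is exhausted, whichever comes first. */
int main(void) {
    Simulation *s = Resume_Abandoned_Simulation();
    long steps = 0;

    while (steps < STEP_LIMIT && !Reached_Ret_Of_DD(s)) {
        Debug_Step(s);
        steps++;
    }

    if (Reached_Ret_Of_DD(s))
        printf("DD returned after %ld additional steps: it halts.\n", steps);
    else
        printf("DD did not return within %ld steps.\n", steps);
    return 0;
}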
On 10/7/2025 9:03 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 3:26 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote:
On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my exact words* >>>>>>>>>>Vague again, I see. But I expect you meant to say that your exact >>>>>>>>>> words mean that HHH has an excuse for getting it wrong.
And so it does, but the excuse you claim is nowhere near as >>>>>>>>>> convincing as the real reason (for which we have a proof). >>>>>>>>>>
*Anyone that totally understands these exact words*
*understands that I proved the halting problem is incorrect* >>>>>>>>>
<repeat of previously refuted point>
Repeating a previously refuted point is less than no rebuttal. >>>>>>>>
*You never even noticed ALL the words yet*
All Turing machine deciders only compute the mapping
from their finite string inputs to an accept state or reject
state on the basis that this input finite string specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that a finite string >>>>>> description of a Turing machine specifies all semantics properties of >>>>>> the machine it describes, including whether the machine it describes >>>>>> halts when executed directly.
OK, so you are back to being a mere troll again.
Deciders only report on what they see and you are
far too ignorant to understand that they cannot be
correctly required to report om what they cannot see.
The only reason that the decider of a diagonal test case cannot
that which it needs to report is that it is always out of reach.
The behavior occurs some instructions /after/ the decision is made.
When a simulating decider follow the simulation, it might
trace 4000 instructions and then make a decision.
But the diagonal test case builds on those 4000 instructions;
its behavior is not lpayed out until, say, 4005 instructions.
Counter-factual and you utterly refuse to pay enough
attention to see this.
HHH sees that its own input cannot possibly reach
past its call to HHH(DD) even when infinitely
simulated.
I am beginning to think that you are trying to get
away with playing me.
If you think I'm wrong, it is very simple to prove it.
Not if you are actually playing me.
Take the abandoned simulation, and put it thorugh, say, a million
additional Debug_Step operations, or until it reaches the RET out of
DD(), whichever comes first.
If it reaches the million instructions rather than returning from
DD, you win.
My truth is proven axiomatically.
On 2025-10-08, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 9:03 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 3:26 PM, Kaz Kylheku wrote:
On 2025-10-07, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote:
On 07/10/2025 19:14, olcott wrote:
No. You didn't pay close enough attention *to my exact words* >>>>>>>>>>>Vague again, I see. But I expect you meant to say that your exact >>>>>>>>>>> words mean that HHH has an excuse for getting it wrong.
And so it does, but the excuse you claim is nowhere near as >>>>>>>>>>> convincing as the real reason (for which we have a proof). >>>>>>>>>>>
*Anyone that totally understands these exact words*
*understands that I proved the halting problem is incorrect* >>>>>>>>>>
<repeat of previously refuted point>
Repeating a previously refuted point is less than no rebuttal. >>>>>>>>>
*You never even noticed ALL the words yet*
All Turing machine deciders only compute the mapping
from their finite string inputs to an accept state or reject
state on the basis that this input finite string specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that a finite string >>>>>>> description of a Turing machine specifies all semantics properties of >>>>>>> the machine it describes, including whether the machine it describes >>>>>>> halts when executed directly.
OK, so you are back to being a mere troll again.
Deciders only report on what they see and you are
far too ignorant to understand that they cannot be
correctly required to report om what they cannot see.
The only reason that the decider of a diagonal test case cannot
that which it needs to report is that it is always out of reach.
The behavior occurs some instructions /after/ the decision is made.
When a simulating decider follow the simulation, it might
trace 4000 instructions and then make a decision.
But the diagonal test case builds on those 4000 instructions;
its behavior is not lpayed out until, say, 4005 instructions.
Counter-factual and you utterly refuse to pay enough
attention to see this.
HHH sees that its own input cannot possibly reach
past its call to HHH(DD) even when infinitely
simulated.
I am beginning to think that you are trying to get
away with playing me.
If you think I'm wrong, it is very simple to prove it.
Not if you are actually playing me.
A confident genius wouldn't be fazed in this way.
I say that your abandoned simulation of DD is halting.
If you show that it's non-halting, what else is there to say?
On 10/7/2025 10:50 PM, Kaz Kylheku wrote:
If you think I'm wrong, it is very simple to prove it.
Not if you are actually playing me.
A confident genius wouldn't be fazed in this way.
A confident genius would not waste time being played.
I say that your abandoned simulation of DD is halting.
Yes and you do that by making sure to not understand
a single detail of the execution trace that proves you wrong.
On 2025-10-08, olcott <polcott333@gmail.com> wrote:
On 10/7/2025 10:50 PM, Kaz Kylheku wrote:
If you think I'm wrong, it is very simple to prove it.
Not if you are actually playing me.
A confident genius wouldn't be fazed in this way.
A confident genius would not waste time being played.
Since you've been at this 21 years since 2004 and counting, you're
either not a confident genius, or you are not being played.
Thus it is iron-clad that you are not being played.
I say that your abandoned simulation of DD is halting.
Yes and you do that by making sure to not understand
a single detail of the execution trace that proves you wrong.
I cannot be convinced by anything other than that trace.
On 10/7/2025 9:56 PM, dbush wrote:
On 10/7/2025 10:49 PM, olcott wrote:
On 10/7/2025 9:09 PM, dbush wrote:
On 10/7/2025 10:07 PM, olcott wrote:
But undecidable decision problems especially because
of self-contradiction have no actual value. They are vacuous.
False.-a If those decision problems contain a false assumption,
such as that a total halt decider exists, it proves that
assumption false.
Not at all. A total decider may exist under a different
set of assumptions.
On 10/7/2025 3:26 PM, Kaz Kylheku wrote:
The only reason that the decider of a diagonal test case cannot see
that which it needs to report is that it is always out of reach.
The behavior occurs some instructions /after/ the decision is made.
When a simulating decider follows the simulation, it might trace 4000
instructions and then make a decision.
But the diagonal test case builds on those 4000 instructions;
its behavior is not played out until, say, 4005 instructions.
Counter-factual and you utterly refuse to pay enough attention to see
this.
No, that is actually your point.
HHH sees that its own input cannot possibly reach past its call to
HHH(DD) even when infinitely simulated.
No. It sees that the diagonal program of a UTM doesn't halt, when it
should see that the full simulation of *this*, its own diagonal
program, halts.
This causes HHH to abort the simulation of its input.
The fact that the input calls this aborting HHH makes the input halt.
This causes every other simulation to immediately stop because the ONLY
thing that was driving them was HHH simulating its own DD.
Not simulating doesn't mean shit about the input.
On 10/7/2025 9:56 PM, dbush wrote:
On 10/7/2025 10:49 PM, olcott wrote:
On 10/7/2025 9:09 PM, dbush wrote:
On 10/7/2025 10:07 PM, olcott wrote:
On 10/7/2025 8:53 PM, dbush wrote:
On 10/7/2025 9:45 PM, olcott wrote:
On 10/7/2025 8:32 PM, dbush wrote:
On 10/7/2025 9:21 PM, olcott wrote:
On 10/7/2025 3:25 PM, dbush wrote:
On 10/7/2025 4:22 PM, olcott wrote:
On 10/7/2025 3:14 PM, dbush wrote:
On 10/7/2025 4:05 PM, olcott wrote:
On 10/7/2025 2:55 PM, dbush wrote:
On 10/7/2025 3:53 PM, olcott wrote:
On 10/7/2025 2:34 PM, dbush wrote:
On 10/7/2025 3:32 PM, olcott wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote:
On 10/7/2025 1:24 PM, Richard Heathfield wrote: >>>>>>>>>>>>>>>>>>>>>> On 07/10/2025 19:14, olcott wrote: >>>>>>>>>>>>>>>>>>>>>>> No. You didn't pay close enough attention *to my >>>>>>>>>>>>>>>>>>>>>>> exact words*
Vague again, I see. But I expect you meant to say >>>>>>>>>>>>>>>>>>>>>> that your exact words mean that HHH has an excuse >>>>>>>>>>>>>>>>>>>>>> for getting it wrong.
And so it does, but the excuse you claim is >>>>>>>>>>>>>>>>>>>>>> nowhere near as convincing as the real reason (for >>>>>>>>>>>>>>>>>>>>>> which we have a proof).
*Anyone that totally understands these exact words* >>>>>>>>>>>>>>>>>>>>> *understands that I proved the halting problem is >>>>>>>>>>>>>>>>>>>>> incorrect*
<repeat of previously refuted point> >>>>>>>>>>>>>>>>>>>>>
Repeating a previously refuted point is less than no >>>>>>>>>>>>>>>>>>>> rebuttal.
*You never even noticed ALL the words yet* >>>>>>>>>>>>>>>>>>>
All Turing machine deciders only compute the mapping >>>>>>>>>>>>>>>>>>> from their finite string inputs to an accept state or >>>>>>>>>>>>>>>>>>> reject
state on the basis that this input finite string >>>>>>>>>>>>>>>>>>> specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words that >>>>>>>>>>>>>>>>>> a finite string description of a Turing machine >>>>>>>>>>>>>>>>>> specifies all semantics properties of the machine it >>>>>>>>>>>>>>>>>> describes, including whether the machine it describes >>>>>>>>>>>>>>>>>> halts when executed directly.
OK, so you are back to being a mere troll again. >>>>>>>>>>>>>>>>>
That you don't understand the meaning of the above words >>>>>>>>>>>>>>>> is not my error.
Deciders only report on what they see and you are >>>>>>>>>>>>>>>>> far too ignorant to understand that they cannot be >>>>>>>>>>>>>>>>> correctly required to report om what they cannot see. >>>>>>>>>>>>>>>>>
In other words, you agree with Turning and Linz that >>>>>>>>>>>>>>>> these requirements cannot be satisfied:
And likewise no Turing machine can give birth
to a healthy baby boy. Expecting it to do this
is kind of nuts.
So we're in agreement: no Turing machine exists that can >>>>>>>>>>>>>> tell if any arbitrary Turing machine X with input Y will >>>>>>>>>>>>>> halt when executed directly.
For the same reason that it cannot give birth
to a Healthy baby boy. Requiring the logically
impossible is merely an incorrect requirement.
So it seems by "incorrect requirement" you mean "requirement >>>>>>>>>>>> that can't be satisfied".
Let's clarify that term.
Definition: Mythic Number
An integer N such that N > 5 and N < 2
Is the above an incorrect requirement?
Yes that is an incorrect requirement.
It is not the stupid misnomer of "undecidable"
There is nothing wrong with the above definition.-a It is clear >>>>>>>>>> and concise.-a It just can't be satisfied.
I am stipulating that any question defined to
have no correct answer is an incorrect question.
We're not talking about incorrect questions.-a We're talking
about incorrect requirements, i.e. requirements that cannot be >>>>>>>> satisfied.
And Turing and Linz proved that the following is (by your
definition) an incorrect requirement:
Given any algorithm (i.e. a fixed immutable sequence of
instructions) X described as <X> with input Y:
A solution to the halting problem is an algorithm H that
computes the following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly >>>>>>>> (<X>,Y) maps to 0 if and only if X(Y) does not halt when
executed directly
Sure and: What time is it (yes or no) is
an equally undecidable decision problem.
The undecidable problem related to the halting problem contains
the assumption that the above requirements can be met.-a The fact >>>>>> that it is an undecidable problem is what proves that assumption
false.
All undecidable decision problems are merely yes/no
questions framed such a correct solution is not possible.
And if such questions contain an implicit assumption, for example
that the below requirements can be satisfied, that proves the
assumption false.-a Which is precisely what Turing and Linz did.
But undecidable decision problems especially because
of self-contradiction have no actual value. They are vacuous.
False.-a If those decision problems contain a false assumption, such as
that a total halt decider exists, it proves that assumption false.
Not at all. A total decider may exist under different
set of assumptions.
On 10/7/2025 11:37 PM, olcott wrote:
On 10/7/2025 9:56 PM, dbush wrote:
On 10/7/2025 10:49 PM, olcott wrote:
On 10/7/2025 9:09 PM, dbush wrote:
On 10/7/2025 10:07 PM, olcott wrote:
On 10/7/2025 8:53 PM, dbush wrote:
On 10/7/2025 9:45 PM, olcott wrote:
On 10/7/2025 8:32 PM, dbush wrote:
On 10/7/2025 9:21 PM, olcott wrote:
On 10/7/2025 3:25 PM, dbush wrote:
On 10/7/2025 4:22 PM, olcott wrote:
On 10/7/2025 3:14 PM, dbush wrote:
On 10/7/2025 4:05 PM, olcott wrote:
On 10/7/2025 2:55 PM, dbush wrote:
On 10/7/2025 3:53 PM, olcott wrote:
On 10/7/2025 2:34 PM, dbush wrote:
On 10/7/2025 3:32 PM, olcott wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote: >>>>>>>>>>>>>>>>>>>>>> On 10/7/2025 1:24 PM, Richard Heathfield wrote: >>>>>>>>>>>>>>>>>>>>>>> On 07/10/2025 19:14, olcott wrote: >>>>>>>>>>>>>>>>>>>>>>>> No. You didn't pay close enough attention *to my >>>>>>>>>>>>>>>>>>>>>>>> exact words*
Vague again, I see. But I expect you meant to say >>>>>>>>>>>>>>>>>>>>>>> that your exact words mean that HHH has an excuse >>>>>>>>>>>>>>>>>>>>>>> for getting it wrong.
And so it does, but the excuse you claim is >>>>>>>>>>>>>>>>>>>>>>> nowhere near as convincing as the real reason >>>>>>>>>>>>>>>>>>>>>>> (for which we have a proof).
*Anyone that totally understands these exact words* >>>>>>>>>>>>>>>>>>>>>> *understands that I proved the halting problem is >>>>>>>>>>>>>>>>>>>>>> incorrect*
<repeat of previously refuted point> >>>>>>>>>>>>>>>>>>>>>>
Repeating a previously refuted point is less than >>>>>>>>>>>>>>>>>>>>> no rebuttal.
*You never even noticed ALL the words yet* >>>>>>>>>>>>>>>>>>>>
All Turing machine deciders only compute the mapping >>>>>>>>>>>>>>>>>>>> from their finite string inputs to an accept state >>>>>>>>>>>>>>>>>>>> or reject
state on the basis that this input finite string >>>>>>>>>>>>>>>>>>>> specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words >>>>>>>>>>>>>>>>>>> that a finite string description of a Turing machine >>>>>>>>>>>>>>>>>>> specifies all semantics properties of the machine it >>>>>>>>>>>>>>>>>>> describes, including whether the machine it describes >>>>>>>>>>>>>>>>>>> halts when executed directly.
OK, so you are back to being a mere troll again. >>>>>>>>>>>>>>>>>>
That you don't understand the meaning of the above >>>>>>>>>>>>>>>>> words is not my error.
Deciders only report on what they see and you are >>>>>>>>>>>>>>>>>> far too ignorant to understand that they cannot be >>>>>>>>>>>>>>>>>> correctly required to report om what they cannot see. >>>>>>>>>>>>>>>>>>
In other words, you agree with Turning and Linz that >>>>>>>>>>>>>>>>> these requirements cannot be satisfied:
And likewise no Turing machine can give birth
to a healthy baby boy. Expecting it to do this >>>>>>>>>>>>>>>> is kind of nuts.
So we're in agreement: no Turing machine exists that can >>>>>>>>>>>>>>> tell if any arbitrary Turing machine X with input Y will >>>>>>>>>>>>>>> halt when executed directly.
For the same reason that it cannot give birth
to a Healthy baby boy. Requiring the logically
impossible is merely an incorrect requirement.
So it seems by "incorrect requirement" you mean
"requirement that can't be satisfied".
Let's clarify that term.
Definition: Mythic Number
An integer N such that N > 5 and N < 2
Is the above an incorrect requirement?
Yes that is an incorrect requirement.
It is not the stupid misnomer of "undecidable"
There is nothing wrong with the above definition.-a It is >>>>>>>>>>> clear and concise.-a It just can't be satisfied.
I am stipulating that any question defined to
have no correct answer is an incorrect question.
We're not talking about incorrect questions.-a We're talking >>>>>>>>> about incorrect requirements, i.e. requirements that cannot be >>>>>>>>> satisfied.
And Turing and Linz proved that the following is (by your
definition) an incorrect requirement:
Given any algorithm (i.e. a fixed immutable sequence of
instructions) X described as <X> with input Y:
A solution to the halting problem is an algorithm H that
computes the following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly >>>>>>>>> (<X>,Y) maps to 0 if and only if X(Y) does not halt when
executed directly
Sure and: What time is it (yes or no) is
an equally undecidable decision problem.
The undecidable problem related to the halting problem contains >>>>>>> the assumption that the above requirements can be met.-a The fact >>>>>>> that it is an undecidable problem is what proves that assumption >>>>>>> false.
All undecidable decision problems are merely yes/no
questions framed such a correct solution is not possible.
And if such questions contain an implicit assumption, for example
that the below requirements can be satisfied, that proves the
assumption false.-a Which is precisely what Turing and Linz did.
But undecidable decision problems especially because
of self-contradiction have no actual value. They are vacuous.
False.-a If those decision problems contain a false assumption, such
as that a total halt decider exists, it proves that assumption false.
Not at all. A total decider may exist under different
set of assumptions.
The existence of a total decider *is* the assumption. That's what
allows the undecidable problem to happen.
Am Tue, 07 Oct 2025 15:39:54 -0500 schrieb olcott:
On 10/7/2025 3:26 PM, Kaz Kylheku wrote:
The only reason that the decider of a diagonal test case cannot see
that which it needs to report is that it is always out of reach.
The behavior occurs some instructions /after/ the decision is made.
When a simulating decider follows the simulation, it might trace 4000
instructions and then make a decision.
But the diagonal test case builds on those 4000 instructions;
its behavior is not played out until, say, 4005 instructions.
Counter-factual and you utterly refuse to pay enough attention to see
this.
No, that is actually your point.
HHH sees that its own input cannot possibly reach past its call to
HHH(DD) even when infinitely simulated.
No. It sees that the diagonal program of a UTM doesn't halt, when it
should see that the full simulation of *this*, its own diagonal
program, halts.
This causes HHH to abort the simulation of its input.
The fact that the input calls this aborting HHH makes the input halt.
This causes every other simulation to immediately stop because the ONLY
thing that was driving them was HHH simulating its own DD.
Not simulating doesn't mean shit about the input.
On 10/8/2025 4:39 PM, dbush wrote:
On 10/7/2025 11:37 PM, olcott wrote:
On 10/7/2025 9:56 PM, dbush wrote:
On 10/7/2025 10:49 PM, olcott wrote:
On 10/7/2025 9:09 PM, dbush wrote:
On 10/7/2025 10:07 PM, olcott wrote:
On 10/7/2025 8:53 PM, dbush wrote:
On 10/7/2025 9:45 PM, olcott wrote:
On 10/7/2025 8:32 PM, dbush wrote:
On 10/7/2025 9:21 PM, olcott wrote:
On 10/7/2025 3:25 PM, dbush wrote:
On 10/7/2025 4:22 PM, olcott wrote:
On 10/7/2025 3:14 PM, dbush wrote:
On 10/7/2025 4:05 PM, olcott wrote:
On 10/7/2025 2:55 PM, dbush wrote:
On 10/7/2025 3:53 PM, olcott wrote:
On 10/7/2025 2:34 PM, dbush wrote:
On 10/7/2025 3:32 PM, olcott wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote:
On 10/7/2025 1:40 PM, dbush wrote:
On 10/7/2025 2:36 PM, olcott wrote: >>>>>>>>>>>>>>>>>>>>>>> On 10/7/2025 1:24 PM, Richard Heathfield wrote: >>>>>>>>>>>>>>>>>>>>>>>> On 07/10/2025 19:14, olcott wrote: >>>>>>>>>>>>>>>>>>>>>>>>> No. You didn't pay close enough attention *to >>>>>>>>>>>>>>>>>>>>>>>>> my exact words*
Vague again, I see. But I expect you meant to >>>>>>>>>>>>>>>>>>>>>>>> say that your exact words mean that HHH has an >>>>>>>>>>>>>>>>>>>>>>>> excuse for getting it wrong.
And so it does, but the excuse you claim is >>>>>>>>>>>>>>>>>>>>>>>> nowhere near as convincing as the real reason >>>>>>>>>>>>>>>>>>>>>>>> (for which we have a proof).
*Anyone that totally understands these exact words* >>>>>>>>>>>>>>>>>>>>>>> *understands that I proved the halting problem is >>>>>>>>>>>>>>>>>>>>>>> incorrect*
<repeat of previously refuted point> >>>>>>>>>>>>>>>>>>>>>>>
Repeating a previously refuted point is less than >>>>>>>>>>>>>>>>>>>>>> no rebuttal.
*You never even noticed ALL the words yet* >>>>>>>>>>>>>>>>>>>>>
All Turing machine deciders only compute the mapping >>>>>>>>>>>>>>>>>>>>> from their finite string inputs to an accept state >>>>>>>>>>>>>>>>>>>>> or reject
state on the basis that this input finite string >>>>>>>>>>>>>>>>>>>>> specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words >>>>>>>>>>>>>>>>>>>> that a finite string description of a Turing machine >>>>>>>>>>>>>>>>>>>> specifies all semantics properties of the machine it >>>>>>>>>>>>>>>>>>>> describes, including whether the machine it >>>>>>>>>>>>>>>>>>>> describes halts when executed directly. >>>>>>>>>>>>>>>>>>>>
OK, so you are back to being a mere troll again. >>>>>>>>>>>>>>>>>>>
That you don't understand the meaning of the above >>>>>>>>>>>>>>>>>> words is not my error.
Deciders only report on what they see and you are >>>>>>>>>>>>>>>>>>> far too ignorant to understand that they cannot be >>>>>>>>>>>>>>>>>>> correctly required to report om what they cannot see. >>>>>>>>>>>>>>>>>>>
In other words, you agree with Turning and Linz that >>>>>>>>>>>>>>>>>> these requirements cannot be satisfied:
And likewise no Turing machine can give birth >>>>>>>>>>>>>>>>> to a healthy baby boy. Expecting it to do this >>>>>>>>>>>>>>>>> is kind of nuts.
So we're in agreement: no Turing machine exists that can >>>>>>>>>>>>>>>> tell if any arbitrary Turing machine X with input Y will >>>>>>>>>>>>>>>> halt when executed directly.
For the same reason that it cannot give birth
to a Healthy baby boy. Requiring the logically
impossible is merely an incorrect requirement.
So it seems by "incorrect requirement" you mean
"requirement that can't be satisfied".
Let's clarify that term.
Definition: Mythic Number
An integer N such that N > 5 and N < 2
Is the above an incorrect requirement?
Yes that is an incorrect requirement.
It is not the stupid misnomer of "undecidable"
There is nothing wrong with the above definition.-a It is >>>>>>>>>>>> clear and concise.-a It just can't be satisfied.
I am stipulating that any question defined to
have no correct answer is an incorrect question.
We're not talking about incorrect questions.-a We're talking >>>>>>>>>> about incorrect requirements, i.e. requirements that cannot be >>>>>>>>>> satisfied.
And Turing and Linz proved that the following is (by your >>>>>>>>>> definition) an incorrect requirement:
Given any algorithm (i.e. a fixed immutable sequence of
instructions) X described as <X> with input Y:
A solution to the halting problem is an algorithm H that
computes the following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed >>>>>>>>>> directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when
executed directly
Sure and: What time is it (yes or no) is
an equally undecidable decision problem.
The undecidable problem related to the halting problem contains >>>>>>>> the assumption that the above requirements can be met.-a The fact >>>>>>>> that it is an undecidable problem is what proves that assumption >>>>>>>> false.
All undecidable decision problems are merely yes/no
questions framed such a correct solution is not possible.
And if such questions contain an implicit assumption, for example >>>>>> that the below requirements can be satisfied, that proves the
assumption false.-a Which is precisely what Turing and Linz did.
But undecidable decision problems especially because
of self-contradiction have no actual value. They are vacuous.
False.-a If those decision problems contain a false assumption, such
as that a total halt decider exists, it proves that assumption false.
Not at all. A total decider may exist under different
set of assumptions.
The existence of a total decider *is* the assumption.-a That's what
allows the undecidable problem to happen.
As long as we understand that all deciders only compute
the mapping from their inputs
and everything else is
out-of-scope then HHH(DD) and the Linz embedded_H
both correctly report on the actual behavior that
their actual input actually specifies.
On 10/8/2025 10:52 PM, olcott wrote:
On 10/8/2025 4:39 PM, dbush wrote:
On 10/7/2025 11:37 PM, olcott wrote:
On 10/7/2025 9:56 PM, dbush wrote:
On 10/7/2025 10:49 PM, olcott wrote:
On 10/7/2025 9:09 PM, dbush wrote:
On 10/7/2025 10:07 PM, olcott wrote:
On 10/7/2025 8:53 PM, dbush wrote:
On 10/7/2025 9:45 PM, olcott wrote:
On 10/7/2025 8:32 PM, dbush wrote:
On 10/7/2025 9:21 PM, olcott wrote:
On 10/7/2025 3:25 PM, dbush wrote:
On 10/7/2025 4:22 PM, olcott wrote:
On 10/7/2025 3:14 PM, dbush wrote:
On 10/7/2025 4:05 PM, olcott wrote:
On 10/7/2025 2:55 PM, dbush wrote:
On 10/7/2025 3:53 PM, olcott wrote:
On 10/7/2025 2:34 PM, dbush wrote:
On 10/7/2025 3:32 PM, olcott wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote: >>>>>>>>>>>>>>>>>>>>>> On 10/7/2025 1:40 PM, dbush wrote: >>>>>>>>>>>>>>>>>>>>>>> On 10/7/2025 2:36 PM, olcott wrote: >>>>>>>>>>>>>>>>>>>>>>>> On 10/7/2025 1:24 PM, Richard Heathfield wrote: >>>>>>>>>>>>>>>>>>>>>>>>> On 07/10/2025 19:14, olcott wrote: >>>>>>>>>>>>>>>>>>>>>>>>>> No. You didn't pay close enough attention *to >>>>>>>>>>>>>>>>>>>>>>>>>> my exact words*
Vague again, I see. But I expect you meant to >>>>>>>>>>>>>>>>>>>>>>>>> say that your exact words mean that HHH has an >>>>>>>>>>>>>>>>>>>>>>>>> excuse for getting it wrong. >>>>>>>>>>>>>>>>>>>>>>>>>
And so it does, but the excuse you claim is >>>>>>>>>>>>>>>>>>>>>>>>> nowhere near as convincing as the real reason >>>>>>>>>>>>>>>>>>>>>>>>> (for which we have a proof). >>>>>>>>>>>>>>>>>>>>>>>>>
*Anyone that totally understands these exact words* >>>>>>>>>>>>>>>>>>>>>>>> *understands that I proved the halting problem >>>>>>>>>>>>>>>>>>>>>>>> is incorrect*
<repeat of previously refuted point> >>>>>>>>>>>>>>>>>>>>>>>>
Repeating a previously refuted point is less than >>>>>>>>>>>>>>>>>>>>>>> no rebuttal.
*You never even noticed ALL the words yet* >>>>>>>>>>>>>>>>>>>>>>
All Turing machine deciders only compute the mapping >>>>>>>>>>>>>>>>>>>>>> from their finite string inputs to an accept state >>>>>>>>>>>>>>>>>>>>>> or reject
state on the basis that this input finite string >>>>>>>>>>>>>>>>>>>>>> specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words >>>>>>>>>>>>>>>>>>>>> that a finite string description of a Turing >>>>>>>>>>>>>>>>>>>>> machine specifies all semantics properties of the >>>>>>>>>>>>>>>>>>>>> machine it describes, including whether the machine >>>>>>>>>>>>>>>>>>>>> it describes halts when executed directly. >>>>>>>>>>>>>>>>>>>>>
OK, so you are back to being a mere troll again. >>>>>>>>>>>>>>>>>>>>
That you don't understand the meaning of the above >>>>>>>>>>>>>>>>>>> words is not my error.
Deciders only report on what they see and you are >>>>>>>>>>>>>>>>>>>> far too ignorant to understand that they cannot be >>>>>>>>>>>>>>>>>>>> correctly required to report om what they cannot see. >>>>>>>>>>>>>>>>>>>>
In other words, you agree with Turning and Linz that >>>>>>>>>>>>>>>>>>> these requirements cannot be satisfied:
And likewise no Turing machine can give birth >>>>>>>>>>>>>>>>>> to a healthy baby boy. Expecting it to do this >>>>>>>>>>>>>>>>>> is kind of nuts.
So we're in agreement: no Turing machine exists that >>>>>>>>>>>>>>>>> can tell if any arbitrary Turing machine X with input Y >>>>>>>>>>>>>>>>> will halt when executed directly.
For the same reason that it cannot give birth
to a Healthy baby boy. Requiring the logically >>>>>>>>>>>>>>>> impossible is merely an incorrect requirement. >>>>>>>>>>>>>>>>
So it seems by "incorrect requirement" you mean >>>>>>>>>>>>>>> "requirement that can't be satisfied".
Let's clarify that term.
Definition: Mythic Number
An integer N such that N > 5 and N < 2
Is the above an incorrect requirement?
Yes that is an incorrect requirement.
It is not the stupid misnomer of "undecidable"
There is nothing wrong with the above definition.-a It is >>>>>>>>>>>>> clear and concise.-a It just can't be satisfied.
I am stipulating that any question defined to
have no correct answer is an incorrect question.
We're not talking about incorrect questions.-a We're talking >>>>>>>>>>> about incorrect requirements, i.e. requirements that cannot >>>>>>>>>>> be satisfied.
And Turing and Linz proved that the following is (by your >>>>>>>>>>> definition) an incorrect requirement:
Given any algorithm (i.e. a fixed immutable sequence of >>>>>>>>>>> instructions) X described as <X> with input Y:
A solution to the halting problem is an algorithm H that >>>>>>>>>>> computes the following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed >>>>>>>>>>> directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when >>>>>>>>>>> executed directly
Sure and: What time is it (yes or no) is
an equally undecidable decision problem.
The undecidable problem related to the halting problem contains >>>>>>>>> the assumption that the above requirements can be met.-a The >>>>>>>>> fact that it is an undecidable problem is what proves that
assumption false.
All undecidable decision problems are merely yes/no
questions framed such a correct solution is not possible.
And if such questions contain an implicit assumption, for example >>>>>>> that the below requirements can be satisfied, that proves the
assumption false.-a Which is precisely what Turing and Linz did. >>>>>>>
But undecidable decision problems especially because
of self-contradiction have no actual value. They are vacuous.
False.-a If those decision problems contain a false assumption, such >>>>> as that a total halt decider exists, it proves that assumption false. >>>>>
Not at all. A total decider may exist under different
set of assumptions.
The existence of a total decider *is* the assumption.-a That's what
allows the undecidable problem to happen.
As long as we understand that all deciders only compute
the mapping from their inputs
i.e. a machine description which is defined to have the semantic
properties of the machine it describes, including if that machine halts
when executed directly.
and everything else is
out-of-scope then HHH(DD) and the Linz embedded_H
both correctly report on the actual behavior that
their actual input actually specifies.
False, because the actual input, i.e. finite string DD, is the
description of machine DD and therefore specifies all semantic
properties of that machine including the fact that it halts when
executed directly, and HHH fails to report on that semantic property.
On 10/8/2025 4:39 AM, joes wrote:
Am Tue, 07 Oct 2025 15:39:54 -0500 schrieb olcott:
HHH sees that its own input cannot possibly reach past its call to
HHH(DD) even when infinitely simulated.
No. It sees that the diagonal program of a UTM doesn't halt, when it
should see that the full simulation of *this*, its own diagonal
program, halts.
Do you understand that this is true?
*Keep repeating unless aborted*
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
This causes HHH to abort the simulation of its input.
The fact that the input calls this aborting HHH makes the input halt.
So you want to change the concrete input to always be the diagonal
*template* of the partial simulator it is running in.
On 10/8/2025 10:51 PM, dbush wrote:
On 10/8/2025 10:52 PM, olcott wrote:
On 10/8/2025 4:39 PM, dbush wrote:
On 10/7/2025 11:37 PM, olcott wrote:
On 10/7/2025 9:56 PM, dbush wrote:
On 10/7/2025 10:49 PM, olcott wrote:
On 10/7/2025 9:09 PM, dbush wrote:
On 10/7/2025 10:07 PM, olcott wrote:
On 10/7/2025 8:53 PM, dbush wrote:
On 10/7/2025 9:45 PM, olcott wrote:
On 10/7/2025 8:32 PM, dbush wrote:
On 10/7/2025 9:21 PM, olcott wrote:
On 10/7/2025 3:25 PM, dbush wrote:
On 10/7/2025 4:22 PM, olcott wrote:
On 10/7/2025 3:14 PM, dbush wrote:
On 10/7/2025 4:05 PM, olcott wrote:
On 10/7/2025 2:55 PM, dbush wrote:
On 10/7/2025 3:53 PM, olcott wrote:
On 10/7/2025 2:34 PM, dbush wrote:
On 10/7/2025 3:32 PM, olcott wrote:
On 10/7/2025 2:08 PM, dbush wrote:
On 10/7/2025 3:04 PM, olcott wrote: >>>>>>>>>>>>>>>>>>>>>>> On 10/7/2025 1:40 PM, dbush wrote: >>>>>>>>>>>>>>>>>>>>>>>> On 10/7/2025 2:36 PM, olcott wrote: >>>>>>>>>>>>>>>>>>>>>>>>> On 10/7/2025 1:24 PM, Richard Heathfield wrote: >>>>>>>>>>>>>>>>>>>>>>>>>> On 07/10/2025 19:14, olcott wrote: >>>>>>>>>>>>>>>>>>>>>>>>>>> No. You didn't pay close enough attention *to >>>>>>>>>>>>>>>>>>>>>>>>>>> my exact words*
Vague again, I see. But I expect you meant to >>>>>>>>>>>>>>>>>>>>>>>>>> say that your exact words mean that HHH has an >>>>>>>>>>>>>>>>>>>>>>>>>> excuse for getting it wrong. >>>>>>>>>>>>>>>>>>>>>>>>>>
And so it does, but the excuse you claim is >>>>>>>>>>>>>>>>>>>>>>>>>> nowhere near as convincing as the real reason >>>>>>>>>>>>>>>>>>>>>>>>>> (for which we have a proof). >>>>>>>>>>>>>>>>>>>>>>>>>>
*Anyone that totally understands these exact >>>>>>>>>>>>>>>>>>>>>>>>> words*
*understands that I proved the halting problem >>>>>>>>>>>>>>>>>>>>>>>>> is incorrect*
<repeat of previously refuted point> >>>>>>>>>>>>>>>>>>>>>>>>>
Repeating a previously refuted point is less >>>>>>>>>>>>>>>>>>>>>>>> than no rebuttal.
*You never even noticed ALL the words yet* >>>>>>>>>>>>>>>>>>>>>>>
All Turing machine deciders only compute the mapping >>>>>>>>>>>>>>>>>>>>>>> from their finite string inputs to an accept >>>>>>>>>>>>>>>>>>>>>>> state or reject
state on the basis that this input finite string >>>>>>>>>>>>>>>>>>>>>>> specifies a
semantic or syntactic property.
And it is proven true by the meaning of the words >>>>>>>>>>>>>>>>>>>>>> that a finite string description of a Turing >>>>>>>>>>>>>>>>>>>>>> machine specifies all semantics properties of the >>>>>>>>>>>>>>>>>>>>>> machine it describes, including whether the >>>>>>>>>>>>>>>>>>>>>> machine it describes halts when executed directly. >>>>>>>>>>>>>>>>>>>>>>
OK, so you are back to being a mere troll again. >>>>>>>>>>>>>>>>>>>>>
That you don't understand the meaning of the above >>>>>>>>>>>>>>>>>>>> words is not my error.
Deciders only report on what they see and you are >>>>>>>>>>>>>>>>>>>>> far too ignorant to understand that they cannot be >>>>>>>>>>>>>>>>>>>>> correctly required to report om what they cannot see. >>>>>>>>>>>>>>>>>>>>>
In other words, you agree with Turning and Linz that >>>>>>>>>>>>>>>>>>>> these requirements cannot be satisfied: >>>>>>>>>>>>>>>>>>>>
And likewise no Turing machine can give birth >>>>>>>>>>>>>>>>>>> to a healthy baby boy. Expecting it to do this >>>>>>>>>>>>>>>>>>> is kind of nuts.
So we're in agreement: no Turing machine exists that >>>>>>>>>>>>>>>>>> can tell if any arbitrary Turing machine X with input >>>>>>>>>>>>>>>>>> Y will halt when executed directly.
For the same reason that it cannot give birth >>>>>>>>>>>>>>>>> to a Healthy baby boy. Requiring the logically >>>>>>>>>>>>>>>>> impossible is merely an incorrect requirement. >>>>>>>>>>>>>>>>>
So it seems by "incorrect requirement" you mean >>>>>>>>>>>>>>>> "requirement that can't be satisfied".
Let's clarify that term.
Definition: Mythic Number
An integer N such that N > 5 and N < 2
Is the above an incorrect requirement?
Yes that is an incorrect requirement.
It is not the stupid misnomer of "undecidable"
There is nothing wrong with the above definition.-a It is >>>>>>>>>>>>>> clear and concise.-a It just can't be satisfied.
I am stipulating that any question defined to
have no correct answer is an incorrect question.
We're not talking about incorrect questions.-a We're talking >>>>>>>>>>>> about incorrect requirements, i.e. requirements that cannot >>>>>>>>>>>> be satisfied.
And Turing and Linz proved that the following is (by your >>>>>>>>>>>> definition) an incorrect requirement:
Given any algorithm (i.e. a fixed immutable sequence of >>>>>>>>>>>> instructions) X described as <X> with input Y:
A solution to the halting problem is an algorithm H that >>>>>>>>>>>> computes the following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed >>>>>>>>>>>> directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when >>>>>>>>>>>> executed directly
Sure and: What time is it (yes or no) is
an equally undecidable decision problem.
The undecidable problem related to the halting problem
contains the assumption that the above requirements can be >>>>>>>>>> met.-a The fact that it is an undecidable problem is what >>>>>>>>>> proves that assumption false.
All undecidable decision problems are merely yes/no
questions framed such a correct solution is not possible.
And if such questions contain an implicit assumption, for
example that the below requirements can be satisfied, that
proves the assumption false.-a Which is precisely what Turing and >>>>>>>> Linz did.
But undecidable decision problems especially because
of self-contradiction have no actual value. They are vacuous.
False.-a If those decision problems contain a false assumption,
such as that a total halt decider exists, it proves that
assumption false.
Not at all. A total decider may exist under different
set of assumptions.
The existence of a total decider *is* the assumption.-a That's what
allows the undecidable problem to happen.
As long as we understand that all deciders only compute
the mapping from their inputs
i.e. a machine description which is defined to have the semantic
properties of the machine it describes, including if that machine
halts when executed directly.
You have never understood this yet.
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞, // accept state
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn // reject state
*Keep repeating unless aborted*
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
and everything else is
out-of-scope then HHH(DD) and the Linz embedded_H
both correctly report on the actual behavior that
their actual input actually specifies.
False, because the actual input, i.e. finite string DD, is the
description of machine DD and therefore specifies all semantic
properties of that machine including the fact that it halts when
executed directly, and HHH fails to report on that semantic property.
Am Wed, 08 Oct 2025 22:37:17 -0500 schrieb olcott:
On 10/8/2025 4:39 AM, joes wrote:
Am Tue, 07 Oct 2025 15:39:54 -0500 schrieb olcott:
HHH sees that its own input cannot possibly reach past its call to
HHH(DD) even when infinitely simulated.
No. It sees that the diagonal program of a UTM doesn't halt, when it
should see that the full simulation of *this*, its own diagonal
program, halts.
Do you understand that this is true?
*Keep repeating unless aborted*
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
It is not true. embedded_H will abort after two recursions, as its programming tells it to.
Still, H ⟨Ĥ⟩ will stop simulating its
input before that happens, exactly like embedded_H in its input.
If the input didn't abort, the simulator wouldn't abort either;
they are constructed to be the same.
Imagine you have the tentative "decider" HHH_no_abort and want to
make its diagonal program halt, so that HHH_no_abort can be
terminating. Let's change the version of HHH_no_abort inside DDD_no_abort
into HHH, which aborts after two levels of simulation, but still
run it with our original HHH_no_abort. Now it halts!
This causes HHH to abort the simulation of its input.
The fact that the input calls this aborting HHH makes the input halt.
So you want to change the concrete input to always be the diagonal
*template* of the partial simulator it is running in. You argue that
changing HHH to not abort makes the corresponding diagonal program
DD_no_abort not halt (which is correct). But then you turn around
and say that changing HHH_no_abort back into HHH *does not* make
*its* diagonal program DD halt.
You can't have it both ways: either you keep the diagonal relationship
while modifying the diagonal program, or you fix the input while
changing simulators.
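The pairing being described can be modelled with a toy C sketch, under the simplifying (and admittedly crude) assumption that directly calling a program stands in for simulating it without limit. HHH, HHH_no_abort, DD and DD_no_abort below are illustrative stand-ins, not the definitions from Halt7.c.

#include <stdio.h>

typedef void (*Program)(void);

/* Toy stand-ins, invented for illustration: HHH "aborts its simulation"
 * and reports 0 (does not halt); HHH_no_abort "simulates without limit",
 * modelled here by directly running its input, so it returns only if the
 * input halts on its own. */
int HHH(Program p)          { (void)p; return 0; }
int HHH_no_abort(Program p) { p(); return 1; }

/* Diagonal program built on the aborting decider: HHH(DD) returns 0,
 * so DD falls through and halts when executed directly. */
void DD(void) {
    if (HHH(DD))
        for (;;) ;
}

/* Diagonal program built on the non-aborting simulator: running it
 * directly would recurse without bound (in the idealized model it never
 * halts), so main() deliberately does not call it. */
void DD_no_abort(void) {
    if (HHH_no_abort(DD_no_abort))
        for (;;) ;
}

int main(void) {
    DD();   /* returns immediately: DD halts when executed directly */
    printf("HHH(DD) = %d (yet DD halted)\n", HHH(DD));
    printf("HHH_no_abort(DD) = %d (the cross pairing halts)\n",
           HHH_no_abort(DD));
    return 0;
}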
On 10/9/2025 12:10 AM, olcott wrote:
⟨M⟩ Turing machine description of M.
You have never understood this yet.
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞, // accept state
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn // reject state
The above is ⟨Ĥ⟩
*Keep repeating unless aborted*
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
The above is *not* ⟨Ĥ⟩ but ⟨Ĥn⟩ and therefore irrelevant to ⟨Ĥ⟩
On 10/9/2025 5:46 AM, dbush wrote:
On 10/9/2025 12:10 AM, olcott wrote:
⟨M⟩ Turing machine description of M.
⊢* an arbitrary number of moves where a
move is the execution of one TM instruction.
∞ the traditional infinite loop at the accept state.
Ĥ.embedded_H is a simulating partial halt decider.
You have never understood this yet.
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.∞, // accept state
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn // reject state
The above is ⟨Ĥ⟩
*Keep repeating unless aborted*
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
The above is *not* ⟨Ĥ⟩ but ⟨Ĥn⟩ and therefore irrelevant to ⟨Ĥ⟩
When H is a simulating partial halt decider
both
are the Turing machine template of Ĥ.
⟨Ĥ⟩ means
the machine description of Ĥ.
On 10/9/2025 5:43 AM, joes wrote:
Am Wed, 08 Oct 2025 22:37:17 -0500 schrieb olcott:
On 10/8/2025 4:39 AM, joes wrote:
No. It sees that the diagonal program of a UTM doesn't halt, when it
should see that the full simulation of *this*, its own diagonal
program, halts.
Do you understand that this is true?
*Keep repeating unless aborted*
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
It is not true. embedded_H will abort after two recursions, as its
programming tells it to.
Still, H ⟨Ĥ⟩ will stop simulating its input before that happens,
exactly like embedded_H in its input.
If the input didn't abort, the simulator wouldn't abort either; they
are constructed to be the same.
Imagine you have the tentative "decider" HHH_no_abort and want to make
its diagonal program halt, so that HHH_no_abort can be terminating.
Let's change the version of HHH_no_abort inside DDD_no_abort into HHH,
which aborts after two levels of simulation, but still run it with our
original HHH_no_abort. Now it halts!
The fact that the input calls this aborting HHH makes the input halt.
So you want to change the concrete input to always be the diagonal
*template* of the partial simulator it is running in. You argue that
changing HHH to not abort makes the corresponding diagonal program
DD_no_abort not halt (which is correct). But then you turn around and
say that changing HHH_no_abort back into HHH *does not* make *its*
diagonal program DD halt.
You can't have it both ways: either you keep the diagonal relationship
while modifying the diagonal program, or you fix the input while
changing simulators.
You changed the question and then answered the changed question.
Please try again.
Am Thu, 09 Oct 2025 07:52:01 -0500 schrieb olcott:
On 10/9/2025 5:43 AM, joes wrote:
Am Wed, 08 Oct 2025 22:37:17 -0500 schrieb olcott:
On 10/8/2025 4:39 AM, joes wrote:
No. It sees that the diagonal program of a UTM doesn't halt, when it
should see that the full simulation of *this*, its own diagonal
program, halts.
Do you understand that this is true?
*Keep repeating unless aborted*
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
It is not true. embedded_H will abort after two recursions, as its
programming tells it to.
Or is embedded_H a pure simulator?
Still, H ⟨Ĥ⟩ will stop simulating its input before that happens,
exactly like embedded_H in its input.
If the input didn't abort, the simulator wouldn't abort either; they
are constructed to be the same.
Imagine you have the tentative "decider" HHH_no_abort and want to make
its diagonal program halt, so that HHH_no_abort can be terminating.
Let's change the version of HHH_no_abort inside DDD_no_abort into HHH,
which aborts after two levels of simulation, but still run it with our
original HHH_no_abort. Now it halts!
The fact that the input calls this aborting HHH makes the input halt.
So you want to change the concrete input to always be the diagonal
*template* of the partial simulator it is running in. You argue that
changing HHH to not abort makes the corresponding diagonal program
DD_no_abort not halt (which is correct). But then you turn around and
say that changing HHH_no_abort back into HHH *does not* make *its*
diagonal program DD halt.
You can't have it both ways: either you keep the diagonal relationship
while modifying the diagonal program, or you fix the input while
changing simulators.
You changed the question and then answered the changed question.
Please try again.
I did not. I said that no, I do not understand it as true. Your
question presupposes that what follows it is true, which is false.
On 10/9/2025 2:43 PM, joes wrote:
On Thu, 09 Oct 2025 07:52:01 -0500, olcott wrote:
On 10/9/2025 5:43 AM, joes wrote:
Still, H ⟨Ĥ⟩ will stop simulating its input before that happens,
exactly like embedded_H in its input.
If the input didn't abort, the simulator wouldn't abort either; they
are constructed to be the same.
Imagine you have the tentative "decider" HHH_no_abort and want to
make its diagonal program halt, so that HHH_no_abort can be
terminating. Let's change the version of HHH_no_abort inside
DDD_no_abort into HHH, which aborts after two levels of simulation,
but still run it with our original HHH_no_abort. Now it halts!
So you want to change the concrete input to always be the diagonal
*template* of the partial simulator it is running in. You argue that
changing HHH to not abort makes the corresponding diagonal program
DD_no_abort not halt (which is correct). But then you turn around and
say that changing HHH_no_abort back into HHH *does not* make *its*
diagonal program DD halt.
You can't have it both ways: either you keep the diagonal
relationship while modifying the diagonal program, or you fix the
input while changing simulators.
You changed the question and then answered the changed question.
Please try again.
I did not. I said that no, I do not understand it as true. Your
question falsely presupposes that what follows it is true.
Do you understand that if embedded_H was a UTM that Ĥ applied to ⟨Ĥ⟩
would never halt?
If you say yes then that does semantically entail this when embedded_H
is a simulating halt decider
*Keep repeating unless aborted*
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
On Thu, 09 Oct 2025 15:15:27 -0500, olcott wrote:
On 10/9/2025 2:43 PM, joes wrote:
On Thu, 09 Oct 2025 07:52:01 -0500, olcott wrote:
On 10/9/2025 5:43 AM, joes wrote:
Still, H ⟨Ĥ⟩ will stop simulating its input before that happens,
exactly like embedded_H in its input.
If the input didn't abort, the simulator wouldn't abort either; they
are constructed to be the same.
Imagine you have the tentative "decider" HHH_no_abort and want to
make its diagonal program halt, so that HHH_no_abort can be
terminating. Let's change the version of HHH_no_abort inside
DDD_no_abort into HHH, which aborts after two levels of simulation,
but still run it with our original HHH_no_abort. Now it halts!
So you want to change the concrete input to always be the diagonal
*template* of the partial simulator it is running in. You argue that
changing HHH to not abort makes the corresponding diagonal program
DD_no_abort not halt (which is correct). But then you turn around and
say that changing HHH_no_abort back into HHH *does not* make *its*
diagonal program DD halt.
You can't have it both ways: either you keep the diagonal
relationship while modifying the diagonal program, or you fix the
input while changing simulators.
You changed the question and then answered the changed question.
Please try again.
I did not. I said that no, I do not understand it as true. Your
question falsely presupposes that what follows it is true.
Do you understand that if embedded_H was a UTM that Ĥ applied to ⟨Ĥ⟩
would never halt?
If you say yes then that does semantically entail this when embedded_H
is a simulating halt decider
*Keep repeating unless aborted*
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
Nope. Let's reserve the names embedded_H for your partial simulator,
Ĥ for its diagonal program and call the modifications embedded_UTM
and Ĥ_UTM.
Ĥ ⟨Ĥ⟩ halts.
Ĥ_UTM ⟨Ĥ_UTM⟩ doesn't halt.
Ĥ ⟨Ĥ_UTM⟩ correctly returns "non-halting", but that's not the input we care about.
And finally, Ĥ_UTM ⟨Ĥ⟩ halts, proving the correct behaviour.
Ĥ_UTM not halting doesn't have anything to do with Ĥ.
On 10/9/2025 3:26 PM, joes wrote:
On Thu, 09 Oct 2025 15:15:27 -0500, olcott wrote:
On 10/9/2025 2:43 PM, joes wrote:
I did not. I said that no, I do not understand it as true. Your
question falsely presupposes that what follows it is true.
Do you understand that if embedded_H was a UTM that Ĥ applied to ⟨Ĥ⟩
would never halt?
…and every occurrence of embedded_H in the input were a UTM. That's a
different program from the input.
If you say yes then that does semantically entail this when embedded_H
is a simulating halt decider *Keep repeating unless aborted*
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
Nope. Let's reserve the names embedded_H for your partial simulator,
Ĥ for its diagonal program and call the modifications embedded_UTM and
Ĥ_UTM.
Ĥ ⟨Ĥ⟩ halts.
Ĥ_UTM ⟨Ĥ_UTM⟩ doesn't halt.
Ĥ ⟨Ĥ_UTM⟩ correctly returns "non-halting", but that's not the input we
care about.
And finally, Ĥ_UTM ⟨Ĥ⟩ halts, proving the correct behaviour.
Ĥ_UTM not halting doesn't have anything to do with Ĥ.
The criterion measure that embedded_H uses is what would the behavior be
if I was a UTM?
On Thu, 09 Oct 2025 16:28:51 -0500, olcott wrote:
On 10/9/2025 3:26 PM, joes wrote:
On Thu, 09 Oct 2025 15:15:27 -0500, olcott wrote:
On 10/9/2025 2:43 PM, joes wrote:
I did not. I said that no, I do not understand it as true. Your
question falsely presupposes that what follows it is true.
Do you understand that if embedded_H was a UTM that Ĥ applied to ⟨Ĥ⟩
would never halt?
…and every occurrence of embedded_H in the input were a UTM. That's a
different program from the input.
If you say yes then that does semantically entail this when embedded_H
is a simulating halt decider *Keep repeating unless aborted*
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
Nope. Let's reserve the names embedded_H for your partial simulator,
Ĥ for its diagonal program and call the modifications embedded_UTM and
Ĥ_UTM.
Ĥ ⟨Ĥ⟩ halts.
Ĥ_UTM ⟨Ĥ_UTM⟩ doesn't halt.
Ĥ ⟨Ĥ_UTM⟩ correctly returns "non-halting", but that's not the input we
care about.
And finally, Ĥ_UTM ⟨Ĥ⟩ halts, proving the correct behaviour.
Ĥ_UTM not halting doesn't have anything to do with Ĥ.
The criterion measure that embedded_H uses is what would the behavior be
if I was a UTM?
Then you admit H is deciding on a non-input.
On 10/9/2025 4:32 PM, joes wrote:
On Thu, 09 Oct 2025 16:28:51 -0500, olcott wrote:
On 10/9/2025 3:26 PM, joes wrote:
On Thu, 09 Oct 2025 15:15:27 -0500, olcott wrote:
On 10/9/2025 2:43 PM, joes wrote:
I did not. I said that no, I do not understand it as true. Your
question falsely presupposes that what follows it is true.
Do you understand that if embedded_H was a UTM that Ĥ applied to ⟨Ĥ⟩
would never halt?
…and every occurrence of embedded_H in the input were a UTM. That's a
different program from the input.
If you say yes then that does semantically entail this when embedded_H
is a simulating halt decider *Keep repeating unless aborted*
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
Nope. Let's reserve the names embedded_H for your partial simulator,
Ĥ for its diagonal program and call the modifications embedded_UTM and
Ĥ_UTM.
Ĥ ⟨Ĥ⟩ halts.
Ĥ_UTM ⟨Ĥ_UTM⟩ doesn't halt.
Ĥ ⟨Ĥ_UTM⟩ correctly returns "non-halting", but that's not the input we
care about.
And finally, Ĥ_UTM ⟨Ĥ⟩ halts, proving the correct behaviour.
Ĥ_UTM not halting doesn't have anything to do with Ĥ.
The criterion measure that embedded_H uses is what would the behavior be
if I was a UTM?
Then you admit H is deciding on a non-input.
Yes it is,
On 10/9/2025 6:25 PM, olcott wrote:
On 10/9/2025 4:32 PM, joes wrote:
On Thu, 09 Oct 2025 16:28:51 -0500, olcott wrote:
On 10/9/2025 3:26 PM, joes wrote:
On Thu, 09 Oct 2025 15:15:27 -0500, olcott wrote:
On 10/9/2025 2:43 PM, joes wrote:
I did not. I said that no, I do not understand it as true. Your
question falsely presupposes that what follows it is true.
Do you understand that if embedded_H was a UTM that Ĥ applied to ⟨Ĥ⟩
would never halt?
…and every occurrence of embedded_H in the input were a UTM. That's a
different program from the input.
If you say yes then that does semantically entail this when embedded_H
is a simulating halt decider *Keep repeating unless aborted*
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
Nope. Let's reserve the names embedded_H for your partial simulator,
Ĥ for its diagonal program and call the modifications embedded_UTM and
Ĥ_UTM.
Ĥ ⟨Ĥ⟩ halts.
Ĥ_UTM ⟨Ĥ_UTM⟩ doesn't halt.
Ĥ ⟨Ĥ_UTM⟩ correctly returns "non-halting", but that's not the input we
care about.
And finally, Ĥ_UTM ⟨Ĥ⟩ halts, proving the correct behaviour.
Ĥ_UTM not halting doesn't have anything to do with Ĥ.
The criterion measure that embedded_H uses is what would the behavior be
if I was a UTM?
Then you admit H is deciding on a non-input.
Yes it is,
On 10/9/2025 5:46 PM, dbush wrote:
On 10/9/2025 6:25 PM, olcott wrote:
On 10/9/2025 4:32 PM, joes wrote:
On Thu, 09 Oct 2025 16:28:51 -0500, olcott wrote:
On 10/9/2025 3:26 PM, joes wrote:
On Thu, 09 Oct 2025 15:15:27 -0500, olcott wrote:
On 10/9/2025 2:43 PM, joes wrote:
I did not. I said that no, I do not understand it as true. Your
question falsely presupposes that what follows it is true.
Do you understand that if embedded_H was a UTM that Ĥ applied to ⟨Ĥ⟩
would never halt?
…and every occurrence of embedded_H in the input were a UTM. That's a
different program from the input.
If you say yes then that does semantically entail this when embedded_H
is a simulating halt decider *Keep repeating unless aborted*
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
Nope. Let's reserve the names embedded_H for your partial simulator,
Ĥ for its diagonal program and call the modifications embedded_UTM and
Ĥ_UTM.
Ĥ ⟨Ĥ⟩ halts.
Ĥ_UTM ⟨Ĥ_UTM⟩ doesn't halt.
Ĥ ⟨Ĥ_UTM⟩ correctly returns "non-halting", but that's not the input we
care about.
And finally, Ĥ_UTM ⟨Ĥ⟩ halts, proving the correct behaviour.
Ĥ_UTM not halting doesn't have anything to do with Ĥ.
The criterion measure that embedded_H uses is what would the behavior be
if I was a UTM?
Then you admit H is deciding on a non-input.
Yes it is,
<MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
    If simulating halt decider H correctly simulates its
    input D until H correctly determines that its simulated D
    *would never stop running unless aborted*
    *would never stop running unless aborted*
    *would never stop running unless aborted*
    *would never stop running unless aborted*
I admit that you dishonestly refuse to pay attention
to those words.
professor Sipser did not understand the
significance of these words
I exchanged emails with him about this. He does not agree with anything substantive that PO has written. I won't quote him, as I don't have permission, but he was, let's say... forthright, in his reply to me.
So that PO will have no cause to quote me as supporting his case: what Sipser understood he was agreeing to was NOT what PO interprets it as meaning. Sipser would not agree that the conclusion applies in PO's HHH(DDD) scenario, where DDD halts.
joes <noreply@example.org> writes:
On Wed, 21 Aug 2024 20:55:52 -0500, olcott wrote:
Professor Sipser clearly agreed that an H that does a finite simulation
of D is to predict the behavior of an unlimited simulation of D.
If the simulator *itself* would not abort. The H called by D is,
by construction, the same and *does* abort.
We don't really know what context Sipser was given. I got in touch at
the time so I do know he had enough context to know that PO's ideas were
"wacky" and that he had agreed to what he considered a "minor remark".
Since PO considers his words finely crafted and key to his so-called
work I think it's clear that Sipser did not take the "minor remark" he
agreed to to mean what PO takes it to mean! My own take is that he
(Sipser) read it as a general remark about how to determine some cases,
i.e. that D names an input that H can partially simulate to determine
its halting or otherwise. We all know or could construct some such
cases.
I suspect he was tricked because PO used H and D as the names without
making it clear that D was constructed from H in the usual way (Sipser
uses H and D in at least one of his proofs). Of course, he is clued in
enough to know that, if D is indeed constructed from H like that, the
"minor remark" becomes true by being a hypothetical: if the moon is made
of cheese, the Martians can look forward to a fine fondue. But,
personally, I think the professor is more straight talking than that,
and he simply took it as a method that can work for some inputs. That's
the only way it could be seen as a "minor remark" without being accused
of being disingenuous.
On 10/9/2025 6:57 PM, olcott wrote:
On 10/9/2025 5:46 PM, dbush wrote:
On 10/9/2025 6:25 PM, olcott wrote:
On 10/9/2025 4:32 PM, joes wrote:
On Thu, 09 Oct 2025 16:28:51 -0500, olcott wrote:
On 10/9/2025 3:26 PM, joes wrote:
On Thu, 09 Oct 2025 15:15:27 -0500, olcott wrote:
On 10/9/2025 2:43 PM, joes wrote:
I did not. I said that no, I do not understand it as true. Your
question falsely presupposes that what follows it is true.
Do you understand that if embedded_H was a UTM that Ĥ applied to ⟨Ĥ⟩
would never halt?
…and every occurrence of embedded_H in the input were a UTM. That's a
different program from the input.
If you say yes then that does semantically entail this when embedded_H
is a simulating halt decider *Keep repeating unless aborted*
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
Nope. Let's reserve the names embedded_H for your partial simulator,
Ĥ for its diagonal program and call the modifications embedded_UTM and
Ĥ_UTM.
Ĥ ⟨Ĥ⟩ halts.
Ĥ_UTM ⟨Ĥ_UTM⟩ doesn't halt.
Ĥ ⟨Ĥ_UTM⟩ correctly returns "non-halting", but that's not the input we
care about.
And finally, Ĥ_UTM ⟨Ĥ⟩ halts, proving the correct behaviour.
Ĥ_UTM not halting doesn't have anything to do with Ĥ.
The criterion measure that embedded_H uses is what would the behavior be
if I was a UTM?
Then you admit H is deciding on a non-input.
Yes it is,
<MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
    If simulating halt decider H correctly simulates its
    input D until H correctly determines that its simulated D
    *would never stop running unless aborted*
    *would never stop running unless aborted*
    *would never stop running unless aborted*
    *would never stop running unless aborted*
I admit that you dishonestly refuse to pay attention
to those words.
I don't think that is the shell game. PO really /has/ an H
(it's trivial to do for this one case) that correctly determines
that P(P) *would* never stop running *unless* aborted.
On 10/9/2025 6:05 PM, dbush wrote:
On 10/9/2025 6:57 PM, olcott wrote:
On 10/9/2025 5:46 PM, dbush wrote:
On 10/9/2025 6:25 PM, olcott wrote:
On 10/9/2025 4:32 PM, joes wrote:
On Thu, 09 Oct 2025 16:28:51 -0500, olcott wrote:
On 10/9/2025 3:26 PM, joes wrote:
On Thu, 09 Oct 2025 15:15:27 -0500, olcott wrote:
On 10/9/2025 2:43 PM, joes wrote:
I did not. I said that no, I do not understand it as true. Your
question falsely presupposes that what follows it is true.
Do you understand that if embedded_H was a UTM that Ĥ applied to ⟨Ĥ⟩
would never halt?
…and every occurrence of embedded_H in the input were a UTM. That's a
different program from the input.
If you say yes then that does semantically entail this when embedded_H
is a simulating halt decider *Keep repeating unless aborted*
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
Nope. Let's reserve the names embedded_H for your partial simulator,
Ĥ for its diagonal program and call the modifications embedded_UTM and
Ĥ_UTM.
Ĥ ⟨Ĥ⟩ halts.
Ĥ_UTM ⟨Ĥ_UTM⟩ doesn't halt.
Ĥ ⟨Ĥ_UTM⟩ correctly returns "non-halting", but that's not the input we
care about.
And finally, Ĥ_UTM ⟨Ĥ⟩ halts, proving the correct behaviour.
Ĥ_UTM not halting doesn't have anything to do with Ĥ.
The criterion measure that embedded_H uses is what would the behavior be
if I was a UTM?
Then you admit H is deciding on a non-input.
Yes it is,
<MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
    If simulating halt decider H correctly simulates its
    input D until H correctly determines that its simulated D
    *would never stop running unless aborted*
    *would never stop running unless aborted*
    *would never stop running unless aborted*
    *would never stop running unless aborted*
I admit that you dishonestly refuse to pay attention
to those words.
You mean the words where Sipser didn't actually agree to your meaning
but you dishonestly imply that he did? (see below):
On 10/4/2025 5:00 PM, olcott wrote:
professor Sipser did not understand the
significance of these words
On Monday, March 6, 2023 at 2:41:27 PM UTC-5, Ben Bacarisse wrote:
I exchanged emails with him about this. He does not agree with anything
substantive that PO has written. I won't quote him, as I don't have
permission, but he was, let's say... forthright, in his reply to me.
On 8/23/2024 9:10 PM, Mike Terry wrote:
So that PO will have no cause to quote me as supporting his case: what
Sipser understood he was agreeing to was NOT what PO interprets it as
meaning. Sipser would not agree that the conclusion applies in PO's
HHH(DDD) scenario, where DDD halts.
On 8/23/2024 5:07 PM, Ben Bacarisse wrote:
joes <noreply@example.org> writes:
On Wed, 21 Aug 2024 20:55:52 -0500, olcott wrote:
Professor Sipser clearly agreed that an H that does a finite simulation
of D is to predict the behavior of an unlimited simulation of D.
If the simulator *itself* would not abort. The H called by D is,
by construction, the same and *does* abort.
We don't really know what context Sipser was given. I got in touch at
the time so I do know he had enough context to know that PO's ideas were
"wacky" and that he had agreed to what he considered a "minor remark".
Since PO considers his words finely crafted and key to his so-called
work I think it's clear that Sipser did not take the "minor remark" he
agreed to to mean what PO takes it to mean! My own take is that he
(Sipser) read it as a general remark about how to determine some cases,
i.e. that D names an input that H can partially simulate to determine
its halting or otherwise. We all know or could construct some such
cases.
I suspect he was tricked because PO used H and D as the names without
making it clear that D was constructed from H in the usual way (Sipser
uses H and D in at least one of his proofs). Of course, he is clued in
enough to know that, if D is indeed constructed from H like that, the
"minor remark" becomes true by being a hypothetical: if the moon is made
of cheese, the Martians can look forward to a fine fondue. But,
personally, I think the professor is more straight talking than that,
and he simply took it as a method that can work for some inputs. That's
the only way it could be seen as a "minor remark" without being accused
of being disingenuous.
I am just saying the exact meaning of those exact words
nothing more and nothing less.
Ben agrees that I did meet the exact meaning of those
exact words.
On 10/14/2022 7:44 PM, Ben Bacarisse wrote:
I don't think that is the shell game. PO really /has/ an H
(it's trivial to do for this one case) that correctly determines
that P(P) *would* never stop running *unless* aborted.
HHH(DD) reports on the behavior that its input specifies
as measured by the simulation of the input according to
the semantics of the language of this input.
This does include that HHH does simulate an instance
of itself simulating an instance of DD.
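[Editorial aside: as an illustration only, here is a toy model under assumptions, not olcott's actual x86-emulating HHH. The abort rule described above is sketched by treating a repeated call of the decider on the same input as the signal that the simulation would otherwise go on forever; the outer decider then abandons the whole simulation and reports non-halting, while the directly executed DD still halts.]

#include <stdio.h>
#include <setjmp.h>

typedef void (*Func)(void);

static int simulating = 0;       /* is a simulated instance already active? */
static jmp_buf abort_point;      /* where the outer HHH resumes after aborting */

int HHH(Func P)
{
    if (simulating)              /* the simulated P has reached HHH(P) again */
        longjmp(abort_point, 1); /* abandon the outer simulation             */

    if (setjmp(abort_point)) {   /* we land here when the simulation aborts  */
        simulating = 0;
        return 0;                /* report: P would not stop unless aborted  */
    }
    simulating = 1;
    P();                         /* stand-in for stepwise simulation of P    */
    simulating = 0;
    return 1;                    /* simulated P ran to completion: halting   */
}

void DD(void)
{
    if (HHH(DD)) for (;;) {}     /* the usual diagonal: do the opposite      */
}

int main(void)
{
    printf("HHH(DD) == %d\n", HHH(DD));  /* prints 0                          */
    DD();                                /* nevertheless DD() itself halts    */
    printf("DD() halted\n");
    return 0;
}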
On 10/9/2025 7:15 PM, olcott wrote:
On 10/9/2025 6:05 PM, dbush wrote:
On 10/9/2025 6:57 PM, olcott wrote:
On 10/9/2025 5:46 PM, dbush wrote:
On 10/9/2025 6:25 PM, olcott wrote:
On 10/9/2025 4:32 PM, joes wrote:
On Thu, 09 Oct 2025 16:28:51 -0500, olcott wrote:
On 10/9/2025 3:26 PM, joes wrote:
On Thu, 09 Oct 2025 15:15:27 -0500, olcott wrote:
On 10/9/2025 2:43 PM, joes wrote:
I did not. I said that no, I do not understand it as true. Your
question falsely presupposes that what follows it is true.
Do you understand that if embedded_H was a UTM that Ĥ applied to ⟨Ĥ⟩
would never halt?
…and every occurrence of embedded_H in the input were a UTM. That's a
different program from the input.
If you say yes then that does semantically entail this when embedded_H
is a simulating halt decider *Keep repeating unless aborted*
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
Nope. Let's reserve the names embedded_H for your partial simulator,
Ĥ for its diagonal program and call the modifications embedded_UTM and
Ĥ_UTM.
Ĥ ⟨Ĥ⟩ halts.
Ĥ_UTM ⟨Ĥ_UTM⟩ doesn't halt.
Ĥ ⟨Ĥ_UTM⟩ correctly returns "non-halting", but that's not the input we
care about.
And finally, Ĥ_UTM ⟨Ĥ⟩ halts, proving the correct behaviour.
Ĥ_UTM not halting doesn't have anything to do with Ĥ.
The criterion measure that embedded_H uses is what would the behavior be
if I was a UTM?
Then you admit H is deciding on a non-input.
Yes it is,
<MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
    If simulating halt decider H correctly simulates its
    input D until H correctly determines that its simulated D
    *would never stop running unless aborted*
    *would never stop running unless aborted*
    *would never stop running unless aborted*
    *would never stop running unless aborted*
I admit that you dishonestly refuse to pay attention
to those words.
You mean the words where Sipser didn't actually agree to your meaning
but you dishonestly imply that he did? (see below):
On 10/4/2025 5:00 PM, olcott wrote:
professor Sipser did not understand the
significance of these words
On Monday, March 6, 2023 at 2:41:27 PM UTC-5, Ben Bacarisse wrote:
I exchanged emails with him about this. He does not agree with anything
substantive that PO has written. I won't quote him, as I don't have
permission, but he was, let's say... forthright, in his reply to me.
On 8/23/2024 9:10 PM, Mike Terry wrote:
So that PO will have no cause to quote me as supporting his case: what
Sipser understood he was agreeing to was NOT what PO interprets it as
meaning. Sipser would not agree that the conclusion applies in PO's
HHH(DDD) scenario, where DDD halts.
On 8/23/2024 5:07 PM, Ben Bacarisse wrote:
joes <noreply@example.org> writes:
On Wed, 21 Aug 2024 20:55:52 -0500, olcott wrote:
Professor Sipser clearly agreed that an H that does a finite simulation
of D is to predict the behavior of an unlimited simulation of D.
If the simulator *itself* would not abort. The H called by D is,
by construction, the same and *does* abort.
We don't really know what context Sipser was given. I got in touch at
the time so I do know he had enough context to know that PO's ideas were
"wacky" and that he had agreed to what he considered a "minor remark".
Since PO considers his words finely crafted and key to his so-called
work I think it's clear that Sipser did not take the "minor remark" he
agreed to to mean what PO takes it to mean! My own take is that he
(Sipser) read it as a general remark about how to determine some cases,
i.e. that D names an input that H can partially simulate to determine
its halting or otherwise. We all know or could construct some such
cases.
I suspect he was tricked because PO used H and D as the names without
making it clear that D was constructed from H in the usual way (Sipser
uses H and D in at least one of his proofs). Of course, he is clued in
enough to know that, if D is indeed constructed from H like that, the
"minor remark" becomes true by being a hypothetical: if the moon is made
of cheese, the Martians can look forward to a fine fondue. But,
personally, I think the professor is more straight talking than that,
and he simply took it as a method that can work for some inputs. That's
the only way it could be seen as a "minor remark" without being accused
of being disingenuous.
I am just saying the exact meaning of those exact words
nothing more and nothing less.
But you dishonestly imply that Sipser agrees with your meaning when it's been proven that he doesn't, as shown in the above text that you
dishonestly trimmed in order to hide the evidence.
Ben agrees that I did meet the exact meaning of those
exact words.
On 10/14/2022 7:44 PM, Ben Bacarisse wrote:
I don't think that is the shell game. PO really /has/ an H
(it's trivial to do for this one case) that correctly determines
that P(P) *would* never stop running *unless* aborted.
But not the meaning everyone would agree is correct, as shown above.
On 10/9/2025 6:34 PM, dbush wrote:
On 10/9/2025 7:15 PM, olcott wrote:
On 10/9/2025 6:05 PM, dbush wrote:
On 10/9/2025 6:57 PM, olcott wrote:
On 10/9/2025 5:46 PM, dbush wrote:
On 10/9/2025 6:25 PM, olcott wrote:
On 10/9/2025 4:32 PM, joes wrote:
On Thu, 09 Oct 2025 16:28:51 -0500, olcott wrote:
On 10/9/2025 3:26 PM, joes wrote:
On Thu, 09 Oct 2025 15:15:27 -0500, olcott wrote:
On 10/9/2025 2:43 PM, joes wrote:
I did not. I said that no, I do not understand it as true. Your
question falsely presupposes that what follows it is true.
Do you understand that if embedded_H was a UTM that Ĥ applied to ⟨Ĥ⟩
would never halt?
…and every occurrence of embedded_H in the input were a UTM. That's a
different program from the input.
If you say yes then that does semantically entail this when embedded_H
is a simulating halt decider *Keep repeating unless aborted*
(a) Ĥ copies its input ⟨Ĥ⟩
(b) Ĥ invokes embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
(c) embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩
Nope. Let's reserve the names embedded_H for your partial simulator,
Ĥ for its diagonal program and call the modifications embedded_UTM and
Ĥ_UTM.
Ĥ ⟨Ĥ⟩ halts.
Ĥ_UTM ⟨Ĥ_UTM⟩ doesn't halt.
Ĥ ⟨Ĥ_UTM⟩ correctly returns "non-halting", but that's not the input we
care about.
And finally, Ĥ_UTM ⟨Ĥ⟩ halts, proving the correct behaviour.
Ĥ_UTM not halting doesn't have anything to do with Ĥ.
The criterion measure that embedded_H uses is what would the behavior be
if I was a UTM?
Then you admit H is deciding on a non-input.
Yes it is,
<MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
    If simulating halt decider H correctly simulates its
    input D until H correctly determines that its simulated D
    *would never stop running unless aborted*
    *would never stop running unless aborted*
    *would never stop running unless aborted*
    *would never stop running unless aborted*
I admit that you dishonestly refuse to pay attention
to those words.
You mean the words where Sipser didn't actually agree to your meaning
but you dishonestly imply that he did? (see below):
On 10/4/2025 5:00 PM, olcott wrote:
professor Sipser did not understand the
significance of these words
On Monday, March 6, 2023 at 2:41:27 PM UTC-5, Ben Bacarisse wrote:
I exchanged emails with him about this. He does not agree with anything
substantive that PO has written. I won't quote him, as I don't have
permission, but he was, let's say... forthright, in his reply to me.
On 8/23/2024 9:10 PM, Mike Terry wrote:
So that PO will have no cause to quote me as supporting his case: what
Sipser understood he was agreeing to was NOT what PO interprets it as
meaning. Sipser would not agree that the conclusion applies in PO's
HHH(DDD) scenario, where DDD halts.
On 8/23/2024 5:07 PM, Ben Bacarisse wrote:
joes <noreply@example.org> writes:
On Wed, 21 Aug 2024 20:55:52 -0500, olcott wrote:
Professor Sipser clearly agreed that an H that does a finite simulation
of D is to predict the behavior of an unlimited simulation of D.
If the simulator *itself* would not abort. The H called by D is,
by construction, the same and *does* abort.
We don't really know what context Sipser was given. I got in touch at
the time so I do know he had enough context to know that PO's ideas were
"wacky" and that he had agreed to what he considered a "minor remark".
Since PO considers his words finely crafted and key to his so-called
work I think it's clear that Sipser did not take the "minor remark" he
agreed to to mean what PO takes it to mean! My own take is that he
(Sipser) read it as a general remark about how to determine some cases,
i.e. that D names an input that H can partially simulate to determine
its halting or otherwise. We all know or could construct some such
cases.
I suspect he was tricked because PO used H and D as the names without
making it clear that D was constructed from H in the usual way (Sipser
uses H and D in at least one of his proofs). Of course, he is clued in
enough to know that, if D is indeed constructed from H like that, the
"minor remark" becomes true by being a hypothetical: if the moon is made
of cheese, the Martians can look forward to a fine fondue. But,
personally, I think the professor is more straight talking than that,
and he simply took it as a method that can work for some inputs. That's
the only way it could be seen as a "minor remark" without being accused
of being disingenuous.
I am just saying the exact meaning of those exact words
nothing more and nothing less.
But you dishonestly imply that Sipser agrees with your meaning when
it's been proven that he doesn't, as shown in the above text that you
dishonestly trimmed in order to hide the evidence.
Ben agrees that I did meet the exact meaning of those
exact words.
On 10/14/2022 7:44 PM, Ben Bacarisse wrote:
I don't think that is the shell game. PO really /has/ an H
(it's trivial to do for this one case) that correctly determines
that P(P) *would* never stop running *unless* aborted.
But not the meaning everyone would agree is correct, as shown above.
There is only one meaning and Ben agreed to that.
On 10/9/2025 8:03 PM, olcott wrote:
Ben acknowledged that my criteria have been met.
On 10/9/2025 6:34 PM, dbush wrote:
On 10/9/2025 7:15 PM, olcott wrote:
Ben agrees that I did meet the exact meaning of those
exact words.
On 10/14/2022 7:44 PM, Ben Bacarisse wrote:
I don't think that is the shell game. PO really /has/ an H
(it's trivial to do for this one case) that correctly determines
that P(P) *would* never stop running *unless* aborted.
But not the meaning everyone would agree is correct, as shown above.
There is only one meaning and Ben agreed to that.
False, as proven above.
On 10/9/2025 7:13 PM, dbush wrote:
On 10/9/2025 8:03 PM, olcott wrote:
Ben acknowledged that my criteria have been met.
On 10/9/2025 6:34 PM, dbush wrote:
On 10/9/2025 7:15 PM, olcott wrote:
Ben agrees that I did meet the exact meaning of those
exact words.
On 10/14/2022 7:44 PM, Ben Bacarisse wrote:
I don't think that is the shell game. PO really /has/ an H
(it's trivial to do for this one case) that correctly determines
that P(P) *would* never stop running *unless* aborted.
But not the meaning everyone would agree is correct, as shown above.
There is only one meaning and Ben agreed to that.
False, as proven above.
(That whole paragraph) He disagreed with the
second half of the semantic tautology.
(The part after the "then")
<MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
    If simulating halt decider H correctly simulates its
    input D until H correctly determines that its simulated D
    would never stop running unless aborted then
The whole paragraph is proven true entirely on the basis
of the meaning of its words.
    H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.
</MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
It is a fact that D
does specify a non-halting sequence of
configurations that requires H to abort its simulation or
itself will fail to halt.
The words that professor Sipser agreed to
and the
fact that my HHH(DD) meets those words are proven
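[Editorial aside: spelled out as a formula, under one possible reading of the quoted sentence and not anything Sipser signed off on beyond those words, the agreed paragraph is a conditional. Here UTM(D) is assumed to mean a complete, never-aborted simulation of D.]

\[
  \Bigl(\text{$H$ correctly determines } \lnot \mathrm{Halts}\bigl(\mathrm{UTM}(D)\bigr)\Bigr)
  \;\Longrightarrow\;
  \Bigl(\text{$H$ may abort and correctly report that $D$ is non-halting}\Bigr)
\]

The antecedent is about what the unaborted simulation of D would do; the disagreement running through this thread is precisely over whether DD, whose embedded HHH does abort, satisfies that antecedent.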
On 10/9/2025 10:32 PM, olcott wrote:
On 10/9/2025 7:13 PM, dbush wrote:
On 10/9/2025 8:03 PM, olcott wrote:
Ben acknowledged that my criteria have been met.
On 10/9/2025 6:34 PM, dbush wrote:
On 10/9/2025 7:15 PM, olcott wrote:
Ben agrees that I did meet the exact meaning of those
exact words.
On 10/14/2022 7:44 PM, Ben Bacarisse wrote:
I don't think that is the shell game. PO really /has/ an H
(it's trivial to do for this one case) that correctly determines
that P(P) *would* never stop running *unless* aborted.
But not the meaning everyone would agree is correct, as shown above. >>>>>
There is only one meaning and Ben agreed to that.
False, as proven above.
(That whole paragraph) He disagreed with the
second half of the semantic tautology.
(The part after the "then")
<MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
    If simulating halt decider H correctly simulates its
    input D until H correctly determines that its simulated D
    would never stop running unless aborted then
The whole paragraph is proven true entirely on the basis
of the meaning of its words.
    H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.
</MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
And *again* you imply that Sipser agrees with your meaning of the above
when it's been proven that he doesn't
It is a fact that D
i.e. the finite string description of machine D which is stipulated to specify all of the semantic properties of the machine D, including the
fact that it halts when executed directly.
does specify a non-halting sequence of configurations that requires H
to abort its simulation or
itself will fail to halt.
False, see above.
The words that professor Sipser agreed to
But not *your* meaning, as previously proven.
and the
fact that my HHH(DD) meets those words are proven
Nope, it is proven by the meaning of the words that finite string D,
which is the description of machine D, is stipulated to specify all
of the semantic properties of the machine D, including the fact that it
halts when executed directly.
On 10/9/2025 10:22 PM, dbush wrote:
I'll let you reply to yourself:
On 10/9/2025 10:32 PM, olcott wrote:
On 10/9/2025 7:13 PM, dbush wrote:
On 10/9/2025 8:03 PM, olcott wrote:Ben acknowledged the my criteria have been met.
On 10/9/2025 6:34 PM, dbush wrote:
On 10/9/2025 7:15 PM, olcott wrote:
Ben agrees that I did meet the exact meaning of those
exact words.
On 10/14/2022 7:44 PM, Ben Bacarisse wrote:
I don't think that is the shell game. PO really /has/ an H
(it's trivial to do for this one case) that correctly determines
that P(P) *would* never stop running *unless* aborted.
But not the meaning everyone would agree is correct, as shown above. >>>>>>
There is only one meaning and Ben agreed to that.
False, as proven above.
(That whole paragraph) He disagreed with the
second half of the semantic tautology.
(The part after the "then")
<MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
    If simulating halt decider H correctly simulates its
    input D until H correctly determines that its simulated D
    would never stop running unless aborted then
The whole paragraph is proven true entirely on the basis
of the meaning of its words.
    H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.
</MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
And *again* you imply that Sipser agrees with your meaning of the
above when it's been proven that he doesn't
It has never been proven that he doesn't.
It is proven that you are a liar by the part of
my reply that you erased.
It is a fact that D
i.e. the finite string description of machine D which is stipulated to
specify all of the semantic properties of the machine D, including the
fact that it halts when executed directly.
When it is executed in a different context it does
have different behavior that is empirically proven.
does specify a non-halting sequence of configurations that requires H
to abort its simulation or
itself will fail to halt.
False, see above.
When we turn off the abort code it keeps running.
The words that professor Sipser agreed to
But not *your* meaning, as previously proven.
and the
fact that my HHH(DD) meets those words are proven
Nope, it is proven by the meaning of the words that finite string D
which is the description of machine D which is stipulated to specify
all of the semantic properties of the machine D, including the fact
that it halts when executed directly.
It turns out that that is merely a false assumption:
when DD calls HHH(DD) THIS IS AN ASPECT OF THE
BEHAVIOR OF DD.
On 10/10/2025 12:38 AM, olcott wrote:
On 10/9/2025 10:22 PM, dbush wrote:
I'll let you reply to yourself:
On 10/9/2025 10:32 PM, olcott wrote:
On 10/9/2025 7:13 PM, dbush wrote:
On 10/9/2025 8:03 PM, olcott wrote:
Ben acknowledged that my criteria have been met.
On 10/9/2025 6:34 PM, dbush wrote:
On 10/9/2025 7:15 PM, olcott wrote:
Ben agrees that I did meet the exact meaning of those
exact words.
On 10/14/2022 7:44 PM, Ben Bacarisse wrote:
I don't think that is the shell game. PO really /has/ an H
(it's trivial to do for this one case) that correctly determines
that P(P) *would* never stop running *unless* aborted.
But not the meaning everyone would agree is correct, as shown above. >>>>>>>
There is only one meaning and Ben agreed to that.
False, as proven above.
(That whole paragraph) He disagreed with the
second half of the semantic tautology.
(The part after the "then")
<MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
    If simulating halt decider H correctly simulates its
    input D until H correctly determines that its simulated D
    would never stop running unless aborted then
The whole paragraph is proven true entirely on the basis
of the meaning of its words.
    H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.
</MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022> >>>>
And *again* you imply that Sipser agrees with your meaning of the
above when it's been proven that he doesn't
It has never been proven that he doesn't.
On 6/9/2025 10:55 AM, olcott wrote:
It is proven that you are a liar by the part of
my reply that you erased.
It is a fact that D
i.e. the finite string description of machine D which is stipulated
to specify all of the semantic properties of the machine D, including
the fact that it halts when executed directly.
When it is executed in a different context it does
have different behavior that is empirically proven.
In other words, the execution trace is an additional input to H/HHH and
it is therefore DISQUALIFIED from being a halt decider.
On 10/10/2025 7:08 AM, dbush wrote:
On 10/10/2025 12:38 AM, olcott wrote:
On 10/9/2025 10:22 PM, dbush wrote:
I'll let you reply to yourself:
On 10/9/2025 10:32 PM, olcott wrote:
On 10/9/2025 7:13 PM, dbush wrote:
On 10/9/2025 8:03 PM, olcott wrote:
Ben acknowledged that my criteria have been met.
On 10/9/2025 6:34 PM, dbush wrote:
On 10/9/2025 7:15 PM, olcott wrote:
Ben agrees that I did meet the exact meaning of those
exact words.
On 10/14/2022 7:44 PM, Ben Bacarisse wrote:
I don't think that is the shell game. PO really /has/ an H
(it's trivial to do for this one case) that correctly determines
that P(P) *would* never stop running *unless* aborted.
But not the meaning everyone would agree is correct, as shown above.
There is only one meaning and Ben agreed to that.
False, as proven above.
(That whole paragraph) He disagreed with the
second half of the semantic tautology.
(The part after the "then")
<MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
    If simulating halt decider H correctly simulates its
    input D until H correctly determines that its simulated D
    would never stop running unless aborted then
The whole paragraph is proven true entirely on the basis
of the meaning of its words.
    H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.
</MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
And *again* you imply that Sipser agrees with your meaning of the
above when it's been proven that he doesn't
It has never been proven that he doesn't.
On 6/9/2025 10:55 AM, olcott wrote:
It is proven that you are a liar by the part of
my reply that you erased.
It is a fact that D
i.e. the finite string description of machine D which is stipulated
to specify all of the semantic properties of the machine D,
including the fact that it halts when executed directly.
When it is executed in a different context it does
have different behavior that is empirically proven.
In other words, the execution trace is an additional input to H/HHH
and it is therefore DISQUALIFIED from being a halt decider.
When an input DD calls its own decider HHH(DD)