Sysop:       Amessyroom
Location:    Fayetteville, NC
Users:       27
Nodes:       6 (0 / 6)
Uptime:      40:17:47
Calls:       631
Calls today: 2
Files:       1,187
D/L today:   24 files (29,813K bytes)
Messages:    174,391
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
then HHH is correct to abort this simulation and return 0.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
int main()
{
HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
On Sun, 12 Oct 2025 08:50:05 -0500, olcott wrote:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
then HHH is correct to abort this simulation and return 0.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
int main()
{
HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
Change the input substituting the words "Termination Analyzer" with the
words "Halting Decider" and try again.
/Flibble
On 10/12/2025 9:50 AM, olcott wrote:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
These conditions make HHH not a halt decider because they are
incompatible with the requirements:
Given any algorithm (i.e. a fixed immutable sequence of instructions) X described as <X> with input Y:
A solution to the halting problem is an algorithm H that computes the following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
Error: assumes it's possible to design HHH to get a correct answer.
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
Sorry, that's silly. You spend half your life discussing the
same problem over and over again and never get to the end.
On 12.10.2025 at 15:50, olcott wrote:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
On 10/12/2025 10:53 AM, Bonita Montero wrote:
Sorry, that's silly. You spend half your life discussing the
same problem over and over again and never get to the end.
On 12.10.2025 at 15:50, olcott wrote:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
I am getting to the end.
I needed feedback to make my words clearer and LLM
systems are giving me this feedback. They provided
more help in a few dozen messages than tens of
thousands of dialogues with humans. LLM systems
became 67-fold more powerful in the last year.
Their context window increased from 3,000 words
to 200,000 words; basically, that is how much of
the conversation they can keep in their head at
once. Last year ChatGPT acted like it had
Alzheimer's when I exceeded its 3,000-word limit.
On 10/12/2025 9:19 AM, dbush wrote:
On 10/12/2025 9:50 AM, olcott wrote:
Simulating Termination Analyzer HHH correctly simulates its input
until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own
non-termination
    then HHH is correct to abort this simulation and return 0.
These conditions make HHH not a halt decider because they are
incompatible with the requirements:
It is perfectly compatible with those requirements except in the
case where the input calls its own simulating halt decider.
On Sun, 12 Oct 2025 10:47:43 -0500, olcott wrote:
On 10/12/2025 9:19 AM, dbush wrote:
On 10/12/2025 9:50 AM, olcott wrote:
Simulating Termination Analyzer HHH correctly simulates its input
until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own
non-termination
    then HHH is correct to abort this simulation and return 0.
These conditions make HHH not a halt decider because they are
incompatible with the requirements:
It is perfectly compatible with those requirements except in the
case where the input calls its own simulating halt decider.
Yes, it is not compatible with the requirements in that case.
Sorry, that's silly. You spend half your life discussing the
same problem over and over again and never get to the end.
On 12.10.2025 at 15:50, olcott wrote:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
On 10/12/2025 9:10 AM, Mr Flibble wrote:
On Sun, 12 Oct 2025 08:50:05 -0500, olcott wrote:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input
until:
(a) Detects a non-terminating behavior pattern:
abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
return 1.
(c) If HHH must abort its simulation to prevent its own
non-termination
then HHH is correct to abort this simulation and return 0.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
int main()
{
HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
Change the input substituting the words "Termination Analyzer" with the
words "Halting Decider" and try again.
/Flibble
"Partial halt decider" because I do not claim to solve the halting
problem, only correctly determine the halt status of the counter-example
input.
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
Also very important is that there is no chance of
AI hallucination when they are only reasoning
within a set of premises. Some systems must be told:
Please think this all the way through without making any guesses
On 10/12/2025 9:19 AM, dbush wrote:
On 10/12/2025 9:50 AM, olcott wrote:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
These conditions make HHH not a halt decider because they are
incompatible with the requirements:
It is perfectly compatible with those requirements
except in the case where the input calls its own
simulating halt decider.
Given any algorithm (i.e. a fixed immutable sequence of instructions)
X described as <X> with input Y:
A solution to the halting problem is an algorithm H that computes the
following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
directly
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
Error: assumes it's possible to design HHH to get a correct answer.
HHH(DD) gets the correct answer within its set
of assumptions / premises
On Sun, 12 Oct 2025 10:42:47 -0500, olcott wrote:
On 10/12/2025 9:10 AM, Mr Flibble wrote:
On Sun, 12 Oct 2025 08:50:05 -0500, olcott wrote:"Partial halt decider" because I do not claim to solve the halting
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input
until:
(a) Detects a non-terminating behavior pattern:
abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
return 1.
(c) If HHH must abort its simulation to prevent its own
non-termination
then HHH is correct to abort this simulation and return 0.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
int main()
{
HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
Change the input substituting the words "Termination Analyzer" with the
words "Halting Decider" and try again.
/Flibble
problem, only correctly determine the halt status of the counter-example
input.
The Halting Problem proofs you are attempting to refute are NOT predicated
on partial halt deciders.
/Flibble
On 10/12/2025 11:47 AM, olcott wrote:
On 10/12/2025 9:19 AM, dbush wrote:
On 10/12/2025 9:50 AM, olcott wrote:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input
until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
These conditions make HHH not a halt decider because they are
incompatible with the requirements:
It is perfectly compatible with those requirements
except in the case where the input calls its own
simulating halt decider.
In other words, not compatible. No "except".
Given any algorithm (i.e. a fixed immutable sequence of instructions)
X described as <X> with input Y:
A solution to the halting problem is an algorithm H that computes the
following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
directly
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
Error: assumes it's possible to design HHH to get a correct answer.
HHH(DD) gets the correct answer within its set
of assumptions / premises
Which is incompatible with the requirements for a halt decider:
On 10/12/2025 3:11 PM, dbush wrote:
On 10/12/2025 11:47 AM, olcott wrote:
On 10/12/2025 9:19 AM, dbush wrote:
On 10/12/2025 9:50 AM, olcott wrote:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input
until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
These conditions make HHH not a halt decider because they are
incompatible with the requirements:
It is perfectly compatible with those requirements
except in the case where the input calls its own
simulating halt decider.
In other words, not compatible. No "except".
Given any algorithm (i.e. a fixed immutable sequence of
instructions) X described as <X> with input Y:
A solution to the halting problem is an algorithm H that computes
the following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
directly
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
Error: assumes it's possible to design HHH to get a correct answer.
HHH(DD) gets the correct answer within its set
of assumptions / premises
Which is incompatible with the requirements for a halt decider:
Yes, but the requirements for a halt decider are inconsistent
with reality.
On 10/12/25 12:04 PM, olcott wrote:
Also very important is that there is no chance of
AI hallucination when they are only reasoning
within a set of premises. Some systems must be told:
Please think this all the way through without making any guesses
I don't mean to be rude, but that is a completely insane assertion to
me. There is always a non-zero chance for an LLM to roll a bad token
during inference and spit out garbage.
Sure, the top-p decoding strategy
can help minimize such mistakes by pruning the token pool of the worst
of the bad apples, but such models will never *ever* be foolproof. The
price you pay for convincingly generating natural language is
bulletproof reasoning.
If you're interested in formalizing your ideas using cutting-edge tech,
I encourage you to look at Lean 4. Once you provide a machine-checked
proof in Lean 4 with no `sorry`/`axiom`/other cheats, come back. People might adopt a very different tone.
Best of luck, you will need it.
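As a small illustration of the suggestion above (assuming a stock Lean 4 toolchain; the theorem name is illustrative), a proof is only accepted once every obligation is discharged; replacing the proof term with `sorry` makes the kernel flag the file as incomplete:

```lean
-- A minimal machine-checked proof in Lean 4, with no `sorry` or
-- extra `axiom`: commutativity of natural-number addition, proved
-- by the core lemma Nat.add_comm.
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```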
On 12/10/2025 16:53, Bonita Montero wrote:
Sorry, that's silly. You spend half your life discussing the
same problem over and over again and never get to the end.
This gives PO a narrative he can hold on to which gives his life a
meaning: he is the heroic world-saving unrecognised genius, constantly
struggling against "the system" right up to his final breath! If he
were to suddenly realise he was just a deluded dumbo who had wasted most
of his life arguing over a succession of mistakes and misunderstandings
on his part, and had never contributed a single idea of any academic
value, would his life be better? I think not.
Thankfully he has recently discovered chatbots who can give him the
uncritical approval he craves, so there is next to no chance of that
happening now. [Assuming they don't suddenly get better, to the point
where they can genuinely analyse and criticise his claims in the way we
do... Given how they currently work, I don't see that happening any
time soon.]
Would the lives of other posters here be better? That's a trickier
question.
Mike.
On 12.10.2025 at 15:50, olcott wrote:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
On 10/12/2025 9:20 PM, olcott wrote:
On 10/12/2025 3:11 PM, dbush wrote:
On 10/12/2025 11:47 AM, olcott wrote:
On 10/12/2025 9:19 AM, dbush wrote:
On 10/12/2025 9:50 AM, olcott wrote:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input
until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
These conditions make HHH not a halt decider because they are
incompatible with the requirements:
It is perfectly compatible with those requirements
except in the case where the input calls its own
simulating halt decider.
In other words, not compatible. No "except".
Given any algorithm (i.e. a fixed immutable sequence of
instructions) X described as <X> with input Y:
A solution to the halting problem is an algorithm H that computes
the following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
directly
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
Error: assumes it's possible to design HHH to get a correct answer.
HHH(DD) gets the correct answer within its set
of assumptions / premises
Which is incompatible with the requirements for a halt decider:
Yes, but the requirements for a halt decider are inconsistent
with reality.
In other words, you agree with Turing and Linz that the following requirements cannot be satisfied:
On 10/12/2025 9:20 PM, olcott wrote:
Yes, but the requirements for a halt decider are inconsistent
with reality.
In other words, you agree with Turing and Linz
On 10/12/2025 8:22 PM, dbush wrote:
On 10/12/2025 9:20 PM, olcott wrote:
On 10/12/2025 3:11 PM, dbush wrote:
On 10/12/2025 11:47 AM, olcott wrote:
On 10/12/2025 9:19 AM, dbush wrote:
On 10/12/2025 9:50 AM, olcott wrote:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input
until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
These conditions make HHH not a halt decider because they are
incompatible with the requirements:
It is perfectly compatible with those requirements
except in the case where the input calls its own
simulating halt decider.
In other words, not compatible. No "except".
Given any algorithm (i.e. a fixed immutable sequence of
instructions) X described as <X> with input Y:
A solution to the halting problem is an algorithm H that computes
the following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
directly
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
Error: assumes it's possible to design HHH to get a correct answer.
HHH(DD) gets the correct answer within its set
of assumptions / premises
Which is incompatible with the requirements for a halt decider:
Yes, but the requirements for a halt decider are inconsistent
with reality.
In other words, you agree with Turing and Linz that the following
requirements cannot be satisfied:
Sure and likewise no Turing machine can
give birth to a real live fifteen story
office building. All logical impossibilities
are exactly equally logically impossible.
On 13/10/2025 02:22, dbush wrote:
On 10/12/2025 9:20 PM, olcott wrote:
<snip>
Yes, but the requirements for a halt decider are inconsistent
with reality.
In other words, you agree with Turing and Linz
He does. That's pretty much Game Over, I think.
A halt decider cannot exist
On 4/28/2025 11:54 AM, dbush wrote:
And the halting function below is not a computable function:
It is NEVER a computable function
Given any algorithm (i.e. a fixed immutable sequence of
instructions) X described as <X> with input Y:
A solution to the halting problem is an algorithm H that computes
the following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
directly
When we define the HP as having H return a value
corresponding to the halting behavior of input D,
and input D actually does the opposite of whatever
value H returns, then we have boxed ourselves
into a problem having no solution.
the logical impossibility of specifying a halt decider H
that correctly reports the halt status of input D that is
defined to do the opposite of whatever value that H reports.
Of course this is impossible.
If you frame the problem in that a halt decider must divide up
finite-string pairs into those that halt when directly executed and
those that do not, then no single program can do this.
On 5/5/2025 4:31 PM, dbush wrote:
Strawman. The square root of a dead rabbit does not exist, but the
question of whether any arbitrary algorithm X with input Y halts when
executed directly has a correct answer in all cases.
It has a correct answer that cannot ever be computed
There is no time that we are ever going to directly
encode omniscience into a computer program. The
screwy idea of a universal halt decider that is
literally ALL KNOWING is just a screwy idea.
Yes, but the requirements for a halt decider are inconsistent
with reality.
On 13/10/2025 02:22, dbush wrote:
On 10/12/2025 9:20 PM, olcott wrote:
<snip>
Yes, but the requirements for a halt decider are inconsistent
with reality.
In other words, you agree with Turing and Linz
He does. That's pretty much Game Over, I think.
On 10/12/2025 9:56 PM, olcott wrote:
On 10/12/2025 8:22 PM, dbush wrote:
On 10/12/2025 9:20 PM, olcott wrote:
On 10/12/2025 3:11 PM, dbush wrote:
On 10/12/2025 11:47 AM, olcott wrote:
On 10/12/2025 9:19 AM, dbush wrote:
On 10/12/2025 9:50 AM, olcott wrote:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its
input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
These conditions make HHH not a halt decider because they are
incompatible with the requirements:
It is perfectly compatible with those requirements
except in the case where the input calls its own
simulating halt decider.
In other words, not compatible. No "except".
Given any algorithm (i.e. a fixed immutable sequence of
instructions) X described as <X> with input Y:
A solution to the halting problem is an algorithm H that computes
the following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
directly
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
Error: assumes it's possible to design HHH to get a correct answer.
HHH(DD) gets the correct answer within its set
of assumptions / premises
Which is incompatible with the requirements for a halt decider:
Yes, but the requirements for a halt decider are inconsistent
with reality.
In other words, you agree with Turing and Linz that the following
requirements cannot be satisfied:
Sure and likewise no Turing machine can
give birth to a real live fifteen story
office building. All logical impossibilities
are exactly equally logically impossible.
So we're in agreement: no algorithm exists that can tell us if any
arbitrary algorithm X with input Y will halt when executed directly, as
proven by Turing and Linz.
On 10/12/2025 10:06 PM, Richard Heathfield wrote:
On 13/10/2025 02:22, dbush wrote:
On 10/12/2025 9:20 PM, olcott wrote:
<snip>
Yes, but the requirements for a halt decider are inconsistent
with reality.
In other words, you agree with Turing and Linz
He does. That's pretty much Game Over, I think.
And this isn't the first time.
function LoopIfYouSayItHalts (bool YouSayItHalts):
if YouSayItHalts () then
while true do {}
else
return false;
Does this program Halt?
(Your (YES or NO) answer is to be considered
translated to Boolean as the function's input
parameter)
Please ONLY PROVIDE CORRECT ANSWERS!
On 10/12/2025 10:06 PM, Richard Heathfield wrote:
On 13/10/2025 02:22, dbush wrote:
On 10/12/2025 9:20 PM, olcott wrote:
<snip>
Yes, but the requirements for a halt decider are inconsistent
with reality.
In other words, you agree with Turing and Linz
He does. That's pretty much Game Over, I think.
And this isn't the first time.
On 10/12/2025 9:15 PM, dbush wrote:
On 10/12/2025 9:56 PM, olcott wrote:
On 10/12/2025 8:22 PM, dbush wrote:
On 10/12/2025 9:20 PM, olcott wrote:
On 10/12/2025 3:11 PM, dbush wrote:
On 10/12/2025 11:47 AM, olcott wrote:
On 10/12/2025 9:19 AM, dbush wrote:
On 10/12/2025 9:50 AM, olcott wrote:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its
input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
These conditions make HHH not a halt decider because they are
incompatible with the requirements:
It is perfectly compatible with those requirements
except in the case where the input calls its own
simulating halt decider.
In other words, not compatible. No "except".
Given any algorithm (i.e. a fixed immutable sequence of
instructions) X described as <X> with input Y:
A solution to the halting problem is an algorithm H that
computes the following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when
executed directly
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
Error: assumes it's possible to design HHH to get a correct answer.
HHH(DD) gets the correct answer within its set
of assumptions / premises
Which is incompatible with the requirements for a halt decider:
Yes, but the requirements for a halt decider are inconsistent
with reality.
In other words, you agree with Turing and Linz that the following
requirements cannot be satisfied:
Sure and likewise no Turing machine can
give birth to a real live fifteen story
office building. All logical impossibilities
are exactly equally logical impossible.
So we're in agreement: no algorithm exists that can tell us if any
arbitrary algorithm X with input Y will halt when executed directly,
as proven by Turing and Linz.
In exactly the same way that: "this sentence is not true"
cannot be proven true or false. It is a bogus decision
problem anchored in a fundamentally incorrect notion of truth.
On 10/12/2025 9:17 PM, dbush wrote:
On 10/12/2025 10:06 PM, Richard Heathfield wrote:
On 13/10/2025 02:22, dbush wrote:
On 10/12/2025 9:20 PM, olcott wrote:
<snip>
Yes, but the requirements for a halt decider are inconsistent
with reality.
In other words, you agree with Turing and Linz
He does. That's pretty much Game Over, I think.
And this isn't the first time.
*The first time was back in 2004*
On 10/12/2025 9:29 PM, dbush wrote:
On 10/12/2025 10:20 PM, olcott wrote:
On 10/12/2025 9:15 PM, dbush wrote:
On 10/12/2025 9:56 PM, olcott wrote:
On 10/12/2025 8:22 PM, dbush wrote:
On 10/12/2025 9:20 PM, olcott wrote:
On 10/12/2025 3:11 PM, dbush wrote:
On 10/12/2025 11:47 AM, olcott wrote:
On 10/12/2025 9:19 AM, dbush wrote:
On 10/12/2025 9:50 AM, olcott wrote:
<snip>
So we're in agreement: no algorithm exists that can tell us if any
arbitrary algorithm X with input Y will halt when executed directly,
as proven by Turing and Linz.
In exactly the same way that: "this sentence is not true"
cannot be proven true or false. It is a bogus decision
problem anchored in
a fundamentally incorrect notion of truth.
The false assumption that such an algorithm *does* exist.
Can we correctly say that the color of your car is fifteen feet long?
For the body of analytical truth coherence is the key and
incoherence rules out truth.
On 13/10/2025 03:17, dbush wrote:
On 10/12/2025 10:06 PM, Richard Heathfield wrote:
On 13/10/2025 02:22, dbush wrote:
On 10/12/2025 9:20 PM, olcott wrote:
<snip>
Yes, but the requirements for a halt decider are inconsistent
with reality.
In other words, you agree with Turing and Linz
<snip>
In exactly the same way that: "this sentence is not true"
cannot be proven true or false. It is a bogus decision
problem anchored in
The false assumption that such an algorithm *does* exist.
On 10/12/2025 10:23 PM, olcott wrote:
On 10/12/2025 9:17 PM, dbush wrote:
On 10/12/2025 10:06 PM, Richard Heathfield wrote:
On 13/10/2025 02:22, dbush wrote:
On 10/12/2025 9:20 PM, olcott wrote:
<snip>
Yes, but the requirements for a halt decider are inconsistent
with reality.
In other words, you agree with Turing and Linz
He does. That's pretty much Game Over, I think.
And this isn't the first time.
*The first time was back in 2004*
You admitted that Turing was right in 2004? Because that's what we're talking about.
On 10/12/2025 9:29 PM, dbush wrote:
On 10/12/2025 10:23 PM, olcott wrote:
On 10/12/2025 9:17 PM, dbush wrote:
On 10/12/2025 10:06 PM, Richard Heathfield wrote:
On 13/10/2025 02:22, dbush wrote:
On 10/12/2025 9:20 PM, olcott wrote:
<snip>
Yes, but the requirements for a halt decider are inconsistent
with reality.
In other words, you agree with Turing and Linz
He does. That's pretty much Game Over, I think.
And this isn't the first time.
*The first time was back in 2004*
You admitted that Turing was right in 2004? Because that's what
we're talking about.
Go back and read and reread my 2004 words
again and again until you understand exactly
what they mean.
On 10/12/2025 10:35 PM, olcott wrote:
On 10/12/2025 9:29 PM, dbush wrote:
On 10/12/2025 10:23 PM, olcott wrote:
On 10/12/2025 9:17 PM, dbush wrote:
On 10/12/2025 10:06 PM, Richard Heathfield wrote:
On 13/10/2025 02:22, dbush wrote:
On 10/12/2025 9:20 PM, olcott wrote:
<snip>
Yes, but the requirements for a halt decider are inconsistent
with reality.
In other words, you agree with Turing and Linz
He does. That's pretty much Game Over, I think.
And this isn't the first time.
*The first time was back in 2004*
You admitted that Turing was right in 2004? Because that's what
we're talking about.
Go back and read and reread my 2004 words
again and again until you understand exactly
what they mean.
So if you agreed that Turing was right back in 2004, what have you been doing the last 21 years?
On 10/12/2025 9:38 PM, dbush wrote:
On 10/12/2025 10:35 PM, olcott wrote:
<snip>
You admitted that Turing was right in 2004? Because that's what
we're talking about.
Go back and read and reread my 2004 words
again and again until you understand exactly
what they mean.
So if you agreed that Turing was right back in 2004, what have you
been doing the last 21 years?
Read and reread the exact context of what
I said
On 10/12/2025 10:34 PM, olcott wrote:
On 10/12/2025 9:29 PM, dbush wrote:
<snip>
So we're in agreement: no algorithm exists that can tell us if any
arbitrary algorithm X with input Y will halt when executed
directly, as proven by Turing and Linz.
In exactly the same way that: "this sentence is not true"
cannot be proven true or false. It is a bogus decision
problem anchored in
a fundamentally incorrect notion of truth.
The false assumption that such an algorithm *does* exist.
Can we correctly say that the color of your car is fifteen feet long?
For the body of analytical truth coherence is the key and
incoherence rules out truth.
There is nothing incoherent about wanting to know if any arbitrary
algorithm X with input Y will halt when executed directly.
And until Turing's proof, no one knew whether or not an algorithm
existed that can determine that in *all* possible cases.
On 10/12/2025 10:55 PM, olcott wrote:
On 10/12/2025 9:38 PM, dbush wrote:
<snip>
So if you agreed that Turing was right back in 2004, what have you
been doing the last 21 years?
Read and reread the exact context of what
I said
So now you're saying you *didn't* admit Turing was right in 2004?
On 10/12/2025 9:40 PM, dbush wrote:
On 10/12/2025 10:34 PM, olcott wrote:
<snip>
For the body of analytical truth coherence is the key and
incoherence rules out truth.
There is nothing incoherent about wanting to know if any arbitrary
algorithm X with input Y will halt when executed directly.
Tarski stupidly thought this exact same sort of thing.
If a truth predicate exists then it could tell if the
Liar Paradox is true or false. Since it cannot then
there must be no truth predicate.
And until Turing's proof, no one knew whether or not an algorithm
existed that can determine that in *all* possible cases.
On 10/12/2025 9:57 PM, dbush wrote:
On 10/12/2025 10:55 PM, olcott wrote:
<snip>
So if you agreed that Turing was right back in 2004, what have you
been doing the last 21 years?
Read and reread the exact context of what
I said
Accurately paraphrase my exact words unless
this is over your intellectual capacity.
On 10/12/2025 10:57 PM, olcott wrote:
On 10/12/2025 9:40 PM, dbush wrote:
<snip>
There is nothing incoherent about wanting to know if any arbitrary
algorithm X with input Y will halt when executed directly.
Tarski stupidly thought this exact same sort of thing.
If a truth predicate exists then it could tell if the
Liar Paradox is true or false. Since it cannot then
there must be no truth predicate.
Correct. If you understood proof by contradiction you wouldn't be questioning that.
And until Turing's proof, no one knew whether or not an algorithm
existed that can determine that in *all* possible cases.
On 10/12/2025 9:59 PM, dbush wrote:
On 10/12/2025 10:57 PM, olcott wrote:
<snip>
There is nothing incoherent about wanting to know if any arbitrary
algorithm X with input Y will halt when executed directly.
Tarski stupidly thought this exact same sort of thing.
If a truth predicate exists then it could tell if the
Liar Paradox is true or false. Since it cannot then
there must be no truth predicate.
Correct.-a If you understood proof by contradiction you wouldn't be
questioning that.
It looks like ChatGPT 5.0 is the winner here.
It understood that requiring HHH to report on
the behavior of the direct execution of DD()
is requiring a function to report on something
outside of its domain.
Do you understand all those words?
Do you understand that requiring a
Turing machine to compute the square
root of a dead chicken is also requiring
the TM to compute a function outside of
its domain?
Strawman. The square root of a dead rabbit does not exist, but the
question of whether any arbitrary algorithm X with input Y halts when executed directly has a correct answer in all cases.
And until Turing's proof, no one knew whether or not an algorithm
existed that can determine that in *all* possible cases.
On 10/12/2025 11:43 PM, olcott wrote:
On 10/12/2025 9:59 PM, dbush wrote:
On 10/12/2025 10:57 PM, olcott wrote:
On 10/12/2025 9:40 PM, dbush wrote:
On 10/12/2025 10:34 PM, olcott wrote:
On 10/12/2025 9:29 PM, dbush wrote:
On 10/12/2025 10:20 PM, olcott wrote:
On 10/12/2025 9:15 PM, dbush wrote:
On 10/12/2025 9:56 PM, olcott wrote:
On 10/12/2025 8:22 PM, dbush wrote:
On 10/12/2025 9:20 PM, olcott wrote:
On 10/12/2025 3:11 PM, dbush wrote:
On 10/12/2025 11:47 AM, olcott wrote:
On 10/12/2025 9:19 AM, dbush wrote:
On 10/12/2025 9:50 AM, olcott wrote:
Please think this all the way through without making any >>>>>>>>>>>>>>>> guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates >>>>>>>>>>>>>>>> its input until:
(a) Detects a non-terminating behavior pattern: >>>>>>>>>>>>>>>> -a-a-a-a abort simulation and return 0.
(b) Simulated input reaches its simulated "return" >>>>>>>>>>>>>>>> statement:
-a-a-a-a return 1.
(c) If HHH must abort its simulation to prevent its own >>>>>>>>>>>>>>>> non- termination
-a-a-a-a then HHH is correct to abort this simulation and >>>>>>>>>>>>>>>> return 0.
These conditions make HHH not a halt decider because they >>>>>>>>>>>>>>> are incompatible with the requirements:
It is perfectly compatible with those requirements >>>>>>>>>>>>>> except in the case where the input calls its own
simulating halt decider.
In other words, not compatible.-a No "except".
Given any algorithm (i.e. a fixed immutable sequence of >>>>>>>>>>>>>>> instructions) X described as <X> with input Y:
A solution to the halting problem is an algorithm H that >>>>>>>>>>>>>>> computes the following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed >>>>>>>>>>>>>>> directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when >>>>>>>>>>>>>>> executed directly
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
-a-a int Halt_Status = HHH(DD);
-a-a if (Halt_Status)
-a-a-a-a HERE: goto HERE;
-a-a return Halt_Status;
}
int main()
{
-a-a HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
Error: assumes it's possible to design HHH to get a >>>>>>>>>>>>>>> correct answer.
HHH(DD) gets the correct answer within its set
of assumptions / premises
Which is incompatible with the requirements for a halt >>>>>>>>>>>>> decider:
Yes, but the requirements for a halt decider are inconsistent >>>>>>>>>>>> with reality.
In other words, you agree with Turing and Linz that the >>>>>>>>>>> following requirements cannot be satisfied:
Sure and likewise no Turing machine can
give birth to a real live fifteen story
office building. All logical impossibilities
are exactly equally logical impossible.
So we're in agreement: no algorithm exists that can tell us if >>>>>>>>> any arbitrary algorithm X with input Y will halt when executed >>>>>>>>> directly, as proven by Turning and Linz.
In exactly the same way that: "this sentence is not true"
cannot be proven true or false. It is a bogus decision
problem anchored in
a fundamentally incorrect notion of truth.
The false assumption that such an algorithm *does* exist.
Can we correctly say that the color of your car is fifteen feet long? >>>>>> For the body of analytical truth coherence is the key and
incoherence rules out truth.
There is nothing incoherent about wanting to know if any arbitrary
algorithm X with input Y will halt when executed directly.
Tarski stupidly thought this exact same sort of thing.
If a truth predicate exists then it could tell if the
Liar Paradox is true or false. Since it cannot then
there must be no truth predicate.
Correct.-a If you understood proof by contradiction you wouldn't be
questioning that.
It looks like ChatGPT 5.0 is the winner here.
It understood that requiring HHH to report on
the behavior of the direct execution of DD()
is requiring a function to report on something
outside of its domain.
False. It is proven true by the meaning of the words that a finite
string description of a Turing machine specifies all semantic properties
of the machine it describes, including whether that machine halts when
executed directly.
Therefore it is not outside the domain.
Do you understand all those words?
Do you understand that requiring a
Turing machine to compute the square
root of a dead chicken is also requiring
the TM to compute a function outside of
its domain?
Repeat of previously refuted point:
On 5/5/2025 4:31 PM, dbush wrote:
Strawman. The square root of a dead rabbit does not exist, but the question of whether any arbitrary algorithm X with input Y halts when executed directly has a correct answer in all cases.
This constitutes your admission that you don't understand proof by contradiction and admit that Tarski is correct.
And until Turing's proof, no one knew whether or not an algorithm
existed that can determine that in *all* possible cases.
On 10/12/2025 10:49 PM, dbush wrote:
On 10/12/2025 11:43 PM, olcott wrote:
On 10/12/2025 9:59 PM, dbush wrote:
On 10/12/2025 10:57 PM, olcott wrote:
On 10/12/2025 9:40 PM, dbush wrote:
On 10/12/2025 10:34 PM, olcott wrote:
On 10/12/2025 9:29 PM, dbush wrote:
On 10/12/2025 10:20 PM, olcott wrote:
On 10/12/2025 9:15 PM, dbush wrote:
On 10/12/2025 9:56 PM, olcott wrote:
On 10/12/2025 8:22 PM, dbush wrote:
On 10/12/2025 9:20 PM, olcott wrote:
On 10/12/2025 3:11 PM, dbush wrote:
On 10/12/2025 11:47 AM, olcott wrote:
On 10/12/2025 9:19 AM, dbush wrote:
On 10/12/2025 9:50 AM, olcott wrote:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
These conditions make HHH not a halt decider because
they are incompatible with the requirements:
It is perfectly compatible with those requirements
except in the case where the input calls its own
simulating halt decider.
In other words, not compatible. No "except".
Given any algorithm (i.e. a fixed immutable sequence of
instructions) X described as <X> with input Y:
A solution to the halting problem is an algorithm H that
computes the following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when
executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when
executed directly
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
Error: assumes it's possible to design HHH to get a correct answer.
HHH(DD) gets the correct answer within its set
of assumptions / premises
Which is incompatible with the requirements for a halt decider:
Yes, but the requirements for a halt decider are inconsistent
with reality.
In other words, you agree with Turing and Linz that the
following requirements cannot be satisfied:
Sure and likewise no Turing machine can
give birth to a real live fifteen story
office building. All logical impossibilities
are exactly equally logically impossible.
So we're in agreement: no algorithm exists that can tell us if
any arbitrary algorithm X with input Y will halt when executed
directly, as proven by Turing and Linz.
In exactly the same way that: "this sentence is not true"
cannot be proven true or false. It is a bogus decision
problem anchored in
a fundamentally incorrect notion of truth.
The false assumption that such an algorithm *does* exist.
Can we correctly say that the color of your car is fifteen feet
long?
For the body of analytical truth coherence is the key and
incoherence rules out truth.
There is nothing incoherent about wanting to know if any arbitrary
algorithm X with input Y will halt when executed directly.
Tarski stupidly thought this exact same sort of thing.
If a truth predicate exists then it could tell if the
Liar Paradox is true or false. Since it cannot then
there must be no truth predicate.
Correct. If you understood proof by contradiction you wouldn't be
questioning that.
It looks like ChatGPT 5.0 is the winner here.
It understood that requiring HHH to report on
the behavior of the direct execution of DD()
is requiring a function to report on something
outside of its domain.
False. It is proven true by the meaning of the words that a finite
string description of a Turing machine specifies all semantic
properties of the machine it describes, including whether that machine
halts when executed directly.
ChatGPT 5.0 was the first LLM to be able to prove
that is counter-factual.
Therefore it is not outside the domain.
Do you understand all those words?
Do you understand that requiring a
Turing machine to compute the square
root of a dead chicken is also requiring
the TM to compute a function outside of
its domain?
Repeat of previously refuted point:
On 5/5/2025 4:31 PM, dbush wrote:
Strawman. The square root of a dead rabbit does not exist, but the
question of whether any arbitrary algorithm X with input Y halts when
executed directly has a correct answer in all cases.
This constitutes your admission that you don't understand proof by
contradiction and admit that Tarski is correct.
And until Turing's proof, no one knew whether or not an algorithm
existed that can determine that in *all* possible cases.
On 10/12/2025 10:55 PM, olcott wrote:
On 10/12/2025 9:38 PM, dbush wrote:
So now you're saying you *didn't* admit Turing was right in 2004?
On 10/12/2025 10:35 PM, olcott wrote:
On 10/12/2025 9:29 PM, dbush wrote:
On 10/12/2025 10:23 PM, olcott wrote:
On 10/12/2025 9:17 PM, dbush wrote:
On 10/12/2025 10:06 PM, Richard Heathfield wrote:
*The first time was back in 2004*
On 13/10/2025 02:22, dbush wrote:
On 10/12/2025 9:20 PM, olcott wrote:
<snip>
Yes, but the requirements for a halt decider are
inconsistent
with reality.
In other words, you agree with Turing and Linz
He does. That's pretty much Game Over, I think.
And this isn't the first time.
You admitted that Turing was right in 2004? Because that's
what we're talking about.
Go back and read and reread my 2004 words
again and again until you understand exactly
what they mean.
So if you agreed that Turing was right back in 2004, what
have you been doing the last 21 years?
Read and reread the exact context of what
I said
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
then HHH is correct to abort this simulation and return 0.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
int Halt_Status = HHH(DD);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
int main()
{
HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
On 2025-10-12 13:50:05 +0000, olcott said:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
There is no need to prove that HHH(DD) returns 0. It is sufficient
to run it and see what it returns. Just add to the above main an
output that tells what HHH(DD) returned.
On 10/13/2025 12:12 AM, olcott wrote:
On 10/12/2025 10:49 PM, dbush wrote:
On 10/12/2025 11:43 PM, olcott wrote:
On 10/12/2025 9:59 PM, dbush wrote:
On 10/12/2025 10:57 PM, olcott wrote:
On 10/12/2025 9:40 PM, dbush wrote:
On 10/12/2025 10:34 PM, olcott wrote:
On 10/12/2025 9:29 PM, dbush wrote:
On 10/12/2025 10:20 PM, olcott wrote:
On 10/12/2025 9:15 PM, dbush wrote:
On 10/12/2025 9:56 PM, olcott wrote:
On 10/12/2025 8:22 PM, dbush wrote:
On 10/12/2025 9:20 PM, olcott wrote:
On 10/12/2025 3:11 PM, dbush wrote:
On 10/12/2025 11:47 AM, olcott wrote:
On 10/12/2025 9:19 AM, dbush wrote:
On 10/12/2025 9:50 AM, olcott wrote:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
These conditions make HHH not a halt decider because
they are incompatible with the requirements:
It is perfectly compatible with those requirements
except in the case where the input calls its own
simulating halt decider.
In other words, not compatible. No "except".
Given any algorithm (i.e. a fixed immutable sequence of
instructions) X described as <X> with input Y:
A solution to the halting problem is an algorithm H
that computes the following mapping:
(<X>,Y) maps to 1 if and only if X(Y) halts when
executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt
when executed directly
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
Error: assumes it's possible to design HHH to get a correct answer.
HHH(DD) gets the correct answer within its set
of assumptions / premises
Which is incompatible with the requirements for a halt decider:
Yes, but the requirements for a halt decider are inconsistent
with reality.
In other words, you agree with Turing and Linz that the
following requirements cannot be satisfied:
Sure and likewise no Turing machine can
give birth to a real live fifteen story
office building. All logical impossibilities
are exactly equally logically impossible.
So we're in agreement: no algorithm exists that can tell us
if any arbitrary algorithm X with input Y will halt when
executed directly, as proven by Turing and Linz.
In exactly the same way that: "this sentence is not true"
cannot be proven true or false. It is a bogus decision
problem anchored in
a fundamentally incorrect notion of truth.
The false assumption that such an algorithm *does* exist.
Can we correctly say that the color of your car is fifteen feet
long?
For the body of analytical truth coherence is the key and
incoherence rules out truth.
There is nothing incoherent about wanting to know if any
arbitrary algorithm X with input Y will halt when executed directly.
Tarski stupidly thought this exact same sort of thing.
If a truth predicate exists then it could tell if the
Liar Paradox is true or false. Since it cannot then
there must be no truth predicate.
Correct. If you understood proof by contradiction you wouldn't be
questioning that.
It looks like ChatGPT 5.0 is the winner here.
It understood that requiring HHH to report on
the behavior of the direct execution of DD()
is requiring a function to report on something
outside of its domain.
False. It is proven true by the meaning of the words that a finite
string description of a Turing machine specifies all semantic
properties of the machine it describes, including whether that
machine halts when executed directly.
ChatGPT 5.0 was the first LLM to be able to prove
that is counter-factual.
Ah, so you don't believe in semantic tautologies?
On 10/13/2025 3:58 AM, Mikko wrote:
On 2025-10-12 13:50:05 +0000, olcott said:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
There is no need to prove that HHH(DD) returns 0. It is sufficient
to run it and see what it returns. Just add to the above main an
output that tells what HHH(DD) returned.
What value should HHH(DD) correctly return? (within its premises)
This is not at all the same thing as what value does HHH(DD) return.
On 10/12/2025 9:25 PM, Richard Heathfield wrote:
On 13/10/2025 03:17, dbush wrote:
On 10/12/2025 10:06 PM, Richard Heathfield wrote:
On 13/10/2025 02:22, dbush wrote:
On 10/12/2025 9:20 PM, olcott wrote:
<snip>
Yes, but the requirements for a halt decider are inconsistent
with reality.
In other words, you agree with Turing and Linz
In exactly the same way that: "this sentence is not true"
cannot be proven true or false. It is a bogus decision
problem anchored in a fundamentally incorrect notion of truth.
On 10/13/2025 3:58 AM, Mikko wrote:
On 2025-10-12 13:50:05 +0000, olcott said:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
There is no need to prove that HHH(DD) returns 0. It is sufficient
to run it and see what it returns. Just add to the above main an
output that tells what HHH(DD) returned.
What value should HHH(DD) correctly return? (within its premises)
This is not at all the same thing as what value does HHH(DD) return.
On 2025-10-13 15:53:25 +0000, olcott said:
On 10/13/2025 3:58 AM, Mikko wrote:
On 2025-10-12 13:50:05 +0000, olcott said:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input
until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
There is no need to prove that HHH(DD) returns 0. It is sufficient
to run it and see what it returns. Just add to the above main an
output that tells what HHH(DD) returned.
What value should HHH(DD) correctly return? (within its premises)
This is not at all the same thing as what value does HHH(DD) return.
The behaviour of HHH is already fully determined when DD is presented to
it so at that time it is too late to ask the question. But the answer is
that the value HHH(DD) does not return is the correct value to return.
On 10/14/2025 4:33 AM, Mikko wrote:
On 2025-10-13 15:53:25 +0000, olcott said:
On 10/13/2025 3:58 AM, Mikko wrote:
On 2025-10-12 13:50:05 +0000, olcott said:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
There is no need to prove that HHH(DD) returns 0. It is sufficient
to run it and see what it returns. Just add to the above main an
output that tells what HHH(DD) returned.
What value should HHH(DD) correctly return? (within its premises)
This is not at all the same thing as what value does HHH(DD) return.
The behaviour of HHH is already fully determined when DD is presented to
it so at that time it is too late to ask the question. But the answer is
that the value HHH(DD) does not return is the correct value to return.
<Input to LLM systems>
Please think this all the way through without making any guesses.
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
then HHH is correct to abort this simulation and return 0.
See also
[HHH(DD)==0 and the directly executed DD()
proven not in the domain of HHH]
On 2025-10-14 16:17:22 +0000, olcott said:
On 10/14/2025 4:33 AM, Mikko wrote:
On 2025-10-13 15:53:25 +0000, olcott said:
On 10/13/2025 3:58 AM, Mikko wrote:
On 2025-10-12 13:50:05 +0000, olcott said:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input
until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
There is no need to prove that HHH(DD) returns 0. It is sufficient
to run it and see what it returns. Just add to the above main an
output that tells what HHH(DD) returned.
What value should HHH(DD) correctly return? (within its premises)
This is not at all the same thing as what value does HHH(DD) return.
The behaviour of HHH is already fully determined when DD is presented to
it so at that time it is too late to ask the question. But the answer is
that the value HHH(DD) does not return is the correct value to return.
<Input to LLM systems>
Please think this all the way through without making any guesses.
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
See also
[HHH(DD)==0 and the directly executed DD()
  proven not in the domain of HHH]
Reminds me of Asimov's "Liar!".
On 10/15/2025 3:43 AM, Mikko wrote:
On 2025-10-14 16:17:22 +0000, olcott said:
On 10/14/2025 4:33 AM, Mikko wrote:
On 2025-10-13 15:53:25 +0000, olcott said:
On 10/13/2025 3:58 AM, Mikko wrote:
On 2025-10-12 13:50:05 +0000, olcott said:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
There is no need to prove that HHH(DD) returns 0. It is sufficient
to run it and see what it returns. Just add to the above main an
output that tells what HHH(DD) returned.
What value should HHH(DD) correctly return? (within its premises)
This is not at all the same thing as what value does HHH(DD) return.
The behaviour of HHH is already fully determined when DD is presented to
it so at that time it is too late to ask the question. But the answer is
that the value HHH(DD) does not return is the correct value to return.
<Input to LLM systems>
Please think this all the way through without making any guesses.
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
See also
[HHH(DD)==0 and the directly executed DD()
  proven not in the domain of HHH]
Reminds me of Asimov's "Liar!".
https://en.wikipedia.org/wiki/Liar!_(short_story)
I think that there was an episode of Star Trek with the same plot.