On 2025-11-13, olcott <polcott333@gmail.com> wrote:
H computes the mapping from its input to the behavior
that this actual input actually specifies as measured
by N statements of D simulated by H according to the
semantics of the C language until N statements of D
match their non-halting behavior pattern:
If the computation D is known to terminate in N + 5 steps,
then that measure is simply not long enough.
You're measuring a 15' room with a 12' measuring tape
and declaring the room to be infinite.
D calls H(D) twice in sequence with the same argument
Really? Let's look at the code:
int D()
{
    int Halt_Status = H(D);
    if (Halt_Status)
        HERE: goto HERE;
    return Halt_Status;
}
No competent programmer would look at that and say that D
calls H twice.
On 2025-11-19, olcott <polcott333@gmail.com> wrote:
On 11/18/2025 7:01 PM, Kaz Kylheku wrote:
On 2025-11-18, olcott <polcott333@gmail.com> wrote:
On 11/18/2025 3:21 PM, Kaz Kylheku wrote:
On 2025-11-18, olcott <polcott333@gmail.com> wrote:
If you ask a decider to determine if my
sister's name is "Sally" and I don't tell
it who I am then the information contained
in the input is insufficient. This does not
in any way limit computation itself.
The problem is that UTM(D) can work out the fact that
D halts. Why is it that UTM knows that D's sister's
name is Sally, but H does not?
UTM(D) is answering a different question.
(a) It is not providing any answer at all.
Well, of course, by "UTM" we mean a /decider/ that purely simulates:
bool UTM(ptr P) {
    sim S = sim_create(P);
    sim_step_exhaustively(S);
    return true;
}
All deciders applied to D are tasked with answering exactly the same
question.
Pretending that a different question was asked is nonproductive;
the answer will be interpreted to the original question.
All the information needed to answer is positively contained in D.
It is just too complex relative to H.
What The F does UTM decide when DD calls UTM(DD)?
That doesn't happen; DD calls HHH(DD).
A diagonal function set against UTM, call it DDUTM,
cannot be decided by UTM(DDUTM).
That call simply does not return.
On 11/18/2025 8:53 PM, Kaz Kylheku wrote:
A diagonal function set against UTM, call it DDUTM,
cannot be decided by UTM(DDUTM).
That call simply does not return.
Yes, and the other one does return, proving the
whole point that I have been making for three
years, which everyone (besides Ben) was too damned
dishonest to acknowledge has been true all along.
D simulated by H cannot possibly reach its own
simulated final halt state.
I am not going to talk about any nonsense of
resuming a simulation after we already have this
final answer.
We just proved that the input to H(D) specifies
non-halting. Anything beyond this is flogging a
dead horse.
news://news.eternal-september.org/20251104183329.967@kylheku.com
On 11/4/2025 8:43 PM, Kaz Kylheku wrote:
On 2025-11-05, olcott <polcott333@gmail.com> wrote:
The whole point is that D simulated by H
cannot possibly reach its own simulated
"return" statement no matter what H does.
Yes; this doesn't happen while H is running.
So while H does /something/, no matter what H does,
that D simulation won't reach the return statement.
On 06.11.2025 at 21:48, olcott wrote:
D simulated by H cannot possibly reach its own
simulated final halt state.
What you do is like thinking in circles before falling asleep.
It never ends. You're gonna die with that for sure sooner or later.
On 11/25/2025 9:20 AM, Bonita Montero wrote:
What you do is like thinking in circles before falling asleep.
It never ends. You're gonna die with that for sure sooner or later.
I now have four different LLM AI models that prove
that I am correct on the basis that they derive the
proof steps that prove that I am correct.
Even Kimi that was dead set against me now fully
understands my new formal foundation for correct
reasoning.
On 25.11.2025 at 16:47, olcott wrote:
On 11/25/2025 9:20 AM, Bonita Montero wrote:
What you do is like thinking in circles before falling asleep.
It never ends. You're gonna die with that for sure sooner or later.
I now have four different LLM AI models that prove
that I am correct on the basis that they derive the
proof steps that prove that I am correct.
It doesn't matter whether you're correct. There's no benefit
in discussing such a theoretical topic for years. You won't
even stop if everyone tells you you're right.
Even Kimi that was dead set against me now fully
understands my new formal foundation for correct
reasoning.
On 11/25/2025 9:50 AM, Bonita Montero wrote:
It doesn't matter whether you're correct. There's no benefit
in discussing such a theoretical topic for years. You won't
even stop if everyone tells you you're right.
My whole purpose in this has been to establish a
new foundation for correct reasoning.
The timing for such a system is perfect because it
could solve the LLM AI reliability issues.
On 11/12/2025 8:25 PM, Kaz Kylheku wrote:
If those two are in any way whatsoever different, the entire
castle you built in the sand is washed away.
*This is a FOREVER thing until someone admits the truth*
int D()
{
    int Halt_Status = H(D);
    if (Halt_Status)
        HERE: goto HERE;
    return Halt_Status;
}
Everyone here rejects that the execution trace
of 5 statements of D simulated by H according to
the semantics of C is this:
(1) H simulates D that calls H(D)
(2) that simulates D that calls H(D)
(3) that simulates D that calls H(D)
(4) that simulates D that calls H(D)
(5) that simulates D that calls H(D)