Python <python@invalid.org> writes:
Olcott (annotated):
If simulating halt decider H correctly simulates its input D until H
correctly determines that its simulated D would never stop running
[comment: as D halts, the simulation is faulty; Prof. Sipser has been
fooled by Olcott's shell-game confusion between "pretending to
simulate" and "correctly simulating"]
unless aborted then H can abort its simulation of D and correctly
report that D specifies a non-halting sequence of configurations.
I don't think that is the shell game. PO really /has/ an H (it's
trivial to do for this one case) that correctly determines that P(P)
*would* never stop running *unless* aborted. He knows and accepts that
P(P) actually does stop. The wrong answer is justified by what would
happen if H (and hence a different P) were not what they actually are.
(I've gone back to his previous names, where P is Linz's H^.)
In other words: "if the simulation were right the answer would be
right".
I don't think that's the right paraphrase. He is saying if P were
different (built from a non-aborting H) H's answer would be the right
one.
But the simulation is not right. D actually halts.
But H determines (correctly) that D would not halt if it were not
halted. That much is a truism. What's wrong is to pronounce that
answer as being correct for the D that does, in fact, stop.
And Peter Olcott is a [*beep*]
It's certainly dishonest to claim support from an expert who clearly
does not agree with the conclusions. Pestering, and then tricking,
someone into agreeing to some vague hypothetical is not how academic
research is done. Had PO come clean and ended his magic paragraph with
"and therefore 'does not 'halt' is the correct answer even though D
halts" he would have got a more useful reply.
Let's keep in mind this is exactly what he's saying:
"Yes [H(P,P) == false] is the correct answer even though P(P) halts."
Why? Because:
"we can prove that Halts() did make the correct halting decision when
we comment out the part of Halts() that makes this decision and
H_Hat() remains in infinite recursion"
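[note: for readers following along, here is a minimal compilable C
sketch of the construction under discussion, using the Halts/H_Hat
names from the quote above. The bodies are stubs that simply return
the verdicts described in the thread; this is a guess at the shape of
the code, not PO's actual code.]

#include <stdio.h>

typedef int (*prog)(void);

int H_Hat(void);                /* forward declaration: Linz's H^, PO's D */

/* Stand-in for the simulating decider.  The real one is described as
   stepping through p() and aborting when the simulation reaches
   another call to Halts(p); here it just returns that verdict.       */
int Halts(prog p)
{
    (void)p;
    return 0;                   /* 0 = "p() would never stop running" */
}

/* Built to contradict whatever Halts says about it. */
int H_Hat(void)
{
    if (Halts(H_Hat))           /* ask the decider about this program */
        for (;;) ;              /* loop forever if told "halts"       */
    return 0;                   /* halt if told "does not halt"       */
}

int main(void)
{
    printf("H_Hat() returned %d, i.e. it halted.\n", H_Hat());
    printf("Halts(H_Hat) says %d, i.e. \"does not halt\".\n", Halts(H_Hat));
    return 0;
}

The point of contention is visible in the output: H_Hat() returns, so
it halts, yet Halts(H_Hat) reports 0. Commenting out the abort in the
real Halts would indeed leave its simulation running forever, but then
Halts would never answer at all, and the H_Hat built from that silent
Halts is a different program from the one that actually runs here.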
On 2025-10-14, olcott <polcott333@gmail.com> wrote:
On 10/14/2025 12:25 PM, Kaz Kylheku wrote:
On 2025-10-14, olcott <NoOne@NoWhere.com> wrote:
1. A decider's domain is its input encoding, not the physical program
Every total computable function, including a hypothetical halting
decider, is, formally, a mapping
H: Σ* → {0,1}
It's obvious you used AI to write this.
I did not exactly use AI to write this.
AI took my ideas and paraphrased them
into its "understanding".
That's what is called "writing with AI" or "writing using AI",
or "AI assisted writing".
If I wanted to say that you flatly generated the content with AI,
so that the ideas are not yours, I would use that wording.
Obviously, the ideas are yours or very similar to yours in
a different wording.
I was able to capture the entire dialog
with formatting as 27 pages of text.
I will publish this very soon.
Please don't.
*It is all on this updated link*
https://chatgpt.com/share/68ee799d-d548-8011-9227-dce897245daa
That's a good thing because it's a lot smoother and readable than the
utter garbage that you write by yourself!
I always needed a reviewer that could fully understand
and validate my ideas to the extent that they are correct.
It looks like ChatGPT 5.0 is that agent.
It's behaving as nothing more than a glorified grammar, wording, and style fixer.
When it verifies my ideas it does this by paraphrasing
them into its own words and then verifies that these
paraphrased words are correct.
While it is paraphrasing, it is doing no such thing as verifying
that the ideas are correct.
It's just regurgitating your idiosyncratic crank ideas, almost verbatim
in their original form, though with more smooth language.
Please, from now on, do not /ever/ write anything in comp.theory that
is not revised by AI.
As soon as humans verify the reasoning of my
paraphrased words it seems that I will finally
have complete closure on the halting problem stuff.
It's been my understanding that you are using the Usenet newsgroup
as a staging ground for your ideas, so that you can improve them and
formally present them to CS academia.
Unfortunately, if you examine your behavior, you will see that you are
not on this trajectory at all, and never have been. You are hardly
closer to the goal than 20 years ago.
You've not seriously followed up on any of the detailed rebuttals of
your work; instead insisting that you are correct and everyone is
simply not intelligent enough to understand it.
So it is puzzling why you choose to stay (for years!) in a review pool
in which you don't find the reviewers to be helpful at all; you
find them lacking and dismiss every one of their points.
How is that supposed to move you toward your goal?
In the world, there is such a thing as the reviewers of an intellectual
work being too stupid to be of use. But in such cases, the author
quickly gets past such reviewers and finds others. Especially in cases
where they are just volunteers from the population, and not assigned
by an institution or journal.
In other words, how is it possible that you allow reviewers you have
/found yourself/ in the wild and whom you do not find to have
suitable capability, to block your progress?
(With the declining popularity of Usenet, do you really think that
academia will suddenly come to comp.theory, displacing all of us
idiots that are here now, if you just stick around here long enough?)
where Σ* is the set of all finite strings (program encodings).
What H computes is determined entirely by those encodings and its own
transition rules.
Great. D is such a string, and has one correct answer.
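[note: spelling out, in ordinary textbook notation, the mapping that a
halting decider would have to compute; the formulation below is the
standard one and is not taken from PO's transcript.]

\[
  H : \Sigma^* \to \{0,1\}, \qquad
  H(\langle D \rangle) =
  \begin{cases}
    1 & \text{if the computation encoded by } \langle D \rangle \text{ halts,} \\
    0 & \text{otherwise.}
  \end{cases}
\]

Since <D> is one fixed finite string and the computation it encodes
does, in fact, halt, the value this mapping assigns to <D> is 1; a
"decider" that returns 0 on that string is computing some other
function.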
That is where ChatGPT totally agrees that the
halting problem directly contradicts reality.
You've convinced the bot to reproduce writing which states
that there is a difference between simulation and "direct execution",
which is false. Machines are abstractions. All executions of them
are simulations of the abstraction.
E.g. an Intel chip is a simulator of the abstract instruction set.
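[note: to make "every execution is a simulation of the abstraction"
concrete, here is a toy fetch-decode-execute loop in C; the
three-instruction machine is invented purely for illustration and has
nothing to do with x86_utm.]

#include <stdio.h>

enum { HALT, INC, JMP };                 /* invented 3-op instruction set */
struct insn { int op, arg; };

/* "Directly executing" a program for this little machine just means
   stepping its abstract transition rule -- which is exactly what a
   simulator does.                                                     */
static void run(const struct insn *prog)
{
    int pc = 0, acc = 0;
    for (;;) {
        struct insn i = prog[pc];        /* fetch            */
        switch (i.op) {                  /* decode + execute */
        case INC:  acc += i.arg; pc++; break;
        case JMP:  pc = i.arg;         break;
        case HALT: printf("acc = %d\n", acc); return;
        }
    }
}

int main(void)
{
    const struct insn prog[] = { {INC, 2}, {INC, 3}, {HALT, 0} };
    run(prog);                           /* prints "acc = 5" */
    return 0;
}

A hardware implementation of the same machine would do nothing
different in principle; it would only step the same abstract rule
faster.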
On top of that, in your x86_utm, what you are calling "direct
exzecution" is actually simulated.
Moreover, HHH1(DD) perpetrates a stepwise simulation at a parallel
"level", using a very similar approach to HHH(DD).
It's even the same code, other than the function name.
The difference being that DD calls HHH and not HHH1.
(And you've made function names/addresses falsely significant in your system.)
HHH1(DD) is a simulation of the same nature as HHH except for
not checking for abort criteria, making it a much more faithful
simulation. HHH1(DD) concludes with a 1.
How can that not be the one and only correct result?
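[note: extending the sketch above to the HHH/HHH1/DD naming, again with
stub bodies standing in for the step-wise x86 simulation; only the call
structure is taken from the thread. The asymmetry is that DD calls HHH
and never HHH1, so HHH is asked about a program that re-invokes HHH,
while HHH1 is asked about a program that invokes a different function
and can therefore be watched to completion.]

#include <stdio.h>

typedef int (*func)(void);

int DD(void);                   /* forward declaration                 */

/* Stand-in for the aborting simulator: the real HHH is described as
   stopping its simulation of DD when DD re-invokes HHH; here it just
   returns that verdict.                                               */
int HHH(func p)
{
    (void)p;
    return 0;                   /* "does not halt"                     */
}

/* Stand-in for the same simulator minus the abort check: DD never
   calls HHH1, so the real HHH1 watches DD() finish; here it just
   returns that verdict.                                               */
int HHH1(func p)
{
    (void)p;
    return 1;                   /* "halts"                             */
}

int DD(void)
{
    if (HHH(DD))                /* DD consults HHH, not HHH1           */
        for (;;) ;              /* loop if HHH says "halts"            */
    return 0;                   /* halt if HHH says "does not halt"    */
}

int main(void)
{
    printf("DD()     = %d  (DD halts)\n", DD());
    printf("HHH1(DD) = %d\n", HHH1(DD));
    printf("HHH(DD)  = %d\n", HHH(DD));
    return 0;
}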
"Formal computability theory is internally consistent,
but it presupposes that 'the behavior of the encoded
program' is a formal object inside the same domain
as the decider's input. If that identification is treated
as a fact about reality rather than a modeling convention,
then yes, it would be a false assumption."
Does this say that the halting problem is contradicting
"Does this say?" That's your problem; you generated this with our
long chat with AI.
Before you finalize your wording paraphrased with AI and share it with others, be sure you have to questions yourself about what it says!!!
Doh?
reality when it stipulates that the executable and the
input are in the same domain because in fact they are
not in the same domain?
No; it's saying that the halting problem is confined to a formal,
abstract domain which is not to be confused with some concept of
"reality".
Maybe in reality, machines that transcend the Turing computational
model are possible. (We have not found them.)
In any case, the Halting Theorem is carefully about the formal
abstraction; it doesn't conflict with "reality" because it doesn't
make claims about "reality".
https://chatgpt.com/share/68ee799d-d548-8011-9227-dce897245daa
Yes, that's exactly what follows from your reasoning.
It goes on and on showing all the details of how I
am correct.
If you start with your writing whereby you assume you are correct, and
get AI to polish it for you, of course the resulting wording still
assumes you are correct.