Does there exist a single halt decider that can
compute the mapping from its finite string input(s)
to an accept or reject value on the basis of the
semantic halting property specified by this/these
finite string input(s) for all inputs?
*Defines a different result as shown below*
From just my own two sentences (a) and (b), five LLM
systems figured out how to correctly decide the halting
problem's counterexample input.
They all figured out the recursive simulation non-halting
behavior pattern on their own.
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
  (a) Detects a non-terminating behavior pattern:
      abort simulation and return 0.
  (b) Simulated input reaches its simulated "return" statement:
      return 1.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
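For concreteness, here is a minimal compilable sketch of the recursion
pattern. It is NOT the simulating HHH specified above: it replaces
simulation with a direct call plus a re-entry guard, and the guard
variable under_analysis is an assumption of this sketch only.

#include <stdio.h>

typedef int (*ptr)(void);

int HHH(ptr P);

int DD(void)
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;          /* loop forever if HHH reports halting */
  return Halt_Status;
}

int HHH(ptr P)
{
  static ptr under_analysis = 0;
  if (under_analysis == P)    /* rule (a): P called HHH(P) from inside its
                                 own analysis; treat that as the
                                 non-terminating recursion pattern        */
    return 0;
  under_analysis = P;
  P();                        /* crude stand-in for "simulate the input"  */
  under_analysis = 0;
  return 1;                   /* rule (b): the input returned             */
}

int main(void)
{
  printf("HHH(DD) = %d\n", HHH(DD));
  return 0;
}

With this toy guard the nested HHH(DD) call returns 0 by rule (a), DD
therefore returns, and the outermost call reports 1 by rule (b); the
fact that the nested and the outermost invocations disagree is what the
rest of this thread argues about.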
*Here are the best three*
https://claude.ai/share/da9e56ba-f4e9-45ee-9f2c-dc5ffe10f00c
https://chatgpt.com/share/68939ee5-e2f8-8011-837d-438fe8e98b9c
https://grok.com/share/c2hhcmQtMg%3D%3D_810120bb-5ab5-4bf8-af21-eedd0f09e141
On 25.09.2025 at 15:56, olcott wrote:
<snip>
Do something more meaningful with your life than discussing
the same detail for years. I find that absolutely insane.
The semantic properties of finite strings are
the key aspect of the halting problem that no
one has ever properly addressed.
On 25.09.2025 at 15:56, olcott wrote:
<snip>
Do something more meaningful with your life than discussing
the same detail for years. I find that absolutely insane.
On 9/25/2025 7:21 AM, Bonita Montero wrote:
On 25.09.2025 at 15:56, olcott wrote:
<snip>
Do something more meaningful with your life than discussing
the same detail for years. I find that absolutely insane.
I hope he listens to you.
On 25.09.2025 at 22:18, Chris M. Thomasson wrote:
On 9/25/2025 7:21 AM, Bonita Montero wrote:
On 25.09.2025 at 15:56, olcott wrote:
<snip>
Do something more meaningful with your life than discussing
the same detail for years. I find that absolutely insane.
I hope he listens to you.
<3
On 9/26/2025 10:44 AM, Bonita Montero wrote:
On 25.09.2025 at 22:18, Chris M. Thomasson wrote:
On 9/25/2025 7:21 AM, Bonita Montero wrote:
On 25.09.2025 at 15:56, olcott wrote:
<snip>
Do something more meaningful with your life than discussing
the same detail for years. I find that absolutely insane.
I hope he listens to you.
<3
Two other PhD computer scientists agree with me.
https://www.cs.toronto.edu/~hehner/PHP.pdf
https://www.complang.tuwien.ac.at/anton/euroforth/ef17/papers/stoddart.pdf
On 26/09/2025 16:56, olcott wrote:
<snip>
Two other PhD computer scientists agree with me.
That's an attempt at an appeal to authority, but it isn't a convincing argument. There must be many /thousands/ of Comp Sci PhDs who've studied
the Halting Problem (for the 10 minutes it takes to drink a cup of
coffee while they run the proof through their minds) and who have no
problem with it whatsoever.
On 9/26/2025 12:05 PM, Richard Heathfield wrote:
On 26/09/2025 16:56, olcott wrote:
<snip>
Two other PhD computer scientists agree with me.
That's an attempt at an appeal to authority, but it isn't a
convincing argument. There must be many /thousands/ of Comp Sci
PhDs who've studied the Halting Problem (for the 10 minutes it
takes to drink a cup of coffee while they run the proof through
their minds) and who have no problem with it whatsoever.
And of course you can dismiss whatever they say
without looking at a single word because majority
consensus has never been shown to be less than
totally infallible.
The economist J.K. Galbraith once wrote, "Faced with a choice
between changing one's mind and proving there is no need to do
so, almost everyone gets busy with the proof."
Leo Tolstoy was even bolder: "The most difficult subjects can be
explained to the most slow-witted man if he has not formed any
idea of them already; but the simplest thing cannot be made clear
to the most intelligent man if he is firmly persuaded that he
knows already, without a shadow of doubt, what is laid before him."
What's going on here? Why don't facts change our minds?
And why
would someone continue to believe a false or inaccurate idea
anyway?
On 26/09/2025 18:17, olcott wrote:
On 9/26/2025 12:05 PM, Richard Heathfield wrote:
On 26/09/2025 16:56, olcott wrote:
<snip>
Two other PhD computer scientists agree with me.
That's an attempt at an appeal to authority, but it isn't a
convincing argument. There must be many /thousands/ of Comp Sci PhDs
who've studied the Halting Problem (for the 10 minutes it takes to
drink a cup of coffee while they run the proof through their minds)
and who have no problem with it whatsoever.
And of course you can dismiss whatever they say
without looking at a single word because majority
consensus have never been shown to be less than
totally infallible.
That isn't what I said. I said that for every PhD you can appeal to who doesn't understand the proof, there will be thousands who do understand
the proof.
The economist J.K. Galbraith once wrote, "Faced with a choice between
changing one's mind and proving there is no need to do so, almost
everyone gets busy with the proof."
We don't even have to do that, because there is no need to change our
minds and the proof is already written.
Contrary to what you appear to believe, a proof doesn't mean someone got
it wrong. It means someone proved they're right.
Leo Tolstoy was even bolder: "The most difficult subjects can be
explained to the most slow-witted man if he has not formed any idea of
them already; but the simplest thing cannot be made clear to the most
intelligent man if he is firmly persuaded that he knows already,
without a shadow of doubt, what is laid before him."
You are the proof. The Halting Problem is remarkably simple, but as a
self-identified genius you are so sure it's mistaken that it has taken people 20 years to persuade you that DD proves HHH is not a halting
decider for DD, even though it's never *once* got the answer right, and having briefly accepted it, you have already returned to your overturned bowl; I suppose 20 years is a hard habit to break.
What's going on here? Why don't facts change our minds?
Like "DD halts", you mean?
And why would someone continue to believe a false or inaccurate idea
anyway?
Because you're so full of it you can't get rid of it?
How do such behaviors serve us?
They don't. You have pissed away 20 years.
On 9/26/2025 1:01 PM, Richard Heathfield wrote:
On 26/09/2025 18:17, olcott wrote:
On 9/26/2025 12:05 PM, Richard Heathfield wrote:
On 26/09/2025 16:56, olcott wrote:
<snip>
Two other PhD computer scientists agree with me.
That's an attempt at an appeal to authority, but it isn't a
convincing argument. There must be many /thousands/ of Comp
Sci PhDs who've studied the Halting Problem (for the 10
minutes it takes to drink a cup of coffee while they run the
proof through their minds) and who have no problem with it
whatsoever.
And of course you can dismiss whatever they say
without looking at a single word because majority
consensus have never been shown to be less than
totally infallible.
That isn't what I said. I said that for every PhD you can
appeal to who doesn't understand the proof, there will be
thousands who do understand the proof.
Showing that two PhD computer scientists agree
with my position makes it quite implausible
that I am a mere crackpot.
The economist J.K. Galbraith once wrote, "Faced with a choice
between changing one's mind and proving there is no need to do
so, almost everyone gets busy with the proof."
We don't even have to do that, because there is no need to
change our minds and the proof is already written.
In other words, because you are sure that I
must be wrong, there is no need to pay close
attention to what I say.
*The halting problem proof question is this*
What correct halt status value can be returned
when the input to a halt decider actually does
the opposite of whatever value is returned?
On 2025-09-26, olcott <polcott333@gmail.com> wrote:
On 9/26/2025 12:05 PM, Richard Heathfield wrote:
On 26/09/2025 16:56, olcott wrote:
<snip>
Two other PhD computer scientists agree with me.
That's an attempt at an appeal to authority, but it isn't a convincing
argument. There must be many /thousands/ of Comp Sci PhDs who've studied
the Halting Problem (for the 10 minutes it takes to drink a cup of
coffee while they run the proof through their minds) and who have no
problem with it whatsoever.
And of course you can dismiss whatever they say
without looking at a single word because majority
consensus have never been shown to be less than
totally infallible.
Consensus in mathematics /is/ pretty much infallible.
Mathematicians are a very careful bunch; they generally know when they
have a conjecture and when they have a proved theorem.
Some of their proofs go for hundreds of pages. Peer reviews of such
proofs are able to find flaws.
You are not going to find a flaw in a massively accepted, old result
that, in its most succinct presentations, takes about a page.
You might as well look for a dry spot on a T-shirt that was loaded
with rocks and is sitting at the bottom of the ocean.
The economist J.K. Galbraith once wrote, "Faced with a choice between
changing one's mind and proving there is no need to do so, almost
everyone gets busy with the proof."
That's a thought-terminating cliche, just like, oh, the idea that the
more someone protests an accusation, the more guilty he must be.
As if someone innocent would not protest?
Are you saying that the consensus is /never/ right? Everyone
whosoever has a contrary opinion to a mathematical consensus is right?
Merely the possession of a contrary opinion is evidence of having
outwitted everyone?
What's going on here? Why don't facts change our minds? And why would
You've not presented any facts, see.
And your approach to a problem
in logic is to try to redefine it in some handwavy "extralogical" way
and then simply insist that anyone having to do with the original
problem should drop that and make the replacement one their agenda.
You are not able to earnestly engage with the subject matter in its
proper form.
*The conventional halting problem question is this*
Does there exist a single halt decider that
can correctly report the halt status of the
behavior of a directly executed machine on
the basis of this machine's machine description?
*The conventional halting problem proof question is this*
What correct halt status value can be returned
when the input to a halt decider actually does
the opposite of whatever value is returned?
These above conventional views are proven.
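The construction behind that proof question fits in a few lines of C.
The decider H below is hypothetical and its body is only a placeholder
so the sketch compiles; the point is the shape of D, which does the
opposite of whatever H reports. (The classical construction passes a
separate input string to H; this sketch follows the thread's
one-argument style.)

#include <stdio.h>

typedef int (*ptr)(void);

/* Hypothetical total halt decider: assumed to return 1 if calling P()
   would halt and 0 if it would run forever.  No correct implementation
   exists; this body is a placeholder so the sketch compiles. */
int H(ptr P)
{
  (void)P;
  return 0;
}

int D(void)
{
  if (H(D))        /* if H reports "D halts" ...                */
    for (;;) { }   /* ... then D runs forever                   */
  return 0;        /* if H reports "D never halts", D halts now */
}

int main(void)
{
  /* Whichever value H(D) returns, D's actual behavior contradicts it:
     returning 1 makes D loop forever, returning 0 makes D halt. */
  printf("H(D) = %d\n", H(D));
  return 0;
}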
On 26/09/2025 19:10, olcott wrote: [SNIP]
On 9/26/2025 2:28 PM, Kaz Kylheku wrote:
On 2025-09-26, olcott <polcott333@gmail.com> wrote:
On 9/26/2025 12:05 PM, Richard Heathfield wrote:
On 26/09/2025 16:56, olcott wrote:
<snip>
Two other PhD computer scientists agree with me.
That's an attempt at an appeal to authority, but it isn't a convincing
argument. There must be many /thousands/ of Comp Sci PhDs who've studied
the Halting Problem (for the 10 minutes it takes to drink a cup of
coffee while they run the proof through their minds) and who have no
problem with it whatsoever.
And of course you can dismiss whatever they say
without looking at a single word because majority
consensus have never been shown to be less than
totally infallible.
Consensus in mathematics /is/ pretty much infallible.
That is like pretty much sterile.
Generally very reliable seems apt.
Math and logic people will hold to views that
are philosophically primarily because they view
knowledge in their field to be pretty much infallible.
The big mistake of logic is that it does not retain
semantics as fully integrated into its formal expressions.
That is how we get nutty things like the Principle of Explosion.
https://en.wikipedia.org/wiki/Principle_of_explosion
On 2025-09-26 13:49, olcott wrote:
*The conventional halting problem question is this*
Does there exist a single halt decider that
can correctly report the halt status of the
behavior of a directly executed machine on
the basis of this machine's machine description.
*The conventional halting problem proof question is this*
What correct halt status value can be returned
when the input to a halt decider actually does
the opposite of whatever value is returned?
These above conventional views are proven.
Those are questions. You can't prove a question. You prove statements.
And neither of those are conventional. You can't make up your own formulations and then declare them to be conventional.
André
On 9/26/2025 3:00 PM, André G. Isaak wrote:
On 2025-09-26 13:49, olcott wrote:
*The conventional halting problem question is this*
Does there exist a single halt decider that
can correctly report the halt status of the
behavior of a directly executed machine on
the basis of this machine's machine description.
*The conventional halting problem proof question is this*
What correct halt status value can be returned
when the input to a halt decider actually does
the opposite of whatever value is returned?
These above conventional views are proven.
Those are questions. You can't prove a question. You prove statements.
And neither of those are conventional. You can't make up your own
formulations and then declare them to be conventional.
André
Any statement or question that is semantically
equivalent to another can be replaced by this
other expression of language while retaining
the same essential meaning.
On 2025-09-26, olcott <polcott333@gmail.com> wrote:
On 9/26/2025 2:28 PM, Kaz Kylheku wrote:
On 2025-09-26, olcott <polcott333@gmail.com> wrote:
On 9/26/2025 12:05 PM, Richard Heathfield wrote:
On 26/09/2025 16:56, olcott wrote:
<snip>
Two other PhD computer scientists agree with me.
That's an attempt at an appeal to authority, but it isn't a convincing
argument. There must be many /thousands/ of Comp Sci PhDs who've studied
the Halting Problem (for the 10 minutes it takes to drink a cup of
coffee while they run the proof through their minds) and who have no
problem with it whatsoever.
And of course you can dismiss whatever they say
without looking at a single word because majority
consensus have never been shown to be less than
totally infallible.
Consensus in mathematics /is/ pretty much infallible.
That is like pretty much sterile.
Sometimes things are sterile and that is good. Like your surgeon's
gloves, or the interior of your next can of beans, and such.
Generally very reliable seems apt.
You don't even know the beginning of it.
Math and logic people will hold to views that
are philosophically primarily because they view
knowledge in their field to be pretty much infallible.
Formal systems are artificial inventions evolving from their axioms.
While we can't say that we know everything about a system just
because we invented its axioms, we know when we have captured an
air-tight truth.
It is not a situation in which we are relying on hypotheses,
observations and measurements, which are saddled with conditions.
You're not going to end up with a classical mechanics theory
of Turing Machine halting, distinct from a quantum and relativistic one,
in which they can't decide between loops and strings ...
The subject matter admits iron-clad conclusions that get permanently
laid to rest.
The big mistake of logic is that it does not retain
semantics as fully integrated into its formal expressions.
That is how we get nutty things like the Principle of Explosion.
https://en.wikipedia.org/wiki/Principle_of_explosion
The POE is utterly sane.
What is nutty is doing what it describes: going around assuming falsehoods
to be true and then deriving nonsense from them with the intent of
adopting a belief in all those falsehoods and the nonsense that follows.
But that, ironically, perfectly describes your own research programme,
right down to the acronym:
Principle of Explosion -> POE -> Peter Olcott Experiment
A contradiction is a piece of foreign material in a formal system. It
is nonsensical to bring it in, and assert it as a truth; it makes no
sense to do so. Once you do, it creates contagion.
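Spelled out, the contagion is just the standard textbook derivation
behind the article linked above; the propositional letters P and Q are
generic placeholders, nothing specific to this thread:

\begin{align*}
1.\;& P \land \lnot P  && \text{(the contradiction, taken as true)} \\
2.\;& P                && \text{(1, conjunction elimination)} \\
3.\;& \lnot P          && \text{(1, conjunction elimination)} \\
4.\;& P \lor Q         && \text{(2, disjunction introduction; Q arbitrary)} \\
5.\;& Q                && \text{(3, 4, disjunctive syllogism)}
\end{align*}

Since Q was arbitrary, one accepted contradiction lets you derive every
proposition; that is the contagion.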
I believe that POE is closely linked to the principle we know
in the systems side of computer science: "one bad bit stops the show".
If you interfere with a correct calculation program by flipping a bit,
all bets are off.
Another face of POE in computing is GIGO: garbage in, garbage out.
Assuming a falsehood to be true is garbage in; the bogus things
you can then prove are garbage out.
The /reductio ad absurdum/ technique usefully makes a controlled use of
a contradiction. We introduce a contradiction and then derive from it
some other contradictions using the same logical tools that we normally
use for deriving truths from truths. We do that with the specific goal
of arriving at a proposition that we otherwise already know to be false.
At that point we drop regarding as true the entire chain, all the way
back to the initial wrong assumption.
The benefit is that the contradiction being initially assumed is not
/obviously/ a contradiction, but when we show that it derives an
/obvious/ contradiction, we readily see that it is so.
(Note that the diagonal halting proofs do not rely on /reductio ad
absurdum/ whatsoever. They directly show that no decider can be total,
without assuming anything about it.)
Far far better to not let garbage in.
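Written out, the direct form of that argument is a two-case analysis
over an arbitrary candidate decider H and the program D constructed
from it (the hypothetical H and D from the sketch earlier in the
thread, not anyone's actual decider):

\[
H(D) = 1 \;\Rightarrow\; D \text{ loops forever, so the verdict is wrong;}
\qquad
H(D) = 0 \;\Rightarrow\; D \text{ halts, so the verdict is wrong.}
\]

Both possible outputs are wrong, so this particular H is not a correct
halt decider; since H was arbitrary, no total, correct halt decider
exists, and no assumption ever had to be discharged by contradiction.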
On 2025-09-26, olcott <polcott333@gmail.com> wrote:
On 9/26/2025 3:00 PM, André G. Isaak wrote:
On 2025-09-26 13:49, olcott wrote:
*The conventional halting problem question is this*
Does there exist a single halt decider that
can correctly report the halt status of the
behavior of a directly executed machine on
the basis of this machine's machine description.
*The conventional halting problem proof question is this*
What correct halt status value can be returned
when the input to a halt decider actually does
the opposite of whatever value is returned?
These above conventional views are proven.
Those are questions. You can't prove a question. You prove statements.
And neither of those are conventional. You can't make up your own
formulations and then declare them to be conventional.
André
Any statement or question that is semantically
equivalent to another can be replaced by this
other expression of language while retaining
the same essential meaning.
Lofty words there, Aristotle!
Too bad HHH(DD) has a different meaning depending on where it is placed
and who is evaluating it, plus whatever you need it to mean for whatever
you are saying.
On 26.09.2025 at 17:56, olcott wrote:
On 9/26/2025 10:44 AM, Bonita Montero wrote:
On 25.09.2025 at 22:18, Chris M. Thomasson wrote:
On 9/25/2025 7:21 AM, Bonita Montero wrote:
On 25.09.2025 at 15:56, olcott wrote:
<snip>
Do something more meaningful with your life than discussing
the same detail for years. I find that absolutely insane.
I hope he listens to you.
<3
Two other PhD computer scientists agree with me.
Are they also crazy?
https://www.cs.toronto.edu/~hehner/PHP.pdf
https://www.complang.tuwien.ac.at/anton/euroforth/ef17/papers/stoddart.pdf
A contradiction is a piece of foreign material in a formal system. It
is nonsensical to bring it in, and assert it as a truth; it makes no
sense to do so. Once you do, it creates contagion.
The entailment relations of paraconsistent logics are propositionally
weaker than classical logic; that is, they deem fewer propositional inferences valid. The point is that a paraconsistent logic can never
be a propositional extension of classical logic, that is,
propositionally validate every entailment that classical logic
does. In some sense, then, paraconsistent logic is more conservative
or cautious than classical logic. It is due to such conservativeness
that paraconsistent languages can be more expressive than their
classical counterparts including the hierarchy of metalanguages due to
Alfred Tarski and others.
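One concrete illustration of that conservativeness, using Priest's
three-valued logic LP as an example of a paraconsistent logic (LP is
not named above; it is simply a convenient instance): assign P the
value both-true-and-false and Q the value false. Then

\[
P \land \lnot P \text{ is designated while } Q \text{ is not, so }
P \land \lnot P \nvDash_{\mathrm{LP}} Q .
\]

The classically valid inference that fails under this assignment is
disjunctive syllogism: ¬P and P ∨ Q are both designated, yet Q is not,
so the explosion derivation cannot go through. Giving up inferences
like that one is precisely the sense in which the entailment relation
is propositionally weaker.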