Does there exist a single halt decider that can
compute the mapping from its finite string input(s)
to an accept or reject value on the basis of the
semantic halting property specified by this/these
finite string input(s) for all inputs?
*Defines a different result as shown below*
From just my own two sentences (a) and (b), five LLM
systems figured out how to correctly decide the halting
problem's counterexample input.
They all figured out the recursive-simulation non-halting
behavior pattern on their own.
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
  (a) Detects a non-terminating behavior pattern:
      abort simulation and return 0.
  (b) Simulated input reaches its simulated "return" statement:
      return 1.

typedef int (*ptr)();
int HHH(ptr P);

int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

What value should HHH(DD) correctly return?
</Input to LLM systems>
*Here are the best three*
https://claude.ai/share/da9e56ba-f4e9-45ee-9f2c-dc5ffe10f00c
https://chatgpt.com/share/68939ee5-e2f8-8011-837d-438fe8e98b9c
https://grok.com/share/c2hhcmQtMg%3D%3D_810120bb-5ab5-4bf8-af21-eedd0f09e141
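For readers who want to experiment, here is a minimal, runnable C sketch of the scenario. It is a stand-in of my own construction, NOT olcott's actual HHH (which simulates x86 machine code): a global recursion-depth counter plays the role of "detects a non-terminating behavior pattern", and a direct call plays the role of simulation.

#include <stdio.h>

typedef int (*ptr)();

static int depth = 0;      /* hypothetical stand-in for pattern detection */

int HHH(ptr P)
{
    if (++depth > 1)       /* nested HHH call on its own input: treat as */
        return 0;          /* the recursive-simulation pattern, rule (a) */
    P();                   /* "simulate" the input by calling it directly */
    return 1;              /* the input returned: rule (b)               */
}

int DD()
{
    int Halt_Status = HHH(DD);
    if (Halt_Status)
        HERE: goto HERE;
    return Halt_Status;
}

int main(void)
{
    /* Under this stand-in the inner HHH(DD) returns 0; the outer call
       then sees DD() return and reports 1. */
    printf("HHH(DD) = %d\n", HHH(DD));
    return 0;
}

Which of the two answers (the inner 0 or the outer 1) is "the" correct value of HHH(DD) is exactly what the rest of the thread disputes.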
On 25.09.2025 at 15:56, olcott wrote:
<snip>
Do something more meaningful with your life than discussing
the same detail for years. I find that absolutely insane.
The semantic properties of finite strings are
the key aspect of the halting problem that no
one has ever properly addressed.
On 25.09.2025 at 15:56, olcott wrote:
<snip>
Do something more meaningful with your life than discussing
the same detail for years. I find that absolutely insane.
The halting problem in AI refers to the challenge of determining whether a given
program will finish running or continue indefinitely. It is significant because
if AI systems cannot solve this problem, they may struggle to achieve true
artificial general intelligence and ensure safety in their operations.

Understanding the Halting Problem in AI

What is the Halting Problem?
The halting problem is a fundamental concept in computer science. It asks
whether it is possible to determine, given a description of a program and an
input, if the program will finish running or continue indefinitely. This problem
was first proposed by Alan Turing in 1936 and is known to be undecidable, meaning
no general algorithm can solve it for all possible program-input pairs.

Implications for Artificial Intelligence
The halting problem has significant implications for AI development:

Limitations on AI Reasoning: If AI systems cannot solve the halting problem,
they may struggle to make reliable decisions or reason about their actions.
This limitation affects the creation of truly autonomous AI systems.

Safety Concerns: The inability to determine if a program will halt raises
safety issues. If AI cannot guarantee that it will not enter an infinite loop,
it may pose risks to users and the environment.

Approaches to Address the Halting Problem
While the halting problem is undecidable in general, there are methods to manage
it in specific cases:

Program Tracing: This technique involves monitoring a program's execution to
identify if it enters a repeating state, indicating a potential infinite loop.

Static Analysis: This method examines the code without executing it to predict
whether it will halt. Automated tools can assist in this analysis.

Model Checking: This involves simulating a program in a controlled environment
to track its states. If a state is revisited, it may indicate an infinite loop.

These methods can help mitigate the challenges posed by the halting problem, but
they do not provide a universal solution.

Sources: autoblocks.ai, Wikipedia
[end quoted "search assist"]
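The "state revisited" idea above can be made concrete. Below is a small self-contained C example (my own toy construction, not from the quoted text): for a deterministic transition function over a finite state space, halting IS decidable by recording visited states. The transition function step() is hypothetical.

#include <stdio.h>

#define HALT  0u
#define SPACE 16u                  /* finite state space: 0..15 */

static unsigned step(unsigned s)   /* hypothetical transition function */
{
    return (s % 3u == 0u) ? s : s - 1u;   /* multiples of 3 self-loop */
}

static int halts(unsigned s)
{
    unsigned seen[SPACE] = {0};
    while (s != HALT) {
        if (seen[s])
            return 0;              /* state revisited: provably loops */
        seen[s] = 1u;
        s = step(s);
    }
    return 1;                      /* reached the designated halt state */
}

int main(void)
{
    printf("halts(2) = %d\n", halts(2));   /* 2 -> 1 -> 0: halts */
    printf("halts(7) = %d\n", halts(7));   /* 7 -> 6 -> 6: loops */
    return 0;
}

This works only because the state space is finite and the transitions are deterministic; it does not contradict undecidability for Turing machines, whose state (tape contents) is unbounded.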
On 9/25/2025 7:21 AM, Bonita Montero wrote:
On 25.09.2025 at 15:56, olcott wrote:
<snip>
Do something more meaningful with your life than discussing
the same detail for years. I find that absolutely insane.
I hope he listens to you.
On 25.09.2025 at 22:18, Chris M. Thomasson wrote:
<snip>
I hope he listens to you.
<3
On 9/26/2025 10:44 AM, Bonita Montero wrote:
<snip>
<3
Two other PhD computer scientists agree with me.
https://www.cs.toronto.edu/~hehner/PHP.pdf
https://www.complang.tuwien.ac.at/anton/euroforth/ef17/papers/stoddart.pdf
On 26/09/2025 16:56, olcott wrote:
<snip>
Two other PhD computer scientists agree with me.
That's an attempt at an appeal to authority, but it isn't a convincing argument. There must be many /thousands/ of Comp Sci PhDs who've studied
the Halting Problem (for the 10 minutes it takes to drink a cup of
coffee while they run the proof through their minds) and who have no
problem with it whatsoever.
On 9/26/2025 12:05 PM, Richard Heathfield wrote:
<snip>
And of course you can dismiss whatever they say
without looking at a single word, because majority
consensus has never been shown to be less than
totally infallible.
The economist J.K. Galbraith once wrote, "Faced with a choice
between changing one's mind and proving there is no need to do
so, almost everyone gets busy with the proof."
Leo Tolstoy was even bolder: "The most difficult subjects can be
explained to the most slow-witted man if he has not formed any
idea of them already; but the simplest thing cannot be made clear
to the most intelligent man if he is firmly persuaded that he
knows already, without a shadow of doubt, what is laid before him."
What's going on here? Why don't facts change our minds?
And why
would someone continue to believe a false or inaccurate idea
anyway?
On 26/09/2025 18:17, olcott wrote:
<snip>
And of course you can dismiss whatever they say
without looking at a single word, because majority
consensus has never been shown to be less than
totally infallible.
That isn't what I said. I said that for every PhD you can appeal to who doesn't understand the proof, there will be thousands who do understand
the proof.
The economist J.K. Galbraith once wrote, "Faced with a choice between
changing one's mind and proving there is no need to do so, almost
everyone gets busy with the proof."
We don't even have to do that, because there is no need to change our
minds and the proof is already written.
Contrary to what you appear to believe, a proof doesn't mean someone got
it wrong. It means someone proved they're right.
Leo Tolstoy was even bolder: "The most difficult subjects can be
explained to the most slow-witted man if he has not formed any idea of
them already; but the simplest thing cannot be made clear to the most
intelligent man if he is firmly persuaded that he knows already,
without a shadow of doubt, what is laid before him."
You are the proof. The Halting Problem is remarkably simple, but as a
self-identified genius you are so sure it's mistaken that it has taken people 20 years to persuade you that DD proves HHH is not a halting
decider for DD, even though it's never *once* got the answer right, and having briefly accepted it, you have already returned to your overturned bowl; I suppose 20 years is a hard habit to break.
What's going on here? Why don't facts change our minds?
Like "DD halts", you mean?
And why would someone continue to believe a false or inaccurate idea
anyway?
Because you're so full of it you can't get rid of it?
How do such behaviors serve us?
They don't. You have pissed away 20 years.
On 9/26/2025 1:01 PM, Richard Heathfield wrote:
<snip>
That isn't what I said. I said that for every PhD you can
appeal to who doesn't understand the proof, there will be
thousands who do understand the proof.
Showing that two PhD computer scientists agree
with my position makes it quite implausible
that I am a mere crackpot.
The economist J.K. Galbraith once wrote, "Faced with a choice
between changing one's mind and proving there is no need to do
so, almost everyone gets busy with the proof."
We don't even have to do that, because there is no need to
change our minds and the proof is already written.
In other words, because you are sure that I must
be wrong, there is no need to pay close attention
to what I say.
*The halting problem proof question is this*
What correct halt status value can be returned
when the input to a halt decider actually does
the opposite of whatever value is returned?
On 2025-09-26, olcott <polcott333@gmail.com> wrote:
<snip>
And of course you can dismiss whatever they say
without looking at a single word, because majority
consensus has never been shown to be less than
totally infallible.
Consensus in mathematics /is/ pretty much infallible.
Mathematicians are a very careful bunch; they generally know when they
have a conjecture and when they have a proved theorem.
Some of their proofs go for hundreds of pages. Peer reviews of such
proofs are able to find flaws.
You are not going to find a flaw in a massively accepted, old result
that, in its most succinct presentations, takes about a page.
You might as well look for a dry spot on a T-shirt that was loaded
with rocks and is sitting at the bottom of the ocean.
The economist J.K. Galbraith once wrote, "Faced with a choice between
changing one's mind and proving there is no need to do so, almost
everyone gets busy with the proof."
That's a thought-terminating cliche, just like, oh, the idea that the
more someone protests an accusation, the more guilty he must be.
As if someone innocent would not protest?
Are you saying that the consensus is /never/ right? Everyone
whosoever has a contrary opinion to a mathematical consensus is right?
Merely the possession of a contrary opinion is evidence of having
outwitted everyone?
What's going on here? Why don't facts change our minds? And why would
You've not presented any facts, see.
And your approach to a problem
in logic is to try to redefine it in some handwavy "extralogical" way
and then simply insist that anyone having to do with the original
problem should drop that and make the replacement one their agenda.
You are not able to earnestly engage with the subject matter in its
proper form.
*The conventional halting problem question is this*
Does there exist a single halt decider that
can correctly report the halt status of the
behavior of a directly executed machine on
the basis of that machine's machine description?
*The conventional halting problem proof question is this*
What correct halt status value can be returned
when the input to a halt decider actually does
the opposite of whatever value is returned?
These above conventional views are proven.
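For concreteness, here is a hedged C sketch of the diagonal construction that the proof question describes, in the same style as the DD/HHH listing earlier in the thread. H is a hypothetical stand-in for the assumed total halt decider; any fixed return value suffices for the illustration, because D is built to do the opposite of whatever H returns. No real H exists, which is the point of the proof.

#include <stdio.h>

typedef int (*ptr)();

static int H(ptr P)        /* hypothetical decider: 1 = halts, 0 = loops */
{
    (void)P;
    return 0;              /* here H claims "D does not halt" */
}

int D()
{
    if (H(D))              /* H says "halts" -> D loops forever */
        for (;;)
            ;
    return 0;              /* H says "loops" -> D halts */
}

int main(void)
{
    D();                   /* returns at once, contradicting H's answer */
    printf("H(D) = %d, yet D() halted.\n", H(D));
    return 0;
}

Swapping H's stub to return 1 makes D loop forever instead, so either fixed answer is contradicted by D's behavior.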
On 26/09/2025 19:10, olcott wrote:
[SNIP]
On 9/26/2025 2:28 PM, Kaz Kylheku wrote:
<snip>
Consensus in mathematics /is/ pretty much infallible.
That is like pretty much sterile.
Generally very reliable seems apt.
Math and logic people will hold to views that
are philosophically primarily because they view
knowledge in their field to be pretty much infallible.
The big mistake of logic is that it does not retain
semantics as fully integrated into its formal expressions.
That is how we get nutty things like the Principle of Explosion.
https://en.wikipedia.org/wiki/Principle_of_explosion
On 2025-09-26 13:49, olcott wrote:
<snip>
These above conventional views are proven.
Those are questions. You can't prove a question. You prove statements.
And neither of those are conventional. You can't make up your own formulations and then declare them to be conventional.
André
On 9/26/2025 3:00 PM, André G. Isaak wrote:
<snip>
Those are questions. You can't prove a question. You prove statements.
And neither of those are conventional. You can't make up your own
formulations and then declare them to be conventional.
Any statement or question that is semantically
equivalent to another can be replaced by this
other expression of language while retaining
the same essential meaning.
On 2025-09-26, olcott <polcott333@gmail.com> wrote:
<snip>
Consensus in mathematics /is/ pretty much infallible.
That is like pretty much sterile.
Sometimes things are sterile and that is good. Like your surgeon's
gloves, or the interior of your next can of beans, and such.
Generally very reliable seems apt.
You don't even know the beginning of it.
Math and logic people will hold to views that
are philosophically primarily because they view
knowledge in their field to be pretty much infallible.
Formal systems are artificial inventions evolving from their axioms.
While we can't say that we know everything about a system just
because we invented its axioms, we know when we have captured an
air-tight truth.
It is not a situation in which we are relying on hypotheses,
observations and measurements, which are saddled with conditions.
You're not going to end up with a classical mechanics theory
of Turing Machine halting, distinct from a quantum and relativistic one,
in which they can't decide between loops and strings ...
The subject matter admits iron-clad conclusions that get permanently
laid to rest.
The big mistake of logic is that it does not retain
semantics as fully integrated into its formal expressions.
That is how we get nutty things like the Principle of Explosion.
https://en.wikipedia.org/wiki/Principle_of_explosion
The POE is utterly sane.
What is nutty is doing what it describes: going around assuming
falsehoods to be true and then deriving nonsense from them with the
intent of adopting a belief in all those falsehoods and the nonsense
that follows.
But that, ironically, perfectly describes your own research programme,
right down to the acronym:
Principle of Explosion -> POE -> Peter Olcott Experiment
A contradiction is a piece of foreign material in a formal system. It
is nonsensical to bring it in, and assert it as a truth; it makes no
sense to do so. Once you do, it creates contagion.
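For concreteness, the textbook derivation of explosion from a contradiction runs as follows (a sketch of the standard argument; relevance and paraconsistent logics block it by rejecting disjunctive syllogism):

\begin{align*}
1. &\ p          && \text{assumption} \\
2. &\ \lnot p    && \text{assumption} \\
3. &\ p \lor q   && \text{from 1, by disjunction introduction} \\
4. &\ q          && \text{from 2 and 3, by disjunctive syllogism}
\end{align*}

Since q is arbitrary, one contradiction lets you "prove" anything, which is the contagion described above.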
I believe that POE is closely linked to the principle we know
in the systems side of computer science: "one bad bit stops the show".
If you interfere with a correct calculation program by flipping a bit,
all bets are off.
Another face of POE in computing is GIGO: garbage in, garbage out.
Assuming a falsehood to be true is garbage in; the bogus things
you can then prove are garbage out.
The /reductio ad absurdum/ technique usefully makes a controlled use of
a contradiction. We introduce a contradiction and then derive from it
some other contradictions using the same logical tools that we normally
use for deriving truths from truths. We do that with the specific goal
of arriving at a proposition that we otherwise already know to be false.
At that point we drop regarding as true the entire chain, all the way
back to the initial wrong assumption.
The benefit is that the contradiction being initially assumed is not
/obviously/ a contradiction, but when we show that it derives an
/obvious/ contradiction, we readily see that it is so.
(Note that the diagonal halting proofs do not rely on /reductio ad
absurdum/ whatsoever. They directly show that no decider can be total,
without assuming anything about it.)
Far, far better to not let garbage in.
On 2025-09-26, olcott <polcott333@gmail.com> wrote:
<snip>
Any statement or question that is semantically
equivalent to another can be replaced by this
other expression of language while retaining
the same essential meaning.
Lofty words there, Aristotle!
Too bad HHH(DD) has a different meaning depending on where it is placed
and who is evaluating it, plus whatever you need it to mean for whatever
you are saying.
On 9/26/2025 3:35 PM, Kaz Kylheku wrote:
<snip>
Consensus in mathematics /is/ pretty much infallible.
That is like pretty much sterile.
Sometimes things are sterile and that is good. Like your surgeon's
gloves, or the interior of your next can of beans, and such.
Pretty much infallible is like pretty much the
one and only creator of the Heavens and Earth.
Generally very reliable seems apt.
You don't even know the beginning of it.
That I start from a philosophical foundation
different from the rules that you learned
by rote does not mean that I am incorrect.
Math and logic people will hold to views that
are philosophically primarily because they view
knowledge in their field to be pretty much infallible.
Formal systems are artificial inventions evolving from their axioms.
While we can't say that we know everything about a system just
because we invented its axioms, we know when we have captured an
air-tight truth.
That is sometimes not airtight at all.
It is not a situation in which we are relying on hypotheses,
observations and measurements, which are saddled with conditions.
Computer science guys do not tend to exhaustively check every
detail about every nuance of everything that they were taught,
over and over, looking for the tiniest inconsistency.
Philosophers of computer science do this.
You're not going to end up with a classical mechanics theory
of Turing Machine halting, distinct from a quantum and relativistic one,
in which they can't decide between loops and strings ...
The subject matter admits iron-clad conclusions that get permanently
laid to rest.
The big mistake of logic is that it does not retain
semantics as fully integrated into its formal expressions.
That is how we get nutty things like the Principle of Explosion.
https://en.wikipedia.org/wiki/Principle_of_explosion
The POE is utterly sane.
That is just your indoctrination talking.
Try the same thing in relevance logic.
A contradiction is a piece of foreign material in a formal system. It
is nonsensical to bring it in, and assert it as a truth; it makes no
sense to do so. Once you do, it creates contagion.
Like dog shit in a birthday cake.
<snip>
*The conventional halting problem proof question is this*
For a halt decider H, what correct halt status can
be returned for an input D that does the opposite
of whatever value is returned?
On 26.09.2025 at 17:56, olcott wrote:
<snip>
Two other PhD computer scientists agree with me.
Are they also crazy?
https://www.cs.toronto.edu/~hehner/PHP.pdf
https://www.complang.tuwien.ac.at/anton/euroforth/ef17/papers/stoddart.pdf
A contradiction is a piece of foreign material in a formal system. It
is nonsensical to bring it in, and assert it as a truth; it makes no
sense to do so. Once you do, it creates contagion.
The entailment relations of paraconsistent logics are propositionally
weaker than classical logic; that is, they deem fewer propositional
inferences valid. The point is that a paraconsistent logic can never
be a propositional extension of classical logic, that is, it can never
propositionally validate every entailment that classical logic does.
In some sense, then, paraconsistent logic is more conservative
or cautious than classical logic. It is due to such conservativeness
that paraconsistent languages can be more expressive than their
classical counterparts, including the hierarchy of metalanguages due to
Alfred Tarski and others.
On 2025-09-26 13:49, olcott wrote:
[snipped]
*The conventional halting problem proof question is this*
What correct halt status value can be returned
when the input to a halt decider actually does
the opposite of whatever value is returned?
These above conventional views are proven.
Those are questions. You can't prove a question. You prove statements.
And neither of those are conventional. You can't make up your own formulations and then declare them to be conventional.
The message body is Copyright (C) 2025 Tristan Wibberley except
citations and quotations noted. All Rights Reserved except as described
in the sig.
On 26/09/2025 21:00, André G. Isaak wrote:
On 2025-09-26 13:49, olcott wrote:
[snipped]
*The conventional halting problem proof question is this*
What correct halt status value can be returned
when the input to a halt decider actually does
the opposite of whatever value is returned?
These above conventional views are proven.
Those are questions. You can't prove a question. You prove statements.
And neither of those are conventional. You can't make up your own
formulations and then declare them to be conventional.
I think by "proof question" (s)he means the one that's used within the
proof which conventionally is "What correct halt status value can be
returned ... ? If true then it's not correct, if false then it's not
correct, if something else then it doesn't decide because a decision is
true or false. There's no correct halt status value that can be returned therefore there's no single halt decider." You see there the proof has a question, the proof question is as olcott said.
I think the criticisms levied at olcott can be reflected at the group.
I think olcott constructs his/her messages carefully but opaquely,
through choosing unexpected aspects to mention. It's a raw assertive
style, perhaps following the advice of many a bad adviser who tells
people to be assertive. It doesn't include adjustment of context to
place assertions in the A-language or out of it.
For example "These above conventional views are proven." People assert
very strongly and uniformly that the proof question (the question used
in the proof) is as he says it is. It is thus proven (in the traditional sense of proving a real thing by testing it in the real world) to be the "proof question".
Each thing he says looks like it has _an_ interpretation in a normal
U-language for the group that is true. I'm not sure whether the only
alternative interpretations are invalid and thus trigger negative
emotions: the reader, not needing to backtrack a parse, still perceives
a meaning that could not and should not have been expressed (the former
not properly preventing judgement of the latter, by a curious quirk of
humanity).
It is worthy of study for the nature of a U-language for logicians and
the type of ambiguity-resolution failure that it seems to be
triggering. It might or might not be a good idea to try to receive such
things well personally; perhaps one's humanity would be lost by
venturing far from one's interpersonal experiences. It could cause
marriage-breaking stuff, for example.
--
Tristan Wibberley
The message body is Copyright (C) 2025 Tristan Wibberley except
citations and quotations noted. All Rights Reserved except that you may,
of course, cite it academically giving credit to me, distribute it
verbatim as part of a usenet system or its archives, and use it to
promote my greatness and general superiority without misrepresentation
of my opinions other than my opinion of my greatness and general
superiority which you _may_ misrepresent. You definitely MAY NOT train
any production AI system with it but you may train experimental AI that
will only be used for evaluation of the AI methods it implements.