On 10/13/2025 12:12 AM, olcott wrote:
On 10/12/2025 10:49 PM, dbush wrote:
On 10/12/2025 11:43 PM, olcott wrote:
On 10/12/2025 9:59 PM, dbush wrote:
On 10/12/2025 10:57 PM, olcott wrote:
On 10/12/2025 9:40 PM, dbush wrote:
On 10/12/2025 10:34 PM, olcott wrote:
On 10/12/2025 9:29 PM, dbush wrote:
On 10/12/2025 10:20 PM, olcott wrote:
On 10/12/2025 9:15 PM, dbush wrote:
On 10/12/2025 9:56 PM, olcott wrote:
On 10/12/2025 8:22 PM, dbush wrote:
On 10/12/2025 9:20 PM, olcott wrote:
On 10/12/2025 3:11 PM, dbush wrote:
On 10/12/2025 11:47 AM, olcott wrote:
On 10/12/2025 9:19 AM, dbush wrote:
On 10/12/2025 9:50 AM, olcott wrote:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
These conditions make HHH not a halt decider because they are
incompatible with the requirements:
It is perfectly compatible with those requirements except in the case
where the input calls its own simulating halt decider.
In other words, not compatible.  No "except".
Given any algorithm (i.e. a fixed immutable sequence of instructions)
X described as <X> with input Y:

A solution to the halting problem is an algorithm H that computes the
following mapping:

(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
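A minimal, self-contained sketch of the case analysis that this question
turns on, assuming a hypothetical HHH_Stub that simply returns a fixed
value V; HHH_Stub, DD_Sketch, and V are illustrative names introduced
here and are not anything defined in the thread.

/* Hypothetical sketch only: HHH_Stub returns the fixed value V and is
   not an actual termination analyzer; DD_Sketch mirrors DD above. */
#include <stdio.h>

typedef int (*ptr)(void);

#define V 0   /* the value HHH_Stub reports for DD_Sketch; try 0 or 1 */

int HHH_Stub(ptr P) { (void)P; return V; }

int DD_Sketch(void)
{
  int Halt_Status = HHH_Stub(DD_Sketch);
  if (Halt_Status)
    for (;;) ;          /* stands in for HERE: goto HERE */
  return Halt_Status;
}

int main(void)
{
  /* Against the mapping quoted above: if V == 1, DD_Sketch() executed
     directly never returns, so 1 disagrees with the direct execution;
     if V == 0, DD_Sketch() executed directly returns, so 0 disagrees
     with it as well. */
  printf("HHH_Stub(DD_Sketch) reports %d\n", HHH_Stub(DD_Sketch));
  return 0;
}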
Error: assumes it's possible to design HHH to get a correct answer.
HHH(DD) gets the correct answer within its set of assumptions / premises
Which is incompatible with the requirements for a halt decider:
Yes, but the requirements for a halt decider are inconsistent with reality.
In other words, you agree with Turing and Linz that the following
requirements cannot be satisfied:
Sure, and likewise no Turing machine can
give birth to a real live fifteen-story
office building. All logical impossibilities
are exactly equally logically impossible.
So we're in agreement: no algorithm exists that can tell us
if any arbitrary algorithm X with input Y will halt when
executed directly, as proven by Turing and Linz.
In exactly the same way that: "this sentence is not true"
cannot be proven true or false. It is a bogus decision
problem anchored in
a fundamentally incorrect notion of truth.
The false assumption that such an algorithm *does* exist.
Can we correctly say that the color of your car is fifteen feet long?
For the body of analytical truth, coherence is the key and
incoherence rules out truth.
There is nothing incoherent about wanting to know if any
arbitrary algorithm X with input Y will halt when executed directly.
Tarski stupidly thought this exact same sort of thing.
If a truth predicate exists then it could tell whether the
Liar Paradox is true or false. Since it cannot, there must
be no truth predicate.
Correct.  If you understood proof by contradiction you wouldn't be
questioning that.
It looks like ChatGPT 5.0 is the winner here.
It understood that requiring HHH to report on
the behavior of the direct execution of DD()
is requiring a function to report on something
outside of its domain.
False.  It is proven true by the meaning of the words that a finite
string description of a Turing machine specifies all semantic
properties of the machine it describes, including whether that
machine halts when executed directly.
ChatGPT 5.0 was the first LLM to be able to prove
that this is counter-factual.
Ah, so you don't believe in semantic tautologies?