On 10/12/25 12:04 PM, olcott wrote:
Also very important is that there is no chance of
AI hallucination when they are only reasoning
within a set of premises. Some systems must be told:
Please think this all the way through without making any guesses
I don't mean to be rude, but that is a completely insane assertion to
me. There is always a non-zero chance for an LLM to roll a bad token
during inference and spit out garbage.
Sure, the top-p decoding strategy
can help minimize such mistakes by pruning the token pool of the worst
of the bad apples, but such models will never *ever* be foolproof. The
price you pay for convincingly generating natural language is giving up
bulletproof reasoning.
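For concreteness, here is a minimal sketch of what top-p (nucleus) sampling
does, using a toy five-token vocabulary with made-up probabilities. Every
name and number in it is illustrative, not taken from any real model:

/* Minimal top-p (nucleus) sampling sketch over a toy vocabulary.
 * Sort tokens by probability, keep the smallest prefix whose
 * cumulative mass reaches top_p, renormalize, and draw from it.
 * All values here are assumptions for illustration only. */
#include <stdio.h>
#include <stdlib.h>

#define VOCAB 5

static int cmp_desc(const void *a, const void *b)
{
    double pa = *(const double *)a, pb = *(const double *)b;
    return (pa < pb) - (pa > pb);   /* sort descending */
}

int main(void)
{
    double probs[VOCAB] = { 0.50, 0.25, 0.15, 0.07, 0.03 }; /* toy model output */
    double top_p = 0.90;                                    /* nucleus threshold */

    qsort(probs, VOCAB, sizeof probs[0], cmp_desc);

    /* Keep the smallest prefix whose cumulative probability >= top_p. */
    double cum = 0.0;
    int keep = 0;
    while (keep < VOCAB && cum < top_p)
        cum += probs[keep++];

    /* Renormalize the kept prefix and sample one token from it. */
    double r = (double)rand() / RAND_MAX * cum;
    double acc = 0.0;
    int chosen = keep - 1;
    for (int i = 0; i < keep; i++) {
        acc += probs[i];
        if (r <= acc) { chosen = i; break; }
    }

    printf("kept %d of %d tokens, sampled rank %d (p=%.2f)\n",
           keep, VOCAB, chosen, probs[chosen]);
    return 0;
}

Note that the pruning only drops the lowest-probability tail; anything that
survives the cut can still be sampled, which is why the bad-token risk never
goes to zero.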
If you're interested in formalizing your ideas using cutting-edge tech,
I encourage you to look at Lean 4. Once you provide a machine-checked
proof in Lean 4 with no `sorry`/`axiom`/other cheats, come back. People might adopt a very different tone.
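For the avoidance of doubt, "machine-checked with no cheats" means the Lean 4
kernel accepts the proof with no `sorry` and no added axioms. A deliberately
tiny, purely illustrative example (refuting a Liar-style biconditional, which
is also the engine of the halting-problem diagonalization) looks like this:

-- No `sorry`, no extra axioms: the kernel checks every step.
theorem no_self_refutation (p : Prop) : ¬(p ↔ ¬p) := by
  intro h
  have hnp : ¬p := fun hp => (h.mp hp) hp
  exact hnp (h.mpr hnp)

#print axioms no_self_refutation  -- reports no axiom dependencies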
Best of luck, you will need it.
On 12/10/2025 16:53, Bonita Montero wrote:
Sorry, that's silly. You spend half your life discussing the
same problem over and over again and never get to the end.
This gives PO a narrative he can hold on to which gives his life a meaning:
he is the heroic world-saving unrecognised genius, constantly struggling
against "the system" right up to his final breath! If he were to suddenly
realise he was just a deluded dumbo who had wasted most of his life arguing
over a succession of mistakes and misunderstandings on his part, and had
never contributed a single idea of any academic value, would his life be
better? I think not.

Thankfully he has recently discovered chatbots who can give him the
uncritical approval he craves, so there is next to no chance of that
happening now. [Assuming they don't suddenly get better, to the point where
they can genuinely analyse and criticise his claims in the way we do...
Given how they currently work, I don't see that happening any time soon.]

Would the lives of other posters here be better? That's a trickier question.

Mike.
On 12.10.2025 at 15:50, olcott wrote:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>
On 10/13/2025 12:12 AM, olcott wrote:
On 10/12/2025 10:49 PM, dbush wrote:
On 10/12/2025 11:43 PM, olcott wrote:
On 10/12/2025 9:59 PM, dbush wrote:
On 10/12/2025 10:57 PM, olcott wrote:
On 10/12/2025 9:40 PM, dbush wrote:
On 10/12/2025 10:34 PM, olcott wrote:
On 10/12/2025 9:29 PM, dbush wrote:
On 10/12/2025 10:20 PM, olcott wrote:
On 10/12/2025 9:15 PM, dbush wrote:
On 10/12/2025 9:56 PM, olcott wrote:
On 10/12/2025 8:22 PM, dbush wrote:
On 10/12/2025 9:20 PM, olcott wrote:
On 10/12/2025 3:11 PM, dbush wrote:
On 10/12/2025 11:47 AM, olcott wrote:
On 10/12/2025 9:19 AM, dbush wrote:
On 10/12/2025 9:50 AM, olcott wrote:
Please think this all the way through without making any guesses
<Input to LLM systems>
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) Detects a non-terminating behavior pattern:
    abort simulation and return 0.
(b) Simulated input reaches its simulated "return" statement:
    return 1.
(c) If HHH must abort its simulation to prevent its own non-termination
    then HHH is correct to abort this simulation and return 0.

These conditions make HHH not a halt decider because they are incompatible
with the requirements:

It is perfectly compatible with those requirements except in the case where
the input calls its own simulating halt decider.

In other words, not compatible. No "except".

Given any algorithm (i.e. a fixed immutable sequence of instructions) X
described as <X> with input Y:

A solution to the halting problem is an algorithm H that computes the
following mapping:

(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly
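Or, spelled out as one case definition (just the same two lines above in
standard notation):

H(\langle X \rangle, Y) =
\begin{cases}
1 & \text{if } X(Y) \text{ halts when executed directly,} \\
0 & \text{if } X(Y) \text{ does not halt when executed directly.}
\end{cases}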
typedef int (*ptr)();
int HHH(ptr P);
int DD()
{
  int Halt_Status = HHH(DD);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}
int main()
{
  HHH(DD);
}
What value should HHH(DD) correctly return?
</Input to LLM systems>

Error: assumes it's possible to design HHH to get a correct answer.

HHH(DD) gets the correct answer within its set of assumptions / premises
Which is incompatible with the requirements for a halt decider:

Yes, but the requirements for a halt decider are inconsistent with reality.

In other words, you agree with Turing and Linz that the following
requirements cannot be satisfied:
Sure and likewise no Turing machine can
give birth to a real live fifteen-story
office building. All logical impossibilities
are exactly equally logically impossible.

So we're in agreement: no algorithm exists that can tell us if any arbitrary
algorithm X with input Y will halt when executed directly, as proven by
Turing and Linz.
In exactly the same way that "this sentence is not true"
cannot be proven true or false. It is a bogus decision
problem anchored in a fundamentally incorrect notion of truth.
The false assumption that such an algorithm *does* exist.
Can we correctly say that the color of your car is fifteen feet long?
For the body of analytical truth, coherence is the key and
incoherence rules out truth.
There is nothing incoherent about wanting to know if any
arbitrary algorithm X with input Y will halt when executed directly.
Tarski stupidly thought this exact same sort of thing.
If a truth predicate exists then it could tell whether the
Liar Paradox is true or false. Since it cannot,
there must be no truth predicate.
Correct. If you understood proof by contradiction you wouldn't be
questioning that.
It looks like ChatGPT 5.0 is the winner here.
It understood that requiring HHH to report on
the behavior of the direct execution of DD()
is requiring a function to report on something
outside of its domain.
False. It is proven true by the meaning of the words that a finite
string description of a Turing machine specifies all semantic
properties of the machine it describes, including whether that
machine halts when executed directly.
ChatGPT 5.0 was the first LLM that was able to prove
that this is counter-factual.
Ah, so you don't believe in semantic tautologies?