A Turing-machine decider is a Turing machine D that
computes a total function D : Σ* → {Accept, Reject},
where Σ* is the set of all finite strings over the
input alphabet. That is:
1. Totality: For every finite string input w ∈ Σ*,
D halts and outputs either Accept or Reject.
This is simplified to its barest essence across all models of computation:
All deciders essentially: Transform finite string
inputs by finite string transformation rules into
{Accept, Reject} values.
Anything that cannot be derived from actual finite string
inputs is not computable and outside the scope of computation.
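As a minimal illustration of the definition above, a total decider can be sketched as follows (the function name and the even-length property are illustrative assumptions, not anything from this thread):

```python
# Sketch of a total decider: a computable, total function from finite
# strings to {Accept, Reject}. This one accepts exactly the even-length
# strings -- a trivially decidable property, so totality is obvious.

def even_length_decider(w: str) -> str:
    """Halts on every finite input string and returns Accept or Reject."""
    return "Accept" if len(w) % 2 == 0 else "Reject"
```

Every input maps to exactly one of the two verdicts, which is all that "total function from Σ* to {Accept, Reject}" requires.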
On 12/30/25 11:21 PM, olcott wrote:
A Turing-machine decider is a Turing machine D that
computes a total function D : Σ* → {Accept, Reject},
where Σ* is the set of all finite strings over the
input alphabet. That is:
1. Totality: For every finite string input w ∈ Σ*,
D halts and outputs either Accept or Reject.
This is simplified to its barest essence across all models of computation:
All deciders essentially: Transform finite string
inputs by finite string transformation rules into
{Accept, Reject} values.
Anything that cannot be derived from actual finite string
inputs is not computable and outside the scope of computation.
But since the halting status of the machine that the finite string describes IS
derivable from that string, by just running that machine, or giving it
to the appropriate UTM, you are just showing that halting is a valid question.
It is also uncomputable, as has been proven.
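The asymmetry in "just running that machine" can be sketched as follows: halting can be confirmed by running the machine long enough, but a budget that runs out confirms nothing. `halts_within` and its line-count budget are illustrative assumptions, not an actual UTM:

```python
import sys

def halts_within(func, max_steps):
    """Run func under a line-count budget.

    True  -> func returned within the budget (halting is confirmed).
    False -> budget exhausted (non-halting is NOT confirmed, only unknown).
    """
    class _Budget(Exception):
        pass

    count = 0

    def tracer(frame, event, arg):
        nonlocal count
        if event == "line":
            count += 1
            if count > max_steps:
                raise _Budget  # abort the traced call when budget runs out
        return tracer

    sys.settrace(tracer)
    try:
        func()
        return True
    except _Budget:
        return False
    finally:
        sys.settrace(None)  # always restore normal execution
```

A `True` answer settles the question; a `False` answer only says the budget was too small, which is why running the machine gives a semi-decision procedure, not a decider.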
Your problem is you seem to not understand the requirement that a
decider needs to CORRECTLY compute the function it is supposed to be computing, because you just don't understand the nature of truth, and
think it can be just redefined.
As an analogy, your logic says a Persian cat can be entered into the Westminster Dog Show and win Best of Breed, just by saying it is a dog.
On 12/31/2025 6:28 AM, Richard Damon wrote:
On 12/30/25 11:21 PM, olcott wrote:
A Turing-machine decider is a Turing machine D that
computes a total function D : Σ* → {Accept, Reject},
where Σ* is the set of all finite strings over the
input alphabet. That is:
1. Totality: For every finite string input w ∈ Σ*,
D halts and outputs either Accept or Reject.
This is simplified to its barest essence across all models of computation:
All deciders essentially: Transform finite string
inputs by finite string transformation rules into
{Accept, Reject} values.
Anything that cannot be derived from actual finite string
inputs is not computable and outside the scope of computation.
But since the halting status of the machine that the finite string describes IS
derivable from that string, by just running that machine, or giving it
to the appropriate UTM, you are just showing that halting is a valid
question.
It is also uncomputable, as has been proven.
There are no finite string transformations that HHH(DD)
can apply to its input that derive the behavior of UTM(DD).
There are finite string transformations that HHH(DD)
can apply to its input that derive the behavior that
the input to HHH(DD) specifies.
No decider is ever accountable to report on any behavior
other than the actual behavior that its actual finite
string input actually specifies. When the halting problem
requires more than that it requires too much.
Your problem is you seem to not understand the requirement that a
decider needs to CORRECTLY compute the function it is supposed to be
computing, because you just don't understand the nature of truth, and
think it can be just redefined.
As an analogy, your logic says a Persian cat can be entered into the
Westminster Dog Show and win Best of Breed, just by saying it is a dog.
On 12/31/25 11:20 AM, olcott wrote:
On 12/31/2025 6:28 AM, Richard Damon wrote:
On 12/30/25 11:21 PM, olcott wrote:
A Turing-machine decider is a Turing machine D that
computes a total function D : Σ* → {Accept, Reject},
where Σ* is the set of all finite strings over the
input alphabet. That is:
1. Totality: For every finite string input w ∈ Σ*,
D halts and outputs either Accept or Reject.
This is simplified to its barest essence across all models of computation:
All deciders essentially: Transform finite string
inputs by finite string transformation rules into
{Accept, Reject} values.
Anything that cannot be derived from actual finite string
inputs is not computable and outside the scope of computation.
But since the halting status of the machine that the finite string describes IS
derivable from that string, by just running that machine, or giving
it to the appropriate UTM, you are just showing that halting is a
valid question.
It is also uncomputable, as has been proven.
There are no finite string transformations that HHH(DD)
can apply to its input that derive the behavior of UTM(DD).
But that is a different, and nonsensical, standard.
There is only ONE transform that HHH does, and it is just wrong.
You seem to forget that HHH is a specific decider while the criterion
needs to be objective.
The criterion is: does the mapping that this HHH computes match the
required one, which is what UTM(DD) shows.
The fact that HHH doesn't do that makes it wrong.
The fact that we can make a similar input from any possible decider
makes the problem uncomputable.
The fact you refuse to accept this makes you stupid.
There are finite string transformations that HHH(DD)
can apply to its input that derive the behavior that
the input to HHH(DD) specifies.
No, there is only ONE transform that it DOES apply.
But that does not specify the meaning of the string, as it was SUPPOSED
TO represent the behavior of the program DD.
Once you label your HHH as a Halt Decider, the semantics of its input
are specified, and NOT based on what it actually does, but on what it
claimed to be.
Now, part of your problem is you never actually formed the right input
string, as you never set up your program correctly, just showing your
stupidity and ignorance.
No decider is ever accountable to report on any behavior
other than the actual behavior that its actual finite
string input actually specifies. When the halting problem
requires more than that it requires too much.
But the actual behavior that its actual finite string represents *IS*
the behavior of the machine it describes, or you are just admitting you
started with a lie that DD calling HHH(DD) matches the program in the
proof, as that *IS* the meaning its passed string must represent.
All you are doing is admitting you are just a stupid liar.
It seems you just don't understand the concept of "Requirements" and
thus have major errors in your definition of things like "Truth".
Your problem is you seem to not understand the requirement that a
decider needs to CORRECTLY compute the function it is supposed to be
computing, because you just don't understand the nature of truth, and
think it can be just redefined.
As an analogy, your logic says a Persian cat can be entered into the
Westminster Dog Show and win Best of Breed, just by saying it is a dog.
On 12/31/2025 11:11 AM, Richard Damon wrote:
On 12/31/25 11:20 AM, olcott wrote:
On 12/31/2025 6:28 AM, Richard Damon wrote:
On 12/30/25 11:21 PM, olcott wrote:
A Turing-machine decider is a Turing machine D that
computes a total function D : Σ* → {Accept, Reject},
where Σ* is the set of all finite strings over the
input alphabet. That is:
1. Totality: For every finite string input w ∈ Σ*,
D halts and outputs either Accept or Reject.
This is simplified to its barest essence across all models of computation:
All deciders essentially: Transform finite string
inputs by finite string transformation rules into
{Accept, Reject} values.
Anything that cannot be derived from actual finite string
inputs is not computable and outside the scope of computation.
But since the halting status of the machine that the finite string describes IS
derivable from that string, by just running that machine, or giving
it to the appropriate UTM, you are just showing that halting is a
valid question.
It is also uncomputable, as has been proven.
There are no finite string transformations that HHH(DD)
can apply to its input that derive the behavior of UTM(DD).
But that is a different, and nonsensical, standard.
There is only ONE transform that HHH does, and it is just wrong.
You seem to forget that HHH is a specific decider while the criterion
needs to be objective.
Across ChatGPT, Claude AI, Gemini, and Grok, in
fifty different conversations, they all always
agreed that the halting problem counter-example
input is analogous to the Liar Paradox, thus
essentially the requirement of a correct answer
to an incorrect question.
I proved the HP input is the same as the Liar Paradox back in 2004
function LoopIfYouSayItHalts (bool YouSayItHalts):
    if YouSayItHalts then
        while true do {}
    else
        return false;
Does this program Halt?
(Your (YES or NO) answer is to be considered
translated to Boolean as the function's input
parameter)
Please ONLY PROVIDE CORRECT ANSWERS!
https://groups.google.com/g/sci.logic/c/Hs78nMN6QZE/m/ID2rxwo__yQJ
When you yourself say YES you are wrong.
When you yourself say NO you are wrong.
Therefore the halting problem counter example input
is a yes/no question lacking a correct yes/no answer.
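The self-referential structure described above (the counter-example input does the opposite of whatever the decider answers) can be sketched as follows. `make_contrary` and `claims_never_halts` are illustrative names showing the shape of the standard diagonal argument, not the HHH/DD code discussed in this thread:

```python
def make_contrary(decider):
    """Build a program that does the opposite of what `decider` predicts."""
    def DD():
        if decider(DD):     # decider predicts that DD halts...
            while True:     # ...so DD loops forever
                pass
        # decider predicts DD never halts, so DD halts immediately
    return DD

def claims_never_halts(prog):
    """A candidate decider that answers 'does not halt' for every input."""
    return False

DD = make_contrary(claims_never_halts)
DD()  # returns, i.e. halts -- so claims_never_halts was wrong about DD
```

Whatever fixed answer a candidate decider gives about its own contrary program, the program's actual behavior refutes it, which is why no single decider gets every input right.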
The criterion is: does the mapping that this HHH computes match the
required one, which is what UTM(DD) shows.
The fact that HHH doesn't do that makes it wrong.
The fact that we can make a similar input from any possible decider
makes the problem uncomputable.
The fact you refuse to accept this makes you stupid.
There are finite string transformations that HHH(DD)
can apply to its input that derive the behavior that
the input to HHH(DD) specifies.
No, there is only ONE transform that it DOES apply.
But that does not specify the meaning of the string, as it was
SUPPOSED TO represent the behavior of the program DD.
Once you label your HHH as a Halt Decider, the semantics of its input
are specified, and NOT based on what it actually does, but on what it
claimed to be.
Now, part of your problem is you never actually formed the right input
string, as you never set up your program correctly, just showing your
stupidity and ignorance.
No decider is ever accountable to report on any behavior
other than the actual behavior that its actual finite
string input actually specifies. When the halting problem
requires more than that it requires too much.
But the actual behavior that its actual finite string represents *IS*
the behavior of the machine it describes, or you are just admitting
you started with a lie that DD calling HHH(DD) matches the program in
the proof, as that *IS* the meaning its passed string must represent.
All you are doing is admitting you are just a stupid liar.
It seems you just don't understand the concept of "Requirements" and
thus have major errors in your definition of things like "Truth".
Your problem is you seem to not understand the requirement that a
decider needs to CORRECTLY compute the function it is supposed to be
computing, because you just don't understand the nature of truth,
and think it can be just redefined.
As an analogy, your logic says a Persian cat can be entered into the
Westminster Dog Show and win Best of Breed, just by saying it is a dog.
A Turing-machine decider is a Turing machine D that
computes a total function D : Σ* → {Accept, Reject},
where Σ* is the set of all finite strings over the
input alphabet. That is:
1. Totality: For every finite string input w ∈ Σ*,
D halts and outputs either Accept or Reject.
This is simplified to its barest essence across all models of computation:
All deciders essentially: Transform finite string
inputs by finite string transformation rules into
{Accept, Reject} values.
Anything that cannot be derived from actual finite string
inputs is not computable and outside the scope of computation.
On 12/30/2025 10:21 PM, olcott wrote:
A Turing-machine decider is a Turing machine D that
computes a total function D : Σ* → {Accept, Reject},
where Σ* is the set of all finite strings over the
input alphabet. That is:
1. Totality: For every finite string input w ∈ Σ*,
D halts and outputs either Accept or Reject.
This is simplified to its barest essence across all models of computation:
All deciders essentially: Transform finite string
inputs by finite string transformation rules into
{Accept, Reject} values.
Anything that cannot be derived from actual finite string
inputs is not computable and outside the scope of computation.
Can Carol correctly answer “no” to this (yes/no) question?
E C R Hehner. Objective and Subjective Specifications.
WST Workshop on Termination, Oxford, 2018 July 18.
See https://www.cs.toronto.edu/~hehner/OSS.pdf
People can pretend that Bob is being asked
Carol's question and on the basis of this
false assumption say that Carol's question
has a correct answer.
On 12/31/25 3:12 PM, olcott wrote:
On 12/30/2025 10:21 PM, olcott wrote:
A Turing-machine decider is a Turing machine D that
computes a total function D : Σ* → {Accept, Reject},
where Σ* is the set of all finite strings over the
input alphabet. That is:
1. Totality: For every finite string input w ∈ Σ*,
D halts and outputs either Accept or Reject.
This is simplified to its barest essence across all models of computation:
All deciders essentially: Transform finite string
inputs by finite string transformation rules into
{Accept, Reject} values.
Anything that cannot be derived from actual finite string
inputs is not computable and outside the scope of computation.
Can Carol correctly answer “no” to this (yes/no) question?
E C R Hehner. Objective and Subjective Specifications.
WST Workshop on Termination, Oxford, 2018 July 18.
See https://www.cs.toronto.edu/~hehner/OSS.pdf
Which isn't a valid question, as the future behavior of a willful being
doesn't have a truth value YET.
On the other hand, the future (or past) behavior of a deterministic
machine is fixed, so it is suitable for a question.
People can pretend that Bob is being asked
Carol's question and on the basis of this
false assumption say that Carol's question
has a correct answer.
Since you don't understand the difference between willful beings and deterministic machines, your argument just falls apart.
Maybe you are not a willful being, but gave up that perk in some deal
with a wicked being.
And maybe your confusion is why you think AI LLMs are reliable sources, since you can't understand that they don't actually "think", and their computation algorithms are not based on giving a factual answer.
All you are doing is proving how stupid you are.
On 12/31/2025 2:30 PM, Richard Damon wrote:
On 12/31/25 3:12 PM, olcott wrote:
On 12/30/2025 10:21 PM, olcott wrote:
A Turing-machine decider is a Turing machine D that
computes a total function D : Σ* → {Accept, Reject},
where Σ* is the set of all finite strings over the
input alphabet. That is:
1. Totality: For every finite string input w ∈ Σ*,
D halts and outputs either Accept or Reject.
This is simplified to its barest essence across all models of computation:
All deciders essentially: Transform finite string
inputs by finite string transformation rules into
{Accept, Reject} values.
Anything that cannot be derived from actual finite string
inputs is not computable and outside the scope of computation.
Can Carol correctly answer “no” to this (yes/no) question?
E C R Hehner. Objective and Subjective Specifications.
WST Workshop on Termination, Oxford, 2018 July 18.
See https://www.cs.toronto.edu/~hehner/OSS.pdf
Which isn't a valid question, as the future behavior of a willful being
doesn't have a truth value YET.
On the other hand, the future (or past) behavior of a deterministic
machine is fixed, so it is suitable for a question.
People can pretend that Bob is being asked
Carol's question and on the basis of this
false assumption say that Carol's question
has a correct answer.
Since you don't understand the difference between willful beings and
deterministic machines, your argument just falls apart.
They are semantically equivalent.
Maybe you are not a willful being, but gave up that perk in some deal
with a wicked being.
And maybe your confusion is why you think AI LLMs are reliable
sources, since you can't understand that they don't actually "think",
and their computation algorithms are not based on giving a factual
answer.
Correct semantic entailment derives necessary consequences.
All you are doing is proving how stupid you are.
On 12/31/25 3:54 PM, olcott wrote:
On 12/31/2025 2:30 PM, Richard Damon wrote:
On 12/31/25 3:12 PM, olcott wrote:
On 12/30/2025 10:21 PM, olcott wrote:
A Turing-machine decider is a Turing machine D that
computes a total function D : Σ* → {Accept, Reject},
where Σ* is the set of all finite strings over the
input alphabet. That is:
1. Totality: For every finite string input w ∈ Σ*,
D halts and outputs either Accept or Reject.
This is simplified to its barest essence across all models of computation:
All deciders essentially: Transform finite string
inputs by finite string transformation rules into
{Accept, Reject} values.
Anything that cannot be derived from actual finite string
inputs is not computable and outside the scope of computation.
Can Carol correctly answer “no” to this (yes/no) question?
E C R Hehner. Objective and Subjective Specifications.
WST Workshop on Termination, Oxford, 2018 July 18.
See https://www.cs.toronto.edu/~hehner/OSS.pdf
Which isn't a valid question, as the future behavior of a willful being
doesn't have a truth value YET.
On the other hand, the future (or past) behavior of a deterministic
machine is fixed, so it is suitable for a question.
People can pretend that Bob is being asked
Carol's question and on the basis of this
false assumption say that Carol's question
has a correct answer.
Since you don't understand the difference between willful beings and
deterministic machines, your argument just falls apart.
They are semantically equivalent.
Nope.
So you think that a Willful Being is the semantic equivalent of a Deterministic Computation????
No wonder you are so messed up.
You are just showing how much of an idiot you are.
Maybe in your case, as I have opined, you are not willful, because you
have killed your ability to think and reason.
Maybe you are not a willful being, but gave up that perk in some deal
with a wicked being.
And maybe your confusion is why you think AI LLMs are reliable
sources, since you can't understand that they don't actually "think",
and their computation algorithms are not based on giving a factual
answer.
Correct semantic entailment derives necessary consequences.
Yes, but you need to start with the correct meaning of the words.
All you are doing is proving how stupid you are.
On 12/31/2025 3:19 PM, Richard Damon wrote:
On 12/31/25 3:54 PM, olcott wrote:
On 12/31/2025 2:30 PM, Richard Damon wrote:
On 12/31/25 3:12 PM, olcott wrote:
On 12/30/2025 10:21 PM, olcott wrote:
A Turing-machine decider is a Turing machine D that
computes a total function D : Σ* → {Accept, Reject},
where Σ* is the set of all finite strings over the
input alphabet. That is:
1. Totality: For every finite string input w ∈ Σ*,
D halts and outputs either Accept or Reject.
This is simplified to its barest essence across all models of computation:
All deciders essentially: Transform finite string
inputs by finite string transformation rules into
{Accept, Reject} values.
Anything that cannot be derived from actual finite string
inputs is not computable and outside the scope of computation.
Can Carol correctly answer “no” to this (yes/no) question?
E C R Hehner. Objective and Subjective Specifications.
WST Workshop on Termination, Oxford, 2018 July 18.
See https://www.cs.toronto.edu/~hehner/OSS.pdf
Which isn't a valid question, as the future behavior of a willful being
doesn't have a truth value YET.
On the other hand, the future (or past) behavior of a deterministic
machine is fixed, so it is suitable for a question.
People can pretend that Bob is being asked
Carol's question and on the basis of this
false assumption say that Carol's question
has a correct answer.
Since you don't understand the difference between willful beings and
deterministic machines, your argument just falls apart.
They are semantically equivalent.
Nope.
So you think that a Willful Being is the semantic equivalent of a
Deterministic Computation????
The question posed to Carol is semantically
equivalent to the question posed to H and
you know this is true yet don't give a rat's
ass for truth.
No wonder you are so messed up.
You are just showing how much of an idiot you are.
Maybe in your case, as I have opined, you are not willful, because you
have killed your ability to think and reason.
Maybe you are not a willful being, but gave up that perk in some
deal with a wicked being.
And maybe your confusion is why you think AI LLMs are reliable
sources, since you can't understand that they don't actually
"think", and their computation algorithms are not based on giving a
factual answer.
Correct semantic entailment derives necessary consequences.
Yes, but you need to start with the correct meaning of the words.
All you are doing is proving how stupid you are.
On 12/31/25 4:55 PM, olcott wrote:
On 12/31/2025 3:19 PM, Richard Damon wrote:
On 12/31/25 3:54 PM, olcott wrote:
On 12/31/2025 2:30 PM, Richard Damon wrote:
On 12/31/25 3:12 PM, olcott wrote:
On 12/30/2025 10:21 PM, olcott wrote:
A Turing-machine decider is a Turing machine D that
computes a total function D : Σ* → {Accept, Reject},
where Σ* is the set of all finite strings over the
input alphabet. That is:
1. Totality: For every finite string input w ∈ Σ*,
D halts and outputs either Accept or Reject.
This is simplified to its barest essence across all models of computation:
All deciders essentially: Transform finite string
inputs by finite string transformation rules into
{Accept, Reject} values.
Anything that cannot be derived from actual finite string
inputs is not computable and outside the scope of computation.
Can Carol correctly answer “no” to this (yes/no) question?
E C R Hehner. Objective and Subjective Specifications.
WST Workshop on Termination, Oxford, 2018 July 18.
See https://www.cs.toronto.edu/~hehner/OSS.pdf
Which isn't a valid question, as the future behavior of a willful being
doesn't have a truth value YET.
On the other hand, the future (or past) behavior of a deterministic
machine is fixed, so it is suitable for a question.
People can pretend that Bob is being asked
Carol's question and on the basis of this
false assumption say that Carol's question
has a correct answer.
Since you don't understand the difference between willful beings and
deterministic machines, your argument just falls apart.
They are semantically equivalent.
Nope.
So you think that a Willful Being is the semantic equivalent of a
Deterministic Computation????
The question posed to Carol is semantically
equivalent to the question posed to H and
you know this is true yet don't give a rat's
ass for truth.
No it isn't, as the sort of being it is being asked about matters.
You are just proving you don't know what you are talking about.
I guess you have lost your understanding of what free will means.
My guess is your problem is you have fried your "CPU" and can no longer correctly handle logic.
No wonder you are so messed up.
You are just showing how much of an idiot you are.
Maybe in your case, as I have opined, you are not willful, because
you have killed your ability to think and reason.
Maybe you are not a willful being, but gave up that perk in some
deal with a wicked being.
And maybe your confusion is why you think AI LLMs are reliable
sources, since you can't understand that they don't actually
"think", and their computation algorithms are not based on giving a
factual answer.
Correct semantic entailment derives necessary consequences.
Yes, but you need to start with the correct meaning of the words.
All you are doing is proving how stupid you are.
On 12/31/2025 3:58 PM, Richard Damon wrote:
On 12/31/25 4:55 PM, olcott wrote:
On 12/31/2025 3:19 PM, Richard Damon wrote:
On 12/31/25 3:54 PM, olcott wrote:
On 12/31/2025 2:30 PM, Richard Damon wrote:
On 12/31/25 3:12 PM, olcott wrote:
On 12/30/2025 10:21 PM, olcott wrote:
A Turing-machine decider is a Turing machine D that
computes a total function D : Σ* → {Accept, Reject},
where Σ* is the set of all finite strings over the
input alphabet. That is:
1. Totality: For every finite string input w ∈ Σ*,
D halts and outputs either Accept or Reject.
This is simplified to its barest essence across all models of computation:
All deciders essentially: Transform finite string
inputs by finite string transformation rules into
{Accept, Reject} values.
Anything that cannot be derived from actual finite string
inputs is not computable and outside the scope of computation.
Can Carol correctly answer “no” to this (yes/no) question?
E C R Hehner. Objective and Subjective Specifications.
WST Workshop on Termination, Oxford, 2018 July 18.
See https://www.cs.toronto.edu/~hehner/OSS.pdf
Which isn't a valid question, as the future behavior of a willful being
doesn't have a truth value YET.
On the other hand, the future (or past) behavior of a deterministic
machine is fixed, so it is suitable for a question.
People can pretend that Bob is being asked
Carol's question and on the basis of this
false assumption say that Carol's question
has a correct answer.
Since you don't understand the difference between willful beings
and deterministic machines, your argument just falls apart.
They are semantically equivalent.
Nope.
So you think that a Willful Being is the semantic equivalent of a
Deterministic Computation????
The question posed to Carol is semantically
equivalent to the question posed to H and
you know this is true yet don't give a rat's
ass for truth.
No it isn't, as the sort of being it is being asked about matters.
You are just proving you don't know what you are talking about.
I guess you have lost your understanding of what free will means.
My guess is your problem is you have fried your "CPU" and can no
longer correctly handle logic.
Even omnipotence cannot correctly resolve
"This sentence is not true" into True or False.
No wonder you are so messed up.
You are just showing how much of an idiot you are.
Maybe in your case, as I have opined, you are not willful, because
you have killed your ability to think and reason.
Maybe you are not a willful being, but gave up that perk in some
deal with a wicked being.
And maybe your confusion is why you think AI LLMs are reliable
sources, since you can't understand that they don't actually
"think", and their computation algorithms are not based on giving
a factual answer.
Correct semantic entailment derives necessary consequences.
Yes, but you need to start with the correct meaning of the words.
All you are doing is proving how stupid you are.
On 12/31/25 5:04 PM, olcott wrote:
On 12/31/2025 3:58 PM, Richard Damon wrote:
On 12/31/25 4:55 PM, olcott wrote:
On 12/31/2025 3:19 PM, Richard Damon wrote:
On 12/31/25 3:54 PM, olcott wrote:
On 12/31/2025 2:30 PM, Richard Damon wrote:
On 12/31/25 3:12 PM, olcott wrote:
On 12/30/2025 10:21 PM, olcott wrote:
A Turing-machine decider is a Turing machine D that
computes a total function D : Σ* → {Accept, Reject},
where Σ* is the set of all finite strings over the
input alphabet. That is:
1. Totality: For every finite string input w ∈ Σ*,
D halts and outputs either Accept or Reject.
This is simplified to its barest essence across all models of computation:
All deciders essentially: Transform finite string
inputs by finite string transformation rules into
{Accept, Reject} values.
Anything that cannot be derived from actual finite string
inputs is not computable and outside the scope of computation.
Can Carol correctly answer “no” to this (yes/no) question?
E C R Hehner. Objective and Subjective Specifications.
WST Workshop on Termination, Oxford, 2018 July 18.
See https://www.cs.toronto.edu/~hehner/OSS.pdf
Which isn't a valid question, as the future behavior of a willful being
doesn't have a truth value YET.
On the other hand, the future (or past) behavior of a deterministic
machine is fixed, so it is suitable for a question.
People can pretend that Bob is being asked
Carol's question and on the basis of this
false assumption say that Carol's question
has a correct answer.
Since you don't understand the difference between willful beings
and deterministic machines, your argument just falls apart.
They are semantically equivalent.
Nope.
So you think that a Willful Being is the semantic equivalent of a
Deterministic Computation????
The question posed to Carol is semantically
equivalent to the question posed to H and
you know this is true yet don't give a rat's
ass for truth.
No it isn't, as the sort of being it is being asked about matters.
You are just proving you don't know what you are talking about.
I guess you have lost your understanding of what free will means.
My guess is your problem is you have fried your "CPU" and can no
longer correctly handle logic.
Even omnipotence cannot correctly resolve
"This sentence is not true" into True or False.
But no one is trying to do that but you.
Your problem is you have fried your processing unit and lost your ability
to think.
That is the only explanation for you to keep on just repeating the
same errors: you are just unable to learn because you can't think anymore.
No wonder you are so messed up.
You are just showing how much of an idiot you are.
Maybe in your case, as I have opined, you are not willful, because
you have killed your ability to think and reason.
Maybe you are not a willful being, but gave up that perk in some
deal with a wicked being.
And maybe your confusion is why you think AI LLMs are reliable
sources, since you can't understand that they don't actually
"think", and their computation algorithms are not based on giving
a factual answer.
Correct semantic entailment derives necessary consequences.
Yes, but you need to start with the correct meaning of the words.
All you are doing is proving how stupid you are.
On 12/31/2025 4:11 PM, Richard Damon wrote:
On 12/31/25 5:04 PM, olcott wrote:
On 12/31/2025 3:58 PM, Richard Damon wrote:
On 12/31/25 4:55 PM, olcott wrote:
On 12/31/2025 3:19 PM, Richard Damon wrote:
On 12/31/25 3:54 PM, olcott wrote:
On 12/31/2025 2:30 PM, Richard Damon wrote:
On 12/31/25 3:12 PM, olcott wrote:
On 12/30/2025 10:21 PM, olcott wrote:
A Turing-machine decider is a Turing machine D that
computes a total function D : Σ* → {Accept, Reject},
where Σ* is the set of all finite strings over the
input alphabet. That is:
1. Totality: For every finite string input w ∈ Σ*,
D halts and outputs either Accept or Reject.
This is simplified to its barest essence across all models of computation:
All deciders essentially: Transform finite string
inputs by finite string transformation rules into
{Accept, Reject} values.
Anything that cannot be derived from actual finite string
inputs is not computable and outside the scope of computation.
Can Carol correctly answer “no” to this (yes/no) question?
E C R Hehner. Objective and Subjective Specifications.
WST Workshop on Termination, Oxford, 2018 July 18.
See https://www.cs.toronto.edu/~hehner/OSS.pdf
Which isn't a valid question, as the future behavior of a willful being
doesn't have a truth value YET.
On the other hand, the future (or past) behavior of a deterministic
machine is fixed, so it is suitable for a question.
People can pretend that Bob is being asked
Carol's question and on the basis of this
false assumption say that Carol's question
has a correct answer.
Since you don't understand the difference between willful beings
and deterministic machines, your argument just falls apart.
They are semantically equivalent.
Nope.
So you think that a Willful Being is the semantic equivalent of a
Deterministic Computation????
The question posed to Carol is semantically
equivalent to the question posed to H and
you know this is true yet don't give a rat's
ass for truth.
No it isn't, as the sort of being it is being asked about matters.
You just are proving you don't know what you are talking about,
I guess you have lost your understanding of what free will means.
My guess is your problem is you have fried your "CPU" and can no
longer correctly handle logic.
Even omnipotence cannot correctly resolve
"This sentence is not true" into True or False.
But no one is trying to do that but you.
Your problem is you have fried your processing unit and lost your
ability to think.
That is the only explanation for you to keep on just repeating the
same errors: you are just unable to learn because you can't think
anymore.
No one has ever provided any reasoning that I am incorrect.
Every single rebuttal in 28 years has always been a form
of "we really really don't believe you, therefore you are wrong."
No wonder you are so messed up.
You are just showing how much of an idiot you are.
Maybe in your case, as I have opined, you are not willful, because
you have killed your ability to think and reason.
Maybe you are not a willful being, but gave up that perk in some
deal with a wicked being.
And maybe your confusion is why you think AI LLMs are reliable
sources, since you can't understand that they don't actually
"think", and their computation algorithms are not based on
giving a factual answer.
Correct semantic entailment derives necessary consequences.
Yes, but you need to start with the correct meaning of the words.
All you are doing is proving how stupid you are.