On 12/31/25 4:55 PM, olcott wrote:
On 12/31/2025 3:19 PM, Richard Damon wrote:
On 12/31/25 3:54 PM, olcott wrote:
On 12/31/2025 2:30 PM, Richard Damon wrote:
On 12/31/25 3:12 PM, olcott wrote:
On 12/30/2025 10:21 PM, olcott wrote:
A Turing-machine decider is a Turing machine D that
computes a total function D : Σ* → {Accept, Reject},
where Σ* is the set of all finite strings over the
input alphabet. That is:
1. Totality: For every finite string input w ∈ Σ*,
D halts and outputs either Accept or Reject.
Is simplified to this barest essence across all models of
computation
All deciders essentially: Transform finite string
inputs by finite string transformation rules into
{Accept, Reject} values.
Anything that cannot be derived from actual finite string
inputs is not computable and outside the scope of computation.
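A minimal sketch of this "finite string transformation" view of a decider (Python; the parity language and all names here are illustrative choices, not from the post):

```python
from enum import Enum

class Verdict(Enum):
    ACCEPT = "Accept"
    REJECT = "Reject"

def parity_decider(w: str) -> Verdict:
    """A total decider: maps every finite binary string to Accept/Reject.

    It accepts exactly the strings containing an even number of '1' bits.
    Totality holds because scanning a finite string always halts.
    """
    ones = sum(1 for c in w if c == "1")
    return Verdict.ACCEPT if ones % 2 == 0 else Verdict.REJECT
```

The verdict is derived from nothing but the finite input string and fixed transformation rules, matching the "barest essence" description above.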
Can Carol correctly answer "no" to this (yes/no) question?
E C R Hehner. Objective and Subjective Specifications
WST Workshop on Termination, Oxford. 2018 July 18.
See https://www.cs.toronto.edu/~hehner/OSS.pdf
Which isn't a valid question, as the future behavior of a willful being doesn't have a truth value YET.
On the other hand, the future (or past) behavior of a deterministic
machine is fixed, so suitable for a question.
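The point about deterministic machines can be sketched directly: because such a machine is a pure function of its input, "will M output Accept on w?" already has a fixed truth value (hypothetical machine M, for illustration only):

```python
def M(w: str) -> str:
    # A deterministic machine: the output depends only on the input string.
    return "Accept" if len(w) % 2 == 0 else "Reject"

# Its "future" behavior on any fixed input is determined now;
# repeated runs can only ever produce one verdict.
verdicts = {M("ab") for _ in range(100)}
```

Contrast this with a willful being, whose answer is not a fixed function of the question.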
People can pretend that Bob is being asked
Carol's question and on the basis of this
false assumption say that Carol's question
has a correct answer.
Since you don't understand the difference between willful beings and deterministic machines, your argument just falls apart.
They are semantically equivalent.
Nope.
So you think that a Willful Being is the semantic equivalent of a
Deterministic Computation????
The question posed to Carol is semantically
equivalent to the question posed to H and
you know this is true yet don't give a rat's
ass for truth.
No it isn't, as the sort of being it is being asked about matters.
You just are proving you don't know what you are talking about,
I guess you have lost your understanding of what free will means.
My guess is your problem is you have fried your "CPU" and can no longer correctly handle logic.
No wonder you are so messed up.
You are just showing how much of an idiot you are.
Maybe in your case, as I have opined, you are not willful, because
you have killed your ability to think and reason.
Maybe you are not a willful being, but gave up that perk in some
deal with a wicked being.
And maybe your confusion is why you think AI LLMs are reliable
sources, since you can't understand that they don't actually
"think", and their computation algorithms are not based on giving a factual answer.
Correct semantic entailment derives necessary consequences.
Yes, but you need to start with the correct meaning of the words.
All you are doing is proving how stupid you are.
On 12/31/2025 3:58 PM, Richard Damon wrote:
On 12/31/25 4:55 PM, olcott wrote:
On 12/31/2025 3:19 PM, Richard Damon wrote:
On 12/31/25 3:54 PM, olcott wrote:
On 12/31/2025 2:30 PM, Richard Damon wrote:
On 12/31/25 3:12 PM, olcott wrote:
On 12/30/2025 10:21 PM, olcott wrote:
A Turing-machine decider is a Turing machine D that
computes a total function D : Σ* → {Accept, Reject},
where Σ* is the set of all finite strings over the
input alphabet. That is:
1. Totality: For every finite string input w ∈ Σ*,
D halts and outputs either Accept or Reject.
Is simplified to this barest essence across all models of
computation
All deciders essentially: Transform finite string
inputs by finite string transformation rules into
{Accept, Reject} values.
Anything that cannot be derived from actual finite string
inputs is not computable and outside the scope of computation.
Can Carol correctly answer "no" to this (yes/no) question?
E C R Hehner. Objective and Subjective Specifications
WST Workshop on Termination, Oxford. 2018 July 18.
See https://www.cs.toronto.edu/~hehner/OSS.pdf
Which isn't a valid question, as the future behavior of a willful
being doesn't have a truth value YET.
On the other hand, the future (or past) behavior of a deterministic machine is fixed, so suitable for a question.
People can pretend that Bob is being asked
Carol's question and on the basis of this
false assumption say that Carol's question
has a correct answer.
Since you don't understand the difference between willful beings
and deterministic machines, your argument just falls apart.
They are semantically equivalent.
Nope.
So you think that a Willful Being is the semantic equivalent of a
Deterministic Computation????
The question posed to Carol is semantically
equivalent to the question posed to H and
you know this is true yet don't give a rat's
ass for truth.
No it isn't, as the sort of being it is being asked about matters.
You just are proving you don't know what you are talking about,
I guess you have lost your understanding of what free will means.
My guess is your problem is you have fried your "CPU" and can no
longer correctly handle logic.
Even omnipotence cannot correctly resolve
"This sentence is not true" into True or False.
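The impossibility claimed here can be checked mechanically: "This sentence is not true" asserts the negation of its own truth value, so neither assignment is consistent (an illustrative sketch, not anyone's implementation):

```python
def consistent(value: bool) -> bool:
    # "This sentence is not true" is satisfied by `value` only if
    # value equals its own negation -- which never holds in {True, False}.
    return value == (not value)

# Neither True nor False is a consistent assignment:
assignments = [v for v in (True, False) if consistent(v)]
```

The empty result mirrors the claim: no resolution into True or False exists.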
No wonder you are so messed up.
You are just showing how much of an idiot you are.
Maybe in your case, as I have opined, you are not willful, because
you have killed your ability to think and reason.
Maybe you are not a willful being, but gave up that perk in some
deal with a wicked being.
And maybe your confusion is why you think AI LLMs are reliable
sources, since you can't understand that they don't actually
"think", and their computation algorithms are not based on giving a factual answer.
Correct semantic entailment derives necessary consequences.
Yes, but you need to start with the correct meaning of the words.
All you are doing is proving how stupid you are.
On 12/31/25 5:04 PM, olcott wrote:
On 12/31/2025 3:58 PM, Richard Damon wrote:
On 12/31/25 4:55 PM, olcott wrote:
On 12/31/2025 3:19 PM, Richard Damon wrote:
On 12/31/25 3:54 PM, olcott wrote:
On 12/31/2025 2:30 PM, Richard Damon wrote:
On 12/31/25 3:12 PM, olcott wrote:
On 12/30/2025 10:21 PM, olcott wrote:
A Turing-machine decider is a Turing machine D that
computes a total function D : Σ* → {Accept, Reject},
where Σ* is the set of all finite strings over the
input alphabet. That is:
1. Totality: For every finite string input w ∈ Σ*,
D halts and outputs either Accept or Reject.
Is simplified to this barest essence across all models of
computation
All deciders essentially: Transform finite string
inputs by finite string transformation rules into
{Accept, Reject} values.
Anything that cannot be derived from actual finite string
inputs is not computable and outside the scope of computation.
Can Carol correctly answer "no" to this (yes/no) question?
E C R Hehner. Objective and Subjective Specifications
WST Workshop on Termination, Oxford. 2018 July 18.
See https://www.cs.toronto.edu/~hehner/OSS.pdf
Which isn't a valid question, as the future behavior of a willful
being doesn't have a truth value YET.
On the other hand, the future (or past) behavior of a
deterministic machine is fixed, so suitable for a question.
People can pretend that Bob is being asked
Carol's question and on the basis of this
false assumption say that Carol's question
has a correct answer.
Since you don't understand the difference between willful beings and deterministic machines, your argument just falls apart.
They are semantically equivalent.
Nope.
So you think that a Willful Being is the semantic equivalent of a
Deterministic Computation????
The question posed to Carol is semantically
equivalent to the question posed to H and
you know this is true yet don't give a rat's
ass for truth.
No it isn't, as the sort of being it is being asked about matters.
You just are proving you don't know what you are talking about,
I guess you have lost your understanding of what free will means.
My guess is your problem is you have fried your "CPU" and can no
longer correctly handle logic.
Even omnipotence cannot correctly resolve
"This sentence is not true" into True or False.
But no one is trying to do that but you.
Your problem is you have fried your processing unit and lost your ability
to think.
That is the only explanation for you to keep on just repeating the
same errors: you are just unable to learn because you can't think anymore.
No wonder you are so messed up.
You are just showing how much of an idiot you are.
Maybe in your case, as I have opined, you are not willful, because
you have killed your ability to think and reason.
Maybe you are not a willful being, but gave up that perk in some deal with a wicked being.
And maybe your confusion is why you think AI LLMs are reliable
sources, since you can't understand that they don't actually
"think", and their computation algorithms are not based on giving a factual answer.
Correct semantic entailment derives necessary consequences.
Yes, but you need to start with the correct meaning of the words.
All you are doing is proving how stupid you are.
On 12/31/2025 4:11 PM, Richard Damon wrote:
On 12/31/25 5:04 PM, olcott wrote:
On 12/31/2025 3:58 PM, Richard Damon wrote:
On 12/31/25 4:55 PM, olcott wrote:
On 12/31/2025 3:19 PM, Richard Damon wrote:
On 12/31/25 3:54 PM, olcott wrote:
On 12/31/2025 2:30 PM, Richard Damon wrote:
On 12/31/25 3:12 PM, olcott wrote:
On 12/30/2025 10:21 PM, olcott wrote:
A Turing-machine decider is a Turing machine D that
computes a total function D : Σ* → {Accept, Reject},
where Σ* is the set of all finite strings over the
input alphabet. That is:
1. Totality: For every finite string input w ∈ Σ*,
D halts and outputs either Accept or Reject.
Is simplified to this barest essence across all models of computation
All deciders essentially: Transform finite string
inputs by finite string transformation rules into
{Accept, Reject} values.
Anything that cannot be derived from actual finite string
inputs is not computable and outside the scope of computation.
Can Carol correctly answer "no" to this (yes/no) question?
E C R Hehner. Objective and Subjective Specifications
WST Workshop on Termination, Oxford. 2018 July 18.
See https://www.cs.toronto.edu/~hehner/OSS.pdf
Which isn't a valid question, as the future behavior of a willful being doesn't have a truth value YET.
On the other hand, the future (or past) behavior of a
deterministic machine is fixed, so suitable for a question.
People can pretend that Bob is being asked
Carol's question and on the basis of this
false assumption say that Carol's question
has a correct answer.
Since you don't understand the difference between willful beings and deterministic machines, your argument just falls apart.
They are semantically equivalent.
Nope.
So you think that a Willful Being is the semantic equivalent of a Deterministic Computation????
The question posed to Carol is semantically
equivalent to the question posed to H and
you know this is true yet don't give a rat's
ass for truth.
No it isn't, as the sort of being it is being asked about matters.
You just are proving you don't know what you are talking about,
I guess you have lost your understanding of what free will means.
My guess is your problem is you have fried your "CPU" and can no
longer correctly handle logic.
Even omnipotence cannot correctly resolve
"This sentence is not true" into True or False.
But no one is trying to do that but you.
Your problem is you have fried your processing unit and lost your
ability to think.
That is the only explanation for you to keep on just repeating the
same errors: you are just unable to learn because you can't think
anymore.
No one has ever provided any reasoning that I am incorrect.
Every single rebuttal in 28 years has always been a form
of we really really don't believe you therefore you are wrong.
No wonder you are so messed up.
You are just showing how much of an idiot you are.
Maybe in your case, as I have opined, you are not willful, because you have killed your ability to think and reason.
Maybe you are not a willful being, but gave up that perk in some deal with a wicked being.
And maybe your confusion is why you think AI LLMs are reliable sources, since you can't understand that they don't actually
"think", and their computation algorithms are not based on
giving a factual answer.
Correct semantic entailment derives necessary consequences.
Yes, but you need to start with the correct meaning of the words.
All you are doing is proving how stupid you are.