• Re: The exact meaning of these exact words proves ALL of my points

    From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Dec 31 16:04:20 2025
    From Newsgroup: comp.ai.philosophy

    On 12/31/2025 3:58 PM, Richard Damon wrote:
    On 12/31/25 4:55 PM, olcott wrote:
    On 12/31/2025 3:19 PM, Richard Damon wrote:
    On 12/31/25 3:54 PM, olcott wrote:
    On 12/31/2025 2:30 PM, Richard Damon wrote:
    On 12/31/25 3:12 PM, olcott wrote:
    On 12/30/2025 10:21 PM, olcott wrote:
    A Turing-machine decider is a Turing machine D that
    computes a total function D : Σ* → {Accept, Reject},
    where Σ* is the set of all finite strings over the
    input alphabet. That is:

    1. Totality: For every finite string input w ∈ Σ*,
    D halts and outputs either Accept or Reject.

    This is simplified to its barest essence across all models
    of computation:
    All deciders essentially: Transform finite string
    inputs by finite string transformation rules into
    {Accept, Reject} values.

    Anything that cannot be derived from actual finite string
    inputs is not computable and outside the scope of computation.
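    The definition above reduces a decider to a total map from finite strings to a verdict. A minimal sketch of such a map (a hypothetical example, not from the thread; `decide_even_ones` and the language it decides are invented purely for illustration):

```python
# Hypothetical sketch (not from the thread) of a "decider" in the
# sense defined above: a total function from finite strings over an
# input alphabet to {Accept, Reject}. The concrete language decided
# here (binary strings with an even number of '1's) was chosen only
# because a finite-state rule obviously halts on every input.

ACCEPT, REJECT = "Accept", "Reject"

def decide_even_ones(w: str) -> str:
    """Total: halts on every finite string over {'0','1'}."""
    parity = 0
    for ch in w:
        if ch not in "01":
            raise ValueError("input must be a string over {0,1}")
        if ch == "1":
            parity ^= 1          # flip parity on each '1'
    return ACCEPT if parity == 0 else REJECT

print(decide_even_ones("1100"))  # two '1's  -> Accept
print(decide_even_ones("10"))    # one '1'   -> Reject
```

    The loop takes exactly len(w) steps on any input, which is what makes the function total in the sense of clause 1.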


    Can Carol correctly answer "no" to this (yes/no) question?
    E C R Hehner. Objective and Subjective Specifications
    WST Workshop on Termination, Oxford. 2018 July 18.
    See https://www.cs.toronto.edu/~hehner/OSS.pdf

    Which isn't a valid question, as the future behavior of a willful
    being doesn't have a truth value YET.

    On the other hand, the future (or past) behavior of a deterministic
    machine is fixed, so suitable for a question.


    People can pretend that Bob is being asked
    Carol's question and on the basis of this
    false assumption say that Carol's question
    has a correct answer.



    Since you don't understand the difference between willful beings
    and deterministic machines, your argument just falls apart.


    They are semantically equivalent.

    Nope.

    So you think that a Willful Being is the semantic equivalent of a
    Deterministic Computation????


    The question posed to Carol is semantically
    equivalent to the question posed to H and
    you know this is true yet don't give a rat's
    ass for truth.


    No it isn't, as the sort of being it is being asked about matters.

    You just are proving you don't know what you are talking about,

    I guess you have lost your understanding of what free will means.

    My guess is your problem is you have fried your "CPU" and can no longer correctly handle logic.

    Even omnipotence cannot correctly resolve
    "This sentence is not true" into True or False.
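    The Liar claim above can be checked the same brute-force way (a hypothetical sketch, not from the posts): in a classical two-valued semantics, a truth value v for "This sentence is not true" would have to satisfy v == (not v), and neither value does.

```python
# Hypothetical sketch, not from the posts: the Liar sentence asserts
# the negation of its own truth value v, so v must satisfy
# v == (not v).

liar = [v for v in (True, False) if v == (not v)]
print(liar)            # [] -- neither truth value is consistent

# Contrast: "This sentence is true" (the Truth-teller) is satisfied
# by both values, so it is underdetermined rather than contradictory.
truth_teller = [v for v in (True, False) if v == v]
print(truth_teller)    # [True, False]
```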



    No wonder you are so messed up.

    You are just showing how much of an idiot you are.

    Maybe in your case, as I have opined, you are not willful, because
    you have killed your ability to think and reason.


    Maybe you are not a willful being, but gave up that perk in some
    deal with a wicked being.

    And maybe your confusion is why you think AI LLMs are reliable
    sources, since you can't understand that they don't actually
    "think", and their computation algorithms are not based on giving a
    factual answer.


    Correct semantic entailment derives necessary consequences.


    Yes, but you need to start with the correct meaning of the words.

    All you are doing is proving how stupid you are.






    --
    Copyright 2025 Olcott

    My 28-year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Dec 31 17:11:07 2025
    From Newsgroup: comp.ai.philosophy

    On 12/31/25 5:04 PM, olcott wrote:
    On 12/31/2025 3:58 PM, Richard Damon wrote:
    On 12/31/25 4:55 PM, olcott wrote:
    On 12/31/2025 3:19 PM, Richard Damon wrote:
    On 12/31/25 3:54 PM, olcott wrote:
    On 12/31/2025 2:30 PM, Richard Damon wrote:
    On 12/31/25 3:12 PM, olcott wrote:
    On 12/30/2025 10:21 PM, olcott wrote:
    A Turing-machine decider is a Turing machine D that
    computes a total function D : Σ* → {Accept, Reject},
    where Σ* is the set of all finite strings over the
    input alphabet. That is:

    1. Totality: For every finite string input w ∈ Σ*,
    D halts and outputs either Accept or Reject.

    This is simplified to its barest essence across all models
    of computation:
    All deciders essentially: Transform finite string
    inputs by finite string transformation rules into
    {Accept, Reject} values.

    Anything that cannot be derived from actual finite string
    inputs is not computable and outside the scope of computation.

    Can Carol correctly answer "no" to this (yes/no) question?
    E C R Hehner. Objective and Subjective Specifications
    WST Workshop on Termination, Oxford. 2018 July 18.
    See https://www.cs.toronto.edu/~hehner/OSS.pdf

    Which isn't a valid question, as the future behavior of a willful
    being doesn't have a truth value YET.

    On the other hand, the future (or past) behavior of a deterministic
    machine is fixed, so suitable for a question.


    People can pretend that Bob is being asked
    Carol's question and on the basis of this
    false assumption say that Carol's question
    has a correct answer.



    Since you don't understand the difference between willful beings
    and deterministic machines, your argument just falls apart.


    They are semantically equivalent.

    Nope.

    So you think that a Willful Being is the semantic equivalent of a
    Deterministic Computation????


    The question posed to Carol is semantically
    equivalent to the question posed to H and
    you know this is true yet don't give a rat's
    ass for truth.


    No it isn't, as the sort of being it is being asked about matters.

    You just are proving you don't know what you are talking about,

    I guess you have lost your understanding of what free will means.

    My guess is your problem is you have fried your "CPU" and can no
    longer correctly handle logic.

    Even omnipotence cannot correctly resolve
    "This sentence is not true" into True or False.

    But no one is trying to do that but you.

    Your problem is you have fried your processing unit and lost your ability
    to think.

    That is the only explanation for you to keep on just repeating the
    same errors: you are just unable to learn because you can't think
    anymore.




    No wonder you are so messed up.

    You are just showing how much of an idiot you are.

    Maybe in your case, as I have opined, you are not willful, because
    you have killed your ability to think and reason.


    Maybe you are not a willful being, but gave up that perk in some
    deal with a wicked being.

    And maybe your confusion is why you think AI LLMs are reliable
    sources, since you can't understand that they don't actually
    "think", and their computation algorithms are not based on giving
    a factual answer.


    Correct semantic entailment derives necessary consequences.


    Yes, but you need to start with the correct meaning of the words.

    All you are doing is proving how stupid you are.









    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Dec 31 16:45:03 2025
    From Newsgroup: comp.ai.philosophy

    On 12/31/2025 4:11 PM, Richard Damon wrote:
    On 12/31/25 5:04 PM, olcott wrote:
    On 12/31/2025 3:58 PM, Richard Damon wrote:
    On 12/31/25 4:55 PM, olcott wrote:
    On 12/31/2025 3:19 PM, Richard Damon wrote:
    On 12/31/25 3:54 PM, olcott wrote:
    On 12/31/2025 2:30 PM, Richard Damon wrote:
    On 12/31/25 3:12 PM, olcott wrote:
    On 12/30/2025 10:21 PM, olcott wrote:
    A Turing-machine decider is a Turing machine D that
    computes a total function D : Σ* → {Accept, Reject},
    where Σ* is the set of all finite strings over the
    input alphabet. That is:

    1. Totality: For every finite string input w ∈ Σ*,
    D halts and outputs either Accept or Reject.

    This is simplified to its barest essence across all models
    of computation:
    All deciders essentially: Transform finite string
    inputs by finite string transformation rules into
    {Accept, Reject} values.

    Anything that cannot be derived from actual finite string
    inputs is not computable and outside the scope of computation.

    Can Carol correctly answer "no" to this (yes/no) question?
    E C R Hehner. Objective and Subjective Specifications
    WST Workshop on Termination, Oxford. 2018 July 18.
    See https://www.cs.toronto.edu/~hehner/OSS.pdf

    Which isn't a valid question, as the future behavior of a willful
    being doesn't have a truth value YET.

    On the other hand, the future (or past) behavior of a
    deterministic machine is fixed, so suitable for a question.


    People can pretend that Bob is being asked
    Carol's question and on the basis of this
    false assumption say that Carol's question
    has a correct answer.



    Since you don't understand the difference between willful beings
    and deterministic machines, your argument just falls apart.


    They are semantically equivalent.

    Nope.

    So you think that a Willful Being is the semantic equivalent of a
    Deterministic Computation????


    The question posed to Carol is semantically
    equivalent to the question posed to H and
    you know this is true yet don't give a rat's
    ass for truth.


    No it isn't, as the sort of being it is being asked about matters.

    You just are proving you don't know what you are talking about,

    I guess you have lost your understanding of what free will means.

    My guess is your problem is you have fried your "CPU" and can no
    longer correctly handle logic.

    Even omnipotence cannot correctly resolve
    "This sentence is not true" into True or False.

    But no one is trying to do that but you.

    Your problem is you have fried your processing unit and lost your ability
    to think.

    That is the only explanation for you to keep on just repeating the
    same errors: you are just unable to learn because you can't think
    anymore.


    No one has ever provided any reasoning that I am incorrect.
    Every single rebuttal in 28 years has always been a form
    of "we really really don't believe you, therefore you are wrong."




    No wonder you are so messed up.

    You are just showing how much of an idiot you are.

    Maybe in your case, as I have opined, you are not willful, because
    you have killed your ability to think and reason.


    Maybe you are not a willful being, but gave up that perk in some
    deal with a wicked being.

    And maybe your confusion is why you think AI LLMs are reliable
    sources, since you can't understand that they don't actually
    "think", and their computation algorithms are not based on giving
    a factual answer.


    Correct semantic entailment derives necessary consequences.


    Yes, but you need to start with the correct meaning of the words.

    All you are doing is proving how stupid you are.









    --
    Copyright 2025 Olcott

    My 28-year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Dec 31 17:51:30 2025
    From Newsgroup: comp.ai.philosophy

    On 12/31/25 5:45 PM, olcott wrote:
    On 12/31/2025 4:11 PM, Richard Damon wrote:
    On 12/31/25 5:04 PM, olcott wrote:
    On 12/31/2025 3:58 PM, Richard Damon wrote:
    On 12/31/25 4:55 PM, olcott wrote:
    On 12/31/2025 3:19 PM, Richard Damon wrote:
    On 12/31/25 3:54 PM, olcott wrote:
    On 12/31/2025 2:30 PM, Richard Damon wrote:
    On 12/31/25 3:12 PM, olcott wrote:
    On 12/30/2025 10:21 PM, olcott wrote:
    A Turing-machine decider is a Turing machine D that
    computes a total function D : Σ* → {Accept, Reject},
    where Σ* is the set of all finite strings over the
    input alphabet. That is:

    1. Totality: For every finite string input w ∈ Σ*,
    D halts and outputs either Accept or Reject.

    This is simplified to its barest essence across all models
    of computation:
    All deciders essentially: Transform finite string
    inputs by finite string transformation rules into
    {Accept, Reject} values.

    Anything that cannot be derived from actual finite string
    inputs is not computable and outside the scope of computation.

    Can Carol correctly answer "no" to this (yes/no) question?
    E C R Hehner. Objective and Subjective Specifications
    WST Workshop on Termination, Oxford. 2018 July 18.
    See https://www.cs.toronto.edu/~hehner/OSS.pdf

    Which isn't a valid question, as the future behavior of a willful
    being doesn't have a truth value YET.

    On the other hand, the future (or past) behavior of a
    deterministic machine is fixed, so suitable for a question.


    People can pretend that Bob is being asked
    Carol's question and on the basis of this
    false assumption say that Carol's question
    has a correct answer.



    Since you don't understand the difference between willful beings
    and deterministic machines, your argument just falls apart.


    They are semantically equivalent.

    Nope.

    So you think that a Willful Being is the semantic equivalent of a
    Deterministic Computation????


    The question posed to Carol is semantically
    equivalent to the question posed to H and
    you know this is true yet don't give a rat's
    ass for truth.


    No it isn't, as the sort of being it is being asked about matters.

    You just are proving you don't know what you are talking about,

    I guess you have lost your understanding of what free will means.

    My guess is your problem is you have fried your "CPU" and can no
    longer correctly handle logic.

    Even omnipotence cannot correctly resolve
    "This sentence is not true" into True or False.

    But no one is trying to do that but you.

    Your problem is you have fried your processing unit and lost your
    ability to think.

    That is the only explanation for you to keep on just repeating the
    same errors: you are just unable to learn because you can't think
    anymore.


    No one has ever provided any reasoning that I am incorrect.
    Every single rebuttal in 28 years has always been a form
    of "we really really don't believe you, therefore you are wrong."

    Sure we have.

    The fact that you haven't ever even tried to point out an error in the
    errors pointed out, but just repeat your error, shows you don't
    understand what you are talking about.

    The rebuttals of your work haven't been simple "belief", but point out
    the factual errors you make.

    Your reply is just that you don't believe the facts of the system, but
    can't point out why.

    All you have done is prove you are nothing but a pathological liar
    who doesn't understand what he is talking about.





    No wonder you are so messed up.

    You are just showing how much of an idiot you are.

    Maybe in your case, as I have opined, you are not willful, because
    you have killed your ability to think and reason.


    Maybe you are not a willful being, but gave up that perk in some
    deal with a wicked being.

    And maybe your confusion is why you think AI LLMs are reliable
    sources, since you can't understand that they don't actually
    "think", and their computation algorithms are not based on
    giving a factual answer.


    Correct semantic entailment derives necessary consequences.


    Yes, but you need to start with the correct meaning of the words.

    All you are doing is proving how stupid you are.












    --- Synchronet 3.21a-Linux NewsLink 1.2