• The exact meaning of these exact words proves ALL of my points

    From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Tue Dec 30 22:21:51 2025
    From Newsgroup: comp.theory

    A Turing-machine decider is a Turing machine D that
    computes a total function D : Σ* → {Accept,Reject},
    where Σ* is the set of all finite strings over the
    input alphabet. That is:

    1. Totality: For every finite string input w ∈ Σ*,
    D halts and outputs either Accept or Reject.

    Is simplified to this barest essence across all models of computation
    All deciders essentially: Transform finite string
    inputs by finite string transformation rules into
    {Accept, Reject} values.

    Anything that cannot be derived from actual finite string
    inputs is not computable and outside the scope of computation.
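
    The "barest essence" above can be illustrated with a minimal sketch (not
    from the post; names are invented): a total decider for even-length
    strings transforms any finite input string into Accept or Reject and
    halts on every input.

    ```python
    def even_length_decider(w: str) -> str:
        """A total decider: for every finite input string w, halt and
        output exactly one of "Accept" or "Reject"."""
        # The finite-string transformation here is trivial: count symbols.
        return "Accept" if len(w) % 2 == 0 else "Reject"
    ```

    Totality holds because counting the symbols of a finite string always
    terminates, so the decider halts on all inputs.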
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Dec 31 07:28:32 2025
    From Newsgroup: comp.theory

    On 12/30/25 11:21 PM, olcott wrote:
    A Turing-machine decider is a Turing machine D that
    computes a total function D : Σ* → {Accept,Reject},
    where Σ* is the set of all finite strings over the
    input alphabet. That is:

    1. Totality: For every finite string input w ∈ Σ*,
    D halts and outputs either Accept or Reject.

    Is simplified to this barest essence across all models of computation
    All deciders essentially: Transform finite string
    inputs by finite string transformation rules into
    {Accept, Reject} values.

    Anything that cannot be derived from actual finite string
    inputs is not computable and outside the scope of computation.


    But since the Halting status of the machine that finite string describes
    IS derived by just running that machine, or giving it
    to the appropriate UTM, you are just showing that Halting is a valid
    question.

    It is also uncomputable, as has been proven.
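
    The proof alluded to here is the classic diagonal argument. A hedged
    sketch (Python; `halts` and `diagonal` are hypothetical names, not code
    from the thread) of why no total halting decider can exist:

    ```python
    def halts(program, arg) -> bool:
        """Hypothetical total halting decider. The construction below
        shows no correct implementation can exist, so this raises."""
        raise NotImplementedError("no correct halting decider exists")

    def diagonal(program):
        # Do the opposite of whatever halts() predicts about diagonal(diagonal):
        if halts(program, program):
            while True:   # predicted to halt -> loop forever
                pass
        return None       # predicted to loop -> halt immediately

    # If halts(diagonal, diagonal) returned True, diagonal(diagonal) would
    # loop; if it returned False, it would halt. Either answer is wrong.
    ```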

    Your problem is you seem to not understand the requirement that a
    decider needs to CORRECTLY compute the function it is supposed to be computing, because you just don't understand the nature of truth, and
    think it can be just redefined.

    As a simile, your logic says a Persian cat can be entered into the
    Westminster Dog show and win best of breed, just by saying it is a dog.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Dec 31 10:20:26 2025
    From Newsgroup: comp.theory

    On 12/31/2025 6:28 AM, Richard Damon wrote:
    On 12/30/25 11:21 PM, olcott wrote:
    A Turing-machine decider is a Turing machine D that
    computes a total function D : Σ* → {Accept,Reject},
    where Σ* is the set of all finite strings over the
    input alphabet. That is:

    1. Totality: For every finite string input w ∈ Σ*,
    D halts and outputs either Accept or Reject.

    Is simplified to this barest essence across all models of computation
    All deciders essentially: Transform finite string
    inputs by finite string transformation rules into
    {Accept, Reject} values.

    Anything that cannot be derived from actual finite string
    inputs is not computable and outside the scope of computation.


    But since the Halting status of the machine that finite string describes
    IS derived by just running that machine, or giving it
    to the appropriate UTM, you are just showing that Halting is a valid question.

    It is also uncomputable, as has been proven.


    There are no finite string transformations that HHH(DD)
    can apply to its input that derive the behavior of UTM(DD).

    There are finite string transformations that HHH(DD)
    can apply to its input that derive the behavior that
    the input to HHH(DD) specifies.

    No decider is ever accountable to report on any behavior
    other than the actual behavior that its actual finite
    string input actually specifies. When the halting problem
    requires more than that it requires too much.

    Your problem is you seem to not understand the requirement that a
    decider needs to CORRECTLY compute the function it is supposed to be computing, because you just don't understand the nature of truth, and
    think it can be just redefined.

    As a simile, your logic says a Persian cat can be entered into the Westminster Dog show and win best of breed, just by saying it is a dog.

    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Dec 31 12:11:08 2025
    From Newsgroup: comp.theory

    On 12/31/25 11:20 AM, olcott wrote:
    On 12/31/2025 6:28 AM, Richard Damon wrote:
    On 12/30/25 11:21 PM, olcott wrote:
    A Turing-machine decider is a Turing machine D that
    computes a total function D : Σ* → {Accept,Reject},
    where Σ* is the set of all finite strings over the
    input alphabet. That is:

    1. Totality: For every finite string input w ∈ Σ*,
    D halts and outputs either Accept or Reject.

    Is simplified to this barest essence across all models of computation
    All deciders essentially: Transform finite string
    inputs by finite string transformation rules into
    {Accept, Reject} values.

    Anything that cannot be derived from actual finite string
    inputs is not computable and outside the scope of computation.


    But since the Halting status of the machine that finite string describes
    IS derived by just running that machine, or giving it
    to the appropriate UTM, you are just showing that Halting is a valid
    question.

    It is also uncomputable, as has been proven.


    There are no finite string transformations that HHH(DD)
    can apply to its input that derive the behavior of UTM(DD).


    But that is a different, and non-sense standard.

    There is only ONE transform that HHH does, and it is just wrong.

    You seem to forget that HHH is a specific decider while the criteria
    needs to be an objective criterion.

    The criterion is: does the mapping that this HHH computes match the
    required one, which is what UTM(DD) shows.

    The fact that HHH doesn't do that makes it wrong.

    The fact that we can make a similar input from any possible decider
    makes the problem uncomputable.

    The fact you refuse to accept this makes you stupid.


    There are finite string transformations that HHH(DD)
    can apply to its input that derive the behavior that
    the input to HHH(DD) specifies.

    No, there is only ONE transform that it DOES apply.

    But that does not specify the meaning of the string, as it was SUPPOSED
    TO represent the behavior of the program DD.

    Once you label your HHH as a Halt Decider, the semantics of its input are
    specified, and NOT based on what it actually does, but on what it was
    claiming to be.

    Now, part of your problem is you never actually formed the right input
    string, as you never set up your program correctly, just showing your
    stupidity and ignorance.


    No decider is ever accountable to report on any behavior
    other than the actual behavior that its actual finite
    string input actually specifies. When the halting problem
    requires more than that it requires too much.

    But the actual behavior that its actual finite string represents *IS* the behavior of the machine it describes, or you are just admitting you
    started with a lie that DD calling HHH(DD) is according to the proof
    program, as that *IS* the meaning its passed string must represent.

    All you are doing is admitting you are just a stupid liar.

    It seems you just don't understand the concept of "Requirements" and thus
    have major errors in your definition of things like "Truth"


    Your problem is you seem to not understand the requirement that a
    decider needs to CORRECTLY compute the function it is supposed to be
    computing, because you just don't understand the nature of truth, and
    think it can be just redefined.

    As a simile, your logic says a Persian cat can be entered into the
    Westminster Dog show and win best of breed, just by saying it is a dog.




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Dec 31 11:51:43 2025
    From Newsgroup: comp.theory

    On 12/31/2025 11:11 AM, Richard Damon wrote:
    On 12/31/25 11:20 AM, olcott wrote:
    On 12/31/2025 6:28 AM, Richard Damon wrote:
    On 12/30/25 11:21 PM, olcott wrote:
    A Turing-machine decider is a Turing machine D that
    computes a total function D : Σ* → {Accept,Reject},
    where Σ* is the set of all finite strings over the
    input alphabet. That is:

    1. Totality: For every finite string input w ∈ Σ*,
    D halts and outputs either Accept or Reject.

    Is simplified to this barest essence across all models of computation
    All deciders essentially: Transform finite string
    inputs by finite string transformation rules into
    {Accept, Reject} values.

    Anything that cannot be derived from actual finite string
    inputs is not computable and outside the scope of computation.


    But since the Halting status of the machine that finite string describes
    IS derived by just running that machine, or giving
    it to the appropriate UTM, you are just showing that Halting is a
    valid question.

    It is also uncomputable, as has been proven.


    There are no finite string transformations that HHH(DD)
    can apply to its input that derive the behavior of UTM(DD).


    But that is a different, and non-sense standard.

    There is only ONE transform that HHH does, and it is just wrong.

    You seem to forget that HHH is a specific decider while the criteria
    needs to be an objective criterion.


    Across ChatGPT, Claude AI, Gemini and Grok within
    fifty different conversations they all always
    agreed that the halting problem counter-example
    input is analogous to the Liar Paradox thus
    essentially the requirement of a correct answer
    to an incorrect question.

    I proved the HP input is the same as the Liar Paradox back in 2004

    function LoopIfYouSayItHalts (bool YouSayItHalts):
        if YouSayItHalts () then
            while true do {}
        else
            return false;

    Does this program Halt?

    (Your (YES or NO) answer is to be considered
    translated to Boolean as the function's input
    parameter)

    Please ONLY PROVIDE CORRECT ANSWERS!

    https://groups.google.com/g/sci.logic/c/Hs78nMN6QZE/m/ID2rxwo__yQJ
    When you yourself say YES you are wrong
    When you yourself say NO you are wrong

    Therefore the halting problem counter example input
    is a yes/no question lacking a correct yes/no answer.
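
    The pseudocode above can be rendered as runnable Python (a sketch; the
    zero-argument callback stands in for "your answer", which is an
    assumption about the intended reading):

    ```python
    def loop_if_you_say_it_halts(you_say_it_halts) -> bool:
        """you_say_it_halts: a zero-argument callback returning the
        answerer's yes/no verdict as a bool."""
        if you_say_it_halts():
            while True:   # an answer of "yes, it halts" triggers an infinite loop
                pass
        return False      # an answer of "no" makes it halt immediately

    # Answering False makes the call halt (contradicting "no");
    # answering True makes it loop (contradicting "yes").
    ```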

    The criterion is: does the mapping that this HHH computes match the required one, which is what UTM(DD) shows.

    The fact that HHH doesn't do that makes it wrong.

    The fact that we can make a similar input from any possible decider
    makes the problem uncomputable.

    The fact you refuse to accept this makes you stupid.


    There are finite string transformations that HHH(DD)
    can apply to its input that derive the behavior that
    the input to HHH(DD) specifies.

    No, there is only ONE transform that it DOES apply.

    But that does not specify the meaning of the string, as it was SUPPOSED
    TO represent the behavior of the program DD.

    Once you label your HHH as a Halt Decider, the semantics of its input are specified, and NOT based on what it actually does, but on what it was claiming to be.

    Now, part of your problem is you never actually formed the right input string, as you never set up your program correctly, just showing your stupidity and ignorance.


    No decider is ever accountable to report on any behavior
    other than the actual behavior that its actual finite
    string input actually specifies. When the halting problem
    requires more than that it requires too much.

    But the actual behavior that its actual finite string represents *IS* the behavior of the machine it describes, or you are just admitting you
    started with a lie that DD calling HHH(DD) is according to the proof program, as that *IS* the meaning its passed string must represent.

    All you are doing is admitting you are just a stupid liar.

    It seems you just don't understand the concept of "Requirements" and thus have major errors in your definition of things like "Truth"


    Your problem is you seem to not understand the requirement that a
    decider needs to CORRECTLY compute the function it is supposed to be
    computing, because you just don't understand the nature of truth, and
    think it can be just redefined.

    As a simile, your logic says a Persian cat can be entered into the
    Westminster Dog show and win best of breed, just by saying it is a dog.




    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Dec 31 14:14:33 2025
    From Newsgroup: comp.theory

    On 12/31/25 12:51 PM, olcott wrote:
    On 12/31/2025 11:11 AM, Richard Damon wrote:
    On 12/31/25 11:20 AM, olcott wrote:
    On 12/31/2025 6:28 AM, Richard Damon wrote:
    On 12/30/25 11:21 PM, olcott wrote:
    A Turing-machine decider is a Turing machine D that
    computes a total function D : Σ* → {Accept,Reject},
    where Σ* is the set of all finite strings over the
    input alphabet. That is:

    1. Totality: For every finite string input w ∈ Σ*,
    D halts and outputs either Accept or Reject.

    Is simplified to this barest essence across all models of computation
    All deciders essentially: Transform finite string
    inputs by finite string transformation rules into
    {Accept, Reject} values.

    Anything that cannot be derived from actual finite string
    inputs is not computable and outside the scope of computation.


    But since the Halting status of the machine that finite string describes
    IS derived by just running that machine, or giving
    it to the appropriate UTM, you are just showing that Halting is a
    valid question.

    It is also uncomputable, as has been proven.


    There are no finite string transformations that HHH(DD)
    can apply to its input that derive the behavior of UTM(DD).


    But that is a different, and non-sense standard.

    There is only ONE transform that HHH does, and it is just wrong.

    You seem to forget that HHH is a specific decider while the criteria
    needs to be an objective criterion.


    Across ChatGPT, Claude AI, Gemini and Grok within
    fifty different conversations they all always
    agreed that the halting problem counter-example
    input is analogous to the Liar Paradox thus
    essentially the requirement of a correct answer
    to an incorrect question.

    In other words, in your world, proven liars are more reliable than facts.


    I proved the HP input is the same as the Liar Paradox back in 2004

    function LoopIfYouSayItHalts (bool YouSayItHalts):
        if YouSayItHalts () then
            while true do {}
        else
            return false;

    Does this program Halt?

    But that isn't the proof program, so you are just showing your stupidity.




    (Your (YES or NO) answer is to be considered
    translated to Boolean as the function's input
    parameter)

    But that isn't the halting problem



    Please ONLY PROVIDE CORRECT ANSWERS!

    https://groups.google.com/g/sci.logic/c/Hs78nMN6QZE/m/ID2rxwo__yQJ
    When you yourself say YES you are wrong
    When you yourself say NO you are wrong

    But that isn't the halting problem.


    Therefore the halting problem counter example input
    is a yes/no question lacking a correct yes/no answer.

    In other words you are just proving that you don't understand the
    halting problem.

    Given a program and its input, determine if it will halt. So:

    LoopIfYouSayItHalts(false) -> Halts
    LoopIfYouSayItHalts(true) -> Non-Halting
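
    Damon's two cases can be checked directly. With the parameter fixed as a
    plain boolean (his reading of the pseudocode), each instance has a
    definite halting status:

    ```python
    def loop_if_you_say_it_halts(you_say_it_halts: bool) -> bool:
        if you_say_it_halts:
            while True:   # True -> non-halting branch
                pass
        return False      # False -> halts and returns False

    # loop_if_you_say_it_halts(False) halts; the True instance never does,
    # so each fixed input yields a determined yes/no answer.
    ```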


    So, all you are doing is proving you have been lying for 21 years
    because you failed to learn what you were talking about.



    The criterion is: does the mapping that this HHH computes match the
    required one, which is what UTM(DD) shows.

    The fact that HHH doesn't do that makes it wrong.

    The fact that we can make a similar input from any possible decider
    makes the problem uncomputable.

    The fact you refuse to accept this makes you stupid.


    There are finite string transformations that HHH(DD)
    can apply to its input that derive the behavior that
    the input to HHH(DD) specifies.

    No, there is only ONE transform that it DOES apply.

    But that does not specify the meaning of the string, as it was
    SUPPOSED TO represent the behavior of the program DD.

    Once you label your HHH as a Halt Decider, the semantics of its input
    are specified, and NOT based on what it actually does, but on what it
    was claiming to be.

    Now, part of your problem is you never actually formed the right input
    string, as you never set up your program correctly, just showing your
    stupidity and ignorance.


    No decider is ever accountable to report on any behavior
    other than the actual behavior that its actual finite
    string input actually specifies. When the halting problem
    requires more than that it requires too much.

    But the actual behavior that its actual finite string represents *IS*
    the behavior of the machine it describes, or you are just admitting
    you started with a lie that DD calling HHH(DD) is according to the
    proof program, as that *IS* the meaning its passed string must represent.

    All you are doing is admitting you are just a stupid liar.

    It seems you just don't understand the concept of "Requirements" and
    thus have major errors in your definition of things like "Truth"


    Your problem is you seem to not understand the requirement that a
    decider needs to CORRECTLY compute the function it is supposed to be
    computing, because you just don't understand the nature of truth,
    and think it can be just redefined.

    As a simile, your logic says a Persian cat can be entered into the
    Westminster Dog show and win best of breed, just by saying it is a dog.






    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Dec 31 14:12:06 2025
    From Newsgroup: comp.theory

    On 12/30/2025 10:21 PM, olcott wrote:
    A Turing-machine decider is a Turing machine D that
    computes a total function D : Σ* → {Accept,Reject},
    where Σ* is the set of all finite strings over the
    input alphabet. That is:

    1. Totality: For every finite string input w ∈ Σ*,
    D halts and outputs either Accept or Reject.

    Is simplified to this barest essence across all models of computation
    All deciders essentially: Transform finite string
    inputs by finite string transformation rules into
    {Accept, Reject} values.

    Anything that cannot be derived from actual finite string
    inputs is not computable and outside the scope of computation.


    Can Carol correctly answer "no" to this (yes/no) question?
    E C R Hehner. Objective and Subjective Specifications
    WST Workshop on Termination, Oxford. 2018 July 18.
    See https://www.cs.toronto.edu/~hehner/OSS.pdf

    People can pretend that Bob is being asked
    Carol's question and on the basis of this
    false assumption say that Carol's question
    has a correct answer.
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Dec 31 15:30:15 2025
    From Newsgroup: comp.theory

    On 12/31/25 3:12 PM, olcott wrote:
    On 12/30/2025 10:21 PM, olcott wrote:
    A Turing-machine decider is a Turing machine D that
    computes a total function D : Σ* → {Accept,Reject},
    where Σ* is the set of all finite strings over the
    input alphabet. That is:

    1. Totality: For every finite string input w ∈ Σ*,
    D halts and outputs either Accept or Reject.

    Is simplified to this barest essence across all models of computation
    All deciders essentially: Transform finite string
    inputs by finite string transformation rules into
    {Accept, Reject} values.

    Anything that cannot be derived from actual finite string
    inputs is not computable and outside the scope of computation.


    Can Carol correctly answer "no" to this (yes/no) question?
    E C R Hehner. Objective and Subjective Specifications
    WST Workshop on Termination, Oxford. 2018 July 18.
    See https://www.cs.toronto.edu/~hehner/OSS.pdf

    Which isn't a valid question, as the future behavior of a willful being
    doesn't have a truth value YET.

    On the other hand, the future (or past) behavior of a deterministic
    machine is fixed, so suitable for a question.


    People can pretend that Bob is being asked
    Carol's question and on the basis of this
    false assumption say that Carol's question
    has a correct answer.



    Since you don't understand the difference between willful beings and deterministic machines, your argument just falls apart.

    Maybe you are not a willful being, but gave up that perk in some deal
    with a wicked being.

    And maybe your confusion is why you think AI LLMs are reliable sources,
    since you can't understand that they don't actually "think", and their computation algorithms are not based on giving a factual answer.

    All you are doing is proving how stupid you are.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Dec 31 14:54:42 2025
    From Newsgroup: comp.theory

    On 12/31/2025 2:30 PM, Richard Damon wrote:
    On 12/31/25 3:12 PM, olcott wrote:
    On 12/30/2025 10:21 PM, olcott wrote:
    A Turing-machine decider is a Turing machine D that
    computes a total function D : Σ* → {Accept,Reject},
    where Σ* is the set of all finite strings over the
    input alphabet. That is:

    1. Totality: For every finite string input w ∈ Σ*,
    D halts and outputs either Accept or Reject.

    Is simplified to this barest essence across all models of computation
    All deciders essentially: Transform finite string
    inputs by finite string transformation rules into
    {Accept, Reject} values.

    Anything that cannot be derived from actual finite string
    inputs is not computable and outside the scope of computation.


    Can Carol correctly answer "no" to this (yes/no) question?
    E C R Hehner. Objective and Subjective Specifications
    WST Workshop on Termination, Oxford. 2018 July 18.
    See https://www.cs.toronto.edu/~hehner/OSS.pdf

    Which isn't a valid question, as the future behavior of a willful being
    doesn't have a truth value YET.

    On the other hand, the future (or past) behavior of a deterministic
    machine is fixed, so suitable for a question.


    People can pretend that Bob is being asked
    Carol's question and on the basis of this
    false assumption say that Carol's question
    has a correct answer.



    Since you don't understand the difference between willful beings and deterministic machines, your argument just falls apart.


    They are semantically equivalent.

    Maybe you are not a willful being, but gave up that perk in some deal
    with a wicked being.

    And maybe your confusion is why you think AI LLMs are reliable sources, since you can't understand that they don't actually "think", and their computation algorithms are not based on giving a factual answer.


    Correct semantic entailment derives necessary consequences.

    All you are doing is proving how stupid you are.
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Dec 31 16:19:38 2025
    From Newsgroup: comp.theory

    On 12/31/25 3:54 PM, olcott wrote:
    On 12/31/2025 2:30 PM, Richard Damon wrote:
    On 12/31/25 3:12 PM, olcott wrote:
    On 12/30/2025 10:21 PM, olcott wrote:
    A Turing-machine decider is a Turing machine D that
    computes a total function D : Σ* → {Accept,Reject},
    where Σ* is the set of all finite strings over the
    input alphabet. That is:

    1. Totality: For every finite string input w ∈ Σ*,
    D halts and outputs either Accept or Reject.

    Is simplified to this barest essence across all models of computation
    All deciders essentially: Transform finite string
    inputs by finite string transformation rules into
    {Accept, Reject} values.

    Anything that cannot be derived from actual finite string
    inputs is not computable and outside the scope of computation.


    Can Carol correctly answer "no" to this (yes/no) question?
    E C R Hehner. Objective and Subjective Specifications
    WST Workshop on Termination, Oxford. 2018 July 18.
    See https://www.cs.toronto.edu/~hehner/OSS.pdf

    Which isn't a valid question, as the future behavior of a willful being
    doesn't have a truth value YET.

    On the other hand, the future (or past) behavior of a deterministic
    machine is fixed, so suitable for a question.


    People can pretend that Bob is being asked
    Carol's question and on the basis of this
    false assumption say that Carol's question
    has a correct answer.



    Since you don't understand the difference between willful beings and
    deterministic machines, your argument just falls apart.


    They are semantically equivalent.

    Nope.

    So you think that a Willful Being is the semantic equivalent of a Deterministic Computation????

    No wonder you are so messed up.

    You are just showing how much of an idiot you are.

    Maybe in your case, as I have opined, you are not willful, because you
    have killed your ability to think and reason.


    Maybe you are not a willful being, but gave up that perk in some deal
    with a wicked being.

    And maybe your confusion is why you think AI LLMs are reliable
    sources, since you can't understand that they don't actually "think",
    and their computation algorithms are not based on giving a factual
    answer.


    Correct semantic entailment derives necessary consequences.


    Yes, but you need to start with the correct meaning of the words.

    All you are doing is proving how stupid you are.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Dec 31 15:55:50 2025
    From Newsgroup: comp.theory

    On 12/31/2025 3:19 PM, Richard Damon wrote:
    On 12/31/25 3:54 PM, olcott wrote:
    On 12/31/2025 2:30 PM, Richard Damon wrote:
    On 12/31/25 3:12 PM, olcott wrote:
    On 12/30/2025 10:21 PM, olcott wrote:
    A Turing-machine decider is a Turing machine D that
    computes a total function D : Σ* → {Accept,Reject},
    where Σ* is the set of all finite strings over the
    input alphabet. That is:

    1. Totality: For every finite string input w ∈ Σ*,
    D halts and outputs either Accept or Reject.

    Is simplified to this barest essence across all models of computation
    All deciders essentially: Transform finite string
    inputs by finite string transformation rules into
    {Accept, Reject} values.

    Anything that cannot be derived from actual finite string
    inputs is not computable and outside the scope of computation.


    Can Carol correctly answer "no" to this (yes/no) question?
    E C R Hehner. Objective and Subjective Specifications
    WST Workshop on Termination, Oxford. 2018 July 18.
    See https://www.cs.toronto.edu/~hehner/OSS.pdf

    Which isn't a valid question, as the future behavior of a willful being
    doesn't have a truth value YET.

    On the other hand, the future (or past) behavior of a deterministic
    machine is fixed, so suitable for a question.


    People can pretend that Bob is being asked
    Carol's question and on the basis of this
    false assumption say that Carol's question
    has a correct answer.



    Since you don't understand the difference between willful beings and
    deterministic machines, your argument just falls apart.


    They are semantically equivalent.

    Nope.

    So you think that a Willful Being is the semantic equivalent of a Deterministic Computation????


    The question posed to Carol is semantically
    equivalent to the question posed to H and
    you know this is true yet don't give a rat's
    ass for truth.


    No wonder you are so messed up.

    You are just showing how much of an idiot you are.

    Maybe in your case, as I have opined, you are not willful, because you
    have killed your ability to think and reason.


    Maybe you are not a willful being, but gave up that perk in some deal
    with a wicked being.

    And maybe your confusion is why you think AI LLMs are reliable
    sources, since you can't understand that they don't actually "think",
    and their computation algorithms are not based on giving a factual
    answer.


    Correct semantic entailment derives necessary consequences.


    Yes, but you need to start with the correct meaning of the words.

    All you are doing is proving how stupid you are.



    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Dec 31 16:58:20 2025
    From Newsgroup: comp.theory

    On 12/31/25 4:55 PM, olcott wrote:
    On 12/31/2025 3:19 PM, Richard Damon wrote:
    On 12/31/25 3:54 PM, olcott wrote:
    On 12/31/2025 2:30 PM, Richard Damon wrote:
    On 12/31/25 3:12 PM, olcott wrote:
    On 12/30/2025 10:21 PM, olcott wrote:
    A Turing-machine decider is a Turing machine D that
    computes a total function D : Σ* → {Accept,Reject},
    where Σ* is the set of all finite strings over the
    input alphabet. That is:

    1. Totality: For every finite string input w ∈ Σ*,
    D halts and outputs either Accept or Reject.

    Is simplified to this barest essence across all models of computation >>>>>> All deciders essentially: Transform finite string
    inputs by finite string transformation rules into
    {Accept, Reject} values.

    Anything that cannot be derived from actual finite string
    inputs is not computable and outside the scope of computation.


    Can Carol correctly answer "no" to this (yes/no) question?
    E C R Hehner. Objective and Subjective Specifications
    WST Workshop on Termination, Oxford. 2018 July 18.
    See https://www.cs.toronto.edu/~hehner/OSS.pdf

    Which isn't a valid question, as future behavior of a willful being
    doesn't have a truth value YET.

    On the other hand, the future (or past) behavior of a deterministic
    machine is fixed, so suitable for a question.


    People can pretend that Bob is being asked
    Carol's question and on the basis of this
    false assumption say that Carol's question
    has a correct answer.



    Since you don't understand the difference between willful beings and
    deterministic machines, your argument just falls apart.


    They are semantically equivalent.

    Nope.

    So you think that a Willful Being is the semantic equivalent of a
    Deterministic Computation????


    The question posed to Carol is semantically
    equivalent to the question posed to H and
    you know this is true yet don't give a rat's
    ass for truth.


    No it isn't, as the sort of being it is being asked about matters.

    You just are proving you don't know what you are talking about,

    I guess you have lost your understanding of what free will means.

    My guess is your problem is you have fried your "CPU" and can no longer correctly handle logic.


    No wonder you are so messed up.

    You are just showing how much of an idiot you are.

    Maybe in your case, as I have opined, you are not willful, because you
    have killed your ability to think and reason.


    Maybe you are not a willful being, but gave up that perk in some
    deal with a wicked being.

    And maybe your confusion is why you think AI LLMs are reliable
    sources, since you can't understand that they don't actually
    "think", and their computation algorithms are not based on giving a
    factual answer.


    Correct semantic entailment derives necessary consequences.


    Yes, but you need to start with the correct meaning of the words.

    All you are doing is proving how stupid you are.






    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Dec 31 16:04:20 2025
    From Newsgroup: comp.theory

    On 12/31/2025 3:58 PM, Richard Damon wrote:
    On 12/31/25 4:55 PM, olcott wrote:
    On 12/31/2025 3:19 PM, Richard Damon wrote:
    On 12/31/25 3:54 PM, olcott wrote:
    On 12/31/2025 2:30 PM, Richard Damon wrote:
    On 12/31/25 3:12 PM, olcott wrote:
    On 12/30/2025 10:21 PM, olcott wrote:
    A Turing-machine decider is a Turing machine D that
    computes a total function D : Σ* → {Accept, Reject},
    where Σ* is the set of all finite strings over the
    input alphabet. That is:

    1. Totality: For every finite string input w ∈ Σ*,
    D halts and outputs either Accept or Reject.

    Is simplified to this barest essence across all models of
    computation
    All deciders essentially: Transform finite string
    inputs by finite string transformation rules into
    {Accept, Reject} values.

    Anything that cannot be derived from actual finite string
    inputs is not computable and outside the scope of computation.


    Can Carol correctly answer "no" to this (yes/no) question?
    E C R Hehner. Objective and Subjective Specifications
    WST Workshop on Termination, Oxford. 2018 July 18.
    See https://www.cs.toronto.edu/~hehner/OSS.pdf

    Which isn't a valid question, as future behavior of a willful being
    doesn't have a truth value YET.

    On the other hand, the future (or past) behavior of a deterministic
    machine is fixed, so suitable for a question.


    People can pretend that Bob is being asked
    Carol's question and on the basis of this
    false assumption say that Carol's question
    has a correct answer.



    Since you don't understand the difference between willful beings and
    deterministic machines, your argument just falls apart.


    They are semantically equivalent.

    Nope.

    So you think that a Willful Being is the semantic equivalent of a
    Deterministic Computation????


    The question posed to Carol is semantically
    equivalent to the question posed to H and
    you know this is true yet don't give a rat's
    ass for truth.


    No it isn't, as the sort of being it is being asked about matters.

    You just are proving you don't know what you are talking about,

    I guess you have lost your understanding of what free will means.

    My guess is your problem is you have fried your "CPU" and can no longer correctly handle logic.

    Even omnipotence cannot correctly resolve
    "This sentence is not true" into True or False.



    No wonder you are so messed up.

    You are just showing how much of an idiot you are.

    Maybe in your case, as I have opined, you are not willful, because
    you have killed your ability to think and reason.


    Maybe you are not a willful being, but gave up that perk in some
    deal with a wicked being.

    And maybe your confusion is why you think AI LLMs are reliable
    sources, since you can't understand that they don't actually
    "think", and their computation algorithms are not based on giving a >>>>> factual answer.


    Correct semantic entailment derives necessary consequences.


    Yes, but you need to start with the correct meaning of the words.

    All you are doing is proving how stupid you are.






    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Dec 31 17:11:07 2025
    From Newsgroup: comp.theory

    On 12/31/25 5:04 PM, olcott wrote:
    On 12/31/2025 3:58 PM, Richard Damon wrote:
    On 12/31/25 4:55 PM, olcott wrote:
    On 12/31/2025 3:19 PM, Richard Damon wrote:
    On 12/31/25 3:54 PM, olcott wrote:
    On 12/31/2025 2:30 PM, Richard Damon wrote:
    On 12/31/25 3:12 PM, olcott wrote:
    On 12/30/2025 10:21 PM, olcott wrote:
    A Turing-machine decider is a Turing machine D that
    computes a total function D : Σ* → {Accept, Reject},
    where Σ* is the set of all finite strings over the
    input alphabet. That is:

    1. Totality: For every finite string input w ∈ Σ*,
    D halts and outputs either Accept or Reject.

    Is simplified to this barest essence across all models of
    computation
    All deciders essentially: Transform finite string
    inputs by finite string transformation rules into
    {Accept, Reject} values.

    Anything that cannot be derived from actual finite string
    inputs is not computable and outside the scope of computation.

    Can Carol correctly answer "no" to this (yes/no) question?
    E C R Hehner. Objective and Subjective Specifications
    WST Workshop on Termination, Oxford. 2018 July 18.
    See https://www.cs.toronto.edu/~hehner/OSS.pdf

    Which isn't a valid question, as future behavior of a willful
    being doesn't have a truth value YET.

    On the other hand, the future (or past) behavior of a deterministic
    machine is fixed, so suitable for a question.


    People can pretend that Bob is being asked
    Carol's question and on the basis of this
    false assumption say that Carol's question
    has a correct answer.



    Since you don't understand the difference between willful beings
    and deterministic machines, your argument just falls apart.


    They are semantically equivalent.

    Nope.

    So you think that a Willful Being is the semantic equivalent of a
    Deterministic Computation????


    The question posed to Carol is semantically
    equivalent to the question posed to H and
    you know this is true yet don't give a rat's
    ass for truth.


    No it isn't, as the sort of being it is being asked about matters.

    You just are proving you don't know what you are talking about,

    I guess you have lost your understanding of what free will means.

    My guess is your problem is you have fried your "CPU" and can no
    longer correctly handle logic.

    Even omnipotence cannot correctly resolve
    "This sentence is not true" into True or False.

    But no one is trying to do that but you.

    Your problem is you have fried your processing unit and lost your ability
    to think.

    That is the only explanation for you to keep on just repeating the
    same errors, that you are just unable to learn because you can't think anymore.




    No wonder you are so messed up.

    You are just showing how much of an idiot you are.

    Maybe in your case, as I have opined, you are not willful, because
    you have killed your ability to think and reason.


    Maybe you are not a willful being, but gave up that perk in some
    deal with a wicked being.

    And maybe your confusion is why you think AI LLMs are reliable
    sources, since you can't understand that they don't actually
    "think", and their computation algorithms are not based on giving >>>>>> a factual answer.


    Correct semantic entailment derives necessary consequences.


    Yes, but you need to start with the correct meaning of the words.

    All you are doing is proving how stupid you are.









    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Dec 31 16:45:03 2025
    From Newsgroup: comp.theory

    On 12/31/2025 4:11 PM, Richard Damon wrote:
    On 12/31/25 5:04 PM, olcott wrote:
    On 12/31/2025 3:58 PM, Richard Damon wrote:
    On 12/31/25 4:55 PM, olcott wrote:
    On 12/31/2025 3:19 PM, Richard Damon wrote:
    On 12/31/25 3:54 PM, olcott wrote:
    On 12/31/2025 2:30 PM, Richard Damon wrote:
    On 12/31/25 3:12 PM, olcott wrote:
    On 12/30/2025 10:21 PM, olcott wrote:
    A Turing-machine decider is a Turing machine D that
    computes a total function D : Σ* → {Accept, Reject},
    where Σ* is the set of all finite strings over the
    input alphabet. That is:

    1. Totality: For every finite string input w ∈ Σ*,
    D halts and outputs either Accept or Reject.

    Is simplified to this barest essence across all models of
    computation
    All deciders essentially: Transform finite string
    inputs by finite string transformation rules into
    {Accept, Reject} values.

    Anything that cannot be derived from actual finite string
    inputs is not computable and outside the scope of computation.

    Can Carol correctly answer "no" to this (yes/no) question?
    E C R Hehner. Objective and Subjective Specifications
    WST Workshop on Termination, Oxford. 2018 July 18.
    See https://www.cs.toronto.edu/~hehner/OSS.pdf

    Which isn't a valid question, as future behavior of a willful
    being doesn't have a truth value YET.

    On the other hand, the future (or past) behavior of a
    deterministic machine is fixed, so suitable for a question.


    People can pretend that Bob is being asked
    Carol's question and on the basis of this
    false assumption say that Carol's question
    has a correct answer.



    Since you don't understand the difference between willful beings
    and deterministic machines, your argument just falls apart.


    They are semantically equivalent.

    Nope.

    So you think that a Willful Being is the semantic equivalent of a
    Deterministic Computation????


    The question posed to Carol is semantically
    equivalent to the question posed to H and
    you know this is true yet don't give a rat's
    ass for truth.


    No it isn't, as the sort of being it is being asked about matters.

    You just are proving you don't know what you are talking about,

    I guess you have lost your understanding of what free will means.

    My guess is your problem is you have fried your "CPU" and can no
    longer correctly handle logic.

    Even omnipotence cannot correctly resolve
    "This sentence is not true" into True or False.

    But no one is trying to do that but you.

    Your problem is you have fried your processing unit and lost your ability
    to think.

    That is the only explanation for you to keep on just repeating the
    same errors, that you are just unable to learn because you can't think anymore.


    No one has ever provided any reasoning that I am incorrect.
    Every single rebuttal in 28 years has always been a form
    of we really really don't believe you therefore you are wrong.




    No wonder you are so messed up.

    You are just showing how much of an idiot you are.

    Maybe in your case, as I have opined, you are not willful, because
    you have killed your ability to think and reason.


    Maybe you are not a willful being, but gave up that perk in some
    deal with a wicked being.

    And maybe your confusion is why you think AI LLMs are reliable
    sources, since you can't understand that they don't actually
    "think", and their computation algorithms are not based on giving >>>>>>> a factual answer.


    Correct semantic entailment derives necessary consequences.


    Yes, but you need to start with the correct meaning of the words.

    All you are doing is proving how stupid you are.









    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Wed Dec 31 17:51:30 2025
    From Newsgroup: comp.theory

    On 12/31/25 5:45 PM, olcott wrote:
    On 12/31/2025 4:11 PM, Richard Damon wrote:
    On 12/31/25 5:04 PM, olcott wrote:
    On 12/31/2025 3:58 PM, Richard Damon wrote:
    On 12/31/25 4:55 PM, olcott wrote:
    On 12/31/2025 3:19 PM, Richard Damon wrote:
    On 12/31/25 3:54 PM, olcott wrote:
    On 12/31/2025 2:30 PM, Richard Damon wrote:
    On 12/31/25 3:12 PM, olcott wrote:
    On 12/30/2025 10:21 PM, olcott wrote:
    A Turing-machine decider is a Turing machine D that
    computes a total function D : Σ* → {Accept, Reject},
    where Σ* is the set of all finite strings over the
    input alphabet. That is:

    1. Totality: For every finite string input w ∈ Σ*,
    D halts and outputs either Accept or Reject.

    Is simplified to this barest essence across all models of
    computation
    All deciders essentially: Transform finite string
    inputs by finite string transformation rules into
    {Accept, Reject} values.

    Anything that cannot be derived from actual finite string
    inputs is not computable and outside the scope of computation.

    Can Carol correctly answer "no" to this (yes/no) question?
    E C R Hehner. Objective and Subjective Specifications
    WST Workshop on Termination, Oxford. 2018 July 18.
    See https://www.cs.toronto.edu/~hehner/OSS.pdf

    Which isn't a valid question, as future behavior of a willful
    being doesn't have a truth value YET.

    On the other hand, the future (or past) behavior of a
    deterministic machine is fixed, so suitable for a question.


    People can pretend that Bob is being asked
    Carol's question and on the basis of this
    false assumption say that Carol's question
    has a correct answer.



    Since you don't understand the difference between willful beings
    and deterministic machines, your argument just falls apart.


    They are semantically equivalent.

    Nope.

    So you think that a Willful Being is the semantic equivalent of a
    Deterministic Computation????


    The question posed to Carol is semantically
    equivalent to the question posed to H and
    you know this is true yet don't give a rat's
    ass for truth.


    No it isn't, as the sort of being it is being asked about matters.

    You just are proving you don't know what you are talking about,

    I guess you have lost your understanding of what free will means.

    My guess is your problem is you have fried your "CPU" and can no
    longer correctly handle logic.

    Even omnipotence cannot correctly resolve
    "This sentence is not true" into True or False.

    But no one is trying to do that but you.

    Your problem is you have fried your processing unit and lost your
    ability to think.

    That is the only explanation for you to keep on just repeating the
    same errors, that you are just unable to learn because you can't think
    anymore.


    No one has ever provided any reasoning that I am incorrect.
    Every single rebuttal in 28 years has always been a form
    of we really really don't believe you therefore you are wrong.

    Sure we have.

    The fact that you haven't ever even tried to point out an error in the
    errors pointed out, but just repeat your error shows you don't
    understand what you are talking about.

    The rebuttals of your work haven't been simple "belief", but point out
    the factual error you make.

    Your reply is just that you don't believe the facts of the system, but
    can't point out why.

    All you have done is prove you are nothing but a pathological liar who doesn't understand what he is talking about.





    No wonder you are so messed up.

    You are just showing how much of an idiot you are.

    Maybe in your case, as I have opined, you are not willful, because
    you have killed your ability to think and reason.


    Maybe you are not a willful being, but gave up that perk in some
    deal with a wicked being.

    And maybe your confusion is why you think AI LLMs are reliable
    sources, since you can't understand that they don't actually
    "think", and their computation algorithms are not based on
    giving a factual answer.


    Correct semantic entailment derives necessary consequences.


    Yes, but you need to start with the correct meaning of the words.

    All you are doing is proving how stupid you are.












    --- Synchronet 3.21a-Linux NewsLink 1.2