• By what process can we trust the analysis of LLM systems

    From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Fri Dec 26 21:19:46 2025
    From Newsgroup: comp.ai.philosophy

    Whenever it can be verified that correct semantic
    entailment is applied to the semantic meaning of
    expressions of language, then whatsoever conclusion
    is derived is a necessary consequence of those
    expressions of language.
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation

  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Fri Dec 26 22:39:26 2025
    From Newsgroup: comp.ai.philosophy

    On 12/26/25 10:19 PM, olcott wrote:
    Whenever it can be verified that correct semantic
    entailment is applied to the semantic meaning of
    expressions of language, then whatsoever conclusion
    is derived is a necessary consequence of those
    expressions of language.


    You just don't know what that means, because to you, words don't actually
    need to mean what you use them as.

    All you are doing is using gobbledygook words to try to hide your lies.

    You don't even know what a program is, or how its input is defined.
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Fri Dec 26 21:52:46 2025
    From Newsgroup: comp.ai.philosophy

    On 12/26/2025 9:39 PM, Richard Damon wrote:
    On 12/26/25 10:19 PM, olcott wrote:
    Whenever it can be verified that correct semantic
    entailment is applied to the semantic meaning of
    expressions of language, then whatsoever conclusion
    is derived is a necessary consequence of those
    expressions of language.


    You just don't know what that means, because to you, words don't actually need to mean what you use them as.


    *You just don't know what that means* or you could show my mistake.

    All you are doing is using gobbledygook words to try to hide your lies.

    You don't even know what a program is, or how its input is defined.

    The gist of
    *correct semantic entailment*
    is shown by the syllogism that directly encodes
    its semantics as categorical propositions.

    No separate model theory nonsense where true
    and provable can diverge.

    https://en.wikipedia.org/wiki/Syllogism#Basic_structure
    https://en.wikipedia.org/wiki/Categorical_proposition
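
    As one way to picture what "directly encoding a syllogism's semantics
    as categorical propositions" could look like, here is a minimal sketch
    in Python. It is not from this thread; modelling categories as finite
    sets, and the example names, are assumptions made only for illustration.

    # Minimal sketch: a Barbara-form syllogism ("All M are P, all S are M,
    # therefore all S are P"), with each categorical proposition read as
    # subset inclusion over assumed finite categories.

    def all_are(subject: set, predicate: set) -> bool:
        """Categorical proposition 'All S are P', read as subset inclusion."""
        return subject <= predicate

    # Hypothetical categories, used only to make the sketch runnable.
    men = {"socrates", "plato"}
    mortals = {"socrates", "plato", "fido"}

    # Premises: All men are mortal; Socrates is a man (a singleton category).
    premise_1 = all_are(men, mortals)
    premise_2 = all_are({"socrates"}, men)

    # Conclusion: Socrates is mortal, checked directly against the encoding.
    conclusion = all_are({"socrates"}, mortals)

    assert premise_1 and premise_2 and conclusion
    print("Premises hold and the conclusion is entailed:", conclusion)
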
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Fri Dec 26 23:30:34 2025
    From Newsgroup: comp.ai.philosophy

    On 12/26/25 10:52 PM, olcott wrote:
    On 12/26/2025 9:39 PM, Richard Damon wrote:
    On 12/26/25 10:19 PM, olcott wrote:
    Whenever it can be verified that correct semantic
    entailment is applied to the semantic meaning of
    expressions of language, then whatsoever conclusion
    is derived is a necessary consequence of those
    expressions of language.


    You just don't know what that means, because to you, words don't
    actually need to mean what you use them as.


    *You just don't know what that means* or you could show my mistake.



    All you are doing is using gobbledygook words to try to hide your lies.

    You don't even know what a program is, or how its input is defined.

    The gist of
    *correct semantic entailment*
    is shown by the syllogism that directly encodes
    its semantics as categorical propositions.

    No separate model theory nonsense where true
    and provable can diverge.

    https://en.wikipedia.org/wiki/Syllogism#Basic_structure
    https://en.wikipedia.org/wiki/Categorical_proposition


    Your problem is that "Correct Semantic Entailment" first requires you to
    have the RIGHT DEFINITIONS.

    That means for terms-of-art, you know the term-of-art meaning.

    That is what the "Semantic" part of the term refers to.

    Since you have shown you don't, it means you don't know how to do this.

    Sorry, until you learn what Truth means (and what a program is), you are
    just locked out of your argument due to your stupidity.

    One of the problems you run into is that in a "Formal Theory", the
    Semantics of EVERYTHING in the theory are formally defined by the
    system, and any meaning from outside the system is just meaningless in
    the system.

    Thus, your attempts to bring in Natural Language meaning are just unsound logic.
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Fri Dec 26 22:35:36 2025
    From Newsgroup: comp.ai.philosophy

    On 12/26/2025 10:30 PM, Richard Damon wrote:
    On 12/26/25 10:52 PM, olcott wrote:
    On 12/26/2025 9:39 PM, Richard Damon wrote:
    On 12/26/25 10:19 PM, olcott wrote:
    Whenever it can be verified that correct semantic
    entailment is applied to the semantic meaning of
    expressions of language, then whatsoever conclusion
    is derived is a necessary consequence of those
    expressions of language.


    You just don't know what that means, because to you, words don't
    actually need to mean what you use them as.


    *You just don't know what that means* or you could show my mistake.



    All you are doing is using gobbledygook words to try to hide your lies.

    You don't even know what a program is, or how its input is defined.

    The gist of
    *correct semantic entailment*
    is shown by the syllogism that directly encodes
    its semantics as categorical propositions.

    No separate model theory nonsense where true
    and provable can diverge.

    https://en.wikipedia.org/wiki/Syllogism#Basic_structure
    https://en.wikipedia.org/wiki/Categorical_proposition


    Your problem is that "Correct Semantic Entailment" first requires you to have the RIGHT DEFINITIONS.

    That means for terms-of-art, you know the term-of-art meaning.


    All deciders essentially transform finite string
    inputs, by finite string transformation rules, into
    {Accept, Reject} values.
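
    Read that way, a decider can be pictured as a total function from finite
    strings to {Accept, Reject}. The sketch below is an assumption made only
    for illustration (a toy decider for even-length strings), not anything
    defined in this thread.

    from enum import Enum

    class Verdict(Enum):
        ACCEPT = "Accept"
        REJECT = "Reject"

    def even_length_decider(input_string: str) -> Verdict:
        """Applies one fixed transformation rule to every finite string
        input and always halts with exactly one of {Accept, Reject}."""
        return Verdict.ACCEPT if len(input_string) % 2 == 0 else Verdict.REJECT

    print(even_length_decider("ab"))   # Verdict.ACCEPT
    print(even_length_decider("abc"))  # Verdict.REJECT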

    That is what the "Semantic" part of the term refers to.

    Since you have shown you don't, it means you don't know how to do this.

    Sorry, until you learn what Truth means (and what a program is), you are just locked out of your argument due to your stupidity.


    Four LLM systems all agree that this breaks undecidability:
    "true on the basis of meaning expressed in language"

    One of the problems you run into is that in a "Formal Theory", the
    Semantics of EVERYTHING in the theory are formally defined by the
    system, and any meaning from outside the system is just meaningless in
    the system.

    Thus, your attempts to bring in Natural Language meaning are just unsound logic.
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Fri Dec 26 23:43:20 2025
    From Newsgroup: comp.ai.philosophy

    On 12/26/25 11:35 PM, olcott wrote:
    On 12/26/2025 10:30 PM, Richard Damon wrote:
    On 12/26/25 10:52 PM, olcott wrote:
    On 12/26/2025 9:39 PM, Richard Damon wrote:
    On 12/26/25 10:19 PM, olcott wrote:
    Whenever it can be verified that correct semantic
    entailment is applied to the semantic meaning of
    expressions of language, then whatsoever conclusion
    is derived is a necessary consequence of those
    expressions of language.


    You just don't know what that means, because to you, words don't
    actually need to mean what you use them as.


    *You just don't know what that means* or you could show my mistake.



    All you are doing is using gobbledygook words to try to hide your lies.

    You don't even know what a program is, or how its input is defined.

    The gist of
    *correct semantic entailment*
    is shown by the syllogism that directly encodes
    its semantics as categorical propositions.

    No separate model theory nonsense where true
    and provable can diverge.

    https://en.wikipedia.org/wiki/Syllogism#Basic_structure
    https://en.wikipedia.org/wiki/Categorical_proposition


    Your problem is that "Correct Semantic Entailment" first requires you
    to have the RIGHT DEFINITIONS.

    That means for terms-of-art, you know the term-of-art meaning.


    All deciders essentially transform finite string
    inputs, by finite string transformation rules, into
    {Accept, Reject} values.

    Right.

    And that transform needs to match the function it is supposed to be computing for the decider to be correct.
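
    As a minimal sketch of that correctness condition (hypothetical names,
    assumed only for illustration, not anything defined in this thread):
    a decider's string-to-{Accept, Reject} transform is checked against the
    function it is supposed to compute on a finite set of sample inputs.

    def specified_function(s: str) -> bool:
        """The function the decider is required to compute: membership in
        the language of strings that contain the letter 'a'."""
        return "a" in s

    def candidate_decider(s: str) -> str:
        """A candidate transform from strings to 'Accept'/'Reject'
        (deliberately wrong, so the mismatch below is detectable)."""
        return "Accept" if s.startswith("a") else "Reject"

    def agrees_on(samples) -> bool:
        """True only if the decider matches the specification on every sample."""
        return all((candidate_decider(s) == "Accept") == specified_function(s)
                   for s in samples)

    print(agrees_on(["abc", "xyz", "ba"]))  # False: "ba" exposes the mismatch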

    But then "requirement", "correct", "program", "truth", and "proof" are
    all words whose meaning you have shown yourself incapable of learning.

    You seem to have a blind spot for them, because to see them would break
    YOUR programming that you brainwashed yourself with.

    Or maybe sold that ability for a bit of time with kiddie porn to get
    your kicks.


    That is what the "Semantic" part of the term refers to.

    Since you have shown you don't, it means you don't know how to do this.

    Sorry, until you learn what Truth means (and what a program is), you
    are just locked out of your argument due to your stupidity.


    Four LLM systems all agree that this breaks undecidability:
    "true on the basis of meaning expressed in language"

    In other words, you trust liars above facts.

    Sorry, you are just proving your stupidity.


    One of the problems you run into is that in a "Formal Theory", the
    Semantics of EVERYTHING in the theory are formally defined by the
    system, and any meaning from outside the system is just meaningless in
    the system.

    Thus, your attempts to bring in Natural Language meaning are just
    unsound logic.



  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Sat Dec 27 04:50:37 2025
    From Newsgroup: comp.ai.philosophy

    On 27/12/2025 03:19, olcott wrote:
    Whenever it can be verified that correct semantic
    entailment is applied to the semantic meaning of
    expressions of language, then whatsoever conclusion
    is derived is a necessary consequence of those
    expressions of language.

    "the" applied to a continuum. How do you trust a system that does such a verification? It is itself so closely related to LLMs.
    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Fri Dec 26 22:59:37 2025
    From Newsgroup: comp.ai.philosophy

    On 12/26/2025 10:50 PM, Tristan Wibberley wrote:
    On 27/12/2025 03:19, olcott wrote:
    Whenever it can be verified that correct semantic
    entailment is applied to the semantic meaning of
    expressions of language, then whatsoever conclusion
    is derived is a necessary consequence of those
    expressions of language.

    "the" applied to a continuum. How do you trust a system that does such a verification? It is itself so closely related to LLMs.


    It is not a matter of trusting the system that does
    such a verification. You yourself verify that
    the semantic entailment is correct.

    That it can show every tiny step and paraphrase
    its understanding of these steps shows that it
    has the actual equivalent of human understanding.
    --
    Copyright 2025 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.ai.philosophy on Sat Dec 27 17:24:13 2025
    From Newsgroup: comp.ai.philosophy

    On 27/12/2025 04:59, olcott wrote:
    On 12/26/2025 10:50 PM, Tristan Wibberley wrote:
    On 27/12/2025 03:19, olcott wrote:
    Whenever it can be verified that correct semantic
    entailment is applied to the semantic meaning of
    expressions of language, then whatsoever conclusion
    is derived is a necessary consequence of those
    expressions of language.

    "the" applied to a continuum. How do you trust a system that does such a
    verification? It is itself so closely related to LLMs.


    It is not a matter of trusting the system that does
    such a verification. You yourself verify that
    the semantic entailment is correct.

    That it can show every tiny step and paraphrase
    its understanding of these steps shows that it
    has the actual equivalent of human understanding.


    No. It shows that it has statistics on a population of utterances.

    A human student with understanding infers new utterances not covered by
    the measurements. The student relies on mental synergy with the professor
    (using doctrine as a reference, and the professor's capacity to understand,
    to communicate about variations). The student gambles with their wealth,
    health, and life using the knowledge and thrives (doesn't fail) on its
    topical effect, but not its market or political effect except when
    they're the topic.

    If you give a human student so many utterances that they can just pick
    out new paths that are likely to be accepted by the professor, you say
    they're a Chinese room, not an understander. You rely on the human
    inability to do that well to discover that they're doing that instead of
    using a model of a system, of a professor, of a language and its nuances
    as they pertain to the professor's possible own internal models of the
    system.

    A more difficult corner is when the topic is market effects and the
    politics of "nudging", because that's really all they can do and thrive by.
    However, there's a big problem (which I'd like to know more about,
    academically) about whether an LLM acts and claims congruent knowledge
    by its Chinese room, or derives its acts from the knowledge. Humans that
    don't understand do that, and they also thrive and explain; we're trained
    to do it as children.

    An additional wrinkle is that humans that don't understand forget, or
    else they make mistakes under questioning even when motivated not to act
    like an LLM. Small models derived from large ones turn out to have
    forgotten, and then they make mistakes under questioning. Questioning
    can trigger sycophancy, a strategy to avoid mistake detection wherein
    the questioner's misunderstanding is mirrored.

    I've seen LLMs appear to understand my topics as I'm learning them and
    not be sycophantic, but I felt they didn't understand when I pushed into
    my inferences in the topic and they were merely repeating worn paths. I
    think that was due to a lack of mental synergy and instead training to
    emulate large corpuses taken from (a) those who didn't really understand
    and (b) those who understood and tried inappropriate control of the
    population around them to perceive that they advantaged their position.

    An emulator doesn't understand; it's just a model of a physical
    phenomenon. A more interesting idea is whether the population of LLM
    creators, with their body of compute resources, understands as a single
    entity. That, perhaps, does; but when it doesn't seem to demonstrate that
    it does, is it merely understanding the population of humans and using
    that understanding on us?
    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.
