• Re: The proper way to use LLMs to aid primary research into foundations --- PLO

    From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Mar 26 14:51:47 2026
    From Newsgroup: comp.theory

    On 3/25/2026 7:59 PM, Mr Flibble wrote:
    On Thu, 26 Mar 2026 00:53:07 +0000, Mr Flibble wrote:

    On Thu, 05 Mar 2026 10:20:05 -0600, olcott wrote:

    My 28-year journey involved primary research into the foundations of
    math, computer science, logic, and linguistics. This requires deep
    knowledge of all of these fields and deep knowledge of the
    philosophical alternative foundations of these fields.

    Almost zero humans have deep knowledge of any one of these fields and
    deep knowledge of alternative foundations in this same field. Almost
    all human experts in any one of these fields accept the foundation of
    these fields as inherently infallible. Any challenge to the "received
    view" is met with ridicule.

    LLMs provide a key breakthrough in that they have the
    equivalent of deep knowledge of these fields and known alternative
    foundations. LLMs are known to have serious issues with AI
    hallucination. Presenting the same ideas to each of five different LLMs
    provides some cross validation.

    Boiling the ideas down to their key essence so that they can be
    succinctly presented seems to work very well. Whenever these
    ideas are presented, the LLMs ground them in peer-reviewed
    papers. A succinct presentation fully grounded in all relevant
    peer-reviewed papers is the end result.
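
    A minimal sketch of the cross-check workflow described above, in
    Python. Everything here is illustrative: ask() is a hypothetical
    stand-in for whatever client library each provider actually ships,
    and the model names are placeholders, not real endpoints.

    # Sketch: present the same claim to several LLMs and tally verdicts.
    from collections import Counter

    MODELS = ["model-a", "model-b", "model-c", "model-d", "model-e"]

    def ask(model: str, prompt: str) -> str:
        """Hypothetical stub: send prompt to one LLM, return its reply."""
        raise NotImplementedError("wire this up to a real client library")

    def cross_validate(claim: str) -> Counter:
        # Identical prompt to every model; agreement is only weak
        # evidence, since models can share the same blind spots.
        prompt = f"Answer AGREE or DISAGREE only: {claim}"
        return Counter(ask(m, prompt).strip().upper() for m in MODELS)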

    The LLMs think you are a crank. You have not solved the halting problem.

    /Flibble

    Or in ChatGPT's words:

    For a more Usenet-style brutal version:

    Invoking "28 years," "alternative foundations," and "peer-reviewed papers"
    is not a substitute for a correct result. No amount of LLM-assisted
    citation farming changes the fact that you have not solved the halting problem.


    When it is fully understood how:
    "true on the basis of meaning expressed in language"
    has always been actually derived, then "undecidability" is
    understood to have always been merely foolish nonsense.

    The models are not validating your claim. They are merely generating text.

    You are still wrong.

    /Flibble



    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable for the entire body of knowledge.

    This required establishing a new foundation
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Fri Mar 27 11:08:33 2026
    From Newsgroup: comp.theory

    On 26/03/2026 21:51, olcott wrote:
    On 3/25/2026 7:59 PM, Mr Flibble wrote:
    On Thu, 26 Mar 2026 00:53:07 +0000, Mr Flibble wrote:

    On Thu, 05 Mar 2026 10:20:05 -0600, olcott wrote:

    My 28-year journey involved primary research into the foundations of
    math, computer science, logic, and linguistics. This requires deep
    knowledge of all of these fields and deep knowledge of the
    philosophical alternative foundations of these fields.

    Almost zero humans have deep knowledge of any one of these fields and
    deep knowledge of alternative foundations in this same field. Almost
    all human experts in any one of these fields accept the foundation of
    these fields as inherently infallible. Any challenge to the "received
    view" is met with ridicule.

    LLMs provide a key breakthrough in that they have the
    equivalent of deep knowledge of these fields and known alternative
    foundations. LLMs are known to have serious issues with AI
    hallucination. Presenting the same ideas to each of five different LLMs
    provides some cross validation.

    Boiling the ideas down to their key essence so that they can be
    succinctly presented seems to work very well. Whenever these
    ideas are presented, the LLMs ground them in peer-reviewed
    papers. A succinct presentation fully grounded in all relevant
    peer-reviewed papers is the end result.

    The LLMs think you are a crank. You have not solved the halting problem.
    /Flibble

    Or in ChatGPT's words:

    For a more Usenet-style brutal version:

    Invoking "28 years," "alternative foundations," and "peer-reviewed papers"
    is not a substitute for a correct result. No amount of LLM-assisted
    citation farming changes the fact that you have not solved the halting
    problem.

    When it is fully understood how:
    "true on the basis of meaning expressed in language"
    has always been actually derived, then "undecidability" is
    understood to have always been merely foolish nonsense.

    The paragraph above can already be understood to be foolish nonsense.
    Perhaps not by its author but certainly by many others.
    --
    Mikko
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to sci.logic,comp.theory,sci.math,comp.ai.philosophy on Fri Mar 27 08:32:25 2026
    From Newsgroup: comp.theory

    On 3/27/2026 4:08 AM, Mikko wrote:
    On 26/03/2026 21:51, olcott wrote:
    On 3/25/2026 7:59 PM, Mr Flibble wrote:
    On Thu, 26 Mar 2026 00:53:07 +0000, Mr Flibble wrote:

    On Thu, 05 Mar 2026 10:20:05 -0600, olcott wrote:

    My 28-year journey involved primary research into the foundations of
    math, computer science, logic, and linguistics. This requires deep
    knowledge of all of these fields and deep knowledge of the
    philosophical alternative foundations of these fields.

    Almost zero humans have deep knowledge of any one of these fields and
    deep knowledge of alternative foundations in this same field. Almost
    all human experts in any one of these fields accept the foundation of
    these fields as inherently infallible. Any challenge to the "received
    view" is met with ridicule.

    LLMs provide a key breakthrough in that they have the
    equivalent of deep knowledge of these fields and known alternative
    foundations. LLMs are known to have serious issues with AI
    hallucination. Presenting the same ideas to each of five different LLMs
    provides some cross validation.

    Boiling the ideas down to their key essence so that they can be
    succinctly presented seems to work very well. Whenever these
    ideas are presented, the LLMs ground them in peer-reviewed
    papers. A succinct presentation fully grounded in all relevant
    peer-reviewed papers is the end result.

    The LLMs think you are a crank. You have not solved the halting
    problem.

    /Flibble

    Or in ChatGPT's words:

    For a more Usenet-style brutal version:

    Invoking "28 years," "alternative foundations," and "peer-reviewed papers"
    is not a substitute for a correct result. No amount of LLM-assisted
    citation farming changes the fact that you have not solved the halting
    problem.

    When it is fully understood how:
    "true on the basis of meaning expressed in language"
    has always been actually derived, then "undecidability" is
    understood to have always been merely foolish nonsense.

    The paragraph above can already be understood to be foolish nonsense.
    Perhaps not by its author but certainly by many others.


    *A mandatory prerequisite was specified*
    When it is fully understood how:
    "true on the basis of meaning expressed in language"
    has always been actually derived ...

    Since I have not yet provided the details of this,
    it can't make sense until I do.
    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable for the entire body of knowledge.

    This required establishing a new foundation
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to sci.logic,comp.theory,sci.math,comp.ai.philosophy on Sat Mar 28 03:02:04 2026
    From Newsgroup: comp.theory

    On 3/27/2026 4:08 AM, Mikko wrote:
    On 26/03/2026 21:51, olcott wrote:
    On 3/25/2026 7:59 PM, Mr Flibble wrote:
    On Thu, 26 Mar 2026 00:53:07 +0000, Mr Flibble wrote:

    On Thu, 05 Mar 2026 10:20:05 -0600, olcott wrote:

    My 28-year journey involved primary research into the foundations of
    math, computer science, logic, and linguistics. This requires deep
    knowledge of all of these fields and deep knowledge of the
    philosophical alternative foundations of these fields.

    Almost zero humans have deep knowledge of any one of these fields and
    deep knowledge of alternative foundations in this same field. Almost
    all human experts in any one of these fields accept the foundation of
    these fields as inherently infallible. Any challenge to the "received
    view" is met with ridicule.

    LLMs provide a key breakthrough in that they have the
    equivalent of deep knowledge of these fields and known alternative
    foundations. LLMs are known to have serious issues with AI
    hallucination. Presenting the same ideas to each of five different LLMs
    provides some cross validation.

    Boiling the ideas down to their key essence so that they can be
    succinctly presented seems to work very well. Whenever these
    ideas are presented, the LLMs ground them in peer-reviewed
    papers. A succinct presentation fully grounded in all relevant
    peer-reviewed papers is the end result.

    The LLMs think you are a crank. You have not solved the halting
    problem.

    /Flibble

    Or in ChatGPT's words:

    For a more Usenet-style brutal version:

    Invoking "28 years," "alternative foundations," and "peer-reviewed papers"
    is not a substitute for a correct result. No amount of LLM-assisted
    citation farming changes the fact that you have not solved the halting
    problem.

    When it is fully understood how:
    "true on the basis of meaning expressed in language"
    has always been actually derived, then "undecidability" is
    understood to have always been merely foolish nonsense.

    The paragraph above can already be understood to be foolish nonsense.
    Perhaps not by its author but certainly by many others.


    If that were true and not some sort of head game then
    you would be able to explain the details of how
    "true on the basis of meaning expressed in language"
    is consistently and correctly derived.
    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable for the entire body of knowledge.

    This required establishing a new foundation
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to sci.logic,comp.theory,sci.math,comp.ai.philosophy on Sat Mar 28 10:55:05 2026
    From Newsgroup: comp.theory

    On 27/03/2026 15:32, olcott wrote:
    On 3/27/2026 4:08 AM, Mikko wrote:
    On 26/03/2026 21:51, olcott wrote:
    On 3/25/2026 7:59 PM, Mr Flibble wrote:
    On Thu, 26 Mar 2026 00:53:07 +0000, Mr Flibble wrote:

    On Thu, 05 Mar 2026 10:20:05 -0600, olcott wrote:

    My 28-year journey involved primary research into the foundations of
    math, computer science, logic, and linguistics. This requires deep
    knowledge of all of these fields and deep knowledge of the
    philosophical alternative foundations of these fields.

    Almost zero humans have deep knowledge of any one of these fields and
    deep knowledge of alternative foundations in this same field. Almost
    all human experts in any one of these fields accept the foundation of
    these fields as inherently infallible. Any challenge to the "received
    view" is met with ridicule.

    LLMs provide a key breakthrough in that they have the
    equivalent of deep knowledge of these fields and known alternative
    foundations. LLMs are known to have serious issues with AI
    hallucination. Presenting the same ideas to each of five different LLMs
    provides some cross validation.

    Boiling the ideas down to their key essence so that they can be
    succinctly presented seems to work very well. Whenever these
    ideas are presented, the LLMs ground them in peer-reviewed
    papers. A succinct presentation fully grounded in all relevant
    peer-reviewed papers is the end result.

    The LLMs think you are a crank. You have not solved the halting
    problem.

    /Flibble

    Or in ChatGPT's words:

    For a more Usenet-style brutal version:

    Invoking "28 years," "alternative foundations," and "peer-reviewed papers"
    is not a substitute for a correct result. No amount of LLM-assisted
    citation farming changes the fact that you have not solved the halting
    problem.

    When it is fully understood how:
    "true on the basis of meaning expressed in language"
    has always been actually derived, then "undecidability" is
    understood to have always been merely foolish nonsense.

    The paragraph above can already be understood to be foolish nonsense.
    Perhaps not by its author but certainly by many others.

    *A mandatory prerequisite was specified*
    When it is fully understood how:
    "true on the basis of meaning expressed in language"
    has always been actually derived ...

    Since I have not yet provided the details of this,
    it can't make sense until I do.

    Which is all we need to know in order to determine that it is nonsense.
    But you did provide enough details to determine that the nonsense is
    foolish.
    --
    Mikko
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to sci.logic,comp.theory,sci.math,comp.ai.philosophy on Sat Mar 28 11:00:09 2026
    From Newsgroup: comp.theory

    On 28/03/2026 10:02, olcott wrote:
    On 3/27/2026 4:08 AM, Mikko wrote:
    On 26/03/2026 21:51, olcott wrote:
    On 3/25/2026 7:59 PM, Mr Flibble wrote:
    On Thu, 26 Mar 2026 00:53:07 +0000, Mr Flibble wrote:

    On Thu, 05 Mar 2026 10:20:05 -0600, olcott wrote:

    My 28-year journey involved primary research into the foundations of
    math, computer science, logic, and linguistics. This requires deep
    knowledge of all of these fields and deep knowledge of the
    philosophical alternative foundations of these fields.

    Almost zero humans have deep knowledge of any one of these fields and
    deep knowledge of alternative foundations in this same field. Almost
    all human experts in any one of these fields accept the foundation of
    these fields as inherently infallible. Any challenge to the "received
    view" is met with ridicule.

    LLMs provide a key breakthrough in that they have the
    equivalent of deep knowledge of these fields and known alternative
    foundations. LLMs are known to have serious issues with AI
    hallucination. Presenting the same ideas to each of five different LLMs
    provides some cross validation.

    Boiling the ideas down to their key essence so that they can be
    succinctly presented seems to work very well. Whenever these
    ideas are presented, the LLMs ground them in peer-reviewed
    papers. A succinct presentation fully grounded in all relevant
    peer-reviewed papers is the end result.

    The LLMs think you are a crank. You have not solved the halting
    problem.

    /Flibble

    Or in ChatGPT's words:

    For a more Usenet-style brutal version:

    Invoking "28 years," "alternative foundations," and "peer-reviewed papers"
    is not a substitute for a correct result. No amount of LLM-assisted
    citation farming changes the fact that you have not solved the halting
    problem.

    When it is fully understood how:
    "true on the basis of meaning expressed in language"
    has always been actually derived, then "undecidability" is
    understood to have always been merely foolish nonsense.

    The paragraph above can already be understood to be foolish nonsense.
    Perhaps not by its author but certainly by many others.

    If that were true and not some sort of head game then
    you would be able to explain the details of how
    "true on the basis of meaning expressed in language"
    is consistently and correctly derived.

    No, that does not follow. It is sufficient to know that if the
    derivation implies that "undecidability" is understood to have
    always been foolish nonsense, then the derivation is so far from
    any reality that it can be regarded as foolish.
    --
    Mikko
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to sci.logic,comp.theory,sci.math,comp.ai.philosophy on Sat Mar 28 11:06:53 2026
    From Newsgroup: comp.theory

    On 3/28/2026 4:00 AM, Mikko wrote:
    On 28/03/2026 10:02, olcott wrote:
    On 3/27/2026 4:08 AM, Mikko wrote:
    On 26/03/2026 21:51, olcott wrote:
    On 3/25/2026 7:59 PM, Mr Flibble wrote:
    On Thu, 26 Mar 2026 00:53:07 +0000, Mr Flibble wrote:

    On Thu, 05 Mar 2026 10:20:05 -0600, olcott wrote:

    My 28-year journey involved primary research into the foundations of
    math, computer science, logic, and linguistics. This requires deep
    knowledge of all of these fields and deep knowledge of the
    philosophical alternative foundations of these fields.

    Almost zero humans have deep knowledge of any one of these fields and
    deep knowledge of alternative foundations in this same field. Almost
    all human experts in any one of these fields accept the foundation of
    these fields as inherently infallible. Any challenge to the "received
    view" is met with ridicule.

    LLMs provide a key breakthrough in that they have the
    equivalent of deep knowledge of these fields and known alternative
    foundations. LLMs are known to have serious issues with AI
    hallucination. Presenting the same ideas to each of five different LLMs
    provides some cross validation.

    Boiling the ideas down to their key essence so that they can be
    succinctly presented seems to work very well. Whenever these
    ideas are presented, the LLMs ground them in peer-reviewed
    papers. A succinct presentation fully grounded in all relevant
    peer-reviewed papers is the end result.

    The LLMs think you are a crank. You have not solved the halting
    problem.

    /Flibble

    Or in ChatGPT's words:

    For a more Usenet-style brutal version:

    Invoking "28 years," "alternative foundations," and "peer-reviewed papers"
    is not a substitute for a correct result. No amount of LLM-assisted
    citation farming changes the fact that you have not solved the halting
    problem.

    When it is fully understood how:
    "true on the basis of meaning expressed in language"
    has always been actually derived, then "undecidability" is
    understood to have always been merely foolish nonsense.

    The paragraph above can already be understood to be foolish nonsense.
    Perhaps not by its author but certainly by many others.

    If that were true and not some sort of head game then
    you would be able to explain the details of how
    "true on the basis of meaning expressed in language"
    is consistently and correctly derived.

    No, that does not follow. It is sufficient to know that if the
    derivation implies that "undecidability" is understood to have
    always been foolish nonsense, then the derivation is so far from
    any reality that it can be regarded as foolish.


    Every meaningful expression of language that has a
    well-founded justification tree and is in the body
    of knowledge is computable.
    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable for the entire body of knowledge.

    This required establishing a new foundation
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to sci.logic,comp.theory,sci.math,comp.ai.philosophy on Sat Mar 28 12:47:56 2026
    From Newsgroup: comp.theory

    On 3/28/26 12:06 PM, olcott wrote:
    On 3/28/2026 4:00 AM, Mikko wrote:
    On 28/03/2026 10:02, olcott wrote:
    On 3/27/2026 4:08 AM, Mikko wrote:
    On 26/03/2026 21:51, olcott wrote:
    On 3/25/2026 7:59 PM, Mr Flibble wrote:
    On Thu, 26 Mar 2026 00:53:07 +0000, Mr Flibble wrote:

    On Thu, 05 Mar 2026 10:20:05 -0600, olcott wrote:

    My 28-year journey involved primary research into the foundations of
    math, computer science, logic, and linguistics. This requires deep
    knowledge of all of these fields and deep knowledge of the
    philosophical alternative foundations of these fields.

    Almost zero humans have deep knowledge of any one of these fields and
    deep knowledge of alternative foundations in this same field. Almost
    all human experts in any one of these fields accept the foundation of
    these fields as inherently infallible. Any challenge to the "received
    view" is met with ridicule.

    LLMs provide a key breakthrough in that they have the
    equivalent of deep knowledge of these fields and known alternative
    foundations. LLMs are known to have serious issues with AI
    hallucination. Presenting the same ideas to each of five different LLMs
    provides some cross validation.

    Boiling the ideas down to their key essence so that they can be
    succinctly presented seems to work very well. Whenever these
    ideas are presented, the LLMs ground them in peer-reviewed
    papers. A succinct presentation fully grounded in all relevant
    peer-reviewed papers is the end result.

    The LLMs think you are a crank. You have not solved the halting
    problem.

    /Flibble

    Or in ChatGPT's words:

    For a more Usenet-style brutal version:

    Invoking "28 years," "alternative foundations," and "peer-reviewed papers"
    is not a substitute for a correct result. No amount of LLM-assisted
    citation farming changes the fact that you have not solved the halting
    problem.

    When it is fully understood how:
    "true on the basis of meaning expressed in language"
    has always been actually derived, then "undecidability" is
    understood to have always been merely foolish nonsense.

    The paragraph above can already be understood to be foolish nonsense.
    Perhaps not by its author but certainly by many others.

    If that were true and not some sort of head game then
    you would be able to explain the details of how
    "true on the basis of meaning expressed in language"
    is consistently and correctly derived.

    No, that does not follow. It is sufficient to know that if the
    derivation implies that "undecidability" is understood to have
    always been foolish nonsense, then the derivation is so far from
    any reality that it can be regarded as foolish.


    Every meaningful expression of language that has a
    well-founded justification tree and is in the body
    of knowledge is computable.



    Which just means that every statement that is provable is computable.

    TRIVIAL statement.

    The problem is that many "meaningful" statements can be "factually"
    true, in that there is a justification tree that might be infinite in
    length (and thus they cannot be false), but for which we cannot
    "compute" the answer.

    This includes a LOT of statements in the field of Mathematics or
    Computation (which derives from Mathematics).
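
    A minimal sketch of that asymmetry for halting, assuming a toy
    interpreter interface (initial_state, is_final, and advance are
    illustrative names, not any real API): a true "P halts" claim always
    has a finite witness, but no finite budget ever confirms "P never
    halts".

    # Sketch: halting is semi-decidable. A halting run yields a finite
    # witness; a non-halting run exhausts every budget with no verdict.
    def verify_halts(program, x, max_steps):
        state = program.initial_state(x)   # illustrative interpreter API
        for step in range(max_steps):
            if state.is_final():
                return step                # finite witness: halted here
            state = state.advance()
        return None                        # budget exhausted: no verdict

    # Raising max_steps can only ever turn None into a witness for a
    # program that halts; for one that never halts, every budget returns
    # None, which is why no decider falls out of this procedure.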

    Your requirement that we know of a "well-founded justification" means
    your claim becomes trivial, as you need to know of the existence of a
    proof before you can look at the statement.

    This goes back to your categorical error in logic, where you start to
    talk about the "well-founded justification" for a program's behavior.

    *ALL* program behavior is by definition well-founded, as it is what the program does.

    Thus your comments about Turing H not having a well-founded justification,
    so that your D can "reject" it, are just an admission that your version of
    H isn't actually a program (as required) because your D isn't actually a
    program, and thus you are just admitting that your argument is a
    pathological lie.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to sci.logic,comp.theory,sci.math,comp.ai.philosophy on Sat Mar 28 12:32:40 2026
    From Newsgroup: comp.theory

    On 3/28/2026 11:06 AM, olcott wrote:
    On 3/28/2026 4:00 AM, Mikko wrote:
    On 28/03/2026 10:02, olcott wrote:
    On 3/27/2026 4:08 AM, Mikko wrote:
    On 26/03/2026 21:51, olcott wrote:
    On 3/25/2026 7:59 PM, Mr Flibble wrote:
    On Thu, 26 Mar 2026 00:53:07 +0000, Mr Flibble wrote:

    On Thu, 05 Mar 2026 10:20:05 -0600, olcott wrote:

    My 28-year journey involved primary research into the foundations of
    math, computer science, logic, and linguistics. This requires deep
    knowledge of all of these fields and deep knowledge of the
    philosophical alternative foundations of these fields.

    Almost zero humans have deep knowledge of any one of these fields and
    deep knowledge of alternative foundations in this same field. Almost
    all human experts in any one of these fields accept the foundation of
    these fields as inherently infallible. Any challenge to the "received
    view" is met with ridicule.

    LLMs provide a key breakthrough in that they have the
    equivalent of deep knowledge of these fields and known alternative
    foundations. LLMs are known to have serious issues with AI
    hallucination. Presenting the same ideas to each of five different LLMs
    provides some cross validation.

    Boiling the ideas down to their key essence so that they can be
    succinctly presented seems to work very well. Whenever these
    ideas are presented, the LLMs ground them in peer-reviewed
    papers. A succinct presentation fully grounded in all relevant
    peer-reviewed papers is the end result.

    The LLMs think you are a crank. You have not solved the halting
    problem.

    /Flibble

    Or in ChatGPT's words:

    For a more Usenet-style brutal version:

    Invoking "28 years," "alternative foundations," and "peer-reviewed papers"
    is not a substitute for a correct result. No amount of LLM-assisted
    citation farming changes the fact that you have not solved the halting
    problem.

    When it is fully understood how:
    "true on the basis of meaning expressed in language"
    has always been actually derived, then "undecidability" is
    understood to have always been merely foolish nonsense.

    The paragraph above can already be understood to be foolish nonsense.
    Perhaps not by its author but certainly by many others.

    If that were true and not some sort of head game then
    you would be able to explain the details of how
    "true on the basis of meaning expressed in language"
    is consistently and correctly derived.

    No, that does not follow. It is sufficient to know that if the
    derivation implies that "undecidability" is understood to have
    always been foolish nonsense, then the derivation is so far from
    any reality that it can be regarded as foolish.


    Every meaningful expression of language that has a
    well-founded justification tree and is in the body
    of knowledge is computable.


    Most undecidable decision problem instances are
    excluded as lacking a well-founded justification tree.
    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable for the entire body of knowledge.

    This required establishing a new foundation
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to sci.logic,comp.theory,sci.math,comp.ai.philosophy on Sat Mar 28 15:05:42 2026
    From Newsgroup: comp.theory

    On 3/28/26 1:32 PM, olcott wrote:
    On 3/28/2026 11:06 AM, olcott wrote:
    On 3/28/2026 4:00 AM, Mikko wrote:
    On 28/03/2026 10:02, olcott wrote:
    On 3/27/2026 4:08 AM, Mikko wrote:
    On 26/03/2026 21:51, olcott wrote:
    On 3/25/2026 7:59 PM, Mr Flibble wrote:
    On Thu, 26 Mar 2026 00:53:07 +0000, Mr Flibble wrote:

    On Thu, 05 Mar 2026 10:20:05 -0600, olcott wrote:

    My 28-year journey involved primary research into the foundations of
    math, computer science, logic, and linguistics. This requires deep
    knowledge of all of these fields and deep knowledge of the
    philosophical alternative foundations of these fields.

    Almost zero humans have deep knowledge of any one of these fields and
    deep knowledge of alternative foundations in this same field. Almost
    all human experts in any one of these fields accept the foundation of
    these fields as inherently infallible. Any challenge to the "received
    view" is met with ridicule.

    LLMs provide a key breakthrough in that they have the
    equivalent of deep knowledge of these fields and known alternative
    foundations. LLMs are known to have serious issues with AI
    hallucination. Presenting the same ideas to each of five different LLMs
    provides some cross validation.

    Boiling the ideas down to their key essence so that they can be
    succinctly presented seems to work very well. Whenever these
    ideas are presented, the LLMs ground them in peer-reviewed
    papers. A succinct presentation fully grounded in all relevant
    peer-reviewed papers is the end result.

    The LLMs think you are a crank. You have not solved the halting
    problem.

    /Flibble

    Or in ChatGPT's words:

    For a more Usenet-style brutal version:

    Invoking "28 years," "alternative foundations," and "peer-reviewed papers"
    is not a substitute for a correct result. No amount of LLM-assisted
    citation farming changes the fact that you have not solved the halting
    problem.

    When it is fully understood how:
    "true on the basis of meaning expressed in language"
    has always been actually derived, then "undecidability" is
    understood to have always been merely foolish nonsense.

    The paragraph above can already be understood to be foolish nonsense.
    Perhaps not by its author but certainly by many others.

    If that were true and not some sort of head game then
    you would be able to explain the details of how
    "true on the basis of meaning expressed in language"
    is consistently and correctly derived.

    No, that does not follow. It is sufficient to know that if the
    derivation implies that "undecidability" is understood to have
    always been foolish nonsense, then the derivation is so far from
    any reality that it can be regarded as foolish.


    Every meaningful expression of language that has a
    well-founded justification tree and is in the body
    of knowledge is computable.


    Most undecidable decision problem instances are
    excluded as lacking a well-founded justification tree.


    But "Problems" don't need such a tree, only the answer. Thus you are
    just showing your logic is based on performing category errors.

    The whole concept of "well-founded justification tree" is about being
    able to PROVE a statement, and thus not applicable to a question or request.

    And the undecidable nature is ABOUT there being no way to build such a
    tree for any possible answer to the problem.

    For instance, when we ask about the behavior of a machine, it is a fact
    that this behavior will ALWAYS have a "well-founded" tree that defines
    it, as that is inherent in it being a machine, and thus having a fully specified algorithm that it follows.

    Thus, the inability to create a halt decider is NOT based on the lack of
    a well-founded justification tree in the input, since all inputs have a
    fully defined behavior built with a well-defined tree, but on the
    inability to show that a decider is always correct, because we can prove,
    for any decider you want to claim to be correct, that there exists an
    input it will get wrong.
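
    A minimal sketch of that diagonal construction, with the claimed
    decider passed in as an ordinary Python function (the names h and D
    are illustrative, not anyone's actual implementation):

    # Sketch: from ANY claimed total halt decider h, build an input D
    # that h must get wrong.
    def make_D(h):
        def D(x):
            if h(D, x):          # h predicts "D halts on x" ...
                while True:      # ... so D loops forever,
                    pass
            return 0             # else h predicts "D loops", so D halts.
        return D

    # For any total h, h(make_D(h), 0) is wrong by construction:
    # if it answers True, D loops; if it answers False, D halts.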
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to sci.logic,comp.theory,sci.math,comp.ai.philosophy on Sat Mar 28 14:09:17 2026
    From Newsgroup: comp.theory

    On 3/28/2026 12:32 PM, olcott wrote:
    On 3/28/2026 11:06 AM, olcott wrote:
    On 3/28/2026 4:00 AM, Mikko wrote:
    On 28/03/2026 10:02, olcott wrote:
    On 3/27/2026 4:08 AM, Mikko wrote:
    On 26/03/2026 21:51, olcott wrote:
    On 3/25/2026 7:59 PM, Mr Flibble wrote:
    On Thu, 26 Mar 2026 00:53:07 +0000, Mr Flibble wrote:

    On Thu, 05 Mar 2026 10:20:05 -0600, olcott wrote:

    My 28-year journey involved primary research into the foundations of
    math, computer science, logic, and linguistics. This requires deep
    knowledge of all of these fields and deep knowledge of the
    philosophical alternative foundations of these fields.

    Almost zero humans have deep knowledge of any one of these fields and
    deep knowledge of alternative foundations in this same field. Almost
    all human experts in any one of these fields accept the foundation of
    these fields as inherently infallible. Any challenge to the "received
    view" is met with ridicule.

    LLMs provide a key breakthrough in that they have the
    equivalent of deep knowledge of these fields and known alternative
    foundations. LLMs are known to have serious issues with AI
    hallucination. Presenting the same ideas to each of five different LLMs
    provides some cross validation.

    Boiling the ideas down to their key essence so that they can be
    succinctly presented seems to work very well. Whenever these
    ideas are presented, the LLMs ground them in peer-reviewed
    papers. A succinct presentation fully grounded in all relevant
    peer-reviewed papers is the end result.

    The LLMs think you are a crank. You have not solved the halting
    problem.

    /Flibble

    Or in ChatGPT's words:

    For a more Usenet-style brutal version:

    Invoking "28 years," "alternative foundations," and "peer-reviewed papers"
    is not a substitute for a correct result. No amount of LLM-assisted
    citation farming changes the fact that you have not solved the halting
    problem.

    When it is fully understood how:
    "true on the basis of meaning expressed in language"
    has always been actually derived, then "undecidability" is
    understood to have always been merely foolish nonsense.

    The paragraph above can already be understood to be foolish nonsense.
    Perhaps not by its author but certainly by many others.

    If that were true and not some sort of head game then
    you would be able to explain the details of how
    "true on the basis of meaning expressed in language"
    is consistently and correctly derived.

    No, that does not follow. It is sufficient to know that if the
    derivation implies that "undecidability" is understood to have
    always been foolish nonsense, then the derivation is so far from
    any reality that it can be regarded as foolish.


    Every meaningful expression of language that has a
    well-founded justification tree and is in the body
    of knowledge is computable.


    Most undecidable decision problem instances are
    excluded as lacking a well-founded justification tree.


    Discussing this with anyone who is not yet an expert in all
    of the key details of proof-theoretic semantics is like trying
    to explain algebra to someone who has not yet learned what
    arithmetic is and how it works.
    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable for the entire body of knowledge.

    This required establishing a new foundation
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to sci.logic,comp.theory,sci.math,comp.ai.philosophy on Sat Mar 28 15:33:32 2026
    From Newsgroup: comp.theory

    On 3/28/26 3:09 PM, olcott wrote:
    On 3/28/2026 12:32 PM, olcott wrote:
    On 3/28/2026 11:06 AM, olcott wrote:
    On 3/28/2026 4:00 AM, Mikko wrote:
    On 28/03/2026 10:02, olcott wrote:
    On 3/27/2026 4:08 AM, Mikko wrote:
    On 26/03/2026 21:51, olcott wrote:
    On 3/25/2026 7:59 PM, Mr Flibble wrote:
    On Thu, 26 Mar 2026 00:53:07 +0000, Mr Flibble wrote:

    On Thu, 05 Mar 2026 10:20:05 -0600, olcott wrote:

    My 28-year journey involved primary research into the foundations of
    math, computer science, logic, and linguistics. This requires deep
    knowledge of all of these fields and deep knowledge of the
    philosophical alternative foundations of these fields.

    Almost zero humans have deep knowledge of any one of these fields and
    deep knowledge of alternative foundations in this same field. Almost
    all human experts in any one of these fields accept the foundation of
    these fields as inherently infallible. Any challenge to the "received
    view" is met with ridicule.

    LLMs provide a key breakthrough in that they have the
    equivalent of deep knowledge of these fields and known alternative
    foundations. LLMs are known to have serious issues with AI
    hallucination. Presenting the same ideas to each of five different LLMs
    provides some cross validation.

    Boiling the ideas down to their key essence so that they can be
    succinctly presented seems to work very well. Whenever these
    ideas are presented, the LLMs ground them in peer-reviewed
    papers. A succinct presentation fully grounded in all relevant
    peer-reviewed papers is the end result.

    The LLMs think you are a crank. You have not solved the halting
    problem.

    /Flibble

    Or in ChatGPT's words:

    For a more Usenet-style brutal version:

    Invoking "28 years," "alternative foundations," and "peer-reviewed papers"
    is not a substitute for a correct result. No amount of LLM-assisted
    citation farming changes the fact that you have not solved the halting
    problem.

    When it is fully understood how:
    "true on the basis of meaning expressed in language"
    has always been actually derived, then "undecidability" is
    understood to have always been merely foolish nonsense.

    The paragraph above can already be understood to be foolish nonsense.
    Perhaps not by its author but certainly by many others.

    If that were true and not some sort of head game then
    you would be able to explain the details of how
    "true on the basis of meaning expressed in language"
    is consistently and correctly derived.

    No, that does not follow. It is sufficient to know that if the
    derivation implies that "undecidability" is understood to have
    always been foolish nonsense, then the derivation is so far from
    any reality that it can be regarded as foolish.


    Every meaningful expression of language that has a
    well-founded justification tree and is in the body
    of knowledge is computable.


    Most undecidable decision problem instances are
    excluded as lacking a well-founded justification tree.


    Discussing this with anyone who is not yet an expert in all
    of the key details of proof-theoretic semantics is like trying
    to explain algebra to someone who has not yet learned what
    arithmetic is and how it works.


    Which means you should shut up, as you clearly don't know what you are
    talking about.

    After all, you think programs need a "well-founded justification tree"
    to exist (or something?)

    Try to explain that one.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to sci.logic,comp.theory,sci.math,comp.ai.philosophy on Sun Mar 29 11:33:18 2026
    From Newsgroup: comp.theory

    On 28/03/2026 18:06, olcott wrote:
    On 3/28/2026 4:00 AM, Mikko wrote:
    On 28/03/2026 10:02, olcott wrote:
    On 3/27/2026 4:08 AM, Mikko wrote:
    On 26/03/2026 21:51, olcott wrote:
    On 3/25/2026 7:59 PM, Mr Flibble wrote:
    On Thu, 26 Mar 2026 00:53:07 +0000, Mr Flibble wrote:

    On Thu, 05 Mar 2026 10:20:05 -0600, olcott wrote:

    My 28-year journey involved primary research into the foundations of
    math, computer science, logic, and linguistics. This requires deep
    knowledge of all of these fields and deep knowledge of the
    philosophical alternative foundations of these fields.

    Almost zero humans have deep knowledge of any one of these fields and
    deep knowledge of alternative foundations in this same field. Almost
    all human experts in any one of these fields accept the foundation of
    these fields as inherently infallible. Any challenge to the "received
    view" is met with ridicule.

    LLMs provide a key breakthrough in that they have the
    equivalent of deep knowledge of these fields and known alternative
    foundations. LLMs are known to have serious issues with AI
    hallucination. Presenting the same ideas to each of five different LLMs
    provides some cross validation.

    Boiling the ideas down to their key essence so that they can be
    succinctly presented seems to work very well. Whenever these
    ideas are presented, the LLMs ground them in peer-reviewed
    papers. A succinct presentation fully grounded in all relevant
    peer-reviewed papers is the end result.

    The LLMs think you are a crank. You have not solved the halting
    problem.

    /Flibble

    Or in ChatGPT's words:

    For a more Usenet-style brutal version:

    Invoking "28 years," "alternative foundations," and "peer-reviewed papers"
    is not a substitute for a correct result. No amount of LLM-assisted
    citation farming changes the fact that you have not solved the halting
    problem.

    When it is fully understood how:
    "true on the basis of meaning expressed in language"
    has always been actually derived, then "undecidability" is
    understood to have always been merely foolish nonsense.

    The paragraph above can already be understood to be foolish nonsense.
    Perhaps not by its author but certainly by many others.

    If that were true and not some sort of head game then
    you would be able to explain the details of how
    "true on the basis of meaning expressed in language"
    is consistently and correctly derived.

    No, that does not follow. It is sufficient to know that if the
    derivation implies that "undecidability" is understood to have
    always been foolish nonsense, then the derivation is so far from
    any reality that it can be regarded as foolish.

    Every meaningful expression of language that has a
    well-founded justification tree and is in the body
    of knowledge is computable.

    False but irrelevant to the message it pretends to answer.
    --
    Mikko
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Sun Mar 29 10:38:47 2026
    From Newsgroup: comp.theory

    On 3/29/2026 3:33 AM, Mikko wrote:
    On 28/03/2026 18:06, olcott wrote:
    On 3/28/2026 4:00 AM, Mikko wrote:
    On 28/03/2026 10:02, olcott wrote:
    On 3/27/2026 4:08 AM, Mikko wrote:
    On 26/03/2026 21:51, olcott wrote:
    On 3/25/2026 7:59 PM, Mr Flibble wrote:
    On Thu, 26 Mar 2026 00:53:07 +0000, Mr Flibble wrote:

    On Thu, 05 Mar 2026 10:20:05 -0600, olcott wrote:

    My 28-year journey involved primary research into the foundations of
    math, computer science, logic, and linguistics. This requires deep
    knowledge of all of these fields and deep knowledge of the
    philosophical alternative foundations of these fields.

    Almost zero humans have deep knowledge of any one of these fields and
    deep knowledge of alternative foundations in this same field. Almost
    all human experts in any one of these fields accept the foundation of
    these fields as inherently infallible. Any challenge to the "received
    view" is met with ridicule.

    LLMs provide a key breakthrough in that they have the
    equivalent of deep knowledge of these fields and known alternative
    foundations. LLMs are known to have serious issues with AI
    hallucination. Presenting the same ideas to each of five different LLMs
    provides some cross validation.

    Boiling the ideas down to their key essence so that they can be
    succinctly presented seems to work very well. Whenever these
    ideas are presented, the LLMs ground them in peer-reviewed
    papers. A succinct presentation fully grounded in all relevant
    peer-reviewed papers is the end result.

    The LLMs think you are a crank. You have not solved the halting
    problem.

    /Flibble

    Or in ChatGPT's words:

    For a more Usenet-style brutal version:

    Invoking "28 years," "alternative foundations," and "peer-reviewed papers"
    is not a substitute for a correct result. No amount of LLM-assisted
    citation farming changes the fact that you have not solved the halting
    problem.

    When it is fully understood how:
    "true on the basis of meaning expressed in language"
    has always been actually derived, then "undecidability" is
    understood to have always been merely foolish nonsense.

    The paragraph above can already be understood to be foolish nonsense.
    Perhaps not by its author but certainly by many others.

    If that were true and not some sort of head game then
    you would be able to explain the details of how
    "true on the basis of meaning expressed in language"
    is consistently and correctly derived.

    No, that does not follow. It is sufficient to know that if the
    derivation implies that "undecidability" is understood to have
    always been foolish nonsense, then the derivation is so far from
    any reality that it can be regarded as foolish.

    Every meaningful expression of language that has a
    well-founded justification tree and is in the body
    of knowledge is computable.

    False but irrelevant to the message it pretends to answer.


    It is dishonest to say that words you do not understand are false.
    The notion of a "well-founded justification tree" cannot be properly
    understood unless and until "proof-theoretic semantics" is first
    fully understood.

    https://plato.stanford.edu/entries/proof-theoretic-semantics/
    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable for the entire body of knowledge.

    This required establishing a new foundation
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Sun Mar 29 13:07:13 2026
    From Newsgroup: comp.theory

    On 3/29/26 11:38 AM, olcott wrote:
    On 3/29/2026 3:33 AM, Mikko wrote:
    On 28/03/2026 18:06, olcott wrote:
    On 3/28/2026 4:00 AM, Mikko wrote:
    On 28/03/2026 10:02, olcott wrote:
    On 3/27/2026 4:08 AM, Mikko wrote:
    On 26/03/2026 21:51, olcott wrote:
    On 3/25/2026 7:59 PM, Mr Flibble wrote:
    On Thu, 26 Mar 2026 00:53:07 +0000, Mr Flibble wrote:

    On Thu, 05 Mar 2026 10:20:05 -0600, olcott wrote:

    My 28-year journey involved primary research into the foundations of
    math, computer science, logic, and linguistics. This requires deep
    knowledge of all of these fields and deep knowledge of the
    philosophical alternative foundations of these fields.

    Almost zero humans have deep knowledge of any one of these fields and
    deep knowledge of alternative foundations in this same field. Almost
    all human experts in any one of these fields accept the foundation of
    these fields as inherently infallible. Any challenge to the "received
    view" is met with ridicule.

    LLMs provide a key breakthrough in that they have the
    equivalent of deep knowledge of these fields and known alternative
    foundations. LLMs are known to have serious issues with AI
    hallucination. Presenting the same ideas to each of five different LLMs
    provides some cross validation.

    Boiling the ideas down to their key essence so that they can be
    succinctly presented seems to work very well. Whenever these
    ideas are presented, the LLMs ground them in peer-reviewed
    papers. A succinct presentation fully grounded in all relevant
    peer-reviewed papers is the end result.

    The LLMs think you are a crank. You have not solved the halting
    problem.

    /Flibble

    Or in ChatGPT's words:

    For a more Usenet-style brutal version:

    Invoking "28 years," "alternative foundations," and "peer-reviewed papers"
    is not a substitute for a correct result. No amount of LLM-assisted
    citation farming changes the fact that you have not solved the halting
    problem.

    When it is fully understood how:
    "true on the basis of meaning expressed in language"
    has always been actually derived, then "undecidability" is
    understood to have always been merely foolish nonsense.

    The paragraph above can already be understood to be foolish nonsense.
    Perhaps not by its author but certainly by many others.

    If that were true and not some sort of head game then
    you would be able to explain the details of how
    "true on the basis of meaning expressed in language"
    is consistently and correctly derived.

    No, that does not follow. It is sufficient to know that if the
    derivation implies that "undecidability" is understood to have
    always been foolish nonsense, then the derivation is so far from
    any reality that it can be regarded as foolish.

    Every meaningful expression of language that has a
    well-founded justification tree and is in the body
    of knowledge is computable.

    False but irrelevant to the message it pretends to answer.


    It is dishonest to say that words you do not understand are false.
    The notion of a "well-founded justification tree" cannot be properly
    understood unless and until "proof-theoretic semantics" is first
    fully understood.

    https://plato.stanford.edu/entries/proof-theoretic-semantics/


    The problem is that the concept of a "well-founded justification tree"
    only applies to statements with a claimed truth value.

    A "Question" doesn't need (or have) such a tree.

    A "Program" doesn't either. If we want to claim that the program does something specific, we can require such a tree to prove the statement.

    But a program that halts HAS such a tree; the simplest is just the
    execution trace of the program.
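
    A minimal sketch of that point, using a toy machine whose step
    function is purely illustrative: the finite list of states a halting
    run passes through is itself the finished justification.

    # Sketch: for a halting program, the execution trace is a finite,
    # well-founded justification: each state is justified by the one
    # before it, and the chain terminates.
    def trace_of(step, state):
        trace = [state]
        while (state := step(state)) is not None:  # None marks a final state
            trace.append(state)
        return trace  # finite <=> the run halted; the list is the witness

    countdown = lambda n: n - 1 if n > 0 else None
    print(trace_of(countdown, 3))  # [3, 2, 1, 0] -- a finite justification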

    Thus, the P that halts because the D it calls says it doesn't, has a
    well-founded justification tree for its halting, so D can't respond that
    the program doesn't have one.

    And if some D says the H built on it does halt, and it goes into a
    tight non-halting loop, that loop itself also generates a well-founded
    justification tree for that non-halting, so we can't say it doesn't have
    one.

    The only way for a specific H to not have a well-founded justification
    tree is for the D it is built on to not actually be a computation, and
    not have defined behavior, in which case the H wasn't actually a program
    either.

    So, all Olcott has done is prove that he has been lying about working
    with computations and/or the Halting Problem in the first place.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Mon Mar 30 11:12:37 2026
    From Newsgroup: comp.theory

    On 29/03/2026 18:38, olcott wrote:
    On 3/29/2026 3:33 AM, Mikko wrote:
    On 28/03/2026 18:06, olcott wrote:
    On 3/28/2026 4:00 AM, Mikko wrote:
    On 28/03/2026 10:02, olcott wrote:
    On 3/27/2026 4:08 AM, Mikko wrote:
    On 26/03/2026 21:51, olcott wrote:
    On 3/25/2026 7:59 PM, Mr Flibble wrote:
    On Thu, 26 Mar 2026 00:53:07 +0000, Mr Flibble wrote:

    On Thu, 05 Mar 2026 10:20:05 -0600, olcott wrote:

    My 28-year journey involved primary research into the foundations of
    math, computer science, logic, and linguistics. This requires deep
    knowledge of all of these fields and deep knowledge of the
    philosophical alternative foundations of these fields.

    Almost zero humans have deep knowledge of any one of these fields and
    deep knowledge of alternative foundations in this same field. Almost
    all human experts in any one of these fields accept the foundation of
    these fields as inherently infallible. Any challenge to the "received
    view" is met with ridicule.

    LLMs provide a key breakthrough in that they have the
    equivalent of deep knowledge of these fields and known alternative
    foundations. LLMs are known to have serious issues with AI
    hallucination. Presenting the same ideas to each of five different LLMs
    provides some cross validation.

    Boiling the ideas down to their key essence so that they can be
    succinctly presented seems to work very well. Whenever these
    ideas are presented, the LLMs ground them in peer-reviewed
    papers. A succinct presentation fully grounded in all relevant
    peer-reviewed papers is the end result.

    The LLMs think you are a crank. You have not solved the halting
    problem.

    /Flibble

    Or in ChatGPT's words:

    For a more Usenet-style brutal version:

    Invoking "28 years," "alternative foundations," and "peer-reviewed papers"
    is not a substitute for a correct result. No amount of LLM-assisted
    citation farming changes the fact that you have not solved the halting
    problem.

    When it is fully understood how:
    "true on the basis of meaning expressed in language"
    has always been actually derived, then "undecidability" is
    understood to have always been merely foolish nonsense.

    The paragraph above can already be understood to be foolish nonsense.
    Perhaps not by its author but certainly by many others.

    If that were true and not some sort of head game then
    you would be able to explain the details of how
    "true on the basis of meaning expressed in language"
    is consistently and correctly derived.

    No, that does not follow. It is sufficient to know that if the
    derivation implies that "undecidability" is understood to have
    always been foolish nonsense, then the derivation is so far from
    any reality that it can be regarded as foolish.

    Every meaningful expression of language that has a
    well-founded justification tree and is in the body
    of knowledge is computable.

    False but irrelevant to the message it pretends to answer.

    It is dishonest to say that words you do not understand are false.

    True but irrelevant.

    The notion of a "well-founded justification tree" cannot be properly
    understood unless and until "proof-theoretic semantics" is first
    fully understood.

    That is false. In order to understand "well-founded justification tree"
    it is sufficient to know what "well-founded justification tree" means.
    But neither understanding is necessary to the understanding that
    "undecidability" and "true on the basis of meaning expressed in language"
    are unrelated and belong to distinct topic areas.

    It is dishonest to present irrelevancies as support for your claims.
    --
    Mikko
    --- Synchronet 3.21f-Linux NewsLink 1.2
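
    Since this exchange turns on what a "well-founded justification
    tree" would even be, a toy illustration may help: picture a graph
    mapping each claim to the claims it rests on, where well-founded
    means every chain below a claim terminates in axioms, with no
    cycles and no infinite regress. The Python sketch below is
    entirely made up for illustration; neither the graph nor the
    predicate is a definition given anywhere in the thread.

        # Toy justification graph: claim -> claims it rests on.
        # All entries are invented examples.
        GRAPH = {
            "socrates is mortal": ["all men are mortal",
                                   "socrates is a man"],
            "all men are mortal": [],      # treated as axiomatic here
            "socrates is a man": [],
            "this sentence is untrue": ["this sentence is untrue"],
        }

        def well_founded(claim, seen=frozenset()):
            """True iff every justification chain under claim ends."""
            if claim in seen:              # circular justification
                return False
            deps = GRAPH.get(claim, [])
            return all(well_founded(d, seen | {claim}) for d in deps)

        print(well_founded("socrates is mortal"))       # True
        print(well_founded("this sentence is untrue"))  # False

    On this toy reading the Liar-style entry fails the test, which is
    the kind of exclusion olcott appeals to; the dispute is over
    whether such a test can be made general, not over the toy case.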
  • From Mikko@mikko.levanto@iki.fi to sci.logic,comp.theory,sci.math,comp.ai.philosophy on Mon Mar 30 12:02:50 2026
    From Newsgroup: comp.theory

    On 28/03/2026 18:06, olcott wrote:
    [...]

    Every meaningful expression of language that has a
    well-founded justification tree and is in the body
    of knowledge is computable.

    I.e., if you know that a sentence is true you can write a program
    that tells whether it is true.
    --
    Mikko
    --- Synchronet 3.21f-Linux NewsLink 1.2
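
    Mikko's paraphrase here is the standard observation that any
    single, already-settled sentence is trivially "decidable": a
    constant program suffices. A minimal sketch of why that yields no
    general truth decider, using a made-up two-entry table as the
    "body of knowledge" (every name below is hypothetical):

        # Lookup "decider" over a toy body of knowledge.
        KNOWN_TRUE = {"2 + 2 = 4", "all bachelors are unmarried"}

        def decide_known(sentence: str) -> bool:
            """Trivially 'computes truth' by table lookup."""
            return sentence in KNOWN_TRUE

        print(decide_known("2 + 2 = 4"))  # True
        print(decide_known("P = NP"))     # False, but only because
                                          # the sentence is absent,
                                          # not because it is refuted

    Such a lookup works only on sentences settled in advance by other
    means; it cannot distinguish "not yet known" from "false", which
    is exactly where the undecidability results live.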
  • From Mikko@mikko.levanto@iki.fi to sci.logic,comp.theory,sci.math,comp.ai.philosophy on Mon Mar 30 12:10:15 2026
    From Newsgroup: comp.theory

    On 28/03/2026 19:32, olcott wrote:
    On 3/28/2026 11:06 AM, olcott wrote:
    [...]

    Every meaningful expression of language that has a
    well-founded justification tree and is in the body
    of knowledge is computable.

    Most undecidable decision problem instances are
    excluded as lacking a well-founded justification tree.

    That does not help if you don't know and have no way to find out whether
    the instance you are asked about has or lacks a well-founded
    justification tree.
    --
    Mikko
    --- Synchronet 3.21f-Linux NewsLink 1.2
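
    Mikko's closing objection has a standard computability-theoretic
    shape: restricting a problem to instances with some nice property
    only helps if that property is itself decidable. A minimal Python
    sketch of that structure, in which every name is a hypothetical
    stand-in rather than anything proposed in the thread:

        def has_justification_tree(instance) -> bool:
            # For the restriction to help, this pre-check would
            # itself have to be a total computable predicate over
            # all instances, which is exactly what is in dispute.
            raise NotImplementedError("no general test is given")

        def decide_easy(instance) -> bool:
            # Stand-in for the instances claimed to be computable.
            return True

        def restricted_decider(instance) -> bool:
            """Answer only instances that pass the pre-check."""
            if not has_justification_tree(instance):
                raise ValueError("instance excluded from the domain")
            return decide_easy(instance)

    Unless the pre-check is computable for every instance, the
    original difficulty has been renamed rather than removed.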