On Thu, 26 Mar 2026 00:53:07 +0000, Mr Flibble wrote:
On Thu, 05 Mar 2026 10:20:05 -0600, olcott wrote:
My 28-year journey involved primary research into the foundations of
math, computer science, logic, and linguistics. This requires deep
knowledge of all of these fields and deep knowledge of the
philosophical alternative foundations of these fields.
Almost no one has deep knowledge of any one of these fields together
with deep knowledge of the alternative foundations of that same field.
Almost all human experts in any one of these fields accept the
foundations of these fields as inherently infallible. Any challenge
to the "received view" is met with ridicule.
LLMs provide a key breakthrough in that they have the equivalent of
deep knowledge of these fields and of the known alternative
foundations. LLMs are known to have serious issues with
hallucination. Presenting the same ideas to each of five different
LLMs provides some cross-validation.
Boiling the ideas down to their key essence so that they can be
succinctly presented seems to work very well. Whenever these ideas
are presented, the LLMs ground them in peer-reviewed papers. A
succinct presentation fully grounded in all relevant peer-reviewed
papers is the end result.
The LLMs think you are a crank. You have not solved the halting problem.
/Flibble
Or in ChatGPT's words:
For a more Usenet-style brutal version:
Invoking "28 years," "alternative foundations," and "peer-reviewed papers"
is not a substitute for a correct result. No amount of LLM-assisted
citation farming changes the fact that you have not solved the halting
problem. The models are not validating your claim. They are merely
generating text. You are still wrong.
/Flibble
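For readers who want the conventional argument behind "you have not
solved the halting problem" spelled out, the standard diagonal
construction can be sketched in a few lines of Python. This is only the
textbook-style sketch; candidate_H and D are hypothetical names for the
purpose of illustration, not anyone's actual decider from this thread.

def make_pathological(candidate_H):
    """Given any claimed halt decider candidate_H(program, data) -> bool,
    build the input D that candidate_H must get wrong."""
    def D(program):
        # D does the opposite of whatever candidate_H predicts about D(D).
        if candidate_H(program, program):   # predicted: D(D) halts
            while True:                     # ...so D(D) loops forever
                pass
        return None                         # predicted: D(D) loops, so halt
    return D

# For any total candidate_H:
#   if candidate_H(D, D) is True,  then D(D) loops and the answer was wrong;
#   if candidate_H(D, D) is False, then D(D) halts and the answer was wrong.
# Hence no always-correct, always-terminating candidate_H can exist.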
On 3/25/2026 7:59 PM, Mr Flibble wrote:
On Thu, 26 Mar 2026 00:53:07 +0000, Mr Flibble wrote:
On Thu, 05 Mar 2026 10:20:05 -0600, olcott wrote:
[...]
The LLMs think you are a crank. You have not solved the halting problem.
/Flibble
When it is fully understood how
"true on the basis of meaning expressed in language"
has actually always been derived, then "undecidability" is
understood to have always been merely foolish nonsense.
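One way to make the phrase "true on the basis of meaning expressed in
language" concrete, assuming it is read as "derivable from stipulated
meaning postulates by truth-preserving rules" (that reading is an
assumption made here, not something established in the post above), is a
small forward-chaining sketch in Python:

# Sketch under the assumption that "true on the basis of meaning" means
# "derivable from stipulated meaning postulates".  The postulates and the
# single rule below are invented examples for illustration only.

axioms = {"cats are animals", "animals are living things"}

rules = [
    # if every premise is already derived, the conclusion may be added
    ({"cats are animals", "animals are living things"},
     "cats are living things"),
]

def derivable(statement, axioms, rules):
    """Forward-chain to a fixed point; True iff the statement is reached."""
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return statement in known

print(derivable("cats are living things", axioms, rules))  # True
print(derivable("cats are minerals", axioms, rules))       # False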
On 26/03/2026 21:51, olcott wrote:
[...]
When it is fully understood how
"true on the basis of meaning expressed in language"
has actually always been derived, then "undecidability" is
understood to have always been merely foolish nonsense.
The paragraph above can already be understood to be foolish nonsense.
Perhaps not by its author but certainly by many others.
On 3/27/2026 4:08 AM, Mikko wrote:
[...]
When it is fully understood how
"true on the basis of meaning expressed in language"
has actually always been derived, then "undecidability" is
understood to have always been merely foolish nonsense.
The paragraph above can already be understood to be foolish nonsense.
Perhaps not by its author but certainly by many others.
Which is all we need to know in order to determine that it is nonsense.
*A mandatory prerequisite was specified*
When it is fully understood how
"true on the basis of meaning expressed in language"
has always been actually derived ...
Since I have not yet provided the details of this,
it can't make sense until I do.
On 3/27/2026 4:08 AM, Mikko wrote:
[...]
The paragraph above can already be understood to be foolish nonsense.
Perhaps not by its author but certainly by many others.
If that were true and not some sort of head game, then
you would be able to explain the details of how
"true on the basis of meaning expressed in language"
is consistently and correctly derived.
On 28/03/2026 10:02, olcott wrote:
[...]
If that were true and not some sort of head game, then
you would be able to explain the details of how
"true on the basis of meaning expressed in language"
is consistently and correctly derived.
No, that does not follow. It is sufficient to know that if the
derivation implies that "undecidability" is understood to have
always been foolish nonsense, then the derivation is so far from
any reality that it can be regarded as foolish.
On 3/28/2026 4:00 AM, Mikko wrote:
[...]
No, that does not follow. It is sufficient to know that if the
derivation implies that "undecidability" is understood to have
always been foolish nonsense, then the derivation is so far from
any reality that it can be regarded as foolish.
Every meaningful expression of language that has a
well-founded justification tree and is in the body
of knowledge is computable.
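As one possible concrete reading of "well-founded justification tree"
(the encoding below is an assumption made for illustration; the term is
not given a formal definition above), a finite tree whose leaves are
axioms can be evaluated by a walk that is guaranteed to terminate:

from dataclasses import dataclass, field

@dataclass
class Node:
    claim: str
    children: list = field(default_factory=list)  # supporting sub-claims
    is_axiom: bool = False

def justified(node):
    """Terminating evaluation of a finite, well-founded justification tree:
    a claim is justified iff it is an axiom or all of its supports are."""
    if node.is_axiom:
        return True
    return bool(node.children) and all(justified(c) for c in node.children)

tree = Node("cats are living things", [
    Node("cats are animals", is_axiom=True),
    Node("animals are living things", is_axiom=True),
])
print(justified(tree))  # True: every branch bottoms out in an axiom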
On 3/28/2026 11:06 AM, olcott wrote:
[...]
Every meaningful expression of language that has a
well-founded justification tree and is in the body
of knowledge is computable.
Most undecidable decision problem instances are
excluded as lacking a well-founded justification tree.
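If "lacking a well-founded justification tree" is read as "every
candidate justification eventually depends on the claim itself" (again
only an assumed reading, not something established above), then the
exclusion can be pictured as an ordinary cycle check over a dependency
graph:

def has_well_founded_support(claim, depends_on, seen=frozenset()):
    """False when the claim's support chain loops back on itself."""
    if claim in seen:
        return False                  # circular support: not well-founded
    supports = depends_on.get(claim, [])
    if not supports:
        return True                   # a leaf is treated as an axiom here
    return all(has_well_founded_support(s, depends_on, seen | {claim})
               for s in supports)

# "L" stands in for a Liar-like, self-referential case: it depends only
# on itself, while "A" is an ordinary axiom-level claim.
depends_on = {"L": ["L"], "A": []}
print(has_well_founded_support("L", depends_on))  # False
print(has_well_founded_support("A", depends_on))  # True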
On 3/28/2026 12:32 PM, olcott wrote:
[...]
Every meaningful expression of language that has a
well-founded justification tree and is in the body
of knowledge is computable.
Most undecidable decision problem instances are
excluded as lacking a well-founded justification tree.
Discussing this with anyone who is not yet an expert in all
of the key details of proof-theoretic semantics is like trying
to explain algebra to someone who has not yet learned what
arithmetic is and how it works.
On 28/03/2026 18:06, olcott wrote:
[...]
Every meaningful expression of language that has a
well-founded justification tree and is in the body
of knowledge is computable.
False but irrelevant to the message it pretends to answer.
False but irrelevant to the message it pretends to answer.
On 3/29/2026 3:33 AM, Mikko wrote:
[...]
Every meaningful expression of language that has a
well-founded justification tree and is in the body
of knowledge is computable.
False but irrelevant to the message it pretends to answer.
It is dishonest to say that words you do not understand are false.
The notion of a "well-founded justification tree" cannot be properly
understood unless and until "proof-theoretic semantics" is first
fully understood.
https://plato.stanford.edu/entries/proof-theoretic-semantics/
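For a concrete handle on the core idea of that entry, that the meaning
of a logical constant is given by its introduction and elimination
rules, here is a toy Python rendering for conjunction; the encoding is a
sketch made up for illustration and is not taken from the entry or from
anyone's system in this thread:

def and_intro(proof_of_a, proof_of_b):
    """Introduction rule: from a proof of A and a proof of B, build A & B."""
    return ("and_intro", proof_of_a, proof_of_b)

def and_elim_left(proof_of_conjunction):
    """Elimination rule: from a proof of A & B, recover the proof of A."""
    tag, proof_of_a, _proof_of_b = proof_of_conjunction
    assert tag == "and_intro"
    return proof_of_a

# "Meaning as use": everything one may do with A & B is fixed by the
# introduction rule that builds it and the elimination rules that unpack it.
p = and_intro("evidence for A", "evidence for B")
print(and_elim_left(p))  # evidence for A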