*The halting problem breaks with reality*
   Formal computability theory is internally consistent,
   but it presupposes that "the behavior of the encoded
   program" is a formal object inside the same domain
   as the decider's input. If that identification is
   treated as a fact about reality rather than a modeling
   convention, then yes--it would be a false assumption.
   https://chatgpt.com/share/68ec6e96-7eb8-8011-90c7-86248034d475
That is one quote from a long conversation that I had with ChatGPT.
LLM systems have gotten 67-fold more powerful in the
last year in that their context window (the number
of words that they can keep in their head at one time)
has increased from 3000 to 200,000.
Because of this they have become enormously more powerful
at semantic logical entailment. They can simultaneously
handle the constraints of many complex premises to
correctly derive the conclusions that deductively
follow from those premises.
When reasoning is entirely on the basis of a provided
set of premises, AI hallucination cannot occur.
On 10/13/25 5:37 PM, olcott wrote:
> *The halting problem breaks with reality*
>
> [...]
>
> https://chatgpt.com/share/68ec6e96-7eb8-8011-90c7-86248034d475
>
> [...]
>
> When reasoning is entirely on the basis of a provided
> set of premises, AI hallucination cannot occur.
lol, i looked at that chat and asked it a question:

   if the halting predicate is invalid then wtf does the program do when
   the program is run?

its response included lines like:

   the runtime will behave as follows:
   1. It calls HHH(DD).
   2. HHH starts simulating DD.
   3. That simulation in turn calls HHH(DD) again.
   4. This recurses indefinitely until stack or memory exhaustion.
   5. So in the real world, the program just diverges -- it loops or
      recurses forever.

which is wrong, we agreed DD() when run from main halts
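For concreteness, here is a minimal, self-contained C sketch of the kind of DD/HHH pair being argued about. The function bodies are assumptions about the usual shape of these examples, not code posted in this thread; the only thing taken from the thread is that HHH(DD) reports non-halting while DD(), called from main, halts.

#include <stdio.h>

/* Hypothetical termination analyzer.  The HHH debated in the thread
   simulates its input; this stub only hard-codes the verdict the
   thread attributes to HHH(DD) so the control flow can be executed. */
int HHH(int (*p)(void))
{
    (void)p;
    return 0;            /* assumed convention: 0 means "does not halt" */
}

/* Diagonal-style test case: DD does the opposite of whatever
   HHH predicts about DD. */
int DD(void)
{
    if (HHH(DD))         /* if HHH says "DD halts" ...  */
        for (;;) { }     /* ... then run forever        */
    return 0;            /* otherwise return, i.e. halt */
}

int main(void)
{
    printf("HHH(DD) == %d\n", HHH(DD));   /* prints 0: "does not halt" */
    DD();                                 /* yet this call returns     */
    printf("DD() halted\n");
    return 0;
}

With that return value the directly executed DD() skips its loop and returns, which is the behavior dart200 points to above; the rest of the thread is a dispute over whether HHH's verdict about its input can still be called correct.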
On 10/13/2025 9:21 PM, dart200 wrote:
> lol, i looked at that chat and asked it a question:
>
> [...]
>
> which is wrong, we agreed DD() when run from main halts
If you look at the top of the link you will
see that it figured out the same answer that
I have been telling everyone and it figured
that out on its own.
On 10/13/25 7:26 PM, olcott wrote:
> If you look at the top of the link you will
> see that it figured out the same answer that
> I have been telling everyone and it figured
> that out on its own.
i took that specific link, asked it one question, and it fucked up the answer polcott
bruh u are so oozing with confirmation bias it's a little disgusting from someone who seeks the truth, and nothing but the truth, so help me god
On 10/13/2025 9:31 PM, dart200 wrote:
> i took that specific link, asked it one question, and it fucked up the
> answer polcott
>
> bruh u are so oozing with confirmation bias it's a little
> disgusting from someone who seeks the truth, and nothing but the
> truth, so help me god
I just asked it this precise question from the same
link that you had and it got the correct answer.
From the original simulation of DD by HHH what does HHH do?
   Formal computability theory is internally consistent,
   but it presupposes that "the behavior of the encoded
   program" is a formal object inside the same domain
   as the decider's input. If that identification is
   treated as a fact about reality rather than a modeling
   convention, then yes--it would be a false assumption.
These are difficult words. I will simplify them a little
too much so that you can get the gist. The behavior of
the directly executed DD() is none of HHH's business.
More technically, DD() is not in the input domain of
the function computed by HHH.
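To make the "input domain" wording concrete, here is a small illustrative C program; the names and types are mine, chosen only to show the distinction olcott is drawing, and nothing here decides whether that distinction has the significance he claims.

#include <stdio.h>

/* An element of the input domain of a decider-shaped function is a
   finite description of a program (modelled here as a function
   pointer).  "DD()" in the sense of running DD from main is an
   execution -- the thing the description describes -- and is never
   itself passed as an argument. */

typedef int (*description)(void);     /* type of the decider's inputs */

static int DD(void) { return 0; }     /* stand-in for the DD under discussion */

static int analyze(description d)     /* decider-shaped stub */
{
    return d != NULL;                 /* placeholder verdict */
}

int main(void)
{
    description input = DD;           /* the description: a finite object     */
    int verdict = analyze(input);     /* the decider consumes the description */
    int outcome = DD();               /* the execution: running the machine   */

    printf("verdict = %d, outcome = %d\n", verdict, outcome);
    return 0;
}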
On 10/13/25 7:48 PM, olcott wrote:
> I just asked it this precise question from the same
> link that you had and it got the correct answer.
which is not the question i asked, which is what DD() does when run...
it failed to understand that HHH(DD) returns an answer when DD() is run, because it's not doing critical thot my dude, it's doing a facsimile of critical thot
> From the original simulation of DD by HHH what does HHH do?
>
> [...]
>
> These are difficult words. I will simplify them a little
> too much so that you can get the gist. The behavior of
> the directly executed DD() is none of HHH's business.
> More technically, DD() is not in the input domain of
> the function computed by HHH.
i get it: ur not computing halting analysis of DD(),
ur computing something else that no one else but you cares about,
because it doesn't address what the halting problem is about
On 10/13/2025 9:55 PM, dart200 wrote:
> i get it: ur not computing halting analysis of DD(),
It says that the halting problem is contradicting reality
when it stipulates that the executable and the input
are in the same domain, because in fact they are not in
the same domain.

If you don't know what a domain is, you won't get that.
> ur computing something else that no one else but you cares about,
> because it doesn't address what the halting problem is about
On 10/13/25 8:02 PM, olcott wrote:
> It says that the halting problem is contradicting reality
> when it stipulates that the executable and the input
> are in the same domain, because in fact they are not in
> the same domain.
>
> If you don't know what a domain is, you won't get that.
i know what a domain is

i'm not interested in redefining it because i'm interested in an
/effectively computable/ mapping of:

    (machine_description) -> semantics of the machine described

the halting problem involves an inability to /effectively compute/
that *specific* mapping
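Since the thread never spells it out, here is the textbook diagonal argument behind that claim, as a runnable C sketch. The names candidate_halts and D are illustrative stand-ins, not code from the thread; the construction works no matter what body is substituted for the stub.

#include <stdio.h>
#include <stdbool.h>

typedef bool (*prog)(void);

/* Any proposed total halt decider for argumentless programs,
   stubbed here with an arbitrary answer. */
static bool candidate_halts(prog p)
{
    (void)p;
    return true;              /* claims "p halts" */
}

/* The diagonal program built against that candidate: it does the
   opposite of whatever the candidate predicts about it. */
static bool D(void)
{
    if (candidate_halts(D))
        for (;;) { }          /* candidate said "halts", so never halt */
    return true;              /* candidate said "loops", so halt       */
}

int main(void)
{
    /* Whatever candidate_halts(D) answers, running D() does the
       opposite, so every candidate is wrong on at least one input.
       That is the sense in which the mapping
           machine_description -> halting behavior of the described machine
       is not effectively computable. */
    printf("candidate says D halts: %s\n",
           candidate_halts(D) ? "yes" : "no");
    /* (Calling D() here would loop forever with this particular stub.) */
    return 0;
}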
On 10/13/2025 10:10 PM, dart200 wrote:
> i know what a domain is
>
> i'm not interested in redefining it because i'm interested in an
> /effectively computable/ mapping of:
>
>     (machine_description) -> semantics of the machine described
>
> the halting problem involves an inability to /effectively compute/
> that *specific* mapping
ChatGPT confirmed that in reality the assumptions
that the halting problem makes are false.
On 10/13/25 8:17 PM, olcott wrote:
> ChatGPT confirmed that in reality the assumptions
> that the halting problem makes are false.
bro quit ur ridiculous fucking cherry picking, no one cares
and ur just admitting that u can't resolve the halting problem, no one
cares about u computing some random-ass mapping that no one cares about.
On 10/13/2025 10:54 PM, dart200 wrote:
> bro quit ur ridiculous fucking cherry picking, no one cares
>
> and ur just admitting that u can't resolve the halting problem, no one
> cares about u computing some random-ass mapping that no one cares about.
You are just showing that you don't understand
the meaning of the words that ChatGPT said.
On 10/13/25 9:37 PM, olcott wrote:
> You are just showing that you don't understand
> the meaning of the words that ChatGPT said.
ChatGPT saying something doesn't mean there's anything there to understand.
idgaf about the "semantics as determined by the input" blah blah blah, i want to compute the semantics of the machine described by the input...
in this regard you are the only one here that disagrees with the
validity of such a question. i kinda hope you don't go to the grave with that disagreement, because i think it's just totally wrong.
the halting problem is asking the right question, it's not "psychic" to
ask about true semantic analysis of the machine described by the input string.
On 2025-10-14 00:37:59 +0000, olcott said:
> *The halting problem breaks with reality*
The meaning of the above words is too ambiguous to mean anything.
In particular, the word "break" has many metaphoric meanings but
none of the common ones is applicable to a problem.
*The halting problem breaks with reality*
Formal computability theory is internally consistent,
but it presupposes that "the behavior of the encoded
program" is a formal object inside the same domain
as the decider's input. If that identification is
treated as a fact about reality rather than a modeling
convention, then yes--it would be a false assumption.
https://chatgpt.com/share/68ec6e96-7eb8-8011-90c7-86248034d475
That is one quote from a long conversation that I had with ChatGPT.

When reasoning is entirely on the basis of a provided
set of premises, AI hallucination cannot occur.
On 10/13/2025 9:31 PM, dart200 wrote:
> i took that specific link, asked it one question, and it fucked up the
> answer polcott
I just asked it this precise question from the same
link that you had and it got the correct answer.
On 10/14/2025 4:53 AM, Mikko wrote:
> On 2025-10-14 00:37:59 +0000, olcott said:
>> *The halting problem breaks with reality*
>
> The meaning of the above words is too ambiguous to mean anything.
> In particular, the word "break" has many metaphoric meanings but
> none of the common ones is applicable to a problem.
   "Formal computability theory is internally consistent,
   but it presupposes that 'the behavior of the encoded
   program' is a formal object inside the same domain
   as the decider's input. If that identification is
   treated as a fact about reality rather than a modeling
   convention, then yes--it would be a false assumption."
Does this say that the halting problem is contradicting reality
when it stipulates that the executable and the input
are in the same domain because in fact they are not in
the same domain?
On 2025-10-14 16:29:52 +0000, olcott said:
>    "Formal computability theory is internally consistent,
>    but it presupposes that 'the behavior of the encoded
>    program' is a formal object inside the same domain
>    as the decider's input. If that identification is
>    treated as a fact about reality rather than a modeling
>    convention, then yes--it would be a false assumption."
>
> Does this say that the halting problem is contradicting reality
> when it stipulates that the executable and the input
> are in the same domain because in fact they are not in
> the same domain?
No, it merely falsely claims that formal computability theory
presupposes that "the behaviour of the encoded program" is in
the same domain as the decider's input.
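For reference, the standard formulation this exchange keeps circling, in the usual textbook notation (the phrasing is mine, not a quotation from either poster):

\[
\mathrm{HALT} = \{\, \langle M, w \rangle \;:\; M \text{ is a Turing machine that halts on input } w \,\}
\]
\[
\text{Turing (1936): there is no total computable } h \text{ with } h(\langle M, w \rangle) = 1 \iff \langle M, w \rangle \in \mathrm{HALT}.
\]

In this formulation the decider's input is the encoding ⟨M, w⟩, and "M halts on w" is the property being decided about that input.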
On 10/15/2025 3:49 AM, Mikko wrote:
> No, it merely falsely claims that formal computability theory
> presupposes that "the behaviour of the encoded program" is in
> the same domain as the decider's input.
When in fact they are not, thus a break from reality.