• The halting problem breaks with reality

    From olcott@polcott333@gmail.com to comp.theory on Mon Oct 13 19:37:59 2025
    From Newsgroup: comp.theory

    *The halting problem breaks with reality*

    Formal computability theory is internally consistent,
    but it presupposes that "the behavior of the encoded
    program" is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption.
    https://chatgpt.com/share/68ec6e96-7eb8-8011-90c7-86248034d475

    That is one quote from a long conversation that I had with ChatGPT.

    LLM systems have gotten 67-fold more powerful in the
    last year in that their context window (the number
    of words that they can keep in their head at one time)
    has increased from 3000 to 200,000.

    Because of this they have become enormously more powerful
    at semantic logical entailment. They can simultaneously
    handle the constraints of many complex premises to
    correctly derive the conclusions that deductively
    follow from those premises.

    When reasoning is entirely on the basis of a provided
    set of premises AI hallucination cannot occur.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory on Mon Oct 13 19:21:26 2025
    From Newsgroup: comp.theory

    On 10/13/25 5:37 PM, olcott wrote:
    *The halting problem breaks with reality*

    Formal computability theory is internally consistent,
    but it presupposes that "the behavior of the encoded
    program" is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption.
    https://chatgpt.com/share/68ec6e96-7eb8-8011-90c7-86248034d475

    That one quote from a long conversation that I had with ChatGPT.

    LLM systems have gotten 67-fold more powerful in the
    last year in that their context window (the number
    of words that they can keep in their head at one time)
    has increased from 3000 to 200,000.

    Because of this they have become enormously more powerful
    at semantic logical entailment. They can simultaneously
    handle the constraints of many complex premises to
    correctly derive the conclusions that deductively
    follow from those premises.

    When reasoning is entirely on the basis of a provided
    set of premises AI hallucination cannot occur.


    lol, i look that chat and asked it a question:

    if the halting predicate is invalid then wtf does the program do when
    the program is run?

    its response included lines like:

    the runtime will behave as follows:

    1. It calls HHH(DD).

    2. HHH starts simulating DD.

    3. That simulation in turn calls HHH(DD) again.

    4. This recurses indefinitely until stack or memory exhaustion.

    5. So in the real world, the program just diverges -- it loops or
    recurses forever.

    which is wrong, we agreed DD() when run from main halts()
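
    A minimal, self-contained C sketch of the construction being argued
    about (the names DD, HHH and main follow the thread; HHH here is only
    a stub that returns 0, standing in for the simulating decider, which
    is not shown). With that verdict, DD() invoked from main() skips its
    infinite loop and halts, which is the "DD() when run from main
    halts()" point above:

    #include <stdio.h>

    int HHH(int (*p)(void));        /* halt decider: 1 = halts, 0 = does not */

    int DD(void)
    {
        int Halt_Status = HHH(DD);  /* ask the decider about DD itself    */
        if (Halt_Status)
            for (;;) ;              /* told "halts"  -> loop forever      */
        return Halt_Status;         /* told "loops"  -> return, i.e. halt */
    }

    /* Stub standing in for the simulating halt decider discussed above;
       it reports 0 ("does not halt"), which is what HHH is said to report. */
    int HHH(int (*p)(void)) { (void)p; return 0; }

    int main(void)
    {
        printf("HHH(DD) = %d\n", HHH(DD));  /* prints 0                     */
        printf("DD()    = %d\n", DD());     /* DD() returns 0, so it halted */
        return 0;
    }

    If the stub instead returns 1, DD() loops forever, so either fixed
    verdict from HHH is contradicted by DD's actual behavior.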
    --
    a burnt out swe investigating into why our tooling doesn't involve
    basic semantic proofs like halting analysis

    please excuse my pseudo-pyscript,

    ~ nick

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Mon Oct 13 21:26:28 2025
    From Newsgroup: comp.theory

    On 10/13/2025 9:21 PM, dart200 wrote:
    On 10/13/25 5:37 PM, olcott wrote:
    *The halting problem breaks with reality*

    Formal computability theory is internally consistent,
    but it presupposes that "the behavior of the encoded
    program" is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption.
    https://chatgpt.com/share/68ec6e96-7eb8-8011-90c7-86248034d475

    That one quote from a long conversation that I had with ChatGPT.

    LLM systems have gotten 67-fold more powerful in the
    last year in that their context window (the number
    of words that they can keep in their head at one time)
    has increased from 3000 to 200,000.

    Because of this they have become enormously more powerful
    at semantic logical entailment. They can simultaneously
    handle the constraints of many complex premises to
    correctly derive the conclusions that deductively
    follow from those premises.

    When reasoning is entirely on the basis of a provided
    set of premises AI hallucination cannot occur.


    lol, i look that chat and asked it a question:

    if the halting predicate is invalid then wtf does the program do when
    the program is run?

    it's response included lines like:

    the runtime will behave as follows:

    1. It calls HHH(DD).

    2. HHH starts simulating DD.

    3. That simulation in turn calls HHH(DD) again.

    4. This recurses indefinitely until stack or memory exhaustion.

    5. So in the real world, the program just diverges -- it loops or recurses forever.

    which is wrong, we agreed DD() when run from main halts()


    If you look at the top of the link you will
    see that it figured out the same answer that
    I have been telling everyone and it figured
    that out on its own.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory on Mon Oct 13 19:31:56 2025
    From Newsgroup: comp.theory

    On 10/13/25 7:26 PM, olcott wrote:
    On 10/13/2025 9:21 PM, dart200 wrote:
    On 10/13/25 5:37 PM, olcott wrote:
    *The halting problem breaks with reality*

    Formal computability theory is internally consistent,
    but it presupposes that "the behavior of the encoded
    program" is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption.
    https://chatgpt.com/share/68ec6e96-7eb8-8011-90c7-86248034d475

    That one quote from a long conversation that I had with ChatGPT.

    LLM systems have gotten 67-fold more powerful in the
    last year in that their context window (the number
    of words that they can keep in their head at one time)
    has increased from 3000 to 200,000.

    Because of this they have become enormously more powerful
    at semantic logical entailment. They can simultaneously
    handle the constraints of many complex premises to
    correctly derive the conclusions that deductively
    follow from those premises.

    When reasoning is entirely on the basis of a provided
    set of premises AI hallucination cannot occur.


    lol, i look that chat and asked it a question:

    if the halting predicate is invalid then wtf does the program do when
    the program is run?

    it's response included lines like:

    the runtime will behave as follows:

    1. It calls HHH(DD).

    2. HHH starts simulating DD.

    3. That simulation in turn calls HHH(DD) again.

    4. This recurses indefinitely until stack or memory exhaustion.

    5. So in the real world, the program just diverges -- it loops or
    recurses forever.

    which is wrong, we agreed DD() when run from main halts()


    If you look at the top of the link you will
    see that it figured out the same answer that
    I have been telling everyone and it figured
    that out on its own.


    i took that specific link, asked it one question, and it fucked up the
    answer polcott

    bruh u are so oozing with confirmation bias it's a little disgusting
    from someone who seeks the truth, and nothing but the truth, so help me god
    --
    a burnt out swe investigating into why our tooling doesn't involve
    basic semantic considerations like halting analysis.

    please excuse my pseudo-pyscript,

    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Mon Oct 13 21:48:44 2025
    From Newsgroup: comp.theory

    On 10/13/2025 9:31 PM, dart200 wrote:
    On 10/13/25 7:26 PM, olcott wrote:
    On 10/13/2025 9:21 PM, dart200 wrote:
    On 10/13/25 5:37 PM, olcott wrote:
    *The halting problem breaks with reality*

    Formal computability theory is internally consistent,
    but it presupposes that "the behavior of the encoded
    program" is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption.
    https://chatgpt.com/share/68ec6e96-7eb8-8011-90c7-86248034d475

    That one quote from a long conversation that I had with ChatGPT.

    LLM systems have gotten 67-fold more powerful in the
    last year in that their context window (the number
    of words that they can keep in their head at one time)
    has increased from 3000 to 200,000.

    Because of this they have become enormously more powerful
    at semantic logical entailment. They can simultaneously
    handle the constraints of many complex premises to
    correctly derive the conclusions that deductively
    follow from those premises.

    When reasoning is entirely on the basis of a provided
    set of premises AI hallucination cannot occur.


    lol, i look that chat and asked it a question:

    if the halting predicate is invalid then wtf does the program do when
    the program is run?

    it's response included lines like:

    the runtime will behave as follows:

    1. It calls HHH(DD).

    2. HHH starts simulating DD.

    3. That simulation in turn calls HHH(DD) again.

    4. This recurses indefinitely until stack or memory exhaustion.

    5. So in the real world, the program just diverges -- it loops or
    recurses forever.

    which is wrong, we agreed DD() when run from main halts()


    If you look at the top of the link you will
    see that it figured out the same answer that
    I have been telling everyone and it figured
    that out on its own.


    i took that specific link, asked it one question, and it fucked up the answer polcott

    bruh u are you so oozing with confirmation bias it's a little disgusting from someone who seeks the truth, and nothing but the truth, so help me god


    I just asked it this precise question from the same
    link that you had and it got the correct answer.

    From the original simulation of DD by HHH what does HHH do?

    Formal computability theory is internally consistent,
    but it presupposes that "the behavior of the encoded
    program" is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption.

    These are difficult words. I will simplify them a little
    too much so that you can get the gist. The behavior of
    the directly executed DD() is none of HHH's business.

    More technically DD() is not in the input domain of
    the function computed by HHH.
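
    For reference, the textbook formulation that the word "domain" points
    at here (standard notation, not taken from this thread): the halting
    function is defined over finite descriptions,

    \[
      H(\langle M \rangle, x) =
      \begin{cases}
        1 & \text{if the machine } M \text{ described by } \langle M \rangle \text{ halts on input } x,\\
        0 & \text{otherwise,}
      \end{cases}
    \]

    so the decider's argument is the finite string <M>, and the question
    it is asked concerns the machine that string describes. Whether HHH
    is obliged to answer that question about DD is exactly what the rest
    of this thread disputes.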
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory on Mon Oct 13 19:55:43 2025
    From Newsgroup: comp.theory

    On 10/13/25 7:48 PM, olcott wrote:
    On 10/13/2025 9:31 PM, dart200 wrote:
    On 10/13/25 7:26 PM, olcott wrote:
    On 10/13/2025 9:21 PM, dart200 wrote:
    On 10/13/25 5:37 PM, olcott wrote:
    *The halting problem breaks with reality*

    Formal computability theory is internally consistent,
    but it presupposes that "the behavior of the encoded
    program" is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption.
    https://chatgpt.com/share/68ec6e96-7eb8-8011-90c7-86248034d475

    That one quote from a long conversation that I had with ChatGPT.

    LLM systems have gotten 67-fold more powerful in the
    last year in that their context window (the number
    of words that they can keep in their head at one time)
    has increased from 3000 to 200,000.

    Because of this they have become enormously more powerful
    at semantic logical entailment. They can simultaneously
    handle the constraints of many complex premises to
    correctly derive the conclusions that deductively
    follow from those premises.

    When reasoning is entirely on the basis of a provided
    set of premises AI hallucination cannot occur.


    lol, i look that chat and asked it a question:

    if the halting predicate is invalid then wtf does the program do
    when
    the program is run?

    it's response included lines like:

    the runtime will behave as follows:

    1. It calls HHH(DD).

    2. HHH starts simulating DD.

    3. That simulation in turn calls HHH(DD) again.

    4. This recurses indefinitely until stack or memory exhaustion.

    5. So in the real world, the program just diverges -- it loops or
    recurses forever.

    which is wrong, we agreed DD() when run from main halts()


    If you look at the top of the link you will
    see that it figured out the same answer that
    I have been telling everyone and it figured
    that out on its own.


    i took that specific link, asked it one question, and it fucked up the
    answer polcott

    bruh u are you so oozing with confirmation bias it's a little
    disgusting from someone who seeks the truth, and nothing but the
    truth, so help me god


    I just asked it this precise question from the same
    link that you had and it got the correct answer.

    which is not the question i asked, which is what DD() does when run...

    it failed to understand that HHH(DD) returns an answer when DD() is run, because it's not doing critical thot my dude, it's doing a facsimile of critical thot


    From the original simulation of DD by HHH what does HHH do?

    Formal computability theory is internally consistent,
    but it presupposes that "the behavior of the encoded
    program" is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption.

    These are difficult words. I will simplify them a little
    too much so that you can get the gist. The behavior of
    the directly executed DD() is none-of-the-business of HHH.

    More technically DD() is not in the input domain of
    the function computed by HHH.

    i get it: ur not computing halting analysis of DD(),

    ur computing something else that no one else but you cares about,

    because it doesn't address what the halting problem is about
    --
    a burnt out swe investigating into why our tooling doesn't involve
    basic semantic proofs like halting analysis

    please excuse my pseudo-pyscript,

    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Mon Oct 13 22:02:31 2025
    From Newsgroup: comp.theory

    On 10/13/2025 9:55 PM, dart200 wrote:
    On 10/13/25 7:48 PM, olcott wrote:
    On 10/13/2025 9:31 PM, dart200 wrote:
    On 10/13/25 7:26 PM, olcott wrote:
    On 10/13/2025 9:21 PM, dart200 wrote:
    On 10/13/25 5:37 PM, olcott wrote:
    *The halting problem breaks with reality*

    Formal computability theory is internally consistent,
    but it presupposes that "the behavior of the encoded
    program" is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption.
    https://chatgpt.com/share/68ec6e96-7eb8-8011-90c7-86248034d475

    That one quote from a long conversation that I had with ChatGPT.

    LLM systems have gotten 67-fold more powerful in the
    last year in that their context window (the number
    of words that they can keep in their head at one time)
    has increased from 3000 to 200,000.

    Because of this they have become enormously more powerful
    at semantic logical entailment. They can simultaneously
    handle the constraints of many complex premises to
    correctly derive the conclusions that deductively
    follow from those premises.

    When reasoning is entirely on the basis of a provided
    set of premises AI hallucination cannot occur.


    lol, i look that chat and asked it a question:

    if the halting predicate is invalid then wtf does the program do when
    the program is run?

    it's response included lines like:

    the runtime will behave as follows:

    1. It calls HHH(DD).

    2. HHH starts simulating DD.

    3. That simulation in turn calls HHH(DD) again.

    4. This recurses indefinitely until stack or memory exhaustion.

    5. So in the real world, the program just diverges -- it loops or
    recurses forever.

    which is wrong, we agreed DD() when run from main halts()


    If you look at the top of the link you will
    see that it figured out the same answer that
    I have been telling everyone and it figured
    that out on its own.


    i took that specific link, asked it one question, and it fucked up
    the answer polcott

    bruh u are you so oozing with confirmation bias it's a little
    disgusting from someone who seeks the truth, and nothing but the
    truth, so help me god


    I just asked it this precise question from the same
    link that you had and it got the correct answer.

    which is not the question i asked, which is what DD() does when run...

    it failed to understand that HHH(DD) returns an answer when DD() is run, because it's not doing critical thot my dude, it's doing a facsimile of critical thot


    From the original simulation of DD by HHH what does HHH do?

    Formal computability theory is internally consistent,
    but it presupposes that "the behavior of the encoded
    program" is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption.

    These are difficult words. I will simplify them a little
    too much so that you can get the gist. The behavior of
    the directly executed DD() is none-of-the-business of HHH.

    More technically DD() is not in the input domain of
    the function computed by HHH.

    i get it: ur not computing halting analysis of DD(),


    It says that the halting problem contradicts reality
    when it stipulates that the executable and the input
    are in the same domain, because in fact they are not in
    the same domain.

    If you don't know what a domain is you won't get that.

    ur computing something else that no one else but you cares about,

    because it doesn't address what the halting problem is about

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory on Mon Oct 13 20:10:23 2025
    From Newsgroup: comp.theory

    On 10/13/25 8:02 PM, olcott wrote:
    On 10/13/2025 9:55 PM, dart200 wrote:
    On 10/13/25 7:48 PM, olcott wrote:
    On 10/13/2025 9:31 PM, dart200 wrote:
    On 10/13/25 7:26 PM, olcott wrote:
    On 10/13/2025 9:21 PM, dart200 wrote:
    On 10/13/25 5:37 PM, olcott wrote:
    *The halting problem breaks with reality*

    Formal computability theory is internally consistent,
    but it presupposes that "the behavior of the encoded
    program" is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption.
    https://chatgpt.com/share/68ec6e96-7eb8-8011-90c7-86248034d475

    That one quote from a long conversation that I had with ChatGPT.

    LLM systems have gotten 67-fold more powerful in the
    last year in that their context window (the number
    of words that they can keep in their head at one time)
    has increased from 3000 to 200,000.

    Because of this they have become enormously more powerful
    at semantic logical entailment. They can simultaneously
    handle the constraints of many complex premises to
    correctly derive the conclusions that deductively
    follow from those premises.

    When reasoning is entirely on the basis of a provided
    set of premises AI hallucination cannot occur.


    lol, i look that chat and asked it a question:

    if the halting predicate is invalid then wtf does the program do when
    the program is run?

    it's response included lines like:

    the runtime will behave as follows:

    1. It calls HHH(DD).

    2. HHH starts simulating DD.

    3. That simulation in turn calls HHH(DD) again.

    4. This recurses indefinitely until stack or memory exhaustion.

    5. So in the real world, the program just diverges -- it loops or
    recurses forever.

    which is wrong, we agreed DD() when run from main halts()


    If you look at the top of the link you will
    see that it figured out the same answer that
    I have been telling everyone and it figured
    that out on its own.


    i took that specific link, asked it one question, and it fucked up
    the answer polcott

    bruh u are you so oozing with confirmation bias it's a little
    disgusting from someone who seeks the truth, and nothing but the
    truth, so help me god


    I just asked it this precise question from the same
    link that you had and it got the correct answer.

    which is not the question i asked, which is what DD() does when run...

    it failed to understand that HHH(DD) returns an answer when DD() is
    run, because it's not doing critical thot my dude, it's doing a
    facsimile of critical thot


    From the original simulation of DD by HHH what does HHH do?

    Formal computability theory is internally consistent,
    but it presupposes that "the behavior of the encoded
    program" is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption.

    These are difficult words. I will simplify them a little
    too much so that you can get the gist. The behavior of
    the directly executed DD() is none-of-the-business of HHH.

    More technically DD() is not in the input domain of
    the function computed by HHH.

    i get it: ur not computing halting analysis of DD(),


    It says that the halting problem contradicting reality
    when it stipulates that the executable and the input
    are in the same domain because in fact they are not in
    the same domain.

    If you don't know what a domain is you won't get that.

    i know what a domain is

    i'm not interested in redefining it because i'm interested in an
    /effectively computable/ mapping of:

    (machine_description) -> semantics of the machine described

    the halting problem involves an inability to /effectively compute/ that *specific* mapping
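
    The standard reason that *specific* mapping is not effectively
    computable can be shown with the usual diagonal sketch (the names
    below are illustrative, not from the thread): whatever total
    candidate decider is plugged in, the program built from it does the
    opposite of what the candidate predicts about that very program.

    #include <stdio.h>

    typedef int (*subject)(void);
    typedef int (*decider)(subject);     /* 1 = "halts", 0 = "runs forever" */

    static decider candidate;            /* the candidate under test        */

    static int confound(void)
    {
        if (candidate(confound))         /* predicted to halt?              */
            for (;;) ;                   /* ...then run forever             */
        return 0;                        /* predicted to run forever? halt  */
    }

    /* Example candidate that always answers "runs forever"; any other
       total candidate is defeated the same way on its own confound case. */
    static int says_never_halts(subject p) { (void)p; return 0; }

    int main(void)
    {
        candidate = says_never_halts;
        printf("candidate(confound) = %d\n", candidate(confound));  /* 0 */
        printf("confound() = %d, i.e. it halted anyway\n", confound());
        return 0;
    }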


    ur computing something else that no one else but you cares about,

    because it doesn't address what the halting problem is about

    --
    a burnt out swe investigating into why our tooling doesn't involve
    basic semantic proofs like halting analysis

    please excuse my pseudo-pyscript,

    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Mon Oct 13 22:17:39 2025
    From Newsgroup: comp.theory

    On 10/13/2025 10:10 PM, dart200 wrote:
    On 10/13/25 8:02 PM, olcott wrote:
    On 10/13/2025 9:55 PM, dart200 wrote:
    On 10/13/25 7:48 PM, olcott wrote:
    On 10/13/2025 9:31 PM, dart200 wrote:
    On 10/13/25 7:26 PM, olcott wrote:
    On 10/13/2025 9:21 PM, dart200 wrote:
    On 10/13/25 5:37 PM, olcott wrote:
    *The halting problem breaks with reality*

    Formal computability theory is internally consistent,
    but it presupposes that "the behavior of the encoded
    program" is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption.
    https://chatgpt.com/share/68ec6e96-7eb8-8011-90c7-86248034d475

    That one quote from a long conversation that I had with ChatGPT.

    LLM systems have gotten 67-fold more powerful in the
    last year in that their context window (the number
    of words that they can keep in their head at one time)
    has increased from 3000 to 200,000.

    Because of this they have become enormously more powerful
    at semantic logical entailment. They can simultaneously
    handle the constraints of many complex premises to
    correctly derive the conclusions that deductively
    follow from those premises.

    When reasoning is entirely on the basis of a provided
    set of premises AI hallucination cannot occur.


    lol, i look that chat and asked it a question:

    if the halting predicate is invalid then wtf does the program do when
    the program is run?

    it's response included lines like:

    the runtime will behave as follows:

    1. It calls HHH(DD).

    2. HHH starts simulating DD.

    3. That simulation in turn calls HHH(DD) again.

    4. This recurses indefinitely until stack or memory exhaustion.

    5. So in the real world, the program just diverges -- it loops or
    recurses forever.

    which is wrong, we agreed DD() when run from main halts()


    If you look at the top of the link you will
    see that it figured out the same answer that
    I have been telling everyone and it figured
    that out on its own.


    i took that specific link, asked it one question, and it fucked up
    the answer polcott

    bruh u are you so oozing with confirmation bias it's a little
    disgusting from someone who seeks the truth, and nothing but the
    truth, so help me god


    I just asked it this precise question from the same
    link that you had and it got the correct answer.

    which is not the question i asked, which is what DD() does when run...

    it failed to understand that HHH(DD) returns an answer when DD() is
    run, because it's not doing critical thot my dude, it's doing a
    facsimile of critical thot


    From the original simulation of DD by HHH what does HHH do?

    Formal computability theory is internally consistent,
    but it presupposes that "the behavior of the encoded
    program" is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption.

    These are difficult words. I will simplify them a little
    too much so that you can get the gist. The behavior of
    the directly executed DD() is none-of-the-business of HHH.

    More technically DD() is not in the input domain of
    the function computed by HHH.

    i get it: ur not computing halting analysis of DD(),


    It says that the halting problem contradicting reality
    when it stipulates that the executable and the input
    are in the same domain because in fact they are not in
    the same domain.

    If you don't know what a domain is you won't get that.

    i know what a domain is

    i'm not interested in redefining it because i'm interested in an / effectively computable/ mapping of:

    (machine_description) -> semantics of the machine describe

    the halting problem involves an inability to /effectively compute/ that *specific* mapping


    ChatGPT confirmed that in reality the assumptions
    that the halting problem makes are false.


    ur computing something else that no one else but you cares about,

    because it doesn't address what the halting problem is about

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory on Mon Oct 13 20:54:47 2025
    From Newsgroup: comp.theory

    On 10/13/25 8:17 PM, olcott wrote:
    On 10/13/2025 10:10 PM, dart200 wrote:
    On 10/13/25 8:02 PM, olcott wrote:
    On 10/13/2025 9:55 PM, dart200 wrote:
    On 10/13/25 7:48 PM, olcott wrote:
    On 10/13/2025 9:31 PM, dart200 wrote:
    On 10/13/25 7:26 PM, olcott wrote:
    On 10/13/2025 9:21 PM, dart200 wrote:
    On 10/13/25 5:37 PM, olcott wrote:
    *The halting problem breaks with reality*

    Formal computability theory is internally consistent,
    but it presupposes that "the behavior of the encoded
    program" is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption.
    https://chatgpt.com/share/68ec6e96-7eb8-8011-90c7-86248034d475

    That one quote from a long conversation that I had with ChatGPT.

    LLM systems have gotten 67-fold more powerful in the
    last year in that their context window (the number
    of words that they can keep in their head at one time)
    has increased from 3000 to 200,000.

    Because of this they have become enormously more powerful
    at semantic logical entailment. They can simultaneously
    handle the constraints of many complex premises to
    correctly derive the conclusions that deductively
    follow from those premises.

    When reasoning is entirely on the basis of a provided
    set of premises AI hallucination cannot occur.


    lol, i look that chat and asked it a question:

    if the halting predicate is invalid then wtf does the program do when
    the program is run?

    it's response included lines like:

    the runtime will behave as follows:

    1. It calls HHH(DD).

    2. HHH starts simulating DD.

    3. That simulation in turn calls HHH(DD) again.

    4. This recurses indefinitely until stack or memory exhaustion.

    5. So in the real world, the program just diverges -- it loops or
    recurses forever.

    which is wrong, we agreed DD() when run from main halts()


    If you look at the top of the link you will
    see that it figured out the same answer that
    I have been telling everyone and it figured
    that out on its own.


    i took that specific link, asked it one question, and it fucked up
    the answer polcott

    bruh u are you so oozing with confirmation bias it's a little
    disgusting from someone who seeks the truth, and nothing but the
    truth, so help me god


    I just asked it this precise question from the same
    link that you had and it got the correct answer.

    which is not the question i asked, which is what DD() does when run...
    it failed to understand that HHH(DD) returns an answer when DD() is
    run, because it's not doing critical thot my dude, it's doing a
    facsimile of critical thot


    From the original simulation of DD by HHH what does HHH do?

    Formal computability theory is internally consistent,
    but it presupposes that "the behavior of the encoded
    program" is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption.

    These are difficult words. I will simplify them a little
    too much so that you can get the gist. The behavior of
    the directly executed DD() is none-of-the-business of HHH.

    More technically DD() is not in the input domain of
    the function computed by HHH.

    i get it: ur not computing halting analysis of DD(),


    It says that the halting problem contradicting reality
    when it stipulates that the executable and the input
    are in the same domain because in fact they are not in
    the same domain.

    If you don't know what a domain is you won't get that.

    i know what a domain is

    i'm not interested in redefining it because i'm interested in an /
    effectively computable/ mapping of:

    (machine_description) -> semantics of the machine describe

    the halting problem involves an inability to /effectively compute/
    that *specific* mapping


    ChatGPT confirmed that in reality the assumptions
    that the halting problem makes are false.

    bro quit ur ridiculous fucking cherry picking, no one cares

    and ur just admitting that u can't resolve the halting problem, no one
    cares about u computing some random-ass mapping that no one cares about.



    ur computing something else that no one else but you cares about,

    because it doesn't address what the halting problem is about



    --
    a burnt out swe investigating into why our tooling doesn't involve
    basic semantic proofs like halting analysis

    please excuse my pseudo-pyscript,

    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Mon Oct 13 23:37:49 2025
    From Newsgroup: comp.theory

    On 10/13/2025 10:54 PM, dart200 wrote:
    On 10/13/25 8:17 PM, olcott wrote:
    On 10/13/2025 10:10 PM, dart200 wrote:
    On 10/13/25 8:02 PM, olcott wrote:
    On 10/13/2025 9:55 PM, dart200 wrote:
    On 10/13/25 7:48 PM, olcott wrote:
    On 10/13/2025 9:31 PM, dart200 wrote:
    On 10/13/25 7:26 PM, olcott wrote:
    On 10/13/2025 9:21 PM, dart200 wrote:
    On 10/13/25 5:37 PM, olcott wrote:
    *The halting problem breaks with reality*

    Formal computability theory is internally consistent,
    but it presupposes that "the behavior of the encoded
    program" is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption.
    https://chatgpt.com/share/68ec6e96-7eb8-8011-90c7-86248034d475

    That one quote from a long conversation that I had with ChatGPT.

    LLM systems have gotten 67-fold more powerful in the
    last year in that their context window (the number
    of words that they can keep in their head at one time)
    has increased from 3000 to 200,000.

    Because of this they have become enormously more powerful
    at semantic logical entailment. They can simultaneously
    handle the constraints of many complex premises to
    correctly derive the conclusions that deductively
    follow from those premises.

    When reasoning is entirely on the basis of a provided
    set of premises AI hallucination cannot occur.


    lol, i look that chat and asked it a question:

    if the halting predicate is invalid then wtf does the
    program do when
    the program is run?

    it's response included lines like:

    the runtime will behave as follows:

    1. It calls HHH(DD).

    2. HHH starts simulating DD.

    3. That simulation in turn calls HHH(DD) again.

    4. This recurses indefinitely until stack or memory exhaustion.

    5. So in the real world, the program just diverges -- it loops or
    recurses forever.

    which is wrong, we agreed DD() when run from main halts()


    If you look at the top of the link you will
    see that it figured out the same answer that
    I have been telling everyone and it figured
    that out on its own.


    i took that specific link, asked it one question, and it fucked up
    the answer polcott

    bruh u are you so oozing with confirmation bias it's a little
    disgusting from someone who seeks the truth, and nothing but the
    truth, so help me god


    I just asked it this precise question from the same
    link that you had and it got the correct answer.

    which is not the question i asked, which is what DD() does when run...

    it failed to understand that HHH(DD) returns an answer when DD() is
    run, because it's not doing critical thot my dude, it's doing a
    facsimile of critical thot


    From the original simulation of DD by HHH what does HHH do?

    Formal computability theory is internally consistent,
    but it presupposes that "the behavior of the encoded
    program" is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption.

    These are difficult words. I will simplify them a little
    too much so that you can get the gist. The behavior of
    the directly executed DD() is none-of-the-business of HHH.

    More technically DD() is not in the input domain of
    the function computed by HHH.

    i get it: ur not computing halting analysis of DD(),


    It says that the halting problem contradicting reality
    when it stipulates that the executable and the input
    are in the same domain because in fact they are not in
    the same domain.

    If you don't know what a domain is you won't get that.

    i know what a domain is

    i'm not interested in redefining it because i'm interested in an /
    effectively computable/ mapping of:

    (machine_description) -> semantics of the machine describe

    the halting problem involves an inability to /effectively compute/
    that *specific* mapping


    ChatGPT confirmed that in reality the assumptions
    that the halting problem makes are false.

    bro quit ur ridiculous fucking cherry picking, no one cares

    and ur just admiting that u can't resolve the halting problem, no one
    cares about u computing some random-ass mapping that no one cares about.


    You are just showing that you don't understand
    the meaning of the words that ChatGPT said.



    ur computing something else that no one else but you cares about,

    because it doesn't address what the halting problem is about





    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory on Mon Oct 13 22:45:16 2025
    From Newsgroup: comp.theory

    On 10/13/25 9:37 PM, olcott wrote:
    On 10/13/2025 10:54 PM, dart200 wrote:
    On 10/13/25 8:17 PM, olcott wrote:
    On 10/13/2025 10:10 PM, dart200 wrote:
    On 10/13/25 8:02 PM, olcott wrote:
    On 10/13/2025 9:55 PM, dart200 wrote:
    On 10/13/25 7:48 PM, olcott wrote:
    On 10/13/2025 9:31 PM, dart200 wrote:
    On 10/13/25 7:26 PM, olcott wrote:
    On 10/13/2025 9:21 PM, dart200 wrote:
    On 10/13/25 5:37 PM, olcott wrote:
    *The halting problem breaks with reality*

    Formal computability theory is internally consistent,
    but it presupposes that "the behavior of the encoded
    program" is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption.
    https://chatgpt.com/share/68ec6e96-7eb8-8011-90c7-86248034d475

    That one quote from a long conversation that I had with ChatGPT.

    LLM systems have gotten 67-fold more powerful in the
    last year in that their context window (the number
    of words that they can keep in their head at one time)
    has increased from 3000 to 200,000.

    Because of this they have become enormously more powerful
    at semantic logical entailment. They can simultaneously
    handle the constraints of many complex premises to
    correctly derive the conclusions that deductively
    follow from those premises.

    When reasoning is entirely on the basis of a provided
    set of premises AI hallucination cannot occur.


    lol, i look that chat and asked it a question:

    if the halting predicate is invalid then wtf does the program do when
    the program is run?

    it's response included lines like:

    the runtime will behave as follows:

    1. It calls HHH(DD).

    2. HHH starts simulating DD.

    3. That simulation in turn calls HHH(DD) again.

    4. This recurses indefinitely until stack or memory
    exhaustion.

    5. So in the real world, the program just diverges -- it loops or
    recurses forever.

    which is wrong, we agreed DD() when run from main halts()


    If you look at the top of the link you will
    see that it figured out the same answer that
    I have been telling everyone and it figured
    that out on its own.


    i took that specific link, asked it one question, and it fucked up
    the answer polcott

    bruh u are you so oozing with confirmation bias it's a little
    disgusting from someone who seeks the truth, and nothing but the
    truth, so help me god


    I just asked it this precise question from the same
    link that you had and it got the correct answer.

    which is not the question i asked, which is what DD() does when
    run...

    it failed to understand that HHH(DD) returns an answer when DD()
    is run, because it's not doing critical thot my dude, it's doing a
    facsimile of critical thot


    From the original simulation of DD by HHH what does HHH do?

    Formal computability theory is internally consistent,
    but it presupposes that "the behavior of the encoded
    program" is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption.

    These are difficult words. I will simplify them a little
    too much so that you can get the gist. The behavior of
    the directly executed DD() is none-of-the-business of HHH.

    More technically DD() is not in the input domain of
    the function computed by HHH.

    i get it: ur not computing halting analysis of DD(),


    It says that the halting problem contradicting reality
    when it stipulates that the executable and the input
    are in the same domain because in fact they are not in
    the same domain.

    If you don't know what a domain is you won't get that.

    i know what a domain is

    i'm not interested in redefining it because i'm interested in an /
    effectively computable/ mapping of:

    (machine_description) -> semantics of the machine describe

    the halting problem involves an inability to /effectively compute/
    that *specific* mapping


    ChatGPT confirmed that in reality the assumptions
    that the halting problem makes are false.

    bro quit ur ridiculous fucking cherry picking, no one cares

    and ur just admiting that u can't resolve the halting problem, no one
    cares about u computing some random-ass mapping that no one cares about.


    You are just showing that you don't understand
    the meaning of the words that ChatGPT said.

    ChatGPT saying anything doesn't mean anything to understand.

    idgaf about the "semantics as determined by the input" blah blah blah, i
    want to compute the semantics of the machine described by the input...

    in this regards you are the only one here that disagrees with the
    validity of such a question. i kinda hope you don't go to the grave with
    that disagreement, because i think it's just totally wrong.


    the halting problem is asking the right question, it's not "psychic" to
    ask about true semantic analysis of the machine described by the input
    string.




    ur computing something else that no one else but you cares about,

    because it doesn't address what the halting problem is about







    --
    a burnt out swe investigating into why our tooling doesn't involve
    basic semantic proofs like halting analysis

    please excuse my pseudo-pyscript,

    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory on Mon Oct 13 22:52:19 2025
    From Newsgroup: comp.theory

    On 10/13/25 10:45 PM, dart200 wrote:
    On 10/13/25 9:37 PM, olcott wrote:
    On 10/13/2025 10:54 PM, dart200 wrote:
    On 10/13/25 8:17 PM, olcott wrote:
    On 10/13/2025 10:10 PM, dart200 wrote:
    On 10/13/25 8:02 PM, olcott wrote:
    On 10/13/2025 9:55 PM, dart200 wrote:
    On 10/13/25 7:48 PM, olcott wrote:
    On 10/13/2025 9:31 PM, dart200 wrote:
    On 10/13/25 7:26 PM, olcott wrote:
    On 10/13/2025 9:21 PM, dart200 wrote:
    On 10/13/25 5:37 PM, olcott wrote:
    *The halting problem breaks with reality*

    Formal computability theory is internally consistent,
    but it presupposes that "the behavior of the encoded
    program" is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption.
    https://chatgpt.com/share/68ec6e96-7eb8-8011-90c7-86248034d475

    That one quote from a long conversation that I had with ChatGPT.

    LLM systems have gotten 67-fold more powerful in the
    last year in that their context window (the number
    of words that they can keep in their head at one time)
    has increased from 3000 to 200,000.

    Because of this they have become enormously more powerful
    at semantic logical entailment. They can simultaneously
    handle the constraints of many complex premises to
    correctly derive the conclusions that deductively
    follow from those premises.

    When reasoning is entirely on the basis of a provided
    set of premises AI hallucination cannot occur.


    lol, i look that chat and asked it a question:

    if the halting predicate is invalid then wtf does the program do when
    the program is run?

    it's response included lines like:

    the runtime will behave as follows:

    1. It calls HHH(DD).

    2. HHH starts simulating DD.

    3. That simulation in turn calls HHH(DD) again.

    4. This recurses indefinitely until stack or memory exhaustion.

    5. So in the real world, the program just diverges -- it loops or
    recurses forever.

    which is wrong, we agreed DD() when run from main halts()

    If you look at the top of the link you will
    see that it figured out the same answer that
    I have been telling everyone and it figured
    that out on its own.


    i took that specific link, asked it one question, and it fucked up
    the answer polcott

    bruh u are you so oozing with confirmation bias it's a little
    disgusting from someone who seeks the truth, and nothing but
    the truth, so help me god


    I just asked it this precise question from the same
    link that you had and it got the correct answer.

    which is not the question i asked, which is what DD() does when run...

    it failed to understand that HHH(DD) returns an answer when DD() is
    run, because it's not doing critical thot my dude, it's doing a
    facsimile of critical thot


    From the original simulation of DD by HHH what does HHH do?

    Formal computability theory is internally consistent,
    but it presupposes that "the behavior of the encoded
    program" is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption.

    These are difficult words. I will simplify them a little
    too much so that you can get the gist. The behavior of
    the directly executed DD() is none-of-the-business of HHH.

    More technically DD() is not in the input domain of
    the function computed by HHH.

    i get it: ur not computing halting analysis of DD(),


    It says that the halting problem contradicting reality
    when it stipulates that the executable and the input
    are in the same domain because in fact they are not in
    the same domain.

    If you don't know what a domain is you won't get that.

    i know what a domain is

    i'm not interested in redefining it because i'm interested in an /
    effectively computable/ mapping of:

    (machine_description) -> semantics of the machine describe

    the halting problem involves an inability to /effectively compute/
    that *specific* mapping


    ChatGPT confirmed that in reality the assumptions
    that the halting problem makes are false.

    bro quit ur ridiculous fucking cherry picking, no one cares

    and ur just admiting that u can't resolve the halting problem, no one
    cares about u computing some random-ass mapping that no one cares about.

    You are just showing that you don't understand
    the meaning of the words that ChatGPT said.

    ChatGPT saying anything doesn't mean anything to understand.

    idgaf about the "semantics as determined by the input" blah blah blah, i want to compute the semantics of the machine described by the input...

    in this regards you are the only one here that disagrees with the
    validity of such a question. i kinda hope you don't go to the grave with that disagreement, because i think it's just totally wrong.


    the halting problem is asking the right question, it's not "psychic" to
    ask about true semantic analysis of the machine described by the input string.

    every day we program we depend on our own ability to do semantic
    analysis of the machines we build, it's not invalid for us to write down
    an objective mapping function as much as we know, or can compute...

    but to compute it, we'd need a valid, computable way to interact with
    that total halting mapping, and i don't think u've provided that,

    and instead are still denying its ability to exist...





    ur computing something else that no one else but you cares about,
    because it doesn't address what the halting problem is about









    --
    a burnt out swe investigating into why our tooling doesn't involve
    basic semantic proofs like halting analysis

    please excuse my pseudo-pyscript,

    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory on Tue Oct 14 12:53:11 2025
    From Newsgroup: comp.theory

    On 2025-10-14 00:37:59 +0000, olcott said:

    *The halting problem breaks with reality*

    The meaning of the above words is too ambiguous to mean anything.
    In particular, the word "break" has many metaphoric meanings but
    none of the common ones is applicable to a problem.
    --
    Mikko

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Tue Oct 14 11:29:52 2025
    From Newsgroup: comp.theory

    On 10/14/2025 4:53 AM, Mikko wrote:
    On 2025-10-14 00:37:59 +0000, olcott said:

    *The halting problem breaks with reality*

    The meaning of the above words is too ambiguous to mean anything.
    In particular, the word "break" has many metaphoric meanings but
    none of the common ones is applicable to a problem.


    "Formal computability theory is internally consistent,
    but it presupposes that 'the behavior of the encoded
    program' is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption."

    Does this say that the halting problem is contradicting reality
    when it stipulates that the executable and the input
    are in the same domain because in fact they are not in
    the same domain?

    https://chatgpt.com/share/68ee799d-d548-8011-9227-dce897245daa
    It provides all of its detailed reasoning of why it agrees
    with me and explicitly confirms that I am correct.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Tue Oct 14 18:09:39 2025
    From Newsgroup: comp.theory

    On 2025-10-14, olcott <polcott333@gmail.com> wrote:
    *The halting problem breaks with reality*

    Formal computability theory is internally consistent,
    but it presupposes that "the behavior of the encoded
    program" is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption.
    https://chatgpt.com/share/68ec6e96-7eb8-8011-90c7-86248034d475

    That one quote from a long conversation that I had with ChatGPT.

    Well, no kidding; the halting problem is not about "reality"; it is
    an abstract problem in mathematics.

    When reasoning is entirely on the basis of a provided
    set of premises AI hallucination cannot occur.

    Hallucinations cannot occur because, obviously, you are iterating
    on your chats until you get the machine to accurately regurgitate
    /your own/ hallucinations, and no others.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.theory on Tue Oct 14 18:12:39 2025
    From Newsgroup: comp.theory

    On 2025-10-14, olcott <polcott333@gmail.com> wrote:
    On 10/13/2025 9:31 PM, dart200 wrote:
    i took that specific link, asked it one question, and it fucked up the
    answer polcott

    I just asked it this precise question from the same
    link that you had and it got the correct answer.

    Obviously, it's a Pathological Incorrect Question whose answer depends
    on who is asking.

    Dart200 forgot to prefix this:

    When posed to Carol Cott:

    If the halting predicate is invalid then wtf does the program do when
    the program is run?

    :)
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory on Wed Oct 15 11:49:55 2025
    From Newsgroup: comp.theory

    On 2025-10-14 16:29:52 +0000, olcott said:

    On 10/14/2025 4:53 AM, Mikko wrote:
    On 2025-10-14 00:37:59 +0000, olcott said:

    *The halting problem breaks with reality*

    The meaning of the above words is too ambiguous to mean anything.
    In particular, the word "break" has many metaphoric meanings but
    none of the common ones is applicable to a problem.

    "Formal computability theory is internally consistent,
    but it presupposes that 'the behavior of the encoded
    program' is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption."

    Does this say that the halting problem is contradicting reality
    when it stipulates that the executable and the input
    are in the same domain because in fact they are not in
    the same domain?

    No, it merely falsely claims that formal computability theory
    presupposes that "the behaviour of the encoded program" is in
    the same domain as the decider's input.
    --
    Mikko

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Wed Oct 15 07:21:00 2025
    From Newsgroup: comp.theory

    On 10/15/2025 3:49 AM, Mikko wrote:
    On 2025-10-14 16:29:52 +0000, olcott said:

    On 10/14/2025 4:53 AM, Mikko wrote:
    On 2025-10-14 00:37:59 +0000, olcott said:

    *The halting problem breaks with reality*

    The meaning of the above words is too ambiguous to mean anything.
    In particular, the word "break" has many metaphoric meanings but
    none of the common ones is applicable to a problem.

    "Formal computability theory is internally consistent,
    but it presupposes that 'the behavior of the encoded
    program' is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption."

    Does this say that the halting problem is contradicting reality
    when it stipulates that the executable and the input
    are in the same domain because in fact they are not in
    the same domain?

    No, it merely falsely claims that formal computability theory
    presupposes that "the behaviour of the encoded program" is in
    the same domain as the decider's input.


    When in fact they are not, thus a break from reality.
    The halting problem stipulates that they are in the
    same domain. Correct semantic entailment proves that
    they are not.

    HHH(DD)==0 and HHH1(DD)==1 proves this when the ultimate
    measure of the behavior that the input specifies is
    the simulation of its input by its decider according to
    the semantics of its language.
    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory on Thu Oct 16 11:38:30 2025
    From Newsgroup: comp.theory

    On 2025-10-15 12:21:00 +0000, olcott said:

    On 10/15/2025 3:49 AM, Mikko wrote:
    On 2025-10-14 16:29:52 +0000, olcott said:

    On 10/14/2025 4:53 AM, Mikko wrote:
    On 2025-10-14 00:37:59 +0000, olcott said:

    *The halting problem breaks with reality*

    The meaning of the above words is too ambiguous to mean anything.
    In particular, the word "break" has many metaphoric meanings but
    none of the common ones is applicable to a problem.

    "Formal computability theory is internally consistent,
    but it presupposes that 'the behavior of the encoded
    program' is a formal object inside the same domain
    as the decider's input. If that identification is
    treated as a fact about reality rather than a modeling
    convention, then yes--it would be a false assumption."

    Does this say that the halting problem is contradicting reality
    when it stipulates that the executable and the input
    are in the same domain because in fact they are not in
    the same domain?

    No, it merely falsely claims that formal computability theory
    presupposes that "the behaviour of the encoded program" is in
    the same domain as the decider's input.

    When in fact they are not, thus a break from reality.

    Yes, the text in quotes breaks (in some sense that is unusual enough
    that dictionaries don't mention it) from reality but the halting
    problem does not.
    --
    Mikko

    --- Synchronet 3.21a-Linux NewsLink 1.2