• Re: on the limit to undecidability

    From dart200@user7160@newsgrouper.org.invalid to comp.theory,alt.buddha.short.fat.guy on Sun Dec 28 23:13:54 2025
    From Newsgroup: comp.theory

    On 12/28/25 3:40 PM, Richard Damon wrote:
    On 12/28/25 5:23 PM, dart200 wrote:


    it doesn't really matter if we have a test for it, the fact is it will
    be hit during a full enumeration, and it cannot be functionally
    equivalent to any of the machines that came before it.

    And how do you know you HAVE come to it?

    How do you distinguish the machine that can't be decided on, versus the machine that halts but just needs more steps than you have run?

    The problem is you end up not being able to make the decision in a
    bounded time.

    i don't need brute force to determine that the machine <q0 0 0 q0> is an infinite loop
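
    for concreteness, here is a minimal sketch in pseudo-python of that
    non-brute-force check (the four-field encoding and the helper name are
    purely illustrative assumptions, not anyone's actual tool): a rule that
    maps (q0, 0) back to writing 0 and staying in q0 reproduces the same
    configuration every step on a blank tape, so it provably never halts.

        # hypothetical sketch: statically spot the trivial loop <q0 0 0 q0>.
        # rules maps (state, read_symbol) -> (write_symbol, next_state); the
        # four-field encoding has no head move, so if the rule rewrites the
        # same symbol and stays in the same state, the configuration can
        # never change and the machine never halts -- no simulation needed.
        def is_trivial_loop(rules, blank="0", start="q0"):
            write, nxt = rules.get((start, blank), (None, None))
            return write == blank and nxt == start

        rules = {("q0", "0"): ("0", "q0")}     # the machine <q0 0 0 q0>
        print(is_trivial_loop(rules))          # True, with zero steps simulated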
    --
    a burnt out swe investigating why our tooling doesn't involve
    basic semantic proofs like halting analysis

    please excuse my pseudo-pyscript,

    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,alt.buddha.short.fat.guy on Mon Dec 29 08:37:16 2025
    From Newsgroup: comp.theory

    On 12/29/25 2:13 AM, dart200 wrote:
    On 12/28/25 3:40 PM, Richard Damon wrote:
    On 12/28/25 5:23 PM, dart200 wrote:


    it doesn't really matter if we have a test for it, the fact is it
    will be hit during a full enumeration, and it cannot be functionally
    equivalent to any of the machines that came before it.

    And how do you know you HAVE come to it?

    How do you distinguish the machine that can't be decided on, versus the
    machine that halts but just needs more steps than you have run?

    The problem is you end up not being able to make the decision in a
    bounded time.

    i don't need brute force to determine that the machine <q0 0 0 q0> is an infinite loop


    Right, but that doesn't show that there can't be a machine whose halting you can't determine in finite time.

    It is hard to prove the non-existence of a thing, and it definitely
    can't be done by an example.

    The problem is your method of enumerating all machines doesn't help you
    make this decision.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory on Tue Dec 30 17:37:07 2025
    From Newsgroup: comp.theory

    On 12/26/25 3:37 PM, Tristan Wibberley wrote:
    On 26/12/2025 13:37, Richard Damon wrote:
    On 12/26/25 2:08 AM, Tristan Wibberley wrote:
    On 25/12/2025 21:52, dart200 wrote:

    rick mentioned that there could be "dead code" that keeps the computation
    exactly the same, but the instruction graph isn't isomorphic. so you can
    have machines that compute *exactly* the same, but don't have code level
    isomorphism. computing those would be uncomputable under current theory
    because now we're talking about computing actual semantics, not just
    transition graph comparisons.

    Oh I see, is that what he was talking about. Good thinking. I didn't
    consider unreachable nodes.

    And the nodes don't even need to be unreachable in the graph. We can add
    code like:
    // The following could be a complicated formula, even based on our
    // input, that just turns out to always be 0.
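
    (A small sketch of that idea in the thread's pseudo-python, with purely
    illustrative names: a branch that is reachable in the instruction graph
    but guarded by a formula of the input that always evaluates to 0, so two
    non-isomorphic programs still compute exactly the same function.)

        # sketch: semantically dead but graph-reachable code (names illustrative)
        def plain(n: int) -> int:
            return n + 1

        def padded(n: int) -> int:
            # (n * (n + 1)) % 2 is always 0 -- one of two consecutive integers
            # is even -- but seeing that is a semantic fact about the formula,
            # not something a transition-graph isomorphism check can notice.
            if (n * (n + 1)) % 2 != 0:
                while True:                    # never entered, yet in the graph
                    pass
            return n + 1

        # plain and padded compute exactly the same function, but their
        # instruction graphs are not isomorphic.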

    No, dart200's notion of equivalence includes how the tape content varies step-wise; not all segments that produce the same result are
    step-by-step identical - but I certainly haven't carefully considered that.


    But such equivalence isn't good enough to achieve his goal, as the
    challenge program isn't limited to that set.

    There isn't much interest in trying to create a computable variant of
    the problem that so greatly limits what the input can be.

    This is one of his problems. There are many existing partial halt
    decider algorithms. And having an answer to say that you give up and
    are not going to answer is also a common technique. (Better to answer I
    don't know than to just run forever.)
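
    (As a sketch of that kind of partial decider, in the thread's
    pseudo-python and entirely illustrative: simulate for a bounded number of
    steps, answer LOOPS if an exact configuration repeats, HALTS if a missing
    rule is reached, and UNKNOWN when the budget runs out.)

        # illustrative partial halt analyzer: it is allowed to answer UNKNOWN.
        # rules maps (state, symbol) -> (write, move, next_state); a missing
        # entry means the machine halts. move is -1, 0, or +1.
        def partial_halts(rules, start="q0", blank="0", max_steps=10_000):
            tape, state, head, seen = {}, start, 0, set()
            for _ in range(max_steps):
                symbol = tape.get(head, blank)
                if (state, symbol) not in rules:
                    return "HALTS"
                # crude loop detection: an exact configuration has recurred
                config = (state, head, tuple(sorted(tape.items())))
                if config in seen:
                    return "LOOPS"
                seen.add(config)
                write, move, nxt = rules[(state, symbol)]
                tape[head] = write
                head, state = head + move, nxt
            return "UNKNOWN"    # give up rather than run forever

        print(partial_halts({("q0", "0"): ("0", 0, "q0")}))   # LOOPS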

    He tries to limit the case to the specific pathological case, but he
    can't, as that case isn't limited to the restricted category he can
    check. So he still can't make his machine actually a decider.

    Also, his concept of the input being "incoherent" is just a wrong idea,
    as the machine's behavior is fully defined, so it is not incoherent.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory on Tue Dec 30 22:56:51 2025
    From Newsgroup: comp.theory

    On 12/30/25 2:37 PM, Richard Damon wrote:
    On 12/26/25 3:37 PM, Tristan Wibberley wrote:
    On 26/12/2025 13:37, Richard Damon wrote:
    On 12/26/25 2:08 AM, Tristan Wibberley wrote:
    On 25/12/2025 21:52, dart200 wrote:

    rick mentioned that there could be "dead code" that keeps the computation
    exactly the same, but the instruction graph isn't isomorphic. so you can
    have machines that compute *exactly* the same, but don't have code level
    isomorphism. computing those would be uncomputable under current theory
    because now we're talking about computing actual semantics, not just
    transition graph comparisons.

    Oh I see, is that what he was talking about. Good thinking. I didn't
    consider unreachable nodes.

    And the nodes don't even need to be unreachable in the graph. We can add
    code like:
    // The following could be a complicated formula, even based on our
    // input, that just turns out to always be 0.

    No, dart200's notion of equivalence includes how the tape content varies
    step-wise; not all segments that produce the same result are
    step-by-step identical - but I certainly haven't carefully considered
    that.


    But such equivalence isn't good enough to achieve his goal, as the
    challenge program isn't limited to that set.

    There isn't much interest in trying to create a computable variant of
    the problem that so greatly limits what the input can be.

    This is one of his problems. There are many existing partial halt
    decider algorithms. And having an answer to say that you give up and
    are not going to answer is also a common technique. (Better to answer I
    don't know than to just run forever.)

    i got a new potential goal post:

    adding UNCOMPUTABLE to ghost_detector_halts,

    but following that up with a proof that both REDUCIBLE and UNCOMPUTABLE
    programs (with respect to some semantics) are *always* functionally
    equivalent to some decidedly less-complex machine that is COMPUTABLE ...

    and therefore we can just ignore the fact those exist as a weird but
    irrelevant artifact of self-referential logic ... much like godel's
    useless statement about nothing except the fact it has no proof.

    if u want to do more than that we need reflective computing.
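
    (Sketched in pseudo-python purely to pin down the shape of that proposal;
    every helper below is a named placeholder, not something the thread claims
    to exist or to be implementable in general.)

        # hypothetical shape of the proposal: ghost_detector_halts gains a
        # third verdict instead of being forced into a wrong yes/no.
        from enum import Enum

        class Verdict(Enum):
            HALTS = 1
            LOOPS = 2
            UNCOMPUTABLE = 3    # the self-referential / pathological bucket

        # placeholders only -- whether these can be implemented in general is
        # exactly what is in dispute in this thread.
        def looks_pathological(machine, tape) -> bool:
            raise NotImplementedError("hypothetical classifier")

        def analyze(machine, tape) -> Verdict:
            raise NotImplementedError("hypothetical HALTS/LOOPS analysis")

        def reduce_to_computable_equivalent(machine):
            raise NotImplementedError("the claimed simpler, equivalent machine")

        def ghost_detector_halts(machine, tape) -> Verdict:
            if looks_pathological(machine, tape):
                return Verdict.UNCOMPUTABLE
            return analyze(machine, tape)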


    He tries to limit the case to the specific pathological case, but he
    can't, as that case isn't limited to the restricted category he can
    check. So he still can't make his machine actually a decider.

    Also, his concept of the input being "incoherent" is just a wrong idea,
    as the machine's behavior is fully defined, so it is not incoherent.
    --
    a burnt out swe investigating why our tooling doesn't involve
    basic semantic proofs like halting analysis

    please excuse my pseudo-pyscript,

    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory on Wed Dec 31 07:23:51 2025
    From Newsgroup: comp.theory

    On 12/31/25 1:56 AM, dart200 wrote:
    On 12/30/25 2:37 PM, Richard Damon wrote:
    On 12/26/25 3:37 PM, Tristan Wibberley wrote:
    On 26/12/2025 13:37, Richard Damon wrote:
    On 12/26/25 2:08 AM, Tristan Wibberley wrote:
    On 25/12/2025 21:52, dart200 wrote:

    rick mentioned that there could be "dead code" that keeps the computation
    exactly the same, but the instruction graph isn't isomorphic. so you can
    have machines that compute *exactly* the same, but don't have code level
    isomorphism. computing those would be uncomputable under current theory
    because now we're talking about computing actual semantics, not just
    transition graph comparisons.

    Oh I see, is that what he was talking about. Good thinking. I didn't
    consider unreachable nodes.

    And the nodes don't even need to be unreachable in the graph. We can add
    code like:
    // The following could be a complicated formula, even based on our
    // input, that just turns out to always be 0.

    No, dart200's notion of equivalence includes how the tape content varies
    step-wise; not all segments that produce the same result are
    step-by-step identical - but I certainly haven't carefully considered that.


    But such equivalence isn't good enough to achieve his goal, as the
    challenge program isn't limited to that set.

    There isn't much interest in trying to create a computable variant of
    the problem that so greatly limits what the input can be.

    This is one of his problems. There are many existing partial halt
    decider algorithms. And having an answer to say that you give up and
    are not going to answer is also a common technique. (Better to answer
    I don't know than to just run forever.)

    i got a new potential goal post:

    adding UNCOMPUTABLE to ghost_detector_halts,


    So, how is it going to decide that something is UNCOMPUTABLE?



    but following that up with a proof that both REDUCIBLE and UNCOMPUTABLE
    programs (with respect to some semantics) are *always* functionally
    equivalent to some decidedly less-complex machine that is COMPUTABLE ...

    And thus showing that your assumption makes your system contradictory: if
    every such machine had a functionally equivalent COMPUTABLE stand-in that
    you could actually find, deciding the stand-in would decide the original,
    which the halting theorem rules out.


    and therefore we can just ignore the fact those exist as a weird but
    irrelevant artifact of self-referential logic ... much like godel's
    useless statement about nothing except the fact it has no proof.

    And thus you admit that you are just ignoring the contradictions that
    your assumption made, letting your system blow itself up with that
    contradiction.


    if u want to do more than that we need reflective computing.

    Which, as I pointed out, becomes a contradiction in terms, as it implies
    that the correct answer to an objective question about something that
    doesn't depend on context nevertheless depends on the context.

    Whether a given machine halts or not does not depend on the context in
    which you ask the question, as that machine itself never changes.



    He tries to limit the case to the specific pathological case, but he
    can't, as that case isn't limited to the restricted category he can
    check. So he still can't make his machine actually a decider.

    Also, his concept of the input being "incoherent" is just a wrong idea,
    as the machine's behavior is fully defined, so it is not incoherent.



    --- Synchronet 3.21a-Linux NewsLink 1.2