• Re: is the ct-thesis cooked?

    From dart200@user7160@newsgrouper.org.invalid to comp.theory on Sun Jan 25 20:01:15 2026
    From Newsgroup: comp.theory

    On 1/25/26 2:24 PM, Richard Damon wrote:
    On 1/25/26 4:08 PM, dart200 wrote:
    On 1/25/26 10:20 AM, Richard Damon wrote:
    On 1/24/26 10:30 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:29 PM, dart200 wrote:
    On 1/23/26 5:52 PM, Richard Damon wrote:
    On 1/20/26 8:36 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:
    On 1/20/26 1:46 AM, dart200 wrote:
    On 1/19/26 9:29 PM, Richard Damon wrote:
    On 1/19/26 2:09 AM, dart200 wrote:
    On 1/18/26 12:56 PM, Richard Damon wrote:
    On 1/18/26 1:44 PM, dart200 wrote:
    A Reflective Turing Machine is a mathematical model of a machine that performs a computation with the following pieces:

    1) A Tape, infinite in capacity, divided into cells which, unless otherwise specified, initially contain the "empty" symbol, and capable of storing in each cell one symbol from a defined finite set of symbols.

    2) A Head, which at any point of time points to a specific location on the tape. The head can read the symbol on the tape at its current position, change the symbol at the current location as commanded by the state machine defined below, and move a step at a time in either direction.
    3) A State Machine, that has a register which stores the current "state" from among a finite listing of possible states, and includes a "program" of tuples of data: (Current State, Current Symbol, Operation, New State). When the machine matches the (Current State, Current Symbol), it updates the tape/head according to the Operation, transitions to the New State, and then begins again. The state machine has a secondary temporary buffer tape to store a copy of the current tape during certain operations.

    The list of operations possible:

    - HEAD_RIGHT: move the head one cell to the right

    - HEAD_LEFT: move the head one cell to the left

    - WRITE(SYMBOL): write SYMBOL to the cell at the head

    - REFLECT: will cause a bunch of machine meta-information to be written to the tape, starting at the head, overwriting anything in its path. The information written to tape will include 3 components: the "program" of tuples of data, the current tuple that the operation is part of, and the current tape (the tape state before the command runs). At the end of the Operation, the head will be moved back to its position at the start of the Operation.
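    for anyone who'd rather read code than prose, here's a minimal sketch of the step loop described above. the serialization format used by REFLECT (python repr of the program, current tuple, and tape snapshot) is purely illustrative, and `run_rtm` is not a name from the thread:

```python
from collections import defaultdict

def run_rtm(program, start_state, max_steps=1000):
    """program: dict mapping (state, symbol) -> (operation, new_state);
    operation is "HEAD_RIGHT", "HEAD_LEFT", ("WRITE", sym), or "REFLECT"."""
    tape = defaultdict(lambda: "0")       # "0" stands in for the empty symbol
    head, state = 0, start_state
    for _ in range(max_steps):
        key = (state, tape[head])
        if key not in program:            # no matching tuple: halt
            return state, dict(tape)
        op, new_state = program[key]
        if op == "HEAD_RIGHT":
            head += 1
        elif op == "HEAD_LEFT":
            head -= 1
        elif isinstance(op, tuple) and op[0] == "WRITE":
            tape[head] = op[1]
        elif op == "REFLECT":
            # dump the program, the current tuple, and the pre-REFLECT tape,
            # one serialized chunk per cell starting at the head; the head
            # itself is never moved here, matching "moved back to the start"
            snapshot = dict(tape)
            dump = [repr(program), repr((key, op, new_state)), repr(snapshot)]
            for i, chunk in enumerate(dump):
                tape[head + i] = chunk
        state = new_state
    raise RuntimeError("step budget exhausted")
```

    e.g. running the one-tuple machine <q0 0 REFLECT q1> with this sketch halts in q1 with the dump sitting on the tape.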

    And where does this "meta-information" come from?

    How do you translate the mathematical "tuples" that define the machine into the finite set of symbols of the system?
    when we write a turing machine description and run it ... the machine description is dumped as it is written

    But you don't write a turing machine description, you create a turing machine.

    whatever bro ur just being purposefully obtuse

    No, it is a key point.

    it's a syntax point, which is boring


    A given Turing machine doesn't have *A* description, so what you want it to write out doesn't have a unique definition.

    ok, it dumps it out as the *description number* as defined by turing in his paper /On Computable Numbers/ p240, which can be uniquely defined for every unique machine
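    for reference, the encoding turing gives on p240 can be sketched mechanically; the helper names below are ours, but the letter-to-digit table (A->1, C->2, D->3, L->4, R->5, N->6, ;->7) is the one from the paper:

```python
# letter-to-digit table from p240 of /On Computable Numbers/
LETTER_TO_DIGIT = {"A": "1", "C": "2", "D": "3", "L": "4", "R": "5", "N": "6", ";": "7"}

def standard_description(quintuples):
    # q_i -> "D" + i * "A",  S_j -> "D" + j * "C",  each quintuple ends in ";"
    def state(i):
        return "D" + "A" * i
    def symbol(j):
        return "D" + "C" * j
    return "".join(
        state(qi) + symbol(sj) + symbol(sk) + move + state(ql) + ";"
        for (qi, sj, sk, move, ql) in quintuples
    )

def description_number(quintuples):
    # the D.N: the S.D with its letters replaced by digits, read as an integer
    sd = standard_description(quintuples)
    return int("".join(LETTER_TO_DIGIT[c] for c in sd))
```

    the quintuple (q1, blank, print S1, R, q2) comes out as S.D "DADDCRDAA;" and D.N 3133253117; distinct machines in standard form get distinct numbers, and the number determines the machine.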

    But that page talks about how to get *A* description number based on an arbitrary assignment of symbols to values. Nowhere does it use *the* as the qualifier.

    Nowhere is that number called "unique". Note, page 241 points out that the number you get here will only produce one computable sequence.

    In fact, he doesn't even qualify "standard form" as being unique, but it is *a* standard form, as he knows there are many ways to arbitrarily standardize the form.

    literally the last sentence of p240 to the first of p241:

    /The D.N determine the S.D and the structure of the machine uniquely/

    but i've quoted that at you before and u denied it before, so ofc...
    anyways, if u had more than half a brain u'd know that it doesn't really matter what the specific syntax is ... so long as whatever it dumps consistently determines the structure of the machine uniquely and completely,

    which DNs do, turing demonstrated that with his first paper /On Computable Numbers/, so can we move past theory of computing 101, k?

    or not i guess, i can hear you angrily typing away a willfully contrarian response already!! idk, i guess fuck u too eh???







    i'm not really sure why you have trouble accepting this, let's take a really simple machine using REFLECT:

    Because it is based on a logical error of confusing the turing machine description that can be given to a UTM to simulate the machine, and the machine itself.

    The problem is there are MANY different UTMs, and different UTMs can use different representations, and thus depending on which UTM you target, you get different descriptions, many of which are not actually in the same alphabet as the machine itself.


    <q0 0 REFLECT q1>

    the machine steps would be (format: [state tape], head indicated by ^):

    [q0 0]
    -a-a-a-a ^
    [q1 <q0 0 REFLECT q1>|<q0 0 REFLECT q1>|0]
    -a-a-a-a ^
    halt

    we can quibble about what the format of that dump should be in, but that's not actually that interesting to me

    But it should be. Your problem is you are describing your Turing Machine in a language easy for people to read, but awful for a UTM to process.

    how is that relevant to proving principles with it?

    Because it hides the key issue, that there isn't a unique description for it to write out. If it gets to arbitrarily pick one, then your "program" can't process it, as it doesn't know what "language" it needs to interpret.

    please do actually read turing's paper sometime


    You think I haven't?

    yes, i think u haven't. i'm pretty sure u've looked at some of the words i've quoted, but that's not the same as reading.






    Most actual work with Turing Machines uses very limited alphabets, while yours looks like it uses near full ASCII.
    so dump the ASCII in their binary equivalents ...??? i'm not typing that shit out just to feed ur massive fking ego. like seriously why do i need to state that to a fucking 70yo chief engineer???

    Which presumes that this is the proper encoding.

    Your problem is you presume that your program is going to be able to process the output, when you don't define what it will look like.





    Originally, you talked of putting out the tape at the start of when the machine ran. Now you just seem to recopy the tape to avoid overwriting it.

    originally i was storing the initial tape so it could be dumped during REFLECT, but i did away with that by just making all tapes start blank, requiring users to use the machine description itself to build the initial tape before running a further computation on it.

    it's just simpler, and i think TMs should fundamentally follow this method too

    The problem then is your complete machines can't take an input, and that complicates the composition property.

    This means you MUST be looking at submachines when you discuss properties of computations.

    we always were.

    the semantic properties of a particular computation have always been defined by the tuple (machine, input)

    No, the semantic properties of a particular computation have always been defined by the full process of running the machine. The "tuple" is a syntax rule. Semantics comes by the complete operation of all the syntax.

    i love how you agree with me while making it look like u disagree
    So, you think that *the* tuple (machine, input) means anything like what happens when you actually RUN the machine on the input?
    the (machine, input) specifies a particular computation or "sequence of machine configurations", yes, so therefore it consequentially also specifies the semantics of that particular computation. whether u determine that specification thru brute force or some more intelligent method is quite irrelevant (tho i'm sure you'll disagree but idk)

    No, "machine" here would seem to be the description/definition of the machine in some form.

    That is a syntactic statement.

    The RESULT of running that requires the semantic operation of
    running the machine,

    i don't need to manually compute:

    () -> loop()

    to figure out what it does. in fact manually computing it would
    never figure out what it does. our ability to program relies on an
    ability to compute the result of computations without brute force
    running them. we just haven't done the work to encode that
    understanding into a general algo because sheeple like u are hell bent on shooting urselves in the foot with endless gishgallop denial.

    Sure it can, as a simple loop detector will detect the repeated state,

    which requires more than pure brute forcing because at the very least
    ur storing and comparing to all past states in order to detect the loop


    The classic answer was two simulators simulating the same machine, one stepping two steps at a time, and the other one step at a time. If ever the two simulators are in the same state, check if the tapes are identical. If so, you have your loop.
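    that two-speed scheme is just floyd's cycle detection applied to machine configurations; a hedged sketch (the toy TM format and the names are ours, and the step budget is only there to keep the demo finite):

```python
def step(cfg, program):
    # one TM step; cfg = (state, head, tape-dict). Returns None on halt.
    state, head, tape = cfg
    key = (state, tape.get(head, "_"))    # "_" is the blank symbol
    if key not in program:
        return None
    write, move, new_state = program[key]
    new_tape = dict(tape)
    new_tape[head] = write
    return (new_state, head + move, new_tape)

def same_config(a, b):
    # compare configurations, ignoring explicitly-written blanks
    strip = lambda t: {p: s for p, s in t.items() if s != "_"}
    return a[0] == b[0] and a[1] == b[1] and strip(a[2]) == strip(b[2])

def halts(program, start_cfg, budget=10_000):
    # tortoise moves one step per iteration, hare two; if they ever meet
    # in an identical configuration, the machine is provably in a cycle.
    slow = fast = start_cfg
    for _ in range(budget):
        fast = step(fast, program)
        if fast is None:
            return True
        fast = step(fast, program)
        if fast is None:
            return True
        slow = step(slow, program)
        if same_config(slow, fast):
            return False                  # loop detected: never halts
    raise RuntimeError("budget exhausted: undecided")
```

    note this only ever proves non-halting for machines that repeat an exact configuration; machines that diverge while writing ever-new tape content slip past it, which is why it isn't a general halt decider.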

    ah yeah i do remember that



    Yes, you can define syntactic rules to handle the simpler cases, but
    they CAN NOT handle a general program.

    why because muh paradox??? lol

    No, because you can define a finite set of syntax rules that detect a finite set of loop constructions.

    it's definitely at least an infinite set, just not as infinite as a
    turing complete one ...




    Yes, a PROVABLY CORRECT program can likely have its behavior determined without running it, *IF* you are given the proof. The issue is that most programs are not provably correct.

    The problem is that proving a given program is provably correct is generally not computable.






    The tuple is just the symbolic expression of the machine and
    input. That is SYNTACTIC.

    RUNNING the machine is what gets us to the semantics.




    And, because if your use of a reflect instruction changes the output based on different detected contexts, those submachines are no longer actually computations.

    the context *is* the input,

    Then your sub-machines are not fully controllable, as part of their input isn't settable by their caller, but is forced by things beyond its ability to control

    before you run any machine you can examine the entire machine including the interplay between context-independent and context-dependent sub-machines, so idgaf about ur complaints of "uncontrollable" machines

    But that is the problem, you CAN'T do that, as parts of the
    semantics are only determinable by running the machine.

    begging the question, again

    No, seems to be you not knowing what you are saying.






    the formal parameters are just part of the total input

    So, your definition is about uncontrollable machines.

    the point is indeed to be able to assert correctness even when malicious input is given

    And how does not being able to fully give the input to an
    algorithm help you here?

    computations can lie about context in the total enumeration
    causing paradoxes,

    Computations are soul-less machines, they can't "lie" as that is an act of will or judgement.

    Since the halting problem doesn't depend on the context we are
    asking the decider on, there is no lie about that that can matter.


    REFLECT cannot do so by definition.

    And thus make the "input" uncontrolled, or the machine not perform a computation (depending on your definitions)

    it's entirely controlled by the context (which include formal
    params), which a programmer can account for when programming.

    But that only applies to problems based on the context of the program
    that is asking, and NOT about problems that are independent of such
    context.

    Questions like does a given program halt, is not dependent on the
    context of the machine that is trying to ask the question.


    it's done all the time in react, it's not a big deal.

    Because react isn't based on "computations".

    Of course, since you don't understand what that means, it means nothing to you.
    nothing to you.

    it doesn't mean anything beside it doesn't fit into some weird little
    box u keep arguing about that idgaf about because no one's proven that
    said little box to actually be all of the box ...

    No, you are just showing you don't understand what you are talking
    about, and just say anything you don't understand doesn't matter.

    That is just your "stupid" talking

    not an argument, just continued definist fallacy.

    post actual contradiction or shut the fuck up tbh.







    i cannot define away lies in the total enumeration of all possible input ... i can do so with the mechanisms of the machine itself, however
    But since the question doesn't depend on the context it is asked
    in, we can't lie about anything that actually matters.

    the ability to compute an accurate response, however, does

    Which just means the question isn't computable.

    to a reasonable person it would,

    Only someone that doesn't know what the word is defined to be in this context.

    more definist fallacy


    Of course, Stupid people agree to all sorts of nonsense.

    ad hominem



    ur concerned about little boxes without actual justified reason. u
    never actually put in the work to show contradictions u just complain
    about definitions in an endless slop of various definist fallacies.

    Nope, but it seems you don't understand the basics to understand what I
    am saying.

    definist fallacy




    You don't seem to understand that basic definition of a computation.

    Which of course, just means your idea of an expanded theory is almost certainly worthless.






    It says that you can't actually fully test your machine, as you can't

    why even test things when u can prove them instead???

    Because it is hard to prove behavior of uncontrollable inputs.

    It is also hard to prove something correct, if your API doesn't
    even permit asking the question you are supposed to be asking.

    And, if the result has one correct answer for the part you can
    give, giving two (or more) different answers based on
    uncontrollable input makes your machine BY DEFINITION incorrect.


    it's incredibly ironic that ur complaining about not being able to test things in a discussion i undertake with a goal to replace testing with proofs...

    But, if I can't GIVE the required input to get the results, you
    can't prove I can get the right result.




    and the pathetic part is ur total lack of ability to have any
    foresight or vision

    even attempt to generate all classes of input, and you are trying to define that answers about something can depend on context that doesn't actually affect that thing, but is the context of the asker.





    i'm still copying the whole tape like before, since that's needed to fully describe the current configuration


    Since the length of the tape isn't bounded (but is finite) how do you go back to the original start of operation? Remember, the "machine" has a fixed definition, and thus fixed size "registers". Not a problem for a standard Turing Machine, as the only register is the current state, and we know how many states are in the design, so how "big" that register needs to be.

    unbounded buffer, just like the tape

    And thus NOT a valid computation atom. Sorry.

    Fundamental in the definition is its boundedness.

    making shit up gish gallop

    Nope, maybe you should study the initial problem.




    heck you could intersperse this buffer among the tape using alternating cells, but it's mentally easier and more theoretically secure to just keep it metaphysically separated so it's inaccessible to pathological computations.
    Not following the rules makes it just invalid.

    the thing has an unbounded tape, it can also have an unbounded portion of that tape sectioned off just for machine-level use, and still have unbounded space for whatever computation-level work is done.

    Right, but that is the "input" not the algorithm part of the machine. The ALGORITHM needs to be boundedly described.

    The input just needs to be finite, but can be unbounded.

    context is necessarily finite in length, as all individual configurations/steps of the machine are necessarily finite in length.




    hilbert's hotel is great, no?

    But it is the hotel, not the desk that is infinite.



    Learn how to follow the rules.



    It seems you still have "bugs" in your finite machine that is your processing core.

    these kinda details kinda bore me, being obvious to me, ya know?

    Which is what makes your work just garbage. Good work looks at the details.

    I thought you wanted provably correct programs. It is the ignoring of the details that causes most of the bugs you don't want.

    none of this gish gallop has anything to do with the theoretical power of the system

    sure it does. A system that doesn't exist can't do anything.





    4) A defined starting state, and starting head position

    This list of tuples can only have a single entry for (Current State, Current Symbol), and if no entry matches the current condition, the machine halts.

    ... wow that was an extremely boring exercise of futility that will convince a certain massive dick of absolutely fucking nothing that u already hadn't been convinced of.

    it's funny that richard considers himself too stupid to follow the REFLECT operation


    You still haven't defined HOW to generate the output, and you have changed your definition, as originally it put out the original tape contents.

    Why do you need to write out the current tape contents?

    because a machine simulated within a base level machine runtime needs access to the base level machine tape to know where it is in the overall machine execution.


    Why do you need to write out the tuple that does the "Reflect" operation?

    because the transition being made is "where" the machine is within the machine description

    ultimately: there is a fixed tuple that describes the overall machine's initial configuration, and there is a fixed tuple that describes the machine's current configuration,

    and REFLECT needs to dump all the information required to build both those tuples so a simulation can be done to generate all the steps between the initial tuple and current tuple, which then allows for computing everything that can be known about "where" that REFLECT was done with respect to the overall machine execution.
















    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Sun Jan 25 21:56:54 2026
    From Newsgroup: comp.theory

    On 1/25/26 2:36 PM, Richard Damon wrote:
    On 1/25/26 4:05 PM, dart200 wrote:
    On 1/25/26 10:21 AM, Richard Damon wrote:
    On 1/24/26 9:24 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:36 PM, dart200 wrote:
    On 1/24/26 6:44 AM, Richard Damon wrote:
    On 1/20/26 8:55 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:
    On 1/20/26 1:18 AM, dart200 wrote:
    On 1/19/26 9:29 PM, Richard Damon wrote:
    On 1/18/26 11:51 PM, dart200 wrote:
    On 1/18/26 4:28 PM, Richard Damon wrote:
    On 1/18/26 4:50 PM, dart200 wrote:
    On 1/18/26 12:56 PM, Richard Damon wrote:
    On 1/18/26 1:15 PM, dart200 wrote:
    On 1/18/26 4:05 AM, Richard Damon wrote:
    On 1/18/26 1:05 AM, dart200 wrote:
    On 1/17/26 7:28 PM, Richard Damon wrote:
    On 1/17/26 10:14 PM, dart200 wrote:

    Good luck starving to death when your money runs out.

    one can only hope for so much sometimes



    I guess you don't understand the rules of logic.
    also not an argument

    Again, YOUR PROBLEM.





    it's pretty crazy i can produce a machine (even if u haven't understood it yet) that produces a consistent deterministic result that is "not a computation".

    Because you get that result only by equivocating on your definitions.

    If the context is part of the input to make the output deterministic from the input, then they fail to be usable as sub-computations, as we can't control that context part of the input.

    When we look at just the controllable input for a sub-computation, the output is NOT a deterministic function of that input.


    not sure what the fuck it's doing if it's not a computation

    It's using hidden inputs that the caller can't control.
    which we do all the time in normal programming, something which apparently u think the tHeOrY oF CoMpUtInG fails to encapsulate

    Right, but that isn't about computations.

    pretty crazy we do a bunch of "non-computing" in the normal act of programming computers

    Why?

    As I have said, "Computations" is NOT about how modern computers work.

    I guess you are just showing that you fundamentally don't understand the problem field you are betting your life on.

    one would presume the fundamental theory of computing would be general enough to encapsulate everything computed by real world computers, no???

    Why?

    Remember, the fundamental theory of Computing PREDATES the computer as you know it.

    so ur saying it's outdated and needs updating in regards to new things we do with computers that apparently turing machines as a model don't have variations of ...
    No, it still handles that which it was developed for.
    well it was developed to be a general theory of computing, and apparently modern computing has transcended that theory ...

    Not really.

    The way modern processors work, "sub-routines" can fail to be computations, but whole programs will tend to be. Sub-routines CAN be built with care to fall under its guidance.
    lol, what are they even if not "computations"???

    not-computations

    great, a set of deterministic steps that produces a result but is somehow not a computation!

    Because it isn't deterministically based on the INPUT,

    no it's just a series of steps to produce some output.

    Nope, not in the formulation of the theory.

    again: YOU HAVE NOT PROVEN THAT TURING MACHINES, OR ANY EQUIVALENT THEORY, ENCOMPASS ALL POSSIBLE COMPUTATIONS

    like holy fuck, how many times will i need to repeat that???

    it's a ct-THESIS not a ct-LAW

    But I can say that Computations as defined, are all that they can do.
    i will never care about you complaining about the fact the computations i'm talking about don't fit within the particular box you call a "Computation", because it just doesn't mean anything,

    In other words, you are just saying you don't care about computation
    theory, and thus why are you complaining about what it says about
    computations.

    no i'm saying i don't care about ur particular definition, richard

    do better than trying to "define" me as wrong. meaning: put in the work to demonstrate actual contradictions

    In other words, you want me to prove there isn't a teapot in the
    asteroid belt.

    lol, what. asking for a proof of contradiction is now akin to Russell's teapot???

    are u even doing math here or is this just a giant definist fallacy shitshow???


    YOU are the one assuming things can be done, but refuse to actually try
    to define an actual algorithm that does so.

    An actual algorithm being an actual sequence of finite atomic steps, and using bounded loops.
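    the bounded-loop distinction can be made concrete: a program built only from loops whose trip counts are fixed before entry always halts, while an unbounded while-loop need not. the two functions below are illustrative examples, not anything from the thread:

```python
def bounded_mul(a, b):
    # only bounded iteration: the trip count b is fixed before the loop
    # starts, so termination is guaranteed for every input
    total = 0
    for _ in range(b):
        total += a
    return total

def collatz_steps(n):
    # contrast: an unbounded while-loop; no a-priori bound on the number
    # of iterations is known, which is exactly what bounded loops rule out
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps
```

    bounded_mul halts by construction; whether collatz_steps halts for every n is a famous open question, which is why it could never be written in the bounded style.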





    u and the entire field can be wrong about how u specified
    "Computation",

    No, you just don't understand the WHY of computation theory.

    u don't give a why u stupid fucking retarded faggot, and u never will
    because the ct-thesis isn't proven, and u've already gone down the
    moronic hole of "maybe my favorite truth isn't even provable!!!??"

    I have mentioned it, but have you bothered to look into it?

    Computation Theory was developed to see if "Computations" of this sort could be used to generate proofs of the great problems of mathematics and logic.

    It was hoped that it would provide a solution to the then currently seemingly intractable problems that seemed to have an answer that just couldn't be found.

    Instead, it showed that it was a provable fact that some problems would not have a solution. And thus we had to accept that we couldn't prove everything we might want.


    and that fact was only shown, for computing in regards to itself, by
    using self-referential set-classification paradoxes, like the halting
    problem

    which is the part i'm trying to reconcile, that very specific (but quite
    broad within tm computing) problem...

    i'm not here to spoon feed humanity a general decision algo, cause we assuredly do not have enough number theory to build that at this time.

    i'm trying to deal with all the claims of hubris that such a general
    decision algo *cannot* exist, by showing *how* it could exist alongside
    the potential for self-referential set-classification paradoxes:

    either by showing that we can just ignore the paradoxes, or by utilizing reflective turing machines to decide on them in a context aware manner,
    both are valid resolutions.

    i know u want me to spoon feed you all the answers here, but i'm one
    freaking dude, with very limited time, and training, stuck with
    discussion that is willfully antagonistic and soaked with fallacy after fallacy,

    turing spent years coming up with his turing jump nonsense, on a brand new fresh theory, and with people that likely actually tried to be collaborative,

    while i've gotta reconcile a massive almost century old bandwagon, /thru argument alone/

    i don't even have the luxury of pointing to an experiment, i've gotta
    come up with a set of purely logical arguments that stand entirely on
    their own right. einstein had it easier




    and that potential is well codified by the fact the ct-thesis is
    still a thesis and not a law.

    It might just be a thesis, because it IS an unprovable truth.

    lookie u just accepting things as "muh unprovable truths". holy
    fucking hypocritical fucking faggot

    It isn't "just accepting", it is looking at the proofs and understanding
    the logic of them.

    YOU HAVEN'T PROVEN THE CT-THESIS, MY GOD


    imagine if i pulled that argument out on you wildly unfair irrational
    bastard??

    But all you can do is make baseless claims. My statements of unprovable truths are based on real proofs, that seem to be beyond your ability to understand.

    YOU ALSO HAVEN'T PROVEN THAT THE CT-THESIS IS UNPROVABLE, MY FUCKING GOD



    u make a complete mockery of reason with the disgustingly idiot
    dogshit u post over and over again...

    How is looking at proofs and accepting their results a mockery?

    BECAUSE UR JUST ARBITRARILY OVERGENERALIZING WITHOUT PROOF,

    OH MY FUCKING GOD

    godel's result is a curse on this species even if he wasn't wrong to
    produce it


    It is the rejection of proofs and thinking things must be different that
    is the mockery.


    holy fuck you dude eat a bag of dicks

    It seems you have eaten them all already,

    sure thing dick




    i will not respond to more comments on this because it's a boring,
    lazy, non-argument that is fucking waste of both our time.





    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Sun Jan 25 22:50:18 2026
    From Newsgroup: comp.theory

    On 1/25/26 2:40 PM, Richard Damon wrote:
    On 1/25/26 4:04 PM, dart200 wrote:
    On 1/25/26 10:23 AM, Richard Damon wrote:
    On 1/24/26 9:05 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.


    In other words, you just don't know what you are talking about.

    The fact that it is impossible to build a computation that, given a representation of another computation and its input, determines for all cases if the computation will halt does nothing to further the question of whether Turing Machines are the most powerful form of computation.
    contexts-aware machines compute functions:

    (context,input) -> output
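    to make that signature concrete: a classical computation maps input -> output, while the context-aware form also reads the asking context, so the same input can yield different outputs. everything below (the names, the stand-in property being decided) is hypothetical, just illustrating the shape of the disagreement:

```python
def pure_decider(machine_desc):
    # a computation in the classical sense: the output is a function of
    # the input string alone (the property tested is a stand-in)
    return len(machine_desc) % 2 == 0

def context_aware_decider(context, machine_desc):
    # the (context, input) -> output form from the post: the same
    # machine_desc can get different answers in different contexts,
    # which is why the input -> output mapping alone is not a function
    if context.get("caller") == machine_desc:
        return "self-referential query"
    return pure_decider(machine_desc)
```

    richard's objection, in these terms, is that only the second argument is settable by a caller, so the context-aware form is not a computation over its controllable input.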


    And what problems of interest to computation theory are of that form?

    Computation Theory was to answer questions of logic and mathematics.

    What logic or math is dependent on "context"

    *mechanically computing* the answer *generally* is dependent on context,

    Really?

    Most problems don't care about the context of the person asking it, just
    the context of the thing being looked at.

    well, yes, most problems don't involve pathologically querying a decider specifically for the purpose of then contradicting the decision...



    and ignoring that is the underlying cause of the halting problem

    Nope.


    clearly novel techniques will be required to resolve long standing
    problems, eh richard???

    Or just lying as you try.

    I guess you think the speed of light is just a suggestion. (Yes, there
    are some thoughts about how to break it, but they require things totally outside our current physics).

    Yes, there may be a new definition of "Computations" that is actually useful, and generates answers to some things we currently think of as uncomputable, but until you can actually figure out what that is, I'm assuming it is just science fiction.

    or u'd just call it lying over and over again with no serious
    consideration to what's really being said ...



    fuck


    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 26 01:35:45 2026
    From Newsgroup: comp.theory

    On 1/25/26 10:50 PM, dart200 wrote:
    On 1/25/26 2:40 PM, Richard Damon wrote:
    On 1/25/26 4:04 PM, dart200 wrote:
    On 1/25/26 10:23 AM, Richard Damon wrote:
    On 1/24/26 9:05 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.


    In other words, you just don't know what you are talking about.

    The fact that it is impossible to build a computation that, given
    a representation of another computation and its input, determines
    for all cases if the computation will halt does nothing to further
    the question of whether Turing Machines are the most powerful form of
    computation.

    contexts-aware machines compute functions:

    (context,input) -> output


    And what problems of interest to computation theory are of that form?

    Computation Theory was to answer questions of logic and mathematics.

    What logic or math is dependent on "context"

    *mechanically computing* the answer *generally* is dependent on context,

    Really?

    Most problems don't care about the context of the person asking it,
    just the context of the thing being looked at.

    well, yes, most problems don't involve pathologically querying a decider specifically for the purpose of then contradicting the decision...

    or put more generally:

    well, yes, most problems don't involve pathologically querying the truth specifically for the purpose of then contradicting the truth...




    and ignoring that is the underlying cause of the halting problem

    Nope.


    clearly novel techniques will be required to resolve long standing
    problems, eh richard???

    Or just lying as you try.

    I guess you think the speed of light is just a suggestion. (Yes, there
    are some thoughts about how to break it, but they require things
    totally outside our current physics).

    Yes, there may be a new definition of "Computations" that is actually
    useful, and generates answers to some things we currently think of as
    uncomputable, but until you can actually figure out what that is, I'm
    assuming it is just science fiction.

    or u'd just call it lying over and over again with no serious
    consideration to what's really being said ...



    fuck



    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory on Mon Jan 26 01:59:46 2026
    From Newsgroup: comp.theory

    On 1/25/26 1:05 PM, dart200 wrote:
    On 1/25/26 10:20 AM, Richard Damon wrote:
    On 1/24/26 9:10 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:31 PM, dart200 wrote:
    On 1/24/26 2:25 PM, Richard Damon wrote:
    On 1/24/26 3:56 PM, dart200 wrote:
    On 1/24/26 11:52 AM, Richard Damon wrote:
    On 1/24/26 1:33 PM, dart200 wrote:
    On 1/24/26 9:26 AM, Richard Damon wrote:
    On 1/24/26 11:45 AM, dart200 wrote:
    On 1/24/26 4:17 AM, Richard Damon wrote:
    On 1/24/26 3:03 AM, dart200 wrote:
    On 1/23/26 5:36 PM, Richard Damon wrote:
    On 1/20/26 9:30 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:

    The term *IS* defined, and to change it means you lie.

    doubling down on definist fallacy ehh???

    I guess you don't understand the definist fallacy, as using the ACTUAL definition isn't a fallacy.

    nah ur just pushing a definition that u've happened to latch onto because it's convenient for u

    classic definist fallacy


    Nope, you are just stuck in a definist fallacy about what a definist fallacy is.

    It seems you don't understand the concept that some things ARE just defined a given way to be in a given context.

    and u richard are not the god of what that is


    But "the field" is, and thus you are just saying it is ok to >>>>>>>>>> change the meaning of words.

    i don't believe u represent what "the field" is either


    Then go to "the field" and see if they disagree.

    "the field" can come here if they like,

    but as it stands i'm going to call out any more "nOT mUh
    CoMpUTaTiOn" arguments as definist fallacy


    In other words, you are just admitting, you don't care what the
    words mean in the field, you will just continue to be a stupid and
    ignorant liar about what you are doing.

    i just don't care what YOU, richard, says "CoMpUTaTiOn" means. you
    aren't "the field" bro, and i just really dgaf about ur endless
    definist fallacy


    But apparently you do, as you aren't just going to present your
    ideas directly to "the field" in a peer-reviewed journal, so
    something is telling you that you have something to fix.

    or rather the peer-review is so gatekept i don't even get a review
    back for my submission, just rejection without review.

    the system is broken such that i will take my stance elsewhere.

    everyone else can ignore me at all our peril...


    Which just shows that you aren't in step with what theory actually is
    talking about.

    That is your problem, you assume the world is wrong, and more than
    likely it is you that is wrong.

    i'm not one continually asserting a bunch of impossible to find teapots floating around in machine space

    by "teapots" i mean unidentifiably undecidable machine ghosts that
    apparently are all non-halting,

    and by "machine space" i mean the full enumeration of turing machines

    just to be clear, eh???



    If you want to break down a "broken" structure, you need to know
    enough about it to SHOW it is broken.

    Just assuming it is broken just shows that it is most likely YOU that is wrong.

    It is more that the system ignores that which tries to break it,
    because getting side tracked on false trails is too damaging.

    To me it seems more of a peril to accept your misguided ideas.

    The fact that you begin by trying to redefine things out of ignorance
    doesn't help your case.


    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory on Mon Jan 26 11:17:08 2026
    From Newsgroup: comp.theory

    On 1/26/26 4:59 AM, dart200 wrote:
    On 1/25/26 1:05 PM, dart200 wrote:
    On 1/25/26 10:20 AM, Richard Damon wrote:
    On 1/24/26 9:10 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:31 PM, dart200 wrote:
    On 1/24/26 2:25 PM, Richard Damon wrote:
    On 1/24/26 3:56 PM, dart200 wrote:
    On 1/24/26 11:52 AM, Richard Damon wrote:
    On 1/24/26 1:33 PM, dart200 wrote:
    On 1/24/26 9:26 AM, Richard Damon wrote:
    On 1/24/26 11:45 AM, dart200 wrote:
    On 1/24/26 4:17 AM, Richard Damon wrote:
    On 1/24/26 3:03 AM, dart200 wrote:
    On 1/23/26 5:36 PM, Richard Damon wrote:
    On 1/20/26 9:30 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:

    The term *IS* defined, and to change it means you lie.

    doubling down on definist fallacy ehh???

    I guess you don't understand the definist fallacy, as using the ACTUAL definition isn't a fallacy.

    nah ur just pushing a definition that u've happened to latch onto because it's convenient for u

    classic definist fallacy


    Nope, you are just stuck in a definist fallacy about what a definist fallacy is.

    It seems you don't understand the concept that some things ARE just defined a given way to be in a given context.

    and u richard are not the god of what that is


    But "the field" is, and thus you are just saying it is ok to >>>>>>>>>>> change the meaning of words.

    i don't believe u represent what "the field" is either


    Then go to "the field" and see if they disagree.

    "the field" can come here if they like,

    but as it stands i'm going to call out any more "nOT mUh
    CoMpUTaTiOn" arguments as definist fallacy


    In other words, you are just admitting, you don't care what the
    words mean in the field, you will just continue to be a stupid
    and ignorant liar about what you are doing.

    i just don't care what YOU, richard, says "CoMpUTaTiOn" means. you aren't "the field" bro, and i just really dgaf about ur endless
    definist fallacy


    But apparently you do, as you aren't just going to present your
    ideas directly to "the field" in a peer-reviewed journal, so
    something is telling you that you have something to fix.

    or rather the peer-review is so gatekept i don't even get a review
    back for my submission, just rejection without review.

    the system is broken such that i will take my stance elsewhere.

    everyone else can ignore me at all our peril...


    Which just shows that you aren't in step with what theory actually is
    talking about.

    That is your problem, you assume the world is wrong, and more than
    likely it is you that is wrong.

    i'm not one continually asserting a bunch of impossible to find
    teapots floating around in machine space

    by "teapots" i mean unidentifiably undecidable machine ghosts that apparently are all non-halting,

    and by "machine space" i mean the full enumeration of turing machines

    just to be clear, eh???

    Right, so what was wrong with my proof?

    They have to be non-halting, as halting is always provable just by
    running the machine long enough.

    They have to be non-identifiable, as since they must be non-halting, to identify them as undecidable would become proof of their non-halting.

    And, they have to exists, or otherwise we could build a decider by your
    method of iterating through all partial deciders, but then we can build
    the pathological machine for THAT decider, and it must not be able to
    answer.
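    The diagonal construction being described can be sketched in Python. (A hedged illustration only: `halts`, `make_pathological`, and `guess_halts` are names invented here, and a real decider would take an encoded machine description rather than a function object.)

    ```python
    # Sketch of the diagonal argument: assume some claimed total halt
    # decider `halts(prog, arg)` exists; the "pathological" program
    # built from it then defeats it by doing the opposite of its answer.

    def make_pathological(halts):
        """Build the program that contradicts the decider `halts`."""
        def pathological(arg):
            if halts(pathological, arg):
                while True:      # decider said "halts" -> loop forever
                    pass
            return None          # decider said "loops" -> halt at once
        return pathological

    # Toy decider that just guesses "always halts"; it stands in for
    # any claimed total decider, since the construction defeats each one.
    guess_halts = lambda prog, arg: True
    p = make_pathological(guess_halts)
    # guess_halts claims p halts on 0, but by construction p(0) loops
    # forever, so the claim is wrong; answering False fails symmetrically.
    print(guess_halts(p, 0))  # True -- yet p(0) would never return
    ```

    Whatever answer the decider gives for its own pathological machine, the machine does the reverse, which is the sense in which "it must not be able to answer."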




    If you want to break down a "broken" structure, you need to know
    enough about it to SHOW it is broken.

    Just assuming it is broken just shows that it is most likely YOU that is wrong.
    It is more that the system ignores that which tries to break it,
    because getting side tracked on false trails is too damaging.

    To me it seems more of a peril to accept your misguided ideas.

    The fact that you begin by trying to redefine things out of ignorance
    doesn't help your case.





    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory on Mon Jan 26 11:21:02 2026
    From Newsgroup: comp.theory

    On 1/25/26 11:01 PM, dart200 wrote:
    On 1/25/26 2:24 PM, Richard Damon wrote:
    On 1/25/26 4:08 PM, dart200 wrote:
    On 1/25/26 10:20 AM, Richard Damon wrote:
    On 1/24/26 10:30 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:29 PM, dart200 wrote:
    On 1/23/26 5:52 PM, Richard Damon wrote:
    On 1/20/26 8:36 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:
    On 1/20/26 1:46 AM, dart200 wrote:
    On 1/19/26 9:29 PM, Richard Damon wrote:
    On 1/19/26 2:09 AM, dart200 wrote:
    On 1/18/26 12:56 PM, Richard Damon wrote:
    On 1/18/26 1:44 PM, dart200 wrote:
    A Reflective Turing Machine is a mathematical model of a machine that performs a computation with the following pieces:

    1) A Tape, infinite in capacity, divided into cells which, unless otherwise specified, initially contain the "empty" symbol, and is capable of storing in each cell one symbol from a defined finite set of symbols.

    2) A Head, which at any point of time points to a specific location on the tape. The head can read the symbol on the tape at its current position, change the symbol at the current location as commanded by the state machine defined below, and move a step at a time in either direction.

    3) A State Machine, that has a register which stores the current "state" from among a finite listing of possible states, and includes a "program" of tuples of data: (Current State, Current Symbol, Operation, New State) that causes the machine, when it matches the (Current State, Current Symbol), to update the tape/head according to the Operation, transition to the New State, and then begin again. The state machine has a secondary temporary buffer tape to store a copy of the current tape during certain operations.

    The list of operations possible:

    - HEAD_RIGHT: move the head one cell to the right

    - HEAD_LEFT: move the head one cell to the left

    - WRITE(SYMBOL): write SYMBOL at the head

    - REFLECT: will cause a bunch of machine meta-information to be written to the tape, starting at the head, overwriting anything in its path. The information written to tape will include 3 components: the "program" of tuples of data, the current tuple that the operation is part of, and the current tape (the tape state before the command runs). At the end of the Operation, the head will be moved back to the start of the Operation.

    And where does this "meta-information" come from?

    How do you translate the mathematical "tuples" that define the machine into the finite set of symbols of the system?
    when we write a turing machine description and run it ... the machine description is dumped as it is written

    But you don't write a turing machine description, you create a turing machine.

    whatever bro ur just being purposefully obtuse

    No, it is a key point.

    it's a syntax point, which is boring


    A Given Turing machine doesn't have *A* description, so what you want it to write out doesn't have a unique definition.

    ok, it dumps it out as the *description number* as defined by turing in his paper /on computable numbers/ p240, that can be uniquely defined for every unique machine

    But that page talks about how to get *A* description number
    based on an arbitrary assignment of symbols to values. Nowhere
    does it use *the* as the qualifier.

    Nowhere is that number called "unique". Note, page 241 points
    out that the number you get here will only produce one
    computable sequence.

    In fact, he doesn't even qualify "standard form" as being
    unique, but it is *a* standard form, as he knows there are many
    ways to arbitrarily standardize the form.

    literally the last sentence of p240 to the first of p241:

    /The D.N determine the S.D and the structure of the machine
    uniquely/

    but i've quoted that at you before and u denied it before, so ofc...
    anyways, if u had more than half a brain u'd know that it doesn't
    really matter what the specific syntax is ... so long as whatever
    it dumps consistently determines the structure of the machine
    uniquely and completely,

    which DNs do, turing demonstrated that with the first paper /on
    computable numbers/, so can we move past theory of computing 101, k?
    or not i guess, i can hear you angrily typing away a willfully
    contrarian response already!! idk, i guess fuck u too eh???







    i'm not really sure why you have trouble accepting this, let's take a really simple machine using REFLECT:

    Because it is based on a logical error of confusing the turing machine description that can be given to a UTM to simulate the machine, and the machine itself.

    The problem is there are MANY different UTMs, and different UTMs can use different representations, and thus depending on which UTM you target, you get different descriptions, many of which are not actually in the same alphabet as the machine itself.


    <q0 0 REFLECT q1>

    the machine steps would be (format: [state tape], head indicated by ^):

    [q0 0]
     ^
    [q1 <q0 0 REFLECT q1>|<q0 0 REFLECT q1>|0]
     ^
    halt

    we can quibble about what the format of that dump should be in, but that's not actually that interesting to me

    But it should be. Your problem is you are describing your Turing Machine in a language easy for people to read, but awful for a UTM to process.

    how is that relevant to proving principles with it?

    Because it hides the key issue, that there isn't a unique description for it to write out. If it gets to arbitrarily pick one, then your "program" can't process it, as it doesn't know what "language" it needs to interpret.

    please do actually read turing's paper sometime


    You think I haven't?

    yes, i think u haven't. i'm pretty sure u've looked at some of
    the words i've quoted, but that's not the same as reading.






    Most actual work with Turing Machines uses very limited alphabets, while yours looks like it uses near full ASCII.
    so dump the ASCII in their binary equivalents ...??? i'm not typing that shit out just to feed ur massive fking ego. like seriously why do i need to state that to a fucking 70yo chief engineer???

    Which presumes that this is the proper encoding.

    Your problem is you presume that your program is going to be able to process the output, when you don't define what it will look like.





    Originally, you talked of putting out the tape at the start of when the machine ran. Now you just seem to recopy the tape to avoid overwriting it.

    originally i was storing the initial tape so it could be dumped during REFLECT, but i did away with that by just making all tapes start blank, requiring users to use the machine description itself to build the initial tape before running a further computation on it.

    it's just simpler, and i think TMs should fundamentally follow this method too

    The problem then is your complete machines can't take an input, and thus complicate the composition property.

    This means you MUST be looking at submachines when you discuss properties of computations.

    we always were.

    the semantic properties of a particular computation have always been defined by the tuple (machine, input)

    No, the semantic properties of a particular computation have always been defined by the full process of running the machine.
    The "tuple" is a syntax rule. Semantics comes by the complete operation of all the syntax.

    i love how you agree with me while making it look like u disagree
    So, you think that *the* tuple (machine, input) means anything like what happens when you actually RUN the machine on the input?
    the (machine, input) specifies a particular computation or "sequence of machine configurations", yes, so therefore it consequentially also specifies the semantics of that particular computation. whether u determine that specification thru brute force or some more intelligent method is quite irrelevant (tho i'm sure you'll disagree but idk)

    No "machine" here would seem to be the description/definition of
    the machine in some form.

    That is a syntactic statement.

    The RESULT of running that requires the semantic operation of
    running the machine,

    i don't need to manually compute:

    () -> loop()

    to figure out what it does. in fact manually computing it would
    never figure out what it does. our ability to program relies on an
    ability to compute the result of computations without brute force
    running them. we just haven't done the work to encode that
    understanding into a general algo because sheeple like ur are hell
    bent on shooting urself in the foot with endless gishgallop denial.

    Sure it can, as a simple loop detector will detect the repeated state,

    which requires more than pure brute forcing because at the very least
    ur storing and comparing to all past states in order to detect the loop


    The classic answer was two simulators simulating the same machine, one
    stepping two steps at a time, and the other 1 step at a time. If ever
    the two simulators are in the same state, check if the tapes are
    identical. If so, you have your loop.
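    That two-speed scheme is Floyd's tortoise-and-hare cycle detection. A minimal sketch on a toy machine representation (the encoding below, with configurations as (state, tape-dict, head) and rules as a dict, is invented for illustration, not any standard form):

    ```python
    # Two-speed loop detector: one simulator at single speed, one at
    # double speed; if they ever reach identical configurations, loop.

    def step(config, program):
        """One step of a toy Turing machine; program maps
        (state, symbol) -> (write, move, new_state); no rule = halt."""
        state, tape, head = config
        sym = tape.get(head, '_')
        rule = program.get((state, sym))
        if rule is None:
            return None  # no matching tuple: machine halts
        write, move, new_state = rule
        tape = dict(tape)
        tape[head] = write
        return (new_state, tape, head + move)

    def find_loop(start, program, limit=10_000):
        """Floyd's tortoise-and-hare over machine configurations."""
        freeze = lambda c: (c[0], frozenset(c[1].items()), c[2])
        slow = fast = start
        for _ in range(limit):
            slow = step(slow, program)
            fast = step(fast, program)
            if fast is not None:
                fast = step(fast, program)
            if slow is None or fast is None:
                return False          # a simulator halted: no loop found
            if freeze(slow) == freeze(fast):
                return True           # same configuration twice: loop
        return False

    # A machine bouncing between q0 and q1 without writing: loops.
    looper = {('q0', '_'): ('_', 0, 'q1'), ('q1', '_'): ('_', 0, 'q0')}
    # A machine with no rule for its start symbol: halts immediately.
    halter = {}
    print(find_loop(('q0', {}, 0), looper))  # True
    print(find_loop(('q0', {}, 0), halter))  # False
    ```

    The appeal of the trick is that it needs only two configurations of storage, rather than the full history of past states.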

    ah yeah i do remember that

    Which shows how much you get your exercise by leaping to false conclusions,




    Yes, you can define syntactic rules to handle the simpler cases, but
    they CAN NOT handle a general program.

    why because muh paradox??? lol

    No, because you can define a finite set of syntax rules that detect a
    finite set of loop constructions.

    it's definitely at least an infinite set, just not as infinite as a
    turing complete one ...

    No, I said for a FINITE set of loop constructions. Of course the
    complete set of loop constructions is infinite, but we can make a finite sub-set of them, and detect that with a finite set of syntax rules,

    It is the fact that there are an infinite set of loop constructions that
    means you can't completely detect behavior with syntactic analysis, but
    need to run/simulate the machine.
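    To make that concrete, here is a hedged sketch of one such purely syntactic rule (the tuple encoding is invented for this example): it flags only the single-tuple self-loop pattern, and by construction misses every loop built any other way.

    ```python
    # One finite syntactic rule: flag any transition of the shape
    # (q, s) -> (write s, move 0, stay in q), a self-loop that can
    # never make progress.  It detects exactly that pattern and
    # nothing outside it.

    def trivially_loops(program):
        """True if any tuple re-enters its own (state, symbol) pair
        without changing the tape or moving the head."""
        return any(
            write == sym and move == 0 and new_state == state
            for (state, sym), (write, move, new_state) in program.items()
        )

    tight_loop = {('q0', '_'): ('_', 0, 'q0')}           # caught
    two_state  = {('q0', '_'): ('_', 0, 'q1'),           # loops too,
                  ('q1', '_'): ('_', 0, 'q0')}           # but NOT caught
    print(trivially_loops(tight_loop))  # True
    print(trivially_loops(two_state))   # False
    ```

    The two-state machine loops just as surely, but falls outside the finite class this rule covers, which is the point being made above.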





    Yes, what a PROVABLY CORRECT program does can likely be determined
    without running it, *IF* you are given the proof. The issue is
    that most programs are not provably correct.

    The problem is that proving a given program is provably correct is
    generally not computable.






    The tuple is just the symbolic expression of the machine and
    input. That is SYNTACTIC.

    RUNNING the machine is what gets us to the semantics.




    And, because if your use of a reflect instruction changes the output based on different detected contexts, those submachines are no longer actually computations.

    the context *is* the input,

    Then your sub-machines are not fully controllable, as part of their input isn't settable by their caller, but is forced by things beyond its ability to control

    before you run any machine you can examine the entire machine
    including the interplay between context-independent and
    context-dependent sub-machines, so idgaf about ur complaints
    of "uncontrollable" machines

    But that is the problem, you CAN'T do that, as parts of the
    semantics are only determinable by running the machine.

    begging the question, again

    No, seems to be you not knowing what you are saying.






    the formal parameters are just part of the total input

    So, your definition is about uncontrollable machines.

    the point is indeed to be able to assert correctness even when malicious input is given

    And how does not being able to fully give the input to an
    algorithm help you here?

    computations can lie about context in the total enumeration
    causing paradoxes,

    Computations are soul-less machines, they can't "lie" as that is
    an act of will or judgement.

    Since the halting problem doesn't depend on the context we are
    asking the decider on, there is no lie about that that can matter.

    REFLECT cannot do so by definition.

    And thus make the "input" uncontrolled, or the machine not perform a computation (depending on your definitions)

    it's entirely controlled by the context (which include formal
    params), which a programmer can account for when programming.

    But that only applies to problems based on the context of the
    program that is asking, and NOT about problems that are independent
    of such context.

    Questions like does a given program halt, is not dependent on the
    context of the machine that is trying to ask the question.


    it's done all the time in react, it's not a big deal.

    Because react isn't based on "computations".

    Of course, since you don't understand what that means, it means
    nothing to you.

    it doesn't mean anything beside it doesn't fit into some weird little
    box u keep arguing about that idgaf about because no one's proven
    that said little box to actually be all of the box ...

    No, you are just showing you don't understand what you are talking
    about, and just say anything you don't understand doesn't matter.

    That is just your "stupid" talking

    not an argument, just continued definist fallacy.

    post actual contradiction or shut the fuck up tbh.







    i cannot define away lies in the total enumeration of all
    possible input ... i can do so with the mechanisms of the machine
    itself, however

    But since the question doesn't depend on the context it is asked
    in, we can't lie about anything that actually matters.

    the ability to compute an accurate response, however, does

    Which just means the question isn't computable.

    to a reasonable person it would,

    Only someone that doesn't know what the word is defined to be in this
    context.

    more definist fallacy


    Of course, Stupid people agree to all sorts of nonsense.

    ad hominem



    ur concerned about little boxes without actual justified reason. u
    never actually put in the work to show contradictions u just complain
    about definitions in an endless slop of various definist fallacies.

    Nope, but it seems you don't understand the basics to understand what
    I am saying.

    definist fallacy




    You don't seem to understand that basic definition of a computation.

    Which of course, just means you idea of an expanded theory is almost
    certainly worthless.






    It says that you can't actually fully test your machine, as you can't

    why even test things when u can prove them instead???

    Because it is hard to prove behavior of uncontrollable inputs.

    It is also hard to prove something correct, if your API doesn't
    even permit asking the question you are supposed to be asking.

    And, if the result has one correct answer for the part you can
    give, giving two (or more) different answers based on
    uncontrollable input makes your machine BY DEFINITION incorrect.


    it's incredibly ironic that ur complaining about not being able to test things in a discussion i undertake with a goal to replace testing with proofs...

    But, if I can't GIVE the required input to get the results, you
    can't prove I can get the right result.




    and the pathetic part is ur total lack of ability to have any
    foresight or vision

    even attempt to generate all classes of input, and you are
    trying to define that answers about something can depend on
    context that doesn't actually affect that thing, but is the
    context of the asker.






    i'm still copying the whole tape like before, since that's needed to fully describe the current configuration


    Since the length of the tape isn't bounded (but is finite) how do you go back to the original start of operation? Remember, the "machine" has a fixed definition, and thus fixed size "registers". Not a problem for a standard Turing Machine, as the only register is the current state, and we know how many states are in the design, so how "big" that register needs to be.

    unbounded buffer, just like the tape

    And thus NOT a valid computation atom. Sorry.

    Fundamental in the definition is its boundedness.

    making shit up gish gallop

    Nope, maybe you should study the initial problem.




    heck you could intersperse this buffer among the tape using alternating cells, but it's mentally easier and more theoretically secure to just keep it metaphysically separated so it's inaccessible to pathological computations.
    Not following the rules just makes it invalid.

    the thing has an unbounded tape, it can also have an
    unbounded portion of that tape sectioned off just for
    machine-level use, and still have unbounded space for
    whatever computation-level work is done.

    Right, but that is the "input" not the algorithm part of the machine.
    The ALGORITHM needs to be boundedly described.

    THe input just needs to be finite, but can be unbounded.

    context is necessarily finite in length, as all individual
    configurations/steps of the machine are necessarily finite in
    length.




    hilbert's hotel is great, no?

    But it is the hotel, not the desk that is infinite.



    Learn how to follow the rules.



    It seems you still have "bugs" in your finite machine that is your processing core.

    these kinda details kinda bore me, being obvious to me, ya know?

    Which is what makes your work just garbage. Good work looks at the details.

    I thought you wanted provably correct programs. It is the ignoring of the details that causes most of the bugs you don't want.

    none of this gish gallop has anything to do with the
    theoretical power of the system

    sure it does. A system that doesn't exist can't do anything.





    4) A defined starting state, and starting head position
    This list of tuples can only have a single entry for (Current State, Current Symbol), and if no entry matches the current condition, the machine halts.

    ... wow that was an extremely boring exercise of futility that will convince a certain massive dick of absolutely fucking nothing that u already hadn't been convinced of.
    it's funny that richard considers himself too stupid to follow the REFLECT operation


    You still haven't defined HOW to generate the output, and >>>>>>>>>>>>>> you have changed your definition, as originally it put out >>>>>>>>>>>>>> the original tape contents.

    Why do you need to write out the current tape contents? >>>>>>>>>>>>>
    because a machine simulated within a base level machine >>>>>>>>>>>>> runtime needs access to the base level machine tape to know >>>>>>>>>>>>> where it is in the overall machine execution.


    Why do you need to write out the tuple that does the >>>>>>>>>>>>>> "Reflect" operation?

    because the transition being made is "where" the machine is >>>>>>>>>>>>> within the machine description

    ultimately: there is a fixed tuple that describes the >>>>>>>>>>>>> overall machine's initial configuration, and there is a fixed >>>>>>>>>>>>> tuple that describes the machine's current configuration, >>>>>>>>>>>>>
    and REFLECT needs to dump all the information required to >>>>>>>>>>>>> build both those tuples so a simulation can be done to >>>>>>>>>>>>> generate all the steps between the initial tuple and >>>>>>>>>>>>> current tuple, which then allows for computing everything >>>>>>>>>>>>> that can be known about "where" that REFLECT was done with >>>>>>>>>>>>> respect to the overall machine execution.



















    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 26 11:39:59 2026
    From Newsgroup: comp.theory

    On 1/26/26 12:56 AM, dart200 wrote:
    On 1/25/26 2:36 PM, Richard Damon wrote:
    On 1/25/26 4:05 PM, dart200 wrote:
    On 1/25/26 10:21 AM, Richard Damon wrote:
    On 1/24/26 9:24 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:36 PM, dart200 wrote:
    On 1/24/26 6:44 AM, Richard Damon wrote:
    On 1/20/26 8:55 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:
    On 1/20/26 1:18 AM, dart200 wrote:
    On 1/19/26 9:29 PM, Richard Damon wrote:
    On 1/18/26 11:51 PM, dart200 wrote:
    On 1/18/26 4:28 PM, Richard Damon wrote:
    On 1/18/26 4:50 PM, dart200 wrote:
    On 1/18/26 12:56 PM, Richard Damon wrote:
    On 1/18/26 1:15 PM, dart200 wrote:
    On 1/18/26 4:05 AM, Richard Damon wrote:
    On 1/18/26 1:05 AM, dart200 wrote:
    On 1/17/26 7:28 PM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>> On 1/17/26 10:14 PM, dart200 wrote:

    Good luck starving to death when your money runs out. >>>>>>>>>>>>>>>>>>>
    one can only hope for so much sometimes >>>>>>>>>>>>>>>



    I guess you don't understand the rules of logic. >>>>>>>>>>>>>>>>>>>>>
    also not an argument

    Again, YOUR PROBLEM.





    it's pretty crazy i can produce a machine (even >>>>>>>>>>>>>>>>>>>>>>> if u haven't understood it yet) that produces a >>>>>>>>>>>>>>>>>>>>>>> consistent deterministic result that is "not a >>>>>>>>>>>>>>>>>>>>>>> computation".

    Because you get that result only by equivocating >>>>>>>>>>>>>>>>>>>>>> on your definitions.

    If the context is part of the input to make the >>>>>>>>>>>>>>>>>>>>>> output deterministic from the input, then they fail >>>>>>>>>>>>>>>>>>>>>> to be usable as sub- computations as we can't >>>>>>>>>>>>>>>>>>>>>> control that context part of the input. >>>>>>>>>>>>>>>>>>>>>>
    When we look at just the controllable input for a >>>>>>>>>>>>>>>>>>>>>> sub- computation, the output is NOT a >>>>>>>>>>>>>>>>>>>>>> deterministic function of that input. >>>>>>>>>>>>>>>>>>>>>>

    not sure what the fuck it's doing if it's not a >>>>>>>>>>>>>>>>>>>>>>> computation

    Its using hidden inputs that the caller can't >>>>>>>>>>>>>>>>>>>>>> control.

    which we do all the time in normal programming, >>>>>>>>>>>>>>>>>>>>> something which apparently u think the tHeOrY oF >>>>>>>>>>>>>>>>>>>>> CoMpUtInG fails to encapsulate

    Right, but that isn't about computations. >>>>>>>>>>>>>>>>>>>>

    pretty crazy we do a bunch "non-computating" in the >>>>>>>>>>>>>>>>>>>>> normal act of programming computers

    Why?

    As I have said, "Computatations" is NOT about how >>>>>>>>>>>>>>>>>>>> modern computers work.

    I guess you are just showing that you fundamentally >>>>>>>>>>>>>>>>>>>> don't understand the problem field you are betting >>>>>>>>>>>>>>>>>>>> your life on.

    one would presume the fundamental theory of computing >>>>>>>>>>>>>>>>>>> would be general enough to encapsulate everything >>>>>>>>>>>>>>>>>>> computed by real world computers, no???

    Why?

    Remember, the fundamental theory of Computing PREDATES >>>>>>>>>>>>>>>>>> the computer as you know it.

    so ur saying it's outdated and needs updating in >>>>>>>>>>>>>>>>> regards to new things we do with computers that >>>>>>>>>>>>>>>>> apparently turing machines as a model don't have >>>>>>>>>>>>>>>>> variations of ...

    No, it still handles that which it was developed for. >>>>>>>>>>>>>>>
    well it was developed to be a general theory of >>>>>>>>>>>>>>> computing, and apparently modern computing has
    transcended that theory ...

    Not really.

    The way modern processors work, "sub-routines" can fail to >>>>>>>>>>>>>> be computations, but whole programs will tend to be. Sub- >>>>>>>>>>>>>> routines CAN be built with care to fall under its guidance. >>>>>>>>>>>>>
    lol, what are they even if not "computations"???

    not-computations

    great, a set of deterministic steps that produces a result >>>>>>>>>>> but is somehow not a computation!

    Because it isn't deterministically based on the INPUT,

    no it's just a series of steps to produce some output.

    Nope, not in the formulation of the theory.

    again: YOU HAVE NOT PROVEN THAT TURING MACHINES, OR ANY
    EQUIVALENT THEORY, ENCOMPASS ALL POSSIBLE COMPUTATIONS

    like holy fuck, how many times will i need to repeat that???

    it's a ct-THESIS not a ct-LAW

    But I can say that Computations as defined, are all that they can do. >>>>>
    i will never care about you complaining about the fact the
    computations i'm talking about don't fit within the particular box
    you call a "Computation", because it just doesn't mean anything,

    In other words, you are just saying you don't care about computation
    theory, and thus why are you complaining about what it says about
    computations.

    no i'm saying i don't care about ur particular definition, richard

    do better than trying to "define" me as wrong. meaning: put in the
    work to demonstrate actual contradictions

    In other words, you want me to prove there isn't a teapot in the
    asteroid belt.

    lol, what. asking for a proof of contradiction is now akin to Russell's teapot???

    You are asking me to disprove something that you won't (and can't) define.


    are u even doing math here or this just a giant definist fallacy
    shitshow???

    No, you just don't know what that means.



    YOU are the one assuming things can be done, but refuse to actually
    try to define an actual algorithm that does so.

    An actual algorithm being an actual sequence of finite atomic steps,
    and using bounded loops.





    u and the entire field can be wrong about how u specified
    "Computation",

    No, you just don't understand the WHY of computation theory.

    u don't give a why u stupid fucking retarded faggot, and u never will
    because the ct-thesis isn't proven, and u've already gone down the
    moronic hole of "maybe my favorite truth isn't even provable!!!??"

    I have mentioned it, but have you bothered to look into it?

    Computation Theory was developed to see if "Computations" of this sort
    could be used to generate proofs of the great problems of mathematics
    and logic.

    It was hoped that it would provide a solution to the then currently
    seeming intractable problems that seemed to have an answer, but they
    just couldn't be found.

    Instead, it showed that it was a provable fact that some problems
    would not have a solution. And thus we had to accept that we couldn't
    prove everything we might want.


    and that fact was only shown, for computing in regards to itself, by
    using self-referential set-classification paradoxes, like the halting problem





    which is the part i'm trying to reconcile, that very specific (but quite broad within tm computing) problem...

    But you are only saying that there must be something else (that is
    Russell's teapot must exist) but can't show it.

    Thus, it is incumbent on YOU to prove or at least define what you are
    claiming to exist.


    i'm not here to spoon feed humanity a general decision algo, cause we assuredly do not have enough number theory to build that at this time.

    It seems you are not here to do anything constructive, only engage in
    flights of fancy imagining things that are not, but assuming they are.


    i'm trying to deal with all the claims of hubris that such a general decision algo *cannot* exist, by showing *how* it could exist alongside
    the potential for self-referential set-classification paradoxes:

    either by showing that we can just ignore the paradoxes, or by utilizing reflective turing machines to decide on them in a context aware manner,
    both are valid resolutions.

    In other words, by ignoring the reality,


    i know u want me to spoon feed you all the answers here, but i'm one freaking dude, with very limited time, and training, stuck with
    discussion that is willfully antagonistic and soaked with fallacy after fallacy,

    turing spent years coming up with his turing jump nonsense, on a brand
    new fresh theory, and people that likely actually tried to be
    collaborative,

    while i've gotta reconcile a massive almost century old bandwagon, /thru argument alone/

    i don't even have the luxury of pointing to an experiment, i've gotta
    come up with a set of purely logical arguments that stand entirely on
    their own right. einstein had it easier

    But, if you listened to people to make sure you were working on solid
    ground, and not flights of fancy, it might be easier, or at least become evident that it is a dead end.

    Even Einstein admitted that his theory was likely "wrong", but was
    better than what we currently had, and WOULD be refined in the future.
    Just like classical mechanics were "wrong" in some cases, but close
    enough for most of the work that they were being used for.

    In the same way, yes, perhaps there is a refinement needed to the
    definition of what a "Computation" is, but just like Einstein's theory,
    it doesn't change the results significantly for what we currently can see.

    Your issue is you need to find that "improved" definition that still
    works for the common cases that we know about, before you can start to
    work out what it implies.

    STARTING with assumptions of that implication is like assuming you can
    find a road network to drive from New York to Paris, France.






    and that potential is well codified by the fact the ct-thesis is
    still a thesis and not a law.

    It might just be a thesis, because it IS an unprovable truth.

    lookie u just accepting things as "muh unprovable truths". holy
    fucking hypocritical fucking faggot

    It isn't "just accepting", it is looking at the proofs and
    understanding the logic of them.

    YOU HAVEN'T PROVEN THE CT-THESIS, MY GOD


    imagine if i pulled that argument out on you wildly unfair irrational
    bastard??

    But all you can do is make baseless claims. My statements of
    unprovable truths are based on real proofs, that seem to be beyond your
    ability to understand.

    YOU ALSO HAVEN'T PROVEN THAT THE CT-THESIS IS UNPROVABLE, MY FUCKING GOD



    u make a complete mockery of reason with the disgustingly idiot
    dogshit u post over and over again...

    How is looking at proofs and accepting their results a mockery?

    BECAUSE UR JUST ARBITRARILY OVERGENERALIZING WITHOUT PROOF,

    OH MY FUCKING GOD

    godel's result is a curse on this species even if he wasn't wrong to
    produce it


    It is the rejection of proofs and thinking things must be different
    that is the mockery.


    holy fuck you dude eat a bag of dicks

    It seems you have eaten them all already,

    sure thing dick




    i will not respond to more comments on this because it's a boring,
    lazy, non-argument that is a fucking waste of both our time.








  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 26 11:43:08 2026
    From Newsgroup: comp.theory

    On 1/26/26 1:50 AM, dart200 wrote:
    On 1/25/26 2:40 PM, Richard Damon wrote:
    On 1/25/26 4:04 PM, dart200 wrote:
    On 1/25/26 10:23 AM, Richard Damon wrote:
    On 1/24/26 9:05 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.


    In other words, you just don't know what you are talking about.

    The fact that it is impossible to build a computation that, given >>>>>> a representation of another computation and its input, determine
    for all cases if the computation will halt does nothing to further >>>>>> the question of whether Turing Machines are the most powerful form of
    computation.

    contexts-aware machines compute functions:

    (context,input) -> output
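    The signature above, and Richard's objection to it, can be made concrete in a few lines. A minimal sketch (all names here, like `sub` and `context`, are invented for illustration and not from the thread): a routine that reads hidden context is deterministic as a function of (context, input), but is NOT a function of its controllable argument alone.

```python
# Sketch of the disputed point. `sub` reads hidden context, so its
# result is deterministic in (context, input) but is NOT a function of
# its controllable argument alone; `as_function_of_both` makes the same
# mapping an ordinary computation by passing the context explicitly.
# All names here are invented for illustration.

context = {"caller": "A"}  # hidden state the caller never passes in

def sub(x: int) -> int:
    # Same x, different results under different hidden contexts.
    return x + (1 if context["caller"] == "A" else 2)

def as_function_of_both(ctx: str, x: int) -> int:
    # The explicit version: output determined by its inputs alone.
    return x + (1 if ctx == "A" else 2)

context["caller"] = "A"
print(sub(5))                       # 6
context["caller"] = "B"
print(sub(5))                       # 7: same input, different output
print(as_function_of_both("B", 5))  # 7, now fixed by the arguments
```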


    And what problems of interest to computation theory are of that form?

    Computation Theory was to answer questions of logic and mathematics.

    What logic or math is dependent on "context"?

    *mechanically computing* the answer *generally* is dependent on context,

    Really?

    Most problems don't care about the context of the person asking it,
    just the context of the thing being looked at.

    well, yes, most problems don't involve pathologically querying a decider specifically for the purpose of then contradicting the decision...

    Which is a problem that doesn't actually depend on the context of the
    asker, so using the context just makes you wrong.




    and ignoring that is the underlying cause of the halting problem

    Nope.


    clearly novel techniques will be required to resolve long standing
    problems, eh richard???

    Or just lying as you try.

    I guess you think the speed of light is just a suggestion. (Yes, there
    are some thoughts about how to break it, but they require things
    totally outside our current physics).

    Yes, there may be a new definition of "Computations" that is actually
    useful, and generates answers to some things we currently think as
    uncomputable, but until you can actually figure out what that is,
    assuming it exists is just science fiction.

    or u'd just call it lying over and over again with no serious
    consideration to what's really being said ...

    Yep, that is a good description of what you are doing.

    You forget to consider the topic you are talking about.

    Either you accept the current definitions, or you actually supply your
    own new ones. Just assuming you can change them without actually doing
    so makes your argument baseless.




    fuck




  • From dart200@user7160@newsgrouper.org.invalid to comp.theory on Mon Jan 26 11:43:10 2026
    From Newsgroup: comp.theory

    On 1/26/26 8:21 AM, Richard Damon wrote:
    On 1/25/26 11:01 PM, dart200 wrote:
    On 1/25/26 2:24 PM, Richard Damon wrote:
    On 1/25/26 4:08 PM, dart200 wrote:
    On 1/25/26 10:20 AM, Richard Damon wrote:
    On 1/24/26 10:30 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:29 PM, dart200 wrote:
    On 1/23/26 5:52 PM, Richard Damon wrote:
    On 1/20/26 8:36 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:
    On 1/20/26 1:46 AM, dart200 wrote:
    On 1/19/26 9:29 PM, Richard Damon wrote:
    On 1/19/26 2:09 AM, dart200 wrote:
    On 1/18/26 12:56 PM, Richard Damon wrote:
    On 1/18/26 1:44 PM, dart200 wrote:
    A Reflective Turing Machine is a mathematical model of a >>>>>>>>>>>>>>>> machine that performs a computation with the following >>>>>>>>>>>>>>>> pieces:

    1) A Tape, infinite in capacity, divided into cells >>>>>>>>>>>>>>>> which, unless otherwise specified, initially contain the >>>>>>>>>>>>>>>> "empty" symbol, and is capable of storing in each cell, >>>>>>>>>>>>>>>> one symbol from a defined finite set of symbols. >>>>>>>>>>>>>>>>
    2) A Head, which at any point of time points to a >>>>>>>>>>>>>>>> specific location on the tape. The head can read the >>>>>>>>>>>>>>>> symbol on the tape at its current position, change the >>>>>>>>>>>>>>>> symbol at the current location as commanded by the state >>>>>>>>>>>>>>>> machine defined below, and move a step at a time in >>>>>>>>>>>>>>>> either direction.

    3) A State Machine, that has a register which store the >>>>>>>>>>>>>>>> current "state" from among a finite listing of possible >>>>>>>>>>>>>>>> states, and includes a "program" of tuples of data: >>>>>>>>>>>>>>>> (Current state, Current Symbol, Operation, New State) >>>>>>>>>>>>>>>> that causes the machine when it matches the (Current >>>>>>>>>>>>>>>> State, Current Symbol), updates the tape/ head according >>>>>>>>>>>>>>>> to the operation, and then transitions to the New State, >>>>>>>>>>>>>>>> and then Begins again. The state machine has a 2ndary >>>>>>>>>>>>>>>> temporary buffer tape to store a copy of the current >>>>>>>>>>>>>>>> tape during certain operations.

    The list of operations possible:

    - HEAD_RIGHT: move the head one cell to the right >>>>>>>>>>>>>>>>
    - HEAD_LEFT: move the head one cell to the left >>>>>>>>>>>>>>>>
    - WRITE(SYMBOL): write SYMBOL to the head

    - REFLECT: will cause a bunch of machine meta- >>>>>>>>>>>>>>>> information to be written to the tape, starting at the >>>>>>>>>>>>>>>> head, overwriting anything its path. The information >>>>>>>>>>>>>>>> written to tape will include 3 components: the "program" >>>>>>>>>>>>>>>> of tuples of data, current tuple that the operation is >>>>>>>>>>>>>>>> part of, and the current tape (the tape state before >>>>>>>>>>>>>>>> command runs). At the end of the Operation, the head >>>>>>>>>>>>>>>> will be moved back to the start of the Operation. >>>>>>>>>>>>>>>
    And where does this "meta-information" come from? >>>>>>>>>>>>>>>
    How do you translate the mathematical "tuples" that >>>>>>>>>>>>>>> define the machine into the finite set of symbols of the >>>>>>>>>>>>>>> system.

    when we write a turing machine description and run it ... >>>>>>>>>>>>>> the machine description is dumped as it is written

    But you don't write a turing machine description, you >>>>>>>>>>>>> create a turing machine.

    whatever bro ur just being purposefully obtuse

    No, it is a key point.

    it's a syntax point, which is boring


    A Given Turing machine doesn't have *A* description, so what >>>>>>>>>>> you want it to write out doesn't have a unique definition. >>>>>>>>>>
    ok, it dumps it out as the *description number* as defined by >>>>>>>>>> turing in his paper /on computable numbers/ p240, that can be >>>>>>>>>> uniquely defined for every unique machine

    But that page talks about how to get *A* description number >>>>>>>>> based on an arbitrary assignment of symbols to values. Nowhere >>>>>>>>> does it use *the* as the qualifier.

    Nowhere is that number called "unique". Note, page 241 points >>>>>>>>> out that the number you get here will only produce one
    computable sequence.

    In fact, he doesn't even qualify "standard form" as being
    unique, but it is *a* standard form, as he knows there are many >>>>>>>>> ways to arbitrarily standardize the form.

    literally the last sentence of p240 to the first of p241:

    /The D.N determine the S.D and the structure of the machine
    uniquely/

    but i've quoted that at you before and u denied it before, so >>>>>>>> ofc...

    anyways, if u had more than half a brain u'd know that it
    doesn't really matter what the specific syntax is ... so long as >>>>>>>> whatever it dumps consistently determines the structure of the >>>>>>>> machine uniquely and completely,

    which DNs do, turing demonstrated that with the first paper /on >>>>>>>> computable numbers/, so can we move past theory of computing
    101, k?

    or not i guess, i can hear you angrily typing away a willfully >>>>>>>> contrarian response already!! idk i guess fuck u too eh???
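    For reference, the encoding being argued over (pp. 240-241 of /on computable numbers/) can be sketched in a few lines: state q_i is spelled as D followed by i copies of A, symbol S_j as D followed by j copies of C, tuples are joined with ";", and the Standard Description letters then map to digits to give the Description Number. The tuple layout and helper names below are illustrative choices, not the only possible standardization.

```python
# Turing's Standard Description / Description Number scheme from
# "On Computable Numbers", pp. 240-241. The D.N obtained this way
# determines the S.D, and hence the machine's structure, uniquely.

DIGITS = {"A": "1", "C": "2", "D": "3", "L": "4", "R": "5", "N": "6", ";": "7"}

def enc_state(i: int) -> str:
    return "D" + "A" * i   # q_i  ->  D A^i

def enc_symbol(j: int) -> str:
    return "D" + "C" * j   # S_j  ->  D C^j

def standard_description(tuples) -> str:
    # Each tuple is (current state, scanned symbol, written symbol,
    # move, next state), spelled in letters and terminated by ";".
    return "".join(
        enc_state(qi) + enc_symbol(sj) + enc_symbol(sk) + move + enc_state(qm) + ";"
        for (qi, sj, sk, move, qm) in tuples
    )

def description_number(sd: str) -> int:
    # Map the S.D letters to digits and read the result as one integer.
    return int("".join(DIGITS[ch] for ch in sd))

# Turing's first example machine (prints 0 _ 1 _ 0 _ 1 ...), in
# standard form with S0 = blank, S1 = "0", S2 = "1":
m = [(1, 0, 1, "R", 2), (2, 0, 0, "R", 3), (3, 0, 2, "R", 4), (4, 0, 0, "R", 1)]
print(description_number(standard_description(m)))
```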








    i'm not really sure why you have trouble accepting this, >>>>>>>>>>>>>> let's take a really simple machine using REFLECT:

    Because it is based on a logical error of confusing the >>>>>>>>>>>>> turing machine description that can be given to a UTM to >>>>>>>>>>>>> simulate the machine, and the machine itself.

    The problem is there are MANY different UTMs, and different >>>>>>>>>>>>> UTMs can use different representations, and thus depending >>>>>>>>>>>>> on which UTM you target, you get different descriptions, >>>>>>>>>>>>> many of which are not actually in the same alphabet as the >>>>>>>>>>>>> machine itself.


    <q0 0 REFLECT q1>

    the machine steps would (format: [state tape], head >>>>>>>>>>>>>> indicated by ^:

    [q0 0]
    -a-a-a-a ^
    [q1 <q0 0 REFLECT q1>|<q0 0 REFLECT q1>|0]
    -a-a-a-a ^
    halt
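    The trace above can be reproduced with a toy interpreter. This is only a sketch of the thread's informal REFLECT semantics: the tape is a plain string rather than an unbounded cell array, the dump format `program|current tuple|tape` follows the example as written, and prepending at the head stands in for "overwriting anything in its path".

```python
# Toy interpreter for the one-tuple machine <q0 0 REFLECT q1> traced
# above. Sketch only; the dump format and tape representation are the
# thread's informal ones, not a formal definition.

def run_reflect_machine():
    program = [("q0", "0", "REFLECT", "q1")]
    state, tape, head = "q0", "0", 0
    trace = [(state, tape)]
    while True:
        match = next((t for t in program
                      if t[0] == state and t[1] == tape[head]), None)
        if match is None:
            break  # no tuple matches (current state, current symbol): halt
        _, _, op, new_state = match
        if op == "REFLECT":
            # Dump program, current tuple, and prior tape at the head.
            prog_txt = " ".join("<%s %s %s %s>" % t for t in program)
            tup_txt = "<%s %s %s %s>" % match
            tape = prog_txt + "|" + tup_txt + "|" + tape
        state = new_state
        trace.append((state, tape))
    return trace

for state, tape in run_reflect_machine():
    print(state, tape)
```

Running it yields exactly the two configurations in the trace: `[q0 0]`, then `[q1 <q0 0 REFLECT q1>|<q0 0 REFLECT q1>|0]`, after which no tuple matches and the machine halts.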

    we can quibble about what the format of that dump should >>>>>>>>>>>>>> be in, but that's not actually that interesting to me >>>>>>>>>>>>>
    But it should be. Your problem is you are describing your >>>>>>>>>>>>> Turing Machine in a language easy for people to read, but >>>>>>>>>>>>> awful for a UTM to process.

    how is that relevant to proving principles with it?

    Because it hides the key issue, that there isn't a unique >>>>>>>>>>> description for it to write out. If it gets to arbitrarily >>>>>>>>>>> pick one, then your "program" can't process it, as it doesn't >>>>>>>>>>> know what "language" it needs to interpret.

    please do actually read turing's paper sometime


    You think I haven't?

    yes, i think u haven't. i'm pretty sure u've looked at some of >>>>>>>> the words i've quoted, but that's not the same as reading.






    Most actual work with Turing Machines use very limited >>>>>>>>>>>>> alphabets, while yours looks like it uses near full ASCII. >>>>>>>>>>>>
    so dump the ASCII in their binary equivalents ...??? i'm not >>>>>>>>>>>> typing that shit out just to feed ur massive fking ego. like >>>>>>>>>>>> seriously why do i need to state that to a fucking 70yo >>>>>>>>>>>> chief engineer???

    Which presumes that this is the proper encoding.

    Your problem is you presume that your program is going to be >>>>>>>>>>> able to process the output, when you don't define what it >>>>>>>>>>> will look like.





    Originally, you talked of putting out the tape at the >>>>>>>>>>>>>>> start of when the machine ran. Now you just seem to >>>>>>>>>>>>>>> recopy the tape to avoid overwriting it.

    originally i was storing the initial tape so it could be >>>>>>>>>>>>>> dumped during REFLECT, but i did away with that by just >>>>>>>>>>>>>> making all tapes start blank, requiring users to use the >>>>>>>>>>>>>> machine description itself to build the initial tape >>>>>>>>>>>>>> before running a further computation on it.

    it's just simpler, and i think TMs should fundamentally >>>>>>>>>>>>>> follow this method too

    The problem then is your complete machines can't take an >>>>>>>>>>>>> input, and thus complicate the composition property. >>>>>>>>>>>>>
    This means you MUST be looking at submachines when you >>>>>>>>>>>>> discuss properties of computations.

    we always were.

    the semantic properties of a particular computation have >>>>>>>>>>>> always been defined by the tuple (machine, input)

    No, the semantic properties of a particular computation have >>>>>>>>>>> always been defined by the full process of running the machine. >>>>>>>>>>>
    The "tuple" is a syntax rule. Semantics comes by the complete >>>>>>>>>>> operation of all the syntax.

    i love how you agree with me while making it look like u disagree >>>>>>>>>
    So, you think that *the* tuple (machine, input) means anything >>>>>>>>> like what happens when you actually RUN the machine on the input? >>>>>>>>
    the (machine, input) specifies a particular computation or
    "sequence of machine configurations", yes, so therefore it >>>>>>>>
    consequentially also specifies the semantics of that particular >>>>>>>> computation. whether u determine that specification thru brute >>>>>>>> force or some more intelligent method is quite irrelevant (tho >>>>>>>> i'm sure you'll disagree but idk)

    No "machine" here would seem to be the description/definition of >>>>>>> the machine in some form.

    That is a syntactic statement.

    The RESULT of running that requires the semantic operation of
    running the machine,

    i don't need to manually compute:

    () -> loop()

    to figure out what it does. in fact manually computing it would
    never figure out what it does. our ability to program relies on an >>>>>> ability to compute the result of computations without brute force >>>>>> running them. we just haven't done the work to encode that
    understanding into a general algo because sheeple like ur are hell >>>>>> bent on shooting urself in the foot with endless gishgallop denial. >>>>>
    Sure it can, as a simple loop detector will detect the repeated state, >>>>
    which requires more than pure brute forcing because at the very
    least ur storing and comparing to all past states in order to detect
    the loop


    The classic answer was two simulators simulating the same machine,
    one stepping two steps at a time, and the other 1 step at a time. If
    ever the two simulators are in the same state, check if the tapes are
    identical. If so, you have your loop.

    ah yeah i do remember that

    Which shows how much you get your exercise by leaping to false conclusions,

    it's a time vs space trade off
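    The two-simulator scheme Richard describes is the classic tortoise-and-hare cycle detector, which trades the memory of storing every past configuration for roughly double the stepping work. A minimal sketch, with a toy integer `step` function standing in for "advance one full (state, head, tape) configuration":

```python
# Two simulators over the same machine: `fast` takes two steps for every
# one step of `slow`. If the configuration sequence enters a cycle, the
# two must eventually coincide; only O(1) configurations are stored, at
# the cost of roughly 3x as many step operations.
# `step` is a toy stand-in, not a real Turing machine transition.

def step(state: int) -> int:
    # Toy transition: a lead-in 0..9, then the cycle 10 -> ... -> 14 -> 10.
    return state + 1 if state < 14 else 10

def finds_loop(start: int, max_steps: int = 10_000) -> bool:
    slow = fast = start
    for _ in range(max_steps):
        slow = step(slow)        # one step at a time
        fast = step(step(fast))  # two steps at a time
        if slow == fast:         # same configuration reached twice: a loop
            return True
    return False

print(finds_loop(0))  # True: this toy machine enters a cycle
```

The alternative mentioned earlier in the thread, storing and comparing against all past states, finds the loop in fewer steps but needs unbounded memory; hence the time vs space trade-off.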





    Yes, you can define syntactic rules to handle the simpler cases,
    but they CAN NOT handle a general program.

    why because muh paradox??? lol

    No, because you can define a finite set of syntax rules that detect
    a finite set of loop constructions.

    it's definitely at least an infinite set, just not as infinite as a
    turing complete one ...

    No, I said for a FINITE set of loop constructions. Of course the
    complete set of loop constructions is infinite, but we can make a finite sub-set of them, and detect that with a finite set of syntax rules,

    it would be detecting a countably infinite subset of infinite loops


    It is the fact that there are an infinite set of loop constructions that means you can't completely detect behavior with syntactic analysis, but
    need to run/simulate the machine.





    Yes, what a PROVABLY CORRECT program does can likely be determined
    without running it, *IF* you are given the proof. The issue is >>>>> that most programs are not provably correct.

    The problem is that proving a given program is provably correct is
    generally not computable.






    The tuple is just the symbolic expression of the machine and >>>>>>>>> input. That is SYNTACTIC.

    RUNNING the machine is what gets us to the semantics.




    And, because if your use of a reflect instruction changes >>>>>>>>>>>>> the output based on different detected contexts, those >>>>>>>>>>>>> submachines are no longer actually computations.

    the context *is* the input,

    Then your sub-machines are not fully controllable, as part of >>>>>>>>>>> their input isn't settable by their caller, but is forced by >>>>>>>>>>> things beyond its ability to control

    before you run any machine you can examine the entire machine >>>>>>>>>> including the interplay between context-independent and
    context- dependent sub- machines, so idgaf about ur complaints >>>>>>>>>> of "uncontrollable" machines

    But that is the problem, you CAN'T do that, as parts of the >>>>>>>>> semantics are only determinable by running the machine.

    begging the question, again

    No, seems to be you not knowing what you are saying.






    the formal parameters are just part of the total input

    So, your definition is about uncontrollable machines.

    the point is indeed to be able to assert correctness even when >>>>>>>>>> malicious input is given

    And how does not being able to fully give the input to an
    algorithm help you here?

    computations can lie about context in the total enumeration
    causing paradoxes,

    Computations are soul-less machines, they can't "lie" as that is >>>>>>> an act of will or judgement.

    Since the halting problem doesn't depend on the context we are
    asking the decider on, there is no lie about that that can matter. >>>>>>>

    REFLECT cannot do so by definition.

    And thus make the "input" uncontrolled, or the machine not
    perform a computation (depending on your definitions)

    it's entirely controlled by the context (which include formal
    params), which a programmer can account for when programming.

    But that only applies to problems based on the context of the
    program that is asking, and NOT about problems that are independent >>>>> of such context.

    Questions like does a given program halt, is not dependent on the
    context of the machine that is trying to ask the question.


    it's done all the time in react, it's not a big deal.

    Because react isn't based on "computations".

    Of course, since you don't understand what that means, it means
    nothing to you.

    it doesn't mean anything beside it doesn't fit into some weird
    little box u keep arguing about that idgaf about because no one's
    proven that said little box to actually be all of the box ...

    No, you are just showing you don't understand what you are talking
    about, and just say anything you don't understand doesn't matter.

    That is just your "stupid" talking

    not an argument, just continued definist fallacy.

    post actual contradiction or shut the fuck up tbh.







    i cannot define away lies in the total enumeration of all
    possible input ... i can do so with the mechanisms of machine >>>>>>>> itself, however

    But since the question doesn't depend on the context it is asked >>>>>>> in, we can't lie about anything that actually matters.

    the ability to compute an accurate response, however, does

    Which just means the question isn't computable.

    to a reasonable person it would,

    Only someone that doesn't know what the word is defined to be in this
    context.

    more definist fallacy


    Of course, Stupid people agree to all sorts of nonsense.

    ad hominem



    ur concerned about little boxes without actual justified reason. u
    never actually put in the work to show contradictions u just
    complain about definitions in an endless slop of various definist
    fallacies.

    Nope, but it seems you don't understand the basics to understand what
    I am saying.

    definist fallacy




    You don't seem to understand that basic definition of a computation. >>>>>
    Which of course, just means your idea of an expanded theory is >>>>>
    almost certainly worthless.






    It says that you can't actually fully test your machine, as you >>>>>>>>> can't

    why even test things when u can prove them instead???

    Because it is hard to prove behavior of uncontrollable inputs.

    It is also hard to prove something correct, if your API doesn't
    even permit asking the question you are supposed to be asking.

    And, if the result has one correct answer for the part you can
    give, giving two (or more) different answers based on
    uncontrollable input makes your machine BY DEFINITION incorrect.

    it's incredibly ironic that ur complaining about not being able
    to test things in a discussion i undertake with a goal to replace
    testing with proofs...

    But, if I can't GIVE the required input to get the results, you
    can't prove I can get the right result.




    and the pathetic part is ur total lack of ability to have any
    foresight or vision

    even attempt to generate all classes of input, and you are
    trying to define that answers about something can depend on
    context that doesn't actually affect that thing, but is the
    context of the asker.






    i'm still copying the whole tape like before, since that's
    needed to fully describe the current configuration


    Since the length of the tape isn't bounded (but is
    finite) how do you go back to the original start of
    operation? Remember, the "machine" has a fixed
    definition, and thus fixed size "registers". Not a
    problem for a standard Turing Machine, as the only
    register is the current state, and we know how many
    states are in the design, so how "big" that register
    needs to be.

    unbounded buffer, just like the tape

    And thus NOT a valid computation atom. Sorry.

    Fundamental in the definition is its boundedness.

    making shit up gish gallop

    Nope, maybe you should study the initial problem.




    heck you could intersperse this buffer among the tape
    using alternating cells, but it's mentally easier and more
    theoretically secure to just keep it metaphysically
    separated so it's inaccessible to pathological computations.

    Not following the rules makes it just invalid.

    the thing has an unbounded tape, it can also have an
    unbounded portion of that tape sectioned off just for
    machine-level use, and still have unbounded space for
    whatever computation-level work is done.

    Right, but that is the "input" not the algorithm part of the
    machine.
    The ALGORITHM needs to be boundedly described.

    THe input just needs to be finite, but can be unbounded.

    context is necessarily finite in length, as all individual
    configurations/steps of the machine are necessarily finite in
    length.




    hilbert's hotel is great, no?

    But it is the hotel, not the desk that is infinite.



    Learn how to follow the rules.



    It seems you still have "bugs" in your finite machine
    that is your processing core.

    these kinda details kinda bore me, being obvious to
    me, ya know?

    Which is what makes your work just garbage. Good work looks
    at the details.

    I thought you wanted provably correct programs. It is the
    ignoring of the details that causes most of the bugs you
    don't want.

    none of this gish gallop has anything to do with the
    theoretical power of the system

    sure it does. A system that doesn't exist can't do anything.





    4) A defined starting state, and starting head position

    This list of tuples can only have a single entry for
    (Current State, Current Symbol), and if no entry matches
    the current condition, the machine halts.

    ... wow that was an extremely boring exercise of futility
    that will convince a certain massive dick of absolutely
    fucking nothing that u already hadn't been convinced of.

    it's funny that richard considers himself too stupid to
    follow the REFLECT operation


    You still haven't defined HOW to generate the output, and
    you have changed your definition, as originally it put
    out the original tape contents.

    Why do you need to write out the current tape contents?

    because a machine simulated within a base level machine
    runtime needs access to the base level machine tape to
    know where it is in the overall machine execution.

    Why do you need to write out the tuple that does the
    "Reflect" operation?

    because the transition being made is "where" the machine
    is within the machine description

    ultimately: there is a fixed tuple that describes the
    overall machine's initial configuration, and there is a fixed
    tuple that describes the machine's current configuration,

    and REFLECT needs to dump all the information required to
    build both those tuples so a simulation can be done to
    generate all the steps between the initial tuple and
    current tuple, which then allows for computing everything
    that can be known about "where" that REFLECT was done with
    respect to the overall machine execution.
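
    The replay described above can be sketched concretely. This is a
    minimal, hypothetical sketch (the names `Config`, `step`, and
    `reflect_dump` are mine, not from the thread), assuming a standard
    tuple-table machine with a non-negative head position: the dump is
    just the initial configuration plus a step count, from which every
    intermediate configuration can be regenerated.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    """One full machine configuration: state, head position, tape snapshot."""
    state: str
    head: int
    tape: tuple  # finite snapshot of the used portion of the tape

def step(program, cfg, blank="_"):
    """Apply one (state, symbol) -> (write, move, new_state) rule.
    Returns None when no rule matches (the machine halts).
    Sketch assumption: head position stays non-negative."""
    tape = dict(enumerate(cfg.tape))
    sym = tape.get(cfg.head, blank)
    rule = program.get((cfg.state, sym))
    if rule is None:
        return None
    write, move, new_state = rule
    tape[cfg.head] = write
    hi = max(tape)
    snapshot = tuple(tape.get(i, blank) for i in range(hi + 1))
    return Config(new_state, cfg.head + move, snapshot)

def reflect_dump(program, initial, current_steps):
    """Replay from the initial configuration to recover every step up to
    the point REFLECT fired -- the 'where am i' information in the text."""
    history, cfg = [initial], initial
    for _ in range(current_steps):
        cfg = step(program, cfg)
        if cfg is None:
            break
        history.append(cfg)
    return history
```

    The point of the sketch: given only the two fixed tuples (initial
    and current configuration), deterministic replay recovers the whole
    execution prefix between them.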



















    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 26 11:43:39 2026
    From Newsgroup: comp.theory

    On 1/26/26 8:39 AM, Richard Damon wrote:
    On 1/26/26 12:56 AM, dart200 wrote:
    On 1/25/26 2:36 PM, Richard Damon wrote:
    On 1/25/26 4:05 PM, dart200 wrote:
    On 1/25/26 10:21 AM, Richard Damon wrote:
    On 1/24/26 9:24 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:36 PM, dart200 wrote:
    On 1/24/26 6:44 AM, Richard Damon wrote:
    On 1/20/26 8:55 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:
    On 1/20/26 1:18 AM, dart200 wrote:
    On 1/19/26 9:29 PM, Richard Damon wrote:
    On 1/18/26 11:51 PM, dart200 wrote:
    On 1/18/26 4:28 PM, Richard Damon wrote:
    On 1/18/26 4:50 PM, dart200 wrote:
    On 1/18/26 12:56 PM, Richard Damon wrote:
    On 1/18/26 1:15 PM, dart200 wrote:
    On 1/18/26 4:05 AM, Richard Damon wrote:
    On 1/18/26 1:05 AM, dart200 wrote:
    On 1/17/26 7:28 PM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>>> On 1/17/26 10:14 PM, dart200 wrote:

    Good luck starving to death when your money runs out.

    one can only hope for so much sometimes EfOA



    I guess you don't understand the rules of logic.
    also not an argument

    Again, YOUR PROBLEM.





    it's pretty crazy i can produce a machine (even
    if u haven't understood it yet) that produces a
    consistent deterministic result that is "not a
    computation".

    Because you get that result only by equivocating
    on your definitions.

    If the context is part of the input to make the
    output deterministic from the input, then they fail
    to be usable as sub-computations, as we can't
    control that context part of the input.

    When we look at just the controllable input for a
    sub-computation, the output is NOT a
    deterministic function of that input.


    not sure what the fuck it's doing if it's not a
    computation

    It's using hidden inputs that the caller can't
    control.

    which we do all the time in normal programming,
    something which apparently u think the tHeOrY oF
    CoMpUtInG fails to encapsulate

    Right, but that isn't about computations.


    pretty crazy we do a bunch of "non-computing" in
    the normal act of programming computers

    Why?

    As I have said, "Computations" is NOT about how
    modern computers work.

    I guess you are just showing that you fundamentally
    don't understand the problem field you are betting
    your life on.

    one would presume the fundamental theory of
    computing would be general enough to encapsulate
    everything computed by real world computers, no???

    Why?

    Remember, the fundamental theory of Computing
    PREDATES the computer as you know it.

    so ur saying it's outdated and needs updating in
    regards to new things we do with computers that
    apparently turing machines as a model don't have
    variations of ...

    No, it still handles that which it was developed for.

    well it was developed to be a general theory of
    computing, and apparently modern computing has
    transcended that theory ...

    Not really.

    The way modern processors work, "sub-routines" can fail
    to be computations, but whole programs will tend to be.
    Sub-routines CAN be built with care to fall under its
    guidance.

    lol, what are they even if not "computations"???

    not-computations

    great, a set of deterministic steps that produces a result
    but is somehow not a computation!

    Because it isn't deterministically based on the INPUT,

    no it's just a series of steps to produce some output.

    Nope, not in the formulation of the theory.

    again: YOU HAVE NOT PROVEN THAT TURING MACHINES, OR ANY
    EQUIVALENT THEORY, ENCOMPASS ALL POSSIBLE COMPUTATIONS

    like holy fuck, how many times will i need to repeat that???

    it's a ct-THESIS not a ct-LAW

    But I can say that Computations as defined, are all that they can
    do.

    i will never care about you complaining about the fact the
    computations i'm talking about don't fit within the particular box
    you call a "Computation", because it just doesn't mean anything,

    In other words, you are just saying you don't care about
    computation theory, and thus why are you complaining about what it
    says about computations.

    no i'm saying i don't care about ur particular definition, richard

    do better than trying to "define" me as wrong. meaning: put in the
    work to demonstrate actual contradictions

    In other words, you want me to prove there isn't a teapot in the
    asteroid belt.

    lol, what. asking for a proof of contradiction is now akin to russel's
    teapot???

    You are asking me to disprove something that you won't (and can't) define.

    i tried to but ur incredibly uncooperative



    are u even doing math here or this just a giant definist fallacy
    shitshow???

    No, you just don't know what that means.



    YOU are the one assuming things can be done, but refuse to actually
    try to define an actual algorithm that does so.

    An actual algorithm being an actual sequence of finite atomic steps,
    and using bounded loops.





    u and the entire field can be wrong about how u specified
    "Computation",

    No, you just don't understand the WHY of computation theory.

    u don't give a why u stupid fucking retarded faggot, and u never
    will because the ct-thesis isn't proven, and u've already gone down
    the moronic hole of "maybe my favorite truth isn't even provable!!!??"

    I have mentioned it, but have you bothered to look into it?

    Computation Theory was developed to see if "Computations" of this sort
    could be used to generate proofs of the great problems of mathematics
    and logic.

    It was hoped that it would provide a solution to the then currently
    seemingly intractable problems that seemed to have an answer, but it
    just couldn't be found.

    Instead, it showed that it was a provable fact that some problems
    would not have a solution. And thus we had to accept that we couldn't
    prove everything we might want.


    and that fact was only shown, for computing in regards to itself, by
    using self-referential set-classification paradoxes, like the halting
    problem


    which is the part i'm trying to reconcile, that very specific (but
    quite broad within tm computing) problem...

    But you are only saying that there must be something else (that is
    Russel's teapot must exist) but can't show it.

    Thus, it is incumbent on YOU to prove or at least define what you are claiming to exist.


    i'm not here to spoon feed humanity a general decision algo, cause we
    assuredly do not have enough number theory to build that at this time.

    It seems you are not here to do anything constructive, only engage in flights of fancy imagining things that are not, but assuming they are.

    debunking a widely accepted misproof is constructive in ways neither of
    us can imagine

    i don't need to make ALL the progress in order to make SOME progress.
    i'm *extremely* tired of people spouting perfectionist fallacies at me

    (oooo, add that fallacy to the list rick! what number are we at???)



    i'm trying to deal with all the claims of hubris that such a general
    decision algo *cannot* exist, by showing *how* it could exist
    alongside the potential for self-referential set-classification
    paradoxes:

    either by showing that we can just ignore the paradoxes, or by
    utilizing reflective turing machines to decide on them in a context
    aware manner, both are valid resolutions.

    In other words, by ignoring the reality,

    gaslighting again



    i know u want me to spoon feed you all the answers here, but i'm one
    freaking dude, with very limited time, and training, stuck with
    discussion that is willfully antagonistic and soaked with fallacy
    after fallacy,

    turing spent years coming up with his turing jump nonsense, on a brand
    new fresh theory, and people that likely actually tried to be
    collaborative,

    while i've gotta reconcile a massive almost century old bandwagon, /
    thru argument alone/

    i don't even have the luxury of pointing to an experiment, i've gotta
    come up with a set of purely logical arguments that stand entirely on
    their own right. einstein had it easier

    But, if you listened to people to make sure you were working on solid ground, and not flights of fancy, it might be easier, or at least become evident that it is a dead end.

    lol, u claim it's a dead end but can't even explain why other than repeatedly crying definist fallacy over and over again. heck u can't even explain
    to me what i think tbh, and i know u can't.

    i refuse to buy into fallacy gishgallop, and that's a good thing


    Even Einstein admitted that his theory was likely "wrong", but was
    better than what we currently had, and WOULD be refined in the future.
    Just like classical mechanics were "wrong" in some cases, but close
    enough for most of the work that they were being used for.

    In the same way, yes, perhaps there is a refinement needed to the
    definition of what a "Computation" is, but just like Einstein's theory,
    it doesn't change the results significantly for what we currently can see.

    u haven't acknowledged any specific refinement, so u can't say that it
    can or cannot change in terms of results. ur just begging the question
    due to hubris.


    Your issue is you need to find that "improved" definition that still
    works for the common cases that we know about, before you can start to
    work out what it implies.

    STARTING with assumptions of that implication, is like assuming you can
    find a road network to drive from New York to Paris, France.






    and that potential is well codified by the fact the ct-thesis is
    still a thesis and not a law.

    It might just be a thesis, because it IS an unprovable truth.

    lookie u just accepting things as "muh unprovable truths". holy
    fucking hypocritical fucking faggot

    It isn't "just accepting", it is looking at the proofs and
    understanding the logic of them.

    YOU HAVEN'T PROVEN THE CT-THESIS, MY GOD


    imagine if i pulled that argument out on you wildly unfair
    irrational bastard??

    But all you can do is make baseless claims. My statements of
    unprovable truths are based on real proofs, that seem to be beyond your
    ability to understand.

    YOU ALSO HAVEN'T PROVEN THAT THE CT-THESIS IS UNPROVABLE, MY FUCKING GOD



    u make a complete mockery of reason with the disgustingly idiot
    dogshit u post over and over again...

    How is looking at proofs and accepting their results a mockery?

    BECAUSE UR JUST ARBITRARILY OVERGENERALIZING WITHOUT PROOF,

    OH MY FUCKING GOD

    godel's result is a curse on this species even if he wasn't wrong to
    produce it


    It is the rejection of proofs and thinking things must be different
    that is the mockery.


    holy fuck you dude eat a bag of dicks

    It seems you have eaten them all already,

    sure thing dick rLiN+A




    i will not respond to more comments on this because it's a boring,
    lazy, non-argument that is a fucking waste of both our time.








  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 26 11:45:58 2026
    From Newsgroup: comp.theory

    On 1/26/26 8:43 AM, Richard Damon wrote:
    On 1/26/26 1:50 AM, dart200 wrote:
    On 1/25/26 2:40 PM, Richard Damon wrote:
    On 1/25/26 4:04 PM, dart200 wrote:
    On 1/25/26 10:23 AM, Richard Damon wrote:
    On 1/24/26 9:05 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.
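
    A toy illustration of the "finite string transformation rules"
    picture (this rewriting engine and its rules are a hypothetical
    sketch, not anything from the post): computation as repeatedly
    applying a finite rule set to a finite input string until no rule
    fires.

```python
def transform(rules, s, max_steps=1000):
    """Apply finite string-rewriting rules (old, new) to an input string
    until no rule fires -- the 'finite string transformation' picture of
    computing. The step budget only guards this sketch against loops."""
    for _ in range(max_steps):
        for old, new in rules:
            if old in s:
                s = s.replace(old, new, 1)  # apply first matching rule once
                break
        else:
            return s                        # no rule applies: halt with result
    raise RuntimeError("step budget exhausted")

# Unary addition as a single rewrite rule: "1+1" -> "11";
# transform([("1+1", "11")], "11+111") rewrites 2 + 3 into five 1s.
```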


    In other words, you just don't know what you are talking about.

    The fact that it is impossible to build a computation that, given
    a representation of another computation and its input, determines
    for all cases if the computation will halt does nothing to
    further the question of whether Turing Machines are the most powerful
    form of computation.

    contexts-aware machines compute functions:

    (context,input) -> output
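
    The `(context, input) -> output` shape can be sketched as an
    ordinary function of a pair (a hypothetical toy, not the RTM
    mechanism itself): the output is deterministic in the pair, but
    looks non-deterministic if you project away the context argument,
    which is the objection raised downthread.

```python
def context_aware(context, x):
    """Output depends deterministically on the PAIR (context, input).
    Viewed as a function of x alone it appears non-deterministic,
    because the caller does not fix the context."""
    if context == "pathological":
        return not x
    return x
```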


    And what problems of interest to computation theory are of that form?

    Computation Theory was to answer questions of logic and mathematics.

    What logic or math is dependent on "context"?

    *mechanically computing* the answer *generally* is dependent on
    context,

    Really?

    Most problems don't care about the context of the person asking it,
    just the context of the thing being looked at.

    well, yes, most problems don't involve pathologically querying a
    decider specifically for the purpose of then contradicting the
    decision... EfOa

    Which is a problem that doesn't actually depend on the context of the
    asker, so using the context just makes you wrong.

    yes it does.

    the self-referential set-classification paradox can *only* provably
    happen when a decider is called from within a pathological context (the paradoxical input machine), which is why i don't think it
    over-generalizes to disproving our ability to compute the answer in non-pathological contexts.
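
    For reference, the "pathological context" under discussion is the
    standard diagonal construction, whose shape can be sketched as
    follows (`h` here is a deliberately fake stand-in for a claimed
    halt decider, since a real one cannot exist):

```python
def h(func, arg):
    """Stand-in for a claimed halt decider: supposed to return True iff
    func(arg) would halt. This fake always answers 'halts', just so the
    sketch runs; any fixed verdict it gives gets contradicted below."""
    return True

def d(func):
    """The pathological caller: query the decider about yourself, then
    do the opposite of whatever it predicted."""
    if h(func, func):
        while True:      # predicted to halt -> loop forever
            pass
    return None          # predicted to loop -> halt immediately

# Whatever verdict h gives on (d, d), d(d) does the opposite -- the
# contradiction the halting-problem proof rests on.
```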

    TMs don't have an ability to discern between contexts, which is why
    current theory accepts that it does...

    the point of my work on RTMs is to grant computation an ability to
    discern between contexts so that we can transcend *that* particular limit.

    this doesn't remove *all* unknowns, i'm not resolving problems of actual complexity or unknowns due to lack of number theory. i'm resolving the self-referential set-classification paradox that underlies much of uncomputability, and to hopefully put a wrench in this rather odd, paradoxical, and quite frankly fallacy drenched feelings of certainty
    about unknowable unknowns.

    WHICH IS FINE, i don't need total instant perfection to make significant progress, my fucking god...





    and ignoring that is the underlying cause of the halting problem

    Nope.


    clearly novel techniques will be required to resolve long standing
    problems, eh richard???

    Or just lying as you try.

    I guess you think the speed of light is just a suggestion. (Yes,
    there are some thoughts about how to break it, but they require
    things totally outside our current physics).

    Yes, there may be a new definition of "Computations" that is actually
    useful, and generates answers to some things we currently think as
    uncomputable, but until you can actually figure out what that is,
    assuming it exists is just science fiction.

    or u'd just call it lying over and over again with no serious
    consideration to what's really being said ...

    Yep, that is a good description of what you are doing.

    You forget to consider the topic you are talking about.

    Either you accept the current definitions, or you actually supply your
    own new ones. Just assuming you can change them without actually doing
    so makes your argument baseless.

    false dichotomy ...

    cause why can't a "new" one just be in fact a rather minor adjustment???





    fuck




  • From Richard Damon@Richard@Damon-Family.org to comp.theory on Mon Jan 26 15:50:34 2026
    From Newsgroup: comp.theory

    On 1/25/26 4:28 PM, dart200 wrote:
    On 1/25/26 1:14 PM, Richard Damon wrote:
    On 1/25/26 4:05 PM, dart200 wrote:
    On 1/25/26 10:20 AM, Richard Damon wrote:
    On 1/24/26 9:10 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:31 PM, dart200 wrote:
    On 1/24/26 2:25 PM, Richard Damon wrote:
    On 1/24/26 3:56 PM, dart200 wrote:
    On 1/24/26 11:52 AM, Richard Damon wrote:
    On 1/24/26 1:33 PM, dart200 wrote:
    On 1/24/26 9:26 AM, Richard Damon wrote:
    On 1/24/26 11:45 AM, dart200 wrote:
    On 1/24/26 4:17 AM, Richard Damon wrote:
    On 1/24/26 3:03 AM, dart200 wrote:
    On 1/23/26 5:36 PM, Richard Damon wrote:
    On 1/20/26 9:30 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:

    The term *IS* defined, and to change it means you lie.

    doubling down on definist fallacy ehh???

    I guess you don't understand the definist fallacy, as
    using the ACTUAL definition isn't a fallacy.

    nah ur just pushing a definition that u've happened to latch
    onto because it's convenient for u

    classic definist fallacy


    Nope, you are just stuck in a definist fallacy about what
    a definist fallacy is.

    It seems you don't understand the concept that some things
    ARE just defined a given way to be in a given context.

    and u richard are not the god of what that is


    But "the field" is, and thus you are just saying it is ok to
    change the meaning of words.

    i don't believe u represent what "the field" is either


    Then go to "the field" and see if they disagree.

    "the field" can come here if they like,

    but as it stands i'm going to call out any more "nOT mUh
    CoMpUTaTiOn" arguments as definist fallacy


    In other words, you are just admitting you don't care what the
    words mean in the field, you will just continue to be a stupid
    and ignorant liar about what you are doing.

    i just don't care what YOU, richard, says "CoMpUTaTiOn" means.
    you aren't "the field" bro, and i just really dgaf about ur
    endless definist fallacy


    But apparently you do, as you aren't just going to present your
    ideas directly to "the field" in a peer-reviewed journal, so
    something is telling you that you have something to fix.

    or rather the peer-review is so gatekept i don't even get a review
    back for my submission, just rejection without review.

    the system is broken such that i will take my stance elsewhere.

    everyone else can ignore me at all our peril...


    Which just shows that you aren't in step with what theory actually
    is talking about.

    That is your problem, you assume the world is wrong, and more than
    likely it is you that is wrong.

    i'm not one continually asserting a bunch of impossible to find
    teapots floating around in machine space

    No, you just keep asserting that you compute impossible to compute
    results.

    while u just keep ignoring how i'm avoiding the pitfalls u use to claim impossibility


    No, you use an assumption that requires something proved impossible that
    you want to claim is possible because it might be.

    Sorry, you need to actually SHOW how to do what you want to claim with
    actual realizable steps.

    And that means you need a COMPUTABLE method to generate your enumerations
    that you iterate through that is complete.


    And, the possibility of unknowable things hiding in machine space
    isn't as crazy as it might seem, as there are an infinite number of
    machines for them to hide within.

    i just love how godel convinced u to believe russel's teapot certainly exists

    He didn't. But Russel shows that claims we need to prove it doesn't are invalid.

    I have shown you the proof that unknowable things must exist. You claim
    they can't, but your only reasoning is based on there being something
    new that we don't know about that you can't actually prove.

    Which of those is a claim of the existence of a Russel's Teapot?

    The thing with a proof, or the things just assumed?





    If you want to break down a "broken" structure, you need to know
    enough about it to SHOW it is broken.

    Just assuming it is just shows that it is most likely YOU that is
    wrong.

    It is more that the system ignores that which tries to break it,
    because getting side tracked on false trails is too damaging.

    To me it seems more of a peril to accept your misguided ideas.

    The fact that you begin by trying to redefine things out of
    ignorance doesn't help your case.






  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 26 17:17:18 2026
    From Newsgroup: comp.theory

    On 1/26/26 2:43 PM, dart200 wrote:
    On 1/26/26 8:39 AM, Richard Damon wrote:
    On 1/26/26 12:56 AM, dart200 wrote:
    On 1/25/26 2:36 PM, Richard Damon wrote:
    On 1/25/26 4:05 PM, dart200 wrote:
    On 1/25/26 10:21 AM, Richard Damon wrote:
    On 1/24/26 9:24 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:36 PM, dart200 wrote:
    On 1/24/26 6:44 AM, Richard Damon wrote:
    On 1/20/26 8:55 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:
    On 1/20/26 1:18 AM, dart200 wrote:
    On 1/19/26 9:29 PM, Richard Damon wrote:
    On 1/18/26 11:51 PM, dart200 wrote:
    On 1/18/26 4:28 PM, Richard Damon wrote:
    On 1/18/26 4:50 PM, dart200 wrote:
    On 1/18/26 12:56 PM, Richard Damon wrote:
    On 1/18/26 1:15 PM, dart200 wrote:
    On 1/18/26 4:05 AM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>> On 1/18/26 1:05 AM, dart200 wrote:
    On 1/17/26 7:28 PM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>>>> On 1/17/26 10:14 PM, dart200 wrote: >>>>>>>>>>>>>>>>>>>>>>
    Good luck starving to death when your money runs out.

    one can only hope for so much sometimes EfOA



    I guess you don't understand the rules of logic.
    also not an argument

    Again, YOUR PROBLEM.





    it's pretty crazy i can produce a machine (even
    if u haven't understood it yet) that produces a
    consistent deterministic result that is "not a
    computation".

    Because you get that result only by equivocating
    on your definitions.

    If the context is part of the input to make the
    output deterministic from the input, then they fail
    to be usable as sub-computations, as we can't
    control that context part of the input.

    When we look at just the controllable input for a
    sub-computation, the output is NOT a
    deterministic function of that input.


    not sure what the fuck it's doing if it's not a
    computation

    It's using hidden inputs that the caller can't
    control.

    which we do all the time in normal programming,
    something which apparently u think the tHeOrY oF
    CoMpUtInG fails to encapsulate

    Right, but that isn't about computations.


    pretty crazy we do a bunch of "non-computing" in
    the normal act of programming computers

    Why?

    As I have said, "Computations" is NOT about how
    modern computers work.

    I guess you are just showing that you
    fundamentally don't understand the problem field
    you are betting your life on.

    one would presume the fundamental theory of
    computing would be general enough to encapsulate
    everything computed by real world computers, no???

    Why?

    Remember, the fundamental theory of Computing
    PREDATES the computer as you know it.

    so ur saying it's outdated and needs updating in
    regards to new things we do with computers that
    apparently turing machines as a model don't have
    variations of ...

    No, it still handles that which it was developed for.

    well it was developed to be a general theory of
    computing, and apparently modern computing has
    transcended that theory ...

    Not really.

    The way modern processors work, "sub-routines" can fail
    to be computations, but whole programs will tend to be.
    Sub-routines CAN be built with care to fall under its
    guidance.

    lol, what are they even if not "computations"???

    not-computations

    great, a set of deterministic steps that produces a result
    but is somehow not a computation!

    Because it isn't deterministically based on the INPUT,

    no it's just a series of steps to produce some output.

    Nope, not in the formulation of the theory.

    again: YOU HAVE NOT PROVEN THAT TURING MACHINES, OR ANY
    EQUIVALENT THEORY, ENCOMPASS ALL POSSIBLE COMPUTATIONS

    like holy fuck, how many times will i need to repeat that???

    it's a ct-THESIS not a ct-LAW

    But I can say that Computations as defined, are all that they
    can do.

    i will never care about you complaining about the fact the
    computations i'm talking about don't fit within the particular
    box you call a "Computation", because it just doesn't mean anything,
    In other words, you are just saying you don't care about
    computation theory, and thus why are you complaining about what it >>>>>> says about computations.

    no i'm saying i don't care about ur particular definition, richard

    do better than trying to "define" me as wrong. meaning: put in the
    work to demonstrate actual contradictions

    In other words, you want me to prove there isn't a teapot in the
    asteroid belt.

    lol, what. asking for a proof of contradiction is now akin to
    russell's teapot???

    You are asking me to disprove something that you won't (and can't)
    define.

    i tried to but ur incredibly uncooperative

    No, because a PROOF starts with things actually defined, and is not
    based on an assumption of something that isn't.

    ALL your proofs have been based on the assumption of something being
    computable that isn't, sometimes a complete enumeration of a class
    or sometimes some operation that isn't computable.

    When I point out what isn't computable, rather than showing how it IS
    computable, you ask me to prove that it isn't.

    THAT is not how a proof goes. YOU need to actually justify all your
    assumptions, and if one is questioned, show that it is correct.

    Sorry, you are just proving you don't understand your task at hand.





    are u even doing math here or this just a giant definist fallacy
    shitshow???

    No, you just don't know what that means.



    YOU are the one assuming things can be done, but refuse to actually
    try to define an actual algorithm that does so.

    An actual algorithm being an actual sequence of finite atomic steps,
    and using bounded loops.
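    The definition just stated can be sketched concretely: a bounded loop
    carries a trivial termination bound by construction, which an unbounded
    search does not. `is_perfect` and the search limit are illustrative
    choices, not anything from the thread.

```python
# Sketch: an algorithm as a finite sequence of atomic steps with
# bounded loops. The loop below runs at most `limit` times, so
# termination is provable by inspection.

def is_perfect(n: int) -> bool:
    # A number equal to the sum of its proper divisors (e.g. 6, 28).
    return n > 0 and sum(d for d in range(1, n) if n % d == 0) == n

def find_perfect_bounded(limit: int):
    # Bounded search: terminates after at most `limit` iterations.
    for n in range(1, limit + 1):
        if is_perfect(n):
            return n
    return None  # none found within the bound

print(find_perfect_bounded(10))  # 6
print(find_perfect_bounded(4))   # None
```

    An unbounded `while not is_perfect(n): n += 1` search, by contrast,
    offers no such guarantee without a separate proof that a solution exists.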





    u and the entire field can be wrong about how u specified
    "Computation",

    No, you just don't understand the WHY of computation theory.

    u don't give a why u stupid fucking retarded faggot, and u never
    will because the ct-thesis isn't proven, and u've already gone down >>>>> the moronic hole of "maybe my favorite truth isn't even provable!!!??" >>>>
    I have mentioned it, but have you bothered to look into it?

    Computation Theory was developed to see if "Computations" of this
    sort could be used to generate proofs of the great problems of
    mathematics and logic.

    It was hoped that it would provide a solution to the then currently
    seemingly intractable problems that seemed to have an answer, but they
    just couldn't be found.

    Instead, it showed that it was a provable fact that some problems
    would not have a solution. And thus we had to accept that we
    couldn't prove everything we might want.


    and that fact was only shown, for computing in regards to itself, by
    using self-referential set-classification paradoxes, like the halting
    problem


    which is the part i'm trying to reconcile, that very specific (but
    quite broad within tm computing) problem...

    But you are only saying that there must be something else (that is,
    Russell's teapot must exist) but can't show it.

    Thus, it is incumbent on YOU to prove or at least define what you are
    claiming to exist.


    i'm not here to spoon feed humanity a general decision algo, cause we
    assuredly do not have enough number theory to build that at this time.

    It seems you are not here to do anything constructive, only engage in
    flights of fancy imagining things that are not, but assuming they are.

    debunking a widely accepted misproof is constructive in ways neither of
    us can imagine

    Then try to show where the ERROR in the proof is.

    If there isn't an error, it isn't a "misproof"


    i don't need to make ALL the progress in order to make SOME progress.
    i'm *extremely* tired of people spouting perfectionist fallacies at me

    But to claim you can handle the actual Halting problem, YOU NEED to be perfect.

    I guess you just are doing your lying definitions again.


    (oooo, add that fallacy to list rick! what number are we at???)





    i'm trying to deal with all the claims of hubris that such a general
    decision algo *cannot* exist, by showing *how* it could exist
    alongside the potential for self-referential set-classification
    paradoxes:

    either by showing that we can just ignore the paradoxes, or by
    utilizing reflective turing machines to decide on them in a context
    aware manner, both are valid resolutions.

    In other words, by ignoring the reality,

    gaslighting again

    Nope, but I think your brain went to sleep from the gas.




    i know u want me to spoon feed you all the answers here, but i'm one
    freaking dude, with very limited time, and training, stuck with
    discussion that is willfully antagonistic and soaked with fallacy
    after fallacy,

    turing spent years coming up with his turing jump nonsense, on a
    brand new fresh theory, and people that likely actually tried to be
    collaborative,

    while i've gotta reconcile a massive almost century old bandwagon, /
    thru argument alone/

    i don't even have the luxury of pointing to an experiment, i've gotta
    come up with a set of purely logical arguments that stand entirely on
    their own right. einstein had it easier

    But, if you listened to people to make sure you were working on solid
    ground, and not flights of fancy, it might be easier, or at least
    become evident that it is a dead end.

    lol, u claim it's a dead end but can't even explain why other than
    repeatedly crying definist fallacy over and over again. heck u can't
    even explain to me what i think tbh, and i know u can't.

    It isn't "definist fallacy" to quote the actual definition.

    In fact to try to use that label on the actual definition is the
    definist fallacy.


    i refuse to buy into fallacy gishgallop, and that's a good thing

    Nope, you refuse to face reality, and it is slapping you in the face silly.



    Even Einstein admitted that his theory was likely "wrong", but was
    better than what we currently had, and WOULD be refined in the future.
    Just like classical mechanics were "wrong" in some cases, but close
    enough for most of the work that they were being used for.

    In the same way, yes, perhaps there is a refinement needed to the
    definition of what a "Computation" is, but just like Einstein's
    theory, it doesn't change the results significantly for what we
    currently can see.

    u haven't acknowledged any specific refinement, so u can't say that it
    can or cannot change in terms of results. ur just begging the question
    due to hubris.

    You haven't given a SPECIFIC refinement, just vague claims with no backing.


    Results based on false premises are not valid,

    If you want to change the rules, you need to actually define your new game.

    So far, its just, lets assume things can be different.



    Your issue is you need to find that "improved" definition that still
    works for the common cases that we know about, before you can start to
    work out what it implies.

    STARTING with assumptions of that implication, is like assuming you can
    find a road network to drive from New York to Paris, France.






    and that potential is well codified by the fact the ct-thesis is
    still a thesis and not a law.

    It might just be a thesis, because it IS an unprovable truth.

    lookie u just accepting things as "muh unprovable truths". holy
    fucking hypocritical fucking faggot

    It isn't "just accepting", it is looking at the proofs and
    understanding the logic of them.

    YOU HAVEN'T PROVEN THE CT-THESIS, MY GOD


    imagine if i pulled that argument out on you wildly unfair
    irrational bastard??

    But all you can do is make baseless claims. My statements of
    unprovable truths are based on real proofs, that seem to be beyond
    your ability to understand.

    YOU ALSO HAVEN'T PROVEN THAT THE CT-THESIS IS UNPROVABLE, MY FUCKING GOD >>>


    u make a complete mockery of reason with the disgustingly idiot
    dogshit u post over and over again...

    How is looking at proofs and accepting their results a mockery?

    BECAUSE UR JUST ARBITRARILY OVERGENERALIZING WITHOUT PROOF,

    OH MY FUCKING GOD

    godel's result is a curse on this species even if he wasn't wrong to
    produce it


    It is the rejection of proofs and thinking things must be different
    that is the mockery.


    holy fuck you dude eat a bag of dicks

    It seems you have eaten them all already,

    sure thing dick




    i will not respond to more comments on this because it's a
    boring, lazy, non-argument that is fucking waste of both our time. >>>>>>>










    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 26 17:28:17 2026
    From Newsgroup: comp.theory

    On 1/26/26 2:45 PM, dart200 wrote:
    On 1/26/26 8:43 AM, Richard Damon wrote:
    On 1/26/26 1:50 AM, dart200 wrote:
    On 1/25/26 2:40 PM, Richard Damon wrote:
    On 1/25/26 4:04 PM, dart200 wrote:
    On 1/25/26 10:23 AM, Richard Damon wrote:
    On 1/24/26 9:05 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.


    In other words, you just don't know what you are talking about. >>>>>>>>
    The fact that it is impossible to build a computation that,
    given a representation of another computation and its input,
    determine for all cases if the computation will halt does
    nothing to further the question of are Turing Machines the most >>>>>>>> powerful form of computation.

    contexts-aware machines compute functions:

    (context,input) -> output
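    The signature above can be sketched minimally. Everything here (the
    function names, the "pathological" context tag) is an illustrative
    assumption, not part of any standard model of computation.

```python
# Sketch: an ordinary computation maps input -> output; the thread's
# "context-aware" machine is claimed to map (context, input) -> output.

def plain_step_counter(program: str) -> int:
    # A pure function of its input: same input, same output, always.
    return len(program)

def context_aware(context: str, program: str) -> int:
    # The same input can map to different outputs once a context
    # argument is admitted; determinism now holds only over the pair.
    return len(program) + (1 if context == "pathological" else 0)

print(plain_step_counter("abc"))             # 3
print(context_aware("normal", "abc"))        # 3
print(context_aware("pathological", "abc"))  # 4
```

    The open question the reply raises is whether any problem of interest
    to computation theory actually has this two-argument form.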


    And what problems of interest to computation theory are of that form?

    Computation Theory was to answer questions of logic and mathematics.

    What logic or math is dependent on "context"?

    *mechanically computing* the answer *generally* is dependent on
    context,

    Really?

    Most problems don't care about the context of the person asking it,
    just the context of the thing being looked at.

    well, yes, most problems don't involve pathologically querying a
    decider specifically for the purpose of then contradicting the
    decision...

    Which is a problem that doesn't actually depend on the context of the
    asker, so using the context just makes you wrong.

    yes it does.

    the self-referential set-classification paradox can *only* provably
    happen when a decider is called from within a pathological context
    (the paradoxical input machine), which is why i don't think it
    over-generalizes to disproving our ability to compute the answer in
    non-pathological contexts.

    No, because the machine in question's halting behavior is fully defined,
    since the SPECIFIC machine it was built on had to be defined.

    Thus, the "paradox", like all real paradoxes, is only apparent, as in
    only when we think of the "generalized" template, not the actual machine
    that is the input.

    You have your problem because you think of the machine as being built to
    an API, but it isn't; it is built to a SPECIFIC decider, or it isn't
    actually a computation. As a part of being a computation is having an
    explicit and complete listing of the algorithm used, which can't just
    reference an "API", but needs the implementation of it.

    The "Template" is built to the API, but the input isn't the template,
    but the actual machine, which means the specific decider, and thus there
    is no real paradox, only an incorrect machine, as all the other ones
    have a chance of being correct (if they are correct partial deciders)
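    The argument above concerns the classical diagonal construction, which
    can be sketched in a few lines. `claims_to_halt` is a hypothetical
    stand-in for one SPECIFIC candidate decider; the point is that the
    "pathological" input is built against that concrete decider, not
    against an abstract API.

```python
# Minimal sketch of the diagonal construction under discussion.

def claims_to_halt(prog, arg) -> bool:
    # Hypothetical stand-in decider: predicts "halts" for everything.
    return True

def pathological(prog):
    # Do the opposite of whatever the SPECIFIC decider above predicts
    # about running `prog` on itself.
    if claims_to_halt(prog, prog):
        while True:  # loop forever when a halt is predicted
            pass
    # otherwise halt immediately

# claims_to_halt(pathological, pathological) returns True, yet
# pathological(pathological) would loop forever: this one decider is
# wrong on this one input. A different decider gets a different
# pathological machine, so no single machine refutes all deciders.
print(claims_to_halt(pathological, pathological))
```

    Note the sketch only queries the decider; actually running
    `pathological(pathological)` would, by construction, never return.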


    TMs don't have an ability to discern between contexts, which is why
    current theory accepts that it does...

    And neither do computations as defined. Even in your model, you try to
    call the context part of the input because you know it has to be.


    the point of my work on RTMs is to grant computation an ability to
    discern between contexts so that we can transcend *that* particular limit.

    And the problem is that the problem space doesn't see past that limit.

    If you want to talk about context dependent computations, you need to
    work out how you are going to actually define that, then figure out what
    you can possibly say about them.


    this doesn't remove *all* unknowns, i'm not resolving problems of actual
    complexity or unknowns due to lack of number theory. i'm resolving the
    self-referential set-classification paradox that underlies much of
    uncomputability, and to hopefully put a wrench in this rather odd,
    paradoxical, and quite frankly fallacy-drenched feeling of certainty
    about unknowable unknowns.

    WHICH IS FINE, i don't need total instant perfection to make
    significant progress, my fucking god...

    So, tackle the part that you can, and not the part that even your
    context dependent part doesn't help with,

    After all, the "Halting Problem" asks a question that is NOT dependent on
    the context it is being asked in, as that machine's behavior was defined
    not to so depend on it. Thus a "Context Dependent Computation" can't use
    context to help answer it; at best it might help a partial decider be
    able to answer a bigger slice of the pie.






    and ignoring that is the underlying cause of the halting problem

    Nope.


    clearly novel techniques will be required to resolve long standing
    problems, eh richard???

    Or just lying as you try.

    I guess you think the speed of light is just a suggestion. (Yes,
    there are some thoughts about how to break it, but they require
    things totally outside our current physics).

    Yes, there may be a new definition of "Computations" that is
    actually useful, and generates answers to some things we currently
    think of as uncomputable, but until you can actually figure out what
    that is, it is just science fiction.

    or u'd just call it lying over and over again with no serious
    consideration to what's really being said ...

    Yep, that is a good description of what you are doing.

    You forget to consider the topic you are talking about.

    Either you accept the current definitions, or you actually supply your
    own new ones. Just assuming you can change them without actually doing
    so makes your argument baseless.

    false dichotomy ...

    cause why can't a "new" one just be in fact a rather minor adjustment???

    You can't make a "minor adjustment" to a fixed system.

    That is like saying that 22/7 is close enough to the value of Pi to be
    pi for all uses.
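    The analogy above can be checked numerically; this is just the
    arithmetic, nothing more.

```python
import math

# 22/7 agrees with pi to only about three decimal places, so it fails
# any use that needs more precision than that.
print(22 / 7)                 # 3.142857142857143
print(math.pi)                # 3.141592653589793
print(abs(22 / 7 - math.pi))  # roughly 0.00126
```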






    fuck







    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng,alt.messianic,alt.buddha.short.fat.guy on Mon Jan 26 14:29:09 2026
    From Newsgroup: comp.theory

    On 1/26/26 2:17 PM, Richard Damon wrote:
    On 1/26/26 2:43 PM, dart200 wrote:
    On 1/26/26 8:39 AM, Richard Damon wrote:
    On 1/26/26 12:56 AM, dart200 wrote:
    On 1/25/26 2:36 PM, Richard Damon wrote:
    On 1/25/26 4:05 PM, dart200 wrote:
    On 1/25/26 10:21 AM, Richard Damon wrote:
    On 1/24/26 9:24 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:36 PM, dart200 wrote:
    On 1/24/26 6:44 AM, Richard Damon wrote:
    On 1/20/26 8:55 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:
    On 1/20/26 1:18 AM, dart200 wrote:
    On 1/19/26 9:29 PM, Richard Damon wrote:
    On 1/18/26 11:51 PM, dart200 wrote:
    On 1/18/26 4:28 PM, Richard Damon wrote:
    On 1/18/26 4:50 PM, dart200 wrote:
    On 1/18/26 12:56 PM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>> On 1/18/26 1:15 PM, dart200 wrote:
    On 1/18/26 4:05 AM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>>> On 1/18/26 1:05 AM, dart200 wrote:
    On 1/17/26 7:28 PM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>>>>> On 1/17/26 10:14 PM, dart200 wrote: >>>>>>>>>>>>>>>>>>>>>>>
    Good luck starving to death when your money runs >>>>>>>>>>>>>>>>>>>>>>> out.

    one can only hope for so much sometimes



    I guess you don't understand the rules of logic. >>>>>>>>>>>>>>>>>>>>>>>>
    also not an argument

    Again, YOUR PROBLEM.





    it's pretty crazy i can produce a machine >>>>>>>>>>>>>>>>>>>>>>>>>> (even if u haven't understood it yet) that >>>>>>>>>>>>>>>>>>>>>>>>>> produces a consistent deterministic result >>>>>>>>>>>>>>>>>>>>>>>>>> that is "not a computation". >>>>>>>>>>>>>>>>>>>>>>>>>
    Because you get that result only by >>>>>>>>>>>>>>>>>>>>>>>>> equivocating on your definitions. >>>>>>>>>>>>>>>>>>>>>>>>>
    If the context is part of the inpt to make the >>>>>>>>>>>>>>>>>>>>>>>>> output determistic from the input, then they >>>>>>>>>>>>>>>>>>>>>>>>> fail to be usable as sub- computations as we >>>>>>>>>>>>>>>>>>>>>>>>> can't control that context part of the input. >>>>>>>>>>>>>>>>>>>>>>>>>
    When we look at just the controllable input for >>>>>>>>>>>>>>>>>>>>>>>>> a sub- computation, the output is NOT a >>>>>>>>>>>>>>>>>>>>>>>>> deterministic function of that inut. >>>>>>>>>>>>>>>>>>>>>>>>>

    not sure what the fuck it's doing if it's not >>>>>>>>>>>>>>>>>>>>>>>>>> a computation

    Its using hidden inputs that the caller can't >>>>>>>>>>>>>>>>>>>>>>>>> control.

    which we do all the time in normal programming, >>>>>>>>>>>>>>>>>>>>>>>> something which apparently u think the tHeOrY oF >>>>>>>>>>>>>>>>>>>>>>>> CoMpUtInG fails to encapsulate

    Right, but that isn't about computations. >>>>>>>>>>>>>>>>>>>>>>>

    pretty crazy we do a bunch "non-computating" in >>>>>>>>>>>>>>>>>>>>>>>> the normal act of programming computers >>>>>>>>>>>>>>>>>>>>>>>
    Why?

    As I have said, "Computations" is NOT about how modern computers work.

    I guess you are just showing that you fundamentally don't understand
    the problem field you are betting your life on.

    one would presume the fundamental theory of >>>>>>>>>>>>>>>>>>>>>> computing would be general enough to encapsulate >>>>>>>>>>>>>>>>>>>>>> everything computed by real world computers, no??? >>>>>>>>>>>>>>>>>>>>>
    Why?

    Remember, the fundamental theory of Computing >>>>>>>>>>>>>>>>>>>>> PREDATES the computer as you know it. >>>>>>>>>>>>>>>>>>>>
    so ur saying it's outdated and needs updating in >>>>>>>>>>>>>>>>>>>> regards to new things we do with computers that >>>>>>>>>>>>>>>>>>>> apparently turing machines as a model don't have >>>>>>>>>>>>>>>>>>>> variations of ...

    No, it still handles that which it was developed for. >>>>>>>>>>>>>>>>>>
    well it was developed to be a general theory of >>>>>>>>>>>>>>>>>> computing, and apparently modern computing has >>>>>>>>>>>>>>>>>> transcended that theory ...

    Not really.

    The way modern processors work, "sub-routines" can fail to be
    computations, but whole programs will tend to be. Sub-routines CAN
    be built with care to fall under its guidance.

    lol, what are they even if not "computations"???

    not-computations

    great, a set of deterministic steps that produces a result
    but is somehow not a computation!

    Because it isn't deterministically based on the INPUT, >>>>>>>>>>>>
    no it's just a series of steps to produce some output.

    Nope, not in the formulation of the theory.

    again: YOU HAVE NOT PROVEN THAT TURING MACHINES, OR ANY
    EQUIVALENT THEORY, ENCOMPASS ALL POSSIBLE COMPUTATIONS

    like holy fuck, how many times will i need to repeat that??? >>>>>>>>>>
    it's a ct-THESIS not a ct-LAW

    But I can say that Computations as defined, are all that they can do.

    i will never care about you complaining about the fact the
    computations i'm talking about don't fit within the particular
    box you call a "Computation", because it just doesn't mean anything,

    In other words, you are just saying you don't care about
    computation theory, and thus why are you complaining about what
    it says about computations.

    no i'm saying i don't care about ur particular definition, richard

    do better than trying to "define" me as wrong. meaning: put in the
    work to demonstrate actual contradictions

    In other words, you want me to prove there isn't a teapot in the
    asteroid belt.

    lol, what. asking for a proof of contradiction is now akin to
    russell's teapot???

    You are asking me to disprove something that you won't (and can't)
    define.

    i tried to but ur incredibly uncooperative

    No, because a PROOF starts with things actually defined, and is not
    based on an assumption of something that isn't.

    ALL your proofs have been based on the assumption of something being
    computable that isn't, sometimes a complete enumeration of a class
    or sometimes some operation that isn't computable.

    When I point out what isn't computable, rather than showing how it IS
    computable, you ask me to prove that it isn't.

    THAT is not how a proof goes. YOU need to actually justify all your
    assumptions, and if one is questioned, show that it is correct.

    Sorry, you are just proving you don't understand your task at hand.


    bro u should've just agreed ur being uncooperative,

    and it could've taken less words

    but that would've required at least an iota of being cooperative,

    so here we are

    #god





    are u even doing math here or this just a giant definist fallacy
    shitshow???

    No, you just don't know what that means.



    YOU are the one assuming things can be done, but refuse to actually >>>>> try to define an actual algorithm that does so.

    An actual algorithm being an actual sequence of finite atomic
    steps, and using bounded loops.





    u and the entire field can be wrong about how u specified
    "Computation",

    No, you just don't understand the WHY of computation theory.

    u don't give a why u stupid fucking retarded faggot, and u never
    will because the ct-thesis isn't proven, and u've already gone
    down the moronic hole of "maybe my favorite truth isn't even
    provable!!!??"

    I have mentioned it, but have you bothered to look into it?

    Computation Theory was developed to see if "Computations" of this
    sort could be used to generate proofs of the great problems of
    mathematics and logic.

    It was hoped that it would provide a solution to the then currently
    seemingly intractable problems that seemed to have an answer, but
    they just couldn't be found.

    Instead, it showed that it was a provable fact that some problems
    would not have a solution. And thus we had to accept that we
    couldn't prove everything we might want.


    and that fact was only shown, for computing in regards to itself, by
    using self-referential set-classification paradoxes, like the
    halting problem


    which is the part i'm trying to reconcile, that very specific (but
    quite broad within tm computing) problem...

    But you are only saying that there must be something else (that is,
    Russell's teapot must exist) but can't show it.

    Thus, it is incumbent on YOU to prove or at least define what you are
    claiming to exist.


    i'm not here to spoon feed humanity a general decision algo, cause
    we assuredly do not have enough number theory to build that at this
    time.

    It seems you are not here to do anything constructive, only engage in
    flights of fancy imagining things that are not, but assuming they are.

    debunking a widely accepted misproof is constructive in ways neither
    of us can imagine

    Then try to show where the ERROR in the proof is.

    If there isn't an error, it isn't a "misproof"


    i don't need to make ALL the progress in order to make SOME progress.
    i'm *extremely* tired of people spouting perfectionist fallacies at me

    But to claim you can handle the actual Halting problem, YOU NEED to be perfect.

    wow, after i bring up a perfection fallacy you then in the next sentence
    u double down on it by claiming i NEED to be perfect???

    like holy fuck dude do u have even a semblance of actual self-awareness???

    my dear lord being all that was, is, and ever will be...

    have mercy on us all for the abject stupidity displayed in this here group



    I guess you just are doing your lying definitions again.


    (oooo, add that fallacy to list rick! what number are we at???)





    i'm trying to deal with all the claims of hubris that such a general
    decision algo *cannot* exist, by showing *how* it could exist
    alongside the potential for self-referential set-classification
    paradoxes:

    either by showing that we can just ignore the paradoxes, or by
    utilizing reflective turing machines to decide on them in a context
    aware manner, both are valid resolutions.

    In other words, by ignoring the reality,

    gaslighting again

    Nope, but I think your brain went to sleep from the gas.




    i know u want me to spoon feed you all the answers here, but i'm one
    freaking dude, with very limited time, and training, stuck with
    discussion that is willfully antagonistic and soaked with fallacy
    after fallacy,

    turing spent years coming up with his turing jump nonsense, on a
    brand new fresh theory, and people that likely actually tried to be
    collaborative,

    while i've gotta reconcile a massive almost century old bandwagon, /
    thru argument alone/

    i don't even have the luxury of pointing to an experiment, i've
    gotta come up with a set of purely logical arguments that stand
    entirely on their own right. einstein had it easier

    But, if you listened to people to make sure you were working on solid
    ground, and not flights of fancy, it might be easier, or at least
    become evident that it is a dead end.

    lol, u claim it's a dead end but can't even explain why other than
    repeatedly crying definist fallacy over and over again. heck u can't
    even explain to me what i think tbh, and i know u can't.

    It isn't "definist fallacy" to quote the actual definition.

    In fact to try to use that label on the actual definition is the
    definist fallacy.


    i refuse to buy into fallacy gishgallop, and that's a good thing

    Nope, you refuse to face reality, and it is slapping you in the face silly.



    Even Einstein admitted that his theory was likely "wrong", but was
    better than what we currently had, and WOULD be refined in the
    future. Just like classical mechanics were "wrong" in some cases, but
    close enough for most of the work that they were being used for.

    In the same way, yes, perhaps there is a refinement needed to the
    definition of what a "Computation" is, but just like Einstein's
    theory, it doesn't change the results significantly for what we
    currently can see.

    u haven't acknowledged any specific refinement, so u can't say that it
    can or cannot change in terms of results. ur just begging the question
    due to hubris.

    You haven't given a SPECIFIC refinement, just vague claims with no backing.

    i gave a *very* specific *additional* operation for the machine,
    specified exactly what it does, and gave a demonstration of it in a
    simple case.

    could you even begin to tell me what that was? like what was the name of
    that operation even??? see if u can't even name me what the operation
    was...

    that is a definitive sign of an entirely antagonistic attitude


    Results based on false premises are not valid,

    If you want to change the rules, you need to actually define your new game.

    So far, its just, lets assume things can be different.



    Your issue is you need to find that "improved" definition that still
    works for the common cases that we know about, before you can start
    to work out what it implies.

    STARTING with assumptions of that implication, is like assuming you
    can find a road network to drive from New York to Paris, France.






    and that potential is well codified by the fact the ct-thesis is >>>>>>>> still a thesis and not a law.

    It might just be a thesis, because it IS an unprovable truth.

    lookie u just accepting things as "muh unprovable truths". holy
    fucking hypocritical fucking faggot

    It isn't "just accepting", it is looking at the proofs and
    understanding the logic of them.

    YOU HAVEN'T PROVEN THE CT-THESIS, MY GOD


    imagine if i pulled that argument out on you wildly unfair
    irrational bastard??

    But all you can do is make baseless claims. My statements of
    unprovable truths are based on real proofs, that seem to be beyond
    your ability to understand.

    YOU ALSO HAVEN'T PROVEN THAT THE CT-THESIS IS UNPROVABLE, MY FUCKING
    GOD



    u make a complete mockery of reason with the disgustingly idiot
    dogshit u post over and over again...

    How is looking at proofs and accepting their results a mockery?

    BECAUSE UR JUST ARBITRARILY OVERGENERALIZING WITHOUT PROOF,

    OH MY FUCKING GOD

    godel's result is a curse on this species even if he wasn't wrong to
    produce it


    It is the rejection of proofs and thinking things must be different >>>>> that is the mockery.


    holy fuck you dude eat a bag of dicks

    It seems you have eaten them all already,

    sure thing dick




    i will not respond to more comments on this because it's a
    boring, lazy, non-argument that is fucking waste of both our time. >>>>>>>>










    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory on Mon Jan 26 21:48:18 2026
    From Newsgroup: comp.theory

    On 1/26/26 12:50 PM, Richard Damon wrote:
    On 1/25/26 4:28 PM, dart200 wrote:
    On 1/25/26 1:14 PM, Richard Damon wrote:
    On 1/25/26 4:05 PM, dart200 wrote:
    On 1/25/26 10:20 AM, Richard Damon wrote:
    On 1/24/26 9:10 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:31 PM, dart200 wrote:
    On 1/24/26 2:25 PM, Richard Damon wrote:
    On 1/24/26 3:56 PM, dart200 wrote:
    On 1/24/26 11:52 AM, Richard Damon wrote:
    On 1/24/26 1:33 PM, dart200 wrote:
    On 1/24/26 9:26 AM, Richard Damon wrote:
    On 1/24/26 11:45 AM, dart200 wrote:
    On 1/24/26 4:17 AM, Richard Damon wrote:
    On 1/24/26 3:03 AM, dart200 wrote:
    On 1/23/26 5:36 PM, Richard Damon wrote:
    On 1/20/26 9:30 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:

    The term *IS* defined, and to change it means you lie. >>>>>>>>>>>>>>>>>>

    doubling down on definist fallacy ehh???

    I guess you don't understand the definist fallacy, as
    using the ACTUAL definition isn't a fallacy.
    nah ur just pushing a definition that u've happened to
    latch onto because it's convenient for u
    classic definist fallacy


    Nope, you are just stuck in a definist fallacy about what >>>>>>>>>>>>>>> a definist fallacy is.

    It seems you don't understand the concept that some >>>>>>>>>>>>>>> things ARE just defined a given way to be in a given >>>>>>>>>>>>>>> context.


    and u richard are not the god of what that is


    But "the field" is, and thus you are just saying it is ok to change
    the meaning of words.

    i don't believe u represent what "the field" is either

    Then go to "the field" and see if they disagree.

    "the field" can come here if they like,

    but as it stands i'm going to call out any more "nOT mUh
    CoMpUTaTiOn" arguments as definist fallacy


    In other words, you are just admitting you don't care what the words
    mean in the field; you will just continue to be a stupid and ignorant
    liar about what you are doing.

    i just don't care what YOU, richard, say "CoMpUTaTiOn" means. you
    aren't "the field" bro, and i just really dgaf about ur endless
    definist fallacy


    But apparently you do, as you aren't just going to present your ideas
    directly to "the field" in a peer-reviewed journal, so something is
    telling you that you have something to fix.

    or rather the peer-review is so gatekept i don't even get a review
    back for my submission, just rejection without review.

    the system is broken such that i will take my stance elsewhere.

    everyone else can ignore me at all our peril...


    Which just shows that you aren't in step with what theory actually
    is talking about.

    That is your problem, you assume the world is wrong, and more than
    likely it is you that is wrong.

    i'm not one continually asserting a bunch of impossible to find
    teapots floating around in machine space

    No, you just keep asserting that you compute impossible to compute
    results.

    while u just keep ignoring how i'm avoiding the pitfalls u use to
    claim impossibility


    No, you use an assumption that requires something proved impossible that
    you want to claim is possible because it might be.

    u haven't proven my proposed interfaces impossible because u haven't
    generated a contradiction with them


    Sorry, you need to actually SHOW how to do what you want to claim with actual realizable steps.

    And that means you need a COMPUTABLE method to generate your enumerations that you iterate through that is complete.

    i don't need to do that to undercut the proof



    And, the possibility of unknowable things hiding in machine space
    isn't as crazy as it might seem, as there are an infinite number of
    machines for them to hide within.

    i just love how godel convinced u to believe russell's teapot certainly
    exists

    He didn't. But Russell shows that claims we need to prove it doesn't are invalid.

    yes i don't need to prove ur ghosts don't exist. they don't because u
    can't even know about them


    I have shown you the proof that unknowable things must exist. You claim
    they can't, but your only reasoning is based on there being something
    new that we don't know about that you can't actually prove.

    Which of those is a claim of the existence of a Russell's Teapot?

    The thing with a proof, or the things just assumed?





    If you want to break down a "broken" structure, you need to know
    enough about it to SHOW it is broken.

    Just assuming it is just shows that it is most likely YOU that is
    wrong.

    It is more that the system ignores that which tries to break it,
    because getting side tracked on false trails is too damaging.

    To me it seems more of a peril to accept your misguided ideas.

    The fact that you begin by trying to redefine things out of
    ignorance doesn't help your case.






    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Tue Jan 27 00:00:20 2026
    From Newsgroup: comp.theory

    On 1/26/26 2:28 PM, Richard Damon wrote:
    On 1/26/26 2:45 PM, dart200 wrote:
    On 1/26/26 8:43 AM, Richard Damon wrote:
    On 1/26/26 1:50 AM, dart200 wrote:
    On 1/25/26 2:40 PM, Richard Damon wrote:
    On 1/25/26 4:04 PM, dart200 wrote:
    On 1/25/26 10:23 AM, Richard Damon wrote:
    On 1/24/26 9:05 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.


    In other words, you just don't know what you are talking about.

    The fact that it is impossible to build a computation that, given a
    representation of another computation and its input, determines for
    all cases if the computation will halt does nothing to further the
    question of whether Turing Machines are the most powerful form of
    computation.

    contexts-aware machines compute functions:

    (context,input) -> output
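
a minimal sketch of that `(context, input) -> output` shape in python, just to pin down what is being claimed (all names here are hypothetical illustrations, not anything from the thread):

```python
# Hypothetical sketch of a "context-aware" decider signature: the same
# input may map to different outputs depending on the calling context.
from typing import Callable

Context = str   # e.g. "top-level" vs "inside-pathological-caller"
ContextAwareDecider = Callable[[Context, str], bool]

def toy_decider(context: Context, program: str) -> bool:
    # purely illustrative: the answer is allowed to vary with context
    if context == "inside-pathological-caller":
        return False
    return True

# one input, two contexts, two different outputs
assert toy_decider("top-level", "M") != toy_decider("inside-pathological-caller", "M")
```

whether such a function still counts as "a computation" in the classical sense is exactly what the rest of the thread disputes.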


    And what problems of interest to computation theory are of that form?

    Computation Theory was developed to answer questions of logic and
    mathematics.

    What logic or math is dependent on "context"?

    *mechanically computing* the answer *generally* is dependent on
    context,

    Really?

    Most problems don't care about the context of the person asking it,
    just the context of the thing being looked at.

    well, yes, most problems don't involve pathologically querying a
    decider specifically for the purpose of then contradicting the
    decision...

    Which is a problem that doesn't actually depend on the context of the
    asker, so using the context just makes you wrong.

    yes it does.

    the self-referential set-classification paradox can *only* provably
    happen when a decider is called from within a pathological context
    (the paradoxical input machine), which is why i don't think it
    over-generalizes to disproving our ability to compute the answer in
    non-pathological contexts.

    No, because the machine in question's halting behavior is fully defined, since the SPECIFIC machine it was built on had to be defined.

    Thus, the "paradox", like all real paradoxes, is only apparent, arising
    only when we think of the "generalized" template, not the actual machine
    that is the input.

    You have your problem because you think of the machine as being built to
    an API, but it isn't; it is built to a SPECIFIC decider, or it isn't
    actually a computation. Part of being a computation is having an
    explicit and complete listing of the algorithm used, which can't just
    reference an "API", but needs the implementation of it.

    The "Template" is built to the API, but the input isn't the template,
    but the actual machine, which means the specific decider, and thus there
    is no real paradox, only an incorrect machine, as all the other ones
    have a chance of being correct (if they are correct partial deciders)
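
the specific-decider point above is the standard diagonal construction; a minimal python sketch (stand-in names, not real code from either poster) of why any one fixed decider gets its own pathological input wrong:

```python
# The "pathological" program is built against ONE specific decider, not
# an abstract API. Whatever that fixed decider answers about the pair
# (pathological, pathological), the program does the opposite.

def halts(prog, arg) -> bool:
    # stand-in for some specific, fixed decider implementation
    return True

def pathological(prog):
    if halts(prog, prog):   # decider says "halts" -> loop forever
        while True:
            pass
    return                  # decider says "loops" -> halt at once

# This halts() claims pathological(pathological) halts, yet actually
# running it would loop forever: THIS decider is refuted. A different
# decider yields a different pathological program built against it.
assert halts(pathological, pathological) is True
```

note the refutation only targets the one concrete `halts`; the template by itself decides nothing, which is the distinction being argued over.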

    this actually just supports my point that paradoxes only happen when a
    decider is called within a pathological context



    TMs don't have an ability to discern between contexts, which is why
    current theory accepts the limits that it does...

    And neither do computations as defined.

    idk where ur getting this definition u keep bringing up or who defined it

    Even in your model, you try to
    call the context part of the input because you know it has to be.


    the point of my work on RTMs is to grant computation an ability to
    discern between contexts so that we can transcend *that* particular
    limit.

    And the problem is that the problem space doesn't see past that limit.

    If you want to talk about context dependent computations, you need to
    work out how you are going to actually define that, then figure out what
    you can possibly say about them.

    i already did, multiple times, u just refuse acknowledge what i wrote



    this doesn't remove *all* unknowns, i'm not resolving problems of
    actual complexity or unknowns due to lack of number theory. i'm
    resolving the self-referential set-classification paradox that
    underlies much of uncomputability, and to hopefully put a wrench in
    this rather odd, paradoxical, and quite frankly fallacy drenched
    feelings of certainty about unknowable unknowns.

    WHICH IS FINE, i don't need total instant perfection to make
    significant progress, my fucking god...

    So, tackle the part that you can, and not the part that even your
    context dependent part doesn't help with,

    After all, the "Halting Problem" asks a question that is NOT dependent on

    *mechanically computing* the answer *generally* however is. the ability
    itself to compute the answer is context-dependent.

    the context it is being asked in, as that machine's behavior was defined
    not to so depend on it. Thus a "Context Dependent Computation" can't use
    context to help answer it; at best it might help a partial decider be
    able to answer a bigger slice of the pie.






    and ignoring that is the underlying cause of the halting problem

    Nope.


    clearly novel techniques will be required to resolve long standing
    problems, eh richard???

    Or just lying as you try.

    I guess you think the speed of light is just a suggestion. (Yes,
    there are some thoughts about how to break it, but they require
    things totally outside our current physics).

    Yes, there may be a new definition of "Computations" that is
    actually useful, and generates answers to some things we currently
    think of as uncomputable, but until you can actually figure out what
    that is, assume it is just science fiction.

    or u'd just call it lying over and over again with no serious
    consideration to what's really being said ...

    Yep, that is a good description of what you are doing.

    You forget to consider the topic you are talking about.

    Either you accept the current definitions, or you actually supply
    your own new ones. Just assuming you can change them without actually
    doing so makes your argument baseless.

    false dichotomy ...

    cause why can't a "new" one just be in fact a rather minor adjustment???

    You can't make a "minor adjustment" to a fixed system.

    lots of people made adjustments to turing machines u absolute beyond
    dogshit moron

    why can't i??? because i'm not special enough, and everyone else was???

    now just special pleading.

    add it to the list, my god u have committed more named fallacies than
    anyone i have ever talked to

    what am i doing here?


    That is like saying that 22/7 is close enough to the value of Pi to be
    pi for all uses.
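
for what it's worth, the 22/7 analogy checks out numerically (plain arithmetic, nothing from the thread):

```python
from math import pi

approx = 22 / 7  # 3.142857...
# close enough for a rough sketch, but already wrong in the third
# decimal place: "a minor adjustment" away from pi is still not pi
assert round(approx, 2) == round(pi, 2)
assert round(approx, 3) != round(pi, 3)
```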






    fuck







    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Dude@punditster@gmail.com to comp.theory,comp.ai.philosophy,comp.software-eng,alt.messianic,alt.buddha.short.fat.guy on Tue Jan 27 13:31:21 2026
    From Newsgroup: comp.theory

    On 1/26/2026 2:29 PM, dart200 wrote:
    On 1/26/26 2:17 PM, Richard Damon wrote:
    On 1/26/26 2:43 PM, dart200 wrote:
    On 1/26/26 8:39 AM, Richard Damon wrote:
    On 1/26/26 12:56 AM, dart200 wrote:

    You haven't given a SPECIFIC refinement, just vague claims with no
    backing.

    i gave a *very* specific *additional* operation for the machine,
    specified exactly what it does, and gave a demonstration of it in a
    simple case.

    So, I'm not sure you've thought this through. It may not be that simple
    to open the door, Nick. There might be a ghost in the machine.

    "I'm sorry, Dave. I can't do that." - HAL

    could you even begin to tell me what that was? like what was the name of that operation even??? see if u can't even name me what the operation
    was...

    Let's be clear: You still haven't explained why that dude rode his horse
    all the way through a desert without giving the old mare a name?

    that is a definitive sign of an entirely antagonistic attitude

    Let's not get too personal, Nick!


    holy fuck you dude eat a bag of dicks

    It seems you have eaten them all already,

    sure thing dick

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory,comp.ai.philosophy,comp.software-eng on Tue Jan 27 14:07:41 2026
    From Newsgroup: comp.theory

    On 1/25/2026 2:36 PM, Richard Damon wrote:
    [...]

    An actual algorithm being an actual sequence of finite atomic steps, and using bounded loops.

    Why must an algorithm use bounded loops? It can run and run...
    generating results along the way...
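
the "run and run... generating results along the way" process matches the standard notion of an enumerator (a semi-decision-style process): the loop is deliberately unbounded, yet every individual result arrives after finitely many steps. a small sketch:

```python
from itertools import islice

def primes():
    # unbounded loop by design: the stream never ends, but every
    # individual prime is emitted after finitely many steps
    n = 2
    while True:
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

# taking a finite prefix of the infinite stream terminates
assert list(islice(primes(), 5)) == [2, 3, 5, 7, 11]
```

so "algorithm" in the strict bounded sense and "useful unbounded process" are different notions, which seems to be the point of Chris's question.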

    [...]
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory on Tue Jan 27 18:31:41 2026
    From Newsgroup: comp.theory

    On 1/27/26 1:59 PM, Chris M. Thomasson wrote:
    On 1/26/2026 9:48 PM, dart200 wrote:
    [...]
    u haven't proven my proposed interfaces impossible because u haven't
    generated a contradiction with them

    You need to code up the programs behind those interfaces, no?

    [...]

    is that how contradictions are demonstrated??? coding it up???
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng,alt.messianic,alt.buddha.short.fat.guy on Wed Jan 28 01:12:27 2026
    From Newsgroup: comp.theory

    On 1/27/26 1:31 PM, Dude wrote:
    On 1/26/2026 2:29 PM, dart200 wrote:
    On 1/26/26 2:17 PM, Richard Damon wrote:
    On 1/26/26 2:43 PM, dart200 wrote:
    On 1/26/26 8:39 AM, Richard Damon wrote:
    On 1/26/26 12:56 AM, dart200 wrote:

    You haven't given a SPECIFIC refinement, just vague claims with no
    backing.

    i gave a *very* specific *additional* operation for the machine,
    specified exactly what it does, and gave a demonstration of it in a
    simple case.

    So, I'm not sure you've thought this through. It may not be that simple
    to open the door, Nick. There might be a ghost in the machine.

    "I'm sorry, Dave. I can't do that." - HAL

    could you even begin to tell me what that was? like what was the name
    of that operation even??? see if u can't even name me what the
    operation was...

    Let's be clear: You still haven't explained why that dude rode his horse
    all the way through a desert without giving the old mare a name?

    that is a definitive sign of an entirely antagonistic attitude

    Let's not get too personal, Nick!

    tbh, i'm fairly personally offended at the lack of cooperation dude


    holy fuck you dude eat a bag of dicks

    It seems you have eaten them all already,

    sure thing dick

    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory on Wed Jan 28 10:33:25 2026
    From Newsgroup: comp.theory

    On 1/28/26 4:34 AM, Richard Damon wrote:
    On 1/27/26 12:48 AM, dart200 wrote:
    On 1/26/26 12:50 PM, Richard Damon wrote:
    On 1/25/26 4:28 PM, dart200 wrote:
    On 1/25/26 1:14 PM, Richard Damon wrote:
    On 1/25/26 4:05 PM, dart200 wrote:
    On 1/25/26 10:20 AM, Richard Damon wrote:
    On 1/24/26 9:10 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:31 PM, dart200 wrote:
    On 1/24/26 2:25 PM, Richard Damon wrote:
    On 1/24/26 3:56 PM, dart200 wrote:
    On 1/24/26 11:52 AM, Richard Damon wrote:
    On 1/24/26 1:33 PM, dart200 wrote:
    On 1/24/26 9:26 AM, Richard Damon wrote:
    On 1/24/26 11:45 AM, dart200 wrote:
    On 1/24/26 4:17 AM, Richard Damon wrote:
    On 1/24/26 3:03 AM, dart200 wrote:
    On 1/23/26 5:36 PM, Richard Damon wrote:
    On 1/20/26 9:30 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:

    The term *IS* defined, and to change it means you lie.

    doubling down on definist fallacy ehh???

    I guess you don't understand the definist fallacy, as using the ACTUAL
    definition isn't a fallacy.

    nah ur just pushing a definition that u've happened to latch onto
    because it's convenient for u

    classic definist fallacy


    Nope, you are just stuck in a definist fallacy about what a definist
    fallacy is.

    It seems you don't understand the concept that some things ARE just
    defined a given way to be in a given context.


    and u richard are not the god of what that is


    But "the field" is, and thus you are just saying it is ok to change
    the meaning of words.

    i don't believe u represent what "the field" is either

    Then go to "the field" and see if they disagree.

    "the field" can come here if they like,

    but as it stands i'm going to call out any more "nOT mUh CoMpUTaTiOn"
    arguments as definist fallacy


    In other words, you are just admitting you don't care what the words
    mean in the field; you will just continue to be a stupid and ignorant
    liar about what you are doing.

    i just don't care what YOU, richard, say "CoMpUTaTiOn" means. you
    aren't "the field" bro, and i just really dgaf about ur endless
    definist fallacy


    But apparently you do, as you aren't just going to present your ideas
    directly to "the field" in a peer-reviewed journal, so something is
    telling you that you have something to fix.

    or rather the peer-review is so gatekept i don't even get a review
    back for my submission, just rejection without review.

    the system is broken such that i will take my stance elsewhere.
    everyone else can ignore me at all our peril...


    Which just shows that you aren't in step with what theory
    actually is talking about.

    That is your problem, you assume the world is wrong, and more
    than likely it is you that is wrong.

    i'm not one continually asserting a bunch of impossible to find
    teapots floating around in machine space

    No, you just keep asserting that you compute impossible to compute
    results.

    while u just keep ignoring how i'm avoiding the pitfalls u use to
    claim impossibility


    No, you use an assumption that requires something proved impossible
    that you want to claim is possible because it might be.

    u haven't proven my proposed interfaces impossible because u haven't
    generated a contradiction with them

    But you haven't proven them possible either.

    I guess you don't understand Russell's teapot.



    Sorry, you need to actually SHOW how to do what you want to claim
    with actual realizable steps.

    And that means you need a COMPUTABLE method to generate your
    enumerations that you iterate through that is complete.

    i don't need to do that to undercut the proof

    Sure you do.

    Since your proof assumes a non-existent thing exists, it isn't a proof
    or an undercut.

    i'm showing it *can* exist with the possibility of self-referential set-classification paradoxes...

    and u've lost ur proof it can't exist due to self-referential set-classification paradoxes, which is a major pillar of undecidability arguments.

    i don't need to show that it *does* exist, i just need to show it *can*
    exist to make progress here


    Whatever the specific implementation of the interface returns, it will be wrong, by the specific implementation of the "pathological" program.

    That program has a definite result, so there *IS* a correct answer that
    the inteface SHOULD have returned, but didn't.

    i have two proposals now which are you trying to critique? cause one of
    them doesn't involve any incorrect answers.


    Thus "Pathological" is NOT a correct response, as EVERY machine that we
    can make will either Halt or Not Halt. ITS behavior is definite.

    Your problem is you confuse the individual definite machines with the
    templates that generate them. But we aren't asking about the templates,
    only individual machines, as templates don't necessarily have a uniform
    answer. (Some halt, some don't, depending on which error in implementing
    the proposed interface was made.) All we prove by that is that the
    interface is, in fact, unimplementable for FULL deciding.

    Since that IS the Halting Problem, it makes the proof.

    When you relax to just partial deciders, it is a well-known solvable
    problem, where work just continues to improve what classes of inputs can
    be decided on, which is a quantitative problem, not a qualitative one.
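
a partial decider in that sense can be sketched as a three-valued function; the patterns below are toy assumptions for illustration, not a real analysis:

```python
# Toy partial halt decider over python-like source strings: answers
# True/False only when a trivial syntactic pattern settles the question,
# and None ("unknown") otherwise -- it never guesses.

def partial_halts(src: str):
    if "while True" in src:
        return False   # provably loops (for this toy pattern)
    if "while" not in src and "for" not in src:
        return True    # straight-line code must halt
    return None        # outside the decided class

assert partial_halts("x = 1 + 2") is True
assert partial_halts("while True: pass") is False
assert partial_halts("for i in data: f(i)") is None
```

improving a partial decider means shrinking the `None` slice, which is the quantitative progress described above.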




    And, the possibility of unknowable things hiding in machine space
    isn't as crazy as it might seem, as there are an infinite number of
    machines for them to hide within.

    i just love how godel convinced u to believe russell's teapot
    certainly exists

    He didn't. But Russell shows that claims we need to prove it doesn't
    are invalid.

    yes i don't need to prove ur ghosts don't exist. they don't because u
    can't even know about them

    Sure we know a bit about them, like they exist.

    bare assertion


    Your problem is you don't understand what you are talking about and
    assume you can make unfounded assumptions.

    gaslighting




    I have shown you the proof that unknowable things must exist. You
    claim they can't, but your only reasoning is based on there being
    something new that we don't know about that you can't actually prove.

    Which of those is a claim of the existence of a Russell's Teapot?

    The thing with a proof, or the things just assumed?





    If you want to break down a "broken" structure, you need to know
    enough about it to SHOW it is broken.

    Just assuming it is just shows that it is most likely YOU that is
    wrong.

    It is more that the system ignores that which tries to break it,
    because getting side tracked on false trails is too damaging.

    To me it seems more of a peril to accept your misguided ideas.

    The fact that you begin by trying to redefine things out of
    ignorance doesn't help your case.









    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory on Wed Jan 28 12:43:14 2026
    From Newsgroup: comp.theory

    On 1/25/2026 1:11 PM, dart200 wrote:
    On 1/25/26 1:10 PM, Chris M. Thomasson wrote:
    On 1/25/2026 1:09 PM, dart200 wrote:
    On 1/25/26 1:04 PM, Chris M. Thomasson wrote:
    On 1/24/2026 2:20 PM, dart200 wrote:
    On 1/24/26 1:42 PM, Chris M. Thomasson wrote:
    On 1/24/2026 9:26 AM, Richard Damon wrote:
    On 1/24/26 11:45 AM, dart200 wrote:
    On 1/24/26 4:17 AM, Richard Damon wrote:
    On 1/24/26 3:03 AM, dart200 wrote:
    On 1/23/26 5:36 PM, Richard Damon wrote:
    On 1/20/26 9:30 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:

    The term *IS* defined, and to change it means you lie.

    doubling down on definist fallacy ehh???

    I guess you don't understand the definist fallacy, as using the ACTUAL
    definition isn't a fallacy.

    nah ur just pushing a definition that u've happened to latch onto
    because it's convenient for u

    classic definist fallacy


    Nope, you are just stuck in a definist fallacy about what a definist
    fallacy is.

    It seems you don't understand the concept that some things ARE just
    defined a given way to be in a given context.


    and u richard are not the god of what that is


    But "the field" is, and thus you are just saying it is ok to change
    the meaning of words.

    Arguing with them is pointless. Almost akin to this moron:

    https://youtu.be/hymaQWjBOqM

    yeah fuck u too bro!


    Spoken like a true genius.

    trolls deserve nothing more than insults


    Are you a troll?

    no


    Well, show us how to implement the logic behind your interfaces...
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory on Wed Jan 28 13:16:27 2026
    From Newsgroup: comp.theory

    On 1/28/26 12:43 PM, Chris M. Thomasson wrote:
    On 1/25/2026 1:11 PM, dart200 wrote:
    On 1/25/26 1:10 PM, Chris M. Thomasson wrote:
    On 1/25/2026 1:09 PM, dart200 wrote:
    On 1/25/26 1:04 PM, Chris M. Thomasson wrote:
    On 1/24/2026 2:20 PM, dart200 wrote:
    On 1/24/26 1:42 PM, Chris M. Thomasson wrote:
    On 1/24/2026 9:26 AM, Richard Damon wrote:
    On 1/24/26 11:45 AM, dart200 wrote:
    On 1/24/26 4:17 AM, Richard Damon wrote:
    On 1/24/26 3:03 AM, dart200 wrote:
    On 1/23/26 5:36 PM, Richard Damon wrote:
    On 1/20/26 9:30 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:

    The term *IS* defined, and to change it means you lie.

    doubling down on definist fallacy ehh???

    I guess you don't understand the definist fallacy, as using the ACTUAL
    definition isn't a fallacy.

    nah ur just pushing a definition that u've happened to latch onto
    because it's convenient for u

    classic definist fallacy


    Nope, you are just stuck in a definist fallacy about what a definist
    fallacy is.

    It seems you don't understand the concept that some things ARE just
    defined a given way to be in a given context.


    and u richard are not the god of what that is


    But "the field" is, and thus you are just saying it is ok to change
    the meaning of words.

    Arguing with them is pointless. Almost akin to this moron:

    https://youtu.be/hymaQWjBOqM

    yeah fuck u too bro!


    Spoken like a true genius.

    trolls deserve nothing more than insults


    Are you a troll?

    no


    Well, show us how to implement the logic behind your interfaces...

    not until i'm granted the funding to take on a project like that

    like i've told u *specifically* many times before:

    proving an algo *could* exist is orders of magnitude less complicated
    than actually constructing said algo.

    ur just committing the perfectionist fallacy because u've been spoon fed
    too much tv reality where some protagonist is always able to solve
    arbitrarily complex problems. real progress ain't like that bro
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory on Wed Jan 28 13:26:25 2026
    From Newsgroup: comp.theory

    On 1/28/2026 1:16 PM, dart200 wrote:
    On 1/28/26 12:43 PM, Chris M. Thomasson wrote:
    On 1/25/2026 1:11 PM, dart200 wrote:
    On 1/25/26 1:10 PM, Chris M. Thomasson wrote:
    On 1/25/2026 1:09 PM, dart200 wrote:
    On 1/25/26 1:04 PM, Chris M. Thomasson wrote:
    On 1/24/2026 2:20 PM, dart200 wrote:
    On 1/24/26 1:42 PM, Chris M. Thomasson wrote:
    On 1/24/2026 9:26 AM, Richard Damon wrote:
    On 1/24/26 11:45 AM, dart200 wrote:
    On 1/24/26 4:17 AM, Richard Damon wrote:
    On 1/24/26 3:03 AM, dart200 wrote:
    On 1/23/26 5:36 PM, Richard Damon wrote:
    On 1/20/26 9:30 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:

    The term *IS* defined, and to change it means you lie.

    doubling down on definist fallacy ehh???

    I guess you don't understand the definist fallacy, as using the ACTUAL
    definition isn't a fallacy.

    nah ur just pushing a definition that u've happened to latch onto
    because it's convenient for u

    classic definist fallacy


    Nope, you are just stuck in a definist fallacy about what a definist
    fallacy is.

    It seems you don't understand the concept that some things ARE just
    defined a given way to be in a given context.


    and u richard are not the god of what that is


    But "the field" is, and thus you are just saying it is ok to change
    the meaning of words.

    Arguing with them is pointless. Almost akin to this moron:

    https://youtu.be/hymaQWjBOqM

    yeah fuck u too bro!


    Spoken like a true genius.

    trolls deserve nothing more than insults


    Are you a troll?

    no


    Well, show us how to implement the logic behind your interfaces...

    not until i'm granted the funding to take on a project like that

    Apply for a grant?



    like i've told u *specifically* many times before:

    proving an algo *could* exist is orders of magnitude less complicated
    than actually constructing said algo.

    ur just committing the perfectionist fallacy because u've been spoon fed
    too much tv reality where some protagonist is always able to solve arbitrarily complex problems. real progress ain't like that bro


    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Dude@punditster@gmail.com to comp.theory,comp.ai.philosophy,comp.software-eng,alt.messianic,alt.buddha.short.fat.guy on Wed Jan 28 13:29:42 2026
    From Newsgroup: comp.theory

    On 1/28/2026 1:12 AM, dart200 wrote:
    On 1/27/26 1:31 PM, Dude wrote:
    On 1/26/2026 2:29 PM, dart200 wrote:
    On 1/26/26 2:17 PM, Richard Damon wrote:
    On 1/26/26 2:43 PM, dart200 wrote:
    On 1/26/26 8:39 AM, Richard Damon wrote:
    On 1/26/26 12:56 AM, dart200 wrote:

    You haven't given a SPECIFIC refinement, just vague claims with no
    backing.

    i gave a *very* specific *additional* operation for the machine,
    specified exactly what it does, and gave a demonstration of it in a
    simple case.

    So, I'm not sure you've thought this through. It may not be that
    simple to open the door, Nick. There might be a ghost in the machine.

    "I'm sorry, Dave. I can't do that." - HAL

    could you even begin to tell me what that was? like what was the name
    of that operation even??? see if u can't even name me what the
    operation was...

    Let's be clear: You still haven't explained why that dude rode his
    horse all the way through a desert without giving the old mare a name?

    that is a definitive sign of an entirely antagonistic attitude

    Let's not get too personal, Nick!

    tbh, i'm fairly personally offended at the lack of cooperation dude

    What I'm personally offended about is all the electricity you're using
    every day to send texts to total strangers. You cooking with gas?

    Let me remind you again, that incense you see at your local convenience
    store is not real herbal incense. It may look like Indian Incense and
    the label may even say Indian Incense, but they are probably just punk
    sticks and glue.

    Don't be deceived!


    holy fuck you dude eat a bag of dicks

    It seems you have eaten them all already,

    sure thing dick



    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng,alt.messianic,alt.buddha.short.fat.guy on Wed Jan 28 13:37:49 2026
    From Newsgroup: comp.theory

    On 1/28/26 1:29 PM, Dude wrote:
    On 1/28/2026 1:12 AM, dart200 wrote:
    On 1/27/26 1:31 PM, Dude wrote:
    On 1/26/2026 2:29 PM, dart200 wrote:
    On 1/26/26 2:17 PM, Richard Damon wrote:
    On 1/26/26 2:43 PM, dart200 wrote:
    On 1/26/26 8:39 AM, Richard Damon wrote:
    On 1/26/26 12:56 AM, dart200 wrote:

    You haven't given a SPECIFIC refinement, just vague claims with no
    backing.

    i gave a *very* specific *additional* operation for the machine,
    specified exactly what it does, and gave a demonstration of it in a
    simple case.

    So, I'm not sure you've thought this through. It may not be that
    simple to open the door, Nick. There might be a ghost in the machine.

    "I'm sorry, Dave. I can't do that." - HAL

    could you even begin to tell me what that was? like what was the
    name of that operation even??? see if u can't even name me what the
    operation was...

    Let's be clear: You still haven't explained why that dude rode his
    horse all the way through a desert without giving the old mare a name?

    that is a definitive sign of an entirely antagonistic attitude

    Let's not get too personal, Nick!

    tbh, i'm fairly personally offended at the lack of cooperation dude

    What I'm personally offended about is all the electricity you're using

    video, ai, and porn vastly outclass text messaging dude

    every day to send texts to total strangers. You cooking with gas?

    lol, yes but gas is less efficient dude, not more

    i'd prefer thermally controlled induction like the breville control freak


    Let me remind you again, that incense you see at your local convenience store is not real herbal incense. It may look like Indian Incense and
    the label may even say Indian Incense, but they are probably just punk sticks and glue.

    Don't be deceived!

    i don't. i only use grass greenhouse grown in santa barbara with zero
    sprays. lady bugs are instead used for pest control

    shout out to autumn brands!



    holy fuck you dude eat a bag of dicks

    It seems you have eaten them all already,

    sure thing dick rLiN+A



    --
    hi, i'm nick! let's end war EfOa

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory on Wed Jan 28 13:44:01 2026
    From Newsgroup: comp.theory

    On 1/28/26 1:26 PM, Chris M. Thomasson wrote:
    On 1/28/2026 1:16 PM, dart200 wrote:
    On 1/28/26 12:43 PM, Chris M. Thomasson wrote:
    On 1/25/2026 1:11 PM, dart200 wrote:
    On 1/25/26 1:10 PM, Chris M. Thomasson wrote:
    On 1/25/2026 1:09 PM, dart200 wrote:
    On 1/25/26 1:04 PM, Chris M. Thomasson wrote:
    On 1/24/2026 2:20 PM, dart200 wrote:
    On 1/24/26 1:42 PM, Chris M. Thomasson wrote:
    On 1/24/2026 9:26 AM, Richard Damon wrote:
    On 1/24/26 11:45 AM, dart200 wrote:
    On 1/24/26 4:17 AM, Richard Damon wrote:
    On 1/24/26 3:03 AM, dart200 wrote:
    On 1/23/26 5:36 PM, Richard Damon wrote:
    On 1/20/26 9:30 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:

    The term *IS* defined, and to change it means you lie.

    doubling down on definist fallacy ehh???

    I guess you don't understand the definist fallacy, as using the ACTUAL definition isn't a fallacy.

    nah ur just pushing a definition that u've happened to latch onto because it's convenient for u

    classic definist fallacy


    Nope, you are just stuck in a definist fallacy about what a definist fallacy is.

    It seems you don't understand the concept that some things ARE just defined a given way to be in a given context.

    and u richard are not the god what that is


    But "the field" is, and thus you are just saying it is ok to >>>>>>>>>> change the meaning of words.

    Arguing with them is pointless. Almost akin to this moron:

    https://youtu.be/hymaQWjBOqM

    yeah fuck u too bro!


    Spoken like a true genius.

    trolls deserve nothing more than insults


    Are you a troll?

    no


    Well, show us how to implement the logic behind your interfaces...

    not until i'm granted the funding to take on a project like that

    Apply for a grant?


    dunno anything about that tbh



    like i've told u *specifically* many times before:

    proving an algo *could* exist is orders of magnitude less complicated
    than actually constructing said algo.

    ur just committing the perfectionist fallacy because u've been spoon
    fed too much tv reality where some protagonist is always able to solve
    arbitrarily complex problems. real progress ain't like that bro


    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@news.x.richarddamon@xoxy.net to comp.theory on Sun Feb 1 07:33:29 2026
    From Newsgroup: comp.theory

    On 1/28/26 1:33 PM, dart200 wrote:
    On 1/28/26 4:34 AM, Richard Damon wrote:
    On 1/27/26 12:48 AM, dart200 wrote:
    On 1/26/26 12:50 PM, Richard Damon wrote:
    On 1/25/26 4:28 PM, dart200 wrote:
    On 1/25/26 1:14 PM, Richard Damon wrote:
    On 1/25/26 4:05 PM, dart200 wrote:
    On 1/25/26 10:20 AM, Richard Damon wrote:
    On 1/24/26 9:10 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:31 PM, dart200 wrote:
    On 1/24/26 2:25 PM, Richard Damon wrote:
    On 1/24/26 3:56 PM, dart200 wrote:
    On 1/24/26 11:52 AM, Richard Damon wrote:
    On 1/24/26 1:33 PM, dart200 wrote:
    On 1/24/26 9:26 AM, Richard Damon wrote:
    On 1/24/26 11:45 AM, dart200 wrote:
    On 1/24/26 4:17 AM, Richard Damon wrote:
    On 1/24/26 3:03 AM, dart200 wrote:
    On 1/23/26 5:36 PM, Richard Damon wrote:
    On 1/20/26 9:30 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:
    The term *IS* defined, and to change it means you lie.


    doubling down on definist fallacy ehh???
    I guess you don't understand the definist fallacy, as using the ACTUAL definition isn't a fallacy.
    nah ur just pushing a definition that u've happened to latch onto because it's convenient for u
    classic definist fallacy


    Nope, you are just stuck in a definist fallacy about what a definist fallacy is.

    It seems you don't understand the concept that some things ARE just defined a given way to be in a given context.


    and u richard are not the god what that is


    But "the field" is, and thus you are just saying it is >>>>>>>>>>>>>>>> ok to change the meaning of words.

    i don't believe u represent what "the field" is either

    Then go to "the field" and see if they disagree.

    "the field" can come here if they like,

    but as it stands i'm going to call out any more "nOT mUh CoMpUTaTiOn" arguments as definist fallacy


    In other words, you are just admitting, you don't care what the words mean in the field, you will just continue to be a stupid and ignorant liar about what you are doing.

    i just don't care what YOU, richard, say "CoMpUTaTiOn" means. you aren't "the field" bro, and i just really dgaf about ur endless definist fallacy


    But apparently you do, as you aren't just going to present your ideas directly to "the field" in a peer-reviewed journal, so something is telling you that you have something to fix.
    or rather the peer-review is so gatekept i don't even get a review back for my submission, just rejection without review.
    the system is broken such that i will take my stance elsewhere.
    everyone else can ignore me at all our peril...


    Which just shows that you aren't in step with what the theory
    is actually talking about.

    That is your problem, you assume the world is wrong, and more than likely it is you that is wrong.

    i'm not the one continually asserting a bunch of impossible-to-find teapots floating around in machine space

    No, you just keep asserting that you compute impossible to compute results.

    while u just keep ignoring how i'm avoiding the pitfalls u use to
    claim impossibility


    No, you use an assumption that requires something proved impossible
    that you want to claim is possible because it might be.

    u haven't proven my proposed interfaces impossible because u haven't
    generated a contradiction with them

    But you haven't proven them possible either.

    I guess you don't understand Russell's teapot.



    Sorry, you need to actually SHOW how to do what you want to claim
    with actual realizable steps.

    And that means you need a COMPUTABLE method to generate your
    enumerations that you iterate through that is complete.

    i don't need to do that to undercut the proof

    Sure you do.

    Since your proof assumes a non-existent thing exists, it isn't a proof
    or an undercut.

    i'm showing it *can* exist with the possibility of self-referential set-classification paradoxes...

    No, you are trying to show that "if you assume you can do the
    impossible" then you can do the impossible.

    To show you CAN do something, you need to demonstrate how to do it.


    and u've lost ur proof it can't exist due to self-referential set-classification paradoxes, which is a major pillar of undecidability arguments.

    No, because your "Proof" doesn't prove anything, as it is based on an
    unsound assumption.

    All you have done is proven you can make circular arguments.


    i don't need to show that it *does* exist, i just need to show it *can* exist to make progress here

    Nope. Fallacy of assuming the conclusion. A REAL logical fallacy.



    Whatever the specific implementation of the interface returns, it will
    be wrong, by the specific implementation of the "pathological" program.

    That program has a definite result, so there *IS* a correct answer
    that the interface SHOULD have returned, but didn't.

    i have two proposals now; which are you trying to critique? cause one of
    them doesn't involve any incorrect answers.

    Both of which are based on assuming the ability to compute the
    non-computable.



    Thus "Pathological" is NOT a correct response, as EVERY machine that
    we can make will either Halt or Not Halt. ITS behavior is definite.

    Your problem is you confuse the individual definite machines for the
    templates that generate them. But we aren't asking about the
    templates, only individual machines, as templates don't necessarily
    have a uniform answer. (Some halt, some don't, depending on which
    error in implementing the proposed interface was made). All we
    prove by that is that the interface is, in fact, unimplementable for
    FULL deciding.

    Since that IS the Halting Problem, it makes the proof.
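[Editor's note: the construction being argued over can be sketched in code. None of this appears in the thread; `make_pathological` and `halts` are invented names, with Python callables standing in for Turing machines.]

```python
# Sketch of the classic "pathological" construction: given any claimed
# total halting decider `halts`, build a program that does the opposite
# of whatever `halts` predicts about it.

def make_pathological(halts):
    """Return a program that inverts `halts`'s verdict about itself."""
    def pathological():
        if halts(pathological):
            while True:      # verdict was "halts", so loop forever
                pass
        return None          # verdict was "loops", so halt at once
    return pathological

# Any concrete decider is wrong on its own pathological program.
# For example, a decider that always answers "halts":
always_yes = lambda program: True
p = make_pathological(always_yes)
# always_yes(p) claims p halts, but running p() would loop forever.
# A decider that always answers "loops" fails symmetrically: its
# pathological program halts immediately.
```

[Each individual machine here still definitely halts or loops; only the claimed universal decider is contradicted, which is the point of the paragraph above.]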

    When you relax to just partial deciders, it is a well known solvable
    problem, where work just continues to improve what classes of inputs
    can be decided on, which is a quantitative problem, not a qualitative
    one.
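[Editor's note: for contrast, a sketch of the relaxed, *partial* decider described in the paragraph above; the three-valued answer avoids the contradiction because hard inputs simply land in an "unknown" class. The toy language and every name here are invented for illustration.]

```python
# A partial halting classifier for a two-instruction toy language:
# it answers only when sure, and otherwise declines.

def partial_halts(program: str) -> str:
    """Return 'halts', 'loops', or 'unknown' for a toy program."""
    if program == "RETURN":
        return "halts"       # trivially terminating
    if program == "JUMP 0":
        return "loops"       # trivially non-terminating
    return "unknown"         # anything else: refuse to answer

print(partial_halts("RETURN"))    # halts
print(partial_halts("JUMP 0"))    # loops
print(partial_halts("D(D)"))      # unknown
```

[Work on termination analysis grows the set of inputs that escape the "unknown" bucket, which is the quantitative improvement the post describes.]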




    And, the possibility of unknowable things hiding in machine space
    isn't as crazy as it might seem, as there are an infinite number
    of machines for them to hide within.

    i just love how godel convinced u to believe russel's teapot
    certainly exists

    He didn't. But Russell shows that claims we need to prove it doesn't
    exist are invalid.

    yes i don't need to prove ur ghosts don't exist. they don't because u
    can't even know about them

    Sure we know a bit about them, like they exist.

    bare assertion


    Your problem is you don't understand what you are talking about and
    assume you can make unfounded assumptions.

    gaslighting




    I have shown you the proof that unknowable things must exist. You
    claim they can't, but your only reasoning is based on there being
    something new that we don't know about that you can't actually prove.

    Which of those is a claim of the existence of a Russell's Teapot?

    The thing with a proof, or the things just assumed?





    If you want to break down a "broken" structure, you need to know enough about it to SHOW it is broken.

    Just assuming it is just shows that it is most likely YOU that is wrong.

    It is more that the system ignores that which tries to break it, because getting sidetracked on false trails is too damaging.

    To me it seems more of a peril to accept your misguided ideas.
    The fact that you begin by trying to redefine things out of
    ignorance doesn't help your case.












    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@news.x.richarddamon@xoxy.net to comp.theory on Sun Feb 1 07:33:32 2026
    From Newsgroup: comp.theory

    On 1/28/26 4:16 PM, dart200 wrote:
    On 1/28/26 12:43 PM, Chris M. Thomasson wrote:
    On 1/25/2026 1:11 PM, dart200 wrote:
    On 1/25/26 1:10 PM, Chris M. Thomasson wrote:
    On 1/25/2026 1:09 PM, dart200 wrote:
    On 1/25/26 1:04 PM, Chris M. Thomasson wrote:
    On 1/24/2026 2:20 PM, dart200 wrote:
    On 1/24/26 1:42 PM, Chris M. Thomasson wrote:
    On 1/24/2026 9:26 AM, Richard Damon wrote:
    On 1/24/26 11:45 AM, dart200 wrote:
    On 1/24/26 4:17 AM, Richard Damon wrote:
    On 1/24/26 3:03 AM, dart200 wrote:
    On 1/23/26 5:36 PM, Richard Damon wrote:
    On 1/20/26 9:30 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:

    The term *IS* defined, and to change it means you lie.

    doubling down on definist fallacy ehh???

    I guess you don't understand the definist fallacy, as using the ACTUAL definition isn't a fallacy.

    nah ur just pushing a definition that u've happened to latch onto because it's convenient for u

    classic definist fallacy


    Nope, you are just stuck in a definist fallacy about what a definist fallacy is.

    It seems you don't understand the concept that some things ARE just defined a given way to be in a given context.


    and u richard are not the god what that is


    But "the field" is, and thus you are just saying it is ok to >>>>>>>>> change the meaning of words.

    Arguing with them is pointless. Almost akin to this moron:

    https://youtu.be/hymaQWjBOqM

    yeah fuck u too bro!


    Spoken like a true genius.

    trolls deserve nothing more than insults


    Are you a troll?

    no


    Well, show us how to implement the logic behind your interfaces...

    not until i'm granted the funding to take on a project like that

    like i've told u *specifically* many times before:

    proving an algo *could* exist is orders of magnitude less complicated
    than actually constructing said algo.

    ur just committing the perfectionist fallacy because u've been spoon fed
    too much tv reality where some protagonist is always able to solve arbitrarily complex problems. real progress ain't like that bro


    But funding tends to only come from ideas that are actually proven to be feasible. To GET funding, you tend to need to actually prove that you
    have something to stand on.

    These are ideas that are basically akin to perpetual motion machines,
    and thus not admissible for patents or other IP protection.

    No one is apt to be willing to fund projects which, by necessity, will fail.

    It isn't a "perfectionist fallacy" to point out that something isn't
    perfect where it needs to be.

    The problem is that "Halting", per the theory, *IS* a "Perfectionist" problem.
    It is well solved for the partial case (but improvements might be possible).
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory on Mon Feb 2 10:11:31 2026
    From Newsgroup: comp.theory

    On 2/1/26 4:33 AM, Richard Damon wrote:
    On 1/28/26 1:33 PM, dart200 wrote:
    On 1/28/26 4:34 AM, Richard Damon wrote:
    On 1/27/26 12:48 AM, dart200 wrote:
    On 1/26/26 12:50 PM, Richard Damon wrote:
    On 1/25/26 4:28 PM, dart200 wrote:
    On 1/25/26 1:14 PM, Richard Damon wrote:
    On 1/25/26 4:05 PM, dart200 wrote:
    On 1/25/26 10:20 AM, Richard Damon wrote:
    On 1/24/26 9:10 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:31 PM, dart200 wrote:
    On 1/24/26 2:25 PM, Richard Damon wrote:
    On 1/24/26 3:56 PM, dart200 wrote:
    On 1/24/26 11:52 AM, Richard Damon wrote:
    On 1/24/26 1:33 PM, dart200 wrote:
    On 1/24/26 9:26 AM, Richard Damon wrote:
    On 1/24/26 11:45 AM, dart200 wrote:
    On 1/24/26 4:17 AM, Richard Damon wrote:
    On 1/24/26 3:03 AM, dart200 wrote:
    On 1/23/26 5:36 PM, Richard Damon wrote:
    On 1/20/26 9:30 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:

    The term *IS* defined, and to change it means you lie.


    doubling down on definist fallacy ehh???
    I guess you don't understand the definist fallacy, as using the ACTUAL definition isn't a fallacy.
    nah ur just pushing a definition that u've happened to latch onto because it's convenient for u
    classic definist fallacy


    Nope, you are just stuck in a definist fallacy about what a definist fallacy is.

    It seems you don't understand the concept that some things ARE just defined a given way to be in a given context.


    and u richard are not the god what that is

    But "the field" is, and thus you are just saying it is >>>>>>>>>>>>>>>>> ok to change the meaning of words.

    i don't believe u represent what "the field" is either

    Then go to "the field" and see if they disagree.

    "the field" can come here if they like,

    but as it stands i'm going to call out any more "nOT mUh CoMpUTaTiOn" arguments as definist fallacy


    In other words, you are just admitting, you don't care what the words mean in the field, you will just continue to be a stupid and ignorant liar about what you are doing.

    i just don't care what YOU, richard, say "CoMpUTaTiOn" means. you aren't "the field" bro, and i just really dgaf about ur endless definist fallacy


    But apparently you do, as you aren't just going to present your ideas
    directly to "the field" in a peer-reviewed journal, so something is
    telling you that you have something to fix.

    or rather the peer-review is so gatekept i don't even get a review back for my submission, just rejection without review.
    the system is broken such that i will take my stance elsewhere.
    everyone else can ignore me at all our peril...


    Which just shows that you aren't in step with what the theory
    is actually talking about.

    That is your problem, you assume the world is wrong, and more than likely it is you that is wrong.

    i'm not the one continually asserting a bunch of impossible-to-find teapots floating around in machine space

    No, you just keep asserting that you compute impossible to
    compute results.

    while u just keep ignoring how i'm avoiding the pitfalls u use to claim impossibility


    No, you use an assumption that requires something proved impossible that you want to claim is possible because it might be.

    u haven't proven my proposed interfaces impossible because u haven't
    generated a contradiction with them

    But you haven't proven them possible either.

    I guess you don't understand Russell's teapot.



    Sorry, you need to actually SHOW how to do what you want to claim
    with actual realizable steps.

    And that means you need a COMPUTABLE method to generate your
    enumerations that you iterate through that is complete.

    i don't need to do that to undercut the proof

    Sure you do.

    Since your proof assumes a non-existent thing exists, it isn't a
    proof or an undercut.

    i'm showing it *can* exist with the possibility of self-referential
    set-classification paradoxes...

    No, you are trying to show that "if you assume you can do the
    impossible" then you can do the impossible.

    normally these proofs go (after we stop begging the question):

    "assume you can do x, x produces contradiction, and therefore x is
    impossible"

    what i'm trying to show:

    "assume you can do x, x ... doesn't produce a contradiction??? therefor
    x *might* be possible"


    To show you CAN do something, you need to demonstrate how to do it.


    no, i'm trying to move the needle from CANNOT do something to MIGHT do something, as that opens up the motivation for further research to reach
    CAN do something

    disentangling the logical interface is a one man job. actually
    implementing is much greater than a one man job. and i still stand by that.


    and u've lost ur proof it can't exist due to self-referential set-
    classification paradoxes, which is a major pillar of undecidability
    arguments.

    No, because your "Proof" doesn't prove anything, as it is based on an unsound assumption.

    All you have done is proven you can make circular arguments.

    which is an improvement over the contradictions that previously were demonstrated

    being twice my age, u may be too old to ever understand the significance
    of such, but ur inability will not deter me



    i don't need to show that it *does* exist, i just need to show it
    *can* exist to make progress here

    Nope. Fallacy of assuming the conclusion. A REAL logical fallacy.



    Whatever the specific implementation of the interface returns, it will
    be wrong, by the specific implementation of the "pathological" program.

    That program has a definite result, so there *IS* a correct answer
    that the interface SHOULD have returned, but didn't.

    i have two proposals now; which are you trying to critique? cause one
    of them doesn't involve any incorrect answers.

    Both of which are based on assuming the ability to compute the non-computable.



    Thus "Pathological" is NOT a correct response, as EVERY machine that
    we can make will either Halt or Not Halt. ITS behavior is definite.

    Your problem is you confuse the individual definite machines for the
    templates that generate them. But we aren't asking about the
    templates, only individual machines, as templates don't necessarily
    have a uniform answer. (Some halt, some don't, depending on which
    error in implementing the proposed interface was made). All we
    prove by that is that the interface is, in fact, unimplementable for
    FULL deciding.

    Since that IS the Halting Problem, it makes the proof.

    When you relax to just partial deciders, it is a well known solvable
    problem, where work just continues to improve what classes of inputs
    can be decided on, which is a quantitative problem, not a qualitative
    one.




    And, the possibility of unknowable things hiding in machine space isn't as crazy as it might seem, as there are an infinite number of machines for them to hide within.

    i just love how godel convinced u to believe russel's teapot
    certainly exists

    He didn't. But Russell shows that claims we need to prove it doesn't exist are invalid.

    yes i don't need to prove ur ghosts don't exist. they don't because
    u can't even know about them

    Sure we know a bit about them, like they exist.

    bare assertion


    Your problem is you don't understand what you are talking about and
    assume you can make unfounded assumptions.

    gaslighting




    I have shown you the proof that unknowable things must exist. You
    claim they can't, but your only reasoning is based on there being
    something new that we don't know about that you can't actually prove.
    Which of those is a claim of the existence of a Russell's Teapot?

    The thing with a proof, or the things just assumed?





    If you want to break down a "broken" structure, you need to know enough about it to SHOW it is broken.

    Just assuming it is just shows that it is most likely YOU that is wrong.

    It is more that the system ignores that which tries to break it, because getting sidetracked on false trails is too damaging.
    To me it seems more of a peril to accept your misguided ideas.
    The fact that you begin by trying to redefine things out of ignorance doesn't help your case.












    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory on Mon Feb 2 18:44:52 2026
    From Newsgroup: comp.theory

    On 2/2/26 1:11 PM, dart200 wrote:
    On 2/1/26 4:33 AM, Richard Damon wrote:
    On 1/28/26 1:33 PM, dart200 wrote:
    On 1/28/26 4:34 AM, Richard Damon wrote:
    On 1/27/26 12:48 AM, dart200 wrote:
    On 1/26/26 12:50 PM, Richard Damon wrote:
    On 1/25/26 4:28 PM, dart200 wrote:
    On 1/25/26 1:14 PM, Richard Damon wrote:
    On 1/25/26 4:05 PM, dart200 wrote:
    On 1/25/26 10:20 AM, Richard Damon wrote:
    On 1/24/26 9:10 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:31 PM, dart200 wrote:
    On 1/24/26 2:25 PM, Richard Damon wrote:
    On 1/24/26 3:56 PM, dart200 wrote:
    On 1/24/26 11:52 AM, Richard Damon wrote:
    On 1/24/26 1:33 PM, dart200 wrote:
    On 1/24/26 9:26 AM, Richard Damon wrote:
    On 1/24/26 11:45 AM, dart200 wrote:
    On 1/24/26 4:17 AM, Richard Damon wrote:
    On 1/24/26 3:03 AM, dart200 wrote:
    On 1/23/26 5:36 PM, Richard Damon wrote:
    On 1/20/26 9:30 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:

    The term *IS* defined, and to change it means you lie.


    doubling down on definist fallacy ehh???
    I guess you don't understand the definist fallacy, as using the ACTUAL definition isn't a fallacy.
    nah ur just pushing a definition that u've happened to latch onto because it's convenient for u
    classic definist fallacy


    Nope, you are just stuck in a definist fallacy about what a definist fallacy is.

    It seems you don't understand the concept that some things ARE just defined a given way to be in a given context.


    and u richard are not the god what that is

    But "the field" is, and thus you are just saying it is >>>>>>>>>>>>>>>>>> ok to change the meaning of words.

    i don't believe u represent what "the field" is either

    Then go to "the field" and see if they disagree.
    "the field" can come here if they like,

    but as it stands i'm going to call out any more "nOT mUh CoMpUTaTiOn" arguments as definist fallacy


    In other words, you are just admitting, you don't care what the words mean in the field, you will just continue to be a stupid and ignorant liar about what you are doing.
    i just don't care what YOU, richard, say "CoMpUTaTiOn" means. you aren't "the field" bro, and i just really dgaf about ur endless definist fallacy


    But apparently you do, as you aren't just going to present your ideas directly to "the field" in a peer-reviewed journal, so something is telling you that you have something to fix.

    or rather the peer-review is so gatekept i don't even get a review back for my submission, just rejection without review.
    the system is broken such that i will take my stance elsewhere.
    everyone else can ignore me at all our peril...


    Which just shows that you aren't in step with what the theory is actually talking about.

    That is your problem, you assume the world is wrong, and more than likely it is you that is wrong.

    i'm not the one continually asserting a bunch of impossible-to-find teapots floating around in machine space

    No, you just keep asserting that you compute impossible to
    compute results.

    while u just keep ignoring how i'm avoiding the pitfalls u use to claim impossibility


    No, you use an assumption that requires something proved
    impossible that you want to claim is possible because it might be.

    u haven't proven my proposed interfaces impossible because u
    haven't generated a contradiction with them

    But you haven't proven them possible either.

    I guess you don't understand Russell's teapot.



    Sorry, you need to actually SHOW how to do what you want to claim with actual realizable steps.

    And that means you need a COMPUTABLE method to generate your
    enumerations that you iterate through that is complete.

    i don't need to do that to undercut the proof

    Sure you do.

    Since your proof assumes a non-existent thing exists, it isn't a
    proof or an undercut.

    i'm showing it *can* exist with the possibility of self-referential
    set-classification paradoxes...

    No, you are trying to show that "if you assume you can do the
    impossible" then you can do the impossible.

    normally these proofs go (after we stop begging the question):

    "assume you can do x, x produces contradiction, and therefore x is impossible"

    what i'm trying to show:

    "assume you can do x, x ... doesn't produce a contradiction??? therefor
    x *might* be possible"

    But that isn't sound logic, as x *might* have been possible without the
    assumption, and if you actually can't do x, all you have done is show
    that you use unsound logic.

    You need to understand how logic actually works; your argument is
    actually one of the real classical fallacies.



    To show you CAN do something, you need to demonstrate how to do it.


    no, i'm trying to move the needle from CANNOT do something to MIGHT do something, as that opens up the motivation for further research to reach
    CAN do something

    Which assuming something that you can't show doesn't accomplish.


    disentangling the logical interface is a one man job. actually
    implementing is much greater than a one man job. and i still stand by that.

    All you are doing is disintegrating your reputation for doing logic.



    and u've lost ur proof it can't exist due to self-referential set-
    classification paradoxes, which is a major pillar of undecidability
    arguments.

    No, because your "Proof" doesn't prove anything, as it is based on an
    unsound assumption.

    All you have done is proven you can make circular arguments.

    which is an improvement over the contradictions that previously were demonstrated

    Only for someone who can't do logic.


    being twice my age, u may be too old to ever understand the significance
    of such, but ur inability will not deter me

    All you are doing is proving you aren't as smart as you think you are,
    as you don't understand the basics of logic.

    Anyone who reads this argument will know better than to even think of
    supporting your work.




    i don't need to show that it *does* exist, i just need to show it
    *can* exist to make progress here

    Nope. Fallacy of assuming the conclusion. A REAL logical fallacy.



    Whatever the specific implementation of the interface returns, it
    will be wrong, by the specific implementation of the "pathological"
    program.

    That program has a definite result, so there *IS* a correct answer
    that the interface SHOULD have returned, but didn't.

    i have two proposals now which are you trying to critique? cause one
    of them doesn't involve any incorrect answers.

    Both of which are based on assuming the ability to compute the non-
    computable.



    Thus "Pathological" is NOT a correct response, as EVERY machine that
    we can make will either Halt or Not Halt. ITS behavior is definite.

    Your problem is you confuse the individual definite machines for the
    templates that generate them. But we aren't asking about the
    templates, only individual machines, as templates don't necessarily
    have a uniform answer. (Some halt, some don't, depending on which
    error in implementing the proposed interface was made). All we
    prove by that is that the interface is, in fact, unimplementable for
    FULL deciding.

    Since that IS the Halting Problem, it makes the proof.
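    The construction under discussion is the standard diagonal argument; a
    minimal Python sketch of it (the names `halts` and `pathological` are
    illustrative, not from the thread):

    ```python
    def halts(prog, arg):
        """Stand-in for an assumed TOTAL halt decider.

        The proof shows no such total decider exists, so this stub
        only marks where a claimed implementation would plug in.
        """
        raise NotImplementedError("no total halt decider exists")


    def pathological(prog):
        """Do the opposite of whatever halts() predicts about this call."""
        if halts(prog, prog):
            while True:      # halts() said "halts" -> loop forever
                pass
        return "halted"      # halts() said "loops" -> halt immediately


    # For any concrete halts(), the machine pathological(pathological)
    # makes that implementation's answer wrong, which is the
    # contradiction the proof relies on.
    ```
    
    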

    When you relax to just partial deciders, it is a well-known solvable
    problem, where work just continues to improve what classes of inputs
    can be decided on, which is a quantitative problem, not a
    qualitative one.
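    A step-bounded simulator illustrates the partial-decider point; this toy
    sketch (all names made up for illustration) never returns a wrong
    verdict, it only sometimes declines to answer:

    ```python
    def partial_halts(step_fn, state, fuel=10_000):
        """Partial halt decider for toy machines given as step functions.

        step_fn(state) yields the next state, or None once the machine
        halts.  Returns True (provably halts), False (provably loops,
        by repeated-state detection), or None (fuel exhausted: unknown).
        """
        seen = set()
        for _ in range(fuel):
            if state is None:
                return True          # reached a halting configuration
            if state in seen:
                return False         # revisited a state: guaranteed loop
            seen.add(state)
            state = step_fn(state)
        return None                  # undecided within the step budget


    countdown = lambda n: None if n == 0 else n - 1   # 5,4,...,0 then halts
    bouncer   = lambda n: 1 - n                       # 0,1,0,1,... forever
    grower    = lambda n: n + 1                       # never repeats a state
    ```

    Widening the set of inputs that get True or False instead of None is the
    quantitative kind of progress described above.
    
    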




    And, the possibility of unknowable things hiding in machine
    space isn't as crazy as it might seem, as there are an infinite
    number of machines for them to hide with.

    i just love how godel convinced u to believe russel's teapot
    certainly exists

    He didn't. But Russell shows that claims we need to prove it
    doesn't are invalid.

    yes i don't need to prove ur ghosts don't exist. they don't because
    u can't even know about them

    Sure we know a bit about them, like they exist.

    bare assertion


    Your problem is you don't understand what you are talking about and
    assume you can make unfounded assumptions.

    gaslighting




    I have shown you the proof that unknowable things must exist. You
    claim they can't, but your only reasoning is based on there being
    something new that we don't know about that you can't actually prove.
    Which of those is a claim of the existence of Russell's Teapot?

    The thing with a proof, or the things just assumed?





    If you want to break down a "broken" structure, you need to
    know enough about it to SHOW it is broken.

    Just assuming it is just shows that it is most likely YOU that
    is wrong.

    It is more that the system ignores that which tries to break
    it, because getting side tracked on false trails is too damaging.

    To me it seems more of a peril to accept your misguided ideas.

    The fact that you begin by trying to redefine things out of
    ignorance doesn't help your case.















    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory on Mon Feb 2 23:53:17 2026
    From Newsgroup: comp.theory

    On 2/2/26 3:44 PM, Richard Damon wrote:
    On 2/2/26 1:11 PM, dart200 wrote:
    On 2/1/26 4:33 AM, Richard Damon wrote:
    On 1/28/26 1:33 PM, dart200 wrote:
    On 1/28/26 4:34 AM, Richard Damon wrote:
    On 1/27/26 12:48 AM, dart200 wrote:
    On 1/26/26 12:50 PM, Richard Damon wrote:
    On 1/25/26 4:28 PM, dart200 wrote:
    On 1/25/26 1:14 PM, Richard Damon wrote:
    On 1/25/26 4:05 PM, dart200 wrote:
    On 1/25/26 10:20 AM, Richard Damon wrote:
    On 1/24/26 9:10 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:31 PM, dart200 wrote:
    On 1/24/26 2:25 PM, Richard Damon wrote:
    On 1/24/26 3:56 PM, dart200 wrote:
    On 1/24/26 11:52 AM, Richard Damon wrote:
    On 1/24/26 1:33 PM, dart200 wrote:
    On 1/24/26 9:26 AM, Richard Damon wrote:
    On 1/24/26 11:45 AM, dart200 wrote:
    On 1/24/26 4:17 AM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>>> On 1/24/26 3:03 AM, dart200 wrote:
    On 1/23/26 5:36 PM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>>>>> On 1/20/26 9:30 PM, dart200 wrote: >>>>>>>>>>>>>>>>>>>>>>>> On 1/20/26 4:59 AM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>>>>>
    The term *IS* defined, and to change it means >>>>>>>>>>>>>>>>>>>>>>>>> you lie.


    doubling down on definist fallacy ehh??? >>>>>>>>>>>>>>>>>>>>>>>
    I guess you don't understand the definist fallacy, as using the
    ACTUAL definition isn't a fallacy.

    nah ur just pushing a definition that u've >>>>>>>>>>>>>>>>>>>>>> happened latch onto because it's convenient for u >>>>>>>>>>>>>>>>>>>>>>
    classic definist fallacy


    Nope, you are just stuck in a definist fallacy >>>>>>>>>>>>>>>>>>>>> about what a definist fallacy is.

    It seems you don't understand the concept that some >>>>>>>>>>>>>>>>>>>>> things ARE just defined a given way to be in a >>>>>>>>>>>>>>>>>>>>> given context.


    and u richard are not the god what that is >>>>>>>>>>>>>>>>>>>>

    But "the field" is, and thus you are just saying it >>>>>>>>>>>>>>>>>>> is ok to change the meaning of words.

    i don't believe u represent what "the field" is either >>>>>>>>>>>>>>>>>>

    The go to "the field" and see if they disagree. >>>>>>>>>>>>>>>>
    "the field" can come here if they like,

    but as it stands i'm going to call out any more "nOT mUh >>>>>>>>>>>>>>>> CoMpUTaTiOn" arguments as definist fallacy


    In other words, you are just admitting, you don't care
    what the words mean in the field, you will just continue
    to be a stupid and ignorant liar about what you are doing.
    i just don't care what YOU, richard, says "CoMpUTaTiOn" >>>>>>>>>>>>>> means. you aren't "the field" bro, and i just really dgaf >>>>>>>>>>>>>> about ur endless definist fallacy


    But apparently you do, as you aren't just going to present
    your ideas directly to "the field" in a peer-reviewed
    journal, so something is telling you that you have
    something to fix.

    or rather the peer-review is so gatekept i don't even get a >>>>>>>>>>>> review back for my submission, just rejection without review. >>>>>>>>>>>>
    the system is broken such that i will take my stance elsewhere. >>>>>>>>>>>>
    everyone else can ignore me at all our peril...


    Which just shows that you aren't in step with what theory
    actually is talking about.

    That is your problem, you assume the world is wrong, and more
    than likely it is you that is wrong.

    i'm not the one continually asserting a bunch of impossible to
    find teapots floating around in machine space

    No, you just keep asserting that you compute impossible to
    compute results.

    while u just keep ignoring how i'm avoiding the pitfalls u use
    to claim impossibility


    No, you use an assumption that requires something proved
    impossible that you want to claim is possible because it might be.
    u haven't proven my proposed interfaces impossible because u
    haven't generated a contradiction with them

    But you haven't proven them possible either.

    I guess you don't understand Russell's teapot.



    Sorry, you need to actually SHOW how to do what you want to claim
    with actual realizable steps.

    And that means you need a COMPUTABLE method to generate your
    enumerations that you iterate through that is complete.

    i don't need to do that to undercut the proof

    Sure you do.

    Since your proof assumes a non-existent thing exists, it isn't a
    proof or an undercut.

    i'm showing it *can* exist with the possibility of self-referential
    set-classification paradoxes...

    No, you are trying to show that "if you assume you can do the
    impossible" then you can do the impossible.

    normally these proofs go (after we stop begging the question):

    "assume you can do x, x produces contradiction, and therefore x is
    impossible"

    what i'm trying to show:

    "assume you can do x, x ... doesn't produce a contradiction???
    therefore x *might* be possible"

    But that isn't sound logic, as x *might* have been possible without the

    see x was previously thought to be impossible due to a specific proof,
    but that proof evaporates when we frame the problem correctly, and so u
    have lost ur proof that x is impossible. that's really what i'm trying
    to get at here

    u can't cope with that so u'll just continue to deny. none of the rest
    of this gish gallop is worth my time responding to. it contains
    nothing that inspires me further because ur just repeating urself ad
    nauseam, mostly via insults

    assumption, and if you actually can't do x, all you have done is shown
    you use unsound logic.

    You need to understand how logic actually works, your argument is
    actually one of the real classical fallacies.



    To show you CAN do something, you need to demonstrate how to do it.


    no, i'm trying to move the needle from CANNOT do something to MIGHT do
    something, as that opens up the motivation for further research to
    reach CAN do something

    Which assuming something that you can't show doesn't do.


    disentangling the logical interface is a one man job. actually
    implementing it is much greater than a one man job. and i still stand by
    that.

    All you are doing is disintegrating your reputation for doing logic.



    and u've lost ur proof it can't exist due to self-referential set-
    classification paradoxes, which is a major pillar of undecidability
    arguments.

    No, because your "Proof", doesn't proof anything as it is based on an
    unsound assumption.

    All you have done is proven you can make circular arguments.

    which is an improvement over the contradictions that previously were
    demonstrated

    Only for someone who can't do logic.


    being twice my age, u may be too old to ever understand the
    significance of such, but ur inability will not deter me

    All you are doing is proving you aren't as smart as you think you are,
    as you don't understand the basics of logic.

    Anyone who reads this argument will know better than to even think of supporting your work.




    i don't need to show that it *does* exist, i just need to show it
    *can* exist to make progress here

    Nope. Fallacy of assuming the conclusion. A REAL logical fallacy.



    Whatever the specific implementation of the interface returns, it
    will be wrong, by the specific implementation of the "pathological"
    program.

    That program has a definite result, so there *IS* a correct answer
    that the interface SHOULD have returned, but didn't.

    i have two proposals now which are you trying to critique? cause one
    of them doesn't involve any incorrect answers.

    Both of which are based on assuming the ability to compute the non-
    computable.



    Thus "Pathological" is NOT a correct response, as EVERY machine
    that we can make will either Halt or Not Halt. ITS behavior is
    definite.

    Your problem is you confuse the individual definite machines for
    the templates that generate them. But we aren't asking about the
    templates, only individual machines, as templates don't necessarily
    have a uniform answer. (Some halt, some don't, depending on which
    error in implementing the proposed interface was made). All we
    prove by that is that the interface is, in fact, unimplementable
    for FULL deciding.

    Since that IS the Halting Problem, it makes the proof.

    When you relax to just partial deciders, it is a well-known solvable
    problem, where work just continues to improve what classes of
    inputs can be decided on, which is a quantitative problem, not a
    qualitative one.




    And, the possibility of unknowable things hiding in machine
    space isn't as crazy as it might seem, as there are an infinite
    number of machines for them to hide with.

    i just love how godel convinced u to believe russel's teapot
    certainly exists

    He didn't. But Russell shows that claims we need to prove it
    doesn't are invalid.

    yes i don't need to prove ur ghosts don't exist. they don't
    because u can't even know about them

    Sure we know a bit about them, like they exist.

    bare assertion


    Your problem is you don't understand what you are talking about and
    assume you can make unfounded assumptions.

    gaslighting




    I have shown you the proof that unknowable things must exist. You
    claim they can't, but your only reasoning is based on there being
    something new that we don't know about that you can't actually
    prove.

    Which of those is a claim of the existence of Russell's Teapot?

    The thing with a proof, or the things just assumed?





    If you want to break down a "broken" structure, you need to
    know enough about it to SHOW it is broken.

    Just assuming it is just shows that it is most likely YOU
    that is wrong.

    It is more that the system ignores that which tries to break
    it, because getting side tracked on false trails is too
    damaging.

    To me it seems more of a peril to accept your misguided ideas.

    The fact that you begin by trying to redefine things out of
    ignorance doesn't help your case.















    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory on Tue Feb 3 07:20:15 2026
    From Newsgroup: comp.theory

    On 2/3/26 2:53 AM, dart200 wrote:
    On 2/2/26 3:44 PM, Richard Damon wrote:
    On 2/2/26 1:11 PM, dart200 wrote:
    On 2/1/26 4:33 AM, Richard Damon wrote:
    On 1/28/26 1:33 PM, dart200 wrote:
    On 1/28/26 4:34 AM, Richard Damon wrote:
    On 1/27/26 12:48 AM, dart200 wrote:
    On 1/26/26 12:50 PM, Richard Damon wrote:
    On 1/25/26 4:28 PM, dart200 wrote:
    On 1/25/26 1:14 PM, Richard Damon wrote:
    On 1/25/26 4:05 PM, dart200 wrote:
    On 1/25/26 10:20 AM, Richard Damon wrote:
    On 1/24/26 9:10 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:31 PM, dart200 wrote:
    On 1/24/26 2:25 PM, Richard Damon wrote:
    On 1/24/26 3:56 PM, dart200 wrote:
    On 1/24/26 11:52 AM, Richard Damon wrote:
    On 1/24/26 1:33 PM, dart200 wrote:
    On 1/24/26 9:26 AM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>> On 1/24/26 11:45 AM, dart200 wrote:
    On 1/24/26 4:17 AM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>>>> On 1/24/26 3:03 AM, dart200 wrote: >>>>>>>>>>>>>>>>>>>>>>> On 1/23/26 5:36 PM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>>>>>> On 1/20/26 9:30 PM, dart200 wrote: >>>>>>>>>>>>>>>>>>>>>>>>> On 1/20/26 4:59 AM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>>>>>>
    The term *IS* defined, and to change it means >>>>>>>>>>>>>>>>>>>>>>>>>> you lie.


    doubling down on definist fallacy ehh??? >>>>>>>>>>>>>>>>>>>>>>>>
    I guess you don't understand the definist fallacy, as using the
    ACTUAL definition isn't a fallacy.

    nah ur just pushing a definition that u've >>>>>>>>>>>>>>>>>>>>>>> happened latch onto because it's convenient for u >>>>>>>>>>>>>>>>>>>>>>>
    classic definist fallacy


    Nope, you are just stuck in a definist fallacy >>>>>>>>>>>>>>>>>>>>>> about what a definist fallacy is.

    It seems you don't understand the concept that >>>>>>>>>>>>>>>>>>>>>> some things ARE just defined a given way to be in >>>>>>>>>>>>>>>>>>>>>> a given context.


    and u richard are not the god what that is >>>>>>>>>>>>>>>>>>>>>

    But "the field" is, and thus you are just saying it >>>>>>>>>>>>>>>>>>>> is ok to change the meaning of words.

    i don't believe u represent what "the field" is either >>>>>>>>>>>>>>>>>>>

    The go to "the field" and see if they disagree. >>>>>>>>>>>>>>>>>
    "the field" can come here if they like,

    but as it stands i'm going to call out any more "nOT >>>>>>>>>>>>>>>>> mUh CoMpUTaTiOn" arguments as definist fallacy >>>>>>>>>>>>>>>>>

    In other words, you are just admitting, you don't care
    what the words mean in the field, you will just continue
    to be a stupid and ignorant liar about what you are doing.
    i just don't care what YOU, richard, says "CoMpUTaTiOn" >>>>>>>>>>>>>>> means. you aren't "the field" bro, and i just really dgaf >>>>>>>>>>>>>>> about ur endless definist fallacy


    But apparently you do, as you aren't just going to present
    your ideas directly to "the field" in a peer-reviewed
    journal, so something is telling you that you have
    something to fix.

    or rather the peer-review is so gatekept i don't even get a >>>>>>>>>>>>> review back for my submission, just rejection without review. >>>>>>>>>>>>>
    the system is broken such that i will take my stance >>>>>>>>>>>>> elsewhere.

    everyone else can ignore me at all our peril...


    Which just shows that you aren't in step with what theory
    actually is talking about.

    That is your problem, you assume the world is wrong, and
    more than likely it is you that is wrong.

    i'm not the one continually asserting a bunch of impossible to
    find teapots floating around in machine space

    No, you just keep asserting that you compute impossible to >>>>>>>>>> compute results.

    while u just keep ignoring how i'm avoiding the pitfalls u use
    to claim impossibility


    No, you use an assumption that requires something proved
    impossible that you want to claim is possible because it might be.
    u haven't proven my proposed interfaces impossible because u
    haven't generated a contradiction with them

    But you haven't proven them possible either.

    I guess you don't understand Russell's teapot.



    Sorry, you need to actually SHOW how to do what you want to
    claim with actual realizable steps.

    And that means you need a COMPUTABLE method to generate your
    enumerations that you iterate through that is complete.

    i don't need to do that to undercut the proof

    Sure you do.

    Since your proof assumes a non-existent thing exists, it isn't a
    proof or an undercut.

    i'm showing it *can* exist with the possibility of self-referential
    set-classification paradoxes...

    No, you are trying to show that "if you assume you can do the
    impossible" then you can do the impossible.

    normally these proofs go (after we stop begging the question):

    "assume you can do x, x produces contradiction, and therefore x is
    impossible"

    what i'm trying to show:

    "assume you can do x, x ... doesn't produce a contradiction???
    therefore x *might* be possible"

    But that isn't sound logic, as x *might* have been possible without the

    see x was previously thought to be impossible due to a specific proof,
    but that proof evaporates when we frame the problem correctly, and so u
    have lost ur proof that x is impossible. that's really what i'm trying
    to get at here

    Nope, the assumption of the impossible just makes your proof unsound.

    Your continuing to do that shows that YOU are unsound.

    You just don't understand how logic works.


    u can't cope with that so u'll just continue to deny. none of the rest
    of this gish gallop is worth my time responding to. it contains
    nothing that inspires me further because ur just repeating urself ad
    nauseam, mostly via insults

    Go ahead, deny truth, that just puts you into Peter's world of fantasy.

    A world where nothing, and everything is true, because truth has lost
    its meaning.



    assumption, and if you actually can't do x, all you have done is
    shown you use unsound logic.

    You need to understand how logic actually works, your argument is
    actually one of the real classical fallacies.



    To show you CAN do something, you need to demonstrate how to do it.


    no, i'm trying to move the needle from CANNOT do something to MIGHT
    do something, as that opens up the motivation for further research to
    reach CAN do something

    Which assuming something that you can't show doesn't do.


    disentangling the logical interface is a one man job. actually
    implementing it is much greater than a one man job. and i still stand by
    that.

    All you are doing is disintegrating your reputation for doing logic.



    and u've lost ur proof it can't exist due to self-referential set-
    classification paradoxes, which is a major pillar of undecidability
    arguments.

    No, because your "Proof", doesn't proof anything as it is based on
    an unsound assumption.

    All you have done is proven you can make circular arguments.

    which is an improvement over the contradictions that previously were
    demonstrated

    Only for someone who can't do logic.


    being twice my age, u may be too old to ever understand the
    significance of such, but ur inability will not deter me

    All you are doing is proving you aren't as smart as you think you are,
    as you don't understand the basics of logic.

    Anyone who reads this argument will know better than to even think of
    supporting your work.




    i don't need to show that it *does* exist, i just need to show it
    *can* exist to make progress here

    Nope. Fallacy of assuming the conclusion. A REAL logical fallacy.



    Whatever the specific implementation of the interface returns, it
    will be wrong, by the specific implementation of the
    "pathological" program.

    That program has a definite result, so there *IS* a correct answer
    that the interface SHOULD have returned, but didn't.

    i have two proposals now which are you trying to critique? cause
    one of them doesn't involve any incorrect answers.

    Both of which are based on assuming the ability to compute the non-
    computable.



    Thus "Pathological" is NOT a correct response, as EVERY machine
    that we can make will either Halt or Not Halt. ITS behavior is
    definite.

    Your problem is you confuse the individual definite machines for
    the templates that generate them. But we aren't asking about the
    templates, only individual machines, as templates don't
    necessarily have a uniform answer. (Some halt, some don't,
    depending on which error in implementing the proposed interface
    was made). All we prove by that is that the interface is, in
    fact, unimplementable for FULL deciding.

    Since that IS the Halting Problem, it makes the proof.

    When you relax to just partial deciders, it is a well-known
    solvable problem, where work just continues to improve what
    classes of inputs can be decided on, which is a quantitative
    problem, not a qualitative one.




    And, the possibility of unknowable things hiding in machine
    space isn't as crazy as it might seem, as there are an
    infinite number of machines for them to hide with.

    i just love how godel convinced u to believe russel's teapot >>>>>>>>> certainly exists

    He didn't. But Russell shows that claims we need to prove it
    doesn't are invalid.

    yes i don't need to prove ur ghosts don't exist. they don't
    because u can't even know about them

    Sure we know a bit about them, like they exist.

    bare assertion


    Your problem is you don't understand what you are talking about
    and assume you can make unfounded assumptions.

    gaslighting




    I have shown you the proof that unknowable things must exist.
    You claim they can't, but your only reasoning is based on there
    being something new that we don't know about that you can't
    actually prove.

    Which of those is a claim of the existence of Russell's Teapot?

    The thing with a proof, or the things just assumed?





    If you want to break down a "broken" structure, you need to
    know enough about it to SHOW it is broken.

    Just assuming it is just shows that it is most likely YOU
    that is wrong.

    It is more that the system ignores that which tries to break
    it, because getting side tracked on false trails is too
    damaging.

    To me it seems more of a peril to accept your misguided ideas.

    The fact that you begin by trying to redefine things out of
    ignorance doesn't help your case.


















    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory on Tue Feb 3 11:33:26 2026
    From Newsgroup: comp.theory

    On 2/3/26 4:20 AM, Richard Damon wrote:
    On 2/3/26 2:53 AM, dart200 wrote:
    On 2/2/26 3:44 PM, Richard Damon wrote:
    On 2/2/26 1:11 PM, dart200 wrote:
    On 2/1/26 4:33 AM, Richard Damon wrote:
    On 1/28/26 1:33 PM, dart200 wrote:
    On 1/28/26 4:34 AM, Richard Damon wrote:
    On 1/27/26 12:48 AM, dart200 wrote:
    On 1/26/26 12:50 PM, Richard Damon wrote:
    On 1/25/26 4:28 PM, dart200 wrote:
    On 1/25/26 1:14 PM, Richard Damon wrote:
    On 1/25/26 4:05 PM, dart200 wrote:
    On 1/25/26 10:20 AM, Richard Damon wrote:
    On 1/24/26 9:10 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:31 PM, dart200 wrote:
    On 1/24/26 2:25 PM, Richard Damon wrote:
    On 1/24/26 3:56 PM, dart200 wrote:
    On 1/24/26 11:52 AM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>> On 1/24/26 1:33 PM, dart200 wrote:
    On 1/24/26 9:26 AM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>>> On 1/24/26 11:45 AM, dart200 wrote: >>>>>>>>>>>>>>>>>>>>>> On 1/24/26 4:17 AM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>>>>> On 1/24/26 3:03 AM, dart200 wrote: >>>>>>>>>>>>>>>>>>>>>>>> On 1/23/26 5:36 PM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>>>>>>> On 1/20/26 9:30 PM, dart200 wrote: >>>>>>>>>>>>>>>>>>>>>>>>>> On 1/20/26 4:59 AM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>>>>>>>
    The term *IS* defined, and to change it means >>>>>>>>>>>>>>>>>>>>>>>>>>> you lie.


    doubling down on definist fallacy ehh??? >>>>>>>>>>>>>>>>>>>>>>>>>
    I guess you don't understand the definist fallacy, as using the
    ACTUAL definition isn't a fallacy.

    nah ur just pushing a definition that u've >>>>>>>>>>>>>>>>>>>>>>>> happened latch onto because it's convenient for u >>>>>>>>>>>>>>>>>>>>>>>>
    classic definist fallacy


    Nope, you are just stuck in a definist fallacy >>>>>>>>>>>>>>>>>>>>>>> about what a definist fallacy is. >>>>>>>>>>>>>>>>>>>>>>>
    It seems you don't understand the concept that >>>>>>>>>>>>>>>>>>>>>>> some things ARE just defined a given way to be in >>>>>>>>>>>>>>>>>>>>>>> a given context.


    and u richard are not the god what that is >>>>>>>>>>>>>>>>>>>>>>

    But "the field" is, and thus you are just saying it >>>>>>>>>>>>>>>>>>>>> is ok to change the meaning of words. >>>>>>>>>>>>>>>>>>>>
    i don't believe u represent what "the field" is either >>>>>>>>>>>>>>>>>>>>

    The go to "the field" and see if they disagree. >>>>>>>>>>>>>>>>>>
    "the field" can come here if they like,

    but as it stands i'm going to call out any more "nOT >>>>>>>>>>>>>>>>>> mUh CoMpUTaTiOn" arguments as definist fallacy >>>>>>>>>>>>>>>>>>

    In other words, you are just admitting, you don't care
    what the words mean in the field, you will just
    continue to be a stupid and ignorant liar about what
    you are doing.

    i just don't care what YOU, richard, says "CoMpUTaTiOn" >>>>>>>>>>>>>>>> means. you aren't "the field" bro, and i just really >>>>>>>>>>>>>>>> dgaf about ur endless definist fallacy


    But apparently you do, as you aren't just going to
    present your ideas directly to "the field" in a
    peer-reviewed journal, so something is telling you that
    you have something to fix.

    or rather the peer-review is so gatekept i don't even get >>>>>>>>>>>>>> a review back for my submission, just rejection without >>>>>>>>>>>>>> review.

    the system is broken such that i will take my stance >>>>>>>>>>>>>> elsewhere.

    everyone else can ignore me at all our peril...


    Which just shows that you aren't in step with what theory
    actually is talking about.

    That is your problem, you assume the world is wrong, and
    more than likely it is you that is wrong.

    i'm not the one continually asserting a bunch of impossible to
    find teapots floating around in machine space

    No, you just keep asserting that you compute impossible to >>>>>>>>>>> compute results.

    while u just keep ignoring how i'm avoiding the pitfalls u use
    to claim impossibility


    No, you use an assumption that requires something proved
    impossible that you want to claim is possible because it might be.
    u haven't proven my proposed interfaces impossible because u
    haven't generated a contradiction with them

    But you haven't proven them possible either.

    I guess you don't understand Russell's teapot.



    Sorry, you need to actually SHOW how to do what you want to
    claim with actual realizable steps.

    And that means you need a COMPUTABLE method to generate your
    enumerations that you iterate through that is complete.

    i don't need to do that to undercut the proof

    Sure you do.

    Since your proof assumes a non-existent thing exists, it isn't a
    proof or an undercut.

    i'm showing it *can* exist with the possibility of self-referential
    set-classification paradoxes...

    No, you are trying to show that "if you assume you can do the
    impossible" then you can do the impossible.

    normally these proofs go (after we stop begging the question):

    "assume you can do x, x produces contradiction, and therefore x is
    impossible"

    what i'm trying to show:

    "assume you can do x, x ... doesn't produce a contradiction???
    therefore x *might* be possible"

    But that isn't sound logic, as x *might* have been possible without the

    see x was previously thought to be impossible due to a specific proof,
    but that proof evaporates when we frame the problem correctly, and so
    u have lost ur proof that x is impossible. that's really what i'm
    trying to get at here

    Nope, the assumption of the impossible just makes your proof unsound.

    that repeated presumption of supposed impossibility is founded on the
    proof that disappears when we frame the problem correctly,

    so like i've said a bunch of times: begging the question


    Your continuing to do that shows that YOU are unsound.

    You just don't understand how logic works.


    u can't cope with that so u'll just continue to deny. none of the rest
    of this gish gallop is worth my time responding to. it contains
    nothing that inspires me further because ur just repeating urself ad
    nauseam, mostly via insults

    Go ahead, deny truth, that just puts you into Peter's world of fantasy.

    A world where nothing, and everything is true, because truth has lost
    its meaning.



    assumption, and if you actually can't do x, all you have done is
    shown you use unsound logic.

    You need to understand how logic actually works, your argument is
    actually one of the real classical fallacies.



    To show you CAN do something, you need to demonstrate how to do it.

    no, i'm trying to move the needle from CANNOT do something to MIGHT
    do something, as that opens up the motivation for further research to
    reach CAN do something

    Which assuming something that you can't show doesn't do.


    disentangling the logical interface is a one man job. actually
    implementing is much greater than a one man job. and i still stand by
    that.

    All you are doing is disintegrating your reputation for doing logic.



    and u've lost ur proof it can't exist due to self-referential
    set-classification paradoxes, which is a major pillar of
    undecidability arguments.

    No, because your "Proof", doesn't prove anything as it is based on
    an unsound assumption.

    All you have done is proven you can make circular arguments.

    which is an improvement over the contradictions that previously were
    demonstrated

    Only for someone who can't do logic.


    being twice my age, u may be too old to ever understand the
    significance of such, but ur inability will not deter me

    All you are doing is proving you aren't as smart as you think you
    are, as you don't understand the basics of logic.

    Anyone who reads this argument will know better than to even think of
    supporting your work.




    i don't need to show that it *does* exist, i just need to show it
    *can* exist to make progress here

    Nope. Fallacy of assuming the conclusion. A REAL logical fallacy.



    Whatever the specific implementation of the interface returns, it
    will be wrong, by the specific implementation of the
    "pathological" program.

    That program has a definite result, so there *IS* a correct
    answer that the interface SHOULD have returned, but didn't.

    i have two proposals now which are you trying to critique? cause
    one of them doesn't involve any incorrect answers.

    Both of which are based on assuming the ability to compute the
    non-computable.



    Thus "Pathological" is NOT a correct response, as EVERY machine
    that we can make will either Halt or Not Halt. ITS behavior is
    definite.

    Your problem is you confuse the individual definite machines for
    the templates that generate them. But we aren't asking about the
    templates, only individual machines, as templates don't
    necessarily have a uniform answer. (Some halt, some don't,
    depending on which error in implementing the proposed interface
    was made). All we prove by that is that the interface is,
    in fact, unimplementable for FULL deciding.

    Since that IS the Halting Problem, it makes the proof.

    When you relax to just partial deciders, it is a well known
    solvable problem, where work just continues to improve what
    classes of inputs can be decided on, which is a quantitative
    problem, not a qualitative one.
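    the template/instance point above can be sketched in a few lines of
    pseudo-py (names are mine; `halts` stands in for any concrete
    implementation of the proposed decider interface):

```python
def make_pathological(halts):
    """Classic diagonal template: given a claimed halting decider
    halts(f) -> bool, build the program it must answer wrongly about."""
    def pathological():
        if halts(pathological):
            while True:          # decider said "halts" -> loop forever
                pass
        return None              # decider said "loops" -> halt at once
    return pathological

# Each concrete (wrong) implementation yields a concrete, definite machine.
# E.g. a candidate that always answers "loops" is refuted by its instance:
p = make_pathological(lambda f: False)
p()  # returns at once, i.e. p halts -- the "loops" answer was wrong
```

    every choice of `halts` gives a different, perfectly definite machine;
    it is the interface as a whole that has no full implementation.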




    And, the possibility of unknowable things hiding in machine
    space isn't as crazy as it might seem, as there are an
    infinite number of machines for them to hide with.

    i just love how godel convinced u to believe russel's teapot
    certainly exists

    He didn't. But Russell shows that claims we need to prove it
    doesn't are invalid.

    yes i don't need to prove ur ghosts don't exist. they don't
    because u can't even know about them

    Sure we know a bit about them, like they exist.

    bare assertion


    Your problem is you don't understand what you are talking about
    and assume you can make unfounded assumptions.

    gaslighting




    I have shown you the proof that unknowable things must exist.
    You claim they can't, but your only reasoning is based on there
    being something new that we don't know about that you can't
    actually prove.

    Which of those is a claim of the existence of a Russell's Teapot?
    The thing with a proof, or the things just assumed?





    If you want to break down a "broken" structure, you need to
    know enough about it to SHOW it is broken.

    Just assuming it is just shows that it is most likely YOU
    that is wrong.

    It is more that the system ignores that which tries to
    break it, because getting side tracked on false trails is
    too damaging.

    To me it seems more of a peril to accept your misguided ideas.

    The fact that you begin by trying to redefine things out of
    ignorance doesn't help your case.


















    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory on Tue Feb 3 21:33:40 2026
    From Newsgroup: comp.theory

    On 2/3/26 2:33 PM, dart200 wrote:
    On 2/3/26 4:20 AM, Richard Damon wrote:
    On 2/3/26 2:53 AM, dart200 wrote:
    On 2/2/26 3:44 PM, Richard Damon wrote:
    On 2/2/26 1:11 PM, dart200 wrote:
    On 2/1/26 4:33 AM, Richard Damon wrote:
    On 1/28/26 1:33 PM, dart200 wrote:
    On 1/28/26 4:34 AM, Richard Damon wrote:
    On 1/27/26 12:48 AM, dart200 wrote:
    On 1/26/26 12:50 PM, Richard Damon wrote:
    On 1/25/26 4:28 PM, dart200 wrote:
    On 1/25/26 1:14 PM, Richard Damon wrote:
    On 1/25/26 4:05 PM, dart200 wrote:
    On 1/25/26 10:20 AM, Richard Damon wrote:
    On 1/24/26 9:10 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:31 PM, dart200 wrote:
    On 1/24/26 2:25 PM, Richard Damon wrote:
    On 1/24/26 3:56 PM, dart200 wrote:
    On 1/24/26 11:52 AM, Richard Damon wrote:
    On 1/24/26 1:33 PM, dart200 wrote:
    On 1/24/26 9:26 AM, Richard Damon wrote:
    On 1/24/26 11:45 AM, dart200 wrote:
    On 1/24/26 4:17 AM, Richard Damon wrote:
    On 1/24/26 3:03 AM, dart200 wrote:
    On 1/23/26 5:36 PM, Richard Damon wrote:
    On 1/20/26 9:30 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:
    The term *IS* defined, and to change it means you lie.


    doubling down on definist fallacy ehh???
    I guess you don't understand the definist fallacy, as using the
    ACTUAL definition isn't a fallacy.

    nah ur just pushing a definition that u've happened to latch onto
    because it's convenient for u
    classic definist fallacy


    Nope, you are just stuck in a definist fallacy about what a
    definist fallacy is.
    It seems you don't understand the concept that some things ARE
    just defined a given way to be in a given context.


    and u richard are not the god of what that is

    But "the field" is, and thus you are just saying it is ok to
    change the meaning of words.
    i don't believe u represent what "the field" is either

    Then go to "the field" and see if they disagree.
    "the field" can come here if they like,

    but as it stands i'm going to call out any more "nOT mUh
    CoMpUTaTiOn" arguments as definist fallacy

    In other words, you are just admitting, you don't care what the
    words mean in the field, you will just continue to be a stupid
    and ignorant liar about what you are doing.

    i just don't care what YOU, richard, says "CoMpUTaTiOn" means.
    you aren't "the field" bro, and i just really dgaf about ur
    endless definist fallacy


    But apparently you do, as you aren't just going to present your
    ideas directly to "the field" in a peer-reviewed journal, so
    something is telling you that you have something to fix.

    or rather the peer-review is so gatekept i don't even get a
    review back for my submission, just rejection without review.

    the system is broken such that i will take my stance elsewhere.

    everyone else can ignore me at all our peril...


    Which just shows that you aren't in step with what theory
    actually is talking about.

    That is your problem, you assume the world is wrong, and more
    than likely it is you that is wrong.

    i'm not one continually asserting a bunch of impossible to find
    teapots floating around in machine space

    No, you just keep asserting that you compute impossible to
    compute results.

    while u just keep ignoring how i'm avoiding the pitfalls u use
    to claim impossibility


    No, you use an assumption that requires something proved
    impossible that you want to claim is possible because it might
    be.

    u haven't proven my proposed interfaces impossible because u
    haven't generated a contradiction with them

    But you haven't proven them possible either.

    I guess you don't understand Russell's teapot.



    Sorry, you need to actually SHOW how to do what you want to
    claim with actual realizable steps.

    And that means you need a COMPUTABLE method to generate your
    enumerations that you iterate through that is complete.

    i don't need to do that to undercut the proof

    Sure you do.

    Since your proof assumes a non-existent thing exists, it isn't a
    proof or an undercut.

    i'm showing it *can* exist with the possibility of
    self-referential set-classification paradoxes...

    No, you are trying to show that "if you assume you can do the
    impossible" then you can do the impossible.

    normally these proofs go (after we stop begging the question):

    "assume you can do x, x produces contradiction, and therefore x is
    impossible"

    what i'm trying to show:

    "assume you can do x, x ... doesn't produce a contradiction???
    therefore x *might* be possible"

    But that isn't sound logic, as x *might* have been possible without the
    see x was previously thought to be impossible due to a specific
    proof, but that proof evaporates when we frame the problem correctly,
    and so u have lost ur proof that x is impossible. that's really what
    i'm trying to get at here

    Nope, the assumption of the impossible just makes your proof unsound.

    that repeated presumption of supposed impossibility is founded on the
    proof that disappears when we frame the problem correctly,


    Nope, your problem is you don't know what you are talking about because
    you don't know what the words actually mean.

    so like i've said a bunch of times: begging the question

    Nope, YOU are the one "begging the question" since you don't even know
    what the question actually is.

    All you are doing is proving you are just unqualified to be considered
    for the research you want people to, for some crazy reason, pay you to
    do it.



    Your continuing to do that shows that YOU are unsound.

    You just don't understand how logic works.


    u can't cope with that so u'll just continue to deny. none of the
    rest of this gish gallop is worthy of my time responding to. it
    contains nothing that inspires me further because ur just repeating
    urself ad nauseam, mostly via insults

    Go ahead, deny truth, that just puts you into Peter's world of fantasy.

    A world where nothing, and everything is true, because truth has lost
    its meaning.



    assumption, and if you actually can't do x, all you have done is
    showed you use unsound logic.

    You need to understand how logic actually works, your argument is
    actually one of the real classical fallacies.



    To show you CAN do something, you need to demonstrate how to do it.

    no, i'm trying to move the needle from CANNOT do something to MIGHT
    do something, as that opens up the motivation for further research
    to reach CAN do something

    Which assuming something that you can't show doesn't do.


    disentangling the logical interface is a one man job. actually
    implementing is much greater than a one man job. and i still stand
    by that.

    All you are doing is disintegrating your reputation for doing logic.



    and u've lost ur proof it can't exist due to self-referential
    set-classification paradoxes, which is a major pillar of
    undecidability arguments.

    No, because your "Proof", doesn't prove anything as it is based on
    an unsound assumption.

    All you have done is proven you can make circular arguments.

    which is an improvement over the contradictions that previously
    were demonstrated

    Only for someone who can't do logic.


    being twice my age, u may be too old to ever understand the
    significance of such, but ur inability will not deter me

    All you are doing is proving you aren't as smart as you think you
    are, as you don't understand the basics of logic.

    Anyone who reads this argument will know better than to even think of
    supporting your work.




    i don't need to show that it *does* exist, i just need to show it
    *can* exist to make progress here

    Nope. Fallacy of assuming the conclusion. A REAL logical fallacy.



    Whatever the specific implementation of the interface returns,
    it will be wrong, by the specific implementation of the
    "pathological" program.

    That program has a definite result, so there *IS* a correct
    answer that the interface SHOULD have returned, but didn't.

    i have two proposals now which are you trying to critique? cause
    one of them doesn't involve any incorrect answers.

    Both of which are based on assuming the ability to compute the
    non-computable.



    Thus "Pathological" is NOT a correct response, as EVERY machine
    that we can make will either Halt or Not Halt. ITS behavior is
    definite.

    Your problem is you confuse the individual definite machines for
    the templates that generate them. But we aren't asking about the
    templates, only individual machines, as templates don't
    necessarily have a uniform answer. (Some halt, some don't,
    depending on which error in implementing the proposed interface
    was made). All we prove by that is that the interface is,
    in fact, unimplementable for FULL deciding.

    Since that IS the Halting Problem, it makes the proof.

    When you relax to just partial deciders, it is a well known
    solvable problem, where work just continues to improve what
    classes of inputs can be decided on, which is a quantitative
    problem, not a qualitative one.
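    a minimal partial decider along those lines (a sketch, assuming
    programs are modeled as deterministic step functions over hashable
    states; names are mine):

```python
def partial_halts(step, state, limit=100_000):
    """Partial halting decider for deterministic transition systems.
    step(state) returns the next state, or None when the machine halts.
    Returns True (halts), False (provably loops: a state repeated),
    or None (undecided within the step budget)."""
    seen = set()
    for _ in range(limit):
        if state is None:
            return True          # reached the halt state
        if state in seen:
            return False         # deterministic + repeated state => loops forever
        seen.add(state)
        state = step(state)
    return None                  # budget exhausted: no verdict either way

# countdown halts; a fixed point loops; an unbounded counter is undecided here
print(partial_halts(lambda n: None if n == 0 else n - 1, 5))  # True
print(partial_halts(lambda n: n, 7))                          # False
print(partial_halts(lambda n: n + 1, 0, limit=10))            # None
```

    widening the decided classes (loop invariants, ranking functions,
    and the like) is exactly the quantitative work described above.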




    And, the possibility of unknowable things hiding in machine
    space isn't as crazy as it might seem, as there are an
    infinite number of machines for them to hide with.

    i just love how godel convinced u to believe russel's teapot
    certainly exists

    He didn't. But Russell shows that claims we need to prove it
    doesn't are invalid.

    yes i don't need to prove ur ghosts don't exist. they don't
    because u can't even know about them

    Sure we know a bit about them, like they exist.

    bare assertion


    Your problem is you don't understand what you are talking about
    and assume you can make unfounded assumptions.

    gaslighting




    I have shown you the proof that unknowable things must exist.
    You claim they can't, but your only reasoning is based on there
    being something new that we don't know about that you can't
    actually prove.

    Which of those is a claim of the existence of a Russell's Teapot?
    The thing with a proof, or the things just assumed?





    If you want to break down a "broken" structure, you need to
    know enough about it to SHOW it is broken.

    Just assuming it is just shows that it is most likely YOU
    that is wrong.

    It is more that the system ignores that which tries to
    break it, because getting side tracked on false trails is
    too damaging.

    To me it seems more of a peril to accept your misguided ideas.

    The fact that you begin by trying to redefine things out of
    ignorance doesn't help your case.





















    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory on Wed Feb 4 07:30:44 2026
    From Newsgroup: comp.theory

    On 2/3/26 6:33 PM, Richard Damon wrote:
    On 2/3/26 2:33 PM, dart200 wrote:
    On 2/3/26 4:20 AM, Richard Damon wrote:
    On 2/3/26 2:53 AM, dart200 wrote:
    On 2/2/26 3:44 PM, Richard Damon wrote:
    On 2/2/26 1:11 PM, dart200 wrote:
    On 2/1/26 4:33 AM, Richard Damon wrote:
    On 1/28/26 1:33 PM, dart200 wrote:
    On 1/28/26 4:34 AM, Richard Damon wrote:
    On 1/27/26 12:48 AM, dart200 wrote:
    On 1/26/26 12:50 PM, Richard Damon wrote:
    On 1/25/26 4:28 PM, dart200 wrote:
    On 1/25/26 1:14 PM, Richard Damon wrote:
    On 1/25/26 4:05 PM, dart200 wrote:
    On 1/25/26 10:20 AM, Richard Damon wrote:
    On 1/24/26 9:10 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:31 PM, dart200 wrote:
    On 1/24/26 2:25 PM, Richard Damon wrote:
    On 1/24/26 3:56 PM, dart200 wrote:
    On 1/24/26 11:52 AM, Richard Damon wrote:
    On 1/24/26 1:33 PM, dart200 wrote:
    On 1/24/26 9:26 AM, Richard Damon wrote:
    On 1/24/26 11:45 AM, dart200 wrote:
    On 1/24/26 4:17 AM, Richard Damon wrote:
    On 1/24/26 3:03 AM, dart200 wrote:
    On 1/23/26 5:36 PM, Richard Damon wrote:
    On 1/20/26 9:30 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:
    The term *IS* defined, and to change it means you lie.


    doubling down on definist fallacy ehh???
    I guess you don't understand the definist fallacy, as using the
    ACTUAL definition isn't a fallacy.

    nah ur just pushing a definition that u've happened to latch onto
    because it's convenient for u
    classic definist fallacy


    Nope, you are just stuck in a definist fallacy about what a
    definist fallacy is.
    It seems you don't understand the concept that some things ARE
    just defined a given way to be in a given context.


    and u richard are not the god of what that is

    But "the field" is, and thus you are just saying it is ok to
    change the meaning of words.
    i don't believe u represent what "the field" is either

    Then go to "the field" and see if they disagree.
    "the field" can come here if they like,

    but as it stands i'm going to call out any more "nOT mUh
    CoMpUTaTiOn" arguments as definist fallacy

    In other words, you are just admitting, you don't care what the
    words mean in the field, you will just continue to be a stupid
    and ignorant liar about what you are doing.

    i just don't care what YOU, richard, says "CoMpUTaTiOn" means.
    you aren't "the field" bro, and i just really dgaf about ur
    endless definist fallacy

    But apparently you do, as you aren't just going to present your
    ideas directly to "the field" in a peer-reviewed journal, so
    something is telling you that you have something to fix.
    or rather the peer-review is so gatekept i don't even get a
    review back for my submission, just rejection without review.

    the system is broken such that i will take my stance elsewhere.

    everyone else can ignore me at all our peril...

    Which just shows that you aren't in step with what theory
    actually is talking about.

    That is your problem, you assume the world is wrong, and more
    than likely it is you that is wrong.

    i'm not one continually asserting a bunch of impossible to find
    teapots floating around in machine space

    No, you just keep asserting that you compute impossible to
    compute results.

    while u just keep ignoring how i'm avoiding the pitfalls u use
    to claim impossibility


    No, you use an assumption that requires something proved
    impossible that you want to claim is possible because it might
    be.

    u haven't proven my proposed interfaces impossible because u
    haven't generated a contradiction with them

    But you haven't proven them possible either.

    I guess you don't understand Russell's teapot.



    Sorry, you need to actually SHOW how to do what you want to
    claim with actual realizable steps.

    And that means you need a COMPUTABLE method to generate your
    enumerations that you iterate through that is complete.

    i don't need to do that to undercut the proof

    Sure you do.

    Since your proof assumes a non-existent thing exists, it isn't a
    proof or an undercut.

    i'm showing it *can* exist with the possibility of
    self-referential set-classification paradoxes...

    No, you are trying to show that "if you assume you can do the
    impossible" then you can do the impossible.

    normally these proofs go (after we stop begging the question):

    "assume you can do x, x produces contradiction, and therefore x is
    impossible"

    what i'm trying to show:

    "assume you can do x, x ... doesn't produce a contradiction???
    therefore x *might* be possible"

    But that isn't sound logic, as x *might* have been possible without the

    see x was previously thought to be impossible due to a specific
    proof, but that proof evaporates when we frame the problem
    correctly, and so u have lost ur proof that x is impossible. that's
    really what i'm trying to get at here

    Nope, the assumption of the impossible just makes your proof unsound.

    that repeated presumption of supposed impossibility is founded on the
    proof that disappears when we frame the problem correctly,


    Nope, your problem is you don't know what you are talking about because
    you don't know what the words actually mean.

    so like i've said a bunch of times: begging the question

    Nope, YOU are the one "begging the question" since you don't even know
    what the question actually is.

    All you are doing is proving you are just unqualified to be considered
    for the research you want people to, for some crazy reason, pay you to
    do it.

    clearly ur just willfully disregarding whatever i say for repeatedly
    asserting that i'm dumb and ur right

    kinda sad to see a 70 yo chief engineer stoop to that level of arguing
    on the internet. one would think shit posting on usenet for decades
    would have taught u better, but i suppose intent and just basic moral
    decency matters as much as time spent




    Your continuing to do that shows that YOU are unsound.

    You just don't understand how logic works.


    u can't cope with that so u'll just continue to deny. none of the
    rest of this gish gallop is worthy of my time responding to. it
    contains nothing that inspires me further because ur just repeating
    urself ad nauseam, mostly via insults

    Go ahead, deny truth, that just puts you into Peter's world of fantasy.

    A world where nothing, and everything is true, because truth has lost
    its meaning.



    assumption, and if you actually can't do x, all you have done is
    showed you use unsound logic.

    You need to understand how logic actually works, your argument is
    actually one of the real classical fallacies.



    To show you CAN do something, you need to demonstrate how to do it.

    no, i'm trying to move the needle from CANNOT do something to
    MIGHT do something, as that opens up the motivation for further
    research to reach CAN do something

    Which assuming something that you can't show doesn't do.


    disentangling the logical interface is a one man job. actually
    implementing is much greater than a one man job. and i still stand
    by that.

    All you are doing is disintegrating your reputation for doing logic.


    and u've lost ur proof it can't exist due to self-referential
    set-classification paradoxes, which is a major pillar of
    undecidability arguments.

    No, because your "Proof", doesn't prove anything as it is based
    on an unsound assumption.

    All you have done is proven you can make circular arguments.

    which is an improvement over the contradictions that previously
    were demonstrated

    Only for someone who can't do logic.


    being twice my age, u may be too old to ever understand the
    significance of such, but ur inability will not deter me

    All you are doing is proving you aren't as smart as you think you
    are, as you don't understand the basics of logic.

    Anyone who reads this argument will know better than to even think of
    supporting your work.




    i don't need to show that it *does* exist, i just need to show
    it *can* exist to make progress here

    Nope. Fallacy of assuming the conclusion. A REAL logical fallacy.


    Whatever the specific implementation of the interface returns,
    it will be wrong, by the specific implementation of the
    "pathological" program.

    That program has a definite result, so there *IS* a correct
    answer that the interface SHOULD have returned, but didn't.

    i have two proposals now which are you trying to critique? cause
    one of them doesn't involve any incorrect answers.

    Both of which are based on assuming the ability to compute the
    non-computable.



    Thus "Pathological" is NOT a correct response, as EVERY machine
    that we can make will either Halt or Not Halt. ITS behavior is
    definite.

    Your problem is you confuse the individual definite machines
    for the templates that generate them. But we aren't asking
    about the templates, only individual machines, as templates
    don't necessarily have a uniform answer. (Some halt, some
    don't, depending on which error in implementing the proposed
    interface was made). All we prove by that is that the
    interface is, in fact, unimplementable for FULL deciding.

    Since that IS the Halting Problem, it makes the proof.

    When you relax to just partial deciders, it is a well known
    solvable problem, where work just continues to improve what
    classes of inputs can be decided on, which is a quantitative
    problem, not a qualitative one.




    And, the possibility of unknowable things hiding in machine
    space isn't as crazy as it might seem, as there are an
    infinite number of machines for them to hide with.

    i just love how godel convinced u to believe russel's teapot
    certainly exists

    He didn't. But Russell shows that claims we need to prove it
    doesn't are invalid.

    yes i don't need to prove ur ghosts don't exist. they don't
    because u can't even know about them

    Sure we know a bit about them, like they exist.

    bare assertion


    Your problem is you don't understand what you are talking about
    and assume you can make unfounded assumptions.

    gaslighting




    I have shown you the proof that unknowable things must exist.
    You claim they can't, but your only reasoning is based on there
    being something new that we don't know about that you can't
    actually prove.

    Which of those is a claim of the existence of a Russell's Teapot?
    The thing with a proof, or the things just assumed?





    If you want to break down a "broken" structure, you need to
    know enough about it to SHOW it is broken.

    Just assuming it is just shows that it is most likely YOU
    that is wrong.

    It is more that the system ignores that which tries to
    break it, because getting side tracked on false trails is
    too damaging.

    To me it seems more of a peril to accept your misguided ideas.

    The fact that you begin by trying to redefine things out of
    ignorance doesn't help your case.





















    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory on Wed Feb 4 21:29:57 2026
    From Newsgroup: comp.theory

    On 2/4/26 10:30 AM, dart200 wrote:

    clearly ur just willfully disregarding whatever i say for repeatedly asserting that i'm dumb and ur right

    Clearly, the problem is you don't understand how logic works, and ignore
    its basic rules.


    kinda sad to see a 70 yo chief engineer stoop to that level of arguing
    on the internet. one would think shit posting on usenet for decades
    would have taught u better, but i suppose intent and just basic moral decency matters as much as time spent

    You're the one posting "shit", because you just refuse to understand the
    rules of logic.

    I am just pointing out your ignorance.

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory on Wed Feb 4 18:41:24 2026
    From Newsgroup: comp.theory

    On 2/4/26 6:29 PM, Richard Damon wrote:
    On 2/4/26 10:30 AM, dart200 wrote:

    clearly ur just willfully disregarding whatever i say for repeatedly
    asserting that i'm dumb and ur right

    Clearly, the problem is you don't understand how logic works, and ignore
    its basic rules.

    and clearly ur just willfully disregarding



    kinda sad to see a 70 yo chief engineer stoop to that level of arguing
    on the internet. one would think shit posting on usenet for decades
    would have taught u better, but i suppose intent and just basic moral
    decency matters as much as time spent

    You're the one posting "shit", because you just refuse to understand the
    rules of logic.

    I am just pointing out your ignorance.


    talk about shitposting
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory on Thu Feb 5 07:13:50 2026
    From Newsgroup: comp.theory

    On 2/4/26 9:41 PM, dart200 wrote:
    On 2/4/26 6:29 PM, Richard Damon wrote:
    On 2/4/26 10:30 AM, dart200 wrote:

    clearly ur just willfully disregarding whatever i say for repeatedly
    asserting that i'm dumb and ur right

    Clearly, the problem is you don't understand how logic works, and
    ignore its basic rules.

    and clearly ur just willfully disregarding

    Nope. You just refuse to understand the words being used.




    kinda sad to see a 70 yo chief engineer stoop to that level of
    arguing on the internet. one would think shit posting on usenet for
    decades would have taught u better, but i suppose intent and just
    basic moral decency matters as much as time spent

    You're the one posting "shit", because you just refuse to understand the
    rules of logic.

    I am just pointing out your ignorance.


    talk about shitposting



    No, the only "shitposting" is being done by the one that doesn't know
    shit, which is you.

    It seems you don't know enough of logic to have a filter to remove the
    shit from what you take in, so you have poisoned your mind.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory on Thu Feb 5 10:05:27 2026
    From Newsgroup: comp.theory

    On 2/5/26 4:13 AM, Richard Damon wrote:
    On 2/4/26 9:41 PM, dart200 wrote:
    On 2/4/26 6:29 PM, Richard Damon wrote:
    On 2/4/26 10:30 AM, dart200 wrote:

    clearly ur just willfully disregarding whatever i say for repeatedly
    asserting that i'm dumb and ur right

    Clearly, the problem is you don't understand how logic works, and
    ignore its basic rules.

    and clearly ur just willfully disregarding

    Nope. You just refuse to understand the words being used.

    and u just refuse to understand the concepts i'm responding with





    kinda sad to see a 70 yo chief engineer stoop to that level of
    arguing on the internet. one would think shit posting on usenet for
    decades would have taught u better, but i suppose intent and just
    basic moral decency matters as much as time spent

    You're the one posting "shit", because you just refuse to understand
    the rules of logic.

    I am just pointing out your ignorance.


    talk about shitposting



    No, the only "shitposting" is being done by the one that doesn't know
    shit, which is you.

    It seems you don't know enough of logic to have a filter to remove the
    shit from what you take in, so you have poisoned your mind.

    gaslighting
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory on Thu Feb 5 19:40:22 2026
    From Newsgroup: comp.theory

    On 2/5/26 1:05 PM, dart200 wrote:
    On 2/5/26 4:13 AM, Richard Damon wrote:
    On 2/4/26 9:41 PM, dart200 wrote:
    On 2/4/26 6:29 PM, Richard Damon wrote:
    On 2/4/26 10:30 AM, dart200 wrote:

    clearly ur just willfully disregarding whatever i say for
    repeatedly asserting that i'm dumb and ur right

    Clearly, the problem is you don't understand how logic works, and
    ignore its basic rules.

    and clearly ur just willfully disregarding

    Nope. You just refuse to understand the words being used.

    and u just refuse to understand the concepts i'm responding with

    Because they are just based on nonsense and illogic.

    There is NOTHING TO "understand", as they are based on being allowed to
    assume the impossible can happen.






    kinda sad to see a 70 yo chief engineer stoop to that level of
    arguing on the internet. one would think shit posting on usenet for
    decades would have taught u better, but i suppose intent and just
    basic moral decency matters as much as time spent

    You're the one posting "shit", because you just refuse to understand
    the rules of logic.

    I am just pointing out your ignorance.


    talk about shitposting



    No, the only "shitposting" is being done by the one that doesn't know
    shit, which is you.

    It seems you don't know enough of logic to have a filter to remove the
    shit from what you take in, so you have poisoned your mind.

    gaslighting


    No, your Stupidity.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory on Thu Feb 5 18:47:54 2026
    From Newsgroup: comp.theory

    On 2/5/26 4:40 PM, Richard Damon wrote:
    On 2/5/26 1:05 PM, dart200 wrote:
    On 2/5/26 4:13 AM, Richard Damon wrote:
    On 2/4/26 9:41 PM, dart200 wrote:
    On 2/4/26 6:29 PM, Richard Damon wrote:
    On 2/4/26 10:30 AM, dart200 wrote:

    clearly ur just willfully disregarding whatever i say for
    repeatedly asserting that i'm dumb and ur right

    Clearly, the problem is you don't understand how logic works, and
    ignore its basic rules.

    and clearly ur just willfully disregarding

    Nope. You just refuse to understand the words being used.

    and u just refuse to understand the concepts i'm responding with

    Because they are just based on nonsense and illogic.

    There is NOTHING TO "understand", as they are based on being allowed to
    assume the impossible can happen.

    i'm not assuming it is, i'm supposing it and then showing that the proof
    of impossibility disappears when the problem is framed correctly. that's
    the kind of insight that should matter, but ur quite clearly not the
    right person to receive it at this time







    kinda sad to see a 70 yo chief engineer stoop to that level of
    arguing on the internet. one would think shit posting on usenet
    for decades would have taught u better, but i suppose intent and
    just basic moral decency matters as much as time spent

    You're the one posting "shit", because you just refuse to understand
    the rules of logic.

    I am just pointing out your ignorance.


    talk about shitposting



    No, the only "shitposting" is being done by the one that doesn't know
    shit, which is you.

    It seems you don't know enough of logic to have a filter to remove
    the shit from what you take in, so you have poisoned your mind.

    gaslighting


    No, your Stupidity.

    gaslighting
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory on Fri Feb 6 09:50:06 2026
    From Newsgroup: comp.theory

    On 2/5/26 9:47 PM, dart200 wrote:
    On 2/5/26 4:40 PM, Richard Damon wrote:
    On 2/5/26 1:05 PM, dart200 wrote:
    On 2/5/26 4:13 AM, Richard Damon wrote:
    On 2/4/26 9:41 PM, dart200 wrote:
    On 2/4/26 6:29 PM, Richard Damon wrote:
    On 2/4/26 10:30 AM, dart200 wrote:

    clearly ur just willfully disregarding whatever i say for
    repeatedly asserting that i'm dumb and ur right

    Clearly, the problem is you don't understand how logic works, and
    ignore its basic rules.

    and clearly ur just willfully disregarding

    Nope. You just refuse to understand the words being used.

    and u just refuse to understand the concepts i'm responding with

    Because they are just based on nonsense and illogic.

    There is NOTHING TO "understand", as they are based on being allowed
    to assume the impossible can happen.

    i'm not assuming it is, i'm supposing it and then showing that the proof
    of impossibility disappears when the problem is framed correctly. that's
    the kind of insight that should matter, but ur quite clearly not the
    right person to receive it at this time

    What is supposing, other than an unwarranted assumption?

    You can't assume the existence of something to prove that it exists.

    You just don't understand how logic works.

    The Impossibility didn't disappear; you are just closing your eyes in
    ignorance, saying you don't believe the truth, so you will just lie to
    yourself.








    kinda sad to see a 70 yo chief engineer stoop to that level of
    arguing on the internet. one would think shit posting on usenet
    for decades would have taught u better, but i suppose intent and
    just basic moral decency matters as much as time spent

    You're the one posting "shit", because you just refuse to understand
    the rules of logic.

    I am just pointing out your ignorance.


    talk about shitposting



    No, the only "shitposting" is being done by the one that doesn't
    know shit, which is you.

    It seems you don't know enough of logic to have a filter to remove
    the shit from what you take in, so you have poisoned your mind.

    gaslighting


    No, your Stupidity.

    gaslighting



    Yes, by you to yourself.

    It seems you are as stupid as Olcott.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory on Fri Feb 6 10:23:07 2026
    From Newsgroup: comp.theory

    On 2/6/26 6:50 AM, Richard Damon wrote:
    On 2/5/26 9:47 PM, dart200 wrote:
    On 2/5/26 4:40 PM, Richard Damon wrote:
    On 2/5/26 1:05 PM, dart200 wrote:
    On 2/5/26 4:13 AM, Richard Damon wrote:
    On 2/4/26 9:41 PM, dart200 wrote:
    On 2/4/26 6:29 PM, Richard Damon wrote:
    On 2/4/26 10:30 AM, dart200 wrote:

    clearly ur just willfully disregarding whatever i say for
    repeatedly asserting that i'm dumb and ur right

    Clearly, the problem is you don't understand how logic works, and
    ignore its basic rules.

    and clearly ur just willfully disregarding

    Nope. You just refuse to understand the words being used.

    and u just refuse to understand the concepts i'm responding with

    Because they are just based on nonsense and illogic.

    There is NOTHING TO "understand", as they are based on being allowed
    to assume the impossible can happen.

    i'm not assuming it is, i'm supposing it and then showing that the
    proof of impossibility disappears when the problem is framed
    correctly. that's the kind of insight that should matter, but ur quite
    clearly not the right person to receive it at this time

    What is supposing, other than an unwarranted assumption?

    You can't assume the existence of something to prove that it exists.

    that's not what i'm proving smh


    You just don't understand how logic works.

    and u don't know what a strawman fallacy is


    The Impossibility didn't disappear; you are just closing your eyes in
    ignorance, saying you don't believe the truth, so you will just lie to
    yourself.








    kinda sad to see a 70 yo chief engineer stoop to that level of
    arguing on the internet. one would think shit posting on usenet
    for decades would have taught u better, but i suppose intent and
    just basic moral decency matters as much as time spent

    You're the one posting "shit", because you just refuse to
    understand the rules of logic.

    I am just pointing out your ignorance.


    talk about shitposting



    No, the only "shitposting" is being done by the one that doesn't
    know shit, which is you.

    It seems you don't know enough of logic to have a filter to remove
    the shit from what you take in, so you have poisoned your mind.

    gaslighting


    No, your Stupidity.

    gaslighting



    Yes, by you to yourself.

    It seems you are as stupid as Olcott.
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory on Fri Feb 6 18:18:15 2026
    From Newsgroup: comp.theory

    On 2/6/26 1:23 PM, dart200 wrote:
    On 2/6/26 6:50 AM, Richard Damon wrote:
    On 2/5/26 9:47 PM, dart200 wrote:
    On 2/5/26 4:40 PM, Richard Damon wrote:
    On 2/5/26 1:05 PM, dart200 wrote:
    On 2/5/26 4:13 AM, Richard Damon wrote:
    On 2/4/26 9:41 PM, dart200 wrote:
    On 2/4/26 6:29 PM, Richard Damon wrote:
    On 2/4/26 10:30 AM, dart200 wrote:

    clearly ur just willfully disregarding whatever i say for
    repeatedly asserting that i'm dumb and ur right

    Clearly, the problem is you don't understand how logic works,
    and ignore its basic rules.

    and clearly ur just willfully disregarding

    Nope. You just refuse to understand the words being used.

    and u just refuse to understand the concepts i'm responding with

    Because they are just based on nonsense and illogic.

    There is NOTHING TO "understand", as they are based on being allowed
    to assume the impossible can happen.

    i'm not assuming it is, i'm supposing it and then showing that the
    proof of impossibility disappears when the problem is framed
    correctly. that's the kind of insight that should matter, but ur
    quite clearly not the right person to receive it at this time

    What is supposing, other than an unwarranted assumption?

    You can't assume the existence of something to prove that it exists.

    that's not what i'm proving smh

    You are not proving ANYTHING, as you start from false assumptions.



    You just don't understand how logic works.

    and u don't know what a strawman fallacy is

    It seems you don't know what logic is.



    The Impossibility didn't disappear; you are just closing your eyes in
    ignorance, saying you don't believe the truth, so you will just lie to
    yourself.








    kinda sad to see a 70 yo chief engineer stoop to that level of
    arguing on the internet. one would think shit posting on usenet
    for decades would have taught u better, but i suppose intent
    and just basic moral decency matters as much as time spent

    You're the one posting "shit", because you just refuse to
    understand the rules of logic.

    I am just pointing out your ignorance.


    talk about shitposting



    No, the only "shitposting" is being done by the one that doesn't
    know shit, which is you.

    It seems you don't know enough of logic to have a filter to remove
    the shit from what you take in, so you have poisoned your mind.

    gaslighting


    No, your Stupidity.

    gaslighting



    Yes, by you to yourself.

    It seems you are as stupid as Olcott.



    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory on Fri Feb 6 16:17:00 2026
    From Newsgroup: comp.theory

    On 2/6/26 3:18 PM, Richard Damon wrote:
    On 2/6/26 1:23 PM, dart200 wrote:
    On 2/6/26 6:50 AM, Richard Damon wrote:
    On 2/5/26 9:47 PM, dart200 wrote:
    On 2/5/26 4:40 PM, Richard Damon wrote:
    On 2/5/26 1:05 PM, dart200 wrote:
    On 2/5/26 4:13 AM, Richard Damon wrote:
    On 2/4/26 9:41 PM, dart200 wrote:
    On 2/4/26 6:29 PM, Richard Damon wrote:
    On 2/4/26 10:30 AM, dart200 wrote:

    clearly ur just willfully disregarding whatever i say for
    repeatedly asserting that i'm dumb and ur right

    Clearly, the problem is you don't understand how logic works,
    and ignore its basic rules.

    and clearly ur just willfully disregarding

    Nope. You just refuse to understand the words being used.

    and u just refuse to understand the concepts i'm responding with

    Because they are just based on nonsense and illogic.

    There is NOTHING TO "understand", as they are based on being
    allowed to assume the impossible can happen.

    i'm not assuming it is, i'm supposing it and then showing that the
    proof of impossibility disappears when the problem is framed
    correctly. that's the kind of insight that should matter, but ur
    quite clearly not the right person to receive it at this time

    What is supposing, other than an unwarranted assumption?

    You can't assume the existence of something to prove that it exists.

    that's not what i'm proving smh

    You are not proving ANYTHING, as you start from false assumptions.

    can u even state what i'm *trying* to prove???




    You just don't understand how logic works.

    and u don't know what a strawman fallacy is

    It seems you don't know what logic is.



    The Impossibility didn't disappear; you are just closing your eyes in
    ignorance, saying you don't believe the truth, so you will just lie to
    yourself.








    kinda sad to see a 70 yo chief engineer stoop to that level of
    arguing on the internet. one would think shit posting on
    usenet for decades would have taught u better, but i suppose
    intent and just basic moral decency matters as much as time spent
    You're the one posting "shit", because you just refuse to
    understand the rules of logic.

    I am just pointing out your ignorance.


    talk about shitposting



    No, the only "shitposting" is being done by the one that doesn't
    know shit, which is you.

    It seems you don't know enough of logic to have a filter to
    remove the shit from what you take in, so you have poisoned your
    mind.

    gaslighting


    No, your Stupidity.

    gaslighting



    Yes, by you to yourself.

    It seems you are as stupid as Olcott.



    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory on Fri Feb 6 19:26:15 2026
    From Newsgroup: comp.theory

    On 2/6/26 7:17 PM, dart200 wrote:
    On 2/6/26 3:18 PM, Richard Damon wrote:
    On 2/6/26 1:23 PM, dart200 wrote:
    On 2/6/26 6:50 AM, Richard Damon wrote:
    On 2/5/26 9:47 PM, dart200 wrote:
    On 2/5/26 4:40 PM, Richard Damon wrote:
    On 2/5/26 1:05 PM, dart200 wrote:
    On 2/5/26 4:13 AM, Richard Damon wrote:
    On 2/4/26 9:41 PM, dart200 wrote:
    On 2/4/26 6:29 PM, Richard Damon wrote:
    On 2/4/26 10:30 AM, dart200 wrote:

    clearly ur just willfully disregarding whatever i say for
    repeatedly asserting that i'm dumb and ur right

    Clearly, the problem is you don't understand how logic works,
    and ignore its basic rules.

    and clearly ur just willfully disregarding

    Nope. You just refuse to understand the words being used.

    and u just refuse to understand the concepts i'm responding with

    Because they are just based on nonsense and illogic.

    There is NOTHING TO "understand", as they are based on being
    allowed to assume the impossible can happen.

    i'm not assuming it is, i'm supposing it and then showing that the
    proof of impossibility disappears when the problem is framed
    correctly. that's the kind of insight that should matter, but ur
    quite clearly not the right person to receive it at this time

    What is supposing, other than an unwarranted assumption?

    You can't assume the existence of something to prove that it exists.

    that's not what i'm proving smh

    You are not proving ANYTHING, as you start from false assumptions.

    can u even state what i'm *trying* to prove???

    You are trying to prove that which has been proven to be impossible
    might actually be possible.

    But, since your proof requires the unsound step of assuming something you
    cannot prove, you can't actually reach a conclusion.

    At best you have proven that "If A can be done, then A can be done",
    which doesn't establish that it might be possible for A to be done if it
    has already been proven that it can't be.





    You just don't understand how logic works.

    and u don't know what a strawman fallacy is

    It seems you don't know what logic is.



    The Impossibility didn't disappear; you are just closing your eyes
    in ignorance, saying you don't believe the truth, so you will just
    lie to yourself.








    kinda sad to see a 70 yo chief engineer stoop to that level
    of arguing on the internet. one would think shit posting on
    usenet for decades would have taught u better, but i suppose
    intent and just basic moral decency matters as much as time
    spent

    Your the one posting "shit", because you just refuse to
    understand the rules of logic.

    I am just pointing yout your ignornace.


    talk about shitposting



    No, the only "shitposting" is being done by the one that doesn't
    know shit, which is you.

    It seems you don't know enough of logic to have a filter to
    remove the shit from what you take in, so you have poisoned your
    mind.

    gaslighting


    No, your Stupidity.

    gaslighting



    Yes, by you to yourself.

    It seems you are as stupid as Olcott.





    --- Synchronet 3.21b-Linux NewsLink 1.2