• Re: The halting problem proof question is incorrect

    From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.theory on Thu Oct 9 16:52:18 2025
    From Newsgroup: comp.theory

    This message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations as noted. All Rights Reserved except as stated
    in the message sig.

    On 26/09/2025 19:15, olcott wrote:

    In computability theory, the halting problem is the problem of
    determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run
    forever. https://en.wikipedia.org/wiki/Halting_problem


    Not exactly. As stated, the nature of the machine is left unspecified,
    and a physical machine could be said to transition from some states back
    to themselves rather than halting, because physical machines vary across
    time in a way that relations among objects of formal systems do not.
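
    To illustrate what I mean, here is a minimal sketch (my own toy example,
    with illustrative names, not code from any referenced post): a formal
    halt state shows up, physically, as a state that maps to itself, so the
    run never ends, it merely stops changing, and finite observation of
    constancy is not the same thing as halting.

    #include <stdio.h>

    /* One physical "tick": 0 -> 1 -> 2 -> 2 -> 2 -> ...
     * State 2 is a fixed point of the step function, not an end of the run. */
    static int step(int state)
    {
        return state < 2 ? state + 1 : state;
    }

    int main(void)
    {
        int state = 0;
        for (int tick = 0; tick < 6; tick++) {  /* the run itself has no last tick */
            state = step(state);
            printf("tick %d: state=%d\n", tick, state);
        }
        return 0;
    }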

    I think you need to include a restriction on the machines to those that
    have an actually irrevocable "exhaustion" signal strictly correlated
    with state-constancy. Perhaps it's not enough even then, when you want to
    couch the problem in terms of physical machines, and perhaps it's not
    useful because of the inseparability of the machine from its generalised
    environment, "the universe", where we must talk of mere persistent
    structures (which is subjective anyway).
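
    A rough sketch of the restriction I have in mind (names are mine and
    purely illustrative): the "exhaustion" signal can never be revoked once
    raised, and it is raised exactly when the state can no longer change.

    #include <assert.h>
    #include <stdio.h>

    typedef struct {
        int state;
        int exhausted;   /* once 1, stays 1 forever */
    } Machine;

    static void step(Machine *m)
    {
        int prev_state = m->state;
        int prev_exhausted = m->exhausted;

        if (!m->exhausted) {
            if (m->state < 3)
                m->state++;        /* still making progress */
            else
                m->exhausted = 1;  /* no further change is possible */
        }

        /* The two properties being required: exhaustion is irrevocable,
         * and an exhausted machine's state is constant. */
        assert(!(prev_exhausted && !m->exhausted));
        assert(!prev_exhausted || m->state == prev_state);
    }

    int main(void)
    {
        Machine m = {0, 0};
        for (int tick = 0; tick < 8; tick++) {
            step(&m);
            printf("tick %d: state=%d exhausted=%d\n", tick, m.state, m.exhausted);
        }
        return 0;
    }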

    Perhaps it's best to discuss computability only in terms of formal
    systems containing objects, some having ultimate definientia and some
    not, thereby leaving physical computers for discussions of the
    enforcement of local structure on the universe ("security"). That is
    much easier, and agreeing on anything about it isn't rendered impossible
    by the absence of a certainly accurate physical theory. About formal
    systems we can, to some extent, discuss relations that are closely
    constrained to the discrete by a very complex common brain structure and
    decades of common linguistic conditioning. That holds even though there
    are obviously similar limitations, just as there are with discussion of
    mechanical computers, because a quorum of logicians is the computer: we
    perceive them as unreliable, distractible, and bounded, and then exclude
    all considerations of disobedience and of persistence of physical
    structure from our discourse.

    There are interesting further problems that become apparent with the
    formal-systems approach which I haven't seen discussed before, and which
    I'm too much of an amateur to describe here for fear of embarrassing
    myself.

    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory on Thu Oct 9 10:57:08 2025
    From Newsgroup: comp.theory

    On 10/9/2025 10:52 AM, Tristan Wibberley wrote:
    This message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations as noted. All Rights Reserved except as stated
    in the message sig.

    On 26/09/2025 19:15, olcott wrote:

    In computability theory, the halting problem is the problem of
    determining, from a description of an arbitrary computer program and an
    input, whether the program will finish running, or continue to run
    forever. https://en.wikipedia.org/wiki/Halting_problem


    Not exactly. As stated, the nature of the machine is left unspecified,
    and a physical machine could be said to transition from some states back
    to themselves rather than halting, because physical machines vary across
    time in a way that relations among objects of formal systems do not.



    See my new post
    [Halting problem proof converted to Liar Paradox
    --- 2004 post converted to C]
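
    For context, the standard textbook construction that the title plays on
    looks roughly like the sketch below in C. This is only my illustration
    of the usual diagonal argument, not the code from that post; H here is a
    stand-in "decider" whose fixed answer is enough to exhibit the
    contradiction.

    #include <stdio.h>

    typedef void (*Program)(void);

    /* Stand-in decider: supposed to return 1 if p() halts, 0 if not.
     * Here it just always answers 0 ("does not halt"). */
    static int H(Program p)
    {
        (void)p;
        return 0;
    }

    /* D does the opposite of whatever H predicts about D itself. */
    static void D(void)
    {
        if (H(D))
            for (;;) ;   /* H said "halts", so loop forever */
        /* H said "does not halt", so fall through and halt */
    }

    int main(void)
    {
        D();   /* D returns, i.e. it halts, contradicting H's answer of 0 */
        printf("H(D) = %d, yet D halted\n", H(D));
        return 0;
    }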

    --
    Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.theory on Fri Oct 10 14:32:27 2025
    From Newsgroup: comp.theory

    This message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations as noted. All Rights Reserved except as stated
    in the message sig.

    On 09/10/2025 16:57, olcott wrote:
    On 10/9/2025 10:52 AM, Tristan Wibberley wrote:

    See my new post

    [Halting problem proof converted to Liar Paradox
    --- 2004 post converted to C]


    I've seen it. Do you think it alleviates my concerns about the
    characterisation of the problem that I replied to?
    --
    Tristan Wibberley

    The message body is Copyright (C) 2025 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --- Synchronet 3.21a-Linux NewsLink 1.2