• Algol 68 / Genie - opinions on local procedures?

    From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Mon Aug 18 04:52:24 2025
    From Newsgroup: comp.lang.misc

    In a library source for rational numbers I'm using a GCD function
    to normalize the rational numbers. This function is used from other
    rational operations regularly since all numbers are always stored
    in their normalized form.

    I defined the 'PROC rat_gcd' in global space but with Algol 68 it
    could also be defined locally (to not pollute the global namespace)
    like

    ELSE # normalize #
        PROC rat_gcd = ... ;

        INT nom = ABS a, den = ABS b;
        INT sign = SIGN a * SIGN b;
        INT q = rat_gcd (nom, den);
        ( sign * nom OVER q, den OVER q )
    FI

    though performance measurements showed some noticeable degradation
    with a local function definition as depicted.
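    For readers without Algol 68 at hand, the normalization the snippet
    performs can be sketched in Python (`make_rat` and its names are mine,
    mirroring the snippet; `math.gcd` stands in for the elided `rat_gcd`):

```python
from math import gcd

def make_rat(a: int, b: int) -> tuple[int, int]:
    """Return (numerator, denominator) in normalized form, as in the
    Algol 68 snippet: sign carried by the numerator, gcd divided out."""
    if b == 0:
        raise ZeroDivisionError("denominator must be nonzero")
    nom, den = abs(a), abs(b)                      # INT nom = ABS a, den = ABS b
    sign = (a > 0) - (a < 0)                       # SIGN a
    sign *= (b > 0) - (b < 0)                      # ... * SIGN b
    q = gcd(nom, den)                              # rat_gcd (nom, den)
    return (sign * nom // q, den // q)             # OVER is integer division
```

    For example, `make_rat(-4, 6)` yields `(-2, 3)`, and `make_rat(0, 5)`
    yields `(0, 1)`, since `gcd(0, den) = den`.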

    I'd prefer it to be local but since it's ubiquitously used in that
    library the performance degradation (about 15% on avg) annoys me.
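    The question is about Algol 68 Genie, but the same shape of cost shows
    up in any interpreter that materializes a routine each time its
    enclosing scope is entered. A rough CPython analogue (purely
    illustrative, not Genie's mechanism; all names are mine):

```python
import timeit

def gcd(a: int, b: int) -> int:
    """Euclid's algorithm, defined once at module ("global") scope."""
    while b:
        a, b = b, a % b
    return a

def use_global(a: int, b: int) -> int:
    return gcd(a, b)

def use_local(a: int, b: int) -> int:
    # Nested-PROC analogue: the function object is (re)created on
    # every call of the enclosing routine.
    def local_gcd(a: int, b: int) -> int:
        while b:
            a, b = b, a % b
        return a
    return local_gcd(a, b)

t_global = timeit.timeit(lambda: use_global(1071, 462), number=200_000)
t_local = timeit.timeit(lambda: use_local(1071, 462), number=200_000)
print(f"local/global time ratio: {t_local / t_global:.2f}")
```

    On a typical CPython build the local variant measures somewhat slower;
    the exact ratio varies by machine and load.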

    Opinions on that?

    Janis
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Mon Aug 18 16:54:53 2025

    On 18/08/2025 03:52, Janis Papanagnou wrote:
    I defined the 'PROC rat_gcd' in global space but with Algol 68 it
    could also be defined locally (to not pollute the global namespace)
    like
    [... snip ...]
    though performance measurements showed some noticeable degradation
    with a local function definition as depicted.

    I can't /quite/ reproduce your problem. If I run just the
    interpreter ["a68g myprog.a68g"] then on my machine the timings are
    identical. If I optimise ["a68g -O3 myprog.a68g"], then /first time/
    through, I get a noticeable degradation [about 10% on my machine],
    but the timings converge if I run them repeatedly. YMMV. I suspect
    it's to do with storage management, and later runs are able to re-use
    heap storage that had to be grabbed first time. But that could be
    completely up the pole. Marcel would probably know.

    If you see the same, then I suggest you don't run programs
    for a first time. [:-)]

    I'd prefer it to be local but since it's ubiquitously used in that
    library the performance degradation (about 15% on avg) annoys me.
    Opinions on that?

    Personally, I'd always go for the version that looks nicer
    [ie, in keeping with your own inclinations, with the spirit of A68,
    with the One True (A68) indentation policy, and so on]. If you're
    worried about 15%, that will be more than compensated for by your
    next computer! If you're Really Worried about 15%, then I fear it's
    back to C [or whatever]; but that will cost you more than 15% in
    development time.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/West
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Mon Aug 18 18:30:54 2025

    On 18.08.2025 17:54, Andy Walker wrote:
    On 18/08/2025 03:52, Janis Papanagnou wrote:
    I defined the 'PROC rat_gcd' in global space but with Algol 68 it
    could also be defined locally (to not pollute the global namespace)
    like
    [... snip ...]
    though performance measurements showed some noticeable degradation
    with a local function definition as depicted.

    I can't /quite/ reproduce your problem. If I run just the
    interpreter ["a68g myprog.a68g"] then on my machine the timings are
    identical. If I optimise ["a68g -O3 myprog.a68g"], then /first time/
    through, I get a noticeable degradation [about 10% on my machine],
    but the timings converge if I run them repeatedly. YMMV.

    Actually, with more tests, the variance got even greater; from 10%
    to 45% degradation. The variances, though, did not converge [in my
    environment].

    I suspect
    it's to do with storage management, and later runs are able to re-use
    heap storage that had to be grabbed first time.

    I also suspected some storage management effect; maybe that the GC
    got active at various stages. (But the code did not use anything
    that would require GC; to be honest, I'm puzzled.)

    But that could be
    completely up the pole. Marcel would probably know.

    If you see the same, then I suggest you don't run programs
    for a first time. [:-)]

    :-)


    I'd prefer it to be local but since it's ubiquitously used in that
    library the performance degradation (about 15% on avg) annoys me.
    Opinions on that?

    Personally, I'd always go for the version that looks nicer
    [ie, in keeping with your own inclinations, with the spirit of A68,
    with the One True (A68) indentation policy, and so on].

    That's what I'm tending towards. I think I'll put the GCD function
    in local scope to keep it away from the interface.

    If you're
    worried about 15%, that will be more than compensated for by your
    next computer!

    Actually I'm very conservative concerning computers; mine is 15+
    years old, and although I "recently" thought about getting an update
    here it's not my priority. ;-)

    If you're Really Worried about 15%, then I fear it's

    Not really. It's not the 10-45%, it's more the feeling that a library
    function should not only conform to the spirit of good software design
    but also be efficiently implemented (also in Algol 68).

    The "problem" (my "problem") here is that the effect should not appear
    in the first place since static scoping should not cost performance; I
    suppose it's an effect of Genie being effectively an interpreter here.

    But my Algol 68 programming is anyway just recreational, for fun, so
    I'll go with the cleaner (slower) implementation.

    back to C [or whatever]; but that will cost you more than 15% in
    development time.

    Uh-oh! - But no, that's not my intention here. ;-)

    Thanks!

    Janis

  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Tue Aug 19 00:45:00 2025

    On 18/08/2025 17:30, Janis Papanagnou wrote:
    Actually, with more tests, the variance got even greater; from 10%
    to 45% degradation. The variances, though, did not converge [in my
    environment].

    Ah. Then I backtrack from my previous explanation to an
    alternative, that your 15yo computer has insufficient cache, so
    every new run chews up more and more real storage. Or something.
    You may get some improvement by running "sweep heap" or similar
    from time to time, or using pragmats to allocate more storage.

    I also suspected some storage management effect; maybe that the GC
    got active at various stages. (But the code did not use anything
    that would require GC; to be honest, I'm puzzled.)

    ISTR that A68G uses heap storage rather more than you might
    expect. I think Marcel's documentation has more info.

    [...]
    If you're
    worried about 15%, that will be more than compensated for by your
    next computer!
    Actually I'm very conservative concerning computers; mine is 15+
    years old, and although I "recently" thought about getting an update
    here it's not my priority. ;-)

    Ah. I thought I was bad, keeping computers 10 years or so!
    I got a new one a couple of years back, and the difference in speed
    and storage was just ridiculous.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Soler
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Tue Aug 19 02:44:58 2025

    On 19.08.2025 01:45, Andy Walker wrote:
    [...]
    If you're
    worried about 15%, that will be more than compensated for by your
    next computer!
    Actually I'm very conservative concerning computers; mine is 15+
    years old, and although I "recently" thought about getting an update
    here it's not my priority. ;-)

    Ah. I thought I was bad, keeping computers 10 years or so!
    I got a new one a couple of years back, and the difference in speed
    and storage was just ridiculous.

    Well, used software tools (and their updates) required me to at least
    upgrade memory! (That's actually one point that annoys me in "modern"
    software development; rarely anyone seems to care about economizing
    resource requirements.) But all the rest, especially the things that
    influence performance (CPU [speed, cores], graphics card, HDs/cache,
    whatever) is comparably old stuff in my computer; but it works for
    me.[*]

    And, by the way, thanks for your suggestions and helpful information
    on my questions in all my recent Algol posts! It's also very pleasant
    being able to substantially exchange ideas on this (IMO) interesting
    legacy topic.

    Janis

    [*] If anything I'd probably only need an ASCII accelerating graphics
    card; see https://www.bbspot.com/News/2003/02/ati_ascii.html ;-)

  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Wed Aug 20 00:47:31 2025

    On 19/08/2025 01:44, Janis Papanagnou wrote:
    [...] (That's actually one point that annoys me in "modern"
    software development; rarely anyone seems to care economizing
    resource requirements.) [...]

    Yeah. From time to time I wonder what would happen if we ran
    7th Edition Unix on a modern computer. Sadly, I have to admit that
    I too am rather careless of resources; if you have terabytes of SSD,
    it seems to be a waste of time worrying about a few megabytes.

    And, by the way, thanks for your suggestions and helpful information
    on my questions in all my recent Algol posts! It's also very pleasant
    being able to substantially exchange ideas on this (IMO) interesting
    legacy topic.

    You're very welcome, and I reciprocate your pleasure.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Peerson
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.misc on Wed Aug 20 00:43:22 2025

    On Wed, 20 Aug 2025 00:47:31 +0100, Andy Walker wrote:

    From time to time I wonder what would happen if we ran
    7th Edition Unix on a modern computer.

    The Linux kernel source is currently over 40 million lines, and I
    understand the vast majority of that is device drivers.

    If you were to run an old OS on new hardware, that would need drivers for
    that new hardware, too.
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Wed Aug 20 23:58:58 2025

    On 20/08/2025 01:43, Lawrence D'Oliveiro wrote:
    [I wrote:]
    From time to time I wonder what would happen if we ran
    7th Edition Unix on a modern computer.
    The Linux kernel source is currently over 40 million lines, and I
    understand the vast majority of that is device drivers.

    You seem to be making Janis's point, but that doesn't seem to
    be your intention?

    If you were to run an old OS on new hardware, that would need
    drivers for that new hardware, too.

    Yes, but what is so special about a modern disc drive, monitor,
    keyboard, mouse, ... that it needs "the vast majority" of 40M lines
    more than its equivalent for a PDP-11? Does this not again make
    Janis's point?

    Granted that the advent of 32- and 64-bit integers and addresses
    makes some programming much easier, and that we can no longer expect
    browsers and other major tools to fit into 64+64K bytes, is the actual
    bloat in any way justified? It's not just kernels and user software --
    it's also the documentation. In V7, "man cc" generates just under two
    pages of output; on my current computer, it generates over 27000 lines,
    call it 450 pages, and is thereby effectively unprintable and unreadable,
    so it is largely wasted.

    For V7, the entire documentation fits comfortably into two box
    files, and the entire source code is a modest pile of lineprinter output.
    Most of the commands on my current computer are undocumented and unused,
    and I have no idea at all what they do.

    Yes, I know how that "just happens", and I'm observing rather
    than complaining [I'd rather write programs, browse and send/read
    e-mails on my current computer than on the PDP-11]. But it does all
    give food for thought.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Peerson
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.misc on Thu Aug 21 02:59:32 2025

    On Wed, 20 Aug 2025 23:58:58 +0100, Andy Walker wrote:

    On 20/08/2025 01:43, Lawrence D'Oliveiro wrote:

    If you were to run an old OS on new hardware, that would need
    drivers for that new hardware, too.

    Yes, but what is so special about a modern disc drive, monitor,
    keyboard, mouse, ... that it needs "the vast majority" of 40M lines
    more than its equivalent for a PDP-11?

    Keyboard and mouse -- USB.

    Disk drive -- that might connect via SCSI or SATA. Either one requires
    common SCSI-handling code. Plus you want a filesystem, don't you?
    Preferably a modern one with better performance and reliability than
    Bell Labs was able to offer, back in the day. That requires caching
    and journalling support. Plus modern drives have monitoring built-in,
    which you will want to access. And you want RAID, which didn't exist
    back then?

    Monitor -- video in the Linux kernel goes through the DRM ("Direct
    Rendering Manager") layer. Unix didn't have GUIs back then, but you
    will likely want them now. The PDP-11 back then accessed its console
    (and other terminals) through serial ports. You might still want
    drivers for those, too.

    Both video and disk handling in turn would be built on the common
    PCI-handling code.

    Remember there is also hot-plugging support for these devices, which
    was unheard of back in the day.

    The CPU+support chipset itself will need some drivers, beyond what
    was conceived back then: for example, control of the various levels
    of caching, power saving, sensor monitoring, and of course memory
    management needs to be much more sophisticated nowadays.

    And what about networking? Would you really want to run a machine in a
    modern environment without networking?
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Thu Aug 21 21:02:55 2025

    On 19/08/2025 01:44, Janis Papanagnou wrote:
    [...] (That's actually one point that annoys me in "modern"
    software development; rarely anyone seems to care economizing
    resource requirements.) [...]

    On 21.08.2025 00:58, Andy Walker wrote:
    On 20/08/2025 01:43, Lawrence D'Oliveiro wrote:
    The Linux kernel source is currently over 40 million lines, and I
    understand the vast majority of that is device drivers.

    You seem to be making Janis's point, but that doesn't seem to
    be your intention?

    If you were to run an old OS on new hardware, that would need drivers for
    that new hardware, too.

    Yes, but what is so special about a modern disc drive, monitor,
    keyboard, mouse, ... that it needs "the vast majority" of 40M lines more
    than its equivalent for a PDP-11? [...]

    This was actually what I was also thinking when I read Lawrence's
    statement. (And even given his later, more thorough list of modern
    functionalities, this still doesn't quite explain the need for *so*
    many resources, IMO. I mean, didn't they fly to the moon in capsules
    whose computers had kilobytes of memory? Yes, nowadays we have more
    features to support. But in previous days they *had* to economize;
    they had to "squeeze" algorithms to fit into 1 kiB of memory.
    Nowadays no one cares. And the computers running that software are
    an "externality"; there's no incentive, it seems, to write the
    software in an economical way.)

    But that was (in my initial complaint; see above) anyway just one
    aspect of many.

    You already mentioned documentation. There we not only see these
    extremely huge and often badly structured, unclear texts, but the
    information-to-text-size ratio is also often extremely imbalanced;
    to mention a few keywords: DOC, HTML, XML, JSON - where the problem
    is not (only) that one or the other of the formats is absolutely
    huge, but also that it's relatively huge compared to an equally or
    better fitting use of a more primitive format.

    Related to that: some HTML pages you load contain text payloads of
    just a few kiB, but come not only with the HTML overhead but also
    load mebi- (or gibi-?) bytes through dozens of JS libraries - and
    they're not even used! And I haven't yet mentioned pages that add
    more storage and performance demands due to advertisement logic
    (with more delays, and "of course" no regard for data privacy); but
    that, of course, is intentional (it's your choice).

    Economy is also related to GUI ergonomics, in configurability and
    usability. You can configure all sorts of GUI properties like
    schemes/appearance, you can adjust buttons left or right, but you
    cannot get a button with a necessary function, or one function in
    an easily accessible way. GUIs are overloaded with all sorts of
    trash, which inevitably leads to uneconomic use, while necessary
    features are unsupported or cannot be configured. (And providing
    such [merely] fancy features also contributes to the code size.)

    Then there are the unnecessary dependencies. Just recently there
    was a discussion about (I think) the ffmpeg tool; it was shown
    that it includes hundreds of external libraries! Worse yet, many
    of them serve not its main task (video processing/converting) but
    things like LDAP, plus *tons* of libraries concerning Samba; the
    latter is also a problem of bad software organization, given that
    so many libraries have to be added for SMB "support" (whether that
    should be part of a video converter or not).

    But also the performance, or the system/application design. Say
    you start a picture viewer, and you have to wait a long time
    because the software designer thought it a good idea to present
    the directory tree in a separate part of the window; to achieve
    that, the program has to recursively parse a huge subdirectory
    structure, and until you finally see that single picture you
    wanted to see - and whose file name you already provided as an
    argument! - half a minute has passed.

    Or the use of bad algorithms. Like a graphics processing program
    that doesn't terminate when trying to rotate a large image by 90°,
    because it attempts the rotation naively, with a copy of the huge
    image in memory and bit-wise operations, instead of using fast and
    lossless in-place algorithms (commonly known for half a century
    already).
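    The kind of in-place rotation alluded to here can be sketched for a
    square raster; the classic trick is transpose-then-mirror, using O(1)
    extra space (a rectangular image needs a cycle-following in-place
    transposition instead, also long known):

```python
def rotate90_cw_inplace(m: list[list[int]]) -> None:
    """Rotate a square matrix 90° clockwise in place:
    transpose, then reverse each row. O(1) extra space."""
    n = len(m)
    for i in range(n):
        for j in range(i + 1, n):
            m[i][j], m[j][i] = m[j][i], m[i][j]   # transpose
    for row in m:
        row.reverse()                             # mirror horizontally
```

    For instance, `[[1, 2], [3, 4]]` becomes `[[3, 1], [4, 2]]`: the left
    column of the input ends up as the top row of the output.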

    Etc. etc. - Above just off the top of my head; there's surely
    much more to say about economy and software development.

    And an important consequence is that bad design and bloat usually
    also make systems less stable and less reliable. And it's often
    hard (or even impossible) to fix such monstrosities.

    <end of rant>

    Janis

  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Sat Aug 23 00:42:01 2025

    On 21/08/2025 03:59, Lawrence D'Oliveiro wrote:
    On Wed, 20 Aug 2025 23:58:58 +0100, Andy Walker wrote:
    On 20/08/2025 01:43, Lawrence D'Oliveiro wrote:
    If you were to run an old OS on new hardware, that would need
    drivers for that new hardware, too.
    Yes, but what is so special about a modern disc drive, monitor,
    keyboard, mouse, ... that it needs "the vast majority" of 40M lines
    more than its equivalent for a PDP-11?
    Keyboard and mouse -- USB. [...]

    You've given us a list of 20-odd features of modern systems
    that have been developed since 7th Edition Unix, and could no doubt
    think of another 20. What you didn't attempt was to explain why all
    these nice things need to occupy 40M lines of code. That's, give or
    take, 600k pages of code, call it 2000 books. That's, on your
    figures, just the kernel source; specifications [assuming there are
    such!] and documentation no doubt double that, and it's already more
    than normal people can read and understand. There is similar bloat
    in the commands and in the manual entries. It's out of control,
    witness the updates that come in every few days. It's fatally easy
    to say of "sh" or "cc" or "firefox" or ... "Wouldn't it be nice if
    it did X?", and fatally hard to say "It shouldn't really be doing
    X.", as there's always the possibility of someone somewhere who
    might perhaps be using it.

    See also Janis's nearby article.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Kinross
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Sat Aug 23 02:29:54 2025

    On 23.08.2025 01:42, Andy Walker wrote:
    On 21/08/2025 03:59, Lawrence D'Oliveiro wrote:
    [...]

    You've given us a list of 20-odd features of modern systems
    that have been developed since 7th Edition Unix, and could no doubt
    think of another 20. What you didn't attempt was to explain why all
    these nice things need to occupy 40M lines of code. That's, give or
    take, 600k pages of code, call it 2000 books. That's, on your figures,
    just the kernel source; [...]

    That was a point I also found to be a very disturbing statement;
    I recall the kernel was designed to be small, and the time spent
    in kernel routines should generally also be short! - And now we
    have millions of lines that are either just idle or used against
    Unix's design and operating principles?

    Meanwhile - I think probably since AIX? - we no longer need to
    compile the drivers into the kernel (as formerly with SunOS, for
    example). But does that really mean that all the drivers now bloat
    the kernel [as external modules] as well? - Sounds horrible.

    But I'm no expert on this topic, so interested to be enlightened
    if the situation is really as bad as Lawrence sketched it.

    Janis

  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.misc on Sat Aug 23 02:36:45 2025

    On Sat, 23 Aug 2025 00:42:01 +0100, Andy Walker wrote:

    What you didn't attempt was to explain why all these nice things
    need to occupy 40M lines of code.

    Go look at the code itself.
  • From Ivan Shmakov@ivan@siamics.netREMOVE.invalid to comp.lang.misc on Tue Aug 26 18:42:05 2025

    On 2025-08-19, Janis Papanagnou wrote:
    On 19.08.2025 01:45, Andy Walker wrote:

    If you're worried about 15%, that will be more than compensated
    for by your next computer!

    Actually I'm very conservative concerning computers; mine is 15+
    years old, and although I "recently" thought about getting an
    update here it's not my priority. ;-)

    Ah. I thought I was bad, keeping computers 10 years or so! I got
    a new one a couple of years back, and the difference in speed and
    storage was just ridiculous.

    Reading http://en.wikipedia.org/wiki/E-waste , I'm inclined to
    think that keeping computers for a decade might be not so bad
    a thing after all.

    Well, used software tools (and their updates) required me to at
    least upgrade memory! (That's actually one point that annoys me
    in "modern" software development; rarely anyone seems to care
    economizing resource requirements.)

    I doubt it's so much lack of care as it is simply being not a
    priority. Still, all the more reason to direct attention to
    the cases where such care /is/ given. Thankfully, the problem
    /is/ a known one (say, [1]), and IME, there still /are/ lean
    programs to choose from.

    By the by, I've been looking for "simple" self-hosting compilers
    recently - something with source that a semi-dedicated person
    can read through in reasonable time. What I've found so far is
    Pygmy Forth [2] (naturally, I guess) and the T3X family of
    languages [3]. Are there perhaps other such compilers worthy of
    mention?

    [1] http://spectrum.ieee.org/lean-software-development
    [2] http://pygmy.utoh.org/pygmyforth.html
    [3] http://t3x.org/t3x/

    I'll also try to address here specific points raised elsewhere
    in this thread, particularly news:1087qgv$14ret$1@dont-email.me .

    First, the 4e7 lines of Linux code is somewhat unfair a measure.
    On my system, less than 5% of individual modules built from the
    Linux source are loaded right now:

    $ lsmod | wc -l
    175
    $ find /lib/modules/6.1.0-37-amd64/ -xdev -type f -name \*.ko | wc -l
    4024
    $
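    The "less than 5%" figure follows directly from the two counts shown;
    a quick check:

```python
# Counts taken from the lsmod / find output above (lsmod prints a
# header line, so 175 output lines correspond to 174 loaded modules;
# either way the conclusion stands).
loaded, built = 175, 4024
print(f"{100 * loaded / built:.1f}% of built modules loaded")
```

    which prints "4.3% of built modules loaded".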

    That value would of course vary from system to system, but I'd
    think it's safe to say that in at least 90% of all deployments,
    less than 10% of Linux code will be loaded at any given time.

    For those who are looking for a system with more "comprehensible"
    sources, I would recommend NetBSD. And if anything, I personally
    find its list of supported platforms, http://netbsd.org/ports/ ,
    fairly impressive.

    Don't get me wrong: NetBSD won't fit for every use case Linux-based
    systems cover - the complexity of the Linux kernel isn't there
    for nothing - but just in case you /can/ live with a "limited"
    OS (say, one that doesn't support Docker), thanks to NetBSD, you
    /do/ have that option.

    With regards to applications, while binary distributions tend to
    opt to have the most "fully functional" build of any given
    package - from whence come lots of dependencies - a source-based
    one allows /you/ to choose what you need. And pkgsrc for NetBSD
    is such a distribution. Gentoo is a Linux-based distribution
    along the same lines.

    As to websites and JS libraries, for the past 25 years I've been
    using as my primary one a browser, Lynx, that never had support
    for JS, and likely never will have. IME, an /awful lot/ of
    websites are usable and useful entirely without JS. For those
    interested, I've recently made several comments in defense of
    "JS-free" web and web browsers, such as [4, 5, 6].

    [4] news:ID351XcOrll9pkb7@violet.siamics.net
    [5] news:6brTAD5tWnddeHXd@violet.siamics.net
    [6] news:ii6tqUtTe0Vi-Fnh@violet.siamics.net
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.misc on Wed Aug 27 00:28:20 2025

    On Tue, 26 Aug 2025 18:42:05 +0000, Ivan Shmakov wrote:

    First, the 4e7 lines of Linux code is somewhat unfair a measure. On
    my system, less than 5% of individual modules built from the Linux
    source are loaded right now ...

    Greg Kroah-Hartman is reported to have said that a typical
    workstation/server Linux kernel build only needs about 1½ million
    lines of source code. A more complex build, like an Android kernel,
    needs something like 3× that.

    For those who are looking for a system with more "comprehensible"
    sources, I would recommend NetBSD. And if anything, I personally
    find its list of supported platforms, http://netbsd.org/ports/ ,
    fairly impressive.

    Bit misleading, though. Note it counts "Xen" (a Linux-based
    hypervisor) as a separate platform. Also, look at all the different
    68k, MIPS, ARM and PowerPC-based machines that are individually
    listed.

    Linux counts platform support based solely on CPU architecture (not
    surprising, since it's just a kernel, not the userland as well). It
    covers all those CPUs listed (except maybe VAX), and a bunch of
    others as well.

    Each directory here <https://github.com/torvalds/linux/tree/master/arch>
    represents a separate supported architecture. Note extras like
    arm64, riscv, loongarch and s390.
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Wed Aug 27 07:53:00 2025

    On 26.08.2025 20:42, Ivan Shmakov wrote:
    On 2025-08-19, Janis Papanagnou wrote:

    Well, used software tools (and their updates) required me to at
    least upgrade memory! (That's actually one point that annoys me
    in "modern" software development; rarely anyone seems to care
    economizing resource requirements.)

    I doubt it's so much lack of care as it is simply being not a
    priority. [...]

    But the two depend on each other. - Quoting from your link below...

    Wirth:
    "Time pressure is probably the foremost reason behind the emergence
    of bulky software. The time pressure that designers endure discourages
    careful planning. It also discourages improving acceptable solutions;
    instead, it encourages quickly conceived software additions and
    corrections. Time pressure gradually corrupts an engineer's standard
    of quality and perfection. It has a detrimental effect on people as
    well as products."

    And, to be yet more clear; I also think it's [widely] just ignorance!
    (The mere existence of the article you quoted below is per se already
    a strong sign for that. But also other experiences, like talks with
    many IT-folks of various age and background reinforced my opinion on
    that.)

    [...]

    [1] http://spectrum.ieee.org/lean-software-development

    Thanks for the link; worth reading.

    (And I also learned, BTW, that I had missed that N. Wirth died last year.)

    [...]

    As to websites and JS libraries, for the past 25 years I've been
    using as my primary one a browser, Lynx, that never had support
    for JS, and likely never will have. IME, an /awful lot/ of
    websites are usable and useful entirely without JS. [...]

    Lynx. This is great. - I recall that in the 1990s I had a student in
    my team who had to provide some HTML information; I asked him to test
    his data in two common browsers (back in those days I think Netscape
    and MS IE), and (for obvious reasons) also with Lynx!

    (Privately I had later written HTML/JS to create applications (with
    dynamic content) since otherwise that would not have been possible;
    I had no server of my own with application servers available. But I
    didn't use any frameworks or external libraries. Already bad enough.)

    But even with JS activated, with my old Firefox I cannot use or
    read many websites nowadays, because they demand newer browser
    versions.

    Janis

  • From Ivan Shmakov@ivan@siamics.netREMOVE.invalid to comp.lang.misc on Sat Aug 30 19:10:42 2025

    On 2025-08-27, Lawrence D'Oliveiro wrote:
    On Tue, 26 Aug 2025 18:42:05 +0000, Ivan Shmakov wrote:

    For those who are looking for a system with more "comprehensible"
    sources, I would recommend NetBSD. And if anything, I personally
    find its list of supported platforms, http://netbsd.org/ports/ ,
    fairly impressive.

    Bit misleading, though. Note it counts "Xen" (a Linux-based
    hypervisor) as a separate platform.

    What do you mean by "Linux-based"? NetBSD supports running
    as both Xen domU (unprivileged) /and/ dom0 (privileged.)
    AIUI, it's possible to run Linux domUs when NetBSD is dom0,
    and vice versa.

    Also, look at all the different 68k, MIPS, ARM and PowerPC-based
    machines that are individually listed.

    Linux counts platform support based solely on CPU architecture (not surprising, since it's just a kernel, not the userland as well).

    There's a "Ports by CPU architecture" section down the NetBSD
    ports page; it lists 16 individual CPU architectures.

    My point was that GNU/Linux distributions typically support
    less, and so do other BSDs (IIRC.) For instance, [1] lists 8:

    Architectures: all amd64 arm64 armel armhf i386 ppc64el riscv64 s390x

    [1] http://cdn-fastly.deb.debian.org/debian/dists/trixie/InRelease

    (And I'm pretty certain I saw ones that only support one or two.)

    The way I see it, it's the /kernel/ that it takes the most
    effort to port to a new platform - as it's where the support
    for peripherals lives, including platform-specific ones.

    No idea why Debian doesn't support other architectures supported
    by Linux. I'm going to guess it's lack of volunteers.

    It covers all those CPUs listed (except maybe VAX), and a bunch of
    others as well.

    Each directory here <https://github.com/torvalds/linux/tree/master/arch> represents a separate supported architecture. Note extras like arm64,

    Getting actual data out of Microsoft Github pages is a bit more
    involved than I'd prefer. Still:

    $ curl -- https://github.com/torvalds/linux/tree/master/arch \
    | pcregrep -ao1 -- "\"path\":\"arch/([/0-9a-z_.-]+)\"" | nl -ba
    1 alpha
    2 arc
    3 arm
    4 arm64
    5 csky
    6 hexagon
    7 loongarch
    8 m68k
    9 microblaze
    10 mips
    11 nios2
    12 openrisc
    13 parisc
    14 powerpc
    15 riscv
    16 s390
    17 sh
    18 sparc
    19 um
    20 x86
    21 xtensa
    22 .gitignore
    $
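
    (Offline, the same extraction can be checked against a made-up,
    abbreviated snippet of that payload; the snippet below is
    hypothetical, but the pattern is the one from the command above.)

    ```python
    import re

    # Hypothetical, abbreviated snippet of the GitHub page payload;
    # the regex is the same one passed to pcregrep above.
    sample = '{"path":"arch/alpha"},{"path":"arch/arm64"},{"path":"arch/.gitignore"}'
    names = re.findall(r'"path":"arch/([/0-9a-z_.-]+)"', sample)
    print(names)  # ['alpha', 'arm64', '.gitignore']
    ```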

    So, yes, I guess it does beat NetBSD in that respect. But I
    still think that if you're interested in understanding how your
    OS works - at the source code level - you'd be better with
    NetBSD than with a Linux-based OS. (Not /quite/ a priority
    for me personally, TBH, but I appreciate it being an option.)
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ivan Shmakov@ivan@siamics.netREMOVE.invalid to comp.lang.misc on Sat Aug 30 19:39:49 2025
    From Newsgroup: comp.lang.misc

    On 2025-08-27, Janis Papanagnou wrote:
    On 26.08.2025 20:42, Ivan Shmakov wrote:
    On 2025-08-19, Janis Papanagnou wrote:

    Well, used software tools (and their updates) required me to at
    least upgrade memory! (That's actually one point that annoys me
    in "modern" software development; rarely anyone seems to care
    economizing resource requirements.)

    I doubt it's so much lack of care as it is simply being not a
    priority.

    But those depend on each other.

    I guess I should've expressed myself better: engineering is
    all about trade-offs, and there're often other things to care
    about once the program runs "fast enough" on the hardware that
    the customers are /assumed/ to have.

    Not to mention that taking too long to 'polish' your product,
    you risk ending up lagging behind your competitors.

    I could only hope that environmental concerns will eventually
    make resource usage a more important issue for code writers.

    And, to be yet more clear; I also think it's [widely] just ignorance!
    (The mere existence of the article you quoted below is per se already a strong sign for that. But also other experiences, like talks with many IT-folks of various age and background reinforced my opinion on that.)

    I suppose it might be the case of people involved with computers
    professionally not seeing much point in acquiring the skills that
    aren't in demand by employers.

    (Privately I had later written HTML/JS to create applications (with
    dynamic content) since otherwise that would not have been possible;
    I had no own server with some application servers available. But I
    didn't use any frameworks or external libraries. Already bad enough.)

    I can't say I'm a big fan of JS or ES, yet there're certainly
    languages I like even less. FWIW, I prefer to stick to ES 5.1,
    http://262.ecma-international.org/5.1/ specifically, as then I
    can use http://duktape.org/ or http://mujs.com/ to test the
    bulk of my code, rather than running it in Chromium or Firefox.

    Like I've mentioned elsewhere, it's not the language, or even
    its use to create web applications, that irks me: it's that
    often enough when I want some data, what I get instead is some
    application that I /must/ use to access that same data - in a
    manner predefined by its developer (say, one record at a time),
    and not particularly conducive to the task /I/ have at hand.

    As to frameworks, my /impression/ is that it makes sense to
    familiarize oneself with them only when there're actually
    /lots/ of similar programming problems that need to be solved,
    particularly when writing code as part of a team. As it never
    was the case for me personally, I've never seen much sense in
    investing effort into learning any framework, JS or otherwise.

    But even with Browsers and JS activated with my old Firefox I cannot
    use or read many websites nowadays; because they demand newer browser versions.

    "Demand" how?

    Server-side code can of course make arbitrary decisions based
    on the User-Agent: string, but that's a poor practice in general,
    and typically such restrictions can be bypassed by reading the
    archived copy of the webpage from http://web.archive.org/ .
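
    (FWIW, Wayback Machine URLs follow a simple /web/<timestamp>/<url>
    pattern, so an archived copy can be fetched without any JS at all;
    a minimal sketch - the helper name is mine, the example URL is
    arbitrary.)

    ```python
    def wayback_url(url, timestamp):
        """Build a Wayback Machine URL of the form
        http://web.archive.org/web/<timestamp>/<original-url>.

        A partial timestamp (e.g. "2024") asks the archive for the
        snapshot closest to that prefix.
        """
        return "http://web.archive.org/web/%s/%s" % (timestamp, url)

    print(wayback_url("http://example.com/", "20240101000000"))
    # http://web.archive.org/web/20240101000000/http://example.com/
    ```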

    Also works when it's not a browser but /TLS/ version issue.

    Alternatively, associated JS code can test browser's capabilities,
    but that can be circumvented by disabling JS altogether.

    Also to mention is that many websites these days rely on some
    sort of "DDoS protection service" external to them. (I run my
    own servers, so I /do/ know some of the pain of mitigating heaps
    of junk requests originating from botnets - mainly compromised
    "wireless routers" I believe.)

    Such services employ captchas, and those in turn require JS,
    and might require recent browser versions as well. If that's
    the case, http://web.archive.org/ might or might not help.

    Other than using Wayback Machine, I believe there's no easy
    solution to this problem: should the operator disable "protection
    service," they risk the site becoming bogged down by junk requests
    and no longer available to legitimate users. Conversely, by
    employing such a service, they inconvenience their users, for
    even those who /do/ run modern browsers, will presumably have
    better things to do than solving captchas.

    So, personally, when encountering such behavior, I try Wayback
    Machine first. If it doesn't get me a version of the webpage
    as recent as I need, I consider contacting the website operator
    so that they might check and possibly tweak their "protection"
    settings to allow archival. If they can't, or won't, fix it,
    well, as mTCP HTTPSERV.EXE puts it, "countless more exist."
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.misc on Sat Aug 30 22:43:12 2025
    From Newsgroup: comp.lang.misc

    On Sat, 30 Aug 2025 19:10:42 +0000, Ivan Shmakov wrote:

    On Wed, 27 Aug 2025 00:28:20 -0000 (UTC), Lawrence D'Oliveiro
    wrote:

    Bit misleading, though. Note it counts "Xen" (a Linux-based
    hypervisor) as a separate platform.

    What do you mean by "Linux-based"?

    I mean that Xen runs an actual Linux kernel in the hypervisor, and
    supports regular Linux distros as guests -- they don't need to be
    modified to specially support Xen, or any other hypervisor. It's
    Linux above, and Linux below -- Linux at every layer.

    NetBSD supports running as both Xen domU (unprivileged) /and/ dom0 (privileged.)

    Linux doesn't count these as separate platforms. They're just
    considered a standard part of regular platform support.

    Linux counts platform support based solely on CPU architecture (not
    surprising, since it's just a kernel, not the userland as well).

    There's a "Ports by CPU architecture" section down the NetBSD
    ports page; it lists 16 individual CPU architectures.

    That's not as many as Linux.

    My point was that GNU/Linux distributions typically support
    less ...

    But that's an issue with the various distributions, not with the Linux
    kernel itself. In the BSD world, there is no separation of "kernel"
    from "distribution". That makes things less flexible than the Linux world.

    For example, while base Debian itself may support something under a
    dozen architectures, there are offshoots of Debian that cover others.

    The way I see it, it's the /kernel/ that it takes the most
    effort to port to a new platform - as it's where the support
    for peripherals lives, including platform-specific ones.

    Given that the Linux kernel supports more of these different platforms
    than any BSD can manage, I think you're just reinforcing my point.

    But I still think that if you're interested in understanding how
    your OS works - at the source code level - you'd be better with
    NetBSD than with a Linux-based OS.

    Linux separates the kernel from the userland. That makes things
    simpler than running everything together, as the BSDs do.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.misc on Sat Aug 30 22:45:27 2025
    From Newsgroup: comp.lang.misc

    On Sat, 30 Aug 2025 19:39:49 +0000, Ivan Shmakov wrote:

    Not to mention that taking too long to 'polish' your product, you
    risk ending up lagging behind your competitors.

    I would say, the open-source world is a counterexample to this. Look at
    how long it took GNU and Linux to end up dominating the entire
    computing landscape -- it didn't happen overnight.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Sun Aug 31 08:32:20 2025
    From Newsgroup: comp.lang.misc

    On 30.08.2025 21:39, Ivan Shmakov wrote:
    On 2025-08-27, Janis Papanagnou wrote:
    [...]

    But even with Browsers and JS activated with my old Firefox I cannot
    use or read many websites nowadays; because they demand newer browser versions.

    "Demand" how?

    All sorts of "defunct": from annoying notes telling me to upgrade
    my browser (while content is still visible and working), to that
    message combined with completely broken dynamic content, and/or
    pages mis-formatted to the degree of being unusable, or with no
    text displayed at all. And so on.

    If there's an issue with pages/services like reddit or sourceforge
    or (in the past; they seem to have fixed something) stackoverflow,
    or free services (news, weather, tv-program, etc.) I can just skip
    and ignore those services. But there are also commercial pages
    (like banks, tax/gov, or free mail providers, etc.) that I do need
    to use; for those I must switch to another system or I'm out of
    luck. (Luckily I have systems available to choose from.)

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Sun Aug 31 08:34:59 2025
    From Newsgroup: comp.lang.misc

    On 30.08.2025 21:39, Ivan Shmakov wrote:
    [...]

    Not to mention that taking too long to 'polish' your product,
    you risk ending up lagging behind your competitors.

    It's not "polishing" that I was speaking about.

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ivan Shmakov@ivan@siamics.netREMOVE.invalid to comp.lang.misc on Sun Aug 31 13:35:51 2025
    From Newsgroup: comp.lang.misc

    On 2025-08-30, Lawrence D'Oliveiro wrote:
    On Sat, 30 Aug 2025 19:39:49 +0000, Ivan Shmakov wrote:

    Not to mention that taking too long to 'polish' your product, you
    risk ending up lagging behind your competitors.

    I would say, the open-source world is a counterexample to this.
    Look at how long it took GNU and Linux to end up dominating the
    entire computing landscape -- it didn't happen overnight.

    Indeed, one good thing about free software is that when one
    company closes down, another can pick up and go on from there.
    Such as how Netscape is no more, yet the legacy of its Navigator
    still survives in Firefox.

    I'm not sure how much of a consolation it is to the people
    who owned the companies that failed, though.

    Also, what indication is there that GNU is 'dominating' the
    landscape? Sure, Linux is everywhere (such as in now ubiquitous
    Android phones and TVs and whatnot), but I don't quite see GNU
    being adopted as widely.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.misc on Sun Aug 31 22:40:49 2025
    From Newsgroup: comp.lang.misc

    On Sun, 31 Aug 2025 13:35:51 +0000, Ivan Shmakov wrote:

    I'm not sure how much of a consolation it is to the people who owned
    the companies that failed, though.

    Companies fail all the time, open source or no open source. When a
    company that has developed a piece of proprietary software fails, then
    the software dies with the company. With open source, the software
    stands a chance of living on.

    E.g. Loki was an early attempt at developing games on Linux. They
    failed. But the SDL framework that they created for low-latency
    multimedia graphics lives on.

    Also, what indication is there that GNU is 'dominating' the
    landscape? Sure, Linux is everywhere (such as in now ubiquitous
    Android phones and TVs and whatnot), but I don't quite see GNU
    being adopted as widely.

    Look at all the markets that Linux has taken away from Microsoft --
    Windows Media Center, Windows Home Server -- all defunct. Windows
    Server too is in slow decline. And now handheld gaming with the Steam
    Deck. You will find GNU there.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ivan Shmakov@ivan@siamics.netREMOVE.invalid to comp.lang.misc on Thu Sep 4 18:25:44 2025
    From Newsgroup: comp.lang.misc

    On 2025-08-31, Lawrence D'Oliveiro wrote:
    On Sun, 31 Aug 2025 13:35:51 +0000, Ivan Shmakov wrote:

    Indeed, one good thing about free software is that when one company
    closes down, another can pick up and go on from there. Such as how
    Netscape is no more, yet the legacy of its Navigator still survives
    in Firefox.

    I'm not sure how much of a consolation it is to the people who owned
    the companies that failed, though.

    Companies fail all the time, open source or no open source. When
    a company that has developed a piece of proprietary software fails,
    then the software dies with the company. With open source, the
    software stands a chance of living on.

    It sounds like we're in agreement on this point, no?

    My other point, however, is this: when you do run a business,
    shouldn't you be more concerned that said /business/ succeeds,
    rather than the products it delivers, whatever they might be?

    And from where I stand, releasing software targeting tomorrow's
    computers is, as a rule, a better business practice - than
    targeting decade-old ones.

    E.g. Loki was an early attempt at developing games on Linux. They
    failed. But the SDL framework that they created for low-latency
    multimedia graphics lives on.

    Yes, that too. (Though I like my Firefox example better.)

    Also, what indication is there that GNU is 'dominating' the
    landscape? Sure, Linux is everywhere (such as in now ubiquitous
    Android phones and TVs and whatnot), but I don't quite see GNU
    being adopted as widely.

    Look at all the markets that Linux has taken away from Microsoft --
    Windows Media Center, Windows Home Server -- all defunct. Windows
    Server too is in slow decline.

    I've had very little interest in Microsoft since the 1990s.
    About the only Microsoft-related news I've since paid attention
    to were that Microsoft contributed a fair chunk of code to Linux;
    that Microsoft acquired Github; and that Windows now has WSL.

    I have no idea what Windows Media Center is (or was), and what
    alternatives to it the GNU project, http://gnu.org/ , now offers.

    (I'd guess VLC and FFmpeg might be such alternatives, but last
    I've checked, they were not part of GNU.)

    And now handheld gaming with the Steam Deck. You will find GNU there.

    So I've read http://en.wikipedia.org/wiki/Steam_Deck and found
    out that the device runs SteamOS which, as of version 3.0, is
    based on Arch Linux, thus presumably retaining a fair chunk of
    GNU within. (Bash, Coreutils, Libc, to guess a few packages.
    I doubt it includes GNU Emacs or GNU Chess, though.)

    That said, I'm not sure Steam Deck can /itself/ be said to
    dominate the market:

    Market research firm International Data Corporation estimated that
    between 3.7 and 4 million Steam Decks had been sold by the third anniversary of the device in February 2025.

    How big a market share of handheld gaming computers is 4e6?

    Also, I gather it's not a direct competitor to Android and
    Android-based mobile computers, right?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ivan Shmakov@ivan@siamics.netREMOVE.invalid to comp.lang.misc on Thu Sep 4 18:50:29 2025
    From Newsgroup: comp.lang.misc

    On 2025-08-30, Lawrence D'Oliveiro wrote:
    On Sat, 30 Aug 2025 19:10:42 +0000, Ivan Shmakov wrote:
    On Wed, 27 Aug 2025 00:28:20 -0000 (UTC), Lawrence D'Oliveiro wrote:

    I think it makes sense to restate the point I'm arguing for in
    this subthread (see news:y2C3FavstjxdDZ-_@violet.siamics.net ):

    For those who are looking for a system with more "comprehensible"
    sources, I would recommend NetBSD. And if anything, I personally
    find its list of supported platforms, http://netbsd.org/ports/ ,
    fairly impressive.

    Don't get me wrong: NetBSD won't fit for every use case Linux-based systems cover - the complexity of the Linux kernel isn't there
    for nothing - but just in case you /can/ live with a "limited"
    OS (say, one that doesn't support Docker), thanks to NetBSD, you
    /do/ have that option.

    To spell it out, it wasn't my intent to compare NetBSD as
    a whole to the Linux kernel (as that's just silly.) Neither
    was it my intent to compare the NetBSD kernel to Linux, as:
    a. I don't have any use cases for a /kernel/ outside of an OS
    distribution; and b. I mostly use Linux-based systems myself.
    (And hence arguing that way would be a case of failing to
    practice what I preach.)

    All the same, should I ever encounter a problem that requires
    kernel-mode coding, NetBSD would be at the top of my list of
    options - because of code readability.

    Bit misleading, though. Note it counts "Xen" (a Linux-based
    hypervisor) as a separate platform.

    What do you mean by "Linux-based"?

    I mean that Xen runs an actual Linux kernel in the hypervisor,
    and supports regular Linux distros as guests -- they don't need to
    be modified to specially support Xen, or any other hypervisor.

    It's been well over a decade since I've last used Xen, so I'm
    going more by http://en.wikipedia.org/wiki/Xen than experience.

    But just to be sure, I've checked the sources [1], and while
    I do see portions of Linux code reused here and there - such as,
    say, [2] below - I'd hesitate to call Xen at large "Linux-based."
    If anything, there's way more of Linux in the GNU Mach microkernel
    (consider the linux/src/drivers subtree in [3], for instance)
    than in the Xen hypervisor. (And I don't recall GNU Mach being
    called "Linux-based.")

    To note is that there seem to be no mention in CHANGELOG.md of
    anything suggesting that Xen uses Linux as its upstream project.

    * common/notifier.c
    *
    * Routines to manage notifier chains for passing status changes to any
    * interested routines.
    *
    * Original code from Linux kernel 2.6.27 (Alan Cox [...])

    [1] http://downloads.xenproject.org/release/xen/4.20.1/xen-4.20.1.tar.gz
    [2] xen-4.20.1/xen/common/notifier.c
    [3] git://git.sv.gnu.org/hurd/gnumach.git rev. 8d456cd9e417 from 2025-09-03

    It's Linux above, and Linux below -- Linux at every layer.

    Sure, if you want to run it that way. You can also run Xen
    with NetBSD at every layer, or, apparently, OpenSolaris.

    A GNU/Linux distribution AFAICR needs to provide Xen-capable
    kernel for it to be usable as dom0 - as well as Xen user-mode
    tools. Niche / lightweight distributions might omit such support.
    (There're a few build-time options related to Xen in Linux.)

    Also, Xen supports both hardware-assisted virtualization /and/
    paravirtualization. On x86-32, the former is not available, so
    the Linux build /must/ support paravirtualization in order to be
    usable with Xen, dom0 or domU.

    When hardware-assisted virtualization /is/ available, the things
    certainly get easier: pretty much anything that can run under,
    say, Qemu, can be run under Xen HVM. The performance may suffer,
    though, should your domU system happen to lack virtio drivers and
    should thus need to resort to using emulated peripherals instead.
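
    (For illustration: with Xen's xl toolstack, the guest type is
    chosen in the guest configuration file; a minimal, hypothetical
    example - the name, path and sizes below are made up.)

    ```
    # minimal hypothetical xl.cfg guest configuration
    name   = "demo-domu"
    type   = "pv"                 # "hvm" where hardware assistance is available
    memory = 512                  # MiB
    vcpus  = 1
    kernel = "/path/to/vmlinuz"   # a PV guest boots a Xen-capable kernel directly
    ```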

    NetBSD supports running as both Xen domU (unprivileged) /and/
    dom0 (privileged.)

    Linux doesn't count these as separate platforms. They're just
    considered a standard part of regular platform support.

    Which means one needs to be careful when comparing architecture
    support between different kernels.

    My point was that GNU/Linux distributions typically support less

    But that's an issue with the various distributions, not with the
    Linux kernel itself.

    True. That, however, doesn't mean you can use Linux /by itself/
    outside of a distribution. (Unless, of course, you're looking
    for a kernel for a new distribution, but I doubt that undermines
    my point.) So architecture support /you/ will have /will/ be
    limited by the distribution you choose, regardless of what Linux
    itself might offer.

    In the BSD world, there is no separation of "kernel" from "distribution".
    That makes things less flexible than the Linux world.

    That's debatable. Debian for a while had a kFreeBSD port (with
    a variant of the FreeBSD kernel separate from FreeBSD proper), and
    from what I recall, it was discontinued due to lack of volunteers,
    not lack of flexibility.

    For example, while base Debian itself may support something under a
    dozen architectures, there are offshoots of Debian that cover others.

    How is this observation helpful?

    Suppose someone asks, "what OS would you recommend for running
    on loongarch?" and the best answer we here on Usenet can give
    is along the lines of "NetBSD won't work, but there're dozens
    of Debian offshoots around - be sure to check them all, as one
    might happen to support it." Really?

    If you know of Debian offshoots that support architectures
    that Debian itself doesn't, could you please list them here?
    Or, if there's already a list somewhere, share a pointer.

    The way I see it, it's the /kernel/ that it takes the most effort
    to port to a new platform - as it's where the support for peripherals
    lives, including platform-specific ones.

    Given that the Linux kernel supports more of these different
    platforms than any BSD can manage, I think you're just reinforcing
    my point.

    Certainly - if your point is that way more effort went into
    Linux over the past two to three decades than in any of BSDs.
    (And perhaps into /all/ of free BSDs combined, I'd guess.)

    But I still think that if you're interested in understanding how
    your OS works - at the source code level - you'd be better with
    NetBSD than with a Linux-based OS.

    Linux separates the kernel from the userland. That makes things
    simpler than running everything together, as the BSDs do.

    I fail to see why developing the kernel and an OS based on it
    as subprojects to one "umbrella" project would in any way hinder
    code readability.

    Just in case it somehow matters, there're separate tarballs under
    rsync://rsync.netbsd.org/NetBSD/NetBSD-10.1/source/sets/ for the
    kernel (syssrc.tgz) and userland (src, gnusrc, sharesrc, xsrc.)

    That said, I've last tinkered with Linux around the days of
    2.0.36 (IIRC), and I don't recall reading any Linux sources
    newer than version 4. If you have experience patching newer
    Linux kernels, and in particular if you find the code easy to
    follow, - please share your observations.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.misc on Fri Sep 5 00:03:17 2025
    From Newsgroup: comp.lang.misc

    On Thu, 04 Sep 2025 18:50:29 +0000, Ivan Shmakov wrote:

    - I'd hesitate to call Xen at large "Linux-based." If anything,
    there's way more of Linux in the GNU Mach microkernel (consider the linux/src/drivers subtree in [3], for instance) than in the Xen
    hypervisor.

    Call it what you like, the fact is, Linux supports it without having
    to list it as a separate platform.

    You could argue equally well that NetBSD is not "BSD" any more,
    because it has diverged too far from the original BSD kernel.

    That, however, doesn't mean you can use Linux /by itself/ outside of
    a distribution.

    How do you think distributions get created in the first place?

    <https://linuxfromscratch.org/>

    Suppose someone asks, "what OS would you recommend for running on
    loongarch?" and the best answer we here on Usenet can give is

    <https://distrowatch.com/search.php?ostype=All&category=All&origin=All&basedon=All&notbasedon=None&desktop=All&architecture=loongarch64&package=All&rolling=All&isosize=All&netinstall=All&language=All&defaultinit=All&status=Active#simpleresults>
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.lang.misc on Fri Sep 5 12:02:09 2025
    From Newsgroup: comp.lang.misc

    In article <KKx97WvtTkldzxgb@violet.siamics.net>,
    Ivan Shmakov <ivan@siamics.netREMOVE.invalid> wrote:
    On 2025-08-30, Lawrence D'Oliveiro wrote:

    FYI, you are arguing with a known troll. It is unlikely to turn
    into a productive exercise, so caveat emptor.

    [snip]
    I mean that Xen runs an actual Linux kernel in the hypervisor,
    and supports regular Linux distros as guests -- they don't need to
    be modified to specially support Xen, or any other hypervisor.

    It's been well over a decade since I've last used Xen, so I'm
    going more by http://en.wikipedia.org/wiki/Xen than experience.

    But just to be sure, I've checked the sources [1], and while
    I do see portions of Linux code reused here and there - such as,
    say, [2] below - I'd hesitate to call Xen at large "Linux-based."
    If anything, there's way more of Linux in the GNU Mach microkernel
    (consider the linux/src/drivers subtree in [3], for instance)
    than in the Xen hypervisor. (And I don't recall GNU Mach being
    called "Linux-based.")

    To note is that there seem to be no mention in CHANGELOG.md of
    anything suggesting that Xen uses Linux as its upstream project.

    This is basically correct. Xen falls into the broad category
    known as "Type-1" hypervisors: meaning that Xen runs
    directly on the bare metal outside of the context of an existing
    OS (versus, say, KVM, Bhyve, etc). It is true that Xen was
    centered on Linux initially, and pulled in a lot of the code; I
    think it's fair to say that early versions largely started with
    (and in many ways were based on) the Linux kernel, but it has
    clearly gone its own way.

    In the Type-1 model, you still need some software component that
    lets you do stuff like configure virtual machines, provide
    device models to guests, and so on. It's common to provide a
    specially blessed VM instance (Dom0 in Xen; a "root VM" in
    Hyper-V) to do this.

    * common/notifier.c
    *
    * Routines to manage notifier chains for passing status changes to any
    * interested routines.
    *
    * Original code from Linux kernel 2.6.27 (Alan Cox [...])

    [1] http://downloads.xenproject.org/release/xen/4.20.1/xen-4.20.1.tar.gz
    [2] xen-4.20.1/xen/common/notifier.c
    [3] git://git.sv.gnu.org/hurd/gnumach.git rev. 8d456cd9e417 from 2025-09-03

    It's Linux above, and Linux below -- Linux at every layer.

    Sure, if you want to run it that way. You can also run Xen
    with NetBSD at every layer, or, apparently, OpenSolaris.

    A GNU/Linux distribution AFAICR needs to provide Xen-capable
    kernel for it to be usable as dom0 - as well as Xen user-mode
    tools. Niche / lightweight distributions might omit such support.
    (There're a few build-time options related to Xen in Linux.)

    Also, Xen supports both hardware-assisted virtualization /and/ paravirtualization. On x86-32, the former is not available, so
    the Linux build /must/ support paravirtualization in order to be
    usable with Xen, dom0 or domU.

    When hardware-assisted virtualization /is/ available, the things
    certainly get easier: pretty much anything that can run under,
    say, Qemu, can be run under Xen HVM. The performance may suffer,
    though, should your domU system happen to lack virtio drivers and
    should thus need to resort to using emulated peripherals instead.

    Yes. With Xen, you've got the Xen VMM running on the bare metal
    and then any OS capable of supporting Xen's Dom0 requirements
    running as Dom0, and essentially any OS running as a DomU guest.

    So to summarize, you've got a hypervisor that descended from an
    old version of Linux, but was heavily modified, running a gaggle
    of other systems, none of which necessarily needs to be Linux.

    NetBSD supports running as both Xen domU (unprivileged) /and/
    dom0 (privileged.)

    Linux doesn't count these as separate platforms. They're just
    considered a standard part of regular platform support.

    Which means one needs to be careful when comparing architecture
    support between different kernels.

    I gathered your point was that neither Dom0 nor DomU _had_ to be
    Linux, and that's true. Note that the troll likes to subtlely
    change the point that he's arguing.

    My point was that GNU/Linux distributions typically support less

    But that's an issue with the various distributions, not with the
    Linux kernel itself.

    True. That, however, doesn't mean you can use Linux /by itself/
    outside of a distribution. (Unless, of course, you're looking
    for a kernel for a new distribution, but I doubt that undermines
    my point.) So architecture support /you/ will have /will/ be
    limited by the distribution you choose, regardless of what Linux
    itself might offer.

    In the BSD world, there is no separation of "kernel" from
    "distribution". That makes things less flexible than in the Linux
    world.

    That's debatable. Debian for a while had a kFreeBSD port (with
    a variant of the FreeBSD kernel separate from FreeBSD proper), and
    from what I recall, it was discontinued due to lack of volunteers,
    not lack of flexibility.

    For example, while base Debian itself may support something under a
    dozen architectures, there are offshoots of Debian that cover others.

    How is this observation helpful?

    Suppose someone asks, "what OS would you recommend for running
    on loongarch?" and the best answer we here on Usenet can give
    is along the lines of "NetBSD won't work, but there're dozens
    of Debian offshoots around - be sure to check them all, as one
    might happen to support it." Really?

    If you know of Debian offshoots that support architectures
    that Debian itself doesn't, could you please list them here?
    Or, if there's already a list somewhere, share a pointer.

    The way I see it, it's the /kernel/ that it takes the most effort
    to port to a new platform - as it's where the support for peripherals
    lives, including platform-specific ones.

    Given that the Linux kernel supports more of these different
    platforms than any BSD can manage, I think you're just reinforcing
    my point.

    Certainly - if your point is that way more effort went into
    Linux over the past two to three decades than in any of BSDs.
    (And perhaps into /all/ of free BSDs combined, I'd guess.)

    But I still think that if you're interested in understanding how
    your OS works - at the source code level - you'd be better with
    NetBSD than with a Linux-based OS.

    Linux separates the kernel from the userland. That makes things
    simpler than running everything together, as the BSDs do.

    I fail to see why developing the kernel and an OS based on it
    as subprojects to one "umbrella" project would in any way hinder
    code readability.

    Just in case it somehow matters, there're separate tarballs under rsync://rsync.netbsd.org/NetBSD/NetBSD-10.1/source/sets/ for the
    kernel (syssrc.tgz) and userland (src, gnusrc, sharesrc, xsrc.)

    That said, I've last tinkered with Linux around the days of
    2.0.36 (IIRC), and I don't recall reading any Linux sources
    newer than version 4. If you have experience patching newer
    Linux kernels, and in particular if you find the code easy to
    follow, - please share your observations.

    He doesn't.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ivan Shmakov@ivan@siamics.netREMOVE.invalid to comp.lang.misc on Sun Sep 7 15:55:43 2025
    From Newsgroup: comp.lang.misc

    On 2025-09-05, Lawrence D'Oliveiro wrote:
    On Thu, 04 Sep 2025 18:50:29 +0000, Ivan Shmakov wrote:

    I'd hesitate to call Xen at large "Linux-based." If anything,
    there's way more of Linux in the GNU Mach microkernel (consider
    the linux/src/drivers subtree in [3], for instance) than in the
    Xen hypervisor.

    Call it what you like, the fact is, Linux supports it without
    having to list it as a separate platform.

    I can't say I can quite grasp the importance of doing it one
    way or another, but well, I've been loosely working on my own
    "Debian offshoot" over the past few years, and should it ever
    come to a release, I'll be sure to test it with Xen and then
    list "Xen on amd64" alongside "amd64 on bare metal" in its list
    of supported platforms - NetBSD-style.

    You could argue equally well that NetBSD is not "BSD" any more,
    because it has diverged too far from the original BSD kernel.

    That's a good point, actually: as originally defined, "BSD" meant
    "Berkeley Software Distribution," and given that little (if any)
    work on NetBSD is (AIUI) currently being done at UCB, I'd say
    that yes, NetBSD is not "BSD" - and likely never has been.

    (Similarly, I find claims that "Debian is a free Unix" to be
    misleading: "GNU's Not Unix" is right on the cover, after all.)

    NetBSD is a descendant of 386BSD (as, AIUI, are all current
    "BSDs"), itself a descendant of 4.3BSD, so there /is/ a kind
    of continuity. (And likely bits of actual 4.3BSD code within
    NetBSD sources.) No idea if it's of much importance to anyone
    but OS historians.

    That, however, doesn't mean you can use Linux /by itself/ outside
    of a distribution. (Unless, of course, you're looking for a kernel
    for a new distribution, but I doubt that undermines my point.)

    How do you think distributions get created in the first place?

    <https://linuxfromscratch.org/>

    Like I've said, I doubt that undermines my point: you /still/
    choose among distributions rather than kernels, even if one
    (or more) of those distributions is of your own creation.

    When, two decades ago, I put together my own "distribution"
    (I never actually /distributed/ it, hence the quotes), the
    only CPU architecture it supported was "i386" - as that was the
    only one I had at hand and could test on. How many others
    Linux supported at the time, I had no idea - nor any reason
    to look into it: they were simply out of my reach - and thus
    out of my concern - at the time.

    The aforementioned Debian derivative I'm working on currently
    only supports amd64, though I hope to add riscv64 and (or) arm64
    support eventually. From where I stand, adding support for
    anything beyond that (and especially architectures that aren't
    in Debian, and for which I thus cannot reuse Debian packages)
    is too much effort for too uncertain a gain.

    (Reportedly "i386" support is important for running Steam on
    Debian, but guess what? I use GOG.)

    Sure, it'd be nice to have a Debian derivative to run on my i586
    boxes (not supported after Jessie), but that's lots of effort,
    too - and then there's NetBSD that's already "486DX or better."

    With the above in mind, well, I'm willing to bet that if you
    ever put together your own distribution, it won't support every
    architecture Linux itself claims to support, either.

    Suppose someone asks, "what OS would you recommend for running on
    loongarch?" and the best answer we here on Usenet can give is

    <https://distrowatch.com/search.php?ostype=All&[...]>

    ... Or, in other words: "don't ask for recommendations here on
    Usenet, ask a website instead." What is Usenet even here for,
    then? Rants?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ivan Shmakov@ivan@siamics.netREMOVE.invalid to comp.lang.misc on Sun Sep 7 16:30:42 2025
    From Newsgroup: comp.lang.misc

    On 2025-09-05, Dan Cross wrote:
    In article <KKx97WvtTkldzxgb@violet.siamics.net>, Ivan Shmakov wrote:
    On 2025-08-30, Lawrence D'Oliveiro wrote:

    FYI, you are arguing with a known troll. It is unlikely to turn
    into a productive exercise, so caveat emptor.

    I'm inclined to define productive public discussion as one
    that's informative and interesting to read. Given that I've
    actually ended up learning a couple of things along the way,
    I'd say it /was/ productive, in a way.

    With no "views" and "likes" counts here on Usenet, I have no way
    of measuring how interesting the subthread was to others (being
    ill-suited for the group as it is), so I kinda hope for the best.

    When hardware-assisted virtualization /is/ available, the things
    certainly get easier: pretty much anything that can run under,
    say, Qemu, can be run under Xen HVM. The performance may suffer,
    though, should your domU system happen to lack virtio drivers and
    should thus need to resort to using emulated peripherals instead.

    Yes. With Xen, you've got the Xen VMM running on the bare metal and
    then any OS capable of supporting Xen's Dom0 requirements running as
    Dom0, and essentially any OS running as a DomU guest.

    So to summarize, you've got a hypervisor that descended from an
    old version of Linux, but was heavily modified, running a gaggle
    of other systems, none of which necessarily needs to be Linux.

    Glad to know I wasn't too off the mark in this case.

    Linux doesn't count these as separate platforms. They're just
    considered a standard part of regular platform support.

    Which means one needs to be careful when comparing architecture
    support between different kernels.

    I gathered your point was that neither Dom0 nor DomU _had_ to be
    Linux, and that's true.

    More to the point here is that my opponent took offense at
    http://netbsd.org/ports/ listing "Xen" as one of the supported
    "platforms" - apparently for the sole reason that Linux does
    it differently.

    Note that the troll likes to subtly change the point that he's
    arguing.

    Well, in a properly set up public debate, there ought to be
    a prior agreement on who's arguing what. This is Usenet, however,
    so we all figure out what points we do and do not want to argue
    along the way. I doubt I can rightfully blame a person for not
    sharing my preferences about what to argue about - especially
    as I don't pay them for having an argument with me.

    That said, I've last tinkered with Linux around the days of 2.0.36
    (IIRC), and I don't recall reading any Linux sources newer than
    version 4. If you have experience patching newer Linux kernels, and
    in particular if you find the code easy to follow, - please share.

    He doesn't.

    That's what I suspect as well. I'd still be delighted to be
    proven wrong.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.misc on Sun Sep 7 21:17:02 2025
    From Newsgroup: comp.lang.misc

    On Sun, 07 Sep 2025 15:55:43 +0000, Ivan Shmakov wrote:

    On Fri, 5 Sep 2025 00:03:17 -0000 (UTC), Lawrence D'Oliveiro wrote:

    On Thu, 04 Sep 2025 18:50:29 +0000, Ivan Shmakov wrote:

    I'd hesitate to call Xen at large "Linux-based."

    Call it what you like, the fact is, Linux supports it without
    having to list it as a separate platform.

    I can't say I can quite grasp the importance of doing it one way or
    another ...

    The fact that NetBSD has to list it as a separate platform to get its
    count up.

    Also:

    [09:10 xcp-ng-126 ~]# uname -a
    Linux xcp-ng-126 4.19.0+1 #1 SMP Tue May 6 15:24:43 CEST 2025 x86_64 x86_64 x86_64 GNU/Linux

    ... you /still/ choose among distributions rather than kernels ...

    The fact that all Linux distros share essentially the same kernel
    makes it much easier to interoperate and also switch between them:
    "distro-hopping" is a common activity in the Linux world; it's not
    something that can be encouraged in the BSD world.

    ... Or, in other words: "don't ask for recommendations here on
    Usenet, ask a website instead."

    You asked for information, clearly in the expectation that it would
    not be forthcoming. I gave you the information, now you find another
    reason to complain?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.lang.misc on Sat Oct 4 01:11:29 2025
    From Newsgroup: comp.lang.misc

    Andy Walker <anw@cuboid.co.uk> wrote:
    On 20/08/2025 01:43, Lawrence D'Oliveiro wrote:
    [I wrote:]
    From time to time I wonder what would happen if we ran
    7th Edition Unix on a modern computer.
    The Linux kernel source is currently over 40 million lines, and I
    understand the vast majority of that is device drivers.

    You seem to be making Janis's point, but that doesn't seem to
    be your intention?

    If you were to run an old OS on new hardware, that would need drivers for
    that new hardware, too.

    Yes, but what is so special about a modern disc drive, monitor, keyboard, mouse, ... that it needs "the vast majority" of 40M lines more
    than its equivalent for a PDP-11? Does this not again make Janis's point?

    Lawrence gave a good list of things, but let me note a few additional
    aspects. First, there are _a lot_ of different drivers. In PDP-11
    times there was a short list of available devices. Now there are a lot
    of different devices on the market, and each one potentially needs a
    specialised driver in the kernel. Linux actually does a good job here:
    a typical driver handles a group of similar devices, and various tables
    and conditionals in the code take care of the differences between
    devices in the group. But still, there are a lot of different drivers,
    and the generality of a single driver means it is more complicated
    than a driver for a single device.

    On my desktop, kernel boot messages say "14342K kernel code".
    Nominally assuming 10 bytes per source line, it means about 1.4
    million lines of running code, so a relatively small part of the
    total kernel source. Note that this is the generic kernel provided
    by the distribution (Debian), which is supposed to run "on any PC".
    Compiling a specialised kernel, one can exclude various features.
    I did not try this with recent kernels for the PC, but past
    experience indicates that by excluding features and device support
    one can get a substantially smaller kernel. I would guess 4MB for a
    PC kernel with a reasonable set of features, and possibly smaller if
    one decides to drop even essential features.
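
    The estimate above can be checked with a couple of lines (a sketch;
    both figures are the post's assumptions, not measurements):

```python
# 14342K of kernel code at roughly 10 bytes of machine code per source
# line, as the post assumes.
kernel_code_kib = 14342
bytes_per_source_line = 10  # the post's rule of thumb

estimated_lines = kernel_code_kib * 1024 // bytes_per_source_line
print(estimated_lines)  # about 1.4-1.5 million lines
```

    which indeed comes out around 1.4 million lines, a small fraction of
    the 40-million-line source tree mentioned earlier in the thread.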

    Second, modern devices frequently require complex setup and control.
    Let me mention an issue that probably does not affect PCs but
    indicates the tendency. Namely, I deal with small microcontrollers.
    One may wish to attach an LCD screen to such a microcontroller. To
    set up one particular screen, one needs to send about 100 numerical
    parameters to the chip controlling the display. Some parameters can
    be taken from a simple table, but some may vary and need to be
    computed by an initialization routine. Once initialized, the display
    behaves as a dumb frame buffer, so it is rather simple to drive.
    Compared to that, the initialization code is surprisingly complex.
    Coming back to PC devices, such devices frequently have complex
    initialization. Actual operation may be more complex than with
    earlier devices. Simply, device manufacturers tend to move
    complexity into driver software.

    Third, we live in the era of multicore machines. The OS is supposed
    to efficiently utilize all cores and share work between them. This
    requires a lot of carefully placed locks and, when possible, special
    code sequences using ordinary or atomic instructions that work
    when run in parallel on different cores. That leads to bigger
    code, especially at the source level.

    Granted that the advent of 32- and 64-bit integers and addresses
    makes some programming much easier, and that we can no longer expect
    browsers and other major tools to fit into 64+64K bytes, is the actual
    bloat in any way justified?

    I think that the Linux kernel actually is in a somewhat reasonable
    range. An early version of Linux would take, say, 1MB of the 8MB
    available in the machine. Modern Linux takes more, but it is a tiny
    part of the whole available memory. And given the growth in
    functionality, the growth in size of the kernel binary does not
    look so bad.

    I think that comparisons with early mainframes or the PDP-11 are
    misleading in the sense that on early machines programmers struggled
    to fit programs into available memory. A common technique was keeping
    data on disc and having multiple sequential passes. The program
    itself could be split into several overlays. Use of overlays
    essentially vanished with the introduction of virtual memory coupled
    with multimegabyte real RAM. More relevant are comparisons
    with the VAX and early Linux.

    AFAICS bloat happens mostly at the user level. One reason is the
    more friendly attitude of modern programs: instead of numeric error
    codes, programs contain actual error messages. Internationalization
    and Unicode add to this. GUIs demand event-driven programming,
    which leads to much more complicated code than in earlier programs.

    Sometimes programmers get lazy. I remember an essay about the
    Netscape mail index. I fetched it from the net, and my impression
    was that the author was J. Zawinski, but I cannot find it on the net
    now. Anyway, the essay explained tricks used to save the mail index
    in rather compact space, ensuring fast loading. In later versions
    programmers replaced the custom code by a generic database, leading
    to something like 50 times bigger space use and slower loading
    of the mail index. The overall code footprint probably also
    increased, as after the change it included database-handling code.
    But there was no need to maintain the previous custom code.

    One reason that modern systems are big and bloated is recursive
    pulling of dependencies. Namely, there is a tendency to delegate
    work to libraries and, more generally, to depend on "standard"
    tools. But this in turn creates pressure on libraries and
    tools to cover "all" use cases and in particular to include
    rarely used functionality. In turn, to provide a lot of
    functionality, libraries and tools delegate part or most of the
    work to different tools. This leads to complex dependency nets
    where most of the functionality in the total package (that is,
    including all dependencies) is not needed for a concrete
    application.

    It's not just kernels and user software --
    it's also the documentation. In V7, "man cc" generates just under two
    pages of output; on my current computer, it generates over 27000 lines,
    call it 450 pages, and is thereby effectively unprintable and unreadable,
    so it is largely wasted.

    For V7, the entire documentation fits comfortably into two box
    files, and the entire source code is a modest pile of lineprinter output. Most of the commands on my current computer are undocumented and unused,
    and I have no idea at all what they do.

    Hmm, on my machine '/usr/bin' contains 2547 commands. IIRC a
    "minimal" install gives some hundreds of commands, so most commands
    are from packages that I explicitly installed, or their
    dependencies. Of course, I remember only a minority of the
    commands, but 'man' gives me information about many of them (my
    impression is that it's the majority, but I did not check this). I
    can easily find out which package installed a given command, and
    packages have at least a short description, so even if I forget why
    I installed a given package, it is usually easy to find out the
    general purpose of a command.
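
    The count itself takes only a few lines of Python (a sketch;
    `count_commands` is a made-up helper name, and the 2547 figure is of
    course specific to the machine in question):

```python
import os

def count_commands(bindir="/usr/bin"):
    """Return the number of directory entries (commands) in bindir."""
    return len(os.listdir(bindir))

# Example (output depends on the machine):
# print(count_commands())
```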

    Yes, I know how that "just happens", and I'm observing rather
    than complaining [I'd rather write programs, browse and send/read
    e-mails on my current computer than on the PDP-11]. But it does all
    give food for thought.

    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.misc on Sat Oct 4 03:37:35 2025
    From Newsgroup: comp.lang.misc

    On Sat, 4 Oct 2025 01:11:29 -0000 (UTC), Waldek Hebisch wrote:

    On my desktop, kernel boot messages say "14342K kernel code".
    Nominally assuming 10 bytes per source line, it means about 1.4
    million lines of running code, so a relatively small part of the
    total kernel source.

    Sounds close to the figure that Greg Kroah-Hartman gave. He also said that
    a Linux kernel build for an Android device requires about 3× that amount
    of code.

    That little device in your pocket/purse is a whole lot more complex than
    the ones on your desktop or in your data centre. ;)
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Tue Oct 7 22:03:07 2025
    From Newsgroup: comp.lang.misc

    On 04/10/2025 02:11, Waldek Hebisch wrote:
    Andy Walker <anw@cuboid.co.uk> wrote:
    On 20/08/2025 01:43, Lawrence D'Oliveiro wrote:
    [...]
    If you were to run an old OS on new hardware, that would need
    drivers for that new hardware, too.
    Yes, but what is so special about a modern disc drive, monitor,
    keyboard, mouse, ... that it needs "the vast majority" of 40M lines more
    than its equivalent for a PDP-11? Does this not again make Janis's point?
    Lawrence gave a good list of things, but let me note a few additional
    aspects. First, there are _a lot_ of different drivers. In PDP-11
    times there was a short list of available devices. Now there are a lot
    of different devices on the market, and each one potentially needs a
    specialised driver in the kernel. [...]

    Yes, but one would expect that to drive standardisation rather
    than bloat. There are rather a lot of devices that I can plug into the
    mains in my home, but I don't have to install hundreds or thousands of different types of socket.
    I think that comparisons with early mainframes or the PDP-11 are
    misleading in the sense that on early machines programmers struggled
    to fit programs into available memory. A common technique was keeping
    data on disc and having multiple sequential passes. The program
    itself could be split into several overlays. Use of overlays
    essentially vanished with the introduction of virtual memory coupled
    with multimegabyte real RAM. More relevant are comparisons
    with the VAX and early Linux.
    I would take issue with some of the historical aspects, but it
    would take us on a long detour. Just one comment: we've had virtual
    memory since 1959 [Atlas].

    AFAICS bloat happens mostly in user level. One reason is more
    friendly attitude of modern programs: instead of numeric error
    codes programs contains actual error messages.

    The systems I've used have always used actual error messages!

    [...]
    One reason that modern systems are big and bloated is recursive
    pulling of dependencies. Namely, there is a tendency to delegate
    work to libraries and, more generally, to depend on "standard"
    tools. But this in turn creates pressure on libraries and
    tools to cover "all" use cases and in particular to include
    rarely used functionality.

    Yes, but that's the sort of pressure that needs to be
    resisted; and isn't,

    [...]
    Hmm, on my machine '/usr/bin' contains 2547 commands. IIRC a
    "minimal" install gives some hundreds of commands, so most commands
    are from packages that I explicitly installed, or their dependencies.

    I have 2580 in my "/usr/bin". That is almost all from the
    "medium (recommended)" installation; a handful of others have been
    added when I've found something missing (I'd guess perhaps 10). Of
    those I've actually used just 64! [Plus 26 in "$HOME/bin".] I
    checked a random sample of those 2580; more than 2/3 I have no
    idea from the name what they are for [yes, I know I can find out],
    and I'm an experienced Unix user with much more CS knowledge than
    the average punter. If I were to read an introductory book on
    Linux, I doubt whether many more than those 64 would be mentioned,
    so I wouldn't even be pointed at the "average" command.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Necke
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.misc on Tue Oct 7 22:04:43 2025
    From Newsgroup: comp.lang.misc

    On Tue, 7 Oct 2025 22:03:07 +0100, Andy Walker wrote:

    On 04/10/2025 02:11, Waldek Hebisch wrote:

    In PDP-11 times there was a short list of available devices. Now
    there are a lot of different devices on the market and each one
    potentially needs a specialised driver in the kernel. [...]

    Yes, but one would expect that to drive standardisation rather than
    bloat. There are rather a lot of devices that I can plug into the
    mains in my home, but I don't have to install hundreds or thousands
    of different types of socket.

    Most of your electronic devices would not plug directly into the
    mains; they would likely use some kind of DC adaptor/charger. How many
    of those do you have?

    You are trying to make an argument by analogy, and that is already
    heading for a pitfall. Those power connections you talk about are for transferring energy, not for transferring information. Information
    transfer is a much more complex business.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Wed Oct 8 01:27:02 2025
    From Newsgroup: comp.lang.misc

    On 08.10.2025 00:04, Lawrence D'Oliveiro wrote:
    On Tue, 7 Oct 2025 22:03:07 +0100, Andy Walker wrote:
    On 04/10/2025 02:11, Waldek Hebisch wrote:

    In PDP-11 times there was a short list of available devices. Now
    there are a lot of different devices on the market and each one
    potentially needs a specialised driver in the kernel. [...]

    Yes, but one would expect that to drive standardisation rather than
    bloat. There are rather a lot of devices that I can plug into the
    mains in my home, but I don't have to install hundreds or thousands
    of different types of socket.

    Most of your electronic devices would not plug directly into the
    mains, they would likely use some kind of DC adaptor/charger. How many
    of those do you have?

    You are trying to make an argument by analogy, and that is already
    heading for a pitfall. Those power connections you talk about are for transferring energy, not for transferring information. Information
    transfer is a much more complex business.

    While analogies have, as you correctly note, an inherent problem,
    the key statement - to aim towards standardization, especially
    if computer scientists and experienced IT folks are involved! - is
    valid and should be emphasized and supported (instead of fostering
    pointless excuses, mollifying ourselves over the bad quality and
    lousy design of quite some of the software we suffer from today).

    Janis
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.lang.misc on Wed Oct 8 14:03:50 2025
    From Newsgroup: comp.lang.misc

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sat, 30 Aug 2025 19:39:49 +0000, Ivan Shmakov wrote:

    Not to mention that taking too long to 'polish' your product, you
    risk ending up lagging behind your competitors.

    I would say, the open-source world is a counterexample to this. Look at
    how long it took GNU and Linux to end up dominating the entire
    computing landscape -- it didn't happen overnight.

    Actually, open source nicely illustrates this. The first advice to
    open source projects is "release early, release often". Projects
    that delay release because they are "not ready" typically lose
    and eventually die.

    Open source projects typically want to offer high quality. But
    they have to limit their efforts to meet release schedules.
    There are compromises over which known bugs get fixed: some are
    deemed serious enough to block a new release, but a lot get shipped.
    There is internal testing, but a significant part of the problems
    gets discovered only after release.

    One can significantly increase quality by limiting the addition of
    new features. But open source projects that try to do this
    typically lose.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Wed Oct 8 16:21:50 2025
    From Newsgroup: comp.lang.misc

    On 08.10.2025 16:03, Waldek Hebisch wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sat, 30 Aug 2025 19:39:49 +0000, Ivan Shmakov wrote:

    Not to mention that taking too long to 'polish' your product, you
    risk ending up lagging behind your competitors.

    I would say, the open-source world is a counterexample to this. Look at
    how long it took GNU and Linux to end up dominating the entire computing
    landscape -- it didn't happen overnight.

    Actually, open source nicely illustrates this. The first advice to
    open source projects is "release early, release often". Projects
    that delay release because they are "not ready" typically lose
    and eventually die.

    Open source projects typically want to offer high quality. But
    they have to limit their efforts to meet release schedules.
    There are compromises over which known bugs get fixed: some are
    deemed serious enough to block a new release, but a lot get shipped.
    There is internal testing, but a significant part of the problems
    gets discovered only after release.

    One can significantly increase quality by limiting the addition of
    new features. But open source projects that try to do this
    typically lose.

    We can observe that software grows, and grows rank. My experience
    is that it makes sense to plan and occasionally add refactoring
    cycles in these cases. (There's also software planned accurately
    from the beginning, software that changes less and is only used
    for its fixed, designed purpose. But we're not speaking about that
    here.) A principal advantage of the "open-source world" (or rather
    the non-commercial world) is that there's neither competition nor
    a need to quickly throw things onto the market. So this area has at
    least the chance to adapt plans and contents without time pressure.
    Whether that's done is another question (and project-specific). It
    should also be mentioned that some projects have e.g. security or
    quality requirements that get tested and measured and require some
    adaptive process to improve these factors (without adding anything
    new).

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From ram@ram@zedat.fu-berlin.de (Stefan Ram) to comp.lang.misc on Wed Oct 8 14:53:37 2025
    From Newsgroup: comp.lang.misc

    antispam@fricas.org (Waldek Hebisch) wrote or quoted:
    Actually, open source nicely illustrates this. The first advice to
    open source projects is "release early, release often".

    I had thought about using this for my projects, but I can see
    the downsides too:

    If some projects drop too early, they still barely have any
    capabilities. The first curious potential users check it out and
    walk away thinking, "a toy product and not the skills that actually
    matter in practice". That vibe can stick around - "You don't get
    a second shot at a first impression." - and end up keeping people
    from giving the later, more capable versions a chance.

    Projects that delay release because they are "not ready"
    typically lose and eventually die.

    Exaggerated.

    The TeX program itself is currently at version 3.141592653
    and was last updated in 2021. It is one of the most successful
    programs ever and the market leader for scientific articles
    and books that include math formulas.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From John Ames@commodorejohn@gmail.com to comp.lang.misc on Wed Oct 8 08:08:22 2025
    From Newsgroup: comp.lang.misc

    On Wed, 8 Oct 2025 16:21:50 +0200
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:

    A principal advantage of the "open-source world" (or rather the
    non-commercial world) is that there's neither competition nor need to
    quickly throw things into the market. So this area has at least the
    chance to adapt plans and contents without time pressure.

    Yes and no. There's definitely *less* pressure to make a specific
    release window and a project won't necessarily die just because there's
    no money in it - but large-scale open-source projects do compete for "mindshare" among open-source developers, who are a large but finite
    group with a finite amount of time and energy to sink into them.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From ram@ram@zedat.fu-berlin.de (Stefan Ram) to comp.lang.misc on Wed Oct 8 15:24:01 2025
    From Newsgroup: comp.lang.misc

    ram@zedat.fu-berlin.de (Stefan Ram) wrote or quoted:
    "You don't get a second shot at a first impression."

    And then there's the inertia. Once you publish something, you
    also want to put out all the side stuff like manuals and tests.

    But those, along with the users, just add more weight.

    Every little tweak means updating the docs again, and the
    users complain whenever something changes that they just got
    used to. Without all that mass and pressure, you can move a
    lot faster and try different things until it slowly becomes
    clear what really works best.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.lang.misc on Wed Oct 8 17:04:41 2025
    From Newsgroup: comp.lang.misc

    Andy Walker <anw@cuboid.co.uk> wrote:
    On 04/10/2025 02:11, Waldek Hebisch wrote:
    Andy Walker <anw@cuboid.co.uk> wrote:
    On 20/08/2025 01:43, Lawrence D'Oliveiro wrote:
    [...]
    If you were to run an old OS on new hardware, that would need drivers for
    that new hardware, too.

    Yes, but what is so special about a modern disc drive, monitor,
    keyboard, mouse, ... that it needs "the vast majority" of 40M lines more
    than its equivalent for a PDP-11? Does this not again make Janis's point?

    Lawrence gave a good list of things, but let me note a few additional
    aspects. First, there are _a lot_ of different drivers. In PDP-11
    times there was a short list of available devices. Now there are a lot
    of different devices on the market and each one potentially needs a
    specialised driver in the kernel. [...]

    Yes, but one would expect that to drive standardisation rather
    than bloat. There are rather a lot of devices that I can plug into the
    mains in my home, but I don't have to install hundreds or thousands of different types of socket.

    Note that a lot of companies do not want to compete in a commodity
    market. They want to collect "extraordinary added value" by
    innovating. But most innovation is either in drivers or needs
    driver support. So companies have motivation to limit use of
    their drivers to their own devices. In other words they _want_ to be
    incompatible with other devices. Of course, this is mitigated
    by total development cost. But there is a bunch of gratuitous
    incompatibilities. There are independently developed products,
    again, development is frequently secret, and when devices enter
    the market they need separate drivers.

    I think that comparisons with early mainframes or the PDP-11 are
    misleading in the sense that on early machines programmers struggled
    to fit programs into available memory. A common technique was keeping
    data on disc and making multiple sequential passes. The program
    itself could be split into several overlays. Use of overlays
    essentially vanished with the introduction of virtual memory coupled
    with multimegabyte real RAM. More relevant are comparisons
    with the VAX and early Linux.
    I would take issue with some of the historical aspects, but it
    would take us on a long detour. Just one comment: we've had virtual
    memory since 1959 [Atlas].

    1) You need enough real memory to make virtual memory really effective.
    There are things that you really want to keep in real RAM.
    2) The fact that some big or innovative machine allows things to be
    done better is not enough. Programmers typically work with available
    hardware, so hardware/software support must be popular enough
    to matter for programming practice.

    AFAICS bloat happens mostly at user level. One reason is the more
    friendly attitude of modern programs: instead of numeric error
    codes, programs contain actual error messages.

    The systems I've used have always used actual error messages!

    Well, the first trick to fit a program into available memory that I
    saw was to eliminate textual messages, debugging support, etc.
    That was especially so on small systems without a hard drive
    (so the trick of storing messages on disk was not possible).

    I already mentioned overlays, but let me add that they frequently
    were combined with a multipass approach. Having fewer passes gives
    better speed as long as you have enough memory.

    Another trick used bytecode interpreters. The interpreter itself can
    be quite small, and well-chosen bytecode is typically much smaller
    than machine code. But execution needs more CPU time. With
    careful tuning and rewriting of critical parts in machine code
    one can get a reasonable result, at the cost of appropriate programmer
    effort.
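    The bytecode trick described above can be sketched in a few lines of C
    (the opcodes, the encoding, and the 'run' helper are invented for
    illustration, not taken from any particular historical interpreter):
    the whole dispatch loop is tiny, and each instruction occupies only
    one or two bytes instead of a full machine-code sequence.

    ```c
    #include <stdio.h>

    /* Opcodes for a tiny stack machine (hypothetical, for illustration). */
    enum { OP_PUSH, OP_ADD, OP_MUL, OP_HALT };

    /* Execute bytecode until OP_HALT; return the top of the stack.
     * Each instruction is one byte, plus one operand byte for OP_PUSH. */
    static int run(const unsigned char *code)
    {
        int stack[64], sp = 0;
        for (const unsigned char *pc = code;;) {
            switch (*pc++) {
            case OP_PUSH: stack[sp++] = *pc++;               break;
            case OP_ADD:  sp--; stack[sp - 1] += stack[sp];  break;
            case OP_MUL:  sp--; stack[sp - 1] *= stack[sp];  break;
            case OP_HALT: return stack[sp - 1];
            }
        }
    }

    int main(void)
    {
        /* (2 + 3) * 4 encoded in nine bytes of bytecode. */
        const unsigned char prog[] = {
            OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PUSH, 4, OP_MUL, OP_HALT
        };
        printf("%d\n", run(prog));   /* prints 20 */
        return 0;
    }
    ```

    The interpreter costs extra dispatch time per instruction, which is
    exactly the size/speed trade-off described above.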

    [...]
    One reason that modern systems are big and bloated is recursive
    pulling of dependencies. Namely, there is a tendency to delegate
    work to libraries and more generally to depend on "standard"
    tools. But this in turn creates pressure on libraries and
    tools to cover "all" use cases and in particular to include
    rarely used functionality.

    Yes, but that's the sort of pressure that needs to be
    resisted; and isn't,

    Well, I am trying to limit dependencies for my software, which
    may mean writing my own function for some purpose instead of
    using a library. OTOH if a shared library is already loaded into RAM
    by another user, then the runtime cost of an additional user is
    rather low. And writing my own function means extra source code.
    So that is a tricky balance.

    Concerning rarely used functionality in libraries, on its own I do
    not think it is a big problem. I mean, having such code in a
    library means that sharing at the source-code level is possible,
    so it is likely to lead to a decrease in the total amount of
    source code needed to implement the functionality of modern
    systems.

    And I am trying to see things in context. People have
    a lot of computing needs, some really important, some of
    lower priority. Computing hardware is much cheaper than it
    was in the past, so the main barrier to satisfying "all" computing
    needs is the cost of creating software. Especially for less
    common needs it makes sense to use rather inefficient approaches,
    because doing things "properly" would require too much programming
    effort. So, instead we try to maximize the amount of functionality
    given available programmer effort and available computing
    resources. Which frequently means that we are happy that
    a program works at all and not bothered too much by bloat.

    Let me add that personally I am probably much less tolerant
    of bloat than average. But when I use open source software,
    should I complain that it uses more resources than strictly
    necessary? I certainly can not create all that software
    myself (or pay for its creation).

    [...]
    Hmm, on my machine '/usr/bin' contains 2547 commands. IIRC a "minimal"
    install gives some hundreds of commands, so most commands are from
    packages that I explicitly installed or their dependencies.

    I have 2580 in my "/usr/bin". That is almost all from the
    "medium (recommended)" installation; a handful of others have been
    added when I've found something missing (I'd guess perhaps 10). Of
    those I've actually used just 64! [Plus 26 in "$HOME/bin".] I
    checked a random sample of those 2580; more than 2/3 I have no
    idea from the name what they are for [yes, I know I can find out],
    and I'm an experienced Unix user with much more CS knowledge than
    the average punter. If I were to read an introductory book on
    Linux, I doubt whether many more than those 64 would be mentioned,
    so I wouldn't even be pointed at the "average" command.
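    For reference, the raw counts quoted in this exchange can be reproduced
    with a small C sketch (the 'count_entries' helper is a hypothetical
    name; a shell one-liner like 'ls /usr/bin | wc -l' does the same job):

    ```c
    #include <dirent.h>
    #include <stdio.h>

    /* Count directory entries, skipping dotfiles (and "." / "..").
     * Returns -1 if the directory cannot be opened. */
    static int count_entries(const char *path)
    {
        DIR *d = opendir(path);
        if (!d)
            return -1;
        int n = 0;
        struct dirent *e;
        while ((e = readdir(d)) != NULL)
            if (e->d_name[0] != '.')
                n++;
        closedir(d);
        return n;
    }

    int main(void)
    {
        printf("%d commands in /usr/bin\n", count_entries("/usr/bin"));
        return 0;
    }
    ```

    The counts differ from machine to machine, of course, which is the
    point of the comparison above.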

    I use Debian 12. "Bigger" install options tend to pull in GUI
    stuff that I do not want. I use LXDE and prefer to avoid KDE
    or Gnome. Of course there are Gnome/KDE programs that I use
    and that pull in corresponding shared libraries, but AFAICS I get
    fewer Gnome and KDE programs than I would get choosing one of them as a
    desktop environment. On other machines I did a minimal (non-GUI)
    install and it was much smaller. On a small machine I did a GUI
    install, but installed only a limited number of packages. On
    that machine I have 1274 programs in /usr/bin. So the majority
    of programs on my main machine go beyond a "small" GUI install.

    Note: I do not remember if I did a minimal install first and
    then added the GUI, or chose the GUI option during the initial install.
    The point is that this is a GUI that I actually use; most commands
    are invoked from menus but I start some GUI things from the command
    line.

    64 commands that you actually used looks very low. I am too lazy
    to count the commands that I use. But I have cross-development
    toolchains for AVR (27 commands), ARM (28 commands), and RISC-V
    (25 commands). I have both the normal x86_64 toolchain and
    support for the x32 variant (28 commands). The x86_64 toolchain installs
    things under duplicate names: 74 commands have the prefix
    'x86_64-linux-gnu-' and each probably has an unprefixed version.
    So counting only part of the development commands that I use,
    I have about 256 commands that I am reasonably likely to use
    (some commands come under multiple names; normally I would
    use only one name, but I may use other names, say, for ease of forming
    names in build scripts). Note that by default Debian installs
    essentially no development support (Debian installs 'cpp' for
    use by some other programs), so those are present because I
    need them.

    Also, note that stuff that used to be in '/bin' is now in
    '/usr/bin'. That is tens of simple commands that any serious
    Unix user is likely to use.

    Of course, if you take an "average user", then such a person may
    be unaware of the existence of the command line, so the number of
    commands that such a person explicitly uses would be 0. Even some
    developers apparently depend on GUI tools and are lost when
    they need, say, to write a Makefile. But a lot of behind-the-scenes
    machinery (and some commands) exists to allow such users to
    use computers productively.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.misc on Wed Oct 8 21:18:58 2025
    From Newsgroup: comp.lang.misc

    On Wed, 8 Oct 2025 08:08:22 -0700, John Ames wrote:

    ... large-scale open-source projects do compete for
    "mindshare" among open-source developers, who are a large but finite
    group with a finite amount of time and energy to sink into them.

    The "mindshare" is among the passive users who take what's given and
    complain about how it doesn't fit their needs.

    What's more important is the "contribushare" -- the active community that
    contributes to the project. That matters much more than sheer numbers of
    users.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From John Ames@commodorejohn@gmail.com to comp.lang.misc on Wed Oct 8 14:31:31 2025
    From Newsgroup: comp.lang.misc

    On Wed, 8 Oct 2025 21:18:58 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    ... large-scale open-source projects do compete for
    "mindshare" among open-source developers, who are a large but finite
    group with a finite amount of time and energy to sink into them.

    The "mindshare" is among the passive users who take what's given and
    complain about how it doesn't fit their needs.

    What's more important is the "contribushare" -- the active community
    that contributes to the project. That matters much more than sheer
    numbers of users.
    If that's the terminology you prefer, sure. The point stands.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.misc on Thu Oct 9 00:09:50 2025
    From Newsgroup: comp.lang.misc

    On Wed, 8 Oct 2025 14:31:31 -0700, John Ames wrote:

    On Wed, 8 Oct 2025 21:18:58 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    ... large-scale open-source projects do compete for "mindshare" among
    open-source developers, who are a large but finite group with a
    finite amount of time and energy to sink into them.

    The "mindshare" is among the passive users who take what's given and
    complain about how it doesn't fit their needs.

    What's more important is the "contribushare" -- the active community
    that contributes to the project. That matters much more than sheer
    numbers of users.

    If that's the terminology you prefer, sure. The point stands.

    You were talking about thinking, not doing. It's the doing that counts.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.lang.misc on Thu Oct 9 00:34:06 2025
    From Newsgroup: comp.lang.misc

    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    On 08.10.2025 16:03, Waldek Hebisch wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sat, 30 Aug 2025 19:39:49 +0000, Ivan Shmakov wrote:

    Not to mention that taking too long to 'polish' your product, you
    risk ending up lagging behind your competitors.

    I would say, the open-source world is a counterexample to this. Look at
    how long it took GNU and Linux to end up dominating the entire computing
    landscape -- it didn't happen overnight.

    Actually, open source nicely illustrates this. The first advice to
    open source projects is "release early, release often". Projects
    that delay release because they are "not ready" typically lose
    and eventually die.

    Open source projects typically want to offer high quality. But
    they have to limit their efforts to meet release schedules.
    There are compromises about which known bugs get fixed: some are deemed
    serious enough to block a new release, but a lot get shipped.
    There is internal testing, but a significant part of the problems
    get discovered only after release.

    One can significantly increase quality by limiting the addition of
    new features. But open source projects that try to do this
    typically lose.

    We can observe that software grows, and grows rank. My experience
    is that it makes sense to plan and occasionally add refactoring
    cycles in these cases. (There's also software planned accurately
    from the beginning, software that changes less, and is only used
    for its fixed designed purpose. But we're not speaking about that
    here.) A principal advantage of the "open-source world" (or rather
    the non-commercial world) is that there's neither competition nor
    need to quickly throw things into the market. So this area has at
    least the chance to adapt plans and contents without time pressure.

    What you wrote corresponds to a one-man hobby project. There are a lot
    of such projects. But more important is software from multiperson
    projects. And by now a lot of open source is developed by paid
    developers in a commercial setting. Note that the GPL requires that a
    distributor makes code available to recipients of binaries.
    So a commercial entity improving a GPL-ed program has to open-source
    the improvements. Given that they can not keep improvements
    secret, there is an incentive to contribute them back to the original
    project: if improvements are integrated, maintenance on the contributor
    side gets easier. But this calculation breaks down if the original
    project keeps intermediate code secret and rarely does
    releases. Even if the development tree is public and contributions
    are promptly integrated, rare releases mean that users wanting an
    improvement must use potentially unstable development versions.

    Sometimes commercial entities want to ship some product which
    makes use of some system tool installed on the user's system. They
    may need an improvement to this tool, and for them things get
    easier if the improved tool quickly enters distributions.

    Coming back to the hobby case, one reason people contribute is
    that they want some extra feature; they develop it and
    want it included.

    All 3 cases above have a common factor: the contributor really wants
    the feature included in the program. If the maintainer/lead developer
    rejects the code or introduces large delays, then potential
    contributors are discouraged, so the project may lose developers
    (or at least fail to gain new ones).

    There is also the question of popularity among users. While by
    definition "ordinary users" do not contribute code, they
    report bugs, ask for enhancements, sometimes help with the documentation
    or the project website, or simply spread positive opinions about the
    project. This is important too: without good documentation potential
    users may consider the program buggy; without user-contributed
    bug reports discovering (and hence fixing) bugs may be hard;
    without user feedback developers may concentrate on the wrong features.

    And of course, to join a project developers must first realize
    that the project exists and get interested in it.

    Whether it's done is another question (and project specific). It
    should also be mentioned that some projects have e.g. security or
    quality requirements that gets tested and measured and require some
    adaptive process to increase these factors (without adding anything
    new).

    Actually, security is another thing which puts pressure to
    release quickly: if there is a security problem, developers want
    to distribute a fixed version as soon as possible.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.lang.misc on Thu Oct 9 01:39:33 2025
    From Newsgroup: comp.lang.misc

    Stefan Ram <ram@zedat.fu-berlin.de> wrote:
    antispam@fricas.org (Waldek Hebisch) wrote or quoted:
    Actually, open source nicely illustrates this. The first advice to
    open source projects is "release early, release often".

    I had thought about using this for my projects, but I can see
    the downsides too:

    If a project drops too early, it still barely has any
    capabilities. The first curious potential users check it out and
    walk away thinking, "a toy product, not the skills that actually
    matter in practice". That vibe can stick around - "You don't get
    a second shot at a first impression." - and end up keeping people
    from giving the later, more capable versions a chance.

    You need to manage expectations, yours and your users'. You may
    think that you have some cool thing, but even if it is working
    well in your opinion, the first potential user to come may trash it
    and discourage others. Or you may hit on something that a lot
    of people want but that is not handled by existing software. Then
    even a buggy/imperfect program may easily gain a lot of users.

    If you decide to distribute some early, not entirely finished version,
    it makes sense to explicitly say alpha/beta/for developers only,
    as appropriate.

    Unless your program is close to trivial and you spent a lot of
    effort to make it correct, expect that if you get users they
    will discover bugs.

    A lot depends on how you view your project. If it is something
    that you want to "give" to users and expect good words in
    return, you may be disappointed. AFAICS the potential benefits
    of distributing code are:
    - you get bug reports or enhancement ideas allowing you to
    improve the program
    - you get contributions, so part of the work is done by others

    OTOH:
    - contributors may want to take the project in a quite different
    direction than you want
    - you may get spurious bug reports, where the program works as
    designed, but the user expected something else
    - you may get harsh critique

    If you feel that what you have is too immature, then it makes
    sense to keep it private. Similarly, if you want things
    designed/implemented in a specific way, it is best to do it
    in private.

    But IMO in most cases releasing early makes sense.

    Projects that delay release because they are "not ready"
    typically lose and eventually die.

    Exaggerated.

    The actual TeX program version is currently at 3.141592653
    and was last updated in 2021. It is one of the most successful
    programs ever and the market leader for scientific articles
    and books that include math formulas.

    I am well aware of TeX and I would interpret things in a rather
    different way. First, the main selling point of TeX proper is
    backwards compatibility, that is, rendering old papers now
    in the same way as when they were written, and hopefully a current
    paper will in the future be rendered the same as now. While
    compatibility in general is good, in the case of scientific articles
    it is especially important.

    Second, despite the first point there is continuing dissatisfaction
    that TeX proper misses various features. Some things get
    added but are not reflected in the official version number, there are
    mutated versions and attempts to create rather different
    things (like LyX and TeXmacs). There was a failed attempt
    (or possibly multiple attempts) at developing a compatible
    replacement.

    Third, TeX is a macro processor and a lot of the formatting functionality
    in use is implemented via TeX macros and not in TeX proper.
    There is a rather fast stream of changes to the available macro
    packages and a lot of bloat there. TeX proper is about 24 kloc
    of source code after macro expansion. Modern binaries contain
    a similar (or somewhat larger) amount of "system support", which
    originally was intended to be a rather small wrapper around needed
    OS functions but now adds extra functionality (not reflected in
    the main version number). If you install supporting macro packages
    and extra tools like the Debian 'texlive-full' package, they will use
    about 9 GB.

    So TeX is a very special case: there is a strong compatibility
    requirement and there are a lot of changes outside the core part
    written by Knuth.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Thu Oct 9 03:48:44 2025
    From Newsgroup: comp.lang.misc

    On 09.10.2025 02:34, Waldek Hebisch wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    [...]
    We can observe that software grows, and grows rank. My experience
    is that it makes sense to plan and occasionally add refactoring
    cycles in these cases. (There's also software planned accurately
    from the beginning, software that changes less, and is only used
    for its fixed designed purpose. But we're not speaking about that
    here.) A principal advantage of the "open-source world" (or rather
    the non-commercial world) is that there's neither competition nor
    need to quickly throw things into the market. So this area has at
    least the chance to adapt plans and contents without time pressure.

    What you wrote corresponds to a one-man hobby project. [...]

    No. (I wasn't speaking about "one-man hobby projects"; rather more
    the "opposite" project setup.)

    [...] But more important is software from multiperson project. [...]

    Yes.

    [ open source and GPL stuff ]

    I wasn't specifically speaking about that.

    [ specific sceneries and assumptions ]

    [ more open source specific sceneries and assumptions ]

    [ open source example sceneries and assumptions about involved people ]

    (I wasn't focusing on these things you expanded on. - And I won't
    comment on that.)


    Whether it's done is another question (and project specific). It
    should also be mentioned that some projects have e.g. security or
    quality requirements that gets tested and measured and require some
    adaptive process to increase these factors (without adding anything
    new).

    Actually, security is another thing which puts pressure to
    release quickly: if there is security problem developers want
    to distribute fixed version as soon as possible.

    Yes, but I wasn't speaking about fixing security holes [quickly].

    I was speaking about planning a software in a development process
    with requirements including security and principles to guarantee
    quality. That doesn't prevent measures to continuously get back
    in feedback loops to enhance the structure, plan growth, redesign
    and refactor necessary parts, especially when new features are
    planned to be added.

    And I was speaking about software that obviously doesn't adhere
    to such principles, and grows rank.

    The point with open source software (for which you seem to have a bias)
    is a special set of beasts. But as said, they'd have a chance
    (but typically don't use it, it seems).

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Thu Oct 9 04:00:16 2025
    From Newsgroup: comp.lang.misc

    On 09.10.2025 03:39, Waldek Hebisch wrote:
    [...]

    But IMO in most cases releasing early makes sense.

    LOL, yeah! - Let the users and customers find the bugs for you!

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.lang.misc on Thu Oct 9 14:19:14 2025
    From Newsgroup: comp.lang.misc

    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    On 09.10.2025 03:39, Waldek Hebisch wrote:
    [...]

    But IMO in most cases releasing early makes sense.

    LOL, yeah! - Let the users and customers find the bugs for you!

    Maybe you think that you can write perfect software. There are
    widely available statistics which show that software of nontrivial
    size has bugs. A developer may plan things (FYI I spend a lot
    of time on planning before I start coding), test things, but after
    some time arrives at a point of diminishing returns: finding new
    bugs takes a lot of effort. If you have paying customers you should
    have the money to hire testers. You should hire multiple developers
    and do code reviews. But within reasonable resource bounds and
    using known techniques you will arrive at a point where finding new
    bugs takes too many resources.

    If your customers need/demand higher quality they should pay
    appropriately to cover the needed cost. But expecting no bugs is
    simply unrealistic. I read about the development of the software
    controlling the Space Shuttle. The team doing that boasted that
    they had a sophisticated development process ensuring high
    quality. They had 400 people working on a 400 kloc program.
    Given that development was spread over more than 10 years,
    that looks like very low "productivity", that is, pretty high
    development cost. Yet they were not able to say "no bugs".
    IIRC they were not even able to say "no bugs discovered
    during an actual mission"; all that they were able to say
    was "no serious trouble due to bugs". Potential effects
    of failure of the Space Shuttle software were pretty serious,
    so it was fully justified to spend substantial effort on
    quality.

    What I develop is quite non-critical; I am almost certain
    that "no serious trouble due to bugs" will be true even if
    my software is full of bugs. And a similar thing applies to
    a substantial part of open source software. I wrote above
    about hiring testers. Non-commercial open source projects
    typically do not have money to pay for testing. But
    there are people who are willing to do testing for free.
    More precisely, given the non-critical nature of the software,
    bugs are just an inconvenience, and frequently there are
    folks who consider the inconvenience due to bugs small
    compared to the benefits offered by the software.

    When I wrote about releasing early, I meant releasing when
    the stream of new bugs goes down, that is, attempting to predict
    the point of diminishing returns. A more conservative approach
    would continue testing for a longer time in the hope of finding
    the "last bug". Some people may use a formulation like "do
    all that is possible to increase quality and after that
    release", but there is always something more that could
    be done. So one needs to decide when it is enough and release.

    I have a problem (and the tone of your message suggests that you
    may have this problem too): I really would prefer to catch as many
    bugs as possible during development, and due to this I
    probably release too late. Note that part of my testing may
    be using a program just to do some work. Now, if a program
    is doing a valuable service to me, there is a reasonable chance
    that it will do valuable work for some other people.
    Pragmatically you can view this as a deal: other people
    get value from work done by the program; I in exchange get
    defect reports that allow me to improve the program.
    I see nothing wrong in such a deal, as long as it is
    honest, in particular when the provider of the program
    realistically states what the program can do.

    BTW: Some users judge the quality of software by looking at the number
    of bug reports. More bug reports is supposed to mean higher
    quality. If that looks wrong to you, the more detailed reasoning
    goes as follows: the number of bug reports grows with the number of
    users. If there is a small number of bug reports, it indicates
    that there is a small number of users, and possibly that users
    do not bother reporting bugs. Now, users do not report bugs
    when they consider the software hopelessly bad. And users
    in general prefer higher quality software, so a small number
    of users suggests low quality. So, either way, a low number
    of bug reports really may mean low quality. This method
    may fail if you manage to create perfect software free of
    bugs with perfect documentation, so that there will be no
    spurious bug reports. But in real life programs tend to have
    enough bugs that this method has at least some merit.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From ram@ram@zedat.fu-berlin.de (Stefan Ram) to comp.lang.misc on Thu Oct 9 14:48:46 2025
    From Newsgroup: comp.lang.misc

    antispam@fricas.org (Waldek Hebisch) wrote or quoted:
    If your customers need/demand higher quality they should pay
    appropriately to cover the needed cost. But expecting no bugs is
    simply unrealistic. I read about the development of the software
    controlling the Space Shuttle. The team doing that boasted that
    they had a sophisticated development process ensuring high
    quality. They had 400 people working on a 400 kloc program.
    Given that development was spread over more than 10 years,
    that looks like very low "productivity", that is, pretty high
    development cost. Yet they were not able to say "no bugs".
    IIRC they were not even able to say "no bugs discovered
    during an actual mission"; all that they were able to say
    was "no serious trouble due to bugs". Potential effects
    of failure of the Space Shuttle software were pretty serious,
    so it was fully justified to spend substantial effort on
    quality.

    Recommended reading:

    "They Write the Right Stuff" (1996-12) - Charles Fishman.

    It was on the web for a very long time, but I think
    they now have some kind of wall in front of it; it
    might still be in some archives.

    The gist is:

    - About half the people, even in leading positions,
    are women.

    - They work from 9 to 5, no "sprint" long nights,
    with maybe rare exceptions to this rule.

    - About half the staff are testers, but as the programmers
    do not want them to find errors, the programmers already
    do their own testing before they give their code to the
    actual testers. So more time is spent on testing than on
    coding.

    - The quality measure is a statistical estimate of the
    number of bugs remaining per line of code, which has to
    be below a certain value.
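    The article does not say which statistical method was used to estimate
    remaining bugs; one classic approach for such an estimate is
    capture-recapture over two independent test teams. The sketch below is
    purely illustrative (function names and numbers are invented, not from
    the article):

```python
# Capture-recapture (Lincoln-Petersen) estimate of total defects.
# Two teams independently test the same code; the overlap between
# their findings estimates how many defects neither team has seen.

def estimate_total_defects(found_a: int, found_b: int, overlap: int) -> float:
    """Lincoln-Petersen estimator: total ~ (found_a * found_b) / overlap."""
    if overlap == 0:
        raise ValueError("no overlap: the estimator is undefined")
    return found_a * found_b / overlap

def estimate_remaining(found_a: int, found_b: int, overlap: int) -> float:
    """Estimated defects not yet found by either team."""
    distinct_found = found_a + found_b - overlap
    return estimate_total_defects(found_a, found_b, overlap) - distinct_found

# Illustrative numbers: team A finds 30 defects, team B finds 20,
# and 12 of those are found by both teams.
total = estimate_total_defects(30, 20, 12)    # -> 50.0
remaining = estimate_remaining(30, 20, 12)    # -> 12.0
print(total, remaining)
```

    Dividing such a remaining-defect estimate by the size of the program in
    lines of code gives a "bugs remaining per line of code" figure of the
    kind the article describes.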

    One discipline that is set up to write software free from errors is
    /cleanroom software engineering/ (not to be confused with /cleanroom
    engineering/ of software for avoiding problems with copyright).

    I have a problem (and the tone of your message suggests that you
    may have this problem too): I really would prefer to catch as many
    bugs as possible during development, and due to this I
    probably release too late.

    I only write software in my leisure time that I want to use myself.
    I should release it after it has proven to be usable and helpful
    to myself. But sometimes I feel impatient and tempted to release
    something for the satisfaction of having released something, even
    when it's not useful, like the frontend of a compiler without a
    backend. I need to put effort into some self-control not to do this.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From John Ames@commodorejohn@gmail.com to comp.lang.misc on Thu Oct 9 08:44:31 2025
    From Newsgroup: comp.lang.misc

    On Thu, 9 Oct 2025 00:09:50 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    If that's the terminology you prefer, sure. The point stands.

    You were talking about thinking, not doing. It's the doing that
    counts.
    I was talking about the doing; you just want to use a different word
    for it. That's all fine, but it doesn't change what I was saying.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.misc on Thu Oct 9 21:52:08 2025
    From Newsgroup: comp.lang.misc

    On Thu, 9 Oct 2025 08:44:31 -0700, John Ames wrote:

    On Thu, 9 Oct 2025 00:09:50 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    If that's the terminology you prefer, sure. The point stands.

    You were talking about thinking, not doing. It's the doing that counts.

    I was talking about the doing ...

    You used the word "mindshare". Trying to redefine what "mind" means, now?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From John Ames@commodorejohn@gmail.com to comp.lang.misc on Thu Oct 9 15:21:11 2025
    From Newsgroup: comp.lang.misc

    On Thu, 9 Oct 2025 21:52:08 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    If that's the terminology you prefer, sure. The point stands.

    You were talking about thinking, not doing. It's the doing that
    counts.

    I was talking about the doing ...

    You used the word "mindshare". Trying to redefine what "mind" means, now?
    No, just using it in the context of developer minds.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Fri Oct 10 01:11:56 2025
    From Newsgroup: comp.lang.misc

    On 07/10/2025 23:04, Lawrence D'Oliveiro wrote:
    On Tue, 7 Oct 2025 22:03:07 +0100, Andy Walker wrote:
    On 04/10/2025 02:11, Waldek Hebisch wrote:
    In PDP-11 times there were short list of available devices. Now
    there is a lot of different devices on the market and each one
    potentially need a specialised driver in the kernel. [...]
    Yes, but one would expect that to drive standardisation rather than
    bloat. There are rather a lot of devices that I can plug into the
    mains in my home, but I don't have to install hundreds or thousands
    of different types of socket.
    Most of your electronic devices would not plug directly into the
    mains, they would likely use some kind of DC adaptor/charger. How many
    of those do you have?

    Actually, most of mine /do/ plug directly into the mains. Or
    do you not count computers, TVs, printers, [music] keyboards, ...
    as electronic? Yes, I also have some that come with some kind of
    charger or connector. They are surprisingly diverse: shavers,
    toothbrushes, various ornaments, ... as well as 'phones, portable
    drives, USB sticks, Kindles, .... A feature is that all those
    mentioned work via a USB connexion [supplied with the device],
    irrespective of whether the Man on the Clapham Omnibus would
    describe them as "electronic". Is that not standardisation in
    action? [Of course there are many other formal or informal
    standards, such as MIDI, UTF-8, MPEG, ... to choose from.]

    You are trying to make an argument by analogy, and that is
    already heading for a pitfall. Those power connections you talk
    about are for transferring energy, not for transferring
    information. Information transfer is a much more complex business.

    USB sticks, portable drives, ... transfer information as well.
    But again, if information transfer is so complex, one would expect
    that to drive standardisation rather than everyone re-inventing
    the wheel. I note nearby articles giving reasons for the diversity,
    but the commercial interests of companies don't coincide with the
    interests of users.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Simpson
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.misc on Fri Oct 10 01:39:50 2025
    From Newsgroup: comp.lang.misc

    On Fri, 10 Oct 2025 01:11:56 +0100, Andy Walker wrote:

    A feature is that all those mentioned work via a USB
    connexion [supplied with the device], irrespective of whether the Man
    on the Clapham Omnibus would describe them as "electronic". Is that
    not standardisation in action?

    Do you know how many different kinds of "USB" there are?

    USB sticks, portable drives, ... transfer information as well.
    But again, if information transfer is so complex, one would expect that
    to drive standardisation rather than everyone re-inventing the wheel.

    Besides the different versions of USB previously alluded to, let
    me also mention Bluetooth, Bluetooth LE, HDMI, DisplayPort, ZigBee,
    all the different flavours of Ethernet, with and without PoE, Wi-Fi,
    DECT (if you happen to have a cordless landline phone), 3G, 4G, 5G ...
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Fri Oct 10 12:09:00 2025
    From Newsgroup: comp.lang.misc

    On 09.10.2025 16:19, Waldek Hebisch wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    On 09.10.2025 03:39, Waldek Hebisch wrote:
    [...]

    But IMO in most cases releasing early makes sense.

    LOL, yeah! - Let the users and customers search the bugs for you!

    Maybe you think that you can write perfect software.

    No. (What makes you think that I would think that one could be
    sure to write [non-trivial] software error-free and perfect?)

    But there are methods to more reliably write reliable software!

    Also, that software has more or fewer bugs doesn't justify, IMO,
    "early releases" as a principle, and especially not as a principle
    for obtaining software quality; this was the inherent point of the
    statements in previous posts.

    [...] But within reasonable resource bounds and
    using known techniques you will arrive at a point where finding
    new bugs takes too many resources.

    And this is also no rationale for "early releases".


    If your customers need/demand higher quality they should pay
    appropriately to cover the needed cost. But expecting no bugs is
    simply unrealistic.

    This is again the same straw-man; no one said or implied such
    a thing.

    I read about the development of the software
    controlling the Space Shuttle. The team doing that boasted that
    they had a sophisticated development process ensuring high
    quality. They had 400 people working on a 400 kloc program.
    Given that development was spread over more than 10 years,
    that looks like very low "productivity", that is, pretty high
    development cost. Yet they were not able to say "no bugs".
    IIRC they were not even able to say "no bugs discovered
    during an actual mission"; all that they were able to say
    was "no serious trouble due to bugs". Potential effects
    of failure of the Space Shuttle software were pretty serious,
    so it was fully justified to spend substantial effort on
    quality.

    And sometimes it doesn't help if the methods aren't applied
    consistently! (Since you mentioned a space technology example
    you may want to read about the Ariane 5 incident; the
    post-mortem report is very enlightening concerning errors in
    demanding software projects that are basically extremely well
    organized!)


    What I develop is quite non-critical, I am almost certain
    that "no serious trouble due to bugs" will be true even if
    my software is full of bugs. [...]

    I've no insight into your projects and methods. (That's not my
    business.)

    In the professional projects I participated in in the past
    we could not risk having lots of bugs or severe bugs. So we
    either had or installed the necessary methods and provisions
    to achieve the high quality we needed; and that worked well.

    So probably you have other projects in mind; but even then,
    the costs of fixing problems that appear later are much
    higher than those of detecting them early in the development
    process. (In your area maybe "only" loss of reputation, not
    money or lives.)


    When I wrote about releasing early, I meant releasing when
    the stream of new bugs goes down, that is, attempting to
    predict the point of diminishing returns.

    The problem with that is that bugs - assuming we speak about
    those reported; mind your "early release" approach! - appear
    too late to be fixed cheaply. (And predictions need numbers,
    project context information, and experience.)
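    The "point of diminishing returns" idea under discussion can be
    sketched numerically: fit an exponential decay to the weekly count of
    newly found bugs and integrate the fitted tail to estimate how many
    bugs are still unfound. This is a simplified reliability-growth
    sketch with invented data, not a method anyone in the thread
    describes using:

```python
import math

# Fit counts(t) ~ a * exp(-b * t) by least squares on the logs,
# then integrate the fitted tail to estimate bugs still unfound.

def fit_decay(weekly_counts):
    xs = range(len(weekly_counts))
    ys = [math.log(c) for c in weekly_counts]
    n = len(weekly_counts)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope      # a, b

def remaining_after(a, b, t):
    # integral of a * exp(-b * x) from t to infinity
    return a * math.exp(-b * t) / b

# Invented data: new bugs found per week, roughly halving each week.
a, b = fit_decay([32, 16, 8, 4, 2])
print(round(remaining_after(a, b, 5), 2))   # roughly 1.4 bugs left after week 5
```

    When the estimated tail drops below the cost of another test cycle,
    the model says returns are diminishing; as noted above, such a
    prediction is only as good as the numbers and project context
    behind it.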

    A more conservative approach
    would continue testing for a longer time in the hope of
    finding the "last bug". [...]

    That is not a "conservative approach" but nonsense, and I've
    never heard of such a procedure being established in professional
    contexts; unless you can apply a formal verifier you cannot
    say when there are no bugs any more.

    No. The conservative, or standard, approach is to apply all the
    methods of software development: formal specification, tools
    for testing specifications, designs, tests on the various
    levels, supporting tools, experts for the respective QA and
    project management tasks, and of course trained programmers.


    I have a problem (and the tone of your message suggests that you
    may have this problem too): I really would prefer to catch as many
    bugs as possible during development, and due to this I
    probably release too late. [...]

    Yes, I try to reduce the cases where bugs may slip in. For my
    personal toy-projects that's not critical, but if I want to
    (sort of) "publish" anything I spend yet more effort. For the
    professional projects I was engaged in it was non-negotiable,
    though; the quality measures were a must!

    Note that part of my testing may
    be using a program just to do some work. Now, if a program
    is doing valuable service to me, there is a reasonable chance
    that it will do valuable work for some other people.
    Pragmatically you can view this as a deal: other people
    get value from work done by the program; I in exchange get
    defect reports that allow me to improve the program.
    I see nothing wrong in such a deal, as long as it is
    honest, in particular when the provider of the program
    realistically states what the program can do.

    Personally I don't want to publish in a quality that results
    in a lot of feedback that requires a lot of time to handle.
    And it's not only about coding bugs but also about software
    design. (My rants often have a strong focus on "lousy design"
    and on bugs only as a secondary factor; my experience tells
    me that lousy designs are also a significant source of
    implemented bugs.)


    BTW: Some users judge the quality of software by looking at the
    number of bug reports. More bug reports are supposed to mean
    higher quality.

    More bug reports primarily suggest that the software is of
    inferior quality (presuming all other factors for comparison
    are equal or appropriately normalized).

    If that looks wrong to you, the more detailed reasoning
    goes as follows: the number of bug reports grows with the
    number of users.

    This is not necessarily the case.

    If there is a small number of bug reports, it indicates
    that there is a small number of users, and possibly that
    users do not bother reporting bugs.

    This is not a given consequence.

    Now, users do not report bugs
    when they consider software to be hopelessly bad. And users
    in general prefer higher quality software, so a small number
    of users suggests low quality. So, either way, a low number
    of bug reports really may mean low quality. This method
    may fail if you manage to create perfect software free of
    bugs with perfect documentation so that there will be no
    spurious bug reports. But in real life programs tend to have
    enough bugs that this method has at least some merit.

    It is always sensible to go for high quality early in the
    development process, to create fewer bugs and fewer bug reports.
    Frankly, to say that a lot of bug reports (thus bugs) is good
    or useful in any way is completely erroneous. You can also draw
    quality conclusions from no or "too few" bugs and focus
    on these.

    Mind also that fixing bugs often means "patching" software;
    with many bug reports you have to make a lot of changes, in
    an order defined by when they arrive, not by design.
    In well-designed and well-written software that may be a
    less critical factor, but to achieve that quality state in
    the first place you'd have to install some quality measures.

    Lots of bugs and bug reports are nothing you sensibly want
    in pursuit of software quality.

    Though you should track all numbers you can get from your
    development process to be able to draw conclusions and act.

    Anecdotally, to give an impression of contexts I worked in...

    We had a client/server component architecture, each component
    with one dedicated person responsible for it. We measured the
    errors from tests on various levels (unit, system, integration,
    acceptance tests). We also measured check-ins, and number of
    changes of the requirements (for example). I installed the QA
    measures I found to be necessary, and we had the best outcome;
    best outcome means quasi-zero bugs!

    In another context I took responsibility for a refactoring of
    governmental software, software that had evolved over years before
    I joined. Here the refactoring not only reduced LOCs but also
    restructured the code. The number of bugs fell linearly with
    the reduction of code, and yet more with the better structure.
    (But of course that needed the QA infrastructure and processes
    we had invented; it's not a given.)

    PS: You wrote quite some hypotheses about open source and the
    user community there; I'm positive that the principles we got
    from the commercial professional software development are also
    valid for the area you were focusing on. But YMMV, of course.

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Fri Oct 10 12:36:02 2025
    From Newsgroup: comp.lang.misc

    On 09.10.2025 16:48, Stefan Ram wrote:
    Recommended reading:

    "They Write the Right Stuff" (1996-12) - Charles Fishman.
    [...]
    The gist is:
    [...]
    - About half the staff are testers, but as the programmers
    do not want them to find errors, the programmers already
    do their own testing before they give their code to the
    actual testers. So more time is spent on testing than on
    coding.

    (It's dangerous to reply to quoted text. Anyway. A few comments...)

    I cannot tell where that is from, what project context, company, etc.
    Just hoping that "the programmers" is not meant as a generalization
    but just for the context/company he writes about.

    The author writes as if there's just one sort of testing, done
    by ominous "testers". We had (in the various companies) indeed
    testers, but they were doing component tests, system tests, and
    integration tests; another group did the acceptance tests, and the
    programmers/developers did unit tests and component tests. Each
    type of test has its reason and is important. All added up to
    high-quality software.

    The quoted last sentence seems to suggest - but I may misinterpret -
    that it's bad to spend sensible time on "testing". In a way it gives
    the impression that many LOCs are the "productive" and more important
    part of the job, and sadly I got to know stupid managers who believed
    such nonsense.

    Janis

    PS: Whenever QA effort gets reduced it gets expensive, in money and
    lives. When was it that Boeing reduced QA staff and subsequently a
    couple of 737s crashed? (You can find similar stories from other
    companies.)

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Sat Oct 11 12:47:41 2025
    From Newsgroup: comp.lang.misc

    On 10/10/2025 02:39, Lawrence D'Oliveiro wrote:
    [I wrote:]
    A feature is that all those mentioned work via a USB
    connexion [supplied with the device], irrespective of whether the Man
    on the Clapham Omnibus would describe them as "electronic". Is that
    not standardisation in action?
    Do you know how many different kinds of "USB" there are?

    Is this an exam? I have a decent-enough idea, including the various
    ways in which the versions are compatible with each other.
    Standardisation is not the same as "everything stays the same for
    30 years despite the huge changes in what domestic devices can do".
    It is more to do with things "just working" without someone having
    to write 40 million lines of code in case I happen to install some
    obscure peripheral on my computer.

    [...]
    But again, if information transfer is so complex, one would expect
    that to drive standardisation rather than everyone re-inventing
    the wheel.
    Besides the different versions of USB previously alluded to, let me
    also mention Bluetooth, [...], 4G, 5G ...
    So which is it? Are devices becoming standardised, or are people
    insisting on re-inventing the wheel? Is this what consumers want,
    or is it large companies trying to lock them in to their products?
    Cui bono?
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Lehar
    --- Synchronet 3.21a-Linux NewsLink 1.2