• Algol 68 / Genie - opinions on local procedures?

    From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Mon Aug 18 04:52:24 2025

    In a library source for rational numbers I'm using a GCD function
    to normalize the rational numbers. This function is called regularly
    by the other rational operations, since all numbers are always stored
    in their normalized form.

    I defined the 'PROC rat_gcd' in global space but with Algol 68 it
    could also be defined locally (to not pollute the global namespace)
    like

    ELSE # normalize #
       PROC rat_gcd = ... ;

       INT nom = ABS a, den = ABS b;
       INT sign = SIGN a * SIGN b;
       INT q = rat_gcd (nom, den);
       ( sign * nom OVER q, den OVER q )
    FI

    though performance measurements showed some noticeable degradation
    with a local function definition as depicted.
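
    For illustration, a self-contained sketch of how such a constructor
    with the locally declared GCD might look (MODE RAT, rat_make and the
    recursive GCD body here are just placeholders of mine, not the actual
    library code):

        MODE RAT = STRUCT (INT num, den);

        PROC rat_make = (INT a, b) RAT:
          IF b = 0
          THEN (0, 1)   # placeholder; real code would report an error #
          ELSE # normalize #
            PROC rat_gcd = (INT x, y) INT:
              IF y = 0 THEN x ELSE rat_gcd (y, x MOD y) FI;

            INT nom = ABS a, den = ABS b;
            INT sign = SIGN a * SIGN b;
            INT q = rat_gcd (nom, den);
            ( sign * nom OVER q, den OVER q )
          FI;

        RAT r = rat_make (6, -9);
        print ((num OF r, " ", den OF r, newline))   # -2 and 3 #

    Hoisting rat_gcd into global scope changes nothing at the call sites;
    only its visibility (and, apparently, Genie's runtime cost) differs.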

    I'd prefer it to be local but since it's ubiquitously used in that
    library the performance degradation (about 15% on avg) annoys me.

    Opinions on that?

    Janis
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Mon Aug 18 16:54:53 2025

    On 18/08/2025 03:52, Janis Papanagnou wrote:
    I defined the 'PROC rat_gcd' in global space but with Algol 68 it
    could also be defined locally (to not pollute the global namespace)
    like
    [... snip ...]
    though performance measurements showed some noticeable degradation
    with a local function definition as depicted.

    I can't /quite/ reproduce your problem. If I run just the
    interpreter ["a68g myprog.a68g"] then on my machine the timings are
    identical. If I optimise ["a68g -O3 myprog.a68g"], then /first time/
    through, I get a noticeable degradation [about 10% on my machine],
    but the timings converge if I run them repeatedly. YMMV. I suspect
    it's to do with storage management, and later runs are able to re-use
    heap storage that had to be grabbed first time. But that could be
    completely up the pole. Marcel would probably know.

    If you see the same, then I suggest you don't run programs
    for a first time. [:-)]

    I'd prefer it to be local but since it's ubiquitously used in that
    library the performance degradation (about 15% on avg) annoys me.
    Opinions on that?

    Personally, I'd always go for the version that looks nicer
    [ie, in keeping with your own inclinations, with the spirit of A68,
    with the One True (A68) indentation policy, and so on]. If you're
    worried about 15%, that will be more than compensated for by your
    next computer! If you're Really Worried about 15%, then I fear it's
    back to C [or whatever]; but that will cost you more than 15% in
    development time.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/West
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Mon Aug 18 18:30:54 2025

    On 18.08.2025 17:54, Andy Walker wrote:
    On 18/08/2025 03:52, Janis Papanagnou wrote:
    I defined the 'PROC rat_gcd' in global space but with Algol 68 it
    could also be defined locally (to not pollute the global namespace)
    like
    [... snip ...]
    though performance measurements showed some noticeable degradation
    with a local function definition as depicted.

    I can't /quite/ reproduce your problem. If I run just the
    interpreter ["a68g myprog.a68g"] then on my machine the timings are
    identical. If I optimise ["a68g -O3 myprog.a68g"], then /first time/
    through, I get a noticeable degradation [about 10% on my machine],
    but the timings converge if I run them repeatedly. YMMV.

    Actually, with more tests, the variance got even greater; from 10%
    to 45% degradation. The variances, though, did not converge [in my environment].

    I suspect
    it's to do with storage management, and later runs are able to re-use
    heap storage that had to be grabbed first time.

    I also suspected some storage management effect; maybe that the GC
    got active at various stages. (But the code did not use anything
    that would require GC; to be honest, I'm puzzled.)

    But that could be
    completely up the pole. Marcel would probably know.

    If you see the same, then I suggest you don't run programs
    for a first time. [:-)]

    :-)


    I'd prefer it to be local but since it's ubiquitously used in that
    library the performance degradation (about 15% on avg) annoys me.
    Opinions on that?

    Personally, I'd always go for the version that looks nicer
    [ie, in keeping with your own inclinations, with the spirit of A68,
    with the One True (A68) indentation policy, and so on].

    That's what I'm tending towards. I think I'll put the GCD function
    in local scope to keep it away from the interface.

    If you're
    worried about 15%, that will be more than compensated for by your
    next computer!

    Actually I'm very conservative concerning computers; mine is 15+
    years old, and although I "recently" thought about getting an update
    here it's not my priority. ;-)

    If you're Really Worried about 15%, then I fear it's

    Not really. It's not the 10-45%, it's more the feeling that a library
    function should not only conform to the spirit of good software design
    but also be efficiently implemented (also in Algol 68).

    The "problem" (my "problem") here is that the effect should not appear
    in the first place since static scoping should not cost performance; I
    suppose it's an effect of Genie being effectively an interpreter here.

    But my Algol 68 programming is anyway just recreational, for fun, so
    I'll go with the cleaner (slower) implementation.

    back to C [or whatever]; but that will cost you more than 15% in
    development time.

    Uh-oh! - But no, that's not my intention here. ;-)

    Thanks!

    Janis

  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Tue Aug 19 00:45:00 2025

    On 18/08/2025 17:30, Janis Papanagnou wrote:
    Actually, with more tests, the variance got even greater; from 10%
    to 45% degradation. The variances, though, did not converge [in my environment].

    Ah. Then I backtrack from my previous explanation to an
    alternative, that your 15yo computer has insufficient cache, so
    every new run chews up more and more real storage. Or something.
    You may get some improvement by running "sweep heap" or similar
    from time to time, or using pragmats to allocate more storage.
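
    Something along these lines, perhaps [assuming I'm remembering the
    spellings correctly - the "sweep heap" prelude call and the "--heap"
    option; Marcel's manual is the authority]:

        # start with a larger heap, e.g.:  a68g --heap=256M myprog.a68g #
        sweep heap   # request a garbage collection at a convenient point #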

    I also suspected some storage management effect; maybe that the GC
    got active at various stages. (But the code did not use anything
    that would require GC; to be honest, I'm puzzled.)

    ISTR that A68G uses heap storage rather more than you might
    expect. I think Marcel's documentation has more info.

    [...]
    If you're
    worried about 15%, that will be more than compensated for by your
    next computer!
    Actually I'm very conservative concerning computers; mine is 15+
    years old, and although I "recently" thought about getting an update
    here it's not my priority. ;-)

    Ah. I thought I was bad, keeping computers 10 years or so!
    I got a new one a couple of years back, and the difference in speed
    and storage was just ridiculous.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Soler
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Tue Aug 19 02:44:58 2025

    On 19.08.2025 01:45, Andy Walker wrote:
    [...]
    If you're
    worried about 15%, that will be more than compensated for by your
    next computer!
    Actually I'm very conservative concerning computers; mine is 15+
    years old, and although I "recently" thought about getting an update
    here it's not my priority. ;-)

    Ah. I thought I was bad, keeping computers 10 years or so!
    I got a new one a couple of years back, and the difference in speed
    and storage was just ridiculous.

    Well, the software tools I use (and their updates) required me to at
    least upgrade memory! (That's actually one point that annoys me in
    "modern" software development; rarely does anyone seem to care about
    economizing resource requirements.) But all the rest, especially the
    things that influence performance (CPU [speed, cores], graphics card,
    HDs/cache, whatever), is comparably old stuff in my computer; but it
    works for me.[*]

    And, by the way, thanks for your suggestions and helpful information
    on my questions in all my recent Algol posts! It's also very pleasant
    being able to substantially exchange ideas on this (IMO) interesting
    legacy topic.

    Janis

    [*] If anything I'd probably only need an ASCII accelerating graphics
    card; see https://www.bbspot.com/News/2003/02/ati_ascii.html ;-)

  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Wed Aug 20 00:47:31 2025

    On 19/08/2025 01:44, Janis Papanagnou wrote:
    [...] (That's actually one point that annoys me in "modern"
    software development; rarely does anyone seem to care about
    economizing resource requirements.) [...]

    Yeah. From time to time I wonder what would happen if we ran
    7th Edition Unix on a modern computer. Sadly, I have to admit that
    I too am rather careless of resources; if you have terabytes of SSD,
    it seems to be a waste of time worrying about a few megabytes.

    And, by the way, thanks for your suggestions and helpful information
    on my questions in all my recent Algol posts! It's also very pleasant
    being able to substantially exchange ideas on this (IMO) interesting
    legacy topic.

    You're very welcome, and I reciprocate your pleasure.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Peerson
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.misc on Wed Aug 20 00:43:22 2025

    On Wed, 20 Aug 2025 00:47:31 +0100, Andy Walker wrote:

    From time to time I wonder what would happen if we ran
    7th Edition Unix on a modern computer.

    The Linux kernel source is currently over 40 million lines, and I
    understand the vast majority of that is device drivers.

    If you were to run an old OS on new hardware, that would need drivers for
    that new hardware, too.
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Wed Aug 20 23:58:58 2025

    On 20/08/2025 01:43, Lawrence D'Oliveiro wrote:
    [I wrote:]
    From time to time I wonder what would happen if we ran
    7th Edition Unix on a modern computer.
    The Linux kernel source is currently over 40 million lines, and I
    understand the vast majority of that is device drivers.

    You seem to be making Janis's point, but that doesn't seem to
    be your intention?

    If you were to run an old OS on new hardware, that would need drivers for that new hardware, too.

    Yes, but what is so special about a modern disc drive, monitor, keyboard, mouse, ... that it needs "the vast majority" of 40M lines more
    than its equivalent for a PDP-11? Does this not again make Janis's point?

    Granted that the advent of 32- and 64-bit integers and addresses
    makes some programming much easier, and that we can no longer expect
    browsers and other major tools to fit into 64+64K bytes, is the actual
    bloat in any way justified? It's not just kernels and user software --
    it's also the documentation. In V7, "man cc" generates just under two
    pages of output; on my current computer, it generates over 27000 lines,
    call it 450 pages, and is thereby effectively unprintable and unreadable,
    so it is largely wasted.

    For V7, the entire documentation fits comfortably into two box
    files, and the entire source code is a modest pile of lineprinter output.
    Most of the commands on my current computer are undocumented and unused,
    and I have no idea at all what they do.

    Yes, I know how that "just happens", and I'm observing rather
    than complaining [I'd rather write programs, browse and send/read e-mails
    on my current computer than on the PDP-11]. But it does all give food for thought.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Peerson
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.misc on Thu Aug 21 02:59:32 2025

    On Wed, 20 Aug 2025 23:58:58 +0100, Andy Walker wrote:

    On 20/08/2025 01:43, Lawrence D'Oliveiro wrote:

    If you were to run an old OS on new hardware, that would need
    drivers for that new hardware, too.

    Yes, but what is so special about a modern disc drive, monitor,
    keyboard, mouse, ... that it needs "the vast majority" of 40M lines
    more than its equivalent for a PDP-11?

    Keyboard and mouse -- USB.

    Disk drive -- that might connect via SCSI or SATA. Either one requires
    common SCSI-handling code. Plus you want a filesystem, don't you?
    Preferably a modern one with better performance and reliability than
    Bell Labs was able to offer, back in the day. That requires caching
    and journalling support. Plus modern drives have monitoring built-in,
    which you will want to access. And you want RAID, which didn't exist
    back then?

    Monitor -- video in the Linux kernel goes through the DRM ("Direct
    Rendering Manager") layer. Unix didn't have GUIs back then, but you
    will likely want them now. The PDP-11 back then accessed its console
    (and other terminals) through serial ports. You might still want
    drivers for those, too.

    Both video and disk handling in turn would be built on the common
    PCI-handling code.

    Remember there is also hot-plugging support for these devices, which
    was unheard of back in the day.

    The CPU+support chipset itself will need some drivers, beyond what
    was conceived back then: for example, control of the various levels
    of caching, power saving, sensor monitoring, and of course memory
    management needs to be much more sophisticated nowadays.

    And what about networking? Would you really want to run a machine in a
    modern environment without networking?
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Thu Aug 21 21:02:55 2025

    On 19/08/2025 01:44, Janis Papanagnou wrote:
    [...] (That's actually one point that annoys me in "modern"
    software development; rarely does anyone seem to care about
    economizing resource requirements.) [...]

    On 21.08.2025 00:58, Andy Walker wrote:
    On 20/08/2025 01:43, Lawrence D'Oliveiro wrote:
    The Linux kernel source is currently over 40 million lines, and I
    understand the vast majority of that is device drivers.

    You seem to be making Janis's point, but that doesn't seem to
    be your intention?

    If you were to run an old OS on new hardware, that would need drivers for
    that new hardware, too.

    Yes, but what is so special about a modern disc drive, monitor,
    keyboard, mouse, ... that it needs "the vast majority" of 40M lines more
    than its equivalent for a PDP-11? [...]

    This was actually what I was also thinking when I read Lawrence's
    statement. (And even given his later, more thorough list of modern
    functionalities, this still doesn't quite explain the need for *so*
    many resources, IMO. I mean, didn't they fly to the moon in capsules
    whose computers had kilobytes of memory? Yes, nowadays we have more
    features to support. But in earlier days they *had* to economize;
    they had to "squeeze" algorithms to fit into 1 kiB of memory.
    Nowadays no one cares. And the computers running that software are
    an "externality"; there's no incentive, it seems, to write software
    in a sophisticated, resource-conscious way.)

    But that was (in my initial complaint; see above) anyway just one
    aspect of many.

    You already mentioned documentation. There we not only see extremely
    huge and often badly structured, unclear texts, but the information
    to text-size ratio is also often extremely imbalanced; to mention a
    few keywords: DOC, HTML, XML, JSON - where the problem is not (only)
    that one or the other of these formats is absolutely huge, but also
    that it is huge relative to an equally well or better fitting, more
    primitive format.

    Related to that: some HTML pages you load contain text payloads of
    only a few kiB, but they carry not only the HTML overhead, they also
    load mebi- (or gibi-?) bytes through dozens of JS libraries - which
    aren't even used! And I haven't yet mentioned pages that add further
    storage and performance demands through advertisement logic (with
    more delays and, "of course", no regard for data privacy); but that,
    of course, is intentional (it's your choice).

    Economy is also related to GUI ergonomy, in configurability and
    usability. You can configure all sorts of GUI properties like
    schemes/appearance, you can adjust buttons left or right, but you
    cannot get a button with a necessary function, or one function in
    an easily accessible way. GUIs are overloaded with all sorts of
    trash, which inevitably leads to uneconomic use, while necessary
    features are unsupported or cannot be configured. (But providing
    such [merely] fancy features also contributes to the code size.)

    Then there are the unnecessary dependencies. Just recently there
    was a discussion about (I think) the ffmpeg tool; it was shown
    that it pulls in hundreds of external libraries! Worse yet, many
    of them serve not its main task (video processing/converting)
    but things like LDAP, and *tons* of libraries concerning Samba;
    the latter is also a problem of bad software organization, given
    that so many libraries have to be added for SMB "support"
    (whether or not that should be part of a video converter at all).

    Then there's the performance, or the system/application design.
    You start, say, a picture viewer and have to wait a long time,
    because the software designer thought it a good idea to present
    the directory tree in a separate part of the window; to achieve
    that, the program has to recursively scan a huge subdirectory
    structure, and half a minute passes before you finally see the
    single picture you wanted to see - whose file name you had
    already provided as an argument!

    Or the use of bad algorithms. Like graphics-processing software
    that doesn't terminate when trying to rotate a large image by 90°,
    because it attempts the rotation naively, with a copy of the huge
    image in memory and with bit-wise operations, instead of using
    fast, lossless in-place algorithms (which have been commonly known
    for half a century).
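
    For a square image the classic in-place approach is just a transpose
    followed by a reversal of each row; a rough sketch (assuming a square,
    1-based pixel array, and not any particular tool's code):

        PROC rotate_90_cw = (REF [,] INT pix) VOID:   # clockwise, in place #
        BEGIN
           INT n = 1 UPB pix;
           # transpose #
           FOR i TO n DO
              FOR j FROM i + 1 TO n DO
                 INT t = pix[i, j]; pix[i, j] := pix[j, i]; pix[j, i] := t
              OD
           OD;
           # reverse each row #
           FOR i TO n DO
              FOR j TO n OVER 2 DO
                 INT t = pix[i, j];
                 pix[i, j] := pix[i, n + 1 - j]; pix[i, n + 1 - j] := t
              OD
           OD
        END;

        [1:2, 1:2] INT img := ((1, 2), (3, 4));
        rotate_90_cw (img);   # img is now ((3, 1), (4, 2)) #
        print ((img[1, 1], img[1, 2], newline, img[2, 1], img[2, 2], newline))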

    Etc. etc. - Above just off the top of my head; there's surely
    much more to say about economy and software development.

    And an important consequence is that bad design and bloat usually
    also make systems less stable and less reliable. And it's often
    hard (or even impossible) to fix such monstrosities.

    <end of rant>

    Janis

  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Sat Aug 23 00:42:01 2025

    On 21/08/2025 03:59, Lawrence D'Oliveiro wrote:
    On Wed, 20 Aug 2025 23:58:58 +0100, Andy Walker wrote:
    On 20/08/2025 01:43, Lawrence D'Oliveiro wrote:
    If you were to run an old OS on new hardware, that would need
    drivers for that new hardware, too.
    Yes, but what is so special about a modern disc drive, monitor,
    keyboard, mouse, ... that it needs "the vast majority" of 40M lines
    more than its equivalent for a PDP-11?
    Keyboard and mouse -- USB. [...]

    You've given us a list of 20-odd features of modern systems
    that have been developed since 7th Edition Unix, and could no doubt
    think of another 20. What you didn't attempt was to explain why all
    these nice things need to occupy 40M lines of code. That's, give or
    take, 600k pages of code, call it 2000 books. That's, on your figures,
    just the kernel source; specifications [assuming there are such!]
    and documentation no doubt double that, and it's already more than
    normal people can read and understand. There is similar bloat in the
    commands and in the manual entries. It's out of control; witness the
    updates that come in every few days. It's fatally easy to say of
    "sh" or "cc" or "firefox" or ... "Wouldn't it be nice if it did X?",
    and fatally hard to say "It shouldn't really be doing X.", as there's
    always the possibility of someone somewhere who might perhaps be
    using it.

    See also Janis's nearby article.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Kinross
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Sat Aug 23 02:29:54 2025

    On 23.08.2025 01:42, Andy Walker wrote:
    On 21/08/2025 03:59, Lawrence D'Oliveiro wrote:
    [...]

    You've given us a list of 20-odd features of modern systems
    that have been developed since 7th Edition Unix, and could no doubt
    think of another 20. What you didn't attempt was to explain why all
    these nice things need to occupy 40M lines of code. That's, give or
    take, 600k pages of code, call it 2000 books. That's, on your figures,
    just the kernel source; [...]

    That was a point I also found to be a very disturbing statement;
    I recall the kernel was designed to be small, and the time spent
    in kernel routines was generally supposed to be short! - And now we
    have millions of lines that are either just idle or used against
    Unix's design and operating principles?

    Meanwhile - I think probably since AIX? - we no longer need to
    compile the drivers into the kernel (as formerly with SunOS, for
    example). But does that really mean that all the drivers now
    bloat the kernel [as external modules] as well? - Sounds horrible.

    But I'm no expert on this topic, so interested to be enlightened
    if the situation is really as bad as Lawrence sketched it.

    Janis

  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.misc on Sat Aug 23 02:36:45 2025

    On Sat, 23 Aug 2025 00:42:01 +0100, Andy Walker wrote:

    What you didn't attempt was to explain why all these nice things
    need to occupy 40M lines of code.

    Go look at the code itself.
  • From Ivan Shmakov@ivan@siamics.netREMOVE.invalid to comp.lang.misc on Tue Aug 26 18:42:05 2025

    On 2025-08-19, Janis Papanagnou wrote:
    On 19.08.2025 01:45, Andy Walker wrote:

    If you're worried about 15%, that will be more than compensated
    for by your next computer!

    Actually I'm very conservative concerning computers; mine is 15+
    years old, and although I "recently" thought about getting an
    update here it's not my priority. ;-)

    Ah. I thought I was bad, keeping computers 10 years or so! I got
    a new one a couple of years back, and the difference in speed and
    storage was just ridiculous.

    Reading http://en.wikipedia.org/wiki/E-waste , I'm inclined to
    think that keeping computers for a decade might be not so bad
    a thing after all.

    Well, used software tools (and their updates) required me to at
    least upgrade memory! (That's actually one point that annoys me
    in "modern" software development; rarely anyone seems to care
    economizing resource requirements.)

    I doubt it's so much lack of care as it is simply being not a
    priority. Still, all the more reason to direct attention to
    the cases where such care /is/ given. Thankfully, the problem
    /is/ a known one (say, [1]), and IME, there still /are/ lean
    programs to choose from.

    By the by, I've been looking for "simple" self-hosting compilers
    recently - something with source that a semi-dedicated person
    can read through in reasonable time. What I've found so far is
    Pygmy Forth [2] (naturally, I guess) and the T3X family of
    languages [3]. Are there perhaps other such compilers worthy of
    mention?

    [1] http://spectrum.ieee.org/lean-software-development
    [2] http://pygmy.utoh.org/pygmyforth.html
    [3] http://t3x.org/t3x/

    I'll also try to address here specific points raised elsewhere
    in this thread, particularly news:1087qgv$14ret$1@dont-email.me .

    First, the 4e7 lines of Linux code is somewhat unfair a measure.
    On my system, less than 5% of individual modules built from the
    Linux source are loaded right now:

    $ lsmod | wc -l
    175
    $ find /lib/modules/6.1.0-37-amd64/ -xdev -type f -name \*.ko | wc -l
    4024
    $

    That value would of course vary from system to system, but I'd
    think it's safe to say that in at least 90% of all deployments,
    less than 10% of Linux code will be loaded at any given time.

    For those who are looking for a system with more "comprehensible"
    sources, I would recommend NetBSD. And if anything, I personally
    find its list of supported platforms, http://netbsd.org/ports/ ,
    fairly impressive.

    Don't get me wrong: NetBSD won't fit for every use case Linux-based
    systems cover - the complexity of the Linux kernel isn't there
    for nothing - but just in case you /can/ live with a "limited"
    OS (say, one that doesn't support Docker), thanks to NetBSD, you
    /do/ have that option.

    With regards to applications, while binary distributions tend to
    opt to have the most "fully functional" build of any given
    package - from whence come lots of dependencies - a source-based
    one allows /you/ to choose what you need. And pkgsrc for NetBSD
    is such a distribution. Gentoo is a Linux-based distribution
    along the same lines.

    As to websites and JS libraries, for the past 25 years I've been
    using as my primary one a browser, Lynx, that never had support
    for JS, and likely never will have. IME, an /awful lot/ of
    websites are usable and useful entirely without JS. For those
    interested, I've recently made several comments in defense of
    "JS-free" web and web browsers, such as [4, 5, 6].

    [4] news:ID351XcOrll9pkb7@violet.siamics.net
    [5] news:6brTAD5tWnddeHXd@violet.siamics.net
    [6] news:ii6tqUtTe0Vi-Fnh@violet.siamics.net
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.misc on Wed Aug 27 00:28:20 2025

    On Tue, 26 Aug 2025 18:42:05 +0000, Ivan Shmakov wrote:

    First, the 4e7 lines of Linux code is somewhat unfair a measure. On
    my system, less than 5% of individual modules built from the Linux
    source are loaded right now ...

    Greg Kroah-Hartman is reported to have said that a typical
    workstation/server Linux kernel build only needs about 1½ million
    lines of source code. A more complex build, like an Android kernel,
    needs something like 3× that.

    For those who are looking for a system with more "comprehensible"
    sources, I would recommend NetBSD. And if anything, I personally
    find its list of supported platforms, http://netbsd.org/ports/ ,
    fairly impressive.

    Bit misleading, though. Note it counts "Xen" (a Linux-based
    hypervisor) as a separate platform. Also, look at all the different
    68k, MIPS, ARM and PowerPC-based machines that are individually
    listed.

    Linux counts platform support based solely on CPU architecture (not
    surprising, since it's just a kernel, not the userland as well). It
    covers all those CPUs listed (except maybe VAX), and a bunch of others
    as well.

    Each directory here <https://github.com/torvalds/linux/tree/master/arch>
    represents a separate supported architecture. Note extras like
    arm64, riscv, loongarch and s390.
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Wed Aug 27 07:53:00 2025

    On 26.08.2025 20:42, Ivan Shmakov wrote:
    On 2025-08-19, Janis Papanagnou wrote:

    Well, used software tools (and their updates) required me to at
    least upgrade memory! (That's actually one point that annoys me
    in "modern" software development; rarely anyone seems to care
    economizing resource requirements.)

    I doubt it's so much lack of care as it is simply being not a
    priority. [...]

    But those depend on each other. - Quoting from your link below...

    Wirth:
    "Time pressure is probably the foremost reason behind the emergence
    of bulky software. The time pressure that designers endure discourages
    careful planning. It also discourages improving acceptable solutions;
    instead, it encourages quickly conceived software additions and
    corrections. Time pressure gradually corrupts an engineer's standard
    of quality and perfection. It has a detrimental effect on people as
    well as products."

    And, to be yet more clear: I also think it's [widely] just ignorance!
    (The mere existence of the article you quoted below is per se already
    a strong sign of that. But other experiences too, like talks with
    many IT folks of various ages and backgrounds, reinforced my opinion
    on that.)

    [...]

    [1] http://spectrum.ieee.org/lean-software-development

    Thanks for the link; worth reading.

    (And BTW I also learned, having missed it at the time, that N. Wirth
    passed away last year.)

    [...]

    As to websites and JS libraries, for the past 25 years I've been
    using as my primary one a browser, Lynx, that never had support
    for JS, and likely never will have. IME, an /awful lot/ of
    websites are usable and useful entirely without JS. [...]

    Lynx. This is great. - I recall that in the 1990s I had a student in
    my team who had to provide some HTML information; I asked him to test
    his data in two common browsers (back then, I think, Netscape and
    MS IE), and (for obvious reasons) also with Lynx!

    (Privately I had later written HTML/JS to create applications (with
    dynamic content) since otherwise that would not have been possible;
    I had no own server with some application servers available. But I
    didn't use any frameworks or external libraries. Already bad enough.)

    But even with JS activated in a browser - my old Firefox - I cannot
    use or read many websites nowadays, because they demand newer browser
    versions.

    Janis

  • From Ivan Shmakov@ivan@siamics.netREMOVE.invalid to comp.lang.misc on Sat Aug 30 19:10:42 2025

    On 2025-08-27, Lawrence D'Oliveiro wrote:
    On Tue, 26 Aug 2025 18:42:05 +0000, Ivan Shmakov wrote:

    For those who are looking for a system with more "comprehensible"
    sources, I would recommend NetBSD. And if anything, I personally
    find its list of supported platforms, http://netbsd.org/ports/ ,
    fairly impressive.

    Bit misleading, though. Note it counts "Xen" (a Linux-based
    hypervisor) as a separate platform.

    What do you mean by "Linux-based"? NetBSD supports running
    as both Xen domU (unprivileged) /and/ dom0 (privileged.)
    AIUI, it's possible to run Linux domUs when NetBSD is dom0,
    and vice versa.

    Also, look at all the different 68k, MIPS, ARM and PowerPC-based
    machines that are individually listed.

    Linux counts platform support based solely on CPU architecture (not
    surprising, since it's just a kernel, not the userland as well).

    There's a "Ports by CPU architecture" section down the NetBSD
    ports page; it lists 16 individual CPU architectures.

    My point was that GNU/Linux distributions typically support
    fewer, and so do other BSDs (IIRC). For instance, [1] lists 8:

    Architectures: all amd64 arm64 armel armhf i386 ppc64el riscv64 s390x

    [1] http://cdn-fastly.deb.debian.org/debian/dists/trixie/InRelease

    (And I'm pretty certain I saw ones that only support one or two.)

    The way I see it, it's the /kernel/ that takes the most
    effort to port to a new platform - as it's where the support
    for peripherals lives, including platform-specific ones.

    No idea why Debian doesn't support other architectures supported
    by Linux. I'm going to guess it's lack of volunteers.

    It covers all those CPUs listed (except maybe VAX), and a bunch of
    others as well.

    Each directory here <https://github.com/torvalds/linux/tree/master/arch>
    represents a separate supported architecture. Note extras like arm64,

    Getting actual data out of Microsoft Github pages is a bit more
    involved than I'd prefer. Still:

    $ curl -- https://github.com/torvalds/linux/tree/master/arch \
    | pcregrep -ao1 -- "\"path\":\"arch/([/0-9a-z_.-]+)\"" | nl -ba
    1 alpha
    2 arc
    3 arm
    4 arm64
    5 csky
    6 hexagon
    7 loongarch
    8 m68k
    9 microblaze
    10 mips
    11 nios2
    12 openrisc
    13 parisc
    14 powerpc
    15 riscv
    16 s390
    17 sh
    18 sparc
    19 um
    20 x86
    21 xtensa
    22 .gitignore
    $

    So, yes, I guess it does beat NetBSD in that respect. But I
    still think that if you're interested in understanding how your
    OS works - at the source code level - you'd be better with
    NetBSD than with a Linux-based OS. (Not /quite/ a priority
    for me personally, TBH, but I appreciate it being an option.)
  • From Ivan Shmakov@ivan@siamics.netREMOVE.invalid to comp.lang.misc on Sat Aug 30 19:39:49 2025

    On 2025-08-27, Janis Papanagnou wrote:
    On 26.08.2025 20:42, Ivan Shmakov wrote:
    On 2025-08-19, Janis Papanagnou wrote:

    Well, used software tools (and their updates) required me to at
    least upgrade memory! (That's actually one point that annoys me
    in "modern" software development; rarely anyone seems to care
    economizing resource requirements.)

    I doubt it's so much lack of care as it is simply being not a
    priority.

    But those depend on each other.

    I guess I should've expressed myself better: engineering is
    all about trade-offs, and there're often other things to care
    about once the program runs "fast enough" on the hardware that
    the customers are /assumed/ to have.

    Not to mention that taking too long to 'polish' your product,
    you risk ending up lagging behind your competitors.

    I could only hope that environmental concerns will eventually
    make resource usage a more important issue for code writers.

    And, to be yet more clear: I also think it's [widely] just ignorance!
    (The mere existence of the article you quoted below is per se already
    a strong sign of that. But other experiences too, like talks with
    many IT folks of various ages and backgrounds, reinforced my opinion
    on that.)

    I suppose it might be the case of people involved with computers
    professionally not seeing much point in acquiring the skills that
    aren't in demand by employers.

    (Privately I had later written HTML/JS to create applications (with
    dynamic content) since otherwise that would not have been possible;
    I had no own server with some application servers available. But I
    didn't use any frameworks or external libraries. Already bad enough.)

    I can't say I'm a big fan of JS or ES, yet there're certainly
    languages I like even less. FWIW, I prefer to stick to ES 5.1,
    http://262.ecma-international.org/5.1/ specifically, as then I
    can use http://duktape.org/ or http://mujs.com/ to test the
    bulk of my code, rather than running it in Chromium or Firefox.

    Like I've mentioned elsewhere, it's not the language, or even
    its use to create web applications, that irks me: it's that
    often enough when I want some data, what I get instead is some
    application that I /must/ use to access that same data - in a
    manner predefined by its developer (say, one record at a time),
    and not particularly conductive to the task /I/ have at hand.

    As to frameworks, my /impression/ is that it makes sense to
    familiarize oneself with them only when there're actually
    /lots/ of similar programming problems that need to be solved,
    particularly when writing code as part of a team. As it never
    was the case for me personally, I've never seen much sense in
    investing effort into learning any framework, JS or otherwise.

    But even with JS activated in a browser - my old Firefox - I cannot
    use or read many websites nowadays, because they demand newer browser
    versions.

    "Demand" how?

    Server-side code can of course make arbitrary decisions based
    on the User-Agent: string, but that's a poor practice in general,
    and typically such restrictions can be bypassed by reading the
    archived copy of the webpage from http://web.archive.org/ .

    Also works when it's not a browser but /TLS/ version issue.

    Alternatively, associated JS code can test browser's capabilities,
    but that can be circumvented by disabling JS altogether.

    Also to mention is that many websites these days rely on some
    sort of "DDoS protection service" external to them. (I run my
    own servers, so I /do/ know some of the pain of mitigating heaps
    of junk requests originating from botnets - mainly compromised
    "wireless routers" I believe.)

    Such services employ captchas, and those in turn require JS,
    and might require recent browser versions as well. If that's
    the case, http://web.archive.org/ might or might not help.

    Other than using Wayback Machine, I believe there's no easy
    solution to this problem: should the operator disable "protection
    service," they risk the site becoming bogged down by junk requests
    and no longer available to legitimate users. Conversely, by
    employing such a service, they inconvenience their users, for
    even those who /do/ run modern browsers will presumably have
    better things to do than solving captchas.

    So, personally, when encountering such behavior, I try Wayback
    Machine first. If it doesn't get me a version of the webpage
    as recent as I need, I consider contacting the website operator
    so that they might check and possibly tweak their "protection"
    settings to allow archival. If they can't, or won't, fix it,
    well, as mTCP HTTPSERV.EXE puts it, "countless more exist."
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.misc on Sat Aug 30 22:43:12 2025

    On Sat, 30 Aug 2025 19:10:42 +0000, Ivan Shmakov wrote:

    On Wed, 27 Aug 2025 00:28:20 -0000 (UTC), Lawrence D'Oliveiro
    wrote:

    Bit misleading, though. Note it counts "Xen" (a Linux-based
    hypervisor) as a separate platform.

    What do you mean by "Linux-based"?

    I mean that Xen runs an actual Linux kernel in the hypervisor, and
    supports regular Linux distros as guests -- they don't need to be
    modified to specially support Xen, or any other hypervisor. It's
    Linux above, and Linux below -- Linux at every layer.

    NetBSD supports running as both Xen domU (unprivileged) /and/ dom0 (privileged.)

    Linux doesn't count these as separate platforms. They're just
    considered a standard part of regular platform support.

    Linux counts platform support based solely on CPU architecture (not
    surprising, since it's just a kernel, not the userland as well).

    There's a "Ports by CPU architecture" section down the NetBSD
    ports page; it lists 16 individual CPU architectures.

    That's not as many as Linux.

    My point was that GNU/Linux distributions typically support
    less ...

    But that's an issue with the various distributions, not with the
    Linux kernel itself. In the BSD world, there is no separation of
    "kernel" from "distribution". That makes things less flexible than
    in the Linux world.

    For example, while base Debian itself may support something under a
    dozen architectures, there are offshoots of Debian that cover others.

    The way I see it, it's the /kernel/ that it takes the most
    effort to port to a new platform - as it's where the support
    for peripherals lives, including platform-specific ones.

    Given that the Linux kernel supports more of these different platforms
    than any BSD can manage, I think you're just reinforcing my point.

    But I still think that if you're interested in understanding how
    your OS works - at the source code level - you'd be better with
    NetBSD than with a Linux-based OS.

    Linux separates the kernel from the userland. That makes things
    simpler than running everything together, as the BSDs do.
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.misc on Sat Aug 30 22:45:27 2025

    On Sat, 30 Aug 2025 19:39:49 +0000, Ivan Shmakov wrote:

    Not to mention that taking too long to 'polish' your product, you
    risk ending up lagging behind your competitors.

    I would say, the open-source world is a counterexample to this.
    Look at how long it took GNU and Linux to end up dominating the
    entire computing landscape -- it didn't happen overnight.
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Sun Aug 31 08:32:20 2025

    On 30.08.2025 21:39, Ivan Shmakov wrote:
    On 2025-08-27, Janis Papanagnou wrote:
    [...]

    But even with JS activated in a browser - my old Firefox - I cannot
    use or read many websites nowadays, because they demand newer browser
    versions.

    "Demand" how?

    All sorts of "defunct": from annoying notes telling me to upgrade
    my browser (while I can still see the content and operate the page),
    to that same message combined with completely non-functional dynamic
    content, and/or mis-formatted pages (to the degree of being unusable),
    or no text information displayed at all. And so on.

    If there's an issue with pages/services like reddit or sourceforge
    or (in the past; they seem to have fixed something) stackoverflow,
    or free services (news, weather, TV listings, etc.), I can just skip
    and ignore those services. But there are also commercial pages that
    I have (or need) to use (like banks, tax/gov, or free mail providers,
    etc.); for those I must switch to another system or I'm out of luck.
    (Luckily I have systems available to choose from.)

    Janis

  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.misc on Sun Aug 31 08:34:59 2025
    From Newsgroup: comp.lang.misc

    On 30.08.2025 21:39, Ivan Shmakov wrote:
    [...]

    Not to mention that taking too long to 'polish' your product,
    you risk ending up lagging behind your competitors.

    It's not "polishing" that I was speaking about.

    Janis

  • From Ivan Shmakov@ivan@siamics.netREMOVE.invalid to comp.lang.misc on Sun Aug 31 13:35:51 2025

    On 2025-08-30, Lawrence D'Oliveiro wrote:
    On Sat, 30 Aug 2025 19:39:49 +0000, Ivan Shmakov wrote:

    Not to mention that taking too long to 'polish' your product, you
    risk ending up lagging behind your competitors.

    I would say, the open-source world is a counterexample to this.
    Look at how long it took GNU and Linux to end up dominating the
    entire computing landscape -- it didn't happen overnight.

    Indeed, one good thing about free software is that when one
    company closes down, another can pick up and go on from there.
    Such as how Netscape is no more, yet the legacy of its Navigator
    still survives in Firefox.

    I'm not sure how much of a consolation it is to the people
    who owned the companies that failed, though.

    Also, what indication is there that GNU is 'dominating' the
    landscape? Sure, Linux is everywhere (such as in now ubiquitous
    Android phones and TVs and whatnot), but I don't quite see GNU
    being adopted as widely.
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.lang.misc on Sun Aug 31 22:40:49 2025

    On Sun, 31 Aug 2025 13:35:51 +0000, Ivan Shmakov wrote:

    I'm not sure how much of a consolation it is to the people who owned
    the companies that failed, though.

    Companies fail all the time, open source or no open source. When a
    company that has developed a piece of proprietary software fails, then
    the software dies with the company. With open source, the software
    stands a chance of living on.

    E.g. Loki was an early attempt at developing games on Linux. They
    failed. But the SDL framework that they created for low-latency
    multimedia graphics lives on.

    Also, what indication is there that GNU is 'dominating' the
    landscape? Sure, Linux is everywhere (such as in now ubiquitous
    Android phones and TVs and whatnot), but I don't quite see GNU
    being adopted as widely.

    Look at all the markets that Linux has taken away from Microsoft --
    Windows Media Center, Windows Home Server -- all defunct. Windows
    Server too is in slow decline. And now handheld gaming with the Steam
    Deck. You will find GNU there.