• Fallacies Advocating Software Bloat

    From Ben Collver@bencollver@tilde.pink to comp.misc on Sun Dec 21 16:13:36 2025
    From Newsgroup: comp.misc

    Fallacies Advocating Software Bloat
    ===================================

    New computers are more efficient than old ones; therefore we need
    to make all software so bloated that it does not run on old
    computers, to make sure that those old computers become obsolete
    and people stop using them.

    This one is often used by environmentalists, and it is wrong in so
    many obvious ways.

    * New computers generally consume more power than old ones. Among
    x86 CPUs, anything older than a Pentium II draws only a
    single-digit number of watts.

    * Bloated code also causes new computers to use more electricity
    than would otherwise be required for the task.

    * Most importantly, when we create non-bloated computer programs, we
    are not necessarily targeting old CPUs--we are targeting old
    instruction sets. The patents on those old instruction sets have
    already expired, and CPUs that use them can be freely produced by
    anyone. They are also widely supported by compilers and other
    existing software. Making software that works on old and/or
    patent-free instruction sets is necessary to preserve our digital
    freedoms.

    * If those old computers end up not being used, they are thrown
    into landfills, causing more environmental damage that way.

    Everything new is always more secure than the old; therefore we
    need to make all software so bloated that it does not run on old
    computers, to force people to use new computers that are so much
    more secure than the old ones.

    This one is often used by corporate security experts.

    * Everything new is not always more secure. In fact the opposite is
    often true--mindlessly changing things just for the sake of novelty
    creates an infrastructure that will never become properly
    battle-tested. Typically the pieces of software that are assumed to
    be the most secure programs in existence have been around for a
    long time, and their current form is the result of decades of
    small, incremental and carefully thought-out changes to their
    codebase.

    * Because general purpose computers are Turing-complete, there is
    not a single reason why an old computer would be somehow less
    secure than a new one. Encryption is mathematics, and the exact
    same encryption can be performed on any two machines with the same
    levels of Turing-powerfulness (see the sketch after this list). If
    anything, new computers are actually often less Turing-powerful
    than the old ones, thanks to various firmware- and hardware-level
    restrictions (firmware signing, UEFI Secure Boot etc.) that have
    been implemented in them because Microsoft has demanded that
    hardware manufacturers do so.

    * What corporate security experts usually mean by "old computers"
    is actually "old operating systems" (or more specifically "old
    versions of Windows"), because they somehow associate the
    individual computers with the operating system that was originally
    installed on them at the factory. But the operating system is not
    an integrated part of the computer itself--instead it is just a
    bootable program that can be easily changed.
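
    To make the encryption point concrete, here is a minimal sketch of
    the XTEA block cipher in Python (XTEA is an unpatented algorithm;
    the key and plaintext below are arbitrary example values, not test
    vectors). It is pure 32-bit integer arithmetic, so it produces
    bit-identical ciphertext on a 486 and on a current machine:

        # Minimal sketch of the XTEA block cipher: nothing but 32-bit
        # integer arithmetic, so any Turing-capable machine computes
        # the exact same ciphertext. Standard algorithm; key and
        # plaintext are arbitrary example values.
        def xtea_encrypt(v0, v1, key, rounds=32):
            delta, s, mask = 0x9E3779B9, 0, 0xFFFFFFFF
            for _ in range(rounds):
                v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1)
                            ^ (s + key[s & 3]))) & mask
                s = (s + delta) & mask
                v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0)
                            ^ (s + key[(s >> 11) & 3]))) & mask
            return v0, v1

        key = [0x00112233, 0x44556677, 0x8899AABB, 0xCCDDEEFF]
        c0, c1 = xtea_encrypt(0x01234567, 0x89ABCDEF, key)
        print(f"{c0:08X} {c1:08X}")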

    Monitors nowadays use less power than the CRTs of old; therefore,
    to save power, we must make bloated user interfaces that don't work
    with small resolutions.

    This one is often used by HD/4K/8K enthusiasts.

    * Old CRTs don't really use that much power at all--a typical 15"
    color CRT uses less than most incandescent lightbulbs. Monochrome
    CRTs are even less power hungry, usually consuming somewhere
    between 15 and 30 watts. The tube itself doesn't actually use much
    power; the neck of the CRT, where the electron gun resides, is
    where most of the CRT's power is spent. Corporate propaganda often
    repeats the commonly heard lie that CRTs consume hundreds of
    watts, but that's not physically possible--the neck of the CRT
    would melt if it were true. Although there are exceptions to the
    rule, most CRT displays are actually quite power efficient for a
    self-illuminating display technology.

    * With "modern" flat panel displays, especially OLEDs, the power
    consumption grows in an almost linear fashion with the area of the
    display. This means that we can actually save more power by
    creating scalable user interfaces that also work well on smaller
    display resolutions.
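
    As a rough back-of-the-envelope sketch of that last point, assuming
    a strictly linear power-per-area model (the per-area coefficient
    below is a made-up placeholder, not a measured value):

        # Toy model: display power grows linearly with panel area.
        # W_PER_IN2 is a hypothetical coefficient; substitute a
        # measured value for a real display.
        W_PER_IN2 = 0.15

        def area_in2(diagonal_in, aspect=(16, 9)):
            w, h = aspect
            return w * h * diagonal_in ** 2 / (w * w + h * h)

        for d in (15, 24, 32):
            a = area_in2(d)
            print(f'{d}" panel: {a:6.1f} in^2, ~{a * W_PER_IN2:4.1f} W')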

    From: <http://sininenankka.dy.fi/leetos/swbloat.php>
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.misc on Sun Dec 21 18:52:55 2025
    From Newsgroup: comp.misc

    On Sun, 21 Dec 2025 16:13:36 -0000 (UTC), Ben Collver wrote:

    Fallacies Advocating Software Bloat

    A lot of the arguments seem to be handwaving, doctrinaire stuff.

    For comparison, here <https://www.youtube.com/watch?v=Nbv9L-WIu0s> is
    an actual analysis of working code, to see how the "bloat" creeps in
    over time, and what it is actually achieving.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Kettlewell@invalid@invalid.invalid to comp.misc on Sun Dec 21 19:12:30 2025
    From Newsgroup: comp.misc

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    Ben Collver wrote:

    Fallacies Advocating Software Bloat

    A lot of the arguments seem to be handwaving, doctrinaire stuff.

    None of the claimed positions actually cite any real, specific people
    who hold them. The author appears to be arguing with people who exist
    only inside their own head.
    --
    https://www.greenend.org.uk/rjk/
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Computer Nerd Kev@not@telling.you.invalid to comp.misc on Mon Dec 22 13:26:58 2025
    From Newsgroup: comp.misc

    Richard Kettlewell <invalid@invalid.invalid> wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    Ben Collver wrote:
    Fallacies Advocating Software Bloat

    A lot of the arguments seem to be handwaving, doctrinaire stuff.

    None of the claimed positions actually cite any real, specific people
    who hold them. The author appears to be arguing with people who exist
    only inside their own head.

    At least so far as the first argument goes, it definitely reflects
    the attitude of some Linux kernel developers:

    From the LKML:
    "On Tue, Apr 08, 2025 at 11:16:26AM +0100, Maciej W. Rozycki wrote:
    On Sun, 6 Apr 2025, Borislav Petkov wrote:
    I don't have your old rust and maybe you should simply throw it
    in the garbage - that thing is probably not worth the
    electricity it uses to power up... :-)

    C'mon, these are good room heaters with nice extra side effects. ;)

    Maybe we should intentionally prevent booting Linux on such machines and make
    this our community's contribution in the fight against global warming!

    :-P
    "
    --
    __ __
    #_ < |\| |< _#
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Kettlewell@invalid@invalid.invalid to comp.misc on Mon Dec 22 09:39:55 2025
    From Newsgroup: comp.misc

    Computer Nerd Kev <not@telling.you.invalid> writes:
    Richard Kettlewell <invalid@invalid.invalid> wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    Ben Collver wrote:
    Fallacies Advocating Software Bloat

    A lot of the arguments seem to be handwaving, doctrinaire stuff.

    None of the claimed positions actually cite any real, specific people
    who hold them. The author appears to be arguing with people who exist
    only inside their own head.

    At least so far as the first argument goes, it definitely reflects
    the attitude of some Linux kernel developers:

    From the LKML:
    "On Tue, Apr 08, 2025 at 11:16:26AM +0100, Maciej W. Rozycki wrote:
    On Sun, 6 Apr 2025, Borislav Petkov wrote:
    I don't have your old rust and maybe you should simply throw it
    in the garbage - that thing is probably not worth the
    electricity it uses to power up... :-)

    C'mon, these are good room heaters with nice extra side effects. ;)

    Maybe we should intentionally prevent booting Linux on such machines
    and make this our community's contribution in the fight against
    global warming!

    :-P
    "

    There is no statement about making anything more 'bloated' in that
    quote.
    --
    https://www.greenend.org.uk/rjk/
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ben Collver@bencollver@tilde.pink to comp.misc on Mon Dec 22 14:36:29 2025
    From Newsgroup: comp.misc

    On 2025-12-22, Computer Nerd Kev <not@telling.you.invalid> wrote:
    At least so far as the first argument goes, it definitely reflects
    the attitude of some Linux kernel developers:

    Regarding power usage it's fairly simple:

    Older computers had smaller wattage power supplies, and the typical
    usage pattern was to power down when you weren't using it.

    UPS battery backup power can be educational. I seem to recall that
    the CRTs would drain the batteries faster than the LCDs did, which
    contradicts one of the arguments in the original article.

    On the other hand, power usage is only the tip of the iceberg in
    terms of ecological footprint. I have no idea about the comparative
    cost of manufacture, nor the comparative load of toxic materials.
    Considering these factors it would make sense to extend the
    service life.

    -Ben
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From not@not@telling.you.invalid (Computer Nerd Kev) to comp.misc on Tue Dec 23 07:43:11 2025
    From Newsgroup: comp.misc

    Richard Kettlewell <invalid@invalid.invalid> wrote:
    Computer Nerd Kev <not@telling.you.invalid> writes:
    Richard Kettlewell <invalid@invalid.invalid> wrote:
    None of the claimed positions actually cite any real, specific people
    who hold them. The author appears to be arguing with people who exist
    only inside their own head.

    At least so far as the first argument goes, it definitely reflects
    the attitude of some Linux kernel developers:

    From the LKML:
    "On Tue, Apr 08, 2025 at 11:16:26AM +0100, Maciej W. Rozycki wrote:
    On Sun, 6 Apr 2025, Borislav Petkov wrote:
    I don't have your old rust and maybe you should simply throw it
    in the garbage - that thing is probably not worth the
    electricity it uses to power up... :-)

    C'mon, these are good room heaters with nice extra side effects. ;)

    Maybe we should intentionally prevent booting Linux on such machines
    and make this our community's contribution in the fight against
    global warming!

    :-P
    "

    There is no statement about making anything more 'bloated' in that
    quote.

    I read the statements about bloat in the article assuming a good
    dose of hyperbole. I doubt the author really thinks programmers are
    setting out to make new programs bloated _just_ so they can't run
    on old computers, but they don't consider it an issue (or even say
    it's an advantage) when such software won't run on them.

    Anyway today I tried out booting their lEEt/OS from a floppy on
    this Pentium 1 PC I'm posting from, and it's impressive work.
    Trying to run one (ambitious) MSDOS program under their "ST-DOS"
    did cause a reboot, and the CDROM driver wouldn't load after that;
    but as one-man hobby OSes go, that's much less trouble than I'd
    expect. No FAT32 support though.

    http://sininenankka.dy.fi/leetos/
    --
    __ __
    #_ < |\| |< _#
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From not@not@telling.you.invalid (Computer Nerd Kev) to comp.misc on Tue Dec 23 08:01:22 2025
    From Newsgroup: comp.misc

    Ben Collver <bencollver@tilde.pink> wrote:
    On 2025-12-22, Computer Nerd Kev <not@telling.you.invalid> wrote:
    At least so far as the first argument goes, it definitely reflects
    the attitude of some Linux kernel developers:

    Regarding power usage it's fairly simple:

    Older computers had smaller wattage power supplies, and the typical
    usage pattern was to power down when you weren't using it.

    Some faster modern processors designed for portable or embedded use
    are very energy efficient too, but people won't use them because
    they say even those are too slow. Even the Raspberry Pis have
    become so power hungry that a cooling fan is strongly recommended
    for the newer models (besides the Zero and Pico). It's all very
    silly IMHO.

    UPS battery backup power can be educational. I seem to recall that
    the CRTs would drain the batteries faster than the LCDs did, which
    contradicts one of the arguments in the original article.

    Comparing wattage ratings on TVs I've noticed the trend has been
    that while there are savings in watts/display-area with new LCD/LED
    tech, they tend to be offset by people choosing to increase the
    display area, for equal or greater power consumption overall. If
    software forces the increase in display area by designing user
    interfaces that don't work on smaller displays, he might have a
    point about the same issue with computer monitors. Mind you I can
    still find/use software that works fine on small displays, so I
    don't agree that users have no choice about that.

    On the other hand, power usage is only the tip of the iceberg in
    terms of ecological footprint. I have no idea about the comparative
    cost of manufacture, nor the comparative load of toxic materials.
    Considering these factors it would make sense to extend the
    service life.

    Well I'm coming to you from behind a CRT right now!
    --
    __ __
    #_ < |\| |< _#
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From candycanearter07@candycanearter07@candycanearter07.nomail.afraid to comp.misc on Tue Dec 23 16:00:03 2025
    From Newsgroup: comp.misc

    Ben Collver <bencollver@tilde.pink> wrote at 16:13 this Sunday (GMT):
    Fallacies Advocating Software Bloat
    ===================================

    New computers are more efficient than old ones; therefore we need
    to make all software so bloated that it does not run on old
    computers, to make sure that those old computers become obsolete
    and people stop using them.

    This one is often used by environmentalists, and it is wrong in so
    many obvious ways.

    * New computers generally consume more power than old ones. Among
    x86 CPUs, anything older than a Pentium II draws only a
    single-digit number of watts.

    * Bloated code also causes new computers to use more electricity
    than would otherwise be required for the task.

    * Most importantly, when we create non-bloated computer programs, we
    are not necessarily targeting old CPUs--we are targeting old
    instruction sets. The patents on those old instruction sets have
    already expired, and CPUs that use them can be freely produced by
    anyone. They are also widely supported by compilers and other
    existing software. Making software that works on old and/or
    patent-free instruction sets is necessary to preserve our digital
    freedoms.

    * If those old computers end up not being used, they are thrown
    into landfills, causing more environmental damage that way.

    Everything new is always more secure than the old; therefore we
    need to make all software so bloated that it does not run on old
    computers, to force people to use new computers that are so much
    more secure than the old ones.

    This one is often used by corporate security experts.

    * Everything new is not always more secure. In fact the opposite is
    often true--mindlessly changing things just for the sake of novelty
    creates an infrastructure that will never become properly
    battle-tested. Typically the pieces of software that are assumed to
    be the most secure programs in existence have been around for a
    long time, and their current form is the result of decades of
    small, incremental and carefully thought-out changes to their
    codebase.

    Don't forget how forced AI usage introduces so many security issues

    * Because general purpose computers are Turing-complete, there is
    not a single reason why an old computer would be somehow less
    secure than a new one. Encryption is mathematics, and the exact
    same encryption can be performed on any two machines with the same
    levels of Turing-powerfulness. If anything, new computers are
    actually often less Turing-powerful than the old ones, thanks to
    various firmware- and hardware-level restrictions (firmware
    signing, UEFI Secure Boot etc.) that have been implemented in them
    because Microsoft has demanded that hardware manufacturers do so.

    Old computers might run it a bit slower, but yeah

    * What corporate security experts usually mean by "old computers"
    is actually "old operating systems" (or more specifically "old
    versions of Windows"), because they somehow associate the
    individual computers with the operating system that was originally
    installed on them at the factory. But the operating system is not
    an integrated part of the computer itself--instead it is just a
    bootable program that can be easily changed.

    Blame Microsoft for putting Windows on literally every computer,
    and also for making each next version of Windows so "advanced" and
    "futuristic" that you simply MUST buy a new computer because the
    old one just can't POSSIBLY run it

    Monitors nowadays use less power than the CRTs of old; therefore,
    to save power, we must make bloated user interfaces that don't work
    with small resolutions.

    This one is often used by HD/4K/8K enthusiasts.

    * Old CRTs don't really use that much power at all--a typical 15"
    color CRT uses less than most incandescent lightbulbs. Monochrome
    CRTs are even less power hungry, usually consuming somewhere
    between 15 and 30 watts. The tube itself doesn't actually use much
    power; the neck of the CRT, where the electron gun resides, is
    where most of the CRT's power is spent. Corporate propaganda often
    repeats the commonly heard lie that CRTs consume hundreds of
    watts, but that's not physically possible--the neck of the CRT
    would melt if it were true. Although there are exceptions to the
    rule, most CRT displays are actually quite power efficient for a
    self-illuminating display technology.

    * With "modern" flat panel displays, especially OLEDs, the power
    consumption grows in an almost linear fashion with the area of the
    display. This means that we can actually save more power by
    creating scalable user interfaces that also work well on smaller
    display resolutions.

    From: <http://sininenankka.dy.fi/leetos/swbloat.php>


    CRTs arguably are a bit harder to read and I personally hate the
    noise they make, but yeah. Also, there are plenty of small LCD
    monitors that people might want/need to use that could benefit,
    and in general a more compact UI means you can put more windows on
    your screen
    --
    user <candycane> is generated from /dev/urandom
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From snipeco.2@snipeco.2@gmail.com (Sn!pe) to comp.misc on Tue Dec 23 16:19:07 2025
    From Newsgroup: comp.misc

    candycanearter07 wrote:

    Blame Microsoft for putting Windows on literally every computer

    Not quite _every_ computer.
    --
    ^-^. Sn!pe My pet rock Gordon just is.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Kettlewell@invalid@invalid.invalid to comp.misc on Tue Dec 23 20:13:58 2025
    From Newsgroup: comp.misc

    snipeco.2@gmail.com (Sn!pe) writes:
    candycanearter07 wrote:
    Blame Microsoft for putting Windows on literally every computer

    Not quite _every_ computer.

    Not even every x86 PC.
    --
    https://www.greenend.org.uk/rjk/
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.misc on Tue Dec 23 20:32:04 2025
    From Newsgroup: comp.misc

    On Tue, 23 Dec 2025 16:00:03 -0000 (UTC), candycanearter07 wrote:

    Blame Microsoft for putting Windows on literally every computer ...

    Literally no. Android devices, for example (unlike Apple ones) are
    actual computers. And none of them run Windows.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From mechanicjay@mechanicjay@sol.smbfc.net (Mechanicjay) to comp.misc on Wed Dec 24 06:53:30 2025
    From Newsgroup: comp.misc

    On Tue, 23 Dec 2025, Richard Kettlewell <invalid@invalid.invalid> wrote:
    snipeco.2@gmail.com (Sn!pe) writes:
    candycanearter07 wrote:
    Blame Microsoft for putting Windows on literally every computer

    Not quite _every_ computer.

    Not even every x86 PC.

    I believe I've seen Windows 2000 running on a DEC Alpha.
    I've also used a 68K machine running Microsoft Xenix.

    Imagine what could have been!!

    --
    Sent from my Personal DECstation 5000/25
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From candycanearter07@candycanearter07@candycanearter07.nomail.afraid to comp.misc on Wed Dec 24 14:10:03 2025
    From Newsgroup: comp.misc

    Sn!pe <snipeco.2@gmail.com> wrote at 16:19 this Tuesday (GMT):
    candycanearter07 wrote:

    Blame Microsoft for putting Windows on literally every computer

    Not quite _every_ computer.


    Enough that most people don't know there IS an alternative (or that the
    only two choices are Windows and MacOS)
    --
    user <candycane> is generated from /dev/urandom
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kerr-Mudd, John@admin@127.0.0.1 to comp.misc on Wed Dec 24 16:57:03 2025
    From Newsgroup: comp.misc

    On Wed, 24 Dec 2025 14:10:03 -0000 (UTC)
    candycanearter07 <candycanearter07@candycanearter07.nomail.afraid> wrote:

    Sn!pe <snipeco.2@gmail.com> wrote at 16:19 this Tuesday (GMT):
    candycanearter07 wrote:

    Blame Microsoft for putting Windows on literally every computer

    Not quite _every_ computer.


    Enough that most people don't know there IS an alternative (or that the
    only two choices are Windows and MacOS)


    Wot no Plan 9?
    --
    Bah, and indeed Humbug.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From snipeco.2@snipeco.2@gmail.com (Sn!pe) to comp.misc on Wed Dec 24 17:52:37 2025
    From Newsgroup: comp.misc

    Kerr-Mudd, John <admin@127.0.0.1> wrote:

    candycanearter07 <candycanearter07@candycanearter07.nomail.afraid> wrote:
    Sn!pe <snipeco.2@gmail.com> wrote at 16:19 this Tuesday (GMT):
    candycanearter07 wrote:

    Blame Microsoft for putting Windows on literally every computer

    Not quite _every_ computer.


    Enough that most people don't know there IS an alternative (or that
    the only two choices are Windows and MacOS)


    Wot no Plan 9?


    From Outer Space!
    --
    ^-^. Sn!pe My pet rock Gordon just is.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.misc on Wed Dec 24 19:01:38 2025
    From Newsgroup: comp.misc

    On Wed, 24 Dec 2025 06:53:30 -0000 (UTC), Mechanicjay wrote:

    I believe I've seen Windows 2000 running on a DEC Alpha.

    All the non-x86 ports of Windows NT have failed.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.misc on Wed Dec 24 19:02:02 2025
    From Newsgroup: comp.misc

    On Wed, 24 Dec 2025 16:57:03 +0000, Kerr-Mudd, John wrote:

    Wot no Plan 9?

    What exactly does Plan9 get you?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From snipeco.2@snipeco.2@gmail.com (Sn!pe) to comp.misc on Wed Dec 24 19:13:09 2025
    From Newsgroup: comp.misc

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Wed, 24 Dec 2025 16:57:03 +0000, Kerr-Mudd, John wrote:

    Wot no Plan 9?

    What exactly does Plan9 get you?

    ChatGPT says:
    --------
    Plan 9 from Outer Space is a 1959 American science-fiction film directed
    by Ed Wood. It's infamous for its low budget, wooden acting, continuity
    errors, and bizarre plot. The story involves aliens attempting to stop
    humans from creating a doomsday weapon by resurrecting the dead.

    It's often cited as one of the "worst films ever made," yet it gained
    cult status for its unintentional humor and earnest, if chaotic,
    execution.
    --
    ^-^. Sn!pe My pet rock Gordon just is.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From kludge@kludge@panix.com (Scott Dorsey) to comp.misc on Wed Dec 24 15:44:02 2025
    From Newsgroup: comp.misc

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Wed, 24 Dec 2025 16:57:03 +0000, Kerr-Mudd, John wrote:

    Wot no Plan 9?

    What exactly does Plan9 get you?

    The ability to have as much stuff as possible running outside the kernel
    ring. The more stuff you can kick out of the kernel, the less stuff there
    is which can cause catastrophic failure when things go wrong.

    OSX started out adapting some of the Plan9 philosophy but it mostly
    turned into bloat, and the current OSX kernel looks nothing like a
    classic microkernel.

    There are also some distributed processing features built into
    Plan9. To be honest I don't think those are really of much benefit
    in the modern environment but they are pretty ingenious.
    --scott
    --
    "C'est un Nagra. C'est suisse, et tres, tres precis."
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.misc on Wed Dec 24 22:51:11 2025
    From Newsgroup: comp.misc

    On Wed, 24 Dec 2025 15:44:02 -0500 (EST), Scott Dorsey wrote:

    On Wed, 24 Dec 2025 19:02:02 -0000 (UTC), Lawrence D'Oliveiro wrote:

    What exactly does Plan9 get you?

    The ability to have as much stuff as possible running outside the
    kernel ring. The more stuff you can kick out of the kernel, the less
    stuff there is which can cause catastrophic failure when things go
    wrong.

    Ah, the hoary old microkernel refrain. You'd think, after something
    like four decades of repeating the same tired old claims without being
    able to back them up, the microkernel fans would have given up by now.

    OSX started out adapting some of the Plan9 philosophy but it mostly
    turned into bloat and the current OSX kernel looks nothing like a
    classic microkernel.

    Gee, I wonder why they succumbed to real-world evidence in that way ...
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Cloud Nine@cloud@nine.invalid to comp.misc on Thu Dec 25 00:05:48 2025
    From Newsgroup: comp.misc

    On Wed, 24 Dec 2025 15:44:02 -0500 (EST)
    kludge@panix.com (Scott Dorsey) wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Wed, 24 Dec 2025 16:57:03 +0000, Kerr-Mudd, John wrote:

    Wot no Plan 9?

    What exactly does Plan9 get you?

    The ability to have as much stuff as possible running outside the
    kernel ring. The more stuff you can kick out of the kernel, the
    less stuff there is which can cause catastrophic failure when
    things go wrong.

    OSX started out adapting some of the Plan9 philosophy but it mostly
    turned into bloat, and the current OSX kernel looks nothing like a
    classic microkernel.

    There are also some distributed processing features built into
    Plan9. To be honest I don't think those are really of much benefit
    in the modern environment but they are pretty ingenious.
    --scott

    Some of the ideas in plan9 are useful. However, the plan9 project
    fails to provide a coherent and intuitive GUI, editor, web browser,
    and file browser that are ready to use out of the box. It is too
    arcane for mere mortals to even try to use.

    Having great ideas does no good when a compsci PhD is the bar to
    entry.

    As with Linux and Linux distros, the project is more about
    promoting the project than providing something for people to
    improve their workflow.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ivan Shmakov@ivan@siamics.netREMOVE.invalid to comp.misc on Thu Dec 25 06:20:31 2025
    From Newsgroup: comp.misc

    Sami Tikkanen wrote:

    http://sininenankka.dy.fi/leetos/swbloat.php

    lEEt/OS seems like an interesting project, and I certainly have
    sympathy for its "easily programmable", "user always in control"
    and "keep old computers in use" [philosophy] as well.

    I have my doubts regarding the goals it sets and how it tries
    to approach them, but there might be some overlap with my own
    efforts, and I wouldn't mind contributing my code to this project.

    In particular, regarding "old computers," I've seen recent reports
    of running, successfully, current NetBSD on 1990-era hardware:
    MIPS- and 80486-based. (The bundled version of GCC apparently
    takes /minutes/ to compile "hello world" on a 80486, though.
    Sadly cannot test it myself: my only Socket 3 mainboard reports
    "BIOS ROM checksum error.")

    Personally, I /think/ that while a DOS-like system makes every
    sense for something like IBM PC/XT, NetBSD - with its focus on
    portability - fits rather well for 486+ and comparable machines.

    The webpage is no doubt a bit presumptuous on the whole, but
    the "counterclaims" given IMO have merit.

    Anyway, thanks Ben Collver for bringing it here.

    [philosophy] http://sininenankka.dy.fi/leetos/philosophy.php

    "New computers are more efficient than old ones; therefore we need
    to make all software so bloated that it does not run on old computers,
    to make sure that those old computer become obsolete and people stop
    using them."

    This one is often used by environmentalists, and it is wrong in so
    many obvious ways.

    That's about as "environmentalist" as playing games on Steam -
    on GNU/Linux - is "free software activism."

    * New computers generally consume more power than old ones. Among
    x86 CPUs, anything older than a Pentium II draws only a
    single-digit number of watts.

    That's somewhat offset by the power drawn by the chipset and
    peripherals, though. For example, I've just started my
    Pentium 166 MMX-based box (PSU + mainboard + PCI VGA & NIC)
    and it's under 30 W during POST. I gather adding an IDE HDD
    there would add some 15 W (during active use) on top of that.

    For comparison, Olinuxino A64 board is specified to use a
    10 W PSU. It also has better performance, doesn't require
    IDE storage (running off an SDHC card), and takes much less
    space on one's desk. And it's OSHW, too.
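
    For a sense of scale, a quick sketch of the annual energy
    difference, taking the 30 W and 10 W figures above at face value
    and assuming around-the-clock operation:

        # Annual energy difference between the ~30 W Pentium box and
        # the ~10 W SBC above, assuming both run around the clock.
        # Rough figures, not careful measurements (and note the 30 W
        # was at the mains outlet while the 10 W is a PSU rating).
        old_w, new_w = 30, 10
        hours_per_year = 24 * 365
        diff_kwh = (old_w - new_w) * hours_per_year / 1000
        print(f"difference: about {diff_kwh:.0f} kWh per year")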

    * Bloated code also causes new computers to use more electricity
    than would otherwise be required for the task.

    Certainly. And that includes not only "installed" applications,
    but also whatever web applications one might choose to run. Or
    have no choice but to run, such as ads and captchas. See, e. g.,
    http://mdpi.com/2227-7080/8/2/18 .

    As a rule, viewing a web page takes less power than running
    a (client-side) web application. From whence, it /does/ make
    sense to disable Javascript in one's browser whenever possible
    - or to use one that has no support for JS in the first place.

    * Most importantly, when we create non-bloated computer programs,
    we are not necessarily targeting old CPUs - we are targeting old
    INSTRUCTION SETS. The patents on those old instruction sets have
    already expired, and CPUs that use them can be freely produced by
    anyone.

    I'm frankly at a loss to what extent those claims might be
    valid or relevant.

    In particular, was, say, 8086 ISA ever patented? And if
    it wasn't, or if, as the author seems to suggest, its patent
    expired, do we have chip manufacturers lining up to produce
    cheap 8086/8088 clones?

    [...]

    * If those old computers end up not being used, they are thrown
    into landfills, causing more environmental damage that way.

    The Wikipedia [e-waste] article might be a good starting point
    for researching this problem in detail. AIUI, most discarded
    consumer electronics globally do indeed end up in landfills -
    presumably for the countless future generations to "thank" us for.

    And that's something that I doubt will ever change until people
    at large start paying for recycling.

    In general, environmental benefits of switching to a newer
    computer - if there're any in the first place - would be offset
    by the environmental impacts of manufacturing, delivery of the
    new computer to the customer, delivery of the old one to the
    recycling facility, and the recycling process itself.

    Upgrade often enough, and no amount of "power efficiency" of
    your new hardware will save you from harming the environment.

    [e-waste] http://en.wikipedia.org/wiki/Electronic_waste
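
    A minimal sketch of that offset argument: amortize the energy
    embodied in manufacturing and disposal against the use-phase
    savings. Every number below is a hypothetical placeholder, to be
    replaced with real figures:

        # Break-even sketch: how long must a more efficient machine
        # run before its use-phase savings repay the energy embodied
        # in manufacturing and disposal? All inputs are hypothetical
        # placeholders, not measured data.
        embodied_kwh = 500.0          # assumed manufacturing + disposal
        old_w, new_w = 60.0, 20.0     # assumed average draw, old vs. new
        hours_per_day = 8.0           # assumed daily use

        saved_kwh_per_year = (old_w - new_w) * hours_per_day * 365 / 1000
        print(f"break-even after {embodied_kwh / saved_kwh_per_year:.1f} years")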

    "Everything new is always more secure than the old; therefore we need
    to make all software so bloated that it does not run on old computers,
    to force people to use new computers that are so much more secure than
    the old ones."

    This one is often used by corporate security experts.

    [...]

    * What corporate security experts usually mean by "old computers"
    is actually "old operating systems" (or more specifically "old
    versions of Windows"), because they somehow associate the
    individual computers with the operating system that was originally
    installed on them at the factory. But the operating system is not
    an integrated part of the computer itself - instead it is just a
    bootable program that can be easily changed.

    Whether an OS can or cannot be replaced depends largely on the
    manpower available.

    In lots of cases, buying M computers with N year technical
    support contract (and sending them to a landfill once the
    contract expires) /will/ be cheaper for a corporation than
    employing their own staff for said technical support, including
    "OS changes."

    That does not, normally, apply for /personal/ computing, but
    there's a somewhat similar issue with the availability of
    hardware: how does one replace their Pentium MBs when they die?

    "Monitors nowadays use less power than the CRTs of old; therefore,
    to save power, we must make bloated user interfaces that don't work
    with small resolutions."

    This one is often used by HD/4K/8K enthusiasts.

    * Old CRTs don't really use that much power at all - a typical 15"
    color CRT uses less than most incandescent lightbulbs.

    Which is still a considerable amount of power.

    Whether that's an issue would depend on the climate. Around
    here, there's typically a couple of weeks in summer when the
    weather is hot. It is thus not unreasonable to run a fraction
    of kW worth of "inefficient" computing hardware the rest of the
    year - as a kind of a "data furnace."

    Conversely, someone living in the climate where AC is a must
    during a significant fraction of the year would perhaps prefer
    not to waste grid power to create extra work for their AC.

    Monochrome CRTs are even less power hungry, usually consuming
    somewhere between 15 and 30 watts.

    [...]

    Although there are exceptions to the rule, most CRT displays are
    actually quite power efficient for a self-illuminating display
    technology.

    I'd like to run some tests, but note that unlike monochrome CRTs,
    color ones employ shadow masks, which, if [cathode ray tube] is
    to be believed, "block 80-85% of the electron beam" - and thus
    ought to have about 20% of the efficiency of the monochromes.

    [cathode ray tube] http://en.wikipedia.org/wiki/Cathode_ray_tube

    * With "modern" flat panel displays, especially OLEDs, the power
    consumption grows in an almost linear fashion with the area of the
    display. This means that we can actually save more power by
    creating scalable user interfaces that also work well on smaller
    display resolutions.

    It's perhaps worth noting that portable computers, such as
    tablets, tend to employ higher-dpi displays than those common
    for "desktops" (300 dpi vs. 100 dpi, unless I be mistaken.)
    Thus it is possible to have small, energy-efficient displays
    that still have lots of pixels.
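
    A quick sketch of that, using the rough dpi figures above (the
    4:3 aspect ratio and the 10"/24" sizes are assumptions chosen for
    illustration): the small high-dpi panel ends up with more pixels
    than the much larger desktop one.

        # Pixel counts from diagonal and dpi. The dpi values are the
        # rough figures quoted above; sizes and aspect are examples.
        def pixels(diagonal_in, dpi, aspect=(4, 3)):
            w, h = aspect
            scale = (diagonal_in ** 2 / (w * w + h * h)) ** 0.5
            return round(w * scale * dpi) * round(h * scale * dpi)

        print(pixels(10, 300))   # 10" tablet-class panel: 4,320,000
        print(pixels(24, 100))   # 24" desktop-class panel: 2,764,800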
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From not@not@telling.you.invalid (Computer Nerd Kev) to comp.misc on Fri Dec 26 07:41:04 2025
    From Newsgroup: comp.misc

    Ivan Shmakov <ivan@siamics.netremove.invalid> wrote:
    Sami Tikkanen wrote:
    * Most importantly, when we create non-bloated computer programs,
    we are not necessarily targeting old CPUs - we are targeting old
    INSTRUCTION SETS. The patents on those old instruction sets have
    already expired, and CPUs that use them can be freely produced by
    anyone.

    I'm frankly at a loss to what extent those claims might be
    valid or relevant.

    In particular, was, say, 8086 ISA ever patented? And if
    it wasn't, or if, as the author seems to suggest, its patent
    expired, do we have chip manufacturers lining up to produce
    cheap 8086/8088 clones?

    I don't know about the legal aspects, but regarding the relevance I
    think the application would be FPGA implementations rather than
    reproducing the original chips. This is one such project for the
    486:
    486:

    https://github.com/MiSTer-devel/ao486_MiSTer

    There are similar projects for the 586, though I'm not sure if
    they're as complete. I assume earlier x86 CPUs have been done
    too.

    * If those old computers end up not being used, they are thrown
    into landfills, causing more environmental damage that way.

    The Wikipedia [e-waste] article might be a good starting point
    for researching this problem in detail. AIUI, most discarded
    consumer electronics globally do indeed end up in landfills -
    presumably for the countless future generations to "thank" us for.

    And that's something that I doubt will ever change until people
    at large start paying for recycling.

    In some Australian states they banned disposing of electronics
    in rubbish. The government pays to offer "free" electronics
    recycling bins, which of course really means we all pay for it.
    The companies that collect from those bins charge to recycle the
    contents. Though since that can involve sending the stuff overseas
    to places where it may not be illegal to put it in landfill, there
    seems to be ample scope for companies to rip the government off on
    that.

    One TV show added a GPS tracker to a dress sent for recycling, and
    though it was eventually cut up for rags in Australia, that was
    only after it had been trucked all over the country between various
    warehouses. I imagine the efficiency is even worse with
    electronics recycling.

    In general, environmental benefits of switching to a newer
    computer - if there're any in the first place - would be offset
    by the environmental impacts of manufacturing, delivery of the
    new computer to the customer, delivery of the old one to the
    recycling facility, and the recycling process itself.

    Upgrade often enough, and no amount of "power efficiency" of
    your new hardware will save you from harming the environment.

    [e-waste] http://en.wikipedia.org/wiki/Electronic_waste

    Of course even if you do get disposal perfect, the environmental
    damage and energy use from making new tech still applies. I'd
    assume that outweighs the environmental cost of disposal anyway.
    --
    __ __
    #_ < |\| |< _#
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Kettlewell@invalid@invalid.invalid to comp.misc on Thu Dec 25 22:19:43 2025
    From Newsgroup: comp.misc

    Ivan Shmakov <ivan@siamics.netREMOVE.invalid> writes:
    * Most importantly, when we create non-bloated computer programs,
    we are not necessarily targeting old CPUs - we are targeting old
    INSTRUCTION SETS. The patents on those old instruction sets have
    already expired, and CPUs that use them can be freely produced by
    anyone.

    I'm frankly at a loss to what extent those claims might be
    valid or relevant.

    In particular, was, say, 8086 ISA ever patented? And if
    it wasn't, or if, as the author seems to suggest, its patent
    expired, do we have chip manufacturers lining up to produce
    cheap 8086/8088 clones?

    Chip manufacturers produced cheap 8086 clones when it was a worthwhile
    thing to do, i.e. in the 1980s. Today it would be a bizarre choice.

    If you want an IP-free ISA then RISC-V is the place to look today,
    although the extension system is a bit of a maze...

    * If those old computers end up not being used, they are thrown
    into landfills, causing more environmental damage that way.

    The Wikipedia [e-waste] article might be a good starting point
    for researching this problem in detail. AIUI, most discarded
    consumer electronics globally do indeed end up in landfills -
    presumably for the countless future generations to "thank" us for.

    Working computers I generally dispose of by passing on to friends
    or colleagues. Last I heard my 2006 laptop was still going strong.
    For my most recent upgrade I was able to trade in the old one for
    £100 or so.

    For less useful electronics I take a trip to the local computer
    recycling firm once every few years. They've never charged me a
    penny so presumably they're getting some kind of value out of my
    electronic waste.
    --
    https://www.greenend.org.uk/rjk/
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From kludge@kludge@panix.com (Scott Dorsey) to comp.misc on Fri Dec 26 08:53:31 2025
    From Newsgroup: comp.misc

    Cloud Nine <cloud@nine.invalid> wrote:

    Some of the ideas in plan9 are useful. However, the plan9 project
    fails to provide a coherent and intuitive GUI, editor, web browser,
    and file browser that are ready to use out of the box. It is too
    arcane for mere mortals to even try to use.

    On one hand, it was never really intended for that. On the other hand,
    Linux was really never intended for that anyway and community support
    changed it until it slowly and then suddenly took off.

    Having great ideas does no good when a compsci PhD is the bar to
    entry.

    But CS students and professors were the original audience for the
    system. Not to say that it couldn't become wider-known, but it
    didn't. The folks writing it didn't plan for it to grow (any more
    than Linus planned for Linux to grow).

    As with Linux and Linux distros, the project is more about
    promoting the project than providing something for people to
    improve their workflow.

    This is true. There were plenty of other Unixlike kernels out there
    at the time Linux came about, from little things like xinu and
    minix to commercial products like qnx. But the community is what
    made Linux what it is today.
    --scott
    --
    "C'est un Nagra. C'est suisse, et tres, tres precis."
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From kludge@kludge@panix.com (Scott Dorsey) to comp.misc on Fri Dec 26 09:05:04 2025
    From Newsgroup: comp.misc

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Wed, 24 Dec 2025 15:44:02 -0500 (EST), Scott Dorsey wrote:

    On Wed, 24 Dec 2025 19:02:02 -0000 (UTC), Lawrence D'Oliveiro wrote:

    What exactly does Plan9 get you?

    The ability to have as much stuff as possible running outside the
    kernel ring. The more stuff you can kick out of the kernel, the less
    stuff there is which can cause catastrophic failure when things go
    wrong.

    Ah, the hoary old microkernel refrain. You'd think, after something
    like four decades of repeating the same tired old claims without being
    able to back them up, the microkernel fans would have given up by now.

    How so? The microkernel does exactly what it's claimed to do, and
    it's a very commonly used architecture.

    It's usually not a good idea on the x86, because message-passing
    performance between user-space programs becomes very poor due to
    the x86 memory protection limitations.

    You do see a lot of microkernels in embedded systems today, on
    architectures more friendly to user-space sharing. And of course
    most of the hypervisors in use for virtualization today are
    microkernels, since the user space limitations aren't such a big
    deal there.

    OSX started out adapting some of the Plan9 philosophy but it mostly
    turned into bloat and the current OSX kernel looks nothing like a
    classic microkernel.

    Gee, I wonder why they succumbed to real-world evidence in that way ...

    There's still a microkernel at the bottom of OSX, but really the
    XNU kernel mostly just runs BSD under itself. OSX does still use
    Mach message passing, but I think the end result is both contrary
    to the microkernel philosophy and generally contrary to good
    design practices. It's just another case of feeping creaturism
    rather than a deliberate move in any direction.
    --scott
    --
    "C'est un Nagra. C'est suisse, et tres, tres precis."
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.misc on Fri Dec 26 21:07:02 2025
    From Newsgroup: comp.misc

    On Fri, 26 Dec 2025 09:05:04 -0500 (EST), Scott Dorsey wrote:

    On Wed, 24 Dec 2025 22:51:11 -0000 (UTC), Lawrence D'Oliveiro wrote:

    On Wed, 24 Dec 2025 15:44:02 -0500 (EST), Scott Dorsey wrote:

    On Wed, 24 Dec 2025 19:02:02 -0000 (UTC), Lawrence D'Oliveiro wrote:

    What exactly does Plan9 get you?

    The ability to have as much stuff as possible running outside the
    kernel ring. The more stuff you can kick out of the kernel, the
    less stuff there is which can cause catastrophic failure when
    things go wrong.

    Ah, the hoary old microkernel refrain. You'd think, after something
    like four decades of repeating the same tired old claims without
    being able to back them up, the microkernel fans would have given
    up by now.

    How so? The microkernel does exactly what it's claimed to do ...

    You claimed it yourself, that it is somehow supposed to offer
    greater reliability. Only it never does. All it does is sap
    performance, at the very lowest level of the software stack where
    it matters most.

    ... and it's a very commonly used architecture.

    Many keep trying to use it, but many don't succeed.

    E.g. GNU Hurd. That has been in development for about as long as
    Linux has. There are grown adults walking the Earth who were not
    alive when the project started. It still hasn't been able to get
    close to production quality.

    So much for microkernels making things less likely to go wrong,
    eh?

    There's still a microkernel at the bottom of OSX ...

    I think the BSD kernel started out as something vaguely
    microkernel-based, but that got severely compromised over time, for
    the sake of performance if nothing else.

    Which again, proves my point.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From kludge@kludge@panix.com (Scott Dorsey) to comp.misc on Fri Dec 26 19:05:34 2025
    From Newsgroup: comp.misc

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    ... and it's a very commonly used architecture.

    Many keep trying to use it, but many don't succeed.

    Odds are your car's media system uses qnx. Your laser printer and
    toaster might use VxWorks. Satellites and aircraft use cFS. Lots
    of places where people care about reliability and verifiability
    use microkernels.


    There's still a microkernel at the bottom of OSX ...

    I think the BSD kernel started out as something vaguely
    microkernel-based, but that got severely compromised over time, for
    the sake of performance if nothing else.

    No, I am talking about the XNU stuff that sits below the BSD layer
    in OSX.
    --scott
    --
    "C'est un Nagra. C'est suisse, et tres, tres precis."
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.misc on Sat Dec 27 05:55:22 2025
    From Newsgroup: comp.misc

    On Fri, 26 Dec 2025 19:05:34 -0500 (EST), Scott Dorsey wrote:

    On Fri, 26 Dec 2025 21:07:02 -0000 (UTC), Lawrence D'Oliveiro wrote:

    ... and it's a very commonly used architecture.

    Many keep trying to use it, but many don't succeed.

    Odds are your car's media system uses qnx.

    No. Though I did use one that was based on Linux. That seems to be
    more common these days.

    Your laser printer and toaster might use VxWorks. Satellites and
    aircraft use cFS. Lots of places where people care about reliability
    and verifiability use microkernels.

    Linux seems to be taking over most of those.

    Remember the "Ingenuity" helicopter that was sent to Mars, and
    performed so admirably beyond its projected lifespan? That was
    running Linux.

    There's still a microkernel at the bottom of OSX ...

    I think the BSD kernel started out as something vaguely
    microkernel-based, but that got severely compromised over time, for
    the sake of performance if nothing else.

    No, I am talking about the XNU stuff that sits below the BSD layer
    in OSX.

    So was I.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ivan Shmakov@ivan@siamics.netREMOVE.invalid to comp.misc on Sat Dec 27 10:20:40 2025
    From Newsgroup: comp.misc

    On 2025-12-25, Richard Kettlewell wrote:
    Ivan Shmakov <ivan@siamics.netREMOVE.invalid> writes:
    Sami Tikkanen wrote:

    The patents on those old instruction sets have already expired and
    CPUs that use them can be freely produced by anyone.

    I'm frankly at a loss to what extent those claims might be valid
    or relevant.

    In particular, was, say, 8086 ISA ever patented? And if it wasn't,
    or if, as the author seems to suggest, its patent expired, do we
    have chip manufacturers lining up to produce cheap 8086/8088 clones?

    Chip manufacturers produced cheap 8086 clones when it was a worthwhile
    thing to do, i. e. in the 1980s. Today it would be a bizarre choice.

    http://en.wikipedia.org/wiki/Motorola_6809 gives no citations,
    but still has the following bit:

    6809> In 2015, Freescale authorized Rochester Electronics to start
    6809> manufacturing the MC6809 once again as a drop-in replacement
    6809> and copy of the original NMOS device. Freescale supplied
    6809> Rochester the original GDSII physical design database.
    6809> At the end of 2016, Rochester's MC6809 (including the MC68A09,
    6809> and MC68B09) is fully qualified and available in production.

    So perhaps producing 8086/8088 clones in some form wouldn't
    be all that bizarre after all.

    I personally wouldn't mind an 8086-based PC-104 SBC to run
    FreeDOS and (or) ELKS on, for instance.

    One good thing about such hardware is that one can learn
    the software that runs on it inside-out in a modest amount
    of time. Such as over a semester. It might be a kind of
    "spherical horse oscillating in vacuum," but such problems
    /are/ the staple of university-grade physics education, and I
    see no reason to doubt it would also work for CS pretty well.
    (Especially considering the availability of emulators.)

    An alternative that doesn't involve hardware no longer being
    produced is using MCUs or MCU-based boards, such as Arduino.

    If you want an IP-free ISA then RISC-V is the place to look today,
    although the extension system is a bit of a maze...

    Which renders the point moot, I suppose. There /might/ be ISAs
    that are both still relevant and no longer patented, but I'm
    not aware of any.

    While not eager to try "big" RISC-V-based machines right away,
    I wouldn't mind trying out RISC-V MCUs, such as, say, CH32V203.

    Wonder what software I'd need for that. I gather JTAG support
    is yet to be standardized, so there needs to be some ad hoc
    flasher? Also, simulators seem to be in short supply. (Unlike,
    say, AVR, for which there's Simavr.)

    Working computers I generally dispose of by passing on to friends
    or colleagues. Last I heard my 2006 laptop was still going strong.
    For my most recent upgrade I was able to trade in the old one for
    100 GBP or so.

    For 2006 computers to remain usable and useful, someone needs
    to maintain software that runs on them. Which is one of the
    points Sami tries to make, one with which I agree completely.

    For less useful electronics I take a trip to the local computer
    recycling firm once every few years. They've never charged me a
    penny so presumably they're getting some kind of value out of my
    electronic waste.

    The nearest recycling facility (or facilities) I'm aware of are
    some 300 km from where I live. They provide recycling bins for
    polyethylene terephthalate (PET) bottles in the more densely
    populated parts of the city, and they send a truck a few times
    a year to gather some other plastics (such as polypropylene, PP,
    but sadly not the styrofoam popular in grocery packaging.)

    There's reportedly some capacity to recycle electronics over
    there, but not something I have familiarity with.

    We've been able to sell bulkier items (microwave ovens, washing
    machines), as well as used tin cans, for scrap metal, though.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Kettlewell@invalid@invalid.invalid to comp.misc on Sat Dec 27 15:44:19 2025
    From Newsgroup: comp.misc

    Ivan Shmakov <ivan@siamics.netREMOVE.invalid> writes:
    On 2025-12-25, Richard Kettlewell wrote:
    Chip manufacturers produced cheap 8086 clones when it was a worthwhile
    thing to do, i. e. in the 1980s. Today it would be a bizarre choice.

    http://en.wikipedia.org/wiki/Motorola_6809 gives no citations,
    but still has the following bit:

    6809> In 2015, Freescale authorized Rochester Electronics to start
    6809> manufacturing the MC6809 once again as a drop-in replacement
    6809> and copy of the original NMOS device. Freescale supplied
    6809> Rochester the original GDSII physical design database.
    6809> At the end of 2016, Rochester's MC6809 (including the MC68A09,
    6809> and MC68B09) is fully qualified and available in production.

    It's still on their price list at over $100/unit in volume, which
    seems rather expensive. I wonder what their expected market is?

    If you want an IP-free ISA then RISC-V is the place to look today,
    although the extension system is a bit of a maze...

    Which renders the point moot, I suppose. There /might/ be ISAs
    that are both still relevant and no longer patented, but I'm
    not aware of any.

    MIPS held that role for a while, AFAIK primarily as an educational
    choice.
    --
    https://www.greenend.org.uk/rjk/
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Ivan Shmakov@ivan@siamics.netREMOVE.invalid to comp.misc on Sat Dec 27 20:35:13 2025
    From Newsgroup: comp.misc

    On 2025-12-25, I wrote:
    Sami Tikkanen wrote:

    * New computers generally consume more power than old ones. Among
    x86 CPUs, anything older than a Pentium II draws only a
    single-digit number of watts.

    That's somewhat offset by the power drawn by the chipset and
    peripherals, though. For example, I've just started my
    Pentium 166 MMX-based box (PSU + mainboard + PCI VGA & NIC) and
    it's under 30 W during POST. I gather adding an IDE HDD there
    would add some 15 W (during active use) on top of that.

    For comparison, Olinuxino A64 board is specified to use a 10 W PSU.

    Note, however, that 30 W above was measured at the mains
    outlet, while 10 W is what's expected at the PSU output.
    Whatever overhead PSU itself might have is not accounted for.

    Although there are exceptions to the rule, most CRT displays are
    actually quite power efficient for a self-illuminating display
    technology.

    I'd like to run some tests, but note that unlike monochrome CRTs,
    color ones employ shadow masks, which, if [cathode ray tube] is
    to be believed, "block 80-85% of the electron beam" - and thus
    ought to have about 20% of the efficiency of the monochromes.

    [cathode ray tube] http://en.wikipedia.org/wiki/Cathode_ray_tube

    I've measured the power consumption of a monochrome TVM MG-14III
    CRT monitor (about 12.5" measured viewable diagonal), a color
    GoldStar 1468 CRT (14" overall diagonal, 12.9" viewable, per the
    label), and a Samsung 723N color LCD, both active and idle (VGA
    cable connected to a turned-off PC; except for the MG-14III,
    which doesn't seem to have such a mode), at brightness settings
    I personally find comfortable. The aspect ratios are an educated
    guess on my part. The results are as follows.

    Model  Mfg.     Origin  Type   Aspect  D, in  S, in^2  P, W (idle)  P/S, W/in^2

    14III  1993-10  RoC     CRT/M  4:3     12.5    75.0    25  (-)      0.333
    1468   1996-03  RoK     CRT/C  4:3     12.9    79.9    62  (7)      0.776
    723N   2010-02  PRC     LCD/C  5:4     17.0   141.0    20  (2)      0.142

    So, 1468 offers about 57% of the viewable screen area of 723N -
    at thrice its power consumption, making it about 18% as efficient
    (as can be seen from the rightmost "watts per square inch" column.)

    The monochrome MG-14III fares better, yet it still draws more
    than twice the power per unit of screen area that 723N does.

    Then again, assuming that larger LCDs have about the same
    efficiency as 723N, at 20" they will take about the same 25 W
    as 14III, and at 31" they'll draw more power than 1468. Hence a
    person who uses an old, 14" CRT /might/ be wasting less power
    than someone who uses a modern big flat LCD (setting aside all
    other consequences of using a CRT monitor.)
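    Those break-even figures follow from the usual diagonal-to-area
    formula; a small C sketch (assuming the larger LCDs keep 723N's
    5:4 aspect and per-area draw) reproduces them:

        /* Screen area from diagonal d and aspect w:h:
         *   S = d^2 * w*h / (w^2 + h^2)
         * Scales 723N's measured W/in^2 up to larger panels. */
        #include <stdio.h>

        static double area(double d, double w, double h)
        {
            return d * d * w * h / (w * w + h * h);
        }

        int main(void)
        {
            double eff = 20.0 / area(17.0, 5, 4);   /* ~0.142 W/in^2 */
            printf("20\": %.1f W\n", eff * area(20.0, 5, 4));
            printf("31\": %.1f W\n", eff * area(31.0, 5, 4));
            return 0;  /* ~28 W vs 14III's 25 W; ~67 W vs 1468's 62 W */
        }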
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.misc on Sat Dec 27 21:29:35 2025
    From Newsgroup: comp.misc

    On Sat, 27 Dec 2025 15:44:19 +0000, Richard Kettlewell wrote:

    Ivan Shmakov <ivan@siamics.netREMOVE.invalid> writes:

    There /might/ be ISAs that are both still relevant and no longer
    patented, but I'm not aware of any.

    MIPS held that role for a while, AFAIK primarily as an educational
    choice.

    No reason why you can't still use it for that.

    I believe they got rid of delayed branches a while back.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From kludge@kludge@panix.com (Scott Dorsey) to comp.misc on Sat Dec 27 17:50:32 2025
    From Newsgroup: comp.misc

    Richard Kettlewell <invalid@invalid.invalid> wrote:
    6809> In 2015, Freescale authorized Rochester Electronics to start
    6809> manufacturing the MC6809 once again as a drop-in replacement
    6809> and copy of the original NMOS device. Freescale supplied
    6809> Rochester the original GDSII physical design database.
    6809> At the end of 2016, Rochester's MC6809 (including the MC68A09,
    6809> and MC68B09) is fully qualified and available in production.

    It's still on their price list at over $100/unit in volume,
    which seems rather expensive. I wonder what their expected
    market is?

    It's Rochester, so it's either military, aviation or space, and it might
    be mostly for one customer. They often will have a single customer willing
    to put an obsolete item into production for a lifetime buy, and then they
    put some more in stock while they are at it.

    These are customers for whom engineering changes are very difficult and
    usually require recertification (and sometimes even revalidation) so they
    are willing to pay a lot to avoid a change.
    --scott
    --
    "C'est un Nagra. C'est suisse, et tres, tres precis."
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From D Finnigan@dog_cow@macgui.com to comp.misc on Thu Jan 8 10:39:31 2026
    From Newsgroup: comp.misc

    On 12/22/25 8:36 AM, Ben Collver wrote:
    On 2025-12-22, Computer Nerd Kev <not@telling.you.invalid> wrote:
    At least so far as the first argument goes, it definitely reflects
    the attitude of some Linux kernel developers:

    Regarding power usage it's fairly simple:

    Older computers had smaller wattage power supplies, and the typical
    usage pattern was to power down when you weren't using it.

    The wattage rating on a power supply is the maximum power that
    the supply can safely provide. It does not mean that, for example,
    a machine with a 300 W supply is continuously drawing 2.5 A at
    120 V. It will instead provide whatever power is demanded of it,
    up to the limit of its rated capacity, after which an over-current
    device will trip and shut down the output.
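    A minimal sketch of that arithmetic in C (the 60 W idle figure
    is purely hypothetical):

        /* Nameplate rating is a capacity ceiling, not a constant draw. */
        #include <stdio.h>

        int main(void)
        {
            double rating_w = 300.0;  /* nameplate capacity           */
            double mains_v  = 120.0;
            double actual_w =  60.0;  /* HYPOTHETICAL idle draw       */
            printf("ceiling: %.2f A\n", rating_w / mains_v); /* 2.50 */
            printf("actual:  %.2f A\n", actual_w / mains_v); /* 0.50 */
            return 0;
        }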
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Anthk NM@anthk@disroot.org to comp.misc on Mon Jan 12 07:23:22 2026
    From Newsgroup: comp.misc

    On 2025-12-25, Ivan Shmakov <ivan@siamics.netREMOVE.invalid> wrote:
    Sami Tikkanen wrote:

    http://sininenankka.dy.fi/leetos/swbloat.php

    lEEt/OS seems like an interesting project, and I certainly have
    sympathy for its "easily programmable", "user always in control"
    and "keep old computers in use" [philosophy] as well.

    I have my doubts regarding the goals it sets and how it tries
    to approach them, but there might be some overlap with my own
    efforts, and I wouldn't mind contributing my code to this project.

    In particular, regarding "old computers," I've seen recent reports
    of successfully running current NetBSD on 1990s-era hardware:
    MIPS- and 80486-based. (The bundled version of GCC apparently
    takes /minutes/ to compile "hello world" on an 80486, though.
    Sadly, I cannot test it myself: my only Socket 3 mainboard reports
    "BIOS ROM checksum error.")

    Personally, I /think/ that while a DOS-like system makes perfect
    sense for something like the IBM PC/XT, NetBSD - with its focus
    on portability - fits rather well for 486+ and comparable machines.

    The webpage is no doubt a bit presumptuous on the whole, but
    the "counterclaims" given IMO have merit.

    Anyway, thanks Ben Collver for bringing it here.

    [philosophy] http://sininenankka.dy.fi/leetos/philosophy.php

    "New computers are more efficient than old ones; therefore we need
    to make all software so bloated that it does not run on old computers,
    to make sure that those old computer become obsolete and people stop
    using them."

    This one is often used by environmentalists, and it is wrong in so
    many obvious ways.

    That's about as "environmentalist" as playing games on Steam -
    on GNU/Linux - is "free software activism."

    * New computers generally consume more power than old ones. Of x86
    CPUs anything older than Pentium II uses only a single-digit amount
    of watts.

    That's somewhat offset by the power drawn by the chipset and
    peripherals, though. For example, I've just started my
    Pentium 166 MMX-based box (PSU + mainboard + PCI VGA & NIC)
    and it's under 30 W during POST. I gather adding an IDE HDD
    there would add some 15 W (during active use) on top of that.

    For comparison, the Olinuxino A64 board is specified to use a
    10 W PSU. It also has better performance, doesn't require
    IDE storage (running off an SDHC card), and takes much less
    space on one's desk. And it's OSHW, too.

    * Bloated code causes also new computers to use more electricity
    than would otherwise be required for the task.

    Certainly. And that includes not only "installed" applications,
    but also whatever web applications one might choose to run. Or
    have no choice but to run, such as ads and captchas. See, e. g.,
    http://mdpi.com/2227-7080/8/2/18 .

    As a rule, viewing a web page takes less power than running
    a (client-side) web application. Hence it /does/ make sense
    to disable Javascript in one's browser whenever possible
    - or to use one that has no support for JS in the first place.

    * Most importantly, when we create non-bloated computer programs,
    we are not necessarily targeting old CPUs - we are targeting old
    INSTRUCTION SETS. The patents of those old instruction sets are
    already expired and CPUs that use them can be freely produced by
    anyone.

    I'm frankly at a loss as to what extent those claims might be
    valid or relevant.

    In particular, was, say, the 8086 ISA ever patented? And if
    it wasn't, or if, as the author seems to suggest, its patent
    expired, do we have chip manufacturers lining up to produce
    cheap 8086/8088 clones?

    [...]

    * If those old computers end up not being used, they are thrown to
    landfills, causing more environmental damage that way.

    The Wikipedia [e-waste] article might be a good starting point
    for researching this problem in detail. AIUI, most discarded
    consumer electronics globally do indeed end up in landfills -
    presumably for the countless future generations to "thank" us for.

    And that's something that I doubt will ever change until people
    at large start paying for recycling.

    In general, the environmental benefits of switching to a newer
    computer - if there are any in the first place - would be offset
    by the environmental impacts of manufacturing, delivery of the
    new computer to the customer, delivery of the old one to the
    recycling facility, and the recycling process itself.

    Upgrade often enough, and no amount of "power efficiency" of
    your new hardware will save you from harming the environment.

    [e-waste] http://en.wikipedia.org/wiki/Electronic_waste
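    A break-even sketch makes the point concrete; every input below
    is an assumption chosen only to show the shape of the trade-off,
    not a measured figure:

        /* Hours of operation before an upgrade's power savings repay
         * the manufacturing footprint of the new machine. */
        #include <stdio.h>

        int main(void)
        {
            double embodied_kg = 250.0; /* ASSUMED kgCO2e to build a PC   */
            double savings_w   =  20.0; /* ASSUMED power saved by upgrade */
            double grid        =   0.4; /* ASSUMED kgCO2e per kWh         */
            double hours = embodied_kg / (savings_w / 1000.0 * grid);
            printf("break-even: ~%.0f h (~%.1f years of 24/7 use)\n",
                   hours, hours / (24.0 * 365.0));
            return 0;   /* ~31250 h, i.e. about 3.6 years */
        }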

    "Everything new is always more secure than the old; therefore we need
    to make all software so bloated that it does not run on old computers,
    to force people to use new computers that are so much more secure than
    the old ones."

    This one is often used by corporate security experts.

    [...]

    * What corporate security experts usually mean with "old computers"
    is actually "old operating systems" (or more specifically "old
    versions of Windows"), because they somehow associate the
    individual computers with the operating system that was originally
    installed to them in the factory. But the operating system is not
    an integrated part of the computer itself - instead it is just a
    bootable program that can be easily changed.

    Whether an OS can or cannot be replaced depends largely on the
    manpower available.

    In lots of cases, buying M computers with an N-year technical
    support contract (and sending them to a landfill once the
    contract expires) /will/ be cheaper for a corporation than
    employing their own staff for said technical support, including
    "OS changes."

    That does not, normally, apply to /personal/ computing, but
    there's a somewhat similar issue with the availability of
    hardware: how does one replace their Pentium mainboards when
    they die?

    "Monitors nowadays use less power than the CRTs of old; therefore,
    to save power, we must make bloated user interfaces that don't work
    with small resolutions."

    This one is often used by HD/4K/8K enthusiasts.

    * Old CRTs don't really use that much power at all - a typical 15"
    color CRT uses less than most lightbulbs.

    Which is still a considerable amount of power.

    Whether that's an issue would depend on the climate. Around
    here, there's typically only a couple of weeks in summer when
    the weather is hot. It is thus not unreasonable to run a
    fraction of a kW worth of "inefficient" computing hardware the
    rest of the year - as a kind of "data furnace."

    Conversely, someone living in a climate where AC is a must
    during a significant fraction of the year would perhaps prefer
    not to waste grid power creating extra work for their AC.

    Monochrome CRTs are even less power hungry, usually consuming
    somewhere between 15 and 30 watts of power.

    [...]

    Although there are exceptions to the rule, most CRT displays are
    actually quite power efficient for a self-illuminating display
    technology.

    I'd like to run some tests, but note that unlike monochrome CRTs,
    color ones employ shadow masks, which, if [cathode ray tube] is
    to be believed, "block 80-85% of the electron beam" - so only
    15-20% of it reaches the phosphor, and color tubes thus ought to
    have about 20% of the efficiency of the monochromes.

    [cathode ray tube] http://en.wikipedia.org/wiki/Cathode_ray_tube

    * With "modern" flat panel displays, especially OLEDs, the power
    consumption grows in an almost linear fashion with the area of the
    display. This means that we can actually save more power by
    creating scalable user interfaces that also work well on smaller
    display resolutions.

    It's perhaps worth noting that portable computers, such as
    tablets, tend to employ higher-dpi displays than those common
    for "desktops" (300 dpi vs. 100 dpi, unless I be mistaken.)
    Thus it is possible to have small, energy-efficient displays
    that still have lots of pixels.
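    Since the pixel count scales with (diagonal x dpi) squared, a
    7" panel at 300 dpi carries exactly the same pixel grid as a
    21" monitor at 100 dpi - a quick C check (4:3 aspect assumed):

        /* Pixel count of a 4:3 panel: width = 0.8*d, height = 0.6*d. */
        #include <stdio.h>

        static double mpixels(double diag_in, double dpi)
        {
            return (0.8 * diag_in * dpi) * (0.6 * diag_in * dpi) / 1e6;
        }

        int main(void)
        {
            printf("7\"  @ 300 dpi: %.2f Mpx\n", mpixels( 7.0, 300));
            printf("21\" @ 100 dpi: %.2f Mpx\n", mpixels(21.0, 100));
            return 0;   /* both print 2.12 Mpx */
        }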

    Use TCC or cparser; they compile C99 code much faster than GCC.
    Also, get stuff from https://t3x.org; it has some really great
    and lightweight compilers.
    --- Synchronet 3.21a-Linux NewsLink 1.2