I defined the 'PROC rat_gcd' in global space but with Algol 68 it
could also be defined locally (to not pollute the global namespace)
like
[... snip ...]
though performance measurements showed some noticeable degradation
with a local function definition as depicted.
I'd prefer it to be local but since it's ubiquitously used in that
library the performance degradation (about 15% on avg) annoys me.
Opinions on that?
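For concreteness, a minimal sketch of the two placements being compared
(hypothetical names and a made-up MODE RAT for illustration, not the
snipped library code):

  BEGIN
     MODE RAT = STRUCT (INT num, den);

     # placement 1: the helper is declared once, in global scope #
     PROC rat_gcd = (INT a, b) INT:
     BEGIN
        INT m := ABS a, n := ABS b;
        WHILE n /= 0 DO INT t = m MOD n; m := n; n := t OD;
        m
     END;

     # placement 2: the same helper declared locally, inside the only
       routine that needs it, so the global namespace stays clean #
     PROC rat_norm = (RAT r) RAT:
     BEGIN
        PROC gcd = (INT a, b) INT:
        BEGIN
           INT m := ABS a, n := ABS b;
           WHILE n /= 0 DO INT t = m MOD n; m := n; n := t OD;
           m
        END;
        INT g = gcd (num OF r, den OF r);
        (num OF r % g, den OF r % g)
     END;

     print ((rat_gcd (12, 18), num OF rat_norm ((6, 8)),
             den OF rat_norm ((6, 8)), newline))
  END

The only difference at issue is where the helper is declared; the body
of the GCD routine is the same either way.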
On 18/08/2025 03:52, Janis Papanagnou wrote:
I defined the 'PROC rat_gcd' in global space but with Algol 68 it
could also be defined locally (to not pollute the global namespace)
like
[... snip ...]
though performance measurements showed some noticeable degradation
with a local function definition as depicted.
I can't /quite/ reproduce your problem. If I run just the
interpreter ["a68g myprog.a68g"] then on my machine the timings are identical. If I optimise ["a68g -O3 myprog.a68g"], then /first time/ through, I get a noticeable degradation [about 10% on my machine],
but the timings converge if I run them repeatedly. YMMV.
I suspect
it's to do with storage management, and later runs are able to re-use
heap storage that had to be grabbed first time.
But that could be
completely up the pole. Marcel would probably know.
If you see the same, then I suggest you don't run programs
for a first time. [:-)]
I'd prefer it to be local but since it's ubiquitously used in that
library the performance degradation (about 15% on avg) annoys me.
Opinions on that?
Personally, I'd always go for the version that looks nicer
[ie, in keeping with your own inclinations, with the spirit of A68,
with the One True (A68) indentation policy, and so on].
If you're
worried about 15%, that will be more than compensated for by your
next computer!
If you're Really Worried about 15%, then I fear it's
back to C [or whatever]; but that will cost you more than 15% in
development time.
Actually, with more tests, the variance got even greater; from 10%
to 45% degradation. The variances, though, did not converge [in my environment].
I also suspected some storage management effect; maybe that the GC
got active at various stages. (But the code did not use anything
that would require GC; to be honest, I'm puzzled.)
If you're worried about 15%, that will be more than compensated
for by your next computer!
Actually I'm very conservative concerning computers; mine is 15+
years old, and although I "recently" thought about getting an update
here it's not my priority. ;-)
[...]
If you're worried about 15%, that will be more than compensated
for by your next computer!
Actually I'm very conservative concerning computers; mine is 15+
years old, and although I "recently" thought about getting an update
here it's not my priority. ;-)
Ah. I thought I was bad, keeping computers 10 years or so!
I got a new one a couple of years back, and the difference in speed
and storage was just ridiculous.
[...] (That's actually one point that annoys me in "modern"
software development; rarely anyone seems to care economizing resource requirements.) [...]
And, by the way, thanks for your suggestions and helpful information
on my questions in all my recent Algol posts! It's also very pleasant
being able to substantially exchange ideas on this (IMO) interesting
legacy topic.
From time to time I wonder what would happen if we ran
7th Edition Unix on a modern computer.
From time to time I wonder what would happen if we ran
7th Edition Unix on a modern computer.
The Linux kernel source is currently over 40 million lines, and I
understand the vast majority of that is device drivers.
If you were to run an old OS on new hardware, that would need drivers
for that new hardware, too.
On 20/08/2025 01:43, Lawrence D'Oliveiro wrote:
If you were to run an old OS on new hardware, that would need
drivers for that new hardware, too.
Yes, but what is so special about a modern disc drive, monitor,
keyboard, mouse, ... that it needs "the vast majority" of 40M lines
more than its equivalent for a PDP-11?
[...] (That's actually one point that annoys me in "modern"
software development; rarely anyone seems to care economizing resource requirements.) [...]
On 20/08/2025 01:43, Lawrence D'Oliveiro wrote:
The Linux kernel source is currently over 40 million lines, and I
understand the vast majority of that is device drivers.
You seem to be making Janis's point, but that doesn't seem to
be your intention?
If you were to run an old OS on new hardware, that would need drivers for
that new hardware, too.
Yes, but what is so special about a modern disc drive, monitor,
keyboard, mouse, ... that it needs "the vast majority" of 40M lines more
than its equivalent for a PDP-11? [...]
On Wed, 20 Aug 2025 23:58:58 +0100, Andy Walker wrote:
On 20/08/2025 01:43, Lawrence D'Oliveiro wrote:
If you were to run an old OS on new hardware, that would need
drivers for that new hardware, too.
Yes, but what is so special about a modern disc drive, monitor,
keyboard, mouse, ... that it needs "the vast majority" of 40M lines
more than its equivalent for a PDP-11?
Keyboard and mouse -- USB. [...]
On 21/08/2025 03:59, Lawrence D'Oliveiro wrote:
[...]
You've given us a list of 20-odd features of modern systems
that have been developed since 7th Edition Unix, and could no doubt
think of another 20. What you didn't attempt was to explain why all
these nice things need to occupy 40M lines of code. That's, give or
take, 600k pages of code, call it 2000 books. That's, on your figures,
just the kernel source; [...]
What you didn't attempt was to explain why all these nice things
need to occupy 40M lines of code.
On 2025-08-19, Janis Papanagnou wrote:
On 19.08.2025 01:45, Andy Walker wrote:
If you're worried about 15%, that will be more than compensated
for by your next computer!
Actually I'm very conservative concerning computers; mine is 15+
years old, and although I "recently" thought about getting an
update here it's not my priority. ;-)
Ah. I thought I was bad, keeping computers 10 years or so! I got
a new one a couple of years back, and the difference in speed and
storage was just ridiculous.
Well, used software tools (and their updates) required me to at
least upgrade memory! (That's actually one point that annoys me
in "modern" software development; rarely anyone seems to care
economizing resource requirements.)
First, the 4e7 lines of Linux code is a somewhat unfair measure. On
my system, less than 5% of individual modules built from the Linux
source are loaded right now ...
For those who are looking for a system with more "comprehensible"
sources, I would recommend NetBSD. And if anything, I personally
find its list of supported platforms, http://netbsd.org/ports/ ,
fairly impressive.
On 2025-08-19, Janis Papanagnou wrote:
Well, used software tools (and their updates) required me to at
least upgrade memory! (That's actually one point that annoys me
in "modern" software development; rarely anyone seems to care
economizing resource requirements.)
I doubt it's so much lack of care as it is simply being not a
priority. [...]
[...]
[1] http://spectrum.ieee.org/lean-software-development
[...]
As to websites and JS libraries, for the past 25 years I've been
using as my primary one a browser, Lynx, that never had support
for JS, and likely never will have. IME, an /awful lot/ of
websites are usable and useful entirely without JS. [...]
On 2025-08-27, Lawrence D'Oliveiro wrote:
On Tue, 26 Aug 2025 18:42:05 +0000, Ivan Shmakov wrote:
For those who are looking for a system with more "comprehensible"
sources, I would recommend NetBSD. And if anything, I personally
find its list of supported platforms, http://netbsd.org/ports/ ,
fairly impressive.
Bit misleading, though. Note it counts "Xen" (a Linux-based
hypervisor) as a separate platform.
Also, look at all the different 68k, MIPS, ARM and PowerPC-based
machines that are individually listed.
Linux counts platform support based solely on CPU architecture (not surprising, since it's just a kernel, not the userland as well).
Architectures: all amd64 arm64 armel armhf i386 ppc64el riscv64 s390x
It covers all those CPUs listed (except maybe VAX), and a bunch of
others as well.
Each directory here <https://github.com/torvalds/linux/tree/master/arch> represents a separate supported architecture. Note extras like arm64,
On 2025-08-27, Janis Papanagnou wrote:
On 26.08.2025 20:42, Ivan Shmakov wrote:
On 2025-08-19, Janis Papanagnou wrote:
Well, used software tools (and their updates) required me to at
least upgrade memory! (That's actually one point that annoys me
in "modern" software development; rarely anyone seems to care
economizing resource requirements.)
I doubt it's so much lack of care as it is simply being not a
priority.
But those depend on each other.
And, to be yet more clear: I also think it's [widely] just ignorance!
(The mere existence of the article you quoted below is per se already a
strong sign of that. But also other experiences, like talks with many
IT-folks of various ages and backgrounds, reinforced my opinion on that.)
(Privately I had later written HTML/JS to create applications (with
dynamic content) since otherwise that would not have been possible;
I had no server of my own with application servers available. But I
didn't use any frameworks or external libraries. Already bad enough.)
But even with browsers and JS activated, with my old Firefox I cannot
use or read many websites nowadays, because they demand newer browser
versions.
On Wed, 27 Aug 2025 00:28:20 -0000 (UTC), Lawrence D'Oliveiro
wrote:
Bit misleading, though. Note it counts "Xen" (a Linux-based
hypervisor) as a separate platform.
What do you mean by "Linux-based"?
NetBSD supports running as both Xen domU (unprivileged) /and/ dom0 (privileged.)
Linux counts platform support based solely on CPU architecture (not
surprising, since it's just a kernel, not the userland as well).
There's a "Ports by CPU architecture" section down the NetBSD
ports page; it lists 16 individual CPU architectures.
My point was that GNU/Linux distributions typically support
less ...
The way I see it, it's the /kernel/ that it takes the most
effort to port to a new platform - as it's where the support
for peripherals lives, including platform-specific ones.
But I still think that if you're interested in understanding how
your OS works - at the source code level - you'd be better with
NetBSD than with a Linux-based OS.
Not to mention that taking too long to 'polish' your product, you
risk ending up lagging behind your competitors.
[...]
On 2025-08-27, Janis Papanagnou wrote:
But even with browsers and JS activated, with my old Firefox I cannot
use or read many websites nowadays, because they demand newer browser
versions.
"Demand" how?
[...]
Not to mention that taking too long to 'polish' your product,
you risk ending up lagging behind your competitors.
On 2025-08-30, Lawrence D'Oliveiro wrote:
On Sat, 30 Aug 2025 19:39:49 +0000, Ivan Shmakov wrote:
Not to mention that taking too long to 'polish' your product, you
risk ending up lagging behind your competitors.
I would say, the open-source world is a counterexample to this.
Look at how long it took GNU and Linux to end up dominating the
entire computing landscape -- it didn't happen overnight.
I'm not sure how much of a consolation it is to the people who owned
the companies that failed, though.
Also, what indication is there that GNU is 'dominating' the
landscape? Sure, Linux is everywhere (such as in now ubiquitous
Android phones and TVs and whatnot), but I don't quite see GNU
being adopted as widely.
On 2025-08-31, Lawrence D'Oliveiro wrote:
On Sun, 31 Aug 2025 13:35:51 +0000, Ivan Shmakov wrote:
Indeed, one good thing about free software is that when one company
closes down, another can pick up and go on from there. Such as how
Netscape is no more, yet the legacy of its Navigator still survives
in Firefox.
I'm not sure how much of a consolation it is to the people who owned
the companies that failed, though.
Companies fail all the time, open source or no open source. When
a company that has developed a piece of proprietary software fails,
then the software dies with the company. With open source, the
software stands a chance of living on.
E.g. Loki was an early attempt at developing games on Linux. They
failed. But the SDL framework that they created for low-latency
multimedia graphics lives on.
Also, what indication is there that GNU is 'dominating' the
landscape? Sure, Linux is everywhere (such as in now ubiquitous
Android phones and TVs and whatnot), but I don't quite see GNU
being adopted as widely.
Look at all the markets that Linux has taken away from Microsoft --
Windows Media Center, Windows Home Server -- all defunct. Windows
Server too is in slow decline.
And now handheld gaming with the Steam Deck. You will find GNU there.
Market research firm International Data Corporation estimated that
between 3.7 and 4 million Steam Decks had been sold by the third anniversary of the device in February 2025.
On 2025-08-30, Lawrence D'Oliveiro wrote:
On Sat, 30 Aug 2025 19:10:42 +0000, Ivan Shmakov wrote:
On Wed, 27 Aug 2025 00:28:20 -0000 (UTC), Lawrence D'Oliveiro wrote:
For those who are looking for a system with more "comprehensible"
sources, I would recommend NetBSD. And if anything, I personally
find its list of supported platforms, http://netbsd.org/ports/ ,
fairly impressive.
Don't get me wrong: NetBSD won't fit for every use case Linux-based systems cover - the complexity of the Linux kernel isn't there
for nothing - but just in case you /can/ live with a "limited"
OS (say, one that doesn't support Docker), thanks to NetBSD, you
/do/ have that option.
Bit misleading, though. Note it counts "Xen" (a Linux-based
hypervisor) as a separate platform.
What do you mean by "Linux-based"?
I mean that Xen runs an actual Linux kernel in the hypervisor,
and supports regular Linux distros as guests -- they don't need to
be modified to specially support Xen, or any other hypervisor.
* common/notifier.c
*
* Routines to manage notifier chains for passing status changes to any
* interested routines.
*
* Original code from Linux kernel 2.6.27 (Alan Cox [...])
It's Linux above, and Linux below -- Linux at every layer.
NetBSD supports running as both Xen domU (unprivileged) /and/
dom0 (privileged.)
Linux doesn't count these as separate platforms. They're just
considered a standard part of regular platform support.
My point was that GNU/Linux distributions typically support less
But that's an issue with the various distributions, not with the
Linux kernel itself.
In the BSD world, there is no separation of "kernel" from "distribution".
That makes things less flexible than in the Linux world.
For example, while base Debian itself may support something under a
dozen architectures, there are offshoots of Debian that cover others.
The way I see it, it's the /kernel/ that it takes the most effort
to port to a new platform - as it's where the support for peripherals
lives, including platform-specific ones.
Given that the Linux kernel supports more of these different
platforms than any BSD can manage, I think you're just reinforcing
my point.
But I still think that if you're interested in understanding how
your OS works - at the source code level - you'd be better with
NetBSD than with a Linux-based OS.
Linux separates the kernel from the userland. That makes things
simpler than running everything together, as the BSDs do.
- I'd hesitate to call Xen at large "Linux-based." If anything,
there's way more of Linux in the GNU Mach microkernel (consider the linux/src/drivers subtree in [3], for instance) than in the Xen
hypervisor.
That, however, doesn't mean you can use Linux /by itself/ outside of
a distribution.
Suppose someone asks, "what OS would you recommend for running on
loongarch?" and the best answer we here on Usenet can give is
On 2025-08-30, Lawrence D'Oliveiro wrote:
[snip]
I mean that Xen runs an actual Linux kernel in the hypervisor,
and supports regular Linux distros as guests -- they don't need to
be modified to specially support Xen, or any other hypervisor.
It's been well over a decade since I've last used Xen, so I'm
going more by http://en.wikipedia.org/wiki/Xen than experience.
But just to be sure, I've checked the sources [1], and while
I do see portions of Linux code reused here and there - such as,
say, [2] below - I'd hesitate to call Xen at large "Linux-based."
If anything, there's way more of Linux in the GNU Mach microkernel
(consider the linux/src/drivers subtree in [3], for instance)
than in the Xen hypervisor. (And I don't recall GNU Mach being
called "Linux-based.")
Of note is that there seems to be no mention in CHANGELOG.md of
anything suggesting that Xen uses Linux as its upstream project.
* common/notifier.c
*
* Routines to manage notifier chains for passing status changes to any
* interested routines.
*
* Original code from Linux kernel 2.6.27 (Alan Cox [...])
[1] http://downloads.xenproject.org/release/xen/4.20.1/xen-4.20.1.tar.gz
[2] xen-4.20.1/xen/common/notifier.c
[3] git://git.sv.gnu.org/hurd/gnumach.git rev. 8d456cd9e417 from 2025-09-03
It's Linux above, and Linux below -- Linux at every layer.
Sure, if you want to run it that way. You can also run Xen
with NetBSD at every layer, or, apparently, OpenSolaris.
A GNU/Linux distribution AFAICR needs to provide a Xen-capable
kernel for it to be usable as dom0 - as well as Xen user-mode
tools. Niche / lightweight distributions might omit such support.
(There're a few build-time options related to Xen in Linux.)
Also, Xen supports both hardware-assisted virtualization /and/ paravirtualization. On x86-32, the former is not available, so
the Linux build /must/ support paravirtualization in order to be
usable with Xen, dom0 or domU.
When hardware-assisted virtualization /is/ available, things
certainly get easier: pretty much anything that can run under,
say, Qemu, can be run under Xen HVM. The performance may suffer,
though, should your domU system happen to lack virtio drivers and
should thus need to resort to using emulated peripherals instead.
NetBSD supports running as both Xen domU (unprivileged) /and/
dom0 (privileged.)
Linux doesn't count these as separate platforms. They're just
considered a standard part of regular platform support.
Which means one needs to be careful when comparing architecture
support between different kernels.
My point was that GNU/Linux distributions typically support less
But that's an issue with the various distributions, not with the
Linux kernel itself.
True. That, however, doesn't mean you can use Linux /by itself/
outside of a distribution. (Unless, of course, you're looking
for a kernel for a new distribution, but I doubt that undermines
my point.) So architecture support /you/ will have /will/ be
limited by the distribution you choose, regardless of what Linux
itself might offer.
In the BSD world, there is no separation of "kernel" from
"distribution". That makes things less flexible than in the Linux world.
That's debatable. Debian for a while had a kFreeBSD port (with
a variant of the FreeBSD kernel separate from FreeBSD proper), and
from what I recall, it was discontinued due to lack of volunteers,
not lack of flexibility.
For example, while base Debian itself may support something under a
dozen architectures, there are offshoots of Debian that cover others.
How is this observation helpful?
Suppose someone asks, "what OS would you recommend for running
on loongarch?" and the best answer we here on Usenet can give
is along the lines of "NetBSD won't work, but there're dozens
of Debian offshoots around - be sure to check them all, as one
might happen to support it." Really?
If you know of Debian offshoots that support architectures
that Debian itself doesn't, could you please list them here?
Or, if there's already a list somewhere, share a pointer.
The way I see it, it's the /kernel/ that it takes the most effort
to port to a new platform - as it's where the support for peripherals
lives, including platform-specific ones.
Given that the Linux kernel supports more of these different
platforms than any BSD can manage, I think you're just reinforcing
my point.
Certainly - if your point is that way more effort went into
Linux over the past two to three decades than in any of BSDs.
(And perhaps into /all/ of free BSDs combined, I'd guess.)
But I still think that if you're interested in understanding how
your OS works - at the source code level - you'd be better with
NetBSD than with a Linux-based OS.
Linux separates the kernel from the userland. That makes things
simpler than running everything together, as the BSDs do.
I fail to see why developing the kernel and an OS based on it
as subprojects to one "umbrella" project would in any way hinder
code readability.
Just in case it somehow matters, there're separate tarballs under rsync://rsync.netbsd.org/NetBSD/NetBSD-10.1/source/sets/ for the
kernel (syssrc.tgz) and userland (src, gnusrc, sharesrc, xsrc.)
That said, I've last tinkered with Linux around the days of
2.0.36 (IIRC), and I don't recall reading any Linux sources
newer than version 4. If you have experience patching newer
Linux kernels, and in particular if you find the code easy to
follow, - please share your observations.
On 2025-09-05, Lawrence D'Oliveiro wrote:
On Thu, 04 Sep 2025 18:50:29 +0000, Ivan Shmakov wrote:
I'd hesitate to call Xen at large "Linux-based." If anything,
there's way more of Linux in the GNU Mach microkernel (consider
the linux/src/drivers subtree in [3], for instance) than in the
Xen hypervisor.
Call it what you like, the fact is, Linux supports it without
having to list it as a separate platform.
You could argue equally well that NetBSD is not "BSD" any more,
because it has diverged too far from the original BSD kernel.
That, however, doesn't mean you can use Linux /by itself/ outside
of a distribution. (Unless, of course, you're looking for a kernel
for a new distribution, but I doubt that undermines my point.)
How do you think distributions get created in the first place?
<https://linuxfromscratch.org/>
Suppose someone asks, "what OS would you recommend for running on
loongarch?" and the best answer we here on Usenet can give is
<https://distrowatch.com/search.php?ostype=All&[...]>
On 2025-09-05, Dan Cross wrote:
In article <KKx97WvtTkldzxgb@violet.siamics.net>, Ivan Shmakov wrote:
On 2025-08-30, Lawrence D'Oliveiro wrote:
FYI, you are arguing with a known troll. It is unlikely to turn
into a productive exercise, so caveat emptor.
When hardware-assisted virtualization /is/ available, things
certainly get easier: pretty much anything that can run under,
say, Qemu, can be run under Xen HVM. The performance may suffer,
though, should your domU system happen to lack virtio drivers and
should thus need to resort to using emulated peripherals instead.
Yes. With Xen, you've got the Xen VMM running on the bare metal and
then any OS capable of supporting Xen's Dom0 requirements running as
Dom0, and essentially any OS running as a DomU guest.
So to summarize, you've got a hypervisor that descended from an
old version of Linux, but was heavily modified, running a gaggle
of other systems, none of which necessarily needs to be Linux.
Linux doesn't count these as separate platforms. They're just
considered a standard part of regular platform support.
Which means one needs to be careful when comparing architecture
support between different kernels.
I gathered your point was that neither Dom0 nor DomU _had_ to be
Linux, and that's true.
Note that the troll likes to subtly change the point that he's
arguing.
That said, I've last tinkered with Linux around the days of 2.0.36
(IIRC), and I don't recall reading any Linux sources newer than
version 4. If you have experience patching newer Linux kernels, and
in particular if you find the code easy to follow, - please share.
He doesn't.
On Fri, 5 Sep 2025 00:03:17 -0000 (UTC), Lawrence D'Oliveiro wrote:
On Thu, 04 Sep 2025 18:50:29 +0000, Ivan Shmakov wrote:
I'd hesitate to call Xen at large "Linux-based."
Call it what you like, the fact is, Linux supports it without
having to list it as a separate platform.
I can't say I can quite grasp the importance of doing it one way or
another ...
... you /still/ choose among distributions rather than kernels ...
... Or, in other words: "don't ask for recommendations here on
Usenet, ask a website instead."
On 20/08/2025 01:43, Lawrence D'Oliveiro wrote:
[I wrote:]
From time to time I wonder what would happen if we ran
7th Edition Unix on a modern computer.
The Linux kernel source is currently over 40 million lines, and I
understand the vast majority of that is device drivers.
You seem to be making Janis's point, but that doesn't seem to
be your intention?
If you were to run an old OS on new hardware, that would need drivers for
that new hardware, too.
Yes, but what is so special about a modern disc drive, monitor, keyboard, mouse, ... that it needs "the vast majority" of 40M lines more
than its equivalent for a PDP-11? Does this not again make Janis's point?
Granted that the advent of 32- and 64-bit integers and addresses
makes some programming much easier, and that we can no longer expect
browsers and other major tools to fit into 64+64K bytes, is the actual
bloat in any way justified?
It's not just kernels and user software --
it's also the documentation. In V7, "man cc" generates just under two
pages of output; on my current computer, it generates over 27000 lines,
call it 450 pages, and is thereby effectively unprintable and unreadable,
so it is largely wasted.
For V7, the entire documentation fits comfortably into two box
files, and the entire source code is a modest pile of lineprinter output. Most of the commands on my current computer are undocumented and unused,
and I have no idea at all what they do.
Yes, I know how that "just happens", and I'm observing rather
than complaining [I'd rather write programs, browse and send/read e-mails
on my current computer than on the PDP-11]. But it does all give food for thought.
On my desktop, kernel boot messages say "14342K kernel code". Nominally
assuming 10 bytes per source line, that means about 1.4 million lines of
running code, so a relatively small part of the total kernel source.
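As a rough check of that estimate: 14342K is about 14.7 million bytes;
at the assumed 10 bytes of compiled code per source line that gives
roughly 1.4-1.5 million lines, i.e. only some 3-4% of a 40-million-line
source tree.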
Andy Walker <anw@cuboid.co.uk> wrote:
[...]
On 20/08/2025 01:43, Lawrence D'Oliveiro wrote:
If you were to run an old OS on new hardware, that would need drivers
for that new hardware, too.
Yes, but what is so special about a modern disc drive, monitor,
keyboard, mouse, ... that it needs "the vast majority" of 40M lines more
than its equivalent for a PDP-11? Does this not again make Janis's point?
Lawrence gave a good list of things, but let me note a few additional
aspects. First, there are _a lot_ of different drivers. In PDP-11
times there was a short list of available devices. Now there are a lot
of different devices on the market and each one potentially needs a
specialised driver in the kernel. [...]
I think that comparisons with early mainframes or the PDP-11 are
misleading in the sense that on early machines programmers struggled
to fit programs into available memory. A common technique was keeping
data on disc and having multiple sequential passes. The program
itself could be split into several overlays. Use of overlays
essentially vanished with the introduction of virtual memory coupled
with multimegabyte real RAM. More relevant are comparisons
with the VAX and early Linux.
AFAICS bloat happens mostly at user level. One reason is the more
friendly attitude of modern programs: instead of numeric error
codes, programs contain actual error messages.
One reason that modern systems are big and bloated is recursive
pulling of dependencies. Namely, there is a tendency to delegate
work to libraries and more generally to depend on "standard"
tools. But this in turn creates pressure on libraries and
tools to cover "all" use cases and in particular to include
rarely used functionality.
Hmm, on my machine '/usr/bin' contains 2547 commands. IIRC a "minimal"
install gives some hundreds of commands, so most commands are from
packages that I explicitly installed or their dependencies.
On 04/10/2025 02:11, Waldek Hebisch wrote:
In PDP-11 times there was a short list of available devices. Now
there are a lot of different devices on the market and each one
potentially needs a specialised driver in the kernel. [...]
Yes, but one would expect that to drive standardisation rather than
bloat. There are rather a lot of devices that I can plug into the
mains in my home, but I don't have to install hundreds or thousands
of different types of socket.
On Tue, 7 Oct 2025 22:03:07 +0100, Andy Walker wrote:
On 04/10/2025 02:11, Waldek Hebisch wrote:
In PDP-11 times there was a short list of available devices. Now
there are a lot of different devices on the market and each one
potentially needs a specialised driver in the kernel. [...]
Yes, but one would expect that to drive standardisation rather than
bloat. There are rather a lot of devices that I can plug into the
mains in my home, but I don't have to install hundreds or thousands
of different types of socket.
Most of your electronic devices would not plug directly into the
mains, they would likely use some kind of DC adaptor/charger. How many
of those do you have?
You are trying to make an argument by analogy, and that is already
heading for a pitfall. Those power connections you talk about are for transferring energy, not for transferring information. Information
transfer is a much more complex business.
On Sat, 30 Aug 2025 19:39:49 +0000, Ivan Shmakov wrote:
Not to mention that taking too long to 'polish' your product, you
risk ending up lagging behind your competitors.
I would say, the open-source world is a counterexample to this. Look at
how long it took GNU and Linux to end up dominating the entire computing
landscape -- it didn't happen overnight.
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sat, 30 Aug 2025 19:39:49 +0000, Ivan Shmakov wrote:
Not to mention that taking too long to 'polish' your product, you
risk ending up lagging behind your competitors.
I would say, the open-source world is a counterexample to this. Look at
how long it took GNU and Linux to end up dominating the entire computing
landscape -- it didn't happen overnight.
Actually, open source nicely illustrates this. The first advice to
open source projects is "release early, release often". Projects
that delay release because they are "not ready" typically lose
and eventually die.
Open source projects typically want to offer high quality. But
they have to limit their efforts to meet release schedules.
There are compromises over which known bugs get fixed: some are deemed
serious enough to block a new release, but a lot get shipped.
There is internal testing, but a significant part of problems
gets discovered only after release.
One can significantly increase quality by limiting the addition of
new features. But open source projects that try to do this
typically lose.
Actually, open source nicely illustrates this. The first advice to
open source projects is "release early, release often".
Projects that delay release because they are "not ready"
typically lose and eventually die.
A principal advantage of the "open-source world" (or rather the non- commercial world) is that there's neither competition nor need to
quickly throw things into the market. So this area has at least the
chance to adapt plans and contents without time pressure.
"You don't get a second shot at a first impression."
On 04/10/2025 02:11, Waldek Hebisch wrote:
Andy Walker <anw@cuboid.co.uk> wrote:
[...]
On 20/08/2025 01:43, Lawrence D'Oliveiro wrote:
If you were to run an old OS on new hardware, that would need drivers
for that new hardware, too.
Yes, but what is so special about a modern disc drive, monitor,
keyboard, mouse, ... that it needs "the vast majority" of 40M lines more
than its equivalent for a PDP-11? Does this not again make Janis's point?
Lawrence gave a good list of things, but let me note a few additional
aspects. First, there are _a lot_ of different drivers. In PDP-11
times there was a short list of available devices. Now there are a lot
of different devices on the market and each one potentially needs a
specialised driver in the kernel. [...]
Yes, but one would expect that to drive standardisation rather
than bloat. There are rather a lot of devices that I can plug into the
mains in my home, but I don't have to install hundreds or thousands of
different types of socket.
I think that comparisons with early mainframes or the PDP-11 are
misleading in the sense that on early machines programmers struggled
to fit programs into available memory. A common technique was keeping
data on disc and having multiple sequential passes. The program
itself could be split into several overlays. Use of overlays
essentially vanished with the introduction of virtual memory coupled
with multimegabyte real RAM. More relevant are comparisons
with the VAX and early Linux.
I would take issue with some of the historical aspects, but it
would take us on a long detour. Just one comment: we've had virtual
memory since 1959 [Atlas].
AFAICS bloat happens mostly at user level. One reason is the more
friendly attitude of modern programs: instead of numeric error
codes, programs contain actual error messages.
The systems I've used have always used actual error messages!
[...]
One reason that modern systems are big and bloated is recursive
pulling of dependencies. Namely, there is a tendency to delegate
work to libraries and more generally to depend on "standard"
tools. But this in turn creates pressure on libraries and
tools to cover "all" use cases and in particular to include
rarely used functionality.
Yes, but that's the sort of pressure that needs to be
resisted; and isn't,
[...]
Hmm, on my machine '/usr/bin' contains 2547 commands. IIRC a "minimal"
install gives some hundreds of commands, so most commands are from
packages that I explicitly installed or their dependencies.
I have 2580 in my "/usr/bin". That is almost all from the
"medium (recommended)" installation; a handful of others have been
added when I've found something missing (I'd guess perhaps 10). Of
those I've actually used just 64! [Plus 26 in "$HOME/bin".] I
checked a random sample of those 2580; more than 2/3 I have no
idea from the name what they are for [yes, I know I can find out],
and I'm an experienced Unix user with much more CS knowledge than
the average punter. If I were to read an introductory book on
Linux, I doubt whether many more than those 64 would be mentioned,
so I wouldn't even be pointed at the "average" command.
... large-scale open-source projects do compete for
"mindshare" among open-source developers, who are a large but finite
group with a finite amount of time and energy to sink into them.
... large-scale open-source projects do compete for
"mindshare" among open-source developers, who are a large but finite
group with a finite amount of time and energy to sink into them.
The "mindshare" is among the passive users who take what's given and
complain about how it doesn't fit their needs.
What's more important is the "contribushare" -- the active community
that contributes to the project. That matters much more than sheer
numbers of users.
If that's the terminology you prefer, sure. The point stands.
On Wed, 8 Oct 2025 21:18:58 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
... large-scale open-source projects do compete for "mindshare" among
open-source developers, who are a large but finite group with a
finite amount of time and energy to sink into them.
The "mindshare" is among the passive users who take what's given and
complain about how it doesn't fit their needs.
What's more important is the "contribushare" -- the active community
that contributes to the project. That matters much more than sheer
numbers of users.
If that's the terminology you prefer, sure. The point stands.
On 08.10.2025 16:03, Waldek Hebisch wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sat, 30 Aug 2025 19:39:49 +0000, Ivan Shmakov wrote:
Not to mention that taking too long to 'polish' your product, you
risk ending up lagging behind your competitors.
I would say, the open-source world is a counterexample to this. Look at
how long it took GNU and Linux to end up dominating the entire computing
landscape -- it didn't happen overnight.
Actually, open source nicely illustrates this. The first advice to
open source projects is "release early, release often". Projects
that delay release because they are "not ready" typically lose
and eventually die.
Open source projects typically want to offer high quality. But
they have to limit their efforts to meet release schedules.
There are compromises over which known bugs get fixed: some are deemed
serious enough to block a new release, but a lot get shipped.
There is internal testing, but a significant part of problems
gets discovered only after release.
One can significantly increase quality by limiting the addition of
new features. But open source projects that try to do this
typically lose.
We can observe that software grows, and grows rank. My experience
is that it makes sense to plan and occasionally add refactoring
cycles in these cases. (There's also software planned accurately
from the beginning, software that changes less, and is only used
for its fixed designed purpose. But we're not speaking about that
here.) A principal advantage of the "open-source world" (or rather
the non-commercial world) is that there's neither competition nor
need to quickly throw things into the market. So this area has at
least the chance to adapt plans and contents without time pressure.
Whether it's done is another question (and project specific). It
should also be mentioned that some projects have e.g. security or
quality requirements that get tested and measured and require some
adaptive process to increase these factors (without adding anything
new).
antispam@fricas.org (Waldek Hebisch) wrote or quoted:
Actually, open source nicely illustrates this. The first advice to
open source projects is "release early, release often".
I had thought about using this for my projects, but I can see
the downsides too:
If some projects drop too early, they still barely have any
capabilities. The first curious potential users check it out and
walk away thinking, "a toy product and not the skills that actually
matter in practice". That vibe can stick around - "You don't get
a second shot at a first impression." - and end up keeping people
from giving the later, more capable versions a chance.
Projects that delay release because they are "not ready"
typically lose and eventually die.
Exaggerated.
The actual TeX program version is currently at 3.141592653
and was last updated in 2021. It is one of the most successful
programs ever and the market leader for scientific articles
and books that include math formulas.
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
[...]
We can observe that software grows, and grows rank. My experience
is that it makes sense to plan and occasionally add refactoring
cycles in these cases. (There's also software planned accurately
from the beginning, software that changes less, and is only used
for its fixed designed purpose. But we're not speaking about that
here.) A principal advantage of the "open-source world" (or rather
the non-commercial world) is that there's neither competition nor
need to quickly throw things into the market. So this area has at
least the chance to adapt plans and contents without time pressure.
What you wrote corresponds to a one-man hobby project. [...]
[...] But more important is software from multi-person projects. [...]
[ open source and GPL stuff ]
[ specific sceneries and assumptions ]
[ more open source specific sceneries and assumptions ]
[ open source example sceneries and assumptions about involved people ]
Whether it's done is another question (and project specific). It
should also be mentioned that some projects have e.g. security or
quality requirements that get tested and measured and require some
adaptive process to increase these factors (without adding anything
new).
Actually, security is another thing which puts pressure to
release quickly: if there is a security problem, developers want
to distribute a fixed version as soon as possible.
[...]
But IMO in most cases releasing early makes sense.
On 09.10.2025 03:39, Waldek Hebisch wrote:
[...]
But IMO in most cases releasing early makes sense.
LOL, yeah! - Let the users and customers search the bugs for you!
If your customers need/demand higher quality they should pay
appropriately to cover the needed cost. But expecting no bugs is
simply unrealistic. I read about the development of the software
controlling the Space Shuttle. The team doing that boasted that
they had a sophisticated development process ensuring high
quality. They had 400 people working on a 400 kloc program.
Given that development was spread over more than 10 years,
that looks like very low "productivity", that is, pretty high
development cost. Yet they were not able to say "no bugs".
IIRC they were not even able to say "no bugs discovered
during an actual mission"; all that they were able to say
was "no serious trouble due to bugs". Potential effects
of failure of the Space Shuttle software were pretty serious,
so it was fully justified to spend substantial effort on
quality.
I have a problem (and the tone of your message suggests that you
may have this problem too): I really would prefer to catch as many
bugs as possible during development and due to this I
probably release too late.
If that's the terminology you prefer, sure. The point stands.
You were talking about thinking, not doing. It's the doing that
counts.
I was talking about the doing; you just want to use a different word
On Thu, 9 Oct 2025 00:09:50 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
If that's the terminology you prefer, sure. The point stands.
You were talking about thinking, not doing. It's the doing that counts.
I was talking about the doing ...
If that's the terminology you prefer, sure. The point stands.
You were talking about thinking, not doing. It's the doing that
counts.
I was talking about the doing ...
You used the word "mindshare". Trying to redefine what "mind" means, now?
No, just using it in the context of developer minds.
On Tue, 7 Oct 2025 22:03:07 +0100, Andy Walker wrote:
On 04/10/2025 02:11, Waldek Hebisch wrote:
In PDP-11 times there was a short list of available devices. Now
there are a lot of different devices on the market and each one
potentially needs a specialised driver in the kernel. [...]
Yes, but one would expect that to drive standardisation rather than
bloat. There are rather a lot of devices that I can plug into the
mains in my home, but I don't have to install hundreds or thousands
of different types of socket.
Most of your electronic devices would not plug directly into the
mains, they would likely use some kind of DC adaptor/charger. How many
of those do you have?
You are trying to make an argument by analogy, and that is already
heading for a pitfall. Those power connections you talk about are for transferring energy, not for transferring information. Information
transfer is a much more complex business.
A feature is that all those mentioned work via a USB
connexion [supplied with the device], irrespective of whether the Man
on the Clapham Omnibus would describe them as "electronic". Is that
not standardisation in action?
USB sticks, portable drives, ... transfer information as well.
But again, if information transfer is so complex, one would expect that
to drive standardisation rather than everyone re-inventing the wheel.
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
On 09.10.2025 03:39, Waldek Hebisch wrote:
[...]
But IMO in most cases releasing early makes sense.
LOL, yeah! - Let the users and customers search the bugs for you!
Maybe you think that you can write perfect software.
[...] But within reasonable resource bounds and
using known techniques you will arrive at a point where finding new
bugs takes too many resources.
If your customers need/demand higher quality they should pay
appropriately to cover the needed cost. But expecting no bugs is
simply unrealistic.
I read about the development of the software
controlling the Space Shuttle. The team doing that boasted that
they had a sophisticated development process ensuring high
quality. They had 400 people working on a 400 kloc program.
Given that development was spread over more than 10 years,
that looks like very low "productivity", that is, pretty high
development cost. Yet they were not able to say "no bugs".
IIRC they were not even able to say "no bugs discovered
during an actual mission"; all that they were able to say
was "no serious trouble due to bugs". Potential effects
of failure of the Space Shuttle software were pretty serious,
so it was fully justified to spend substantial effort on
quality.
What I develop is quite non-critical; I am almost certain
that "no serious trouble due to bugs" will be true even if
my software is full of bugs. [...]
When I wrote about releasing early, I meant releasing when the
stream of new bugs goes down, that is, attempting to predict the
point of diminishing returns.
A more conservative approach
would continue testing for a longer time in the hope of finding
the "last bug". [...]
I have a problem (and the tone of your message suggests that you
may have this problem too): I really would prefer to catch as many
bugs as possible during development and due to this I
probably release too late. [...]
Note that part of my testing may
be using a program just to do some work. Now, if a program
is doing a valuable service for me, there is a reasonable chance
that it will do valuable work for some other people.
Pragmatically you can view this as a deal: other people
get value from work done by the program, and I in exchange get
defect reports that allow me to improve the program.
I see nothing wrong in such a deal, as long as it is
honest, in particular when the provider of the program
realistically states what the program can do.
BTW: Some users judge the quality of software by looking at the number
of bug reports. More bug reports is supposed to mean higher
quality.
If that looks wrong to you, the more detailed reasoning
goes as follows: the number of bug reports grows with the number of
users.
If there is a small number of bug reports, it indicates
that there is a small number of users and possibly that users
do not bother reporting bugs.
Now, users do not report bugs
when they consider software to be hopelessly bad. And users
in general prefer higher-quality software, so a small number
of users suggests low quality. So, either way, a low number
of bug reports really may mean low quality. This method
may fail if you manage to create perfect software free of
bugs with perfect documentation so that there will be no
spurious bug reports. But in real life programs tend to have
enough bugs that this method has at least some merit.
Recommended reading:
"They Write the Right Stuff" (1996-12) - Charles Fishman.
[...]
The gist is:
[...]
- About half the staff are testers, but as the programmers
do not want them to find errors, the programmers already
do their own testing before they give their code to the
actual testers. So more time is spent on testing than on
coding.
A feature is that all those mentioned work via a USB
connexion [supplied with the device], irrespective of whether the Man
on the Clapham Omnibus would describe them as "electronic". Is that
not standardisation in action?
Do you know how many different kinds of "USB" there are?
Besides the different versions of USB previously alluded to, let me also
mention BlueTooth, [...], 4G, 5G ...
So which is it? Are devices becoming standardised or are people
insisting on re-inventing the wheel? Is this what consumers want or is