On 20/08/2025 01:43, Lawrence D'Oliveiro wrote:
[I wrote:]
From time to time I wonder what would happen if we ran
7th Edition Unix on a modern computer.
The Linux kernel source is currently over 40 million lines, and I
understand the vast majority of that is device drivers.
You seem to be making Janis's point, but that doesn't seem to
be your intention?
If you were to run an old OS on new hardware, that would need drivers for
that new hardware, too.
Yes, but what is so special about a modern disc drive, monitor, keyboard, mouse, ... that it needs "the vast majority" of 40M lines more
than its equivalent for a PDP-11? Does this not again make Janis's point?
Granted that the advent of 32- and 64-bit integers and addresses
makes some programming much easier, and that we can no longer expect
browsers and other major tools to fit into 64+64K bytes, is the actual
bloat in any way justified?
It's not just kernels and user software --
it's also the documentation. In V7, "man cc" generates just under two
pages of output; on my current computer, it generates over 27000 lines,
call it 450 pages, and is thereby effectively unprintable and unreadable,
so it is largely wasted.
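[A minimal sketch of how one might reproduce that kind of measurement,
assuming a system where "man cc" resolves to the compiler's manual page
and taking roughly 60 output lines per printed page -- an assumption,
not a standard:]

    # Count the lines of a man page and estimate printed pages.
    import subprocess

    def man_page_size(topic: str, lines_per_page: int = 60) -> tuple[int, int]:
        """Return (lines, approximate printed pages) for a man page."""
        result = subprocess.run(["man", topic], capture_output=True, text=True)
        lines = len(result.stdout.splitlines())
        return lines, lines // lines_per_page

    lines, pages = man_page_size("cc")
    print(f"man cc: {lines} lines, roughly {pages} printed pages")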
For V7, the entire documentation fits comfortably into two box
files, and the entire source code is a modest pile of lineprinter output.
Most of the commands on my current computer are undocumented and unused,
and I have no idea at all what they do.
Yes, I know how that "just happens", and I'm observing rather
than complaining [I'd rather write programs, browse and send/read e-mails
on my current computer than on the PDP-11]. But it does all give food for thought.
On my desktop the kernel boot messages say "14342K kernel code". Nominally
assuming 10 bytes per source line, that means about 1.4 million lines of
running code, so a relatively small part of the total kernel source.
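[For the record, the arithmetic behind that estimate; the 10 bytes of
machine code per source line is the stated assumption:]

    # Back-of-envelope check of the estimate above.
    kernel_code_kib = 14342      # from the boot message "14342K kernel code"
    bytes_per_line = 10          # rough assumption; varies a lot in practice

    lines = kernel_code_kib * 1024 // bytes_per_line
    print(f"~{lines / 1e6:.1f} million lines of running code")
    # -> ~1.5 million with K = 1024; rounding K to 1000 gives the post's
    #    1.4 million. Either way, a small fraction of the ~40 million lines
    #    in the source tree.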
Andy Walker <anw@cuboid.co.uk> wrote:[...]
On 20/08/2025 01:43, Lawrence D'Oliveiro wrote:
If you were to run an old OS on new hardware, that would need drivers for
that new hardware, too.
Yes, but what is so special about a modern disc drive, monitor,
keyboard, mouse, ... that it needs "the vast majority" of 40M lines more
than its equivalent for a PDP-11? Does this not again make Janis's point?
Lawrence gave a good list of things, but let me note a few additional
aspects. First, there are _a lot_ of different drivers. In PDP-11
times there was a short list of available devices. Now there are a lot
of different devices on the market and each one potentially needs a
specialised driver in the kernel. [...]
I think that comparisons with early mainframes or the PDP-11 are
misleading in the sense that on early machines programmers struggled
to fit programs into available memory. A common technique was keeping
data on disc and having multiple sequential passes. The program
itself could be split into several overlays. Use of overlays
essentially vanished with the introduction of virtual memory coupled
with multi-megabyte real RAM. More relevant are comparisons
with the VAX and early Linux.
AFAICS bloat happens mostly at user level. One reason is the more
friendly attitude of modern programs: instead of numeric error
codes, programs contain actual error messages.
One reason that modern systems are big and bloated is recursive
pulling in of dependencies. Namely, there is a tendency to delegate
work to libraries and more generally to depend on "standard"
tools. But this in turn creates pressure on libraries and
tools to cover "all" use cases and in particular to include
rarely used functionality.
Hmm, on my machine '/usr/bin' contains 2547 commands. IIRC a "minimal"
install gives some hundreds of commands, so most commands are from
packages that I explicitly installed or their dependencies.
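[A small sketch of how such a count might be taken; /usr/bin as the
directory of interest is the obvious assumption:]

    # Count the executables in /usr/bin.
    import os
    from pathlib import Path

    bin_dir = Path("/usr/bin")
    commands = [p.name for p in bin_dir.iterdir()
                if p.is_file() and os.access(p, os.X_OK)]
    print(f"{len(commands)} commands in {bin_dir}")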
On 04/10/2025 02:11, Waldek Hebisch wrote:
In PDP-11 times there was a short list of available devices. Now
there are a lot of different devices on the market and each one
potentially needs a specialised driver in the kernel. [...]
Yes, but one would expect that to drive standardisation rather than
bloat. There are rather a lot of devices that I can plug into the
mains in my home, but I don't have to install hundreds or thousands
of different types of socket.
On Tue, 7 Oct 2025 22:03:07 +0100, Andy Walker wrote:
On 04/10/2025 02:11, Waldek Hebisch wrote:
In PDP-11 times there was a short list of available devices. Now
there are a lot of different devices on the market and each one
potentially needs a specialised driver in the kernel. [...]
Yes, but one would expect that to drive standardisation rather than
bloat. There are rather a lot of devices that I can plug into the
mains in my home, but I don't have to install hundreds or thousands
of different types of socket.
Most of your electronic devices would not plug directly into the
mains, they would likely use some kind of DC adaptor/charger. How many
of those do you have?
You are trying to make an argument by analogy, and that is already
heading for a pitfall. Those power connections you talk about are for transferring energy, not for transferring information. Information
transfer is a much more complex business.
On Sat, 30 Aug 2025 19:39:49 +0000, Ivan Shmakov wrote:
Not to mention that taking too long to 'polish' your product, you
risk ending up lagging behind your competitors.
I would say, the open-source world is a counterexample to this. Look at
how long it took GNU and Linux to end up dominating the entire computing
landscape -- it didn't happen overnight.
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sat, 30 Aug 2025 19:39:49 +0000, Ivan Shmakov wrote:
Not to mention that taking too long to 'polish' your product, you
risk ending up lagging behind your competitors.
I would say, the open-source world is a counterexample to this. Look at
how long it took GNU and Linux to end up dominating the entire computing
landscape -- it didn't happen overnight.
Actually, open source nicely illustrates this. The first advice to
open source projects is "release early, release often". Projects
that delay release because they are "not ready" typically lose
and eventually die.
Open source projects typically want to offer high quality. But
they have to limit their efforts to meet release schedules.
There are compromises over which known bugs get fixed: some are deemed
serious enough to block a new release, but a lot get shipped.
There is internal testing, but a significant part of the problems
are discovered only after release.
One can significantly increase quality by limiting the addition of
new features. But open source projects that try to do this
typically lose.
Actually, open source nicely illustrates this. The first advice to
open source projects is "release early, release often".
Projects that delay release because they are "not ready"
typically lose and eventually die.
A principal advantage of the "open-source world" (or rather the
non-commercial world) is that there's neither competition nor need to
quickly throw things into the market. So this area has at least the
chance to adapt plans and contents without time pressure.
"You don't get a second shot at a first impression."
On 04/10/2025 02:11, Waldek Hebisch wrote:
Andy Walker <anw@cuboid.co.uk> wrote:[...]
On 20/08/2025 01:43, Lawrence D'Oliveiro wrote:
If you were to run an old OS on new hardware, that would need drivers for
that new hardware, too.
Yes, but what is so special about a modern disc drive, monitor,
keyboard, mouse, ... that it needs "the vast majority" of 40M lines more
than its equivalent for a PDP-11? Does this not again make Janis's point?
Lawrence gave a good list of things, but let me note a few additional
aspects. First, there are _a lot_ of different drivers. In PDP-11
times there was a short list of available devices. Now there are a lot
of different devices on the market and each one potentially needs a
specialised driver in the kernel. [...]
Yes, but one would expect that to drive standardisation rather
than bloat. There are rather a lot of devices that I can plug into the
mains in my home, but I don't have to install hundreds or thousands of
different types of socket.
I think that comparisons with early mainframes or the PDP-11 are
misleading in the sense that on early machines programmers struggled
to fit programs into available memory. A common technique was keeping
data on disc and having multiple sequential passes. The program
itself could be split into several overlays. Use of overlays
essentially vanished with the introduction of virtual memory coupled
with multi-megabyte real RAM. More relevant are comparisons
with the VAX and early Linux.
I would take issue with some of the historical aspects, but it
would take us on a long detour. Just one comment: we've had virtual
memory since 1959 [Atlas].
AFAICS bloat happens mostly at user level. One reason is the more
friendly attitude of modern programs: instead of numeric error
codes, programs contain actual error messages.
The systems I've used have always used actual error messages!
[...]
One reason that modern systems are big and bloated is recursive
pulling in of dependencies. Namely, there is a tendency to delegate
work to libraries and more generally to depend on "standard"
tools. But this in turn creates pressure on libraries and
tools to cover "all" use cases and in particular to include
rarely used functionality.
Yes, but that's the sort of pressure that needs to be
resisted; and isn't,
[...]
Hmm, on my machine '/usr/bin' contains 2547 commands. IIRC a "minimal"
install gives some hundreds of commands, so most commands are from
packages that I explicitly installed or their dependencies.
I have 2580 in my "/usr/bin". That is almost all from the
"medium (recommended)" installation; a handful of others have been
added when I've found something missing (I'd guess perhaps 10). Of
those I've actually used just 64! [Plus 26 in "$HOME/bin".] I
checked a random sample of those 2580; more than 2/3 I have no
idea from the name what they are for [yes, I know I can find out],
and I'm an experienced Unix user with much more CS knowledge than
the average punter. If I were to read an introductory book on
Linux, I doubt whether many more than those 64 would be mentioned,
so I wouldn't even be pointed at the "average" command.
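[A rough sketch of that kind of audit: compare what is installed in
/usr/bin with the commands that actually appear in a shell history file.
Both the history path and its format are assumptions; adjust for your
shell.]

    # Compare installed commands against those seen in ~/.bash_history.
    import os
    from pathlib import Path

    installed = {p.name for p in Path("/usr/bin").iterdir()
                 if p.is_file() and os.access(p, os.X_OK)}

    used = set()
    history = Path.home() / ".bash_history"        # assumed location
    if history.exists():
        for line in history.read_text(errors="replace").splitlines():
            words = line.split()
            if words:
                used.add(os.path.basename(words[0]))   # first word of each command

    print(f"{len(installed)} commands installed, "
          f"{len(installed & used)} of them seen in {history}")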
... large-scale open-source projects do compete for
"mindshare" among open-source developers, who are a large but finite
group with a finite amount of time and energy to sink into them.
The "mindshare" is among the passive users who take what's given and
complain about how it doesn't fit their needs.
What's more important is the "contribushare" -- the active community
that contributes to the project. That matters much more than sheer
numbers of users.
On Wed, 8 Oct 2025 21:18:58 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
... large-scale open-source projects do compete for "mindshare" among
open-source developers, who are a large but finite group with a
finite amount of time and energy to sink into them.
The "mindshare" is among the passive users who take what's given and
complain about how it doesn't fit their needs.
What's more important is the "contribushare" -- the active community
that contributes to the project. That matters much more than sheer
numbers of users.
If that's the terminology you prefer, sure. The point stands.
On 08.10.2025 16:03, Waldek Hebisch wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sat, 30 Aug 2025 19:39:49 +0000, Ivan Shmakov wrote:
Not to mention that taking too long to 'polish' your product, you
risk ending up lagging behind your competitors.
I would say, the open-source world is a counterexample to this. Look at
how long it took GNU and Linux to end up dominating the entire computing
landscape -- it didn't happen overnight.
Actually, open source nicely illustrates this. The first advice to
open source projects is "release early, release often". Projects
that delay release because they are "not ready" typically lose
and eventually die.
Open source projects typically want to offer high quality. But
they have to limit their efforts to meet release schedules.
There are compromises over which known bugs get fixed: some are deemed
serious enough to block a new release, but a lot get shipped.
There is internal testing, but a significant part of the problems
are discovered only after release.
One can significantly increase quality by limiting the addition of
new features. But open source projects that try to do this
typically lose.
We can observe that software grows, and grows rank. My experience
is that it makes sense to plan and occasionally add refactoring
cycles in these cases. (There's also software planned accurately
from the beginning, software that changes less, and is only used
for its fixed designed purpose. But we're not speaking about that
here.) A principal advantage of the "open-source world" (or rather
the non-commercial world) is that there's neither competition nor
need to quickly throw things into the market. So this area has at
least the chance to adapt plans and contents without time pressure.
Whether it's done is another question (and project specific). It
should also be mentioned that some projects have e.g. security or
quality requirements that get tested and measured and require some
adaptive process to increase these factors (without adding anything
new).
antispam@fricas.org (Waldek Hebisch) wrote or quoted:
Actually, open source nicely illustrates this. The first advice to
open source projects is "release early, release often".
I had thought about using this for my projects, but I can see
the downsides too:
If some projects drop too early, they still barely have any
capabilities. The first curious potential users check it out and
walk away thinking, "a toy product and not the skills that actually
matter in practice". That vibe can stick around - "You don't get
a second shot at a first impression." - and end up keeping people
from giving the later, more capable versions a chance.
Projects that delay release because they are "not ready"
typically lose and eventually die.
Exaggerated.
The actual TeX program version is currently at 3.141592653
and was last updated in 2021. It is one of the most successful
programs ever and the market leader for scientific articles
and books that include math formulas.
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:[...]
We can observe that software grows, and grows rank. My experience
is that it makes sense to plan and occasionally add refactoring
cycles in these cases. (There's also software planned accurately
from the beginning, software that changes less, and is only used
for its fixed designed purpose. But we're not speaking about that
here.) A principal advantage of the "open-source world" (or rather
the non-commercial world) is that there's neither competition nor
need to quickly throw things into the market. So this area has at
least the chance to adapt plans and contents without time pressure.
What you wrote corresponds to a one-man hobby project. [...]
[...] But more important is software from multi-person projects. [...]
[ open source and GPL stuff ]
[ specific sceneries and assumptions ]
[ more open source specific sceneries and assumptions ]
[ open source example sceneries and assumptions about involved people ]
Whether it's done is another question (and project specific). It
should also be mentioned that some projects have e.g. security or
quality requirements that get tested and measured and require some
adaptive process to increase these factors (without adding anything
new).
Actually, security is another thing which puts pressure on
releasing quickly: if there is a security problem, developers want
to distribute a fixed version as soon as possible.
[...]
But IMO in most cases releasing early makes sense.
On 09.10.2025 03:39, Waldek Hebisch wrote:
[...]
But IMO in most cases releasing early makes sense.
LOL, yeah! - Let the users and customers search the bugs for you!
If your customers need/demand higher quality they should pay
appropriately to cover the needed cost. But expecting no bugs is
simply unrealistic. I read about the development of the software
controlling the Space Shuttle. The team doing that boasted that
they had a sophisticated development process ensuring high
quality. They had 400 people working on a 400 kloc program.
Given that development was spread over more than 10 years,
that looks like very low "productivity", that is, pretty high
development cost. Yet they were not able to say "no bugs".
IIRC they were not even able to say "no bugs discovered
during an actual mission"; all that they were able to say
was "no serious trouble due to bugs". The potential effects
of a failure of the Space Shuttle software were pretty serious,
so it was fully justified to spend substantial effort on
quality.
I have a problem (and the tone of your message suggests that you
may have this problem too): I really would prefer to catch as many
bugs as possible during development, and due to this I
probably release too late.
If that's the terminology you prefer, sure. The point stands.
You were talking about thinking, not doing. It's the doing that
counts.
I was talking about the doing; you just want to use a different word.
On Thu, 9 Oct 2025 00:09:50 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
If that's the terminology you prefer, sure. The point stands.
You were talking about thinking, not doing. It's the doing that counts.
I was talking about the doing ...
If that's the terminology you prefer, sure. The point stands.
You were talking about thinking, not doing. It's the doing that
counts.
I was talking about the doing ...
You used the word "mindshare". Trying to redefine what "mind" means, now?
No, just using it in the context of developer minds.
On Tue, 7 Oct 2025 22:03:07 +0100, Andy Walker wrote:
On 04/10/2025 02:11, Waldek Hebisch wrote:
In PDP-11 times there was a short list of available devices. Now
there are a lot of different devices on the market and each one
potentially needs a specialised driver in the kernel. [...]
Yes, but one would expect that to drive standardisation rather than
bloat. There are rather a lot of devices that I can plug into the
mains in my home, but I don't have to install hundreds or thousands
of different types of socket.
Most of your electronic devices would not plug directly into the
mains, they would likely use some kind of DC adaptor/charger. How many
of those do you have?
You are trying to make an argument by analogy, and that is already
heading for a pitfall. Those power connections you talk about are for
transferring energy, not for transferring information. Information
transfer is a much more complex business.
A feature is that all those mentioned work via a USB
connexion [supplied with the device], irrespective of whether the Man
on the Clapham Omnibus would describe them as "electronic". Is that
not standardisation in action?
USB sticks, portable drives, ... transfer information as well.
But again, if information transfer is so complex, one would expect that
to drive standardisation rather than everyone re-inventing the wheel.
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
On 09.10.2025 03:39, Waldek Hebisch wrote:
[...]
But IMO in most cases releasing early makes sense.
LOL, yeah! - Let the users and customers search the bugs for you!
Maybe you think that you can write perfect software.
[...] But within reasonable resource bounds and
using known techniques you will arrive at a point where finding new
bugs takes too many resources.
If your customers need/demand higher quality they should pay
appropriately to cover the needed cost. But expecting no bugs is
simply unrealistic.
I read about the development of the software
controlling the Space Shuttle. The team doing that boasted that
they had a sophisticated development process ensuring high
quality. They had 400 people working on a 400 kloc program.
Given that development was spread over more than 10 years,
that looks like very low "productivity", that is, pretty high
development cost. Yet they were not able to say "no bugs".
IIRC they were not even able to say "no bugs discovered
during an actual mission"; all that they were able to say
was "no serious trouble due to bugs". The potential effects
of a failure of the Space Shuttle software were pretty serious,
so it was fully justified to spend substantial effort on
quality.
What I develop is quite non-critical; I am almost certain
that "no serious trouble due to bugs" will be true even if
my software is full of bugs. [...]
When I wrote about releasing early, I meant releasing when
the stream of new bugs goes down, that is, attempting to predict
the point of diminishing returns.
A more conservative approach
would continue testing for a longer time in the hope of finding
the "last bug". [...]
I have a problem (and the tone of your message suggests that you
may have this problem too): I really would prefer to catch as many
bugs as possible during development, and due to this I
probably release too late. [...]
Note that part of my testing may
be using a program just to do some work. Now, if a program
is doing a valuable service to me, there is a reasonable chance
that it will do valuable work for some other people.
Pragmatically you can view this as a deal: other people
get value from work done by the program, and I in exchange get
defect reports that allow me to improve the program.
I see nothing wrong in such a deal, as long as it is
honest, in particular when the provider of the program
realistically states what the program can do.
BTW: Some users judge the quality of software by looking at the number
of bug reports. More bug reports is supposed to mean higher
quality.
If that looks wrong to you, the more detailed reasoning
goes as follows: the number of bug reports grows with the number of
users.
If there is a small number of bug reports, it indicates
that there is a small number of users and possibly that those users
do not bother reporting bugs.
Now, users do not report bugs
when they consider software to be hopelessly bad. And users
in general prefer higher quality software, so a small number
of users suggests low quality. So, either way, a low number
of bug reports really may mean low quality. This method
may fail if you manage to create perfect software free of
bugs with perfect documentation, so that there will be no
spurious bug reports. But in real life programs tend to have
enough bugs that this method has at least some merit.
Recommended reading:
"They Write the Right Stuff" (1996-12) - Charles Fishman.
[...]
The gist is:
[...]
- About half the staff are testers, but as the programmers
do not want them to find errors, the programmers already
do their own testing before they give their code to the
actual testers. So more time is spent on testing than on
coding.
A feature is that all those mentioned work via a USB
connexion [supplied with the device], irrespective of whether the Man
on the Clapham Omnibus would describe them as "electronic". Is that
not standardisation in action?
Do you know how many different kinds of "USB" there are?
Besides the different versions of USB previously alluded to, let me also
mention BlueTooth, [...], 4G, 5G ...
So which is it? Are devices becoming standardised or are people
insisting on re-inventing the wheel? Is this what consumers want or is