| Sysop: | Amessyroom |
| --- | --- |
| Location: | Fayetteville, NC |
| Users: | 23 |
| Nodes: | 6 (0 / 6) |
| Uptime: | 54:28:54 |
| Calls: | 583 |
| Files: | 1,139 |
| D/L today: | 179 files (27,921K bytes) |
| Messages: | 111,799 |
I defined the 'PROC rat_gcd' in global space but with Algol 68 it
could also be defined locally (to not pollute the global namespace)
like
[... snip ...]
though performance measurements showed some noticeable degradation
with a local function definition as depicted.
I'd prefer it to be local but since it's ubiquitously used in that
library the performance degradation (about 15% on avg) annoys me.
Opinions on that?
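The trade-off described above has a direct analogue in other languages: a helper defined inside a routine is rebuilt on every invocation, while a top-level one is built once. A minimal Python sketch of the same pattern (the `gcd` here is a plain integer gcd stand-in, not the snipped 'rat_gcd', and the function names are invented for illustration):

```python
import timeit

# Top-level ("global") helper: the function object is created once.
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

def rat_normalise_global(num, den):
    g = gcd(num, den)
    return num // g, den // g

def rat_normalise_local(num, den):
    # Local helper: keeps the namespace clean, but the function
    # object is re-created on every call, adding per-call overhead.
    def gcd(a, b):
        while b:
            a, b = b, a % b
        return a
    g = gcd(num, den)
    return num // g, den // g

if __name__ == "__main__":
    for f in (rat_normalise_global, rat_normalise_local):
        t = timeit.timeit(lambda: f(1234567890, 9876543210), number=100_000)
        print(f"{f.__name__}: {t:.3f}s")
```

Both variants return the same reduced fraction; on CPython the local variant is measurably slower, loosely mirroring the a68g observation, though the mechanism in a68g may of course be entirely different.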
On 18/08/2025 03:52, Janis Papanagnou wrote:
I defined the 'PROC rat_gcd' in global space but with Algol 68 it
could also be defined locally (to not pollute the global namespace)
like
[... snip ...]
though performance measurements showed some noticeable degradation
with a local function definition as depicted.
I can't /quite/ reproduce your problem. If I run just the
interpreter ["a68g myprog.a68g"] then on my machine the timings are
identical. If I optimise ["a68g -O3 myprog.a68g"], then /first time/
through, I get a noticeable degradation [about 10% on my machine],
but the timings converge if I run them repeatedly. YMMV.
I suspect
it's to do with storage management, and later runs are able to re-use
heap storage that had to be grabbed first time.
But that could be
completely up the pole. Marcel would probably know.
If you see the same, then I suggest you don't run programs
for a first time. [:-)]
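Whether timings converge over repeated runs is easy to check mechanically. Here is a small, hedged harness that times a command several times and prints each run's wall-clock time; the a68g invocation shown in the comment is taken from the post above, and you would substitute your own program:

```python
import subprocess
import sys
import time

def time_command(cmd, repeats=5):
    """Run `cmd` `repeats` times; return each run's wall-clock seconds."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        times.append(time.perf_counter() - start)
    return times

if __name__ == "__main__":
    # Substitute the command under test, e.g. ["a68g", "-O3", "myprog.a68g"].
    for i, t in enumerate(time_command([sys.executable, "-c", "pass"]), 1):
        print(f"run {i}: {t:.3f}s")
```

If the first run is consistently the outlier, that is at least compatible with a warm-up effect (caches, allocator state), though it says nothing definitive about a68g's internals.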
I'd prefer it to be local but since it's ubiquitously used in that
library the performance degradation (about 15% on avg) annoys me.
Opinions on that?
Personally, I'd always go for the version that looks nicer
[ie, in keeping with your own inclinations, with the spirit of A68,
with the One True (A68) indentation policy, and so on].
If you're
worried about 15%, that will be more than compensated for by your
next computer!
If you're Really Worried about 15%, then I fear it's
back to C [or whatever]; but that will cost you more than 15% in
development time.
Actually, with more tests, the variance got even greater; from 10%
to 45% degradation. The variances, though, did not converge [in my
environment].
I also suspected some storage management effect; maybe that the GC
got active at various stages. (But the code did not use anything
that would require GC; to be honest, I'm puzzled.)
If you're worried about 15%, that will be more than compensated for
by your next computer!
Actually I'm very conservative concerning computers; mine is 15+
years old, and although I "recently" thought about getting an update
here it's not my priority. ;-)
[...]
If you're worried about 15%, that will be more than compensated for
by your next computer!
Actually I'm very conservative concerning computers; mine is 15+
years old, and although I "recently" thought about getting an update
here it's not my priority. ;-)
Ah. I thought I was bad, keeping computers 10 years or so!
I got a new one a couple of years back, and the difference in speed
and storage was just ridiculous.
[...] (That's actually one point that annoys me in "modern"
software development; hardly anyone seems to care about economizing
resource requirements.) [...]
And, by the way, thanks for your suggestions and helpful information
on my questions in all my recent Algol posts! It's also very pleasant
being able to substantially exchange ideas on this (IMO) interesting
legacy topic.
From time to time I wonder what would happen if we ran
7th Edition Unix on a modern computer.
The Linux kernel source is currently over 40 million lines, and I
understand the vast majority of that is device drivers.
If you were to run an old OS on new hardware, that would need drivers for that new hardware, too.
On 20/08/2025 01:43, Lawrence D'Oliveiro wrote:
If you were to run an old OS on new hardware, that would need
drivers for that new hardware, too.
Yes, but what is so special about a modern disc drive, monitor,
keyboard, mouse, ... that it needs "the vast majority" of 40M lines
more than its equivalent for a PDP-11?
[...] (That's actually one point that annoys me in "modern"
software development; hardly anyone seems to care about economizing
resource requirements.) [...]
On 20/08/2025 01:43, Lawrence D'Oliveiro wrote:
The Linux kernel source is currently over 40 million lines, and I
understand the vast majority of that is device drivers.
You seem to be making Janis's point, but that doesn't seem to
be your intention?
If you were to run an old OS on new hardware, that would need drivers for
that new hardware, too.
Yes, but what is so special about a modern disc drive, monitor,
keyboard, mouse, ... that it needs "the vast majority" of 40M lines more
than its equivalent for a PDP-11? [...]
On Wed, 20 Aug 2025 23:58:58 +0100, Andy Walker wrote:
On 20/08/2025 01:43, Lawrence D'Oliveiro wrote:
If you were to run an old OS on new hardware, that would need
drivers for that new hardware, too.
Yes, but what is so special about a modern disc drive, monitor,
keyboard, mouse, ... that it needs "the vast majority" of 40M lines
more than its equivalent for a PDP-11?
Keyboard and mouse -- USB. [...]
On 21/08/2025 03:59, Lawrence DrCOOliveiro wrote:
[...]
You've given us a list of 20-odd features of modern systems
that have been developed since 7th Edition Unix, and could no doubt
think of another 20. What you didn't attempt was to explain why all
these nice things need to occupy 40M lines of code. That's, give or
take, 600k pages of code, call it 2000 books. That's, on your figures,
just the kernel source; [...]
What you didn't attempt was to explain why all these nice things
need to occupy 40M lines of code.
On 2025-08-19, Janis Papanagnou wrote:
On 19.08.2025 01:45, Andy Walker wrote:
If you're worried about 15%, that will be more than compensated
for by your next computer!
Actually I'm very conservative concerning computers; mine is 15+
years old, and although I "recently" thought about getting an
update here it's not my priority. ;-)
Ah. I thought I was bad, keeping computers 10 years or so! I got
a new one a couple of years back, and the difference in speed and
storage was just ridiculous.
Well, the software tools I use (and their updates) required me to
at least upgrade memory! (That's actually one point that annoys me
in "modern" software development; hardly anyone seems to care about
economizing resource requirements.)
First, the 4e7 lines of Linux code is somewhat unfair a measure. On
my system, less than 5% of individual modules built from the Linux
source are loaded right now ...
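The "less than 5%" figure is straightforward to estimate on a Linux box: compare the modules currently loaded (listed in /proc/modules) with the module files installed on disk. A hedged sketch, assuming the standard one-module-per-line format of /proc/modules and the usual /lib/modules layout:

```python
import pathlib

def count_loaded(proc_modules_text):
    """Count entries in /proc/modules-style text (one module per line)."""
    return sum(1 for line in proc_modules_text.splitlines() if line.strip())

def count_installed(module_dir="/lib/modules"):
    """Count module files (*.ko, possibly compressed) under the module tree."""
    root = pathlib.Path(module_dir)
    return sum(1 for _ in root.rglob("*.ko*")) if root.exists() else 0

if __name__ == "__main__":
    proc = pathlib.Path("/proc/modules")
    if proc.exists():
        loaded = count_loaded(proc.read_text())
        installed = count_installed()
        if installed:
            print(f"{loaded} of {installed} modules loaded "
                  f"({100 * loaded / installed:.1f}%)")
    else:
        print("no /proc/modules here (not Linux?)")
```

On a typical desktop distribution the ratio comes out in the low single-digit percent range, which is consistent with the claim above.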
For those who are looking for a system with more "comprehensible"
sources, I would recommend NetBSD. And if anything, I personally
find its list of supported platforms, http://netbsd.org/ports/ ,
fairly impressive.
On 2025-08-19, Janis Papanagnou wrote:
Well, the software tools I use (and their updates) required me to
at least upgrade memory! (That's actually one point that annoys me
in "modern" software development; hardly anyone seems to care about
economizing resource requirements.)
I doubt it's so much lack of care as it is simply being not a
priority. [...]
[...]
[1] http://spectrum.ieee.org/lean-software-development
[...]
As to websites and JS libraries, for the past 25 years I've been
using as my primary one a browser, Lynx, that never had support
for JS, and likely never will have. IME, an /awful lot/ of
websites are usable and useful entirely without JS. [...]
On 2025-08-27, Lawrence D'Oliveiro wrote:
On Tue, 26 Aug 2025 18:42:05 +0000, Ivan Shmakov wrote:
For those who are looking for a system with more "comprehensible"
sources, I would recommend NetBSD. And if anything, I personally
find its list of supported platforms, http://netbsd.org/ports/ ,
fairly impressive.
Bit misleading, though. Note it counts "Xen" (a Linux-based
hypervisor) as a separate platform.
Also, look at all the different 68k, MIPS, ARM and PowerPC-based
machines that are individually listed.
Linux counts platform support based solely on CPU architecture (not surprising, since it's just a kernel, not the userland as well).
Architectures: all amd64 arm64 armel armhf i386 ppc64el riscv64 s390x
It covers all those CPUs listed (except maybe VAX), and a bunch of
others as well.
Each directory here <https://github.com/torvalds/linux/tree/master/arch> represents a separate supported architecture. Note extras like arm64,
On 2025-08-27, Janis Papanagnou wrote:
On 26.08.2025 20:42, Ivan Shmakov wrote:
On 2025-08-19, Janis Papanagnou wrote:
Well, the software tools I use (and their updates) required me to
at least upgrade memory! (That's actually one point that annoys me
in "modern" software development; hardly anyone seems to care about
economizing resource requirements.)
I doubt it's so much lack of care as it is simply being not a
priority.
But those depend on each other.
And, to be yet more clear: I also think it's [widely] just ignorance!
(The mere existence of the article you quoted below is per se already
a strong sign of that. But other experiences, such as talks with many
IT folks of various ages and backgrounds, have also reinforced my
opinion on that.)
(Privately, I later wrote HTML/JS to create applications (with
dynamic content), since otherwise that would not have been possible;
I had no server of my own with application servers available. But I
didn't use any frameworks or external libraries. Bad enough already.)
But even with browsers and JS activated, with my old Firefox I cannot
use or read many websites nowadays, because they demand newer browser
versions.
On Wed, 27 Aug 2025 00:28:20 -0000 (UTC), Lawrence D'Oliveiro
wrote:
Bit misleading, though. Note it counts "Xen" (a Linux-based
hypervisor) as a separate platform.
What do you mean by "Linux-based"?
NetBSD supports running as both Xen domU (unprivileged) /and/ dom0
(privileged).
Linux counts platform support based solely on CPU architecture (not
surprising, since it's just a kernel, not the userland as well).
There's a "Ports by CPU architecture" section down the NetBSD
ports page; it lists 16 individual CPU architectures.
My point was that GNU/Linux distributions typically support
less ...
The way I see it, it's the /kernel/ that it takes the most
effort to port to a new platform - as it's where the support
for peripherals lives, including platform-specific ones.
But I still think that if you're interested in understanding how
your OS works - at the source code level - you'd be better with
NetBSD than with a Linux-based OS.
Not to mention that taking too long to 'polish' your product, you
risk ending up lagging behind your competitors.
[...]
On 2025-08-27, Janis Papanagnou wrote:
But even with browsers and JS activated, with my old Firefox I cannot
use or read many websites nowadays, because they demand newer browser
versions.
"Demand" how?
[...]
Not to mention that taking too long to 'polish' your product,
you risk ending up lagging behind your competitors.
On 2025-08-30, Lawrence D'Oliveiro wrote:
On Sat, 30 Aug 2025 19:39:49 +0000, Ivan Shmakov wrote:
Not to mention that taking too long to 'polish' your product, you
risk ending up lagging behind your competitors.
I would say, the open-source world is a counterexample to this.
Look at how long it took GNU and Linux to end up dominating the
entire computing landscape -- it didn't happen overnight.
I'm not sure how much of a consolation it is to the people who owned
the companies that failed, though.
Also, what indication is there that GNU is 'dominating' the
landscape? Sure, Linux is everywhere (such as in now ubiquitous
Android phones and TVs and whatnot), but I don't quite see GNU
being adopted as widely.