Hmm, that's an interesting question, actually - the Bell Labs -11 was an 11/45, which was much faster than the original -11s, while the IBM PC was really a bit of a dog thanks to having a 16-bit architecture on an 8-bit bus and the generally poor performance characteristics of the first-generation x86 CPUs. It'd be neat to do a head-to-head shootout. I don't know if it's recorded whether the Bell Labs -11 was core or semiconductor memory (980 vs. 450 ns cycle time); the PC at 4.77 MHz would have a cycle time of around 209 ns, but with the aforementioned 8-bit bus. As a naive approximation, that might put them anywhere from comparable to around twice the memory bandwidth for the PC... but then the 8088's instruction times are kinda abysmal even on top of that. Definitely makes one curious...
The claim has another problem. While an x86 might be considered more powerful in some ways, it does not have nearly as capable an MMU as the PDP-11, and that really trips the whole thing over when comparing. (I'm not sure I would even say the x86 has anything resembling a proper MMU... Not before the 80386 anyway, which was not in a PC or PC/XT.)
The 286 had a proper MMU, but not (AFAIUI) a terribly performant one. I've never heard of an add-on MMU for the 8086, but I admit I've never looked. In any case, I realize now that my napkin math was way off; the 8086/88 can only perform a memory access every fourth cycle, so the PC's approximate "cycle time" by comparison would be more like ~836 ns, barely faster than core even *before* you factor in the 8-bit bottleneck or DRAM refresh. Ye gods, what a *dog.*
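The napkin math above is easy to rerun; a minimal C sketch, using only the figures quoted in the thread (4.77 MHz clock, one 8088 bus access per four clocks, 980 ns core vs. 450 ns semiconductor memory on the 11/45):

    #include <stdio.h>

    /* Napkin math from the thread, in executable form: at best one
     * 8-bit bus access per four clocks at 4.77 MHz, against 980 ns
     * (core) or 450 ns (semiconductor) memory on the 11/45. */
    int main(void)
    {
        double clk_ns = 1e9 / 4.77e6;   /* ~209.6 ns per clock       */
        double bus_ns = 4.0 * clk_ns;   /* ~838 ns per memory access */
        printf("8088 clock period: %.1f ns\n", clk_ns);
        printf("8088 bus access:   %.1f ns (8 bits wide)\n", bus_ns);
        printf("11/45 core: 980 ns, semiconductor: 450 ns (16 bits)\n");
        return 0;
    }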
Sure, but the idea was you had it *all to yourself*
/Mwhah-hah-hah-hah!/ Sorry, I don't know what came over me there.
That also depends on one's definition of "proper MMU". The 286 had a segmented MMU, but lacked a paged MMU. Paging was not added until the 386. And there are some that define "proper MMU" as "paged MMU".
I don't know the MMU details for the 286, but my understanding (formed at the time) is that it was "proper" in that it could actually protect running programs from each other. PC-IX and I presume Xenix worked on the 8088/8086 by having the C compiler emit code which stayed in a segment -- so programs wouldn't interfere with each other *if* nothing went wrong. If something went wrong (which presumably you could easily provoke in assembler code), one program could trash another's RAM.
What the 286 couldn't do was virtual memory, which the 386 could.
On 2025-08-26 14:13, The Natural Philosopher wrote:
On 26/08/2025 13:04, Johnny Billquist wrote:
The claim has another problem. While an x86 might be considered more powerful in some ways, it does not have nearly as capable an MMU as the PDP-11, and that really trips the whole thing over when comparing.
You could equip an *86 with a decent MMU and people did.
The 8086? What decent MMU existed for that?
a 386 running Unix was WAY faster than a PDP/11.
It was also about 15 years later than the first PDP-11, and a few years later than the last new implementation of any PDP-11 at all by DEC.
(I'm not sure I would even say the x86 has anything resembling a proper MMU... Not before the 80386 anyway, which was not in a PC or PC/XT.)
Well yes, the 386 was what the 8086 should have been all along
Yes, eventually it got a bit more sorted out.
  Johnny
That also depends on one's definition of "proper MMU". The 286 had a segmented MMU, but lacked a paged MMU.
I don't know the MMU details for the 286, but my understanding (formed at the time) is that it was "proper" in that it could actually protect running programs from each other.
It could, but if your programs used more than one segment for code or data, the switching was extremely slow and painful. Since the segments were of variable size, that meant operating systems had to do free space compaction that paging systems don't need.
PC-IX and I presume Xenix worked on the 8088/8086 by having the C compiler emit code which stayed in a segment -- so programs wouldn't interfere with each other
There was 286 Xenix that used multiple segments in protected mode. I never used it.
What the 286 couldn't do was virtual memory, which the 386 could.
Sure it could. The system could mark segments as nonresident and take a fault and swap them in as needed. I wouldn't call that very good virtual memory, but it's definitely virtual memory.
According to Johnny Billquist <bqt@softjar.se>:
The claim has another problem. While an x86 might be considered more powerful in some ways, it does not have nearly as capable an MMU as the PDP-11,
The 8086 had no MMU at all, but small model code gave you 64K each of instructions and data, the same as what the 11's MMU gave you. There was no hardware protection, so a malicious or badly broken program could crash the system, but they rarely did. That would require instructions that the C compiler didn't generate.
That claim "would require instructions that the C compiler didn't generate" is just not true. Without memory protection, there are plenty of ways to crash the system - e.g. overwriting the operating system code due to a bug in an application.
I haven't tried Unix on 8086, but DOS on x86 essentially relied on applications being reasonably correct and not too buggy. Having the reset button conveniently accessible was effectively a requirement for any DOS PC ;-)
I haven't tried Unix on 8086 ...
Unix on an early IBM PC (8086, 10M hard drive) would have been quite a shoehorning job. 'Slow' would probably be a generous word to use.
If you think about I/O, "builtin" 360/30 I/O was done by microcode, so probably significantly faster than programmed I/O on the PC.
But the PC had hardware DMA channels, and that should be at least as fast as the 360/30.
On Wed, 27 Aug 2025 16:40:27 +0200
Alexander Schreiber <als@usenet.thangorodrim.de> wrote:
That claim "would require instructions that the C compiler didn't generate" is just not true. Without memory protection, there are plenty of ways to crash the system - e.g. overwriting the operating system code due to a bug in an application.
It's certainly true that there's no *real* protection on the 8086. AFAIUI the logic is that, if generated code doesn't touch the segment registers and the OS allocates either 64KB shared or 64KB code + 64KB data, a 16-bit address won't ever overstep into the next 64KB of RAM, but x86 addressing can have up to three 16-bit components (two index registers plus a fixed offset), so it's entirely possible for basic addressing operations to overstep that boundary, unless the compiler just forgoes complex addressing entirely.
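A minimal C sketch of that addressing point, assuming the usual 8086 real-mode semantics where the sum of base, index, and displacement is truncated to 16 bits before the segment is shifted in (the helper name is made up):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical model of 8086 real-mode address formation.  The
     * effective address is the 16-bit sum of up to three components;
     * the carry out of bit 15 is discarded, so the result wraps
     * within the 64K segment rather than carrying into the segment
     * part of the address. */
    static uint32_t phys_addr(uint16_t seg, uint16_t base,
                              uint16_t index, uint16_t disp)
    {
        uint16_t ea = (uint16_t)(base + index + disp); /* mod 64K */
        return ((uint32_t)seg << 4) + ea;              /* 20 bits */
    }

    int main(void)
    {
        /* bx + si + disp = 0x11010 overflows 16 bits and wraps */
        printf("%05lx\n",
               (unsigned long)phys_addr(0x1000, 0xF000, 0x2000, 0x0010));
        return 0;
    }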
On Wed, 27 Aug 2025 22:09:07 -0000 (UTC), Waldek Hebisch wrote:
If you think about I/O, "builtin" 360/30 I/O was done by microcode, so probably significantly faster than programmed I/O on the PC.
Fast I/O throughput was just about the main point of a mainframe computer.
But the PC had hardware DMA channels, and that should be at least as fast as the 360/30.
Could MS-DOS (or CP/M) really make use of DMA? Particularly since it couldn't even do multitasking or interrupt-driven I/O, so the OS driver would just sit there spinning its wheels until the I/O completed anyway.
Yes, MS-DOS could.
I know for certain because I used (in the 90's) an analog data acquisition card which came with routines for direct polling, interrupt-driven, or DMA-driven transfers. I still have the documentation.
However, it worked, IIRC, at the same frequency as the original IBM PC. I have forgotten the exact explanation, but perhaps I have it written down somewhere. Probably related to the clock frequency of the bus on the ISA cards.
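For flavor, here is roughly what "using the DMA channels" looked like on the PC side: arming the 8237 for a single transfer on channel 2, the floppy channel. This is an untested outline assuming a Borland-style DOS compiler (outportb() from <dos.h>) and the standard PC port assignments:

    #include <dos.h>   /* outportb(); Borland-style DOS compiler assumed */

    /* Arm 8237 DMA channel 2 (the PC's floppy channel) for one
     * device-to-memory transfer.  Standard PC port numbers; the
     * buffer must not cross a 64K physical page, since the page
     * register only supplies address bits A16-A19. */
    void dma2_arm(unsigned long phys, unsigned int count)
    {
        outportb(0x0A, 0x06);                 /* mask channel 2       */
        outportb(0x0C, 0x00);                 /* clear byte flip-flop */
        outportb(0x0B, 0x46);                 /* single mode, write   */
        outportb(0x04, phys & 0xFF);          /* address, low byte    */
        outportb(0x04, (phys >> 8) & 0xFF);   /* address, high byte   */
        outportb(0x81, (phys >> 16) & 0x0F);  /* page register        */
        outportb(0x05, (count - 1) & 0xFF);   /* count - 1, low byte  */
        outportb(0x05, ((count - 1) >> 8) & 0xFF);
        outportb(0x0A, 0x02);                 /* unmask channel 2     */
    }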
On 8/27/25 10:40 AM, Alexander Schreiber wrote:
I haven't tried Unix on 8086, but DOS on x86 essentially relied on applications being reasonably correct and not too buggy.
In any case there WERE "Unix Variants" even for the early x86 IBM-PCs. M$ sold Xenix, a bit later there was SCO Unix.
The old 8088 was NOT super good for -IX systems but they DID make them (sort of) work. The 386 was much better, but that was some years later. I still remember the PCs coming with a DOS and CP/M-86 floppy. Choose.
My old boss and I debated about dedicating The Company to DOS or Unix. For sure Unix was generally "better", but alas NOT well suited to all the hardware we had. SO, in the end, it was DOS/Win. Many MANY more apps for DOS/Win ... so, in retrospect .......
Later, M$ went Dark Side .....
DID find an old Xenix on an antique software site. It's *41* floppies worth. Kept it, but not sure if I'll ever make a VM out of it. Interest/energy kinda compete :-)
On 8/28/25 00:26, c186282 wrote:
In any case there WERE "Unix Variants" even for the early x86
IBM-PCs.
Minix
On 2025-08-27, Alexander Schreiber <als@usenet.thangorodrim.de> wrote:
I haven't tried Unix on 8086, but DOS on x86 essentially relied on applications being reasonably correct and not too buggy. Having the reset button conveniently accessible was effectively a requirement for any DOS PC ;-)
Unfortunately, at about that time the reset button vanished (probably due to the DMCA or whatever preceded it).
If you didn't want to live entirely in a 64K segment, though, you probably told your C compiler to generate code for the various larger memory models, which gave you the ability to scribble over the entire 640K (plus system storage).
Charlie Gibbs <cgibbs@kltpzyxm.invalid> wrote:
Unfortunately, at about that time the reset button vanished (probably due to the DMCA or whatever preceded it).
Really? System boards bought this year still have a reset line, and my workstation tower case (about 10-15y old now) still has a reset button.
PCs bought in the 1990s and 2000s still tended to have nicely accessible reset buttons. Not that hiding the reset button would help, when one can just flip the power.
According to Charlie Gibbs <cgibbs@kltpzyxm.invalid>:
If you didn't want to live entirely in a 64K segment, though, you probably told your C compiler to generate code for the various larger memory models,
Not the PC/IX compiler. It was small model only, which was plenty to compile all of the PDP-11 source code. x86 object code was a little smaller than PDP-11 code; the data would have been the same size.
We got complaints from people who wanted to be able to run larger programs. Sorry, doesn't do that; you can use several processes talking through pipes.
According to Johnny Billquist <bqt@softjar.se>:
The claim has another problem. While an x86 might be considered more powerful in some ways, it does not have nearly as capable an MMU as the PDP-11,
The 8086 had no MMU at all, but small model code gave you 64K each of instructions and data, the same as what the 11's MMU gave you. There was no hardware protection, so a malicious or badly broken program could crash the system, but they rarely did. That would require instructions that the C compiler didn't generate.
I worked on PC/IX, which was a straightforward port of PDP-11 System III Unix to the PC. It wasn't particularly fast, but all the C programs that ran on the 11 also ran on PC/IX. It was quite reliable. I recall that we got a bug report about something that only broke if the system had been up continuously for a year.
On 2025-08-26 22:21, John Levine wrote:
The 8086 had no MMU at all, but small model code gave you 64K each of instructions and data, the same as what the 11's MMU gave you.
If we were to compare the memory layout/concepts of the PDP-11 and x86, with an eye to powerful and capable, then the PDP-11, which has an MMU, doesn't need to allocate 64K of memory for each process. In fact, it only needs to allocate as much memory as the process actually requires, and any addressing outside of that would trap and you'd get a signal in your process. So you can easily squeeze many more processes into the same amount of memory.
The next couple of points I don't know exactly when they came about for the PDP-11, so it might have been a bit later, but I think it's still valid as a comparison against the x86 here.
Stack, on the PDP-11, is dynamically grown and allocated while the program is running, so you don't have to pre-allocate all that memory either, even though it can grow up to close to 64K.
But even more important, on the PDP-11 there is support for overlaid programs, which makes heavy use of the MMU. Basically, programs can be way larger than 64K of code. You can place functions in different overlays and call between them, and you can run up to many hundreds of K of code very easily and straightforwardly on the PDP-11, and it's all because the MMU helps you out with it, moving the page mappings around as needed.
On Fri, 29 Aug 2025 12:46:49 +0200, Alexander Schreiber wrote:
Really? System boards bought this year still have a reset line, and my workstation tower case (about 10-15y old now) still has a reset button. PCs bought in the 1990s and 2000s still tended to have nicely accessible reset buttons. Not that hiding the reset button would help, when one can just flip the power.
The original PC didn't have one. I remember fitting one to mine!
In article <108su32$3e8$1@news.misty.com>, Johnny Billquist <bqt@softjar.se> wrote:
But even more important, on the PDP-11, there is support for overlaid programs, which makes heavy use of the MMU.
My memory is that at least for BSD Unix, overlays were not supported until, um, 2.9BSD I think, and that using them was not at all straightforward. It may have been easier for official DEC OSes...
On 29/08/2025 18:09, John Levine wrote:
We got complaints from people who wanted to be able to run larger programs. Sorry, doesn't do that; you can use several processes talking through pipes.
The PDP I worked on was 64k code, 64k data/stack. C was designed for that.
My PC compilers could do large model. But it wasn't really worth it until the 386 came along.
But there definitely was a period before that where the button vanished (although there would have been motherboard pins if you wanted to dig into it).
Given the rock-solid stability and reliability of the MS-DOS environment and its applications *cough* *cough*, that sounds like an interesting design oversight.
On Fri, 29 Aug 2025 23:51:18 GMT, Charlie Gibbs wrote:
But there definitely was a period before that where the button vanished (although there would have been motherboard pins if you wanted to dig into it).
Apple included a little springy clip thing (the "Programmer's Switch") in the box with each of those original classic-form-factor Macintoshes. When installed, pressing one side triggered NMI (used for invoking the resident debugger), while the other side triggered the RESET line (hard reboot).
I still have the muscle memory: seated in front of the machine, reach around with right hand, far side was NMI, near side was RESET.
It was for us. We needed all that memory. It was only a few years ago that I finally got rid of all the hacks I wrote in to normalize pointers and deal with segment wrap-arounds. It was horrible. Forget the 640K barrier - the 64K barrier was alive and well on the 8086/8088/80286.
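The pointer normalization mentioned above looked roughly like this - a sketch assuming a Borland/Microsoft-style real-mode compiler with FP_SEG/FP_OFF/MK_FP in <dos.h> (the function name is made up):

    #include <dos.h>   /* FP_SEG, FP_OFF, MK_FP */

    /* Fold as much of the offset as possible into the segment, so
     * the offset ends up < 16 and nearby pointer arithmetic can't
     * wrap at the 64K boundary.  "Huge" pointers did this
     * implicitly, at some run-time cost; with plain far pointers
     * you did it by hand. */
    void far *normalize(void far *p)
    {
        unsigned long lin = ((unsigned long)FP_SEG(p) << 4) + FP_OFF(p);
        return MK_FP((unsigned)(lin >> 4), (unsigned)(lin & 0x0F));
    }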
On Fri, 29 Aug 2025 23:51:17 GMT, Charlie Gibbs wrote:
It was for us. We needed all that memory.
Tiny, Small, Large, Bigger & Humongous. I have the names wrong, but I'm pretty sure there were 5 sets of libraries that you had to choose from to do a build. Then there was what was referred to as the 'thunk' in DJGPP circles when you needed to get real.
On 8/30/25 1:54 AM, rbowman wrote:
Tiny, Small, Large, Bigger & Humongous. I have the names wrong, but I'm pretty sure there were 5 sets of libraries that you had to choose from to do a build.
The original 8088 had all the needed registers. Could minimum deliver at LEAST an easy 64k code space and at LEAST another 64k data area. A few tricks and .......
So YEA - you COULD run some kind of -IX on the original PCs. Not super fast/efficient but it COULD work. Remember early versions of SCO.
286/386 ... MUCH better - but it came LATER.
On 30/08/2025 08:06, c186282 wrote:
So YEA - you COULD run some kind of -IX on the original PCs. Not super fast/efficient but it COULD work.
The big trouble was that Unix was expanding faster than the PC architecture could handle, until the 386 made it all easy.
Then SCO Unix made it all not just possible, but extremely handy.
But we never got a serious graphical user interface on Unix. By the time X Windows had stabilised and decent window managers evolved, Linux had arrived.
On 29 Aug 2025 20:52:10 GMT, Ted Nolan <tednolan> wrote:
In article <108su32$3e8$1@news.misty.com>, Johnny Billquist <bqt@softjar.se> wrote:
But even more important, on the PDP-11, there is support for overlaid programs, which makes heavy use of the MMU.
No, it didn't make use of the MMU at all. It was a purely software thing, involving replacing in-memory parts of the program with other parts loaded from the executable file.
My memory is that at least for BSD Unix, overlays were not supported until, um, 2.9BSD I think, and that using them was not at all straightforward. It may have been easier for official DEC OSes...
Using overlays was never straightforward, on any OS.
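As an illustration of the "purely software thing": a deliberately toy C sketch of an overlay scheme (all names hypothetical) - one shared code region, with overlays copied in from the program file on demand by a dispatcher stub. Real overlay linkers generated the stubs automatically:

    #include <stdio.h>
    #include <string.h>

    #define OVL_SIZE 8192          /* one shared region for all overlays */

    static char overlay_region[OVL_SIZE];
    static int  resident = -1;     /* which overlay is loaded right now  */

    /* Stand-in for seeking and reading overlay n's code out of the
     * executable file into the shared region. */
    static void load_overlay(int n)
    {
        memset(overlay_region, 0, sizeof overlay_region);
        printf("swapping in overlay %d\n", n);
        resident = n;
    }

    /* Every cross-overlay call goes through a stub like this.  If
     * the caller's own overlay can also be evicted while the callee
     * runs, the return path needs the same treatment - exactly the
     * distinction drawn later in this thread. */
    static void call_in_overlay(int n, void (*fn)(void))
    {
        if (resident != n)
            load_overlay(n);
        fn();
    }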
On 8/30/25 3:28 AM, The Natural Philosopher wrote:
The big trouble was that Unix was expanding faster than the PC architecture could handle, until the 386 made it all easy.
Agreed.
Then SCO Unix made it all not just possible, but extremely handy.
Well ... not "handy" enough.
My boss at the time - a very smart nerd - and I did discuss DOS -vs- Unix for The Outfit. We eventually decided on DOS, and ultimately Win. MORE stuff. MORE support.
But we never got a serious graphical user interface on Unix. By the time X Windows had stabilised and decent window managers evolved, Linux had arrived.
Hey, DEALT with 'X' and WMs on the first versions of Linux you could get. NOT super-easy all the time. Spent like 48 hours getting it to rec my damned mouse with RH. NOT encouraging.
However Linux GOT BETTER. Soon I had all the needed office servers on Linux - Just Because. But, alas, the Staff - Winders Forever And Always. Typical "split environment".
On 8/29/25 11:27 PM, Lawrence D'Oliveiro wrote:
Apple included a little springy clip thing (the "Programmer's Switch") in the box with each of those original classic-form-factor Macintoshes. When installed, pressing one side triggered NMI (used for invoking the resident debugger), while the other side triggered the RESET line (hard reboot).
Hmmm ... how did they implement that? How did it differ from just using the power switch???
In THEORY that kind of 'reset' SHOULD include at least ATTEMPTS to shut down a few important daemons. MOST important, the HDD cache ... DO try yer best to write-out the cache before going off.
"Reset" buttons are mostly good, but on the company servers I always disconnected those, so no dink could just accidentally bump into the switch while looking for something else. REAL power switch, like a 3-sec delay before anything happens.
On 30/08/2025 10:30, c186282 wrote:
Well ... not "handy" enough. My boss at the time - a very smart nerd - and I did discuss DOS -vs- Unix for The Outfit. We eventually decided on DOS, and ultimately Win. MORE stuff. MORE support.
I was the boss. SCO Unix for the networked servers and Win 3 for the desktops. SUN PC-NFS to hang it all together.
Hey, DEALT with 'X' and WMs on the first versions of Linux you could get. NOT super-easy all the time.
I waited. By around 2003 Debian had some sort of GUI.
However Linux GOT BETTER. Soon I had all the needed office servers on Linux - Just Because. But, alas, the Staff - Winders Forever And Always. Typical "split environment".
Yup. Of course.
On 2025-08-29 22:52, Ted Nolan <tednolan> wrote:
My memory is that at least for BSD Unix, overlays were not supported until, um, 2.9BSD I think, and that using them was not at all straightforward. It may have been easier for official DEC OSes...
The timeline is the bit I'm not entirely sure about. It uses the capabilities that were in the PDP-11 hardware all the time, though. So it's an interesting thing to remember/compare with Unix on an 8086.
As for ease of use, you got it backward. While overlays in DEC OSes actually are way more advanced and capable than overlays in Unix on the PDP-11, using them on the Unix side is basically a no-brainer. You don't need to do anything at all. You just put modules wherever you want to, and it works.
With the DEC OSes, you have to create an overlay description in a weird language, and you can't call across overlay trees, and you need to be careful if you call upstream, which might change mapping, and all that. None of those restrictions apply for Unix overlays. The only thing you need to keep an eye out for is just that the size is kept within some rules.
  Johnny
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On 29 Aug 2025 20:52:10 GMT, Ted Nolan <tednolan> wrote:
Using overlays was never straightforward, on any OS.
Typical troll comment. There are existence proofs counter to your unsupported blanket statement.
Burroughs medium systems, for example, where using overlays was built into the compilation tools (including the COBOL compiler) and the operating system. Even the operating system used overlays for rarely used functionality.
On 8/30/25 08:54, Scott Lurndal wrote:
Burroughs medium systems, for example, where using overlays was built into the compilation tools (including the COBOL compiler) and the operating system.
OS/360 and applications made extensive use of overlays.
I remember a (PC-)DOS application called "Enable" that overlayed like crazy.
In article <108vigs$2q3n5$1@dont-email.me>, Peter Flass <Peter@Iron-Spring.com> wrote:
I remember a (PC-)DOS application called "Enable" that overlayed like crazy.
I remember that one -- I had to do printer support over PC-NFS with a filter to convert the Diablo-630 emulation to something the network printer could use. The people actually using the program called it "Unable".
Lawrence D'Oliveiro <ldo@nz.invalid> wrote at 03:27 this Saturday (GMT):
Apple included a little springy clip thing (the "Programmer's Switch") in the box with each of those original classic-form-factor Macintoshes. When installed, pressing one side triggered NMI (used for invoking the resident debugger) ...
That's pretty cool, I always wished there was a physical switch to trigger a debugger since the system might be frozen...
In article <108uhef$26t$1@news.misty.com>, Johnny Billquist <bqt@softjar.se> wrote:
As for ease of use, you got it backward. While overlays in DEC OSes actually are way more advanced and capable than overlays in Unix on the PDP-11, using them on the Unix side is basically a no-brainer.
Since you've done it, I defer. I just recall that when we got 2.9BSD, I considered trying to port some big VAX program to the 11, and from reading the man pages I got the impression I would have to get intimately familiar with said program's call graph (which I definitely was not) to partition out the overlays, and ended up moving on to something else.
On 2025-08-30 19:39, Ted Nolan <tednolan> wrote:
Since you've done it, I defer. I just recall that when we got 2.9BSD, I considered trying to port some big VAX program to the 11, and from reading the man pages I got the impression I would have to get intimately familiar with said program's call graph to partition out the overlays, and ended up moving on to something else.
You can basically just take the different object files and put them into different overlays, and that's it. No need to think anything over with call graphs or anything. You do get headaches if individual object files are just huge, of course. And total data cannot be more than 64K; it's only code that is overlaid. But compared to the DEC overlay scheme, it's really simple.
With the DEC stuff, you do need to keep track of call graphs and stuff. Also, it is done in its own language, which in itself is also a bit of a thing to get into. But you can do a lot of stuff with the DEC overlay stuff that isn't at all possible to do under Unix.
So it's a tradeoff (as usual).
  Johnny
The Linux kernel offers something similar, but again that assumes that keyboard handling (or alternatively, serial port handling) is still functioning <https://docs.kernel.org/admin-guide/sysrq.html>.
On 2025-08-30 06:25, c186282 wrote:
"Reset" buttons are mostly good, but on the company servers I always disconnected those, so no dink could just accidentally bump into the switch while looking for something else.
Some reset buttons have to be pressed deep to work.
On my very first PC (80386DX CPU clocked at a blistering 40 MHz), the desktop case had the reset button right next to the turbo button, and of course they looked identical except for the text label above them. Nice big flat buttons too, flush with the case surface, so easy to press. Thankfully, I usually didn't need the turbo button.
On 8/31/25 03:40, Johnny Billquist wrote:
You can basically just take the different object files and put them into different overlays, and that's it. No need to think anything over with call graphs or anything.
Sort of. Assuming overlays on DEC function like all others I've seen, you need to organize. What goes into the root (used by all overlays)? Then group object files so that things used together are in the same overlay.
On 29/08/2025 07:50, Charlie Gibbs wrote:
If you didn't want to live entirely in a 64K segment, though, you probably told your C compiler to generate code for the various larger memory models, which gave you the ability to scribble over the entire 640K (plus system storage).
Wasn't there a 64k data and 64k code model as well? And possibly a 64K stack as well, though that was a pain with C.
On 2025-08-29, Alexander Schreiber <als@usenet.thangorodrim.de> wrote:
Really? System boards bought this year still have a reset line, and my workstation tower case (about 10-15y old now) still has a reset button.
Oops, I forgot about that. They did make a comeback, didn't they?
But there definitely was a period before that where the button vanished (although there would have been motherboard pins if you wanted to dig into it).
Most later machines of mine had the reset button recessed, making it much harder to press by accident.
On Sun, 31 Aug 2025 18:23:23 +0200, Alexander Schreiber wrote:
Most later machines of mine had the reset button recessed, making it much harder to press by accident.
I can see the one contrarian interoffice memo now:
"Why not recess the turbo button instead, and make it harder to press? Because it can cause compatibility problems with older software designed to run only at a CPU speed of 4.77MHz, so the user should think twice before pressing it. The reset button should be easier to press, because when you need it, you really need it!"
;)
On 2025-08-30 01:34, Lawrence D'Oliveiro wrote:
Using overlays was never straightforward, on any OS.
It was trivial on Turbo Pascal.
On Sun, 31 Aug 2025 13:44:17 +0200, Carlos E.R. wrote:
It was trivial on Turbo Pascal.
There were two kinds of overlay system: the one where the calling code could itself be swapped out while the called code ran (needing a possible segment swap when returning from the callee), and the one where it couldn't (needing more memory).
Which one did Turbo Pascal use?