• Re: Hmmm ... Downloaded Xenix - But It's *41* Floppies Worth

    From John Ames@commodorejohn@gmail.com to comp.os.linux.misc,alt.folklore.computers on Tue Aug 26 08:31:18 2025
    From Newsgroup: comp.os.linux.misc

    On Tue, 26 Aug 2025 14:04:07 +0200
    Johnny Billquist <bqt@softjar.se> wrote:

    Hmm, that's an interesting question, actually - the Bell Labs -11
    was an 11/45, which was much faster than the original -11s, while
    the IBM PC was really a bit of a dog thanks to having a 16-bit
    architecture on an 8-bit bus and the generally poor performance characteristics of the first-generation x86 CPUs. It'd be neat to
    do a head-to-head shootout. I don't know if it's recorded whether
    the Bell Labs -11 was core or semiconductor memory (980 vs. 450 ns
    cycle time;) the PC at 4.77 MHz would have a cycle time of around
    209 ns, but with the aforementioned 8-bit bus. As a naive
    approximation, that might put them anywhere from comparable to
    around twice the memory bandwidth for the PC...but then the 8088's instruction times are kinda abysmal even on top of that. Definitely
    makes one curious...

    The claim has another problem. While an x86 might be considered more powerful in some ways, it does not have nearly as capable an MMU as the
    PDP-11, and that really trips the whole thing over when comparing.
    (I'm not sure I would even say the x86 has anything resembling a
    proper MMU... Not before the 80386 anyway, which was not in a PC or
    PC/XT.)

    The 286 had a proper MMU, but not (AFAIUI) a terribly performant one.
    I've never heard of an add-on MMU for the 8086, but I admit I've never
    looked. In any case, I realize now that my napkin math was way off; the
    8086/88 can only perform a memory access every fourth cycle, so the
    PC's approximate "cycle time" by comparison would be more like ~836 ns,
    barely faster than core even *before* you factor in the 8-bit bottle-
    neck or DRAM refresh. Ye gods, what a *dog.*
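
    Just to put that napkin math in one place (assuming the stock 4.77 MHz
    clock and the 8088's four clocks per bus cycle - rough figures only,
    this is just the arithmetic, not a measurement):

      /* Rough cross-check of the figures above: 4.77 MHz clock,
         four clocks per 8088 bus cycle, vs. the 980 ns core and
         450 ns MOS memory quoted for the 11/45. Illustrative only. */
      #include <stdio.h>

      int main(void)
      {
          double clk_ns = 1e9 / 4.77e6;   /* ~209.6 ns per clock        */
          double bus_ns = 4.0 * clk_ns;   /* ~838 ns per memory access  */

          printf("PC clock period : %.1f ns\n", clk_ns);
          printf("PC bus cycle    : %.1f ns (vs. 980/450 ns core/MOS)\n",
                 bus_ns);
          return 0;
      }

    Which lands in the same ballpark as the ~836 ns figure above.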

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kerr-Mudd, John@admin@127.0.0.1 to comp.os.linux.misc,alt.folklore.computers on Tue Aug 26 17:52:41 2025
    From Newsgroup: comp.os.linux.misc

    On Tue, 26 Aug 2025 08:31:18 -0700
    John Ames <commodorejohn@gmail.com> wrote:

    On Tue, 26 Aug 2025 14:04:07 +0200
    Johnny Billquist <bqt@softjar.se> wrote:

    Hmm, that's an interesting question, actually - the Bell Labs -11
    was an 11/45, which was much faster than the original -11s, while
    the IBM PC was really a bit of a dog thanks to having a 16-bit architecture on an 8-bit bus and the generally poor performance characteristics of the first-generation x86 CPUs. It'd be neat to
    do a head-to-head shootout. I don't know if it's recorded whether
    the Bell Labs -11 was core or semiconductor memory (980 vs. 450 ns
    cycle time;) the PC at 4.77 MHz would have a cycle time of around
    209 ns, but with the aforementioned 8-bit bus. As a naive
    approximation, that might put them anywhere from comparable to
    around twice the memory bandwidth for the PC...but then the 8088's instruction times are kinda abysmal even on top of that. Definitely
    makes one curious...

    The claim has another problem. While an x86 might be considered more powerful in some ways, it does not have nearly as capable an MMU as the PDP-11, and that really trips the whole thing over when comparing.
    (I'm not sure I would even say the x86 has anything resembling a
    proper MMU... Not before the 80386 anyway, which was not in a PC or
    PC/XT.)

    The 286 had a proper MMU, but not (AFAIUI) a terribly performant one.
    I've never heard of an add-on MMU for the 8086, but I admit I've never looked. In any case, I realize now that my napkin math was way off; the 8086/88 can only perform a memory access every fourth cycle, so the
    PC's approximate "cycle time" by comparison would be more like ~836 ns, barely faster than core even *before* you factor in the 8-bit bottle-
    neck or DRAM refresh. Ye gods, what a *dog.*

    Sure, but the idea was you had it *all to yourself* /Mwhah-hah-hah-hah!/ Sorry, I don't know what came over me there.
    --
    Bah, and indeed Humbug.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From John Ames@commodorejohn@gmail.com to comp.os.linux.misc,alt.folklore.computers on Tue Aug 26 10:01:26 2025
    From Newsgroup: comp.os.linux.misc

    On Tue, 26 Aug 2025 17:52:41 +0100
    "Kerr-Mudd, John" <admin@127.0.0.1> wrote:

    The 286 had a proper MMU, but not (AFAIUI) a terribly performant
    one. I've never heard of an add-on MMU for the 8086, but I admit
    I've never looked. In any case, I realize now that my napkin math
    was way off; the 8086/88 can only perform a memory access every
    fourth cycle, so the PC's approximate "cycle time" by comparison
    would be more like ~836 ns, barely faster than core even *before*
    you factor in the 8-bit bottleneck or DRAM refresh. Ye gods, what
    a *dog.*

    Sure, but the idea was you had it *all to yourself*
    /Mwhah-hah-hah-hah!/ Sorry, I don't know what came over me there.

    Admittedly.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Rich@rich@example.invalid to comp.os.linux.misc,alt.folklore.computers on Tue Aug 26 17:26:22 2025
    From Newsgroup: comp.os.linux.misc

    In comp.os.linux.misc John Ames <commodorejohn@gmail.com> wrote:
    On Tue, 26 Aug 2025 14:04:07 +0200
    Johnny Billquist <bqt@softjar.se> wrote:

    Hmm, that's an interesting question, actually - the Bell Labs -11
    was an 11/45, which was much faster than the original -11s, while
    the IBM PC was really a bit of a dog thanks to having a 16-bit
    architecture on an 8-bit bus and the generally poor performance
    characteristics of the first-generation x86 CPUs. It'd be neat to
    do a head-to-head shootout. I don't know if it's recorded whether
    the Bell Labs -11 was core or semiconductor memory (980 vs. 450 ns
    cycle time;) the PC at 4.77 MHz would have a cycle time of around
    209 ns, but with the aforementioned 8-bit bus. As a naive
    approximation, that might put them anywhere from comparable to
    around twice the memory bandwidth for the PC...but then the 8088's
    instruction times are kinda abysmal even on top of that. Definitely
    makes one curious...

    The claim has another problem. While an x86 might be considered more
    powerful in some ways, it does not have nearly as capable an MMU as the
    PDP-11, and that really trips the whole thing over when comparing.
    (I'm not sure I would even say the x86 has anything resembling a
    proper MMU... Not before the 80386 anyway, which was not in a PC or
    PC/XT.)

    The 286 had a proper MMU, but not (AFAIUI) a terribly performant one.

    That also depends on one's definition of "proper MMU". The 286 had a segmented MMU, but lacked a paged MMU. Paging was not added until the
    386. And there are some that define "proper MMU" as "paged MMU".
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From ted@loft.tnolan.com (Ted Nolan@tednolan to comp.os.linux.misc,alt.folklore.computers on Tue Aug 26 17:47:53 2025
    From Newsgroup: comp.os.linux.misc

    In article <108kqnu$615d$1@dont-email.me>, Rich <rich@example.invalid> wrote:
    In comp.os.linux.misc John Ames <commodorejohn@gmail.com> wrote:
    On Tue, 26 Aug 2025 14:04:07 +0200
    Johnny Billquist <bqt@softjar.se> wrote:

    Hmm, that's an interesting question, actually - the Bell Labs -11
    was an 11/45, which was much faster than the original -11s, while
    the IBM PC was really a bit of a dog thanks to having a 16-bit
    architecture on an 8-bit bus and the generally poor performance
    characteristics of the first-generation x86 CPUs. It'd be neat to
    do a head-to-head shootout. I don't know if it's recorded whether
    the Bell Labs -11 was core or semiconductor memory (980 vs. 450 ns
    cycle time;) the PC at 4.77 MHz would have a cycle time of around
    209 ns, but with the aforementioned 8-bit bus. As a naive
    approximation, that might put them anywhere from comparable to
    around twice the memory bandwidth for the PC...but then the 8088's
    instruction times are kinda abysmal even on top of that. Definitely
    makes one curious...

    The claim has another problem. While an x86 might be considered more
    powerful in some ways, it does not have nearly as capable an MMU as the
    PDP-11, and that really trips the whole thing over when comparing.
    (I'm not sure I would even say the x86 has anything resembling a
    proper MMU... Not before the 80386 anyway, which was not in a PC or
    PC/XT.)

    The 286 had a proper MMU, but not (AFAIUI) a terribly performant one.

    That also depends on one's definition of "proper MMU". The 286 had a segmented MMU, but lacked a paged MMU. Paging was not added until the
    386. And there are some that define "proper MMU" as "paged MMU".

    I don't know the MMU details for the 286, but my understanding (formed
    at the time) is that it was "proper" in that it could actually protect
    running programs from each other. PC-IX and I presume Xenix worked
    on the 8088/8086 by having the C compiler emit code which stayed
    in a segment -- so programs wouldn't interfere with each other
    *if* nothing went wrong. If something went wrong (which presumably
    you could easily provoke in assembler code), one program could trash
    another's RAM.

    What the 286 couldn't do was virtual memory, which the 386 could.
    --
    columbiaclosings.com
    What's not in Columbia anymore..
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.os.linux.misc,alt.folklore.computers on Tue Aug 26 18:19:26 2025
    From Newsgroup: comp.os.linux.misc

    ted@loft.tnolan.com (Ted Nolan <tednolan>) writes:
    In article <108kqnu$615d$1@dont-email.me>, Rich <rich@example.invalid> wrote:
    In comp.os.linux.misc John Ames <commodorejohn@gmail.com> wrote:
    On Tue, 26 Aug 2025 14:04:07 +0200
    Johnny Billquist <bqt@softjar.se> wrote:

    Hmm, that's an interesting question, actually - the Bell Labs -11
    was an 11/45, which was much faster than the original -11s, while
    the IBM PC was really a bit of a dog thanks to having a 16-bit
    architecture on an 8-bit bus and the generally poor performance
    characteristics of the first-generation x86 CPUs. It'd be neat to
    do a head-to-head shootout. I don't know if it's recorded whether
    the Bell Labs -11 was core or semiconductor memory (980 vs. 450 ns
    cycle time;) the PC at 4.77 MHz would have a cycle time of around
    209 ns, but with the aforementioned 8-bit bus. As a naive
    approximation, that might put them anywhere from comparable to
    around twice the memory bandwidth for the PC...but then the 8088's
    instruction times are kinda abysmal even on top of that. Definitely
    makes one curious...

    The claim has another problem. While an x86 might be considered more
    powerful in some ways, it does not have nearly as capable an MMU as the
    PDP-11, and that really trips the whole thing over when comparing.
    (I'm not sure I would even say the x86 has anything resembling a
    proper MMU... Not before the 80386 anyway, which was not in a PC or
    PC/XT.)

    The 286 had a proper MMU, but not (AFAIUI) a terribly performant one.

    That also depends on one's definition of "proper MMU". The 286 had a
    segmented MMU, but lacked a paged MMU. Paging was not added until the
    386. And there are some that define "proper MMU" as "paged MMU".

    I don't know the MMU details for the 286, but my understanding (formed
    at the time) is that it was "proper" in that it could actually protect
    running programs from each other. PC-IX and I presume Xenix worked
    on the 8088/8086 by having the C compiler emit code which stayed
    in a segment -- so programs wouldn't interfere with each other
    *if* nothing went wrong. If something went wrong, (which presumably
    you could easily provoke in assembler code) one program could trash
    another's RAM.

    Protected mode provided four privilege rings, task management and memory protection at the segment level.


    What the 286 couldn't do was virtual memory, which the 386 could.

    To the extent that a segment could be marked not-present in the
    GDT or LDT, the 286 supported virtual memory. The segment
    descriptor supported an accessed bit.
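
    For anyone who wants the concrete layout being described, a minimal
    sketch of the 286 segment descriptor with the Present and Accessed
    bits (field widths per the published 286 format; the C struct is just
    an illustration, not something a real loader would declare this way):

      /* 80286 segment descriptor, 8 bytes. P=0 raises a "segment not
         present" fault (the hook for swapping segments in); A is set
         by the CPU when the segment is touched. Illustrative only. */
      #include <stdint.h>

      struct seg_desc_286 {
          uint16_t limit;      /* segment limit (size - 1)            */
          uint16_t base_lo;    /* base address, bits 0-15             */
          uint8_t  base_hi;    /* base address, bits 16-23            */
          uint8_t  access;     /* P | DPL(2) | S | type(3) | A        */
          uint16_t reserved;   /* must be zero on the 286             */
      };

      #define DESC_PRESENT   0x80    /* clear -> not-present fault    */
      #define DESC_ACCESSED  0x01    /* set by hardware on use        */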
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From The Natural Philosopher@tnp@invalid.invalid to comp.os.linux.misc,alt.folklore.computers on Tue Aug 26 19:43:10 2025
    From Newsgroup: comp.os.linux.misc

    On 26/08/2025 14:23, Johnny Billquist wrote:
    On 2025-08-26 14:13, The Natural Philosopher wrote:
    On 26/08/2025 13:04, Johnny Billquist wrote:
    The claim has another problem. While an x86 might be considered more
    powerful in some ways, it does not have nearly as capable an MMU as the
    PDP-11, and that really trips the whole thing over when comparing.
    You could equip an *86 with a decent MMU and people did.

    The 8086? What decent MMU existed for that?

    a 386 running Unix was WAY faster than a PDP/11.

    It was also about 15 years later than the first PDP-11, and a few years later than the last new implementation of any PDP-11 at all by DEC.

    Indeed it was. And it was way cheaper than a VAX too


    (I'm not sure I would even say the x86 has anything resembling a
    proper MMU... Not before the 80386 anyway, which was not in a PC or
    PC/XT.)

    Well yes, the 386 was what the 8086 should have been all along

    Yes, eventually it got a bit more sorted out.

    Johnny

    --
    Religion is regarded by the common people as true, by the wise as
    foolish, and by the rulers as useful.

    (Seneca the Younger, 65 AD)


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From The Natural Philosopher@tnp@invalid.invalid to comp.os.linux.misc,alt.folklore.computers on Tue Aug 26 19:44:00 2025
    From Newsgroup: comp.os.linux.misc

    On 26/08/2025 17:52, Kerr-Mudd, John wrote:
    Sure, but the idea was you had it *all to yourself* /Mwhah-hah-hah-hah!/ Sorry, I don't know what came over me there.

    A masturbating elephant?
    --
    "When a true genius appears in the world, you may know him by this sign,
    that the dunces are all in confederacy against him."

    Jonathan Swift.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From John Levine@johnl@taugh.com to comp.os.linux.misc,alt.folklore.computers on Tue Aug 26 20:21:08 2025
    From Newsgroup: comp.os.linux.misc

    According to Johnny Billquist <bqt@softjar.se>:
    The claim has another problem. While an x86 might be considered more powerful in some ways, it does not have nearly as capable an MMU as the
    PDP-11,

    The 8086 had no MMU at all, but small model code gave you 64K each of instructions and data, the same as what the 11's MMU gave you. There was no hardware protection so a malicious or badly broken program could crash the system but they rarely did. That would require instructions that the C compiler didn't generate.

    I worked on PC/IX which was a straightforward port of PDP-11 System III Unix to the PC. It wasn't particularly fast, but all the C programs that ran on the 11 also ran on PC/IX. It was quite reliable. I recall that we got a bug report about something that only broke if the system had been up continuously for a year.
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From John Levine@johnl@taugh.com to comp.os.linux.misc,alt.folklore.computers on Tue Aug 26 20:25:03 2025
    From Newsgroup: comp.os.linux.misc

    According to Ted Nolan <tednolan> <tednolan>:
    That also depends on one's definition of "proper MMU". The 286 had a
    segmented MMU, but lacked a paged MMU. Paging was not added until the
    386. And there are some that define "proper MMU" as "paged MMU".

    I don't know the MMU details for the 286, but my understanding (formed
    at the time) is that it was "proper" in that it could actually protect
    running programs from each other.

    It could, but if your programs used more than one segment for code
    or data, the switching was extremely slow and painful. Since the
    segments were of variable size, that meant operating systems had
    to do free space compaction that paging systems don't need.

    PC-IX and I presume Xenix worked
    on the 8088/8086 by having the C compiler emit code which stayed
    in a segment -- so programs wouldn't interfere with each other

    There was 286 Xenix that used multiple segments in protected mode.
    I never used it.

    What the 286 couldn't do was virtual memory, which the 386 could.

    Sure it could. The system could mark segments as nonresident and
    take a fault and swap them in as needed. I wouldn't call that
    very good virtual memory, but it's definitely virtual memory.
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From The Natural Philosopher@tnp@invalid.invalid to comp.os.linux.misc,alt.folklore.computers on Tue Aug 26 21:44:56 2025
    From Newsgroup: comp.os.linux.misc

    On 26/08/2025 21:25, John Levine wrote:
    There was 286 Xenix that used multiple segments in protected mode.
    I never used it.

    I saw Venix run on a 286.
    --
    "When one man dies it's a tragedy. When thousands die it's statistics."

    Josef Stalin


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From ted@loft.tnolan.com (Ted Nolan@tednolan to comp.os.linux.misc,alt.folklore.computers on Tue Aug 26 21:12:05 2025
    From Newsgroup: comp.os.linux.misc

    In article <108l56v$77t$2@gal.iecc.com>, John Levine <johnl@taugh.com> wrote:
    According to Ted Nolan <tednolan> <tednolan>:
    That also depends on one's definition of "proper MMU". The 286 had a
    segmented MMU, but lacked a paged MMU. Paging was not added until the
    386. And there are some that define "proper MMU" as "paged MMU".

    I don't know the MMU details for the 286, but my understanding (formed
    at the time) is that it was "proper" in that it could actually protect
    running programs from each other.

    It could, but if your programs used more than one segment for code
    or data, the switching was extremely slow and painful. Since the
    segments were of variable size, that meant operating systems had
    to do free space compaction that paging systems don't need.

    PC-IX and I presume Xenix worked
    on the 8088/8086 by having the C compiler emit code which stayed
    in a segment -- so programs wouldn't interfere with each other

    There was 286 Xenix that used multiple segments in protected mode.
    I never used it.

    What the 286 couldn't do was virtual memory, which the 386 could.

    Sure it could. The system could mark segments as nonresident and
    take a fault and swap them in as needed. I wouldn't call that
    very good virtual memory, but it's definitely virtual memory.


    Thanks!

    It ain't the things you don't know, but the things you know that ain't so...
    --
    columbiaclosings.com
    What's not in Columbia anymore..
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Alexander Schreiber@als@usenet.thangorodrim.de to comp.os.linux.misc,alt.folklore.computers on Wed Aug 27 16:40:27 2025
    From Newsgroup: comp.os.linux.misc

    John Levine <johnl@taugh.com> wrote:
    According to Johnny Billquist <bqt@softjar.se>:
    The claim has another problem. While an x86 might be considered more powerful in some ways, it does not have nearly as capable an MMU as the PDP-11,

    The 8086 had no MMU at all, but small model code gave you 64K each of instructions and data, the same as what the 11's MMU gave you. There was no hardware protection so a malicious or badly broken program could crash the system but they rarely did.

    I haven't tried Unix on 8086, but DOS on x86 essentially relied on applications being reasonably correct and not too buggy. Having the reset button conveniently
    accessible was effectively a requirement for any DOS PC ;-)

    That would require instructions that the C compiler
    didn't generate.

    That claim "would require instructions that the C compiler didn't generate"
    is just not true. Without memory protection, there are plenty of ways to crash the system - e.g. overwriting the operating system code due to a bug in an application.

    Kind regards,
    Alex.
    --
    "Opportunity is missed by most people because it is dressed in overalls and
    looks like work." -- Thomas A. Edison
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From John Ames@commodorejohn@gmail.com to comp.os.linux.misc,alt.folklore.computers on Wed Aug 27 08:30:20 2025
    From Newsgroup: comp.os.linux.misc

    On Wed, 27 Aug 2025 16:40:27 +0200
    Alexander Schreiber <als@usenet.thangorodrim.de> wrote:

    That would require instructions that the C compiler
    didn't generate.

    That claim "would require instructions that the C compiler didn't
    generate" is just not true. Without memory protection, there are
    plenty of ways to crash the system - e.g. overwriting the operating
    system code due to a bug in an application.

    It's certainly true that there's no *real* protection on the 8086.
    AFAIUI the logic is that, if generated code doesn't touch the segment
    registers and the OS allocates either 64KB shared or 64KB code + 64KB
    data, a 16-bit address won't ever overstep into the next 64KB of RAM,
    but x86 addressing can have up to three 16-bit components (two index
    registers plus a fixed offset), so it's entirely possible for basic
    addressing operations to overstep that boundary, unless the compiler
    just forgoes complex addressing entirely.
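
    For concreteness, here's how real-mode address formation combines
    those components (a sketch in C under the documented 8086 semantics:
    the 16-bit effective-address sum wraps within the 64 KB segment
    before the segment base is added; register values are just examples):

      /* Real-mode 8086 address formation, sketched in C. The effective
         address (base + index + displacement) is a 16-bit sum, so it
         wraps modulo 64 KB; the 20-bit physical address is seg*16 + EA.
         Illustrative only. */
      #include <stdint.h>
      #include <stdio.h>

      static uint32_t phys(uint16_t seg, uint16_t base,
                           uint16_t index, uint16_t disp)
      {
          uint16_t ea = (uint16_t)(base + index + disp); /* wraps mod 2^16 */
          return (((uint32_t)seg << 4) + ea) & 0xFFFFF;  /* wraps mod 2^20 */
      }

      int main(void)
      {
          /* e.g. MOV AX,[BX+SI+1000h] with DS=1000h, BX=9000h, SI=7000h */
          printf("%05lX\n",
                 (unsigned long)phys(0x1000, 0x9000, 0x7000, 0x1000));
          return 0;
      }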

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kerr-Mudd, John@admin@127.0.0.1 to comp.os.linux.misc,alt.folklore.computers on Wed Aug 27 19:04:51 2025
    From Newsgroup: comp.os.linux.misc

    On Wed, 27 Aug 2025 16:40:27 +0200
    Alexander Schreiber <als@usenet.thangorodrim.de> wrote:

    []

    I haven't tried Unix on 8086, but DOS on x86 essentially relied on applications
    being reasonably correct and not too buggy. Having the reset button conveniently
    accessible was effectively a requirement for any DOS PC ;-)


    Unix on an early IBM PC (8086, 10M hard drive) would have been quite a shoehorning job. 'Slow' would probably be a generous word to use.

    []
    --
    Bah, and indeed Humbug.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.os.linux.misc,alt.folklore.computers on Wed Aug 27 22:09:07 2025
    From Newsgroup: comp.os.linux.misc

    In alt.folklore.computers John Ames <commodorejohn@gmail.com> wrote:
    On Tue, 26 Aug 2025 14:04:07 +0200
    Johnny Billquist <bqt@softjar.se> wrote:

    Hmm, that's an interesting question, actually - the Bell Labs -11
    was an 11/45, which was much faster than the original -11s, while
    the IBM PC was really a bit of a dog thanks to having a 16-bit
    architecture on an 8-bit bus and the generally poor performance
    characteristics of the first-generation x86 CPUs. It'd be neat to
    do a head-to-head shootout. I don't know if it's recorded whether
    the Bell Labs -11 was core or semiconductor memory (980 vs. 450 ns
    cycle time;) the PC at 4.77 MHz would have a cycle time of around
    209 ns, but with the aforementioned 8-bit bus. As a naive
    approximation, that might put them anywhere from comparable to
    around twice the memory bandwidth for the PC...but then the 8088's
    instruction times are kinda abysmal even on top of that. Definitely
    makes one curious...

    The claim has another problem. While an x86 might be considered more
    powerful in some ways, it does not have nearly as capable an MMU as the
    PDP-11, and that really trips the whole thing over when comparing.
    (I'm not sure I would even say the x86 has anything resembling a
    proper MMU... Not before the 80386 anyway, which was not in a PC or
    PC/XT.)

    The 286 had a proper MMU, but not (AFAIUI) a terribly performant one.
    I've never heard of an add-on MMU for the 8086, but I admit I've never looked. In any case, I realize now that my napkin math was way off; the 8086/88 can only perform a memory access every fourth cycle, so the
    PC's approximate "cycle time" by comparison would be more like ~836 ns, barely faster than core even *before* you factor in the 8-bit bottle-
    neck or DRAM refresh. Ye gods, what a *dog.*

    Actually, PC cycle time is similar to 360/30 microcode cycle time.
    Memory bandwidth is similar too. AFAICS for most applications
    code running on the PC would be faster than code running on a 360/30.
    If you think about I/O, "builtin" 360/30 I/O was done by microcode,
    so probably significantly faster than programmed I/O on the PC.
    But the PC had hardware DMA channels and that should be at least
    as fast as the 360/30. Of course, when the PC appeared nobody would
    want to do serious data processing on a 360/30 (the 360/30 had been
    dropped from the IBM offering a few years earlier).
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.linux.misc,alt.folklore.computers on Wed Aug 27 22:58:25 2025
    From Newsgroup: comp.os.linux.misc

    On Wed, 27 Aug 2025 16:40:27 +0200, Alexander Schreiber wrote:

    I haven't tried Unix on 8086 ...

    I briefly used an Altos 586 system at my first employer after leaving Uni.
    I believe that was a Xenix machine running on an 8086 without memory protection.

    I didn't get as far as crashing anything.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.linux.misc,alt.folklore.computers on Wed Aug 27 22:58:58 2025
    From Newsgroup: comp.os.linux.misc

    On Wed, 27 Aug 2025 19:04:51 +0100, Kerr-Mudd, John wrote:

    Unix on an early IBM PC (8086, 10M hard drive) would have been quite a shoehorning job. 'Slow' would probably be a generous word to use.

    Would that have been slower and more memory-constrained than a PDP-11?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.linux.misc,alt.folklore.computers on Wed Aug 27 23:00:40 2025
    From Newsgroup: comp.os.linux.misc

    On Wed, 27 Aug 2025 22:09:07 -0000 (UTC), Waldek Hebisch wrote:

    If you think about I/O "builtin" 360/30 I/O was done by microcode so
    probably significantly faster than programmed I/O on PC.

    Fast I/O throughput was just about the main point of a mainframe computer.

    But PC had hardware DMA channels and that should be at least as fast as 360/30.

    Could MS-DOS (or CP/M) really make use of DMA?? Particularly since it couldn't even do multitasking or interrupt-driven I/O, so the OS driver would just sit there spinning its wheels until the I/O completed anyway.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From c186282@c186282@nnada.net to comp.os.linux.misc,alt.folklore.computers on Thu Aug 28 03:26:27 2025
    From Newsgroup: comp.os.linux.misc

    On 8/27/25 10:40 AM, Alexander Schreiber wrote:
    John Levine <johnl@taugh.com> wrote:
    According to Johnny Billquist <bqt@softjar.se>:
    The claim has another problem. While an x86 might be considered more
    powerful in some ways, it does not have nearly as capable an MMU as the
    PDP-11,

    The 8086 had no MMU at all, but small model code gave you 64K each of
    instructions and data, the same as what the 11's MMU gave you. There was no
    hardware protection so a malicious or badly broken program could crash the
    system but they rarely did.

    I haven't tried Unix on 8086, but DOS on x86 essentially relied on applications
    being reasonably correct and not too buggy. Having the reset button conveniently
    accessible was effectively a requirement for any DOS PC ;-)

    That would require instructions that the C compiler
    didn't generate.

    That claim "would require instructions that the C compiler didn't generate" is just not true. Without memory protection, there are plenty of ways to crash
    the system - e.g. overwriting the operating system code due to a bug in an application.

    Kind regards,
    Alex.



    In any case there WERE "Unix Variants" even for the
    early x86 IBM-PCs. M$ sold Xenix, a bit later there
    was SCO Unix.

    The old 8088 was NOT super good for -IX systems but
    they DID make them (sort of) work. The 386 was much
    better, but that was some years later. I still
    remember the PCs coming with a DOS and CP/M-86
    floppy. Choose.

    My old boss and I debated about dedicating The Company
    to DOS or Unix. For sure Unix was generally "better",
    but alas NOT well suited to all the hardware we had.
    SO, in the end, it was DOS/Win. Many MANY more apps
    for DOS/Win ... so, in retrospect .......

    Later, M$ went Dark Side .....

    DID find an old Xenix on an antique software site.
    It's *41* floppies worth. Kept it, but not sure if
    I'll ever make a VM out of it. Interest/energy
    kinda compete :-)
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From c186282@c186282@nnada.net to comp.os.linux.misc,alt.folklore.computers on Thu Aug 28 04:06:44 2025
    From Newsgroup: comp.os.linux.misc

    On 8/27/25 2:04 PM, Kerr-Mudd, John wrote:
    On Wed, 27 Aug 2025 16:40:27 +0200
    Alexander Schreiber <als@usenet.thangorodrim.de> wrote:

    []

    I haven't tried Unix on 8086, but DOS on x86 essentially relied on applications
    being reasonably correct and not too buggy. Having the reset button conveniently
    accessible was effectively a requirement for any DOS PC ;-)


    Unix on an early IBM PC (8086, 10M hard drive) would have been quite a shoehorning job. 'Slow' would probably be a generous word to use.

    See my recent post.

    "Unix-like" COULD be had even for the 8088 gen
    of PCs.

    But it wasn't very well adapted FOR that.

    386 kind of turned some critical corners.

    Old boss and I debated DOS -vs- Unix as the
    future for the org back in the PC days. Alas,
    full evidence, DOS - eventually Win - won for
    a number of reasons. THAT'S what all the new
    cool software was tuned for.

    Much later Linux arrived. Alas everyone was
    fully adapted/addicted to Win. Only 'sysadmins'
    saw the Linux advantage. So, 99% of the org
    used Win while the background servers and such
    became Linux. Regular users never suspected.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Alexander Schreiber@als@usenet.thangorodrim.de to comp.os.linux.misc,alt.folklore.computers on Thu Aug 28 10:09:49 2025
    From Newsgroup: comp.os.linux.misc

    John Ames <commodorejohn@gmail.com> wrote:
    On Wed, 27 Aug 2025 16:40:27 +0200
    Alexander Schreiber <als@usenet.thangorodrim.de> wrote:

    That would require instructions that the C compiler
    didn't generate.

    That claim "would require instructions that the C compiler didn't
    generate" is just not true. Without memory protection, there are
    plenty of ways to crash the system - e.g. overwriting the operating
    system code due to a bug in an application.

    It's certainly true that there's no *real* protection on the 8086.
    AFAIUI the logic is that, if generated code doesn't touch the segment registers and the OS allocates either 64KB shared or 64KB code + 64KB
    data, a 16-bit address won't ever overstep into the next 64KB of RAM,
    but x86 addressing can have up to three 16-bit components (two index registers plus a fixed offset,) so it's entirely possible for basic addressing operations to overstep that boundary, unless the compiler
    just forgoes complex addressing entirely.

    Which brings us back to: if the applications are reasonably correct
    and well-behaved, things should be fine - and I suspect in reality they
    mostly were, until one hit a suitably nasty bug.

    My first contact with Unix was Solaris on Sun ELCs. I played with a bunch
    of others over the years, including some of the IMHO more weird ones
    (e.g. Coherent: What do you mean, TCP/IP is a paid add-on???),
    but they all used hardware memory protection (e.g. x86 on 386+ machines, UltraSPARC, HPPA, Itanics, Alphas, ...).

    The only "what is this memory protecting thing?" platform I used a lot
    back in the days where the various versions of DOS on x86 (MS-DOS, DR-DOS, Novell DOS) which had a tendency to drive home that memory protection
    sure would be nice with the occasional crash that froze/rebooted the
    machine, especially if one was programming and not sufficiently careful.

    Kind regards,
    Alex.
    --
    "Opportunity is missed by most people because it is dressed in overalls and
    looks like work." -- Thomas A. Edison
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.os.linux.misc,alt.folklore.computers on Thu Aug 28 09:10:29 2025
    From Newsgroup: comp.os.linux.misc

    In alt.folklore.computers Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Wed, 27 Aug 2025 22:09:07 -0000 (UTC), Waldek Hebisch wrote:

    If you think about I/O "builtin" 360/30 I/O was done by microcode so
    probably significantly faster than programmed I/O on PC.

    Fast I/O throughput was just about the main point of a mainframe computer.

    But PC had hardware DMA channels and that should be at least as fast as
    360/30.

    Could MS-DOS (or CP/M) really make use of DMA?? Particularly since it couldn't even do multitasking or interrupt-driven I/O, so the OS driver would just sit there spinning its wheels until the I/O completed anyway.

    AFAIK the IBM PC BIOS used DMA for hard drives and floppies. Later
    the AT BIOS switched to PIO because on the AT it gave higher throughput.
    Yes, DOS would wait for the driver to finish its work, so the only gain
    from DMA was higher I/O bandwidth.

    However, this was in the context of porting Unix to the PC: IIUC such ports
    were independent of DOS and used the BIOS mostly for booting. In
    particular, they used their own drivers for floppies and hard discs.

    If you wanted to do mainframe work on PC (which probably IBM did
    not want you to do) you would not use MS-DOS. Rather, you would
    port IBM DOS from 360 to PC hardware. Or write your own OS.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Carlos E.R.@robin_listas@es.invalid to comp.os.linux.misc,alt.folklore.computers on Thu Aug 28 13:02:59 2025
    From Newsgroup: comp.os.linux.misc

    On 2025-08-28 01:00, Lawrence D'Oliveiro wrote:
    On Wed, 27 Aug 2025 22:09:07 -0000 (UTC), Waldek Hebisch wrote:

    If you think about I/O "builtin" 360/30 I/O was done by microcode so
    probably significantly faster than programmed I/O on PC.

    Fast I/O throughput was just about the main point of a mainframe computer.

    But PC had hardware DMA channels and that should be at least as fast as
    360/30.

    Could MS-DOS (or CP/M) really make use of DMA?? Particularly since it couldn't even do multitasking or interrupt-driven I/O, so the OS driver would just sit there spinning its wheels until the I/O completed anyway.

    Yes, MS-DOS could.

    I know for certain because I used (in the 90's) an analog data
    acquisition card which came with routines for direct poll, interrupt
    driven, or dma driven. I still have the documentation.

    However, it worked, IIRC, at the same frequency as the original IBM
    PC. I have forgotten the exact explanation, but perhaps I have it
    written somewhere. Probably related to the clock frequency of the bus on
    the ISA cards.
    --
    Cheers, Carlos.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From The Natural Philosopher@tnp@invalid.invalid to comp.os.linux.misc,alt.folklore.computers on Thu Aug 28 12:17:59 2025
    From Newsgroup: comp.os.linux.misc

    On 28/08/2025 12:02, Carlos E.R. wrote:

    Could MS-DOS (or CP/M) really make use of DMA?? Particularly since it
    couldn't even do multitasking or interrupt-driven I/O, so the OS driver
    would just sit there spinning its wheels until the I/O completed anyway.

    Yes, MsDOS could.

    I know for certain because I used (in the 90's) an analog data
    acquisition card which came with routines for direct poll, interrupt
    driven, or dma driven. I still have the documentation.

    However, it worked, IIRC, at the same frequency as the original IBM
    PC. I have forgotten the exact explanation, but perhaps I have it
    written somewhere. Probably related to the clock frequency of the bus on
    the ISA cards.

    Floppy disk drive used DMA
    --
    Any fool can believe in principles - and most of them do!



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Peter Flass@Peter@Iron-Spring.com to comp.os.linux.misc,alt.folklore.computers on Thu Aug 28 15:56:30 2025
    From Newsgroup: comp.os.linux.misc

    On 8/28/25 00:26, c186282 wrote:
    On 8/27/25 10:40 AM, Alexander Schreiber wrote:
    John Levine <johnl@taugh.com> wrote:
    According to Johnny Billquist <bqt@softjar.se>:
    The claim has another problem. While an x86 might be considered more
    powerful in some ways, it does not have nearly as capable an MMU as the
    PDP-11,

    The 8086 had no MMU at all, but small model code gave you 64K each of
    instructions and data, the same as what the 11's MMU gave you. There
    was no
    hardware protection so a malicious or badly broken program could
    crash the
    system but they rarely did.

    I haven't tried Unix on 8086, but DOS on x86 essentially relied on
    applications
    being reasonably correct and not too buggy. Having the reset button
    conveniently
    accessible was effectively a requirement for any DOS PC ;-)

    That would require instructions that the C compiler
    didn't generate.

    That claim "would require instructions that the C compiler didn't
    generate"
    is just not true. Without memory protection, there are plenty of ways
    to crash
    the system - e.g. overwriting the operating system code due to a bug
    in an
    application.

    Kind regards,
    Alex.



    In any case there WERE "Unix Variants" even for the
    early x86 IBM-PCs. M$ sold Xenix, a bit later there
    was SCO Unix.

    Minix


    The old 8088 was NOT super good for -IX systems but
    they DID make them (sort of) work. The 386 was much
    better, but that was some years later. I still
    remember the PCs coming with a DOS and CP/M-86
    floppy. Choose.

    My old boss and I debated about dedicating The Company
    to DOS or Unix. For sure Unix was generally "better",
    but alas NOT well suited to all the hardware we had.
    SO, in the end, it was DOS/Win. Many MANY more apps
    for DOS/Win ... so, in retrospect .......

    Later, M$ went Dark Side .....

    DID find an old Xenix on an antique software site.
    It's *41* floppies worth. Kept it, but not sure if
    I'll ever make a VM out of it. Interest/energy
    kinda compete :-)

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.linux.misc,alt.folklore.computers on Fri Aug 29 00:58:15 2025
    From Newsgroup: comp.os.linux.misc

    On Thu, 28 Aug 2025 15:56:30 -0700, Peter Flass wrote:

    On 8/28/25 00:26, c186282 wrote:

    In any case there WERE "Unix Variants" even for the early x86
    IBM-PCs.

    Minix

    Minix never licensed the "Unix" trademark. That's why it's best referred
    to as a "*nix" OS, like Linux and the BSDs.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Charlie Gibbs@cgibbs@kltpzyxm.invalid to comp.os.linux.misc,alt.folklore.computers on Fri Aug 29 06:50:56 2025
    From Newsgroup: comp.os.linux.misc

    On 2025-08-27, Alexander Schreiber <als@usenet.thangorodrim.de> wrote:

    I haven't tried Unix on 8086, but DOS on x86 essentially relied on applications
    being reasonably correct and not too buggy. Having the reset button conveniently
    accessible was effectively a requirement for any DOS PC ;-)

    Unfortunately, at about that time the reset button vanished (probably due
    to the DMCA or whatever preceded it).

    That would require instructions that the C compiler
    didn't generate.

    That claim "would require instructions that the C compiler didn't generate" is just not true. Without memory protection, there are plenty of ways to crash
    the system - e.g. overwriting the operating system code due to a bug in an application.

    If you didn't want to live entirely in a 64K segment, though, you probably
    told your C compiler to generate code for the various larger memory models, which gave you the ability to scribble over the entire 640K (plus system storage).
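
    For anyone who never used the 16-bit compilers: a minimal sketch of
    the difference, using the near/far extensions the DOS-era compilers
    (Turbo C, Microsoft C, Watcom) provided - the address poked below is
    just the classic BIOS keyboard-flag byte, picked as an example:

      /* 16-bit DOS C, compiler-specific near/far extensions. A near
         pointer is a 16-bit offset inside the program's own 64 KB data
         segment; a far pointer carries its own segment value and can
         reach (and scribble on) anything in the first megabyte. Sketch
         only, for an old small/large-model compiler. */
      #include <dos.h>                 /* MK_FP() in Turbo C / Watcom */

      void scribble(void)
      {
          char near *inside  = (char near *)0x0100;          /* in DS */
          char far  *outside = (char far  *)MK_FP(0x0040, 0x0017);
                                    /* BIOS keyboard flags, 0040:0017 */

          *inside = 0;          /* stays within this program's segment */
          *outside |= 0x40;     /* flips the CapsLock state system-wide */
      }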
    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From The Natural Philosopher@tnp@invalid.invalid to comp.os.linux.misc,alt.folklore.computers on Fri Aug 29 10:50:46 2025
    From Newsgroup: comp.os.linux.misc

    On 29/08/2025 07:50, Charlie Gibbs wrote:
    On 2025-08-27, Alexander Schreiber <als@usenet.thangorodrim.de> wrote:

    I haven't tried Unix on 8086, but DOS on x86 essentially relied on applications
    being reasonably correct and not too buggy. Having the reset button conveniently
    accessible was effectively a requirement for any DOS PC ;-)

    Unfortunately, at about that time the reset button vanished (probably due
    to the DMCA or whatever preceded it).

    That would require instructions that the C compiler
    didn't generate.

    That claim "would require instructions that the C compiler didn't generate" >> is just not true. Without memory protection, there are plenty of ways to crash
    the system - e.g. overwriting the operating system code due to a bug in an >> application.

    If you didn't want to live entirely in a 64K segment, though, you probably told your C compiler to generate code for the various larger memory models, which gave you the ability to scribble over the entire 640K (plus system storage).

    Wasn't there a 64k data and 64k code model as well? And possibly a 64K
    stack as well, though that was a pain with C.
    --
    WOKE is an acronym... Without Originality, Knowledge or Education.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Alexander Schreiber@als@usenet.thangorodrim.de to comp.os.linux.misc,alt.folklore.computers on Fri Aug 29 12:46:49 2025
    From Newsgroup: comp.os.linux.misc

    Charlie Gibbs <cgibbs@kltpzyxm.invalid> wrote:
    On 2025-08-27, Alexander Schreiber <als@usenet.thangorodrim.de> wrote:

    I haven't tried Unix on 8086, but DOS on x86 essentially relied on applications
    being reasonably correct and not too buggy. Having the reset button conveniently
    accessible was effectively a requirement for any DOS PC ;-)

    Unfortunately, at about that time the reset button vanished (probably due
    to the DMCA or whatever preceded it).

    Really? System boards bought this year still have a reset line and my workstation tower case (about 10-15y old now) still has a reset button.

    PCs bought in the 1990s and 2000s still tended to have nicely accessible
    reset buttons. Not that hiding the reset button would help, when one can
    just flip the power.

    That would require instructions that the C compiler
    didn't generate.

    That claim "would require instructions that the C compiler didn't generate" >> is just not true. Without memory protection, there are plenty of ways to crash
    the system - e.g. overwriting the operating system code due to a bug in an >> application.

    If you didn't want to live entirely in a 64K segment, though, you probably told your C compiler to generate code for the various larger memory models, which gave you the ability to scribble over the entire 640K (plus system storage).

    And even so, the code could just load one of the segment registers (e.g. ES) with different values (and that is very hard to inhibit on the compiler side unless one wants to play "highly restricted source language" games) and then just scribble away ...

    Kind regards,
    Alex.
    --
    "Opportunity is missed by most people because it is dressed in overalls and
    looks like work." -- Thomas A. Edison
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Carlos E.R.@robin_listas@es.invalid to comp.os.linux.misc,alt.folklore.computers on Fri Aug 29 14:53:00 2025
    From Newsgroup: comp.os.linux.misc

    On 2025-08-29 08:50, Charlie Gibbs wrote:
    On 2025-08-27, Alexander Schreiber <als@usenet.thangorodrim.de> wrote:

    I haven't tried Unix on 8086, but DOS on x86 essentially relied on applications
    being reasonably correct and not too buggy. Having the reset button conveniently
    accessible was effectively a requirement for any DOS PC ;-)

    Unfortunately, at about that time the reset button vanished (probably due
    to the DMCA or whatever preceded it).

    Vanished? All my PCs have it.

    ...
    --
    Cheers, Carlos.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Kettlewell@invalid@invalid.invalid to comp.os.linux.misc,alt.folklore.computers on Fri Aug 29 14:28:51 2025
    From Newsgroup: comp.os.linux.misc

    Alexander Schreiber <als@usenet.thangorodrim.de> writes:
    Charlie Gibbs <cgibbs@kltpzyxm.invalid> wrote:
    Unfortunately, at about that time the reset button vanished (probably
    due to the DMCA or whatever preceded it).

    Really? System boards bought this year still have a reset line and my workstation tower case (about 10-15y old now) still has a reset
    button.

    I bought a PC with a reset button last year. They've not vanished at all.

    PCs bought in the 1990s and 2000s still tended to have nicely accessible reset buttons. Not that hiding the reset button would help, when one can
    just flip the power.

    On mine it's on top, next to the power button and some USB ports.
    --
    https://www.greenend.org.uk/rjk/
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Bob Eager@news0009@eager.cx to comp.os.linux.misc,alt.folklore.computers on Fri Aug 29 14:40:42 2025
    From Newsgroup: comp.os.linux.misc

    On Fri, 29 Aug 2025 12:46:49 +0200, Alexander Schreiber wrote:

    Really? System boards bought this year still have a reset line and my workstation tower case (about 10-15y old now) still has a reset button.

    PCs bought in the 1990s and 2000s still tended to have nicely accessible reset buttons. Not that hiding the reset button would help, when one can
    just flip the power.

    The original PC didn't have one. I remember fitting one to mine!
    --
    Using UNIX since v6 (1975)...

    Use the BIG mirror service in the UK:
    http://www.mirrorservice.org
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From John Levine@johnl@taugh.com to comp.os.linux.misc,alt.folklore.computers on Fri Aug 29 17:09:56 2025
    From Newsgroup: comp.os.linux.misc

    According to Charlie Gibbs <cgibbs@kltpzyxm.invalid>:
    That would require instructions that the C compiler
    didn't generate.

    That claim "would require instructions that the C compiler didn't generate" >> is just not true. Without memory protection, there are plenty of ways to crash
    the system - e.g. overwriting the operating system code due to a bug in an >> application.

    If you didn't want to live entirely in a 64K segment, though, you probably
    told your C compiler to generate code for the various larger memory models,

    Not the PC/IX compiler. It was small model only, which was plenty to compile all of the PDP-11 source code. x86 object code was a little smaller than PDP-11 code, the data would have been the same size.

    We got complaints from people who wanted to be able to run larger programs. Sorry, doesn't do that, you can use several processes talking through pipes.
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From The Natural Philosopher@tnp@invalid.invalid to comp.os.linux.misc,alt.folklore.computers on Fri Aug 29 19:18:18 2025
    From Newsgroup: comp.os.linux.misc

    On 29/08/2025 18:09, John Levine wrote:
    According to Charlie Gibbs <cgibbs@kltpzyxm.invalid>:
    That would require instructions that the C compiler
    didn't generate.

    That claim "would require instructions that the C compiler didn't generate" >>> is just not true. Without memory protection, there are plenty of ways to crash
    the system - e.g. overwriting the operating system code due to a bug in an >>> application.

    If you didn't want to live entirely in a 64K segment, though, you probably
    told your C compiler to generate code for the various larger memory models,

    Not the PC/IX compiler. It was small model only, which was plenty to compile all of the PDP-11 source code. x86 object code was a little smaller than PDP-11 code, the data would have been the same size.

    We got complaints from people who wanted to be able to run larger programs. Sorry, doesn't do that, you can use several processes talking through pipes.

    The PDP-11 I worked on was 64k code, 64k data/stack.

    C was designed for that.
    My PC compilers could do large model. But it wasn't really worth it
    until the 386 came along.
    --
    Truth welcomes investigation because truth knows investigation will lead
    to converts. It is deception that uses all the other techniques.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Johnny Billquist@bqt@softjar.se to comp.os.linux.misc,alt.folklore.computers on Fri Aug 29 21:12:34 2025
    From Newsgroup: comp.os.linux.misc

    On 2025-08-26 22:21, John Levine wrote:
    According to Johnny Billquist <bqt@softjar.se>:
    The claim has another problem. While an x86 might be considered more
    powerful in some ways, it does not have nearly as capable an MMU as the
    PDP-11,

    The 8086 had no MMU at all, but small model code gave you 64K each of instructions and data, the same as what the 11's MMU gave you. There was no hardware protection so a malicious or badly broken program could crash the system but they rarely did. That would require instructions that the C compiler
    didn't generate.

    If we were to compare the memory layout/concepts of the PDP-11 and x86,
    with an eye to powerful and capable, then the PDP-11, which has an MMU,
    doesn't need to allocate 64K of memory for each process. In fact, it only
    needs to allocate as much memory as the process actually requires, and any addressing outside of that would trap and you'd get a signal in your
    process. So you can easily squish in many more processes in the same
    amount of memory.
    I don't know exactly when the next couple of points came about for
    the PDP-11, so they might have been a bit later, but I think they're still
    valid as a comparison against the x86 here.
    The stack on the PDP-11 is dynamically grown and allocated while the
    program is running, so you don't have to pre-allocate all that memory
    either, even though it can grow up to close to 64K.

    But even more important, on the PDP-11, there is support for overlaid programs, which makes heavy use of the MMU. Basically, programs can be
    way larger than 64K code. You can place functions in different overlays,
    and call between them, and you can run up to many hundreds of K of code
    very easily and straightforwardly on the PDP-11, and it's all because the
    MMU helps you out with it, moving the page mappings around as needed.

    I worked on PC/IX which was a straightforward port of PDP-11 System III Unix to
    the PC. It wasn't particularly fast, but all the C programs that ran on the 11
    also ran on PC/IX. It was quite reliable. I recall that we got a bug report about something that only broke if the system had been up continuously for a year.

    And the PDP-11 grew more capable in ways PC/IX would not be able to follow, gaining programs you would not have been able to run on that PC.

    Johnny

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From ted@loft.tnolan.com (Ted Nolan@tednolan to comp.os.linux.misc,alt.folklore.computers on Fri Aug 29 20:52:10 2025
    From Newsgroup: comp.os.linux.misc

    In article <108su32$3e8$1@news.misty.com>,
    Johnny Billquist <bqt@softjar.se> wrote:
    On 2025-08-26 22:21, John Levine wrote:
    According to Johnny Billquist <bqt@softjar.se>:
    The claim has another problem. While an x86 might be considered more
    powerful in some ways, it does not have nearly as capable an MMU as the
    PDP-11,

    The 8086 had no MMU at all, but small model code gave you 64K each of
    instructions and data, the same as what the 11's MMU gave you. There was no
    hardware protection so a malicious or badly broken program could crash the
    system but they rarely did. That would require instructions that the C compiler
    didn't generate.

    If we were to compare the memory layout/concepts of the PDP-11 and x86,
    with an eye to powerful and capable, then the PDP-11, which has an MMU,
    doesn't need to allocate 64K of memory for each process. In fact, it only
    needs to allocate as much memory as the process actually requires, and any
    addressing outside of that would trap and you'd get a signal in your
    process. So you can easily squish in many more processes in the same
    amount of memory.
    The next couple of points I don't know exactly when they came about for
    the PDP-11, so it might have been a bit later, but I think it's still
    valid as a comparison against the x86 here.
    Stack, on the PDP-11 is dynamically grown and allocated while the
    program is running, so you don't have to pre-allocate all that memory
    either, even though it can grow up to close to 64K.

    But even more important, on the PDP-11, there is support for overlaid
    programs, which makes heavy use of the MMU. Basically, programs can be
    way larger than 64K code. You can place functions in different overlays,
    and call between them, and you can run up to many hundred of K of code
    very easy and straight forward on the PDP-11, and it's all because the
    MMU helps you out with it, moving the pages mapping around as needed.


    My memory is that at least for BSD Unix, overlays were not supported until
    um, 2.9BSD I think, and that using them was not at all straight-forward.
    It may have been easier for official DEC OSes...
    --
    columbiaclosings.com
    What's not in Columbia anymore..
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Alexander Schreiber@als@usenet.thangorodrim.de to comp.os.linux.misc,alt.folklore.computers on Fri Aug 29 23:13:06 2025
    From Newsgroup: comp.os.linux.misc

    Bob Eager <news0009@eager.cx> wrote:
    On Fri, 29 Aug 2025 12:46:49 +0200, Alexander Schreiber wrote:

    Really? System boards bought this year still have a reset line and my
    workstation tower case (about 10-15y old now) still has a reset button.

    PCs bought in the 1990s and 2000s still tended to have nicely accessible
    reset buttons. Not that hiding the reset button would help, when one can
    just flip the power.

    The original PC didn't have one. I remember fitting one to mine!

    Given the rock-solid stability and reliability of the MS-DOS environment
    and its applications *cough* *cough* that sounds like an interesting
    design oversight.

    SCNR,
    Alex.
    --
    "Opportunity is missed by most people because it is dressed in overalls and
    looks like work." -- Thomas A. Edison
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.os.linux.misc,alt.folklore.computers on Fri Aug 29 22:27:21 2025
    From Newsgroup: comp.os.linux.misc

    Charlie Gibbs <cgibbs@kltpzyxm.invalid> writes:
    On 2025-08-27, Alexander Schreiber <als@usenet.thangorodrim.de> wrote:

    I haven't tried Unix on 8086, but DOS on x86 essentially relied on applications
    being reasonably correct and not too buggy. Having the reset button conveniently
    accessible was effectively a requirement for any DOS PC ;-)

    Unfortunately, at about that time the reset button vanished (probably due
    to the DMCA or whatever preceded it).

    We had an ISA/EISA card with a single pushbutton on it that would
    assert the NMI signal. The PCI version asserted SERR#.

    Very useful for OS debugging.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.linux.misc,alt.folklore.computers on Fri Aug 29 23:34:57 2025
    From Newsgroup: comp.os.linux.misc

    On 29 Aug 2025 20:52:10 GMT, Ted Nolan <tednolan> wrote:

    In article <108su32$3e8$1@news.misty.com>,
    Johnny Billquist <bqt@softjar.se> wrote:

    But even more important, on the PDP-11, there is support for overlaid
    programs, which makes heavy use of the MMU.

    No, it didn't make use of the MMU at all. It was a purely software thing, involving replacing in-memory parts of the program with other parts loaded from the executable file.

    My memory is that, at least for BSD Unix, overlays were not supported
    until um, 2.9BSD I think, and that using them was not at all straightforward. It may have been easier for official DEC OSes...

    Using overlays was never straightforward, on any OS.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Charlie Gibbs@cgibbs@kltpzyxm.invalid to comp.os.linux.misc,alt.folklore.computers on Fri Aug 29 23:51:18 2025
    From Newsgroup: comp.os.linux.misc

    On 2025-08-29, Alexander Schreiber <als@usenet.thangorodrim.de> wrote:

    Charlie Gibbs <cgibbs@kltpzyxm.invalid> wrote:

    On 2025-08-27, Alexander Schreiber <als@usenet.thangorodrim.de> wrote:

    I haven't tried Unix on 8086, but DOS on x86 essentially relied on
    applications being reasonably correct and not too buggy. Having the
    reset button conveniently accessible was effectively a requirement
    for any DOS PC ;-)

    Unfortunately, at about that time the reset button vanished (probably
    due to the DMCA or whatever preceded it).

    Really? System boards bought this year still have a reset line and my workstation tower case (about 10-15y old now) still has a reset button.

    Oops, I forgot about that. They did make a comeback, didn't they?
    But there definitely was a period before that where the button vanished (although there would have been motherboard pins if you wanted to dig
    into it).

    PCs bought in the 1990s and 2000s still tended to have nicely accessible reset buttons. Not that hiding the reset button would help, when one can
    just flip the power.

    Some people had debuggers that could scan memory after a crash.
    Power cycling would wipe that - which is what the copy protection
    enthusiasts were after.
    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Charlie Gibbs@cgibbs@kltpzyxm.invalid to comp.os.linux.misc,alt.folklore.computers on Fri Aug 29 23:51:17 2025
    From Newsgroup: comp.os.linux.misc

    On 2025-08-29, The Natural Philosopher <tnp@invalid.invalid> wrote:

    On 29/08/2025 18:09, John Levine wrote:

    According to Charlie Gibbs <cgibbs@kltpzyxm.invalid>:

    That would require instructions that the C compiler
    didn't generate.

    That claim "would require instructions that the C compiler didn't generate"
    is just not true. Without memory protection, there are plenty of ways to
    crash the system - e.g. overwriting the operating system code due to a bug
    in an application.

    If you didn't want to live entirely in a 64K segment, though, you probably
    told your C compiler to generate code for the various larger memory models,

    Not the PC/IX compiler. It was small model only, which was plenty to compile
    all of the PDP-11 source code. x86 object code was a little smaller than
    PDP-11 code, the data would have been the same size.

    We got complaints from people who wanted to be able to run larger programs.
    Sorry, doesn't do that, you can use several processes talking through pipes.

    The PDP I worked on was 64k code, 64k data/stack

    C was designed for that
    My PC compilers could do large model. But it wasn't really worth it
    until the 386 came along

    It was for us. We needed all that memory. It was only a few years ago
    that I finally got rid of all the hacks I wrote in to normalize pointers
    and deal with segment wrap-arounds. It was horrible. Forget the 640K
    barrier - the 64K barrier was alive and well on the 8086/8088/80286.
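
    A minimal sketch of the kind of pointer normalization being described,
    written in portable ISO C with hypothetical names rather than a 16-bit
    compiler's far/huge pointers: fold a segment:offset pair so the offset
    stays below 16, which keeps arithmetic from wrapping at the 64K mark.

        #include <stdio.h>

        struct segoff { unsigned seg, off; };      /* segment and offset (16-bit values on the 8086) */

        /* Fold as much of the address as possible into the segment part. */
        static struct segoff normalize(struct segoff p)
        {
            unsigned long phys = ((unsigned long)p.seg << 4) + p.off;
            struct segoff n;
            n.seg = (unsigned)(phys >> 4) & 0xFFFFu;
            n.off = (unsigned)(phys & 0xFu);       /* leftover 0..15 */
            return n;
        }

        int main(void)
        {
            struct segoff p = { 0x1000, 0xFFF8 };  /* offset about to wrap past 64K */
            struct segoff n = normalize(p);
            printf("%04X:%04X -> %04X:%04X\n", p.seg, p.off, n.seg, n.off);
            return 0;
        }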
    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.linux.misc,alt.folklore.computers on Sat Aug 30 03:27:29 2025
    From Newsgroup: comp.os.linux.misc

    On Fri, 29 Aug 2025 23:51:18 GMT, Charlie Gibbs wrote:

    But there definitely was a period before that where the button vanished (although there would have been motherboard pins if you wanted to dig
    into it).

    Apple included a little springy clip thing (the "Programmer's Switch") in
    the box with each of those original classic-form-factor Macintoshes. When installed, pressing one side triggered NMI (used for invoking the resident debugger), while the other side triggered the RESET line (hard reboot).

    I still have the muscle memory: seated in front of the machine, reach
    around with right hand, far side was NMI, near side was RESET.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.linux.misc,alt.folklore.computers on Sat Aug 30 03:28:43 2025
    From Newsgroup: comp.os.linux.misc

    On Fri, 29 Aug 2025 23:13:06 +0200, Alexander Schreiber wrote:

    Given the rock-solid stability and reliability of the MS-DOS environment
    and its applications *cough* *cough* that sounds like an interesting
    design oversight.

    Don't know why IBM didn't feel the need to have an easily-accessible button hard-wired to the RESET signal. Instead you had to hit the infamous three-key sequence and hope it would be recognized by a still-functioning
    BIOS ...
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From c186282@c186282@nnada.net to comp.os.linux.misc,alt.folklore.computers on Sat Aug 30 00:25:23 2025
    From Newsgroup: comp.os.linux.misc

    On 8/29/25 11:27 PM, Lawrence D'Oliveiro wrote:
    On Fri, 29 Aug 2025 23:51:18 GMT, Charlie Gibbs wrote:

    But there definitely was a period before that where the button vanished
    (although there would have been motherboard pins if you wanted to dig
    into it).

    Apple included a little springy clip thing (the "Programmer's Switch") in
    the box with each of those original classic-form-factor Macintoshes. When installed, pressing one side triggered NMI (used for invoking the resident debugger), while the other side triggered the RESET line (hard reboot).

    I still have the muscle memory: seated in front of the machine, reach
    around with right hand, far side was NMI, near side was RESET.

    Hmmm ... how did they implement that ? How did it
    differ from just using the power switch ???

    In THEORY that kind of 'reset' SHOULD include at
    least ATTEMPTS to shut down a few important daemons.
    MOST important, the HDD cache ... DO try yer best
    to write-out the cache before going off.

    "Reset" buttons are mostly good, but on the company
    servers I always disconnected those, so no dink could
    just accidentally bump into the switch while looking
    for something else. REAL power switch, like a 3-sec
    delay before anything happens.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From rbowman@bowman@montana.com to comp.os.linux.misc,alt.folklore.computers on Sat Aug 30 05:54:34 2025
    From Newsgroup: comp.os.linux.misc

    On Fri, 29 Aug 2025 23:51:17 GMT, Charlie Gibbs wrote:

    It was for us. We needed all that memory. It was only a few years ago
    that I finally got rid of all the hacks I wrote in to normalize pointers
    and deal with segment wrap-arounds. It was horrible. Forget the 640K barrier - the 64K barrier was alive and well on the 8086/8088/80286.

    Tiny, Small, Large, Bigger & Humongous. I have the names wrong but I'm
    pretty sure there were 5 sets of libraries that you had to choose from to do a build. Then there was what was referred to as the 'thunk' in DJGPP circles when you needed to get real.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From c186282@c186282@nnada.net to comp.os.linux.misc,alt.folklore.computers on Sat Aug 30 03:06:11 2025
    From Newsgroup: comp.os.linux.misc

    On 8/30/25 1:54 AM, rbowman wrote:
    On Fri, 29 Aug 2025 23:51:17 GMT, Charlie Gibbs wrote:

    It was for us. We needed all that memory. It was only a few years ago
    that I finally got rid of all the hacks I wrote in to normalize pointers
    and deal with segment wrap-arounds. It was horrible. Forget the 640K
    barrier - the 64K barrier was alive and well on the 8086/8088/80286.

    Tiny, Small, Large, Bigger & Humongous. I have the names wrong but I'm
    pretty sure there were 5 sets of libraries that you had to choose from to do a build. Then there was what was referred to as the 'thunk' in DJGPP circles when you needed to get real.

    The original 8088 had all the needed registers.
    Could minimum deliver at LEAST an easy 64k code
    space and at LEAST another 64k data area. A few
    tricks and .......

    So YEA - you COULD run some kind of -IX on
    the original PCs. Not super fast/efficient
    but it COULD work. Remember early versions
    of SCO.

    286/386 ... MUCH better - but it came LATER.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From The Natural Philosopher@tnp@invalid.invalid to comp.os.linux.misc,alt.folklore.computers on Sat Aug 30 08:13:28 2025
    From Newsgroup: comp.os.linux.misc

    On 30/08/2025 00:51, Charlie Gibbs wrote:
    On 2025-08-29, The Natural Philosopher <tnp@invalid.invalid> wrote:

    On 29/08/2025 18:09, John Levine wrote:

    According to Charlie Gibbs <cgibbs@kltpzyxm.invalid>:

    That would require instructions that the C compiler
    didn't generate.

    That claim "would require instructions that the C compiler didn't generate"
    is just not true. Without memory protection, there are plenty of ways to
    crash the system - e.g. overwriting the operating system code due to a bug
    in an application.

    If you didn't want to live entirely in a 64K segment, though, you probably
    told your C compiler to generate code for the various larger memory models,

    Not the PC/IX compiler. It was small model only, which was plenty to compile
    all of the PDP-11 source code. x86 object code was a little smaller than
    PDP-11 code, the data would have been the same size.

    We got complaints from people who wanted to be able to run larger programs.
    Sorry, doesn't do that, you can use several processes talking through pipes.

    The PDP I worked on was 64k code, 64k data/stack

    C was designed for that
    My PC compilers could do large model. But it wasn't really worth it
    until the 386 came along

    It was for us. We needed all that memory. It was only a few years ago
    that I finally got rid of all the hacks I wrote in to normalize pointers
    and deal with segment wrap-arounds. It was horrible. Forget the 640K barrier - the 64K barrier was alive and well on the 8086/8088/80286.

    Yes. Well, my programs were all relatively small, mostly BIOS-level ROMs
    and device drivers.

    Can't use large model in an 8k PROM...
    --
    Canada is all right really, though not for the whole weekend.

    "Saki"

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From The Natural Philosopher@tnp@invalid.invalid to comp.os.linux.misc,alt.folklore.computers on Sat Aug 30 08:28:34 2025
    From Newsgroup: comp.os.linux.misc

    On 30/08/2025 08:06, c186282 wrote:
    On 8/30/25 1:54 AM, rbowman wrote:
    On Fri, 29 Aug 2025 23:51:17 GMT, Charlie Gibbs wrote:

    It was for us. We needed all that memory. It was only a few years ago
    that I finally got rid of all the hacks I wrote in to normalize pointers
    and deal with segment wrap-arounds. It was horrible. Forget the 640K
    barrier - the 64K barrier was alive and well on the 8086/8088/80286.

    Tiny, Small, Large, Bigger & Humongous. I have the names wrong but I'm
    pretty sure there were 5 sets of libraries that you had to choose from to do a
    build. Then there was what was referred to as the 'thunk' in DJGPP circles
    when you needed to get real.

    The original 8088 had all the needed registers.
    Could minimum deliver at LEAST an easy 64k code
    space and at LEAST another 64k data area. A few
    tricks and .......

    So YEA - you COULD run some kind of -IX on
    the original PCs. Not super fast/efficient
    but it COULD work. Remember early versions
    of SCO.

    286/386 ... MUCH better - but it came LATER.

    The big trouble was that Unix was expanding faster than the PC
    architecture could handle until the 386 made it all easy.

    Then SCO Unix made it all not just possible, but extremely handy.

    But we never got a serious graphical user interface on Unix. By the time
    X windows had stabilised and decent window managers evolved, Linux had arrived.
    --
    "In our post-modern world, climate science is not powerful because it is
    true: it is true because it is powerful."

    Lucas Bergkamp

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From c186282@c186282@nnada.net to comp.os.linux.misc,alt.folklore.computers,alt.security on Sat Aug 30 05:30:34 2025
    From Newsgroup: comp.os.linux.misc

    On 8/30/25 3:28 AM, The Natural Philosopher wrote:
    On 30/08/2025 08:06, c186282 wrote:
    On 8/30/25 1:54 AM, rbowman wrote:
    On Fri, 29 Aug 2025 23:51:17 GMT, Charlie Gibbs wrote:

    It was for us. We needed all that memory. It was only a few years ago
    that I finally got rid of all the hacks I wrote in to normalize pointers
    and deal with segment wrap-arounds. It was horrible. Forget the 640K
    barrier - the 64K barrier was alive and well on the 8086/8088/80286.

    Tiny, Small, Large, Bigger & Humongous. I have the names wrong but I'm
    pretty sure there were 5 sets of libraries that you had to choose from to do a
    build. Then there was what was referred to as the 'thunk' in DJGPP circles
    when you needed to get real.

    The original 8088 had all the needed registers.
    Could minimum deliver at LEAST an easy 64k code
    space and at LEAST another 64k data area. A few
    tricks and .......

    So YEA - you COULD run some kind of -IX on
    the original PCs. Not super fast/efficient
    but it COULD work. Remember early versions
    of SCO.

    286/386 ... MUCH better - but it came LATER.

    The big trouble was that Unix was expanding faster than the PC
    architecture could handle until the 386 made it all easy.

    Agreed.

    Then SCO Unix made it all not just possible, but extremely handy.

    Well ... not "handy" enough.

    My boss at the time - a very smart nerd - and I did
    discuss DOS -vs- Unix for The Outfit.

    We eventually decided on DOS, and ultimately Win.

    MORE stuff. MORE support.

    But we never got a serious graphical user interface on Unix. By the time
    X windows had stabilised and decent window managers evolved, Linux had
    arrived.

    Hey, DEALT with 'X' and WMs on the first versions
    of Linux you could get. NOT super-easy all the time.
    Spent like 48 hours getting it to rec my damned mouse
    with RH.

    NOT encouraging.

    However Linux GOT BETTER. Soon I had all the needed
    office servers on Linux - Just Because.

    But, alas, the Staff - Winders Forever And Always.

    Typical "split environment".

    Old DOS/Win ... you were limited to TEN clients for
    shared disks and net connections. Solution ... make
    Linux box ONE of those - and then share a LOT more
    using its address :-) That was my first real use
    for Linux. Grew thereafter.

    No, the budget did NOT include switching to the
    early WinServer and per-user licenses. Only did
    that MUCH MUCH later - but there were only 5 users
    for that particular need.

    After retirement, not sure what the New Guy has
    done ... but it looked to be all M$ Cloud. He
    couldn't even write a three line Python script
    alas - but the bosses kinda LIKED that. Insanity.

    Vlad's boyz WILL destroy The Cloud REAL SOON.
    Have already proved they can.

    Then what ?

    Heh, heh ... shopping for net services ... a LOT
    of US corps now demand bank acct ROUTING NUMBERS.
    Note that what can be routed IN can be routed OUT.
    With CCards you have legal protections - NOT with
    routing numbers.

    Still get my pension/SS checks on PAPER. Do
    you wonder WHY ???

    Vlad's boyz will have ALL the routing numbers
    by now. No - corp or govt - has been immune
    to Vlad's boyz. Xi's boyz, maybe even worse.

    Frankly, ALL US biz/banks/govt need to issue
    NEW NUMBERS. But it'd be a huge cluster-fuck
    so they WON'T. Alas the BIGGER cluster-fuck
    is on the NEAR horizon. 'Cold' war has become
    much warmer ......

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Johnny Billquist@bqt@softjar.se to comp.os.linux.misc,alt.folklore.computers on Sat Aug 30 11:49:03 2025
    From Newsgroup: comp.os.linux.misc

    On 2025-08-29 22:52, Ted Nolan <tednolan> wrote:
    In article <108su32$3e8$1@news.misty.com>,
    Johnny Billquist <bqt@softjar.se> wrote:
    But even more important, on the PDP-11 there is support for overlaid
    programs, which makes heavy use of the MMU. Basically, programs can have
    way more than 64K of code. You can place functions in different overlays,
    and call between them, and you can run many hundreds of K of code very
    easily and straightforwardly on the PDP-11, and it's all because the
    MMU helps you out with it, moving the page mappings around as needed.


    My memory is that, at least for BSD Unix, overlays were not supported until
    um, 2.9BSD I think, and that using them was not at all straightforward.
    It may have been easier for official DEC OSes...

    The timeline is the bit I'm not entirely sure about. It uses the
    capabilities that were in the PDP-11 hardware all the time, though. So
    it's an interesting thing to remember/compare with Unix on an 8086.

    As for ease of use, you got it backward. While overlays in DEC OSes
    actually are way more advanced and capable than overlays in Unix on the PDP-11, using them on the Unix side is basically a no-brainer. You don't
    need to do anything at all. You just put modules wherever you want to,
    and it works.

    With the DEC OSes, you have to create an overlay description in a weird language, and you can't call across overlay trees, and you need to be
    careful if you call upstream, which might change mapping, and all that.
    None of those restrictions apply for Unix overlays. The only thing you
    need to keep an eye out for is just that the size is kept within some rules.

    Johnny

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Johnny Billquist@bqt@softjar.se to comp.os.linux.misc,alt.folklore.computers on Sat Aug 30 11:51:37 2025
    From Newsgroup: comp.os.linux.misc

    On 2025-08-30 01:34, Lawrence D'Oliveiro wrote:
    On 29 Aug 2025 20:52:10 GMT, Ted Nolan <tednolan> wrote:

    In article <108su32$3e8$1@news.misty.com>,
    Johnny Billquist <bqt@softjar.se> wrote:

    But even more important, on the PDP-11, there is support for overlaid
    programs, which makes heavy use of the MMU.

    No, it didn't make use of the MMU at all. It was a purely software thing, involving replacing in-memory parts of the program with other parts loaded from the executable file.

    No. You are wrong. If you want to, we can go and read the code together.
    DEC OSes on the other hand could do overlays either via MMU, or by
    reading in the correct overlay from disk. (I'm still supporting, fixing,
    and developing new bits for 2.11BSD.)

    My memory is that, at least for BSD Unix, overlays were not supported
    until um, 2.9BSD I think, and that using them was not at all
    straightforward. It may have been easier for official DEC OSes...

    Using overlays was never straightforward, on any OS.

    Happy to show you otherwise. Really, under Unix, using the overlays
    requires almost no brain at all. You just spread the code out in
    different overlays and that's it.

    Johnny

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From The Natural Philosopher@tnp@invalid.invalid to comp.os.linux.misc,alt.folklore.computers,alt.security on Sat Aug 30 11:59:32 2025
    From Newsgroup: comp.os.linux.misc

    On 30/08/2025 10:30, c186282 wrote:
    On 8/30/25 3:28 AM, The Natural Philosopher wrote:
    On 30/08/2025 08:06, c186282 wrote:
    On 8/30/25 1:54 AM, rbowman wrote:
    On Fri, 29 Aug 2025 23:51:17 GMT, Charlie Gibbs wrote:

    It was for us. We needed all that memory. It was only a few years ago
    that I finally got rid of all the hacks I wrote in to normalize pointers
    and deal with segment wrap-arounds. It was horrible. Forget the 640K
    barrier - the 64K barrier was alive and well on the 8086/8088/80286.

    Tiny, Small, Large, Bigger & Humongous. I have the names wrong but I'm
    pretty sure there were 5 sets of libraries that you had to choose from to do a
    build. Then there was what was referred to as the 'thunk' in DJGPP circles
    when you needed to get real.

    The original 8088 had all the needed registers.
    Could minimum deliver at LEAST an easy 64k code
    space and at LEAST another 64k data area. A few
    tricks and .......

    So YEA - you COULD run some kind of -IX on
    the original PCs. Not super fast/efficient
    but it COULD work. Remember early versions
    of SCO.

    286/386 ... MUCH better - but it came LATER.

    The big trouble was that Unix was expanding faster than the PC
    architecture could handle until the 386 made it all easy.

    Agreed.

    Then SCO Unix made it all not just possible, but extremely handy.

    Well ... not "handy" enough.

    My boss at the time - a very smart nerd - and I did
    discuss DOS -vs- Unix for The Outfit.

    We eventually decided on DOS, and ultimately Win.

    MORE stuff. MORE support.


    I was the boss. SCO Unix for the networked servers and Win 3 for the
    desktops
    SUN PC-NFS to hang it all together.



    But we never got a serious graphical user interface on Unix. By the
    time X windows had stabilised and cent window managers evolved Linux
    had arrived.

    Hey, DEALT with 'X' and WMs on the first versions
    of Linux you could get. NOT super-easy all the time.
    Spent like 48 hours getting it to rec my damned mouse
    with RH.

    NOT encouraging.

    I waited. By around 2003 Debian had some sort of GUI

    However Linux GOT BETTER. Soon I had all the needed
    office servers on Linux - Just Because.

    But, alas, the Staff - Winders Forever And Always.

    Typical "split environment".

    Yup. Of course


    --
    "Politics is the art of looking for trouble, finding it everywhere, diagnosing it incorrectly and applying the wrong remedies."
    -- Groucho Marx

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.linux.misc,alt.folklore.computers on Sat Aug 30 12:19:06 2025
    From Newsgroup: comp.os.linux.misc

    In article <suScna_OAeil4C_1nZ2dnZfqnPSdnZ2d@giganews.com>,
    c186282 <c186282@nnada.net> wrote:
    On 8/29/25 11:27 PM, Lawrence D'Oliveiro wrote:
    On Fri, 29 Aug 2025 23:51:18 GMT, Charlie Gibbs wrote:

    But there definitely was a period before that where the button vanished
    (although there would have been motherboard pins if you wanted to dig
    into it).

    Apple included a little springy clip thing (the "Programmer's Switch") in
    the box with each of those original classic-form-factor Macintoshes. When
    installed, pressing one side triggered NMI (used for invoking the resident
    debugger), while the other side triggered the RESET line (hard reboot).

    I still have the muscle memory: seated in front of the machine, reach
    around with right hand, far side was NMI, near side was RESET.

    Hmmm ... how did they implement that ? How did it
    differ from just using the power switch ???

    Presumably the NMI would enter a debugger. RESET would yank the
    CPU and peripheral reset lines.

    In THEORY that kind of 'reset' SHOULD include at
    least ATTEMPTS to shut down a few important daemons.
    MOST important, the HDD cache ... DO try yer best
    to write-out the cache before going off.

    The original Macintosh had no daemons, important or not. For that
    matter, it didn't ship with a hard disk drive, either.

    "Reset" buttons are mostly good, but on the company
    servers I always disconnected those, so no dink could
    just accidentally bump into the switch while looking
    for something else. REAL power switch, like a 3-sec
    delay before anything happens.

    Those weren't Macs. :-)

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From c186282@c186282@nnada.net to comp.os.linux.misc,alt.folklore.computers,alt.security on Sat Aug 30 08:26:06 2025
    From Newsgroup: comp.os.linux.misc

    On 8/30/25 6:59 AM, The Natural Philosopher wrote:
    On 30/08/2025 10:30, c186282 wrote:
    On 8/30/25 3:28 AM, The Natural Philosopher wrote:
    On 30/08/2025 08:06, c186282 wrote:
    On 8/30/25 1:54 AM, rbowman wrote:
    On Fri, 29 Aug 2025 23:51:17 GMT, Charlie Gibbs wrote:

    It was for us. We needed all that memory. It was only a few years ago
    that I finally got rid of all the hacks I wrote in to normalize pointers
    and deal with segment wrap-arounds. It was horrible. Forget the 640K
    barrier - the 64K barrier was alive and well on the 8086/8088/80286.

    Tiny, Small, Large, Bigger & Humongous. I have the names wrong but I'm
    pretty sure there were 5 sets of libraries that you had to choose from to do a
    build. Then there was what was referred to as the 'thunk' in DJGPP circles
    when you needed to get real.

    The original 8088 had all the needed registers.
    Could minimum deliver at LEAST an easy 64k code
    space and at LEAST another 64k data area. A few
    tricks and .......

    So YEA - you COULD run some kind of -IX on
    the original PCs. Not super fast/efficient
    but it COULD work. Remember early versions
    of SCO.

    286/386 ... MUCH better - but it came LATER.

    The big trouble was that Unix was expanding faster than the PC
    architecture could handle until the 386 made it all easy.

    Agreed.

    Then SCO Unix made it all not just possible, but extremely handy.

    Well ... not "handy" enough.

    My boss at the time - a very smart nerd - and I did
    discuss DOS -vs- Unix for The Outfit.

    We eventually decided on DOS, and ultimately Win.

    MORE stuff. MORE support.


    I was the boss. SCO Unix for the networked servers and Win 3 for the desktops
    SUN PC-NFS to hang it all together.

    We didn't have "servers" quite that far back.
    It was all DOS/Win3/95 plus Novell networking
    for awhile.

    I remember installing all the coax and T-connectors.

    But we never got a serious graphical user interface on Unix. By the
    time X windows had stabilised and cent window managers evolved Linux
    had arrived.

    Hey, DEALT with 'X' and WMs on the first versions
    of Linux you could get. NOT super-easy all the time.
    Spent like 48 hours getting it to rec my damned mouse
    with RH.

    NOT encouraging.

    I waited. By around 2003 Debian had some sort of GUI

    GUIs appeared fairly early - but EASY TO CONFIGURE
    ones ... had to wait a bit.

    However Linux GOT BETTER. Soon I had all the needed
    office servers on Linux - Just Because.

    But, alas, the Staff - Winders Forever And Always.

    Typical "split environment".

    Yup. Of course

    This IS the norm.

    Some complain because I speak of Winders in this
    forum but, alas, it's still ALL CONNECTED. Win
    shit and YOU have to drop yer Linux stuff and fix
    all the Win boxes. Been there, done that, again
    and again and again.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.os.linux.misc,alt.folklore.computers on Sat Aug 30 15:54:11 2025
    From Newsgroup: comp.os.linux.misc

    Lawrence =?iso-8859-13?q?D=FFOliveiro?= <ldo@nz.invalid> writes:
    On 29 Aug 2025 20:52:10 GMT, Ted Nolan <tednolan> wrote:



    Using overlays was never straightforward, on any OS.

    Typical troll comment.

    There are existence proofs counter to your
    unsupported blanket statement.

    Burroughs medium systems for example, where using overlays was built
    into the compilation tools (including the COBOL compiler) and the
    operating system. Even the operating system used overlays for
    rarely used functionality.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From ted@loft.tnolan.com (Ted Nolan@tednolan to comp.os.linux.misc,alt.folklore.computers on Sat Aug 30 17:39:49 2025
    From Newsgroup: comp.os.linux.misc

    In article <108uhef$26t$1@news.misty.com>,
    Johnny Billquist <bqt@softjar.se> wrote:
    On 2025-08-29 22:52, Ted Nolan <tednolan> wrote:
    In article <108su32$3e8$1@news.misty.com>,
    Johnny Billquist <bqt@softjar.se> wrote:
    But even more important, on the PDP-11 there is support for overlaid
    programs, which makes heavy use of the MMU. Basically, programs can have
    way more than 64K of code. You can place functions in different overlays,
    and call between them, and you can run many hundreds of K of code very
    easily and straightforwardly on the PDP-11, and it's all because the
    MMU helps you out with it, moving the page mappings around as needed.


    My memory is that, at least for BSD Unix, overlays were not supported until
    um, 2.9BSD I think, and that using them was not at all straightforward.
    It may have been easier for official DEC OSes...

    The timeline is the bit I'm not entirely sure about. It uses the
    capabilities that were in the PDP-11 hardware all the time, though. So
    it's an interesting thing to remember/compare with Unix on an 8086.

    As for ease of use, you got it backward. While overlays in DEC OSes
    actually are way more advanced and capable than overlays in Unix on the
    PDP-11, using them on the Unix side is basically a no-brainer. You don't
    need to do anything at all. You just put modules wherever you want to,
    and it works.

    With the DEC OSes, you have to create an overlay description in a weird
    language, and you can't call across overlay trees, and you need to be
    careful if you call upstream, which might change mapping, and all that.
    None of those restrictions apply for Unix overlays. The only thing you
    need to keep an eye out for is just that the size is kept within some rules.

    Johnny


    Since you've done it, I defer. I just recall that when we got 2.9BSD, I considered trying to port some big Vax program to the 11 and from reading
    the man pages I got the impression I would have to get intimately familiar
    with said program's call graph (which I definitely was not) to partition
    out the overlays and ended up moving on to something else.
    --
    columbiaclosings.com
    What's not in Columbia anymore..
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From candycanearter07@candycanearter07@candycanearter07.nomail.afraid to comp.os.linux.misc,alt.folklore.computers on Sat Aug 30 18:30:11 2025
    From Newsgroup: comp.os.linux.misc

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote at 03:27 this Saturday (GMT):
    On Fri, 29 Aug 2025 23:51:18 GMT, Charlie Gibbs wrote:

    But there definitely was a period before that where the button vanished
    (although there would have been motherboard pins if you wanted to dig
    into it).

    Apple included a little springy clip thing (the "Programmer's Switch") in
    the box with each of those original classic-form-factor Macintoshes. When installed, pressing one side triggered NMI (used for invoking the resident debugger), while the other side triggered the RESET line (hard reboot).

    I still have the muscle memory: seated in front of the machine, reach
    around with right hand, far side was NMI, near side was RESET.


    That's pretty cool, I always wished there was a physical switch to
    trigger a debugger since the system might be frozen...
    --
    user <candycane> is generated from /dev/urandom
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Peter Flass@Peter@Iron-Spring.com to comp.os.linux.misc,alt.folklore.computers on Sat Aug 30 12:13:32 2025
    From Newsgroup: comp.os.linux.misc

    On 8/30/25 08:54, Scott Lurndal wrote:
    Lawrence =?iso-8859-13?q?D=FFOliveiro?= <ldo@nz.invalid> writes:
    On 29 Aug 2025 20:52:10 GMT, Ted Nolan <tednolan> wrote:



    Using overlays was never straightforward, on any OS.

    Typical troll comment.

    There are existance proofs counter to your
    unsupported blanket statement.

    Burroughs medium systems for example, where using overlays was built
    into the compilation tools (including the COBOL compiler) and the
    operating system. Even the operating system used overlays for
    rarely used functionality.

    OS/360 and applications made extensive use of overlays.

    I remember a (PC)-DOS application called "Enable" that overlayed like crazy.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From ted@loft.tnolan.com (Ted Nolan@tednolan to comp.os.linux.misc,alt.folklore.computers on Sat Aug 30 19:27:20 2025
    From Newsgroup: comp.os.linux.misc

    In article <108vigs$2q3n5$1@dont-email.me>,
    Peter Flass <Peter@Iron-Spring.com> wrote:
    On 8/30/25 08:54, Scott Lurndal wrote:
    Lawrence =?iso-8859-13?q?D=FFOliveiro?= <ldo@nz.invalid> writes:
    On 29 Aug 2025 20:52:10 GMT, Ted Nolan <tednolan> wrote:



    Using overlays was never straightforward, on any OS.

    Typical troll comment.

    There are existance proofs counter to your
    unsupported blanket statement.

    Burroughs medium systems for example, where using overlays was built
    into the compilation tools (including the COBOL compiler) and the
    operating system. Even the operating system used overlays for
    rarely used functionality.

    OS/360 and applications made extensive use of overlays.

    I remember a (PC)-Dos application called "Enable" that overlayed like crazy.

    I remember that one -- I had to do printer support over PC-NFS with a filter
    to convert the Diablo-630 emulation to something the network printer could
    use. The people actually using the program called it "Unable".
    --
    columbiaclosings.com
    What's not in Columbia anymore..
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From rbowman@bowman@montana.com to comp.os.linux.misc,alt.folklore.computers on Sat Aug 30 20:31:23 2025
    From Newsgroup: comp.os.linux.misc

    On Sat, 30 Aug 2025 03:06:11 -0400, c186282 wrote:

    The original 8088 had all the needed registers.
    Could minimum deliver at LEAST an easy 64k code space and at LEAST
    another 64k data area. A few tricks and .......

    They mostly followed the bank switching the Z80s were doing but
    incorporated the memory management into the processor. After all, the i432
    was the REAL answer so why get fancy with the Band-Aid.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From rbowman@bowman@montana.com to comp.os.linux.misc,alt.folklore.computers,alt.security on Sat Aug 30 20:36:37 2025
    From Newsgroup: comp.os.linux.misc

    On Sat, 30 Aug 2025 11:59:32 +0100, The Natural Philosopher wrote:

    I was the boss. SCO Unix for the networked servers and Win 3 for the
    desktops SUN PC-NFS to hang it all together.

    Our legacy products were developed for RS6000 AIX. Port to Linux, port to Windows. Windows won. There were two sites that had Linux servers but then
    the administrators who were Linux fans moved on and the new guy went to Windows.

    The customer is always right as long as they keep signing checks (cheques, whatever you want to call them)
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Peter Flass@Peter@Iron-Spring.com to comp.os.linux.misc,alt.folklore.computers on Sat Aug 30 14:45:34 2025
    From Newsgroup: comp.os.linux.misc

    On 8/30/25 12:27, Ted Nolan <tednolan> wrote:
    In article <108vigs$2q3n5$1@dont-email.me>,
    Peter Flass <Peter@Iron-Spring.com> wrote:
    On 8/30/25 08:54, Scott Lurndal wrote:
    Lawrence =?iso-8859-13?q?D=FFOliveiro?= <ldo@nz.invalid> writes:
    On 29 Aug 2025 20:52:10 GMT, Ted Nolan <tednolan> wrote:



    Using overlays was never straightforward, on any OS.

    Typical troll comment.

    There are existance proofs counter to your
    unsupported blanket statement.

    Burroughs medium systems for example, where using overlays was built
    into the compilation tools (including the COBOL compiler) and the
    operating system. Even the operating system used overlays for
    rarely used functionality.

    OS/360 and applications made extensive use of overlays.

    I remember a (PC)-Dos application called "Enable" that overlayed like crazy.

    I remember that one -- I had to do printer support over PC-NFS with a filter to convert the Diablo-630 emulation to something the network printer could use. The people actually using the program called it "Unable".

    It was a really good program. It just was very slow. You were better off
    with separate programs.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.linux.misc,alt.folklore.computers on Sat Aug 30 22:20:11 2025
    From Newsgroup: comp.os.linux.misc

    On Sat, 30 Aug 2025 18:30:11 -0000 (UTC), candycanearter07 wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote at 03:27 this Saturday (GMT):

    Apple included a little springy clip thing (the "Programmer's
    Switch") in the box with each of those original classic-form-factor
    Macintoshes. When installed, pressing one side triggered NMI (used
    for invoking the resident debugger) ...

    That's pretty cool, I always wished there was a physical switch to
    trigger a debugger since the system might be frozen...

    It only worked because there was a mini-debugger in ROM to hook that
    interrupt. Or, you could install "MacsBug" at boot time, which would
    take over that interrupt and offer much more sophisticated
    functionality.

    The Linux kernel offers something similar, but again that assumes that
    keyboard handling (or alternatively, serial port handling) is still
    functioning <https://docs.kernel.org/admin-guide/sysrq.html>.
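
    A minimal sketch of poking that mechanism from software rather than the
    keyboard, assuming /proc/sysrq-trigger is present and the process runs as
    root ('h' merely writes the SysRq help list to the kernel log):

        #include <stdio.h>

        int main(void)
        {
            /* Each character written here triggers one SysRq action. */
            FILE *f = fopen("/proc/sysrq-trigger", "w");
            if (!f) { perror("/proc/sysrq-trigger"); return 1; }
            fputc('h', f);
            fclose(f);
            return 0;
        }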
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Rich Alderson@news@alderson.users.panix.com to comp.os.linux.misc,alt.folklore.computers on Sat Aug 30 19:34:34 2025
    From Newsgroup: comp.os.linux.misc

    scott@slp53.sl.home (Scott Lurndal) writes:

    Lawrence =?iso-8859-13?q?D=FFOliveiro?= <ldo@nz.invalid> writes:

    Using overlays was never straightforward, on any OS.

    Typical troll comment.

    There are existance proofs counter to your unsupported blanket statement.

    Burroughs medium systems for example, where using overlays was built into the compilation tools (including the COBOL compiler) and the operating system. Even the operating system used overlays for rarely used functionality.

    Far be it from me to defend the troll, but I will say that my experience (56 years and counting) agrees with his comment.

    IIRC, I have made changes to exactly one (1) overlaid program in all that time, having been forced to do that instead of a complete rewrite because it had to work quickly at the museum.

    That's with experience on OS/360, SVS and MVS on /370, Tops-10 and TOPS-20, and RSX-11M/-20F.

    Overlays are like a coyote ugly bed partner: I'd rather chew my arm off.
    --
    Rich Alderson news@alderson.users.panix.com
    Audendum est, et veritas investiganda; quam etiamsi non assequamur,
    omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
    --Galen
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Johnny Billquist@bqt@softjar.se to comp.os.linux.misc,alt.folklore.computers on Sun Aug 31 12:40:44 2025
    From Newsgroup: comp.os.linux.misc

    On 2025-08-30 19:39, Ted Nolan <tednolan> wrote:
    In article <108uhef$26t$1@news.misty.com>,
    Johnny Billquist <bqt@softjar.se> wrote:
    As for ease of use, you got it backward. While overlays in DEC OSes
    actually are way more advanced and capable than overlays in Unix on the
    PDP-11, using them on the Unix side is basically a no-brainer. You don't
    need to do anything at all. You just put modules wherever you want to,
    and it works.

    With the DEC OSes, you have to create an overlay description in a weird
    language, and you can't call across overlay trees, and you need to be
    careful if you call upstream, which might change mapping, and all that.
    None of those restrictions apply for Unix overlays. The only thing you
    need to keep an eye out for is just that the size is kept within some rules.
    Johnny


    Since you've done it, I defer. I just recall that when we got 2.9BSD, I considered trying to port some big Vax program to the 11 and from reading
    the man pages I got the impression I would have to get intimately familiar with said program's call graph (which I definitely was not) to partition
    out the overlays and ended up moving on to something else.

    You can basically just take the different object files, and put them
    into different overlays, and that's it. No need to think anything over
    with call graphs or anything. You do get headaches if individual object
    files are just huge, of course. And total data cannot be more than 64K.
    It's only code that is overlaid. But compared to the DEC overlay scheme,
    it's really simple.
    With the DEC stuff, you do need to keep track of call graphs, and stuff.
    Also, it is done in its own language, which in itself is also a bit of
    a thing to get into. But you can do a lot of stuff with the DEC overlay
    stuff that isn't at all possible to do under Unix.
    So it's a tradeoff (as usual).

    Johnny

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Carlos E.R.@robin_listas@es.invalid to comp.os.linux.misc,alt.folklore.computers on Sun Aug 31 13:37:37 2025
    From Newsgroup: comp.os.linux.misc

    On 2025-08-30 06:25, c186282 wrote:
    -a "Reset" buttons are mostly good, but on the company
    -a servers I always disconnected those, so no dink could
    -a just accidentally bump into the switch while looking
    -a for something else. REAL power switch, like a 3-sec
    -a delay before anything happens.

    Some reset buttons have to be pressed deep to work.
    --
    Cheers, Carlos.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Carlos E.R.@robin_listas@es.invalid to comp.os.linux.misc,alt.folklore.computers on Sun Aug 31 13:44:17 2025
    From Newsgroup: comp.os.linux.misc

    On 2025-08-30 01:34, Lawrence D'Oliveiro wrote:
    On 29 Aug 2025 20:52:10 GMT, Ted Nolan <tednolan> wrote:

    In article <108su32$3e8$1@news.misty.com>,
    Johnny Billquist <bqt@softjar.se> wrote:

    But even more important, on the PDP-11, there is support for overlaid
    programs, which makes heavy use of the MMU.

    No, it didn't make use of the MMU at all. It was a purely software thing, involving replacing in-memory parts of the program with other parts loaded from the executable file.

    My memory is that, at least for BSD Unix, overlays were not supported
    until um, 2.9BSD I think, and that using them was not at all
    straightforward. It may have been easier for official DEC OSes...

    Using overlays was never straightforward, on any OS.

    It was trivial on Turbo Pascal.
    --
    Cheers, Carlos.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Peter Flass@Peter@Iron-Spring.com to comp.os.linux.misc,alt.folklore.computers on Sun Aug 31 08:19:14 2025
    From Newsgroup: comp.os.linux.misc

    On 8/31/25 03:40, Johnny Billquist wrote:
    On 2025-08-30 19:39, Ted Nolan <tednolan> wrote:
    In article <108uhef$26t$1@news.misty.com>,
    Johnny Billquist <bqt@softjar.se> wrote:
    As for ease of use, you got it backward. While overlays in DEC OSes
    actually are way more advanced and capable than overlays in Unix on the
    PDP-11, using them on the Unix side is basically a no-brainer. You don't
    need to do anything at all. You just put modules wherever you want to,
    and it works.

    With the DEC OSes, you have to create an overlay description in a weird
    language, and you can't call across overlay trees, and you need to be
    careful if you call upstream, which might change mapping, and all that.
    None of those restrictions apply for Unix overlays. The only thing you
    need to keep an eye out for is just that the size is kept within some
    rules.

    Johnny


    Since you've done it, I defer. I just recall that when we got 2.9BSD, I
    considered trying to port some big Vax program to the 11 and from reading
    the man pages I got the impression I would have to get intimately familiar
    with said program's call graph (which I definitely was not) to partition
    out the overlays and ended up moving on to something else.

    You can basically just take the different object files, and put them
    into different overlays, and that's it. No need to think anything over
    with call graphs or anything. You do get headaches if individual object
    files are just huge, of course. And total data cannot be more than 64K.
    It's only code that is overlaid. But compared to the DEC overlay scheme,
    it's really simple.
    With the DEC stuff, you do need to keep track of call graphs, and stuff.
    Also, it is done in its own language, which in itself is also a bit of
    a thing to get into. But you can do a lot of stuff with the DEC overlay
    stuff that isn't at all possible to do under Unix.
    So it's a tradeoff (as usual).

    Johnny


    Sort of. Assuming overlays on DEC function like all others I've seen,
    you need to organize: what goes into the root (used by all overlays)?
    Then group object files so that things used together are in the same
    overlay.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Alexander Schreiber@als@usenet.thangorodrim.de to comp.os.linux.misc,alt.folklore.computers on Sun Aug 31 18:25:37 2025
    From Newsgroup: comp.os.linux.misc

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    The Linux kernel offers something similar, but again that assumes that keyboard handling (or alternatively, serial port handling) is still functioning <https://docs.kernel.org/admin-guide/sysrq.html>.

    Although the keyboard handler being functional these days also tends
    to require the USB stacks (both hard- and software) still being
    sufficiently functional, because that's what the keyboard is wired to.

    Kind regards,
    Alex.
    --
    "Opportunity is missed by most people because it is dressed in overalls and
    looks like work." -- Thomas A. Edison
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Alexander Schreiber@als@usenet.thangorodrim.de to comp.os.linux.misc,alt.folklore.computers on Sun Aug 31 18:23:23 2025
    From Newsgroup: comp.os.linux.misc

    Carlos E.R. <robin_listas@es.invalid> wrote:
    On 2025-08-30 06:25, c186282 wrote:
    -a "Reset" buttons are mostly good, but on the company
    -a servers I always disconnected those, so no dink could
    -a just accidentally bump into the switch while looking
    -a for something else. REAL power switch, like a 3-sec
    -a delay before anything happens.

    Some reset buttons have to be pressed deep to work.

    On my very first PC (80386DX CPU clocked at a blistering 40 MHz), the
    desktop case had the reset button right next to the turbo button and of
    course they looked identical except for the text label above them. Nice
    big flat buttons too, flush with the case surface, so easy to press. Thankfully, I usually didn't need the turbo button.

    Most later machines of mine had the reset button recessed making it
    much harder to press by accident.

    Kind regards,
    Alex.
    --
    "Opportunity is missed by most people because it is dressed in overalls and
    looks like work." -- Thomas A. Edison
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From rbowman@bowman@montana.com to comp.os.linux.misc,alt.folklore.computers on Sun Aug 31 17:37:38 2025
    From Newsgroup: comp.os.linux.misc

    On Sun, 31 Aug 2025 18:23:23 +0200, Alexander Schreiber wrote:

    On my very first PC (80386DX CPU clocked at a blistering 40 MHz), the
    desktop case had the reset button right next to the turbo button and of course they looked identical except for the text label above them. Nice
    big flat buttons too, flush with the case surface, so easy to press. Thankfully, I usually didn't need the turbo button.

    In a wonderful display of ergonomic design my Acer laptop has the power
    button on the same row as the function keys to the right of the insert/
    delete and the same size and spacing, and right above the backspace. It is
    not recessed or otherwise set off.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Johnny Billquist@bqt@softjar.se to comp.os.linux.misc,alt.folklore.computers on Sun Aug 31 20:10:00 2025
    From Newsgroup: comp.os.linux.misc

    On 2025-08-31 17:19, Peter Flass wrote:
    On 8/31/25 03:40, Johnny Billquist wrote:
    On 2025-08-30 19:39, Ted Nolan <tednolan> wrote:
    In article <108uhef$26t$1@news.misty.com>,
    Johnny Billquist <bqt@softjar.se> wrote:
    As for ease of use, you got it backward. While overlays in DEC OSes
    actually are way more advanced and capable than overlays in Unix on the
    PDP-11, using them on the Unix side is basically a no-brainer. You don't
    need to do anything at all. You just put modules wherever you want to,
    and it works.

    With the DEC OSes, you have to create an overlay description in a weird
    language, and you can't call across overlay trees, and you need to be
    careful if you call upstream, which might change mapping, and all that.
    None of those restrictions apply for Unix overlays. The only thing you
    need to keep an eye out for is just that the size is kept within
    some rules.

    Johnny


    Since you've done it, I defer. I just recall that when we got 2.9BSD, I
    considered trying to port some big Vax program to the 11 and from reading
    the man pages I got the impression I would have to get intimately familiar
    with said program's call graph (which I definitely was not) to partition
    out the overlays and ended up moving on to something else.

    You can basically just take the different object files, and put them
    into different overlays, and that's it. No need to think anything over
    with call graphs or anything. You do get headaches if individual
    object files are just huge, of course. And total data cannot be more
    than 64K. It's only code that is overlaid. But compared to the DEC
    overlay scheme, it's really simple.
    With the DEC stuff, you do need to keep track of call graphs, and
    stuff. Also, it is done in its own language, which in itself is also
    a bit of a thing to get in to. But you can do a lot of stuff with the
    DEC overlay stuff that isn't at all possible to do under Unix.
    So it's a tradeoff (as usual).

    Johnny


    Sort of. Assuming overlays on DEC function like all others I've seen,
    you need to organize. What goes into the root? (used by all overlays),
    then group object files so that things used together are in the same overlay.

    Right. With overlays in DEC OSes, that's what you want/need to do. With overlays in Unix, not really. Everything can be anywhere, except for the
    main function, which needs to be in the root. But everything can call everything, in any way it wants to. Makes no difference.

    No need to think about related stuff being in the same overlay, or
    shared things preferably in the root, and so on. Just put things
    wherever. There is a small performance penalty whenever calling
    something in another overlay, since it requires an MMU remapping, and
    the call goes through a stub function to make that happen. But that's
    about it.
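
    A hypothetical sketch (not the real 2.11BSD code) of what such a
    cross-overlay call stub conceptually does, with the MMU remap reduced to
    a simulated map_overlay() call: remember the caller's overlay, map in the
    callee's, call it, then restore the caller's mapping.

        #include <stdio.h>

        static int current_overlay;               /* which code overlay is mapped in */

        static void map_overlay(int n)            /* stand-in for the MMU remap */
        {
            current_overlay = n;
        }

        static void real_function(void)           /* imagine this lives in overlay 2 */
        {
            printf("running with overlay %d mapped\n", current_overlay);
        }

        /* The stub the linker would emit for a call into another overlay. */
        static void call_real_function(void)
        {
            int caller = current_overlay;
            map_overlay(2);                       /* bring the callee's code in */
            real_function();
            map_overlay(caller);                  /* restore the caller's mapping */
        }

        int main(void)
        {
            map_overlay(1);                       /* caller starts in overlay 1 */
            call_real_function();
            return 0;
        }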

    Note that in Unix, there is only a single level of overlays, and you can
    have at most 15 overlays. But that's the one limitation. Not having to
    worry about whether the routines you call are in the overlay tree path, and so
    on, is very nice.

    Johnny

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Rich@rich@example.invalid to comp.os.linux.misc on Sun Aug 31 20:15:40 2025
    From Newsgroup: comp.os.linux.misc

    In comp.os.linux.misc The Natural Philosopher <tnp@invalid.invalid> wrote:
    On 29/08/2025 07:50, Charlie Gibbs wrote:
    On 2025-08-27, Alexander Schreiber <als@usenet.thangorodrim.de> wrote:

    I haven't tried Unix on 8086, but DOS on x86 essentially relied on applications
    being reasonably correct and not too buggy. Having the reset button conveniently
    accessible was effectively a requirement for any DOS PC ;-)

    Unfortunately, at about that time the reset button vanished (probably due
    to the DMCA or whatever preceded it).

    That would require instructions that the C compiler
    didn't generate.

    That claim "would require instructions that the C compiler didn't generate"
    is just not true. Without memory protection, there are plenty of ways to crash
    the system - e.g. overwriting the operating system code due to a bug in an
    application.

    If you didn't want to live entirely in a 64K segment, though, you probably
    told your C compiler to generate code for the various larger memory models,
    which gave you the ability to scribble over the entire 640K (plus system
    storage).

    Wasn't there a 64k data and 64k code model as well? And possibly a 64K
    stack as well though that was a pain with C.

    The 8086 had four segment registers, one for code, one for stack, and
    two for "data". So provided one was either writing in assembly, or
    one's HLL supported all four segment registers pointing to
    non-overlapping addresses, one could "access" 4x64k with the 8086
    without needing to change a segment register value.

    Of course, if one did change a segment register value, one could access
    any memory address anywhere within the 8086's 1 MiB total addressable
    space, as there was no memory protection either.
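
    A minimal sketch of the arithmetic behind that, in plain ISO C: a 16-bit
    segment and a 16-bit offset combine as segment*16 + offset into a 20-bit
    physical address, so the reachable space is 1 MiB and many different
    segment:offset pairs alias the same byte.

        #include <stdio.h>

        /* 8086 real-mode address formation: physical = (segment << 4) + offset. */
        static unsigned long phys(unsigned seg, unsigned off)
        {
            return ((unsigned long)seg << 4) + off;
        }

        int main(void)
        {
            printf("1234:0005 -> %05lX\n", phys(0x1234, 0x0005));  /* 12345 */
            printf("1000:2345 -> %05lX\n", phys(0x1000, 0x2345));  /* 12345 again */
            printf("FFFF:000F -> %05lX\n", phys(0xFFFF, 0x000F));  /* just under 1 MiB */
            return 0;
        }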

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Rich@rich@example.invalid to comp.os.linux.misc,alt.folklore.computers on Sun Aug 31 20:20:40 2025
    From Newsgroup: comp.os.linux.misc

    In comp.os.linux.misc Charlie Gibbs <cgibbs@kltpzyxm.invalid> wrote:
    On 2025-08-29, Alexander Schreiber <als@usenet.thangorodrim.de> wrote:

    Charlie Gibbs <cgibbs@kltpzyxm.invalid> wrote:

    On 2025-08-27, Alexander Schreiber <als@usenet.thangorodrim.de> wrote:

    I haven't tried Unix on 8086, but DOS on x86 essentially relied on
    applications being reasonably correct and not too buggy. Having the
    reset button conveniently accessible was effectively a requirement
    for any DOS PC ;-)

    Unfortunately, at about that time the reset button vanished (probably
    due to the DMCA or whatever preceded it).

    Really? System boards bought this year still have a reset line and my
    workstation tower case (about 10-15y old now) still has a reset button.

    Oops, I forgot about that. They did make a comeback, didn't they?
    But there definitely was a period before that where the button vanished (although there would have been motherboard pins if you wanted to dig
    into it).

    That was likely caused more by cheap case makers racing to the bottom
    in trying to cut the price of their cases to the minimum. Eliminating
    the reset button dropped a button insert molding, an actual switch, the hardware to hold the switch in position, and the wiring and plug from
    switch to motherboard from the BOM cost of the case. Likely no more
    than $1 total at the quantities the cheap makers would have been
    purchasing, but for a case that was meant to be priced at $40 or $50 a
    $1 savings on BOM is a reasonable percentage of the MSRP (and even
    bigger percentage of their wholesale cost to retailers).
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.linux.misc,alt.folklore.computers on Sun Aug 31 22:24:56 2025
    From Newsgroup: comp.os.linux.misc

    On Sun, 31 Aug 2025 18:23:23 +0200, Alexander Schreiber wrote:

    On my very first PC (80386DX CPU clocked at a blistering 40 MHz), the
    desktop case had the reset button right next to the turbo button and of course they looked identical except for the text label above them. Nice
    big flat buttons too, flush with the case surface, so easy to press. Thankfully, I usually didn't need the turbo button.

    Most later machines of mine had the reset button recessed making it much harder to press by accident.

    I can see the one contrarian interoffice memo now:

    “Why not recess the turbo button instead, and make it harder to press? Because it can cause compatibility problems with older software designed
    to run only at a CPU speed of 4.77MHz, so the user should think twice
    before pressing it. The reset button should be easier to press, because
    when you need it, you really need it!”

    ;)
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Carlos E.R.@robin_listas@es.invalid to comp.os.linux.misc,alt.folklore.computers on Mon Sep 1 02:35:12 2025
    From Newsgroup: comp.os.linux.misc

    On 2025-09-01 00:24, Lawrence D’Oliveiro wrote:
    On Sun, 31 Aug 2025 18:23:23 +0200, Alexander Schreiber wrote:

    On my very first PC (80386DX CPU clocked at a blistering 40 MHz), the
    desktop case had the reset button right next to the turbo button and of
    course they looked identical except for the text label above them. Nice
    big flat buttons too, flush with the case surface, so easy to press.
    Thankfully, I usually didn't need the turbo button.

    Most later machines of mine had the reset button recessed making it much
    harder to press by accident.

    I can see the one contrarian interoffice memo now:

    “Why not recess the turbo button instead, and make it harder to press? Because it can cause compatibility problems with older software designed
    to run only at a CPU speed of 4.77MHz, so the user should think twice
    before pressing it. The reset button should be easier to press, because
    when you need it, you really need it!”

    ;)

    Wow! :-(
    --
    Cheers, Carlos.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.linux.misc,alt.folklore.computers on Mon Sep 1 02:36:25 2025
    From Newsgroup: comp.os.linux.misc

    On Sun, 31 Aug 2025 13:44:17 +0200, Carlos E.R. wrote:

    On 2025-08-30 01:34, Lawrence D’Oliveiro wrote:

    Using overlays was never straightforward, on any OS.

    It was trivial on Turbo Pascal.

    There were two kinds of overlay system: the one where the calling code
    could be swapped out while the called code ran (needing a possible segment
    swap when returning from the callee), and the one where it couldn’t
    (needing more memory).

    Which one did Turbo Pascal use?
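
    In sketch form (made up, not any particular overlay manager's code), the
    difference is whether the call stub has to put the caller's overlay back
    before returning:

        /* Hypothetical illustration: one swappable region, tracked by 'mapped'. */
        #include <stdio.h>

        static int mapped = 1;                   /* overlay currently mapped in */

        static void map(int n)
        {
            if (mapped != n) { printf("map %d -> %d\n", mapped, n); mapped = n; }
        }

        static void callee(void)                 /* imagined to live in overlay 2 */
        {
            printf("callee running, overlay %d mapped\n", mapped);
        }

        /* first kind: the caller is itself in an overlay and may be evicted
         * while the callee runs, so the stub restores it before returning */
        static void call_from_overlay(void)
        {
            int caller = mapped;                 /* remember the caller's overlay */
            map(2);
            callee();
            map(caller);                         /* swap the caller back in */
        }

        /* second kind: the caller sits in resident (non-overlaid) code, so
         * mapping the callee never evicts it and nothing needs restoring -
         * at the price of keeping more of the program in memory */
        static void call_from_root(void)
        {
            map(2);
            callee();
        }

        int main(void)
        {
            call_from_overlay();
            call_from_root();
            return 0;
        }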
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From c186282@c186282@nnada.net to comp.os.linux.misc,alt.folklore.computers on Mon Sep 1 01:52:31 2025
    From Newsgroup: comp.os.linux.misc

    On 8/31/25 10:36 PM, Lawrence D’Oliveiro wrote:
    On Sun, 31 Aug 2025 13:44:17 +0200, Carlos E.R. wrote:

    On 2025-08-30 01:34, Lawrence D’Oliveiro wrote:

    Using overlays was never straightforward, on any OS.

    It was trivial on Turbo Pascal.

    There were two kinds of overlay system: the one where the calling code
    could be swapped out while the called code ran (needing a possible segment
    swap when returning from the callee), and the one where it couldn’t
    (needing more memory).

    Which one did Turbo Pascal use?

    I bought v1.x ... and then on. Best money I ever
    spent. DID need overlays for a large pgm in v3.x -
    something between a graphic and mini-GIS app.
    STILL use FPC/Lazarus fairly often. GREAT language.

    And, unlike 'C', you can actually kind of read and
    understand your own code years later :-)

    And the overlays WERE super easy. WERE limited
    to 64k however. The older TPs came for both x86/DOS
    and CP/M-86. Similar, though not quite identical,
    tricks and solutions.

    Foley & Van Dam - "Fundamentals Of Interactive
    Computer Graphics" - has all the good algos
    (IN Pascal).

    Hey, did use the M$/IBM multi-pass Pascal compiler
    (still have it in a VM somewhere) but TP was just
    a *revolution*. Even found a good use for the
    'turtle' in v3.x
    --- Synchronet 3.21a-Linux NewsLink 1.2