• Re: Byte ordering

    From MitchAlsup1@21:1/5 to BGB on Fri Oct 4 23:06:21 2024
    On Fri, 4 Oct 2024 19:05:15 +0000, BGB wrote:

    On 10/4/2024 12:30 PM, Anton Ertl wrote:

    Say, pretty much none of the modern graphics programs (that I am aware
    of) really support working with 16-color and 256-color bitmap images
    with a manually specified color palette.

    Modern programs are typically true-color internally, supporting
    256-color only as an import/export format with an automatically
    generated "optimized" palette, and often not bothering with 16-color
    images at all. Not so useful if one is doing something that actually
    needs an explicit color palette (and has little need for any "photo
    manipulation" features).

    The 1996 version of CorelDraw 3 suffers from none of this, supporting
    all sorts of palettes {RGB, CMY, CMYK, at least 3 more} with various
    user-specified limitations, 24-bit, 32-bit, ... with all sorts of
    fills mixing any two of the previously mentioned colors with various
    patterns {gradient, polka dot, you define which pixel gets which color}.

    Still have the CD-ROM if anyone wants to try.


    And, most people generally haven't bothered with this stuff since the
    Win16 era (even the people doing "pixel art" are still generally doing
    so using true-color PNGs or similar).

    Blame PowerPoint ... No more evil tool ever existed.


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to All on Sat Oct 5 06:34:28 2024
    On Fri, 4 Oct 2024 23:06:21 +0000, MitchAlsup1 wrote:

    Blame PowerPoint ... No more evil tool ever existed.

    Competitors existed, at one time, e.g. Adobe Persuasion, Harvard Graphics, others I’ve forgotten.

    Somehow Microsoft made PowerPoint the most attractive of the lot ... were
    the others even worse?

    Actually, it’s not that it doesn’t produce pretty graphics, it’s that people end up believing in the prettiness of the graphics, instead of considering the facts they’re supposed to (mis)represent.

    Edward Tufte, come back, all is forgiven!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to BGB on Sat Oct 5 06:35:56 2024
    On Fri, 4 Oct 2024 19:44:40 -0500, BGB wrote:

    MS PaintBrush became MS Paint and seemingly mostly got dumbed down as
    time went on.

    Side excursion into 3D Paint (or is that Paint 3D?), which failed to take
    off, and is now being abandoned.

    The closest modern alternative is Paint.NET, but it still doesn't allow manual palette control in the same way as BitEdit.

    Inkscape has good palette control. It does scalable vector graphics
    natively. Give it a try.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Waldek Hebisch@21:1/5 to Anton Ertl on Sat Jan 4 22:40:51 2025
    Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
    antispam@fricas.org (Waldek Hebisch) writes:
    From my point of view the main drawbacks of the 286 are poor support
    for large arrays and problems for Lisp-like systems, which have a lot
    of small data structures and traverse them via pointers.

    Yes. In the first case the segments are too small, in the latter case
    there are too few segments (if you have one segment per object).

    In the second case one can pack several objects into a single
    segment, so except for the lost security properties this is not
    a big problem. But that means a lot of segment-register loading,
    and slow loading is a problem.

    Concerning code "model", I think that Intel assumend that
    most procedures would fit in a single segment and that
    average procedure will be of order of single kilobytes.
    Using 16-bit offsets for jumps inside procedure and
    segment-offset pair for calls is likely to lead to better
    or similar performance as purely 32-bit machine.

    With the 80286's segments and their slowness, that is very doubtful.
    The 8086 has branches with 8-bit offsets and branches and calls with
    16-bit offsets. The 386 in 32-bit mode has branches with 8-bit
    offsets and branches and calls with 32-bit offsets; if 16-bit offsets
    for branches were useful enough for performance, they could
    instead have designed the longer branch length to be 16 bits, and
    maybe a prefix for 32-bit branch offsets.

    At that time Intel apparently wanted to avoid having too many
    instructions.

    That would be faster than
    what you outline, as soon as one call happens. But apparently 16-bit branches are not that beneficial, or they would have gone that way on
    the 386.

    For a machine with a 32-bit bus the benefit is much smaller.

    Another usage of segments for code would be to put the code segment of
    a shared object (known as DLL among Windowsheads) in a segment, and
    use far calls to call functions in other shared objects, while using
    near calls within a shared object. This allows sharing the code
    segments between different programs, and locating them anywhere in
    physical memory. However, AFAIK shared objects were not a thing in
    the 80286 timeframe; Unix only got them in the late 1980s.

    IIUC shared segments were widely used on Multics.

    I used Xenix on a 286 in 1986 or 1987; my impression is that programs
    were limited to 64KB code and 64KB data size, exactly the PDP-11 model
    you denounce.

    Maybe. I have seen many cases where software essentially "wastes"
    good things offered by hardware.

    What went wrong? IIUC there were several control systems
    using 286 features, so there was some success. But PCs
    became the main user of x86 chips, and a significant fraction
    of PCs was used for gaming. Game authors wanted direct
    access to the hardware, which in the case of the 286 forced real mode.

    Every successful program used direct hardware access because of
    performance; the rest waned. Using BIOS calls was just too slow.
    Lotus 1-2-3 won out over VisiCalc and Multiplan by being faster,
    thanks to writing directly to video memory.
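
    For concreteness, a minimal sketch of that direct-video technique, in
    the style of a real-mode DOS C compiler (Borland-style far pointers
    and MK_FP from <dos.h>); the helper is illustrative, not Lotus's
    actual code:

    #include <dos.h>   /* MK_FP: build a far pointer from segment:offset */

    /* Write a string straight into the color text buffer at B800:0000,
       bypassing the BIOS.  In 80x25 text mode each cell is a (character,
       attribute) byte pair. */
    static void put_string(int row, int col, const char *s, unsigned char attr)
    {
        unsigned char far *cell =
            (unsigned char far *) MK_FP(0xB800, (row * 80 + col) * 2);
        while (*s) {
            *cell++ = (unsigned char) *s++;   /* character byte */
            *cell++ = attr;                   /* attribute byte */
        }
    }

    int main(void)
    {
        put_string(0, 0, "written directly to B800:0000", 0x1E);
        return 0;
    }

    The slow alternative was one BIOS int 10h call per character, paying
    an interrupt dispatch every time.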

    For most early graphics cards direct screen access could have been
    allowed just by allocating an appropriate segment. And most non-games
    could have gained good performance with a better system interface.
    I think that the variety of tricks used in games, and their
    popularity, made protected-mode systems much less appealing
    to vendors. And that discouraged work on better interfaces
    for non-games.

    More generally, vendors could have released separate versions of
    programs for the 8086 and the 286, but few did so. And users having
    only binaries wanted to run 8086 programs on their new systems, which
    led to heroic efforts like the OS/2 DOS box and later Linux
    dosemu. But integration of 8086 programs with protected
    mode was solved too late for the 286 model to gain traction
    (and on the 286 the "DOS box" had to run in real mode, breaking
    normal system protection).

    But IIUC the first paging Unix appeared _after_ the release of the 286.

    From <https://en.wikipedia.org/wiki/History_of_the_Berkeley_Software_Distribution#3BSD>:

    |The kernel of 32V was largely rewritten by Berkeley graduate student
    |Özalp Babaoğlu to include a virtual memory implementation, and a
    |complete operating system including the new kernel, ports of the 2BSD
    |utilities to the VAX, and the utilities from 32V was released as 3BSD
    |at the end of 1979.

    The 80286 was introduced on February 1, 1982.

    OK

    In 286 times Multics was highly regarded, and it depended heavily
    on segmentation. MVS was using paging hardware but was
    talking about segments, except that MVS segmentation
    was flawed because some addresses far outside a segment were
    considered part of a different segment. I think that also
    in VMS there was some talk of segments. So the creators
    of the 286 could believe that they were providing the "right thing"
    and not a fake made possible with paging hardware.

    There was various segmented hardware around, first and foremost (for
    the designers of the 80286), the iAPX432. And as you write, all the
    good reasons that resulted in segments on the iAPX432 also persisted
    in the 80286. However, given the slowness of segmentation, only the
    tiny (all in one segment), small (one segment for code and one for
    data), and maybe medium memory models (one data segment) are
    competitive in protected mode compared to real mode.

    AFAICS that covered the vast majority of programs during the eighties.
    Turbo Pascal offered only the medium memory model and was quite
    popular. Its code generator produced mediocre output, but
    real Turbo Pascal programs used a lot of inline assembly
    and performance was not bad.

    Intel apparently assumed that programmers are willing to spend
    extra work to get good performance and IMO this was right
    as a general statement. Intel probably did not realize that
    programmers will be very reluctant to spent work on security
    features and in particular to spent work on making programs
    fast in 286 protected mode.

    So if they really had wanted protected mode to succeed, they should
    have designed in 32-bit data segments (and maybe also 32-bit code
    segments). Alternatively, if protected mode and the 32-bit addresses
    do not fit in the 286 transistor budget, a CPU that implements the
    32-bit feature and leaves out protected mode would have been more
    popular than the 80286; and (depending on how the 32-bit extension was implemented) it might have been a better stepping stone towards the
    kind of CPU with protected mode that they imagined; but the alt-386
    designers probably would not have designed in this kind of protected
    mode that they did.

    Intel probably assumed that the 286 would cover most needs, especially
    given that most systems had much less memory than the 16 MB
    theoretically allowed by the 286. And for bigger systems they
    released the 386.

    Concerning paging, all these scenarios are without paging. Paging was primarily a virtual-memory feature, not a memory-protection feature.

    Yes, exactly.

    It acquired memory protection only as far as it was easy with pages
    (i.e., at page granularity). So paging was not designed as a
    competition to segments as far as protection was concerned. If
    computer architects had managed to design segmentation with
    competitive performance, we would be seeing hardware with both paging
    and segmentation nowadays. Or maybe even without paging, now that
    memories tend to be big enough to make virtual memory mostly
    unnecessary.

    And I do not think they could make a
    32-bit processor with segmentation in the available transistor
    budget,

    Maybe not.

    and even if they managed it, it would be slowed down by too-long
    addresses (segment + 32-bit offset).

    On the contrary, every program that does not fit in the medium memory
    model on the 80286 would run at least as fast on such a CPU in real
    mode and significantly faster in protected mode.

    I think that Intel considered "programs that do not fit in the medium
    memory model" a tiny minority. IMO this is partially true: there
    is a class of programs which with some work fit into the medium
    model, but using a flat address space is easier. I think that
    on the 286 (that is, with a 16-bit bus) those programs (assuming
    enough tuning) run faster than a flat 32-bit version. But naive
    compilation in the large (or huge) model leads to worse speed than
    flat mode.

    In a somewhat different spirit: for programs that do not fit in
    64 kB but are not too large, there is a natural temptation to
    use some "compression" scheme for pointers and work mostly with
    16-bit pointers. That can be done without special hardware
    support. OTOH Intel segmentation is a specific proposal
    in that direction with hardware support. Clearly it is
    less flexible than software schemes based on native 32-bit
    addressing. But I think that Intel segmentation had some
    attractive features during the eighties.
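
    As a concrete illustration, here is a minimal sketch of such a
    software scheme in portable C (all names are mine, hypothetical):
    objects live in one arena and refer to each other by 16-bit granule
    indices instead of full pointers.

    #include <stdint.h>
    #include <stdio.h>

    #define GRANULE   16u                  /* allocation granularity in bytes */
    #define ARENA_MAX (65536UL * GRANULE)  /* 1 MiB reachable through 16 bits */

    typedef uint16_t cptr;            /* compressed "pointer": granule index */

    static unsigned char arena[ARENA_MAX];
    static uint32_t arena_top;        /* bump-allocator watermark */

    /* Hand out nbytes (rounded up to a granule); return a 16-bit handle. */
    static cptr cp_alloc(uint32_t nbytes)
    {
        uint32_t off = arena_top;
        arena_top += (nbytes + GRANULE - 1) & ~(uint32_t)(GRANULE - 1);
        return (cptr)(off / GRANULE);
    }

    /* Expand a 16-bit handle back into a native pointer. */
    static void *cp_deref(cptr p)
    {
        return &arena[(uint32_t)p * GRANULE];
    }

    int main(void)
    {
        cptr cell = cp_alloc(8);      /* a small cons-like object */
        *(int *)cp_deref(cell) = 42;
        printf("%d\n", *(int *)cp_deref(cell));
        return 0;
    }

    The 16-byte granule is the same "paragraph" unit the 8086 uses for
    its segment arithmetic; the software version is more flexible, but
    every dereference pays a shift and an add.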

    Another thing is the 386. I think that the designers of the 286
    thought that the 386 would remove some limitations. And the 386
    allowed bigger segments, removing one major limitation. OTOH
    for a 32-bit processor with segmentation it would be natural
    to have 32-bit segment registers. It is not clear to me
    whether the 16-bit segment registers in the 386 were deemed
    necessary for backward compatibility, or whether by the 386
    period the flat faction at Intel had won and they kept
    segmentation mostly for compatibility.

    --
    Waldek Hebisch

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Terje Mathisen@21:1/5 to Waldek Hebisch on Sun Jan 5 08:54:29 2025
    Waldek Hebisch wrote:
    Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
    There was various segmented hardware around, first and foremost (for
    the designers of the 80286), the iAPX432. And as you write, all the
    good reasons that resulted in segments on the iAPX432 also persisted
    in the 80286. However, given the slowness of segmentation, only the
    tiny (all in one segment), small (one segment for code and one for
    data), and maybe medium memory models (one data segment) are
    competitive in protected mode compared to real mode.

    AFAICS that covered the vast majority of programs during the eighties.
    Turbo Pascal offered only the medium memory model and was quite
    popular. Its code generator produced mediocre output, but
    real Turbo Pascal programs used a lot of inline assembly
    and performance was not bad.

    As someone who wrote megabytes of that asm, I feel qualified to comment:

    Turbo Pascal 1.0 itself ran in the Small model (64 kB code & data)
    AFAIR, but since the compiler/editor/linker/loader/debugger only used
    35 kB (37 kB if you also loaded the text error messages), it had
    enough room left over for the source code it compiled.

    From the very beginning it supported Medium as you state, with separate
    code in the CS reg and data+stack (DS+SS) sharing a single segment.

    This way you had to use ES for all cross-segment operations,
    particularly REP MOVSB block moves.

    Later versions supported the Large model, where all addresses were
    segment+offset pairs, as well as Huge, where the segment pointed at
    the object, rounded down to the nearest 16-byte boundary, and the
    offset (typically BX) was always in [0-15].
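
    A sketch of that normalization rule with plain integers (a
    hypothetical helper, not any particular compiler's actual
    huge-pointer code):

    #include <stdint.h>

    /* Real-mode "huge" pointer: linear address = seg*16 + off.  After
       each arithmetic step, whole 16-byte paragraphs are folded into the
       segment so the offset always lands back in [0,15]. */
    typedef struct {
        uint16_t seg;   /* paragraph number */
        uint16_t off;   /* kept in [0,15] after normalization */
    } huge_ptr;

    static huge_ptr huge_add(huge_ptr p, uint32_t delta)
    {
        uint32_t off = (uint32_t)p.off + delta;
        huge_ptr r;
        r.seg = (uint16_t)(p.seg + (off >> 4));  /* excess paragraphs -> seg */
        r.off = (uint16_t)(off & 0xFu);          /* remainder stays in off  */
        return r;
    }

    int main(void)
    {
        huge_ptr p = { 0x1234, 5 };
        p = huge_add(p, 100);           /* seg = 0x123A, off = 9 */
        return (p.seg == 0x123A && p.off == 9) ? 0 : 1;
    }

    That renormalization on every pointer addition is what made the huge
    model slow; in exchange, a single object could exceed 64 kB.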

    Intel apparently assumed that programmers were willing to spend
    extra work to get good performance, and IMO this was right
    as a general statement. Intel probably did not realize that
    programmers would be very reluctant to spend work on security
    features, and in particular to spend work on making programs
    fast in 286 protected mode.

    Protected mode could only be fast if segment reloads were rare; in my
    own code I would allocate arrays of largish objects, as many as
    would fit in 64K, then grab the next segment.
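
    A sketch of that allocation pattern; seg_alloc() is a hypothetical
    stand-in for however one obtained a fresh 64 kB segment, modeled here
    with malloc:

    #include <stddef.h>
    #include <stdlib.h>

    #define SEG_SIZE 65536UL

    /* Hypothetical: obtain a fresh 64 kB segment. */
    static unsigned char *seg_alloc(void)
    {
        return malloc(SEG_SIZE);
    }

    typedef struct {
        unsigned char *base;   /* current segment, NULL before first use */
        size_t         used;   /* bytes handed out from it so far        */
    } seg_pool;

    /* Pack fixed-size objects into 64 kB segments, as many as fit, so
       the expensive segment-register reload happens once per segment
       rather than once per object. */
    static void *pool_alloc(seg_pool *p, size_t obj_size)
    {
        if (p->base == NULL || p->used + obj_size > SEG_SIZE) {
            p->base = seg_alloc();     /* current segment full: grab next */
            p->used = 0;
        }
        p->used += obj_size;
        return p->base + p->used - obj_size;
    }

    int main(void)
    {
        seg_pool pool = { NULL, 0 };
        void *obj = pool_alloc(&pool, 1024);  /* 64 such objects per segment */
        return obj != NULL ? 0 : 1;
    }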

    Terje
    PS. Happy New Year!

    --
    - <Terje.Mathisen at tmsw.no>
    "almost all programming can be viewed as an exercise in caching"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Dallman@21:1/5 to Anton Ertl on Sun Jan 5 14:48:00 2025
    In article <2025Jan3.093849@mips.complang.tuwien.ac.at>, anton@mips.complang.tuwien.ac.at (Anton Ertl) wrote:

    The 8086 has branches with 8-bit offsets and branches and calls
    with 16-bit offsets. The 386 in 32-bit mode has branches with
    8-bit offsets and branches and calls with 32-bit offsets; if
    16-bit offsets for branches were useful enough for performance,
    they could instead have designed the longer branch length to be
    16 bits, and maybe a prefix for 32-bit branch offsets. That would
    be faster than what you outline, as soon as one call happens.
    But apparently 16-bit branches are not that beneficial, or they
    would have gone that way on the 386.

    Don't assume that Intel of the early 1980s would have done enough
    simulation to explore those possibilities thoroughly. Given the mistakes
    they made in the 1970s with iAPX 432 and in the 1990s with Itanium, both through lack of simulation with varying workloads, they may well have
    been working by rules of thumb and engineering "intuition."

    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Waldek Hebisch on Sun Jan 5 15:20:31 2025
    antispam@fricas.org (Waldek Hebisch) writes:
    Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:

    Another usage of segments for code would be to put the code segment of
    a shared object (known as DLL among Windowsheads) in a segment, and
    use far calls to call functions in other shared objects, while using
    near calls within a shared object. This allows sharing the code
    segments between different programs, and locating them anywhere in
    physical memory. However, AFAIK shared objects were not a thing in
    the 80286 timeframe; Unix only got them in the late 1980s.

    IIUC shared segments were widely used on Multics.

    They were widely used on both the Burroughs large systems
    and the HP-3000, both exemplars of segmentation
    done right, insofar as it can be.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Waldek Hebisch@21:1/5 to Anton Ertl on Fri Jan 3 03:37:50 2025
    Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
    David Brown <david.brown@hesbynett.no> writes:
    On 04/10/2024 19:30, Anton Ertl wrote:
    David Brown <david.brown@hesbynett.no> writes:
    On 04/10/2024 00:17, Lawrence D'Oliveiro wrote:
    Compare this with the pain the x86 world went through, over a much longer time, to move to 32-bit.

    The x86 started from 8-bit roots, and increased width over time, which is a very different path.

    Still, the question is why they did the 286 (released 1982) with its
    protected mode instead of adding IA-32 to the architecture, maybe at
    the start with a 386SX-like package and with real-mode only, or with
    the MMU in a separate chip (like the 68020/68851).


    I can only guess the obvious - it is what some big customer(s) were
    asking for. Maybe Intel didn't see the need for 32-bit computing in the
    markets they were targeting, or at least didn't see it as worth the cost.

    Anyone could see the problems that the PDP-11 had with its 16-bit
    limitation. Intel saw it in the iAPX 432 starting in 1975. It is
    obvious that, as soon as memory grows beyond 64KB (and already the
    8086 catered for that), the protected mode of the 80286 would be more
    of a hindrance than even the real mode of the 8086. I find it hard
    to believe that many customers would ask Intel for something like
    the 80286 protected mode with segments limited to 64KB, and even if
    they did, that Intel would listen to them. This looks much more like
    an idée fixe that one or more of the 286 project leaders had, and
    all customer input was made to fit into this idea, or was ignored.

    From my point of view the main drawbacks of the 286 are poor support
    for large arrays and problems for Lisp-like systems, which have a lot
    of small data structures and traverse them via pointers.

    However, playing devil's advocate, I can see sense in the 286. IMO
    Intel targeted quite a different market. IIUC the main intended
    market for the 8086 was industrial control and various embedded
    applications. The 286 was probably intended for similar markets,
    but with a stronger emphasis on security. In control applications
    it is typical to have several cooperating processes. The 286 allows
    separate local descriptor tables for each task, so a multitasking
    program may easily have, say, 30000 descriptors. Trying to get a
    similar number of separately protected objects using paging would
    require a similar number of pages, which with a 16 MB total address
    space leads to 512-byte pages. For smaller paged systems the
    situation is even worse: with 512 kB of memory, 512-byte pages lead
    to 1024 pages in total, which means that access control cannot be
    very granular and one would get significant memory fragmentation
    for parts smaller than a page. I can guess that Intel rejected
    very small pages as problematic to implement. So if the goal is
    fine-grained access control, then segmentation for a machine of
    the 286's size looks better than paging.
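
    The arithmetic implied here, written out (my reconstruction of the
    reasoning):

    \[
      \frac{2^{24}\,\mathrm{B}\ (16\,\mathrm{MB})}{30000\ \text{objects}}
        \approx 559\,\mathrm{B}
      \;\Rightarrow\; 512\,\mathrm{B}\ \text{pages (next power of two below)},
      \qquad
      \frac{512\,\mathrm{kB}}{512\,\mathrm{B/page}} = 1024\ \text{pages}.
    \]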

    Concerning code "model", I think that Intel assumend that
    most procedures would fit in a single segment and that
    average procedure will be of order of single kilobytes.
    Using 16-bit offsets for jumps inside procedure and
    segment-offset pair for calls is likely to lead to better
    or similar performance as purely 32-bit machine. For
    control applications it is likely that each procedure
    will access moderate number of segments and total amount
    of accessed data will be moderate. In other words, Intel
    probably considerd "mostly medium" model where procedure
    mainly accesses it data segment using just 16-bit offsets
    and occasionally accesses other segments.

    Compared to the PDP-11 this leads to reasonably natural
    code that uses some hundreds of kilobytes of memory,
    much better than the 128 kB limit of the PDP-11 with separate
    code and data areas. And segment manipulation also allows
    bigger programs.

    What went wrong? IIUC there were several control systems
    using 286 features, so there was some success. But PCs
    became the main user of x86 chips, and a significant fraction
    of PCs was used for gaming. Game authors wanted direct
    access to the hardware, which in the case of the 286 forced real mode.
    Also, for a long time the 8088 played a major role, and PC software
    "had" to run on the 8088. Software vendors theoretically could
    release separate versions for each processor or do some
    runtime switching of critical procedures, but the easiest way
    was to depend on compatibility with the 8088. "Better" OSes
    went the Unix way, depending on paging and not using segmentation.
    But IIUC the first paging Unix appeared _after_ the release of the 286.
    In 286 times Multics was highly regarded, and it depended heavily
    on segmentation. MVS was using paging hardware but was
    talking about segments, except that MVS segmentation
    was flawed because some addresses far outside a segment were
    considered part of a different segment. I think that also
    in VMS there was some talk of segments. So the creators
    of the 286 could believe that they were providing the "right thing"
    and not a fake made possible with paging hardware.

    Concerning the cost, the 80286 has 134,000 transistors, compared to supposedly 68,000 for the 68000, and the 190,000 of the 68020. I am
    sure that Intel could have managed a 32-bit 8086 (maybe even with the
    nice addressing modes that the 386 has in 32-bit mode) with those
    134,000 transistors if Motorola could build the 68000 with half of
    that.

    I think that Intel could have managed to build a "mostly" 32-bit
    processor in the transistor budget of the 8086, that is, with say
    7 registers of 32 bits each, where each register could be treated
    as a pair of 16-bit registers, and 32-bit operations would take
    twice as much time as 16-bit operations. But I think that such a
    processor would be slower (say 10% slower) than the 8086, mostly
    because of the greater need to use longer addresses. Similarly, a
    hypothetical 32-bit 286 would be slower than the real 286. And I
    do not think they could make a 32-bit processor with segmentation
    in the available transistor budget, and even if they managed it,
    it would be slowed down by too-long addresses (segment + 32-bit
    offset).

    --
    Waldek Hebisch

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Anton Ertl@21:1/5 to Waldek Hebisch on Fri Jan 3 08:38:49 2025
    antispam@fricas.org (Waldek Hebisch) writes:
    From my point of view the main drawbacks of the 286 are poor support
    for large arrays and problems for Lisp-like systems, which have a lot
    of small data structures and traverse them via pointers.

    Yes. In the first case the segments are too small, in the latter case
    there are too few segments (if you have one segment per object).

    Concerning code "model", I think that Intel assumend that
    most procedures would fit in a single segment and that
    average procedure will be of order of single kilobytes.
    Using 16-bit offsets for jumps inside procedure and
    segment-offset pair for calls is likely to lead to better
    or similar performance as purely 32-bit machine.

    With the 80286's segments and their slowness, that is very doubtful.
    The 8086 has branches with 8-bit offsets and branches and calls with
    16-bit offsets. The 386 in 32-bit mode has branches with 8-bit
    offsets and branches and calls with 32-bit offsets; if 16-bit offsets
    for branches were useful enough for performance, they could
    instead have designed the longer branch length to be 16 bits, and
    maybe a prefix for 32-bit branch offsets. That would be faster than
    what you outline, as soon as one call happens. But apparently 16-bit
    branches are not that beneficial, or they would have gone that way on
    the 386.

    Another usage of segments for code would be to put the code segment of
    a shared object (known as DLL among Windowsheads) in a segment, and
    use far calls to call functions in other shared objects, while using
    near calls within a shared object. This allows sharing the code
    segments between different programs, and locating them anywhere in
    physical memory. However, AFAIK shared objects were not a thing in
    the 80286 timeframe; Unix only got them in the late 1980s.

    I used Xenix on a 286 in 1986 or 1987; my impression is that programs
    were limited to 64KB code and 64KB data size, exactly the PDP-11 model
    you denounce.

    What went wrong? IIUC there were several control systems
    using 286 features, so there was some success. But PCs
    became the main user of x86 chips, and a significant fraction
    of PCs was used for gaming. Game authors wanted direct
    access to the hardware, which in the case of the 286 forced real mode.

    Every successful program used direct hardware access because of
    performance; the rest waned. Using BIOS calls was just too slow.
    Lotus 1-2-3 won out over VisiCalc and Multiplan by being faster,
    thanks to writing directly to video memory.

    But IIUC the first paging Unix appeared _after_ the release of the 286.

    From <https://en.wikipedia.org/wiki/History_of_the_Berkeley_Software_Distribution#3BSD>:

    |The kernel of 32V was largely rewritten by Berkeley graduate student
    |Özalp Babaoğlu to include a virtual memory implementation, and a
    |complete operating system including the new kernel, ports of the 2BSD
    |utilities to the VAX, and the utilities from 32V was released as 3BSD
    |at the end of 1979.

    The 80286 was introduced on February 1, 1982.

    In 286 times Multics was highly regarded, and it depended heavily
    on segmentation. MVS was using paging hardware but was
    talking about segments, except that MVS segmentation
    was flawed because some addresses far outside a segment were
    considered part of a different segment. I think that also
    in VMS there was some talk of segments. So the creators
    of the 286 could believe that they were providing the "right thing"
    and not a fake made possible with paging hardware.

    There was various segmented hardware around, first and foremost (for
    the designers of the 80286), the iAPX432. And as you write, all the
    good reasons that resulted in segments on the iAPX432 also persisted
    in the 80286. However, given the slowness of segmentation, only the
    tiny (all in one segment), small (one segment for code and one for
    data), and maybe medium memory models (one data segment) are
    competitive in protected mode compared to real mode.

    So if they really had wanted protected mode to succeed, they should
    have designed in 32-bit data segments (and maybe also 32-bit code
    segments). Alternatively, if protected mode and the 32-bit addresses
    do not fit in the 286 transistor budget, a CPU that implements the
    32-bit feature and leaves out protected mode would have been more
    popular than the 80286; and (depending on how the 32-bit extension was implemented) it might have been a better stepping stone towards the
    kind of CPU with protected mode that they imagined; but the alt-386
    designers probably would not have designed in this kind of protected
    mode that they did.

    Concerning paging, all these scenarios are without paging. Paging was primarily a virtual-memory feature, not a memory-protection feature.
    It acquired memory protection only as far as it was easy with pages
    (i.e., at page granularity). So paging was not designed as a
    competition to segments as far as protection was concerned. If
    computer architects had managed to design segmentation with
    competitive performance, we would be seeing hardware with both paging
    and segmentation nowadays. Or maybe even without paging, now that
    memories tend to be big enough to make virtual memory mostly
    unnecessary.

    And I do not think they could make a
    32-bit processor with segmentation in the available transistor
    budget,

    Maybe not.

    and even if they managed it, it would be slowed down by too-long
    addresses (segment + 32-bit offset).

    On the contrary, every program that does not fit in the medium memory
    model on the 80286 would run at least as fast on such a CPU in real
    mode and significantly faster in protected mode.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Anton Ertl on Fri Jan 3 18:11:53 2025
    anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
    antispam@fricas.org (Waldek Hebisch) writes:
    From my point of view the main drawbacks of the 286 are poor support
    for large arrays and problems for Lisp-like systems, which have a lot
    of small data structures and traverse them via pointers.

    But IIUC the first paging Unix appeared _after_ the release of the 286.

    From <https://en.wikipedia.org/wiki/History_of_the_Berkeley_Software_Distribution#3BSD>:

    |The kernel of 32V was largely rewritten by Berkeley graduate student
    |Özalp Babaoğlu to include a virtual memory implementation, and a
    |complete operating system including the new kernel, ports of the 2BSD
    |utilities to the VAX, and the utilities from 32V was released as 3BSD
    |at the end of 1979.

    There was also a version of Western Electric Unix that ran on the VAX in that time frame.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)