On 10/4/2024 12:30 PM, Anton Ertl wrote:
Say, pretty much none of the modern graphics programs (that I am aware
of) really support working with 16-color and 256-color bitmap images
with a manually specified color palette.
Typically, modern programs are true-color internally, supporting 256-color only as an import/export format with an automatically generated "optimized" palette, and often not bothering with 16-color images at all. Not so useful if one actually needs an explicit color palette (and has little need for "photo manipulation" features).
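To be clear about what is being asked for, an indexed-color image is a trivially simple structure; a rough C sketch (types and names illustrative, not from any particular program):

/* a 16-color indexed image: each pixel stores a palette index,
   and the palette itself is specified by hand */
struct rgb { unsigned char r, g, b; };

struct indexed_image {
    int width, height;
    struct rgb palette[16];    /* the manually chosen colors */
    unsigned char *pixels;     /* width*height entries, each 0..15 */
};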
And, most people generally haven't bothered with this stuff since the
Win16 era (even the people doing "pixel art" are still generally doing
so using true-color PNGs or similar).
Blame PowerPoint ... No more evil tool ever existed.
MS PaintBrush became MS Paint and seemingly mostly got dumbed down as
time went on.
The closest modern alternative is Paint.NET, but it still doesn't allow manual palette control the way BitEdit did.
antispam@fricas.org (Waldek Hebisch) writes:
From my point of view the main drawbacks of the 286 are poor support for large arrays, and problems for Lisp-like systems, which have a lot of small data structures and traverse them via pointers.
Yes. In the first case the segments are too small, in the latter case
there are too few segments (if you have one segment per object).
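For illustration, this is roughly what a 16-bit compiler's "huge" pointer support has to do for an array larger than 64KB; every access pays for the renormalization (a sketch in Borland-style C, MK_FP from dos.h):

#include <dos.h>   /* MK_FP(), Borland-style 16-bit DOS compilers */

/* index a >64KB array given a linear address: the address must be
   split into a segment:offset pair on every access (real mode only --
   the segment = linear>>4 trick is exactly what protected mode
   takes away) */
unsigned char huge_read(unsigned long linear)
{
    unsigned seg = (unsigned)(linear >> 4);   /* paragraph number */
    unsigned off = (unsigned)(linear & 0xFu);
    unsigned char far *p = (unsigned char far *)MK_FP(seg, off);
    return *p;
}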
Concerning the code "model", I think that Intel assumed that most procedures would fit in a single segment and that the average procedure would be on the order of a few kilobytes. Using 16-bit offsets for jumps inside a procedure and a segment:offset pair for calls is likely to lead to performance better than or similar to a purely 32-bit machine.
With the 80286's segments and their slowness, that is very doubtful.
The 8086 has branches with 8-bit offsets and branches and calls with 16-bit offsets. The 386 in 32-bit mode has branches with 8-bit offsets and branches and calls with 32-bit offsets; if 16-bit branch offsets were useful enough for performance, they could instead have made the longer branch length 16 bits, perhaps with a prefix for 32-bit branch offsets. That would be faster than what you outline, as soon as one call happens. But apparently 16-bit branches are not that beneficial, or they would have gone that way on the 386.
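The cost difference being argued about looks like this at the source level (a sketch using the near/far keywords of 16-bit DOS compilers such as Borland or Open Watcom):

/* near call: CALL rel16, stays in the current code segment.
   far call: CALL seg:off, reloads CS -- and in 286 protected mode
   every CS reload means a descriptor-table load plus protection
   checks. */
int near add_near(int a, int b) { return a + b; }
int far  add_far (int a, int b) { return a + b; }

int use(void)
{
    return add_near(1, 2)   /* cheap */
         + add_far(3, 4);   /* expensive on the 286 */
}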
Another usage of segments for code would be to put the code segment of
a shared object (known as DLL among Windowsheads) in a segment, and
use far calls to call functions in other shared objects, while using
near calls within a shared object. This allows the code segments to be shared between different programs, and to be located anywhere in physical memory. However, AFAIK shared objects were not a thing in the 80286 timeframe; Unix only got them in the late 1980s.
I used Xenix on a 286 in 1986 or 1987; my impression is that programs
were limited to 64KB code and 64KB data size, exactly the PDP-11 model
you denounce.
What went wrong? IIUC there were several control systems
using 286 features, so there was some success. But PCs became the main user of x86 chips, and a significant fraction of PCs was used for gaming. Game authors wanted direct access to the hardware, which in the case of the 286 forced real mode.
Every successful piece of software accessed the hardware directly, for performance; the rest waned. Using BIOS calls was just too slow. Lotus 1-2-3 won out over VisiCalc and Multiplan by being faster, from writing directly to video memory.
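The classic trick, for concreteness (Borland-style C; MK_FP and the 0xB800 text buffer are the standard CGA setup):

#include <dos.h>   /* MK_FP() */

/* put "Hi" in the top-left corner of the CGA text screen by writing
   straight into video memory at B800:0000, bypassing int 10h -- the
   kind of thing Lotus 1-2-3 did for speed */
void say_hi(void)
{
    unsigned char far *vram = (unsigned char far *)MK_FP(0xB800, 0);
    vram[0] = 'H'; vram[1] = 0x07;   /* character, then attribute */
    vram[2] = 'i'; vram[3] = 0x07;   /* 0x07 = light grey on black */
}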
But IIUC the first paging Unix appeared _after_ the release of the 286.
From <https://en.wikipedia.org/wiki/History_of_the_Berkeley_Software_Distribution#3BSD>:
|The kernel of 32V was largely rewritten by Berkeley graduate student
|Özalp Babaoğlu to include a virtual memory implementation, and a
|complete operating system including the new kernel, ports of the 2BSD
|utilities to the VAX, and the utilities from 32V was released as 3BSD
|at the end of 1979.
The 80286 was introduced on February 1, 1982.
In 286 times Multics was highly regarded, and it depended heavily on segmentation. MVS used paging hardware but talked about segments, except that MVS segmentation was flawed, because some addresses far outside a segment were considered part of a different segment. I think that in VMS there was also some talk about segments. So the creators of the 286 could believe that they were providing the "right thing" and not a fake made possible by paging hardware.
There was various segmented hardware around, first and foremost (for
the designers of the 80286), the iAPX432. And as you write, all the
good reasons that resulted in segments on the iAPX432 also persisted
in the 80286. However, given the slowness of segmentation, only the
tiny (all in one segment), small (one segment for code and one for
data), and maybe medium memory models (one data segment) are
competitive in protected mode compared to real mode.
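For concreteness, the memory models are just compiler switches that change pointer sizes; e.g., with Open Watcom one can compile the following under -ms, -mm, or -ml and watch the sizes change (a sketch):

#include <stdio.h>

/* small model: near code, near data -> 2-byte pointers for both.
   medium: far code, near data. large: far code, far data. */
int main(void)
{
    printf("code pointer: %u bytes\n", (unsigned)sizeof(int (*)(void)));
    printf("data pointer: %u bytes\n", (unsigned)sizeof(char *));
    return 0;   /* expect 2/2 (-ms), 4/2 (-mm), 4/4 (-ml) */
}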
So if they really had wanted protected mode to succeed, they should
have designed in 32-bit data segments (and maybe also 32-bit code
segments). Alternatively, if protected mode and 32-bit addresses did not both fit in the 286 transistor budget, a CPU that implemented the 32-bit features and left out protected mode would have been more popular than the 80286; and (depending on how the 32-bit extension was implemented) it might have been a better stepping stone towards the kind of CPU with protected mode that they imagined; but the designers of such an alt-386 probably would not have designed in the kind of protected mode that the 286 got.
Concerning paging, all these scenarios are without paging. Paging was primarily a virtual-memory feature, not a memory-protection feature.
It acquired memory protection only as far as it was easy with pages
(i.e., at page granularity). So paging was not designed as competition for segments as far as protection was concerned. If computer architects had managed to design segmentation with competitive performance, we would be seeing hardware with both paging and segmentation nowadays. Or maybe even without paging, now that memories tend to be big enough to make virtual memory mostly unnecessary.
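What "protection at page granularity" amounts to, concretely (a sketch of a 386-style page-table entry; the 386 postdates the 286, but it shows all the protection paging offers):

/* 386-style 32-bit page-table entry: per-page protection is just
   these few bits, nothing like a segment's base/limit/type checks */
typedef unsigned long pte_t;

#define PTE_PRESENT   (1ul << 0)     /* page is mapped */
#define PTE_WRITABLE  (1ul << 1)     /* writable (else read-only) */
#define PTE_USER      (1ul << 2)     /* user-mode accessible */
#define PTE_FRAME     0xFFFFF000ul   /* physical page frame address */

/* build an entry mapping a user-visible, writable page */
pte_t make_pte(unsigned long phys_frame)
{
    return (phys_frame & PTE_FRAME) | PTE_PRESENT | PTE_WRITABLE | PTE_USER;
}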
And I do not think they could make a 32-bit processor with segmentation in the available transistor budget,
Maybe not.
and even if they managed it, it would be slowed down by too-long addresses (segment + 32-bit offset).
On the contrary, every program that does not fit in the medium memory
model on the 80286 would run at least as fast on such a CPU in real
mode and significantly faster in protected mode.
Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
There was various segmented hardware around, first and foremost (for
the designers of the 80286), the iAPX432. And as you write, all the
good reasons that resulted in segments on the iAPX432 also persisted
in the 80286. However, given the slowness of segmentation, only the
tiny (all in one segment), small (one segment for code and one for
data), and maybe medium memory models (one data segment) are
competitive in protected mode compared to real mode.
AFAICS that covered the vast majority of programs during the eighties. Turbo Pascal offered only the medium memory model and was quite popular. Its code generator produced mediocre output, but real Turbo Pascal programs used a lot of inline assembly, and performance was not bad.
Intel apparently assumed that programmers were willing to spend extra work to get good performance, and IMO this was right as a general statement. Intel probably did not realize that programmers would be very reluctant to spend work on security features, and in particular to spend work on making programs fast in 286 protected mode.
Anton Ertl <anton@mips.complang.tuwien.ac.at> wrote:
Another usage of segments for code would be to put the code segment of
a shared object (known as DLL among Windowsheads) in a segment, and
use far calls to call functions in other shared objects, while using
near calls within a shared object. This allows the code segments to be shared between different programs, and to be located anywhere in physical memory. However, AFAIK shared objects were not a thing in the 80286 timeframe; Unix only got them in the late 1980s.
IIUC shared segments were widely used on Multics.
David Brown <david.brown@hesbynett.no> writes:
On 04/10/2024 19:30, Anton Ertl wrote:
David Brown <david.brown@hesbynett.no> writes:
On 04/10/2024 00:17, Lawrence D'Oliveiro wrote:
Compare this with the pain the x86 world went through, over a much longer time, to move to 32-bit.
The x86 started from 8-bit roots, and increased width over time, which is a very different path.
Still, the question is why they did the 286 (released 1982) with its
protected mode instead of adding IA-32 to the architecture, maybe at
the start with a 386SX-like package and with real-mode only, or with
the MMU in a separate chip (like the 68020/68851).
I can only guess the obvious - it is what some big customer(s) were asking for. Maybe Intel didn't see the need for 32-bit computing in the markets they were targeting, or at least didn't see it as worth the cost.
Anyone could see the problems that the PDP-11 had with its 16-bit
limitation. Intel saw it in the iAPX 432 starting in 1975. It is
obvious that, as soon as memory grows beyond 64KB (and already the
8086 catered for that), the protected mode of the 80286 would be more
of a hindrance than even the real mode of the 8086. I find it hard to believe that many customers would ask Intel for something like the 80286 protected mode with segments limited to 64KB, and even if they did, that Intel would listen to them. This looks much more like an idée fixe to me that one or more of the 286 project leaders had, and all customer input was made to fit into this idea, or was ignored.
Concerning the cost, the 80286 has 134,000 transistors, compared to supposedly 68,000 for the 68000, and the 190,000 of the 68020. I am
sure that Intel could have managed a 32-bit 8086 (maybe even with the
nice addressing modes that the 386 has in 32-bit mode) with those
134,000 transistors if Motorola could build the 68000 with half of
that.