e.g.
"
// start of barrel
EventRec far* searchp = (EventRec far*) work.bufs;
"
My eyes! My eyes! That was COMPACT model code, so 64k of code and 1MB of data, code addresses were 16bit offsets to the CS reg and data was far
so 32 bits of segment and offset of DS or ES. And of course you had to
be extra careful of any pointer arithmetic as a far pointer wrapped
after 64k. You had to use slower HUGE pointers to get automatic normalisation. God it was shit.
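For anyone who never had to suffer it, here is a rough sketch of that arithmetic in portable, modern C rather than real 16-bit code. The names far_ptr, add_far and normalize are purely illustrative (the real thing was the compiler's far/huge keywords); the point is only to show why a far pointer wraps inside its 64k segment while a huge-style normalised pointer carries into the segment.
"
#include <stdint.h>
#include <stdio.h>

typedef struct { uint16_t seg, off; } far_ptr;

/* 20-bit physical address: segment * 16 + offset */
static uint32_t linear(far_ptr p)
{
    return ((uint32_t)p.seg << 4) + p.off;
}

/* FAR arithmetic: only the 16-bit offset moves, so it wraps at 64k
 * and the segment never changes. */
static far_ptr add_far(far_ptr p, uint32_t n)
{
    p.off = (uint16_t)(p.off + n);
    return p;
}

/* What HUGE arithmetic bought you: renormalise so the offset stays
 * small and additions land in the right segment instead of wrapping. */
static far_ptr normalize(far_ptr p)
{
    uint32_t lin = linear(p);
    far_ptr q = { (uint16_t)(lin >> 4), (uint16_t)(lin & 0xF) };
    return q;
}

int main(void)
{
    far_ptr p = { 0x2000, 0xFFF0 };          /* 16 bytes below the 64k wrap */

    far_ptr f = add_far(p, 0x20);            /* far-style addition  */
    far_ptr h = add_far(normalize(p), 0x20); /* huge-style addition */

    printf("start    %04X:%04X -> %05lX\n",
           (unsigned)p.seg, (unsigned)p.off, (unsigned long)linear(p));
    printf("far  +32 %04X:%04X -> %05lX  (wrapped back into the segment)\n",
           (unsigned)f.seg, (unsigned)f.off, (unsigned long)linear(f));
    printf("huge +32 %04X:%04X -> %05lX  (carried into the next segment)\n",
           (unsigned)h.seg, (unsigned)h.off, (unsigned long)linear(h));
    return 0;
}
"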
On 25.11.24 18:33, mm0fmf wrote:
My eyes! My eyes! That was COMPACT model code, so 64k of code and 1MB of
data, code addresses were 16bit offsets to the CS reg and data was far
so 32 bits of segment and offset of DS or ES. And of course you had to
be extra careful of any pointer arithmetic as a far pointer wrapped
after 64k. You had to use slower HUGE pointers to get automatic
normalisation. God it was shit.
And to consider that, at that time, processors like MC68000 or NS32016
were readily available.
On 2024-11-26, Josef Möllers <josef@invalid.invalid> wrote:
On 25.11.24 18:33, mm0fmf wrote:
My eyes! My eyes! That was COMPACT model code, so 64k of code and 1MB of
data, code addresses were 16bit offsets to the CS reg and data was far
so 32 bits of segment and offset of DS or ES. And of course you had to
be extra careful of any pointer arithmetic as a far pointer wrapped
after 64k. You had to use slower HUGE pointers to get automatic
normalisation. God it was shit.
And to consider that, at that time, processors like MC68000 or NS32016
were readily available.
Which proves once again that a shitty design beats a good one
if it's released first.
Everybody was yapping about the 640K barrier. I was more concerned
with the 64K barrier. I remember manually normalizing pointers
everywhere, and if I wanted to work with large arrays of structures
I'd copy individual structures to a work area byte by byte so I
didn't get bitten by segment wrap-around in the middle of a structure.
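That copy-out trick looked roughly like the sketch below, again in portable C with the 16-bit machinery only simulated by a big byte array. Everything here is invented for illustration (fetch_far_byte, copy_event, the EventRec fields); the real code would have read through a genuine far pointer.
"
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint8_t memory[1 << 20];   /* stand-in for the 1MB real-mode space */

typedef struct { uint16_t seg, off; } far_ptr;

/* Read one byte at (seg:off)+i, normalising into a 20-bit linear
 * address each time, so a read can never be hit by offset wrap. */
static uint8_t fetch_far_byte(far_ptr p, uint32_t i)
{
    uint32_t lin = ((uint32_t)p.seg << 4) + p.off + i;
    return memory[lin & 0xFFFFF];
}

/* Invented stand-in for the EventRec in the quoted snippet. */
typedef struct { uint32_t when; uint16_t kind; char tag[10]; } EventRec;

/* Copy the n-th EventRec from a far array into a near work area,
 * byte by byte, so 16-bit offset wrap can never split a structure. */
static void copy_event(far_ptr base, uint32_t n, EventRec *work)
{
    uint8_t *dst = (uint8_t *)work;
    uint32_t start = n * (uint32_t)sizeof(EventRec);
    for (uint32_t i = 0; i < sizeof(EventRec); i++)
        dst[i] = fetch_far_byte(base, start + i);
}

int main(void)
{
    /* Park one record six bytes before the 64k offset boundary, where
     * naive far pointer arithmetic would wrap mid-structure. */
    EventRec src = { 123456u, 7, "barrel" };
    far_ptr base = { 0x2000, 0xFFFA };
    memcpy(&memory[((uint32_t)base.seg << 4) + base.off], &src, sizeof src);

    EventRec work;
    copy_event(base, 0, &work);
    printf("when=%lu kind=%u tag=%s\n",
           (unsigned long)work.when, (unsigned)work.kind, work.tag);
    return 0;
}
"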
As the joke goes, aren't you glad the iAPX432 died out?
Otherwise a truly horrible Intel architecture might have
taken over the world.
On 25.11.24 18:33, mm0fmf wrote:
[...]
e.g.
"
// start of barrel
EventRec far* searchp = (EventRec far*) work.bufs;
"
My eyes! My eyes! That was COMPACT model code, so 64k of code and 1MB
of data, code addresses were 16bit offsets to the CS reg and data was
far so 32 bits of segment and offset of DS or ES. And of course you
had to be extra careful of any pointer arithmetic as a far pointer
wrapped after 64k. You had to use slower HUGE pointers to get
automatic normalisation. God it was shit.
And to consider that, at that time, processors like MC68000 or NS32016
were readily available.
On 26/11/2024 17:37, Josef Möllers wrote:
On 25.11.24 18:33, mm0fmf wrote:
My eyes! My eyes! That was COMPACT model code, so 64k of code and 1MB
of data, code addresses were 16bit offsets to the CS reg and data was
far so 32 bits of segment and offset of DS or ES. And of course you
had to be extra careful of any pointer arithmetic as a far pointer
wrapped after 64k. You had to use slower HUGE pointers to get
automatic normalisation. God it was shit.
And to consider that, at that time, processors like MC68000 or NS32016
were readily available.
Backwards compatibility.
DOS came from 8080-based CP/M, to run on an 8086, to where 8-bit code
could be easily ported.
And so we were stuck with that architecture.
On Tue, 26 Nov 2024 18:37:02 +0100, Josef Möllers
<josef@invalid.invalid> wrote:
And to consider that, at that time, processors like MC68000 or NS32016
were readily available.
At the time when the design decision was made, the Motorola 68000 was
not ready for production.
From https://en.wikipedia.org/wiki/IBM_Personal_Computer :
"The 68000 was considered the best choice,[19] but was not
production-ready like the others."
Robert Roland wrote:
On Tue, 26 Nov 2024 18:37:02 +0100, Josef Möllers
<josef@invalid.invalid> wrote:
And to consider that, at that time, processors like MC68000 or NS32016
were readily available.
At the time when the design decision was made, the Motorola 68000 was
not ready for production.
From https://en.wikipedia.org/wiki/IBM_Personal_Computer :
"The 68000 was considered the best choice,[19] but was not
production-ready like the others."
I also remember a zilog Z8000?
Intel put the "backward" in "backward compatible".
The Natural Philosopher <tnp@invalid.invalid> writes:
I also remember a zilog Z8000?
Yes, although also with a segmented memory model.
On Thu, 28 Nov 2024 19:42:18 GMT, Charlie Gibbs wrote:
Intel put the "backward" in "backward compatible".
I recall the term “backward combatible” used to describe the feelings of violence some people had towards the requirement for backward
compatibility with certain kinds of brain death ...
On 2024-12-18, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Thu, 28 Nov 2024 19:42:18 GMT, Charlie Gibbs wrote:
Intel put the "backward" in "backward compatible".
I recall the term “backward combatible” used to describe the feelings of
violence some people had towards the requirement for backward
compatibility with certain kinds of brain death ...
Then there's "bug-compatible", where so many people and systems
have adapted to an existing bug that you can't fix it without
breaking just about everything - so any future versions have
to also contain the bug, or at least a good emulation of it.
Qwerty keyboards being a prime example.