Apparently someone wants to create a big-endian RISC-V, and someone
proposed adding support for it to Linux. This has evoked the
following guidelines for designing bad architectures from Linus
Torvalds (extracted from <https://lwn.net/ml/all/CAHk-=wji-hEV1U1x92TLsrPbpSPqDD7Cgv2YwzeL-mMbM7iaRA@mail.gmail.com/>):
|If somebody really wants to create bad hardware in this day and age,
|please do make it big-endian, and also add the following very
|traditional features for sh*t-for-brains hardware:
|
| - virtually tagged caches
|
| You can't really claim to be worst-of-the-worst without virtually
|tagged caches.
|
| Tears of joy as you debug cache alias issues and of flushing caches
|on context switches.
|
| - only do aligned memory accesses
|
| Bonus point for not even faulting, and just loading and storing
|garbage instead.
|
| - expose your pipeline details in the ISA
|
| Delayed branch slots or explicit instruction grouping is a great
|way to show that you eat crayons for breakfast before you start
|designing your hardware platform
|
| - extended memory windows
|
| It was good enough for 8-bit machines in order to address more
|memory, and became a HIGHMEM.SYS staple in the DOS world, and then got
|taken up by both x86 and arm in their 32-bit days as HIGHMEM support.
|
| It has decades of history, and an architecture cannot be called
|truly awful if it doesn't support some kind of HIGHMEM crap.
|
| - register windows. It's like extended memory, but for your registers!
|
| Please make sure to also have hardware support for filling and
|spilling them, but make it limited enough that system software has to
|deal with faults at critical times. Nesting exceptions is joyful!
|
| Bonus points if they are rotating and overflowing them silently
|just corrupts data. Keep those users on their toes!
|
| - in fact, require software fallbacks for pretty much anything unusual.
|
| TLB fills? They might only happen every ten or twenty instructions,
|so make them fault to some software implementation to really show your
|mad hardware skillz.
|
| denormals or any other FP precision issues? No, no, don't waste
|hardware on getting it right, software people *LOVE* to clean up after
|you.
|
| Remember: your mom picked up your dirty laundry from your floor,
|and software people are like the super-moms of the world.
|
| - make exceptions asynchronous.
|
| That's another great way to make sure people stay on their toes.
|Make sure machine check exceptions can happen in any context, so that
|you are guaranteed to have a dead machine any time anything goes
|wrong.
|
| But you should also take the non-maskability of NMI to heart, and
|make sure that software cannot possibly write code that is truly
|atomic. Because the NM in NMI is what makes it great!
|
| Floating point! Make sure that the special cases you don't deal with
|in hardware are also delayed so that the software people have extra
|joy in trying to figure out just WTF happened. See the previous entry:
|they live for that stuff.
|
|I'm sure I've forgotten many other points. And I'm sure that hardware
|people will figure it out!
| - virtually tagged caches
|
| You can't really claim to be worst-of-the-worst without virtually
|tagged caches.
|
| Tears of joy as you debug cache alias issues and of flushing caches
|on context switches.

That is only true if one insists on an OS with Multiple Address Spaces.
Virtually tagged caches are fine for a Single Address Space (SAS) OS.
AFAIK, the main problem with SASOS is "backward compatibility", most importantly with `fork`. The Mill people proposed a possible solution,
which seemed workable, but it's far from clear to me whether it would
work well enough if you want to port, say, Debian to such
an architecture.
Stefan
Stefan Monnier <monnier@iro.umontreal.ca> posted:
| - virtually tagged caches
| You can't really claim to be worst-of-the-worst without virtually
|tagged caches.
| Tears of joy as you debug cache alias issues and of flushing caches
|on context switches.
That is only true if one insists on an OS with Multiple Address Spaces.
Virtually tagged caches are fine for a Single Address Space (SAS) OS.
AFAIK, the main problem with SASOS is "backward compatibility", most
importantly with `fork`. The Mill people proposed a possible solution,
which seemed workable, but it's far from clear to me whether it would
work well enough if you want to port, say, Debian to such
an architecture.
SASOS seems like a bridge too far.
Stefan
Fork is not a problem with virtual tagged caches or SAS. Normal fork
starts the child with a copy of the parent's address mapping, and uses
"Copy on Write" (COW) to create unique pages as soon as either process
does a write.
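As an aside for anyone following along, here is a minimal C sketch (mine,
not from any of the posts) of the user-visible contract being described:
after fork() both processes see the same virtual addresses, but a write by
one must not become visible to the other, however the kernel arranges the
copying (COW, Copy-On-Access, or an eager copy).

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int value = 42;                 /* same virtual address in both processes */

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {             /* child */
        value = 99;             /* first write: the kernel gives the child its own page */
        printf("child:  &value=%p value=%d\n", (void *)&value, value);
        _exit(0);
    }
    wait(NULL);                 /* parent still sees its own, unmodified copy */
    printf("parent: &value=%p value=%d\n", (void *)&value, value);
    return 0;
}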
anton@mips.complang.tuwien.ac.at (Anton Ertl) posted:
Apparently someone wants to create a big-endian RISC-V, and someone
proposed adding support for it to Linux. This has evoked the
following guidelines for designing bad architectures from Linus
Torvalds (extracted from
<https://lwn.net/ml/all/CAHk-=wji-hEV1U1x92TLsrPbpSPqDD7Cgv2YwzeL-mMbM7iaRA@mail.gmail.com/>):
|If somebody really wants to create bad hardware in this day and age,
|please do make it big-endian, and also add the following very
|traditional features for sh*t-for-brains hardware:
|
| - virtually tagged caches
|
| You can't really claim to be worst-of-the-worst without virtually
|tagged caches.
|
| Tears of joy as you debug cache alias issues and of flushing caches
|on context switches.
Avoided.
| - only do aligned memory accesses
|
| Bonus point for not even faulting, and just loading and storing
|garbage instead.
Avoided.
| - expose your pipeline details in the ISA
|
| Delayed branch slots or explicit instruction grouping is a great
|way to show that you eat crayons for breakfast before you start
|designing your hardware platform
Avoided
| - extended memory windows
|
| It was good enough for 8-bit machines in order to address more
|memory, and became a HIGHMEM.SYS staple in the DOS world, and then got
|taken up by both x86 and arm in their 32-bit days as HIGHMEM support.
Avoided
| It has decades of history, and an architecture cannot be called
|truly awful if it doesn't support some kind of HIGHMEM crap.
|
| - register windows. It's like extended memory, but for your registers!
|
| Please make sure to also have hardware support for filling and
|spilling them, but make it limited enough that system software has to
|deal with faults at critical times. Nesting exceptions is joyful!
|
| Bonus points if they are rotating and overflowing them silently
|just corrupts data. Keep those users on their toes!
Avoided
| - in fact, require software fallbacks for pretty much anything unusual.
|
| TLB fills? They might only happen every ten or twenty instructions,
|so make them fault to some software implementation to really show your
|mad hardware skillz.
Avoided--and mine are even coherent so you don't even have to shoot
them down.
| denormals or any other FP precision issues? No, no, don't waste
|hardware on getting it right, software people *LOVE* to clean up after
|you.
|
| Remember: your mom picked up your dirty laundry from your floor,
|and software people are like the super-moms of the world.
Avoided.
| - make exceptions asynchronous.
Avoided
| That's another great way to make sure people stay on their toes.
|Make sure machine check exceptions can happen in any context, so that
|you are guaranteed to have a dead machine any time anything goes
|wrong.
|
| But you should also take the non-maskability of NMI to heart, and
|make sure that software cannot possibly write code that is truly
|atomic. Because the NM in NMI is what makes it great!
Avoided
| Floating point! Make sure that the special cases you don't deal with
|in hardware are also delayed so that the software people have extra
|joy in trying to figure out just WTF happened. See the previous entry:
|they live for that stuff.
Avoided
|I'm sure I've forgotten many other points. And I'm sure that hardware
|people will figure it out!
A clean sweep.
It appears that Kent Dickey <kegs@provalid.com> said:
AFAIK, the main problem with SASOS is "backward compatibility", most
importantly with `fork`. ...
First process is ASID=1. It forks, and the child is ASID=2. It is a
completely new address space. ...
I don't think anyone would call a system that gives each process a
completely new address space a single address space system.
Making
the ASID part of the translated address is one of many ways of
implementing a conventional address space per process system.
The last widely used single address space systems I can think of were
OS/VS1 and OS/VS2 SVS,
each of which provided a single full sized
address space in which they essentially ran their real memory
predecessors MFT and MVT. As Lynn has often told us, operating
system bloat forced them quickly to go to MVS, an address space per
process.
I suppose there could still be single address space realtime or
embedded systems where all the programs to be run are known when the
system is built.
The last widely used single address space systems I can think of were
OS/VS1 and OS/VS2 SVS,
What would you call OS/400 (nowadays, IBM i)?
I suppose there could still be single address space realtime or
embedded systems where all the programs to be run are known when the
system is built.
IIRC, Windows CE supported SAS mode of operation just fine without such
limitations.
It appears that Michael S <already5chosen@yahoo.com> said:
The last widely used single address space systems I can think of were
OS/VS1 and OS/VS2 SVS,
What would you call OS/400 (nowadays, IBM i)?
I haven't looked at it for a while but I think you're right.
They have POSIX-compatible APIs; I wonder how that works.
I suppose there could still be single address space realtime or
embedded systems where all the programs to be run are known when the
system is built.
IIRC, Windows CE supported SAS mode of operation just fine without such
limitations.
For that matter, so did MS-DOS and Windows up through 3.0.
In article <1759506155-5857@newsgrouper.org>,
MitchAlsup <user5857@newsgrouper.org.invalid> wrote:
Stefan Monnier <monnier@iro.umontreal.ca> posted:
| - virtually tagged caches
| You can't really claim to be worst-of-the-worst without virtually
|tagged caches.
| Tears of joy as you debug cache alias issues and of flushing caches
|on context switches.
That is only true if one insists on an OS with Multiple Address Spaces.
Virtually tagged caches are fine for a Single Address Space (SAS) OS.
AFAIK, the main problem with SASOS is "backward compatibility", most
importantly with `fork`. The Mill people proposed a possible solution,
which seemed workable, but it's far from clear to me whether it would
work well enough if you want to port, say, Debian to such
an architecture.
SASOS seems like a bridge too far.
Stefan
Fork is not a problem with virtual tagged caches or SAS. Normal fork
starts the child with a copy of the parent's address mapping, and uses
"Copy on Write" (COW) to create unique pages as soon as either process
does a write.
For its entire existence, PA-RISC HP-UX supported virtually indexed
caches in a SAS, and implemented fork using Copy On Access. As soon as
the child process touched any page for read or write, it got a copy, so
it can only access its own pages (not counting read-only instruction
pages). This works fine, and it's not a performance issue. The love
folks have for COW is overblown. Real code either immediately exec()'s
(maybe doing some close()'s and other housekeeping first) or starts
writing lots of pages doing what it wants to do as a new process. Note
since the OS knows it needs to copy pages, it can pre-copy a bunch of
pages, such as the stack, and some basic data pages, to avoid some
initial faults for the exec() case at least.
Kent
On Fri, 3 Oct 2025 16:18:47 -0000 (UTC), kegs@provalid.com (Kent
Dickey) wrote:
In article <1759506155-5857@newsgrouper.org>,
MitchAlsup <user5857@newsgrouper.org.invalid> wrote:
Stefan Monnier <monnier@iro.umontreal.ca> posted:
| - virtually tagged caches
| You can't really claim to be worst-of-the-worst without virtually
|tagged caches.
| Tears of joy as you debug cache alias issues and of flushing caches
|on context switches.
That is only true if one insists on an OS with Multiple Address Spaces.
Virtually tagged caches are fine for a Single Address Space (SAS) OS.
AFAIK, the main problem with SASOS is "backward compatibility", most
importantly with `fork`. The Mill people proposed a possible solution,
which seemed workable, but it's far from clear to me whether it would
work well enough if you want to port, say, Debian to such
an architecture.
SASOS seems like a bridge too far.
Stefan
Fork is not a problem with virtual tagged caches or SAS. Normal fork
starts the child with a copy of the parent's address mapping, and uses
"Copy on Write" (COW) to create unique pages as soon as either process
does a write.
Copy-On-Write (or Copy-On-Access) doesn't solve the fork problem in
SAS - which is that copied /pointers/ remain referencing objects in
the original process. Under the multi-space model of Unix/Linux,
after a fork the copied pointers should be referencing the copied
objects in the new process.
Lacking a way to identify and fix up pointer values, under SAS
simply copying data (COW or COA) means you end up unintentionally
/sharing/ data.
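To make the pointer problem concrete, a small sketch (mine, not from the
posts). On a conventional multi-address-space Unix the pointer bits stored
in head.next are interpreted inside each process's own address space, so
parent and child transparently end up with their own copies of the
pointed-to object; in a single address space the same numeric value would
keep naming the parent's object unless something rewrites every such
pointer.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

struct node { int value; struct node *next; };

struct node tail = { 2, NULL };
struct node head = { 1, &tail };   /* head.next holds tail's virtual address */

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                /* child */
        head.next->value = 99;     /* same pointer bits, but the child's copy of tail */
        printf("child:  tail.value = %d\n", tail.value);
        _exit(0);
    }
    wait(NULL);
    /* With per-process address spaces the child's write landed in the
       child's copy; in a SAS without pointer fixup it would land here. */
    printf("parent: tail.value = %d\n", tail.value);   /* still 2 */
    return 0;
}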
For its entire existence, PA-RISC HP-UX supported virtually indexed
caches in a SAS, and implemented fork using Copy On Access. As soon as
the child process touched any page for read or write, it got a copy, so
it can only access its own pages (not counting read-only instruction
pages). This works fine, and it's not a performance issue. The love
folks have for COW is overblown. Real code either immediately exec()'s
(maybe doing some close()'s and other housekeeping first) or starts
writing lots of pages doing what it wants to do as a new process. Note
since the OS knows it needs to copy pages, it can pre-copy a bunch of
pages, such as the stack, and some basic data pages, to avoid some
initial faults for the exec() case at least.
fork-exec is not a problem. fork alone is.
How did HP-UX on PA-RISC handle fork?
Kent
On Fri, 03 Oct 2025 08:58:32 +0000, Anton Ertl quoted:
|If somebody really wants to create bad hardware in this day and age,
|please do make it big-endian, and also add the following very
|traditional features for sh*t-for-brains hardware:
I think that for a computer to be big-endian is a good thing.
It makes it easier to understand core dumps, as numbers are stored just as they are written.
But more importantly, it means that binary integers are ordered the same
way as packed decimal integers, which are ordered the same way as integers in character text form.
As for the _rest_ of the items, though, all of them are indeed bad things.
But some are worse than others.
| - only do aligned memory accesses
Nearly all memory accesses are, or could be, aligned. Performance is
improved if they are. As long as there's some provision to handle
unaligned data, such as a move-characters instruction, data structures
for things like communications formats can still be dealt with.
I'm not saying it isn't bad, just that it was excusable before we had as many transistors available as we do now.
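A small C sketch (mine, not John's) of the usual portable provision for
unaligned data in wire formats: memcpy() makes no alignment assumption,
and compilers turn it into a single load on machines with hardware
unaligned access, or into byte loads and shifts on machines that only do
aligned accesses.

#include <stdint.h>
#include <string.h>

/* Read a 32-bit field at an arbitrary (possibly misaligned) position in a
   packet buffer without writing a misaligned load in the source. */
static uint32_t read_u32(const unsigned char *p)
{
    uint32_t v;
    memcpy(&v, p, sizeof v);   /* no alignment assumption about p */
    return v;                  /* still in the buffer's byte order */
}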
| - expose your pipeline details in the ISA
The original MIPS did this. This is bad indeed, as whatever you do in this direction won't be applicable to later iterations of the ISA as technology advances.
Failing to support the entire IEEE 754 floating-point standard just needs
to be documented. Expecting software to fake it being implemented is not reasonable: as long as denormals instead produce zero as the result, one just has an inferior floating-point format, not a computer that doesn't work. Once again, bad, but not all that terrible.
But anything that means that programs could randomly fail because
interrupts don't properly save or restore the entire machine state...
*that* is catastrophically bad, and hardly compares to his other examples.
John Savard
On Fri, 03 Oct 2025 08:58:32 +0000, Anton Ertl quoted:
|If somebody really wants to create bad hardware in this day and age,
|please do make it big-endian, and also add the following very
|traditional features for sh*t-for-brains hardware:
I think that for a computer to be big-endian is a good thing.
It makes it easier to understand core dumps, as numbers are stored just as
they are written.
The only benefit in modern days for big-endian is that network
protocols are in big-endian form. Not a big issue with modern
LE CPUs, where byteswap is a single cycle instruction.
scott@slp53.sl.home (Scott Lurndal) writes:
The only benefit in modern days for big-endian is that network
protocols are in big-endian form. Not a big issue with modern
LE CPUs, where byteswap is a single cycle instruction.
Clever architects put the byte swap in the load and store
instructions, where the byte-swapping is just an addition to the
handling of misaligned loads and stores, which itself is an addition
to the handling of smaller-than-transfer-width accesses. PowerPC has
such instructions.
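For concreteness, a C sketch (mine; it assumes the GCC/Clang
__builtin_bswap32 builtin) of what both remarks amount to for software
that reads a big-endian, i.e. network-order, field on a little-endian
machine. A byte-reversing load such as PowerPC's lwbrx does the same job
in one instruction.

#include <stdint.h>
#include <string.h>

/* Load a 32-bit big-endian (network order) value into host order. */
static uint32_t load_be32(const unsigned char *p)
{
    uint32_t v;
    memcpy(&v, p, sizeof v);      /* plain (possibly unaligned) load */
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    v = __builtin_bswap32(v);     /* a single bswap/rev instruction on LE hosts */
#endif
    return v;                     /* no swap needed on a big-endian host */
}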
John Savard <quadibloc@invalid.invalid> writes:
On Fri, 03 Oct 2025 08:58:32 +0000, Anton Ertl quoted:
|If somebody really wants to create bad hardware in this day and age,
|please do make it big-endian, and also add the following very
|traditional features for sh*t-for-brains hardware:
I think that for a computer to be big-endian is a good thing.
Whatever the technical merits of different byte orders may be (and the
names "big-endian" and "little-endian" already indicate that far more
discussion has been expended on the topic than these merits justify
<https://en.wikipedia.org/wiki/Lilliput_and_Blefuscu#History_and_politics>),
little-endian has won, and that's its major merit, and big-endian's
major demerit.
But as you correctly said, the fight is over, little-endian has won,
let's argue about something else.
There is something to be said for at least having a big-endian
system around to test programs: If people mismatch types, there
is a chance that it will blow up on a big-endian system and work
silently on a little-endian system.
This has a reverse side: Little-endian having effectively won,
software often does not work on big-endian systems out of the box
any more. I suspect this is why IBM effectively chose little-endian
for POWER, but AIX is big-endian (and will remain so for the foreseeable
future).
And of course, this is all due to an architecture which is arguably
the most influential of all times (or at least has the highest
ratio of influence to recognition level, but that by a _huge_ margin):
The Datapoint 2200.
Thomas Koenig <tkoenig@netcologne.de> writes:
There is something to be said for at least having a big-endian
system around to test programs: If people mismatch types, there
is a chance that it will blow up on a big-endian system and work
silently on a little-endian system.
If the only thing wrong with the software is that it does not work on big-endian systems, and little-endian has won, is there really
anything wrong with the software?
And of course, this is all due to an architecture which is arguably
the most influential of all times (or at least has the highest
ratio of influence to recognition level, but that by a _huge_ margin):
The Datapoint 2200.
Another widely-used architecture today inherited its byte order from
the 6502.
Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
Thomas Koenig <tkoenig@netcologne.de> writes:
There is something to be said for at least having a big-endian
system around to test programs: If people mismatch types, there
is a chance that it will blow up on a big-endian system and work
silently on a little-endian system.
If the only thing wrong with the software is that it does not work
on big-endian systems, and little-endian has won, is there really
anything wrong with the software?
A type mismatch? I think so.
And of course, this is all due to an architecture which is arguably
the most influential of all times (or at least has the highest
ratio of influence to recognition level, but that by a _huge_
margin): The Datapoint 2200.
Another widely-used architecture today inherited its byte order from
the 6502.
Which one?
On Sun, 12 Oct 2025 10:14:08 -0000 (UTC)
Thomas Koenig <tkoenig@netcologne.de> wrote:
Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
Thomas Koenig <tkoenig@netcologne.de> writes:
There is something to be said for at least having a big-endian
system around to test programs: If people mismatch types, there
is a chance that it will blow up on a big-endian system and work
silently on a little-endian system.
If the only thing wrong with the software is that it does not work
on big-endian systems, and little-endian has won, is there really
anything wrong with the software?
A type mismatch? I think so.
And of course, this is all due to an architecture which is arguably
the most influential of all times (or at least has the highest
ratio of influence to recognition level, but that by a _huge_
margin): The Datapoint 2200.
Another widely-used architecture today inherited its byte order from
the 6502.
Which one?
Arm.
It was designed as the CPU for the successor of the 6502-based BBC Micro.
But does the 6502 really have a "byte order" in hardware? Or just the
"soft" conventions of the BBC BASIC interpreter?
Michael S <already5chosen@yahoo.com> schrieb:
On Sun, 12 Oct 2025 10:14:08 -0000 (UTC)
Thomas Koenig <tkoenig@netcologne.de> wrote:
Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
Thomas Koenig <tkoenig@netcologne.de> writes:
There is something to be said for at least having a big-endian
system around to test programs: If people mismatch types, there
is a chance that it will blow up on a big-endian system and work
silently on a little-endian system.
If the only thing wrong with the software is that it does not
work on big-endian systems, and little-endian has won, is there
really anything wrong with the software?
A type mismatch? I think so.
And of course, this is all due to an architecture which is
arguably the most influential of all times (or at least has the
highest ratio of influence to recognition level, but that by a
_huge_ margin): The Datapoint 2200.
Another widely-used architecture today inherited its byte order
from the 6502.
Which one?
Arm.
That does not have many architectural features from the 6502 :-)
It was designed as the CPU for the successor of the 6502-based BBC Micro.
But does the 6502 really have a "byte order" in hardware? Or just the
"soft" conventions of the BBC BASIC interpreter?
Yes, the 6502 is little-endian,
which you can see in its instruction formats
and the way the pointers in the zero page were stored.
On Sun, 12 Oct 2025 11:38:39 -0000 (UTC)
Thomas Koenig <tkoenig@netcologne.de> wrote:
Michael S <already5chosen@yahoo.com> schrieb:
Arm.
That does not have many architectural features from the 6502 :-)
It has the same byte order.
CZVN flags are superficially similar, although there is an important
difference - on ARM Z flag is not affected by non-arithmetic
instructions.
Yes, the 6502 is little-endian,
which you can see in its instruction formats
That does not count. Instruction encoding is orthogonal to the question
of byte order during execution. I have seen various combinations,
including encodings that have no particular order, i.e. an immediate
field scattered in the instruction word. Not that I remember which
architecture it was.
Indirect addressing modes are clearly LE.
In the case of the JMP instruction, the 16-bit LE pointer does not even
have to be in the zero page.
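To spell out what "clearly LE" means here, a small C model (mine, not
Michael's) of how the 6502 consumes a two-byte pointer, whether in the
zero page for (zp),Y addressing or anywhere in memory for indirect JMP:
the byte at the lower address supplies the low half of the target
address.

#include <stdint.h>
#include <stdio.h>

/* Model of the 6502's pointer fetch: low byte first, then high byte. */
static uint16_t fetch_pointer(const uint8_t *mem, uint16_t addr)
{
    return (uint16_t)(mem[addr] | (mem[(uint16_t)(addr + 1)] << 8));
}

int main(void)
{
    static uint8_t mem[65536];
    mem[0x20] = 0x00;   /* low byte of the target address  */
    mem[0x21] = 0xC0;   /* high byte of the target address */
    printf("pointer at $20 -> $%04X\n", fetch_pointer(mem, 0x0020));  /* $C000 */
    return 0;
}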
Thomas Koenig <tkoenig@netcologne.de> writes:
Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
Thomas Koenig <tkoenig@netcologne.de> writes:
There is something to be said for at least having a big-endian
system around to test programs: If people mismatch types, there
is a chance that it will blow up on a big-endian system and work
silently on a little-endian system.
If the only thing wrong with the software is that it does not work on
big-endian systems, and little-endian has won, is there really
anything wrong with the software?
A type mismatch? I think so.
If there is really something wrong with the software on little-endian systems, you don't need a big-endian system to find the mistake.
Another widely-used architecture today inherited its byte order from
the 6502.
Which one?
ARM A32, and then T32 and A64.
Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
Thomas Koenig <tkoenig@netcologne.de> writes:
Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
If the only thing wrong with the software is that it does not work on
big-endian systems, and little-endian has won, is there really
anything wrong with the software?
A type mismatch? I think so.
If there is really something wrong with the software on little-endian
systems, you don't need a big-endian system to find the mistake.
Would you consider a type mistake (access through the wrong type
of pointer, say store a value to char * and read via int *) to
be an error or not, if it is not directly observable on limited
number of test runs on a little-endian system? Your comment would
suggest not.
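A concrete instance of the kind of mistake being discussed (my sketch,
phrased with a well-defined unsigned char access rather than the
undefined-behaviour char * / int * variant): code that quietly assumes the
first byte of an int is its low-order byte works on little-endian and
silently misbehaves on big-endian.

#include <stdio.h>

int main(void)
{
    int x = 1;
    /* Byte 0 of x is the low-order byte on LE, the high-order byte on BE. */
    unsigned char first_byte = *(unsigned char *)&x;

    if (first_byte == 1)
        printf("little-endian: the mismatch goes unnoticed\n");
    else
        printf("big-endian: the mismatch shows up immediately\n");
    return 0;
}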
Another widely-used architecture today inherited its byte order from
the 6502.
Which one?
ARM A32, and then T32 and A64.
https://developer.arm.com/documentation/102376/0200/Alignment-and-endianness/Endianness
says endianness can be configurable (unless you mean something else
by A64).
According to Anton Ertl <anton@mips.complang.tuwien.ac.at>:
John Savard <quadibloc@invalid.invalid> writes:
On Fri, 03 Oct 2025 08:58:32 +0000, Anton Ertl quoted:
|If somebody really wants to create bad hardware in this day and age,
|please do make it big-endian, and also add the following very
|traditional features for sh*t-for-brains hardware:
I think that for a computer to be big-endian is a good thing.
Garrrgghhhhhhhh, not this again.
Whatever the technical merits of different byte orders may be (and the
names "big-endian" and "little-endian" already indicate that far more
discussion has been expended on the topic than these merits justify
<https://en.wikipedia.org/wiki/Lilliput_and_Blefuscu#History_and_politics>),
little-endian has won, and that's its major merit, and big-endian's
major demerit.
Yup. I really wish the arguments about which order is "more natural"
would stop since they're just people's cultural preconceptions. I
imagine that if my first language were Arabic or Hebrew, I would find left-to-right big-endian core dumps much less readable than the
familiar looking right-to-left little-endian ones.
But as you correctly said, the fight is over, little-endian has won,
let's argue about something else.
IEN 137 said everything worth saying about this topic 45 years ago.
https://www.rfc-editor.org/ien/ien137.txt
Thomas Koenig <tkoenig@netcologne.de> writes:
Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
Thomas Koenig <tkoenig@netcologne.de> writes:
Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
If the only thing wrong with the software is that it does not work on
big-endian systems, and little-endian has won, is there really
anything wrong with the software?
A type mismatch? I think so.
If there is really something wrong with the software on little-endian
systems, you don't need a big-endian system to find the mistake.
Would you consider a type mistake (access through the wrong type
of pointer, say store a value to char * and read via int *) to
be an error or not, if it is not directly observable on limited
number of test runs on a little-endian system? Your comment would
suggest not.
If no test can be devised that shows unintended behaviour on the little-endian system, then I consider the program as delivered to be
working.
If a test can be devised that shows unintended behaviour on the
little-endian system, then there is no need for testing on a
big-endian system.
Another widely-used architecture today inherited its byte order from
the 6502.
Which one?
ARM A32, and then T32 and A64.
https://developer.arm.com/documentation/102376/0200/Alignment-and-endianness/Endianness
says endianness can be configurable (unless you mean something else
by A64).
Which has zero relevance, because everyone in their right mind
configures their machine little-endian.
<https://wiki.debian.org/ArmPorts> says:
Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
Thomas Koenig <tkoenig@netcologne.de> writes:
Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
Thomas Koenig <tkoenig@netcologne.de> writes:
There is something to be said for at least having a big-endian
system around to test programs: If people mismatch types, there
is a chance that it will blow up on a big-endian system and work
silently on a little-endian system.
If the only thing wrong with the software is that it does not
work on big-endian systems, and little-endian has won, is there
really anything wrong with the software?
A type mismatch? I think so.
If there is really something wrong with the software on
little-endian systems, you don't need a big-endian system to find
the mistake.
Would you consider a type mistake (access through the wrong type
of pointer, say store a value to char * and read via int *) to
be an error or not, if it is not directly observable on limited
number of test runs on a little-endian system? Your comment would
suggest not.
Another widely-used architecture today inherited its byte order
from the 6502.
Which one?
ARM A32, and then T32 and A64.
https://developer.arm.com/documentation/102376/0200/Alignment-and-endianness/Endianness
says endianness can be configurable (unless you mean something else
by A64).
But in practice nobody makes cores that do not support LE or do not
power up in LE mode. Maybe some of them can be switched into BE later.
But why?
Michael S <already5chosen@yahoo.com> writes:
On Sun, 12 Oct 2025 11:38:39 -0000 (UTC)
Thomas Koenig <tkoenig@netcologne.de> wrote:
Michael S <already5chosen@yahoo.com> schrieb:
Arm.
That does not have many architectural features from the 6502 :-)
It has the same byte order.
Which is what is relevant for the question at hand. The intention of
the ARM architects was to produce a CPU for their successor of the BBC
Micro, and they certainly mentioned the prominent role of the 6502 as inspiration in their accounts; they obviously did not try to create a
32-bit 6502, but at least they did not change the byte order.
CZVN flags are superficially similar, although there is an important
difference - on ARM Z flag is not affected by non-arithmetic
instructions.
What about the other flags?
Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
Thomas Koenig <tkoenig@netcologne.de> writes:
Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
Thomas Koenig <tkoenig@netcologne.de> writes:
Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
If the only thing wrong with the software is that it does not work on
big-endian systems, and little-endian has won, is there really
anything wrong with the software?
A type mismatch? I think so.
If there is really something wrong with the software on little-endian
systems, you don't need a big-endian system to find the mistake.
Would you consider a type mistake (access through the wrong type
of pointer, say store a value to char * and read via int *) to
be an error or not, if it is not directly observable on limited
number of test runs on a little-endian system? Your comment would
suggest not.
If no test can be devised that shows unintended behaviour on the
little-endian system, then I consider the program as delivered to be
working.
That isn't what I was saying.
If a test can be devised that shows unintended behaviour on the
little-endian system, then there is no need for testing on a
big-endian system.
Testing, by its very nature, is incomplete. The theoretical
possibility that a test can be derived does not help in practice.
https://developer.arm.com/documentation/102376/0200/Alignment-and-endianness/Endianness
says endianness can be configurable (unless you mean something else
by A64).
Which has zero relevance, because everyone in their right mind
configures their machine little-endian.
<https://wiki.debian.org/ArmPorts> says:
That's circular reasoning.
On Sun, 12 Oct 2025 13:36:51 GMT
anton@mips.complang.tuwien.ac.at (Anton Ertl) wrote:
Michael S <already5chosen@yahoo.com> writes:
CZVN flags are superficially similar, although there is an important
difference - on ARM Z flag is not affected by non-arithmetic
instructions.
What about the other flags?
Sorry, my mistake. On the 6502, Z is not the only flag that is affected
by non-arithmetic instructions. N is affected as well.
Also, apart from different flags handling by INC/DEC, which is
fully expected, there are differences in logical, shift and even in
compare instructions.
So, the two architectures are farther apart in flags handling than I
thought.
A convenient reference is here:
http://www.6502.org/users/obelisk/6502/instructions.html
John Levine <johnl@taugh.com> posted:
According to Anton Ertl <anton@mips.complang.tuwien.ac.at>:
John Savard <quadibloc@invalid.invalid> writes:
On Fri, 03 Oct 2025 08:58:32 +0000, Anton Ertl quoted:
|If somebody really wants to create bad hardware in this day and age,
|please do make it big-endian, and also add the following very
|traditional features for sh*t-for-brains hardware:
I think that for a computer to be big-endian is a good thing.
Garrrgghhhhhhhh, not this again.
Whatever the technical merits of different byte orders may be (and the
names "big-endian" and "little-endian" already indicate that far more
discussion has been expended on the topic than these merits justify
<https://en.wikipedia.org/wiki/Lilliput_and_Blefuscu#History_and_politics>),
little-endian has won, and that's its major merit, and big-endian's
major demerit.
Yup. I really wish the arguments about which order is "more natural"
would stop since they're just people's cultural preconceptions. I
imagine that if my first language were Arabic or Hebrew, I would find
left-to-right big-endian core dumps much less readable than the
familiar looking right-to-left little-endian ones.
Top to bottom works for Japanese and Chinese. Yet I hear no
appetite for TB byte order.
But as you correctly said, the fight is over, little-endian has won,
let's argue about something else.
IEN 137 said everything worth saying about this topic 45 years ago.
https://www.rfc-editor.org/ien/ien137.txt
Thomas Koenig <tkoenig@netcologne.de> writes:
Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
Thomas Koenig <tkoenig@netcologne.de> writes:
Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
Thomas Koenig <tkoenig@netcologne.de> writes:
Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
If the only thing wrong with the software is that it does not work on
big-endian systems, and little-endian has won, is there really
anything wrong with the software?
A type mismatch? I think so.
If there is really something wrong with the software on little-endian
systems, you don't need a big-endian system to find the mistake.
Would you consider a type mistake (access through the wrong type
of pointer, say store a value to char * and read via int *) to
be an error or not, if it is not directly observable on limited
number of test runs on a little-endian system? Your comment would
suggest not.
If no test can be devised that shows unintended behaviour on the
little-endian system, then I consider the program as delivered to be
working.
That isn't what I was saying.
Correct: That's what I am saying.
If a test can be devised that shows unintended behaviour on the
little-endian system, then there is no need for testing on a
big-endian system.
Testing, by its very nature, is incomplete. The theoretical
possibility that a test can be derived does not help in practice.
Maybe not, but that's not my point: If no such test can be devised,
would you call it a bug? Why?
As for practice: Does testing on big-endian systems help in practice?
Not in my experience.
https://developer.arm.com/documentation/102376/0200/Alignment-and-endianness/Endianness
says endianness can be configurable (unless you mean something else
by A64).
Which has zero relevance, because everyone in their right mind
configures their machine little-endian.
<https://wiki.debian.org/ArmPorts> says:
That's circular reasoning.
You may think so,
but the lack of big-endian ARM systems makes my
point.
John Levine <johnl@taugh.com> schrieb:
But as you correctly said, the fight is over, little-endian has won,
let's argue about something else.
There is something to be said for at least having a big-endian
system around to test programs: If people mismatch types, there
is a chance that it will blow up on a big-endian system and work
silently on a little-endian system.
Why did the ARM architects put this in?
They need not have done so...
Thomas Koenig <tkoenig@netcologne.de> writes:
[configurable byte order]
Why did the ARM architects put this in?
They need not have done so...
It's cheap to add (at least the cheapo version, and I expect that's the
one that ARM provided), several other architectures supported it, and
when they added this feature, it was not clear that little-endian would
win.
And Linksys actually used big-endian mode in their NSLU2 NAS
(discontinued 2008), so maybe Intel got a customer thanks to this
feature of ARM (or maybe they would have gone with the Xscale CPU
anyway, and used it little-endian if the big-endian mode had not
existed).