I, too, like VMS (contrary to what a lot of people here think :-) and
I personally know of a couple of niche markets VMS used to be strong
in (maybe not dominate, but held a good position). I never really
understood why they lost those markets and I would love to see them
back. But the more I read and see the more it seems to me that there
is no desire to actually grow the VMS market and the majority (including those who actually control it) are perfectly happy to just let things
slide slowly down a black hole from which nothing ever returns.
On 2025-09-12, bill <bill.gunshannon@gmail.com> wrote:
I, too, like VMS (contrary to what a lot of people here think :-) and
I personally know of a couple of niche markets VMS used to be strong
in (maybe not dominate, but held a good position). I never really
understood why they lost those markets and I would love to see them
back. But the more I read and see the more it seems to me that there
is no desire to actually grow the VMS market and the majority (including
those who actually control it) are perfectly happy to just let things
slide slowly down a black hole from which nothing ever returns.
Especially given that z/OS is actually several years older than VMS
and is still going very strongly indeed.
Could VMS still have been as strong to this day if different decisions
and paths in the past had been taken ?
Simon.
On 16/09/2025 18:46, Simon Clubley wrote:
Especially given that z/OS is actually several years older than VMS
and is still going very strongly indeed.
I don't believe it's as strong as you believe. Perhaps the Z platform is,
but z/OS is pretty much limited to traditional big banks and airline reservation systems. These systems are all much larger than most VMS
systems so migration away is harder and riskier.
Another notable feature of Z hardware is the virtualisation technology inherent in the "hardware". So it all comes with multiple Logical
PARtitions or LPARs which despite their name are more like physical partitioning of the hardware, and zVM which uses the "Start Interpretive Execution" (SIE) instruction to create Virtual Machines.
DEC never had anything like this.
On 9/16/2025 4:47 PM, David Wade wrote:
On 16/09/2025 18:46, Simon Clubley wrote:
Especially given that z/OS is actually several years older than VMS
and is still going very strongly indeed.
I don't believe it's as strong as you believe. Perhaps the Z platform
is, but z/OS is pretty much limited to traditional big banks and
airline reservation systems. These systems are all much larger than
most VMS systems so migration away is harder and riskier.
I am not sure that the attrition rate for z/OS is less than for VMS.
But they started at a way higher point, so they are still at a higher
point.
Another notable feature of Z hardware is the virtualisation technology
inherent in the "hardware". So it all comes with multiple Logical
PARtitions or LPARs which despite their name are more like physical
partitioning of the hardware, and zVM which uses the "Start
Interpretive Execution" (SIE) instruction to create Virtual Machines.
DEC never had anything like this.
I always considered Alpha Galaxy to be somewhat similar to LPAR.
Arne
On 16/09/2025 22:00, Arne Vajhøj wrote:
On 9/16/2025 4:47 PM, David Wade wrote:
On 16/09/2025 18:46, Simon Clubley wrote:
Especially given that z/OS is actually several years older than VMS
and is still going very strongly indeed.
I don't believe it's as strong as you believe. Perhaps the Z platform
is, but z/OS is pretty much limited to traditional big banks and
airline reservation systems. These systems are all much larger than
most VMS systems so migration away is harder and riskier.
I am not sure that the attrition rate for z/OS is less than for VMS.
But they started at a way higher point, so they are still at a higher
point.
Another notable feature of Z hardware is the virtualisation
technology inherent in the "hardware". So it all comes with multiple
Logical PARtitions or LPARs which despite their name are more like
physical partitioning of the hardware, and zVM which uses the "Start
Interpretive Execution" (SIE) instruction to create Virtual Machines.
DEC never had anything like this.
I always considered Alpha Galaxy to be somewhat similar to LPAR.
Isn't that the layer that translates VAX instructions?
LPARs allow multiple operating systems to be run. Could you ever run VMS and ULTRIX
at the same time on the same Alpha box?
On Tue, 16 Sep 2025 21:47:08 +0100, David Wade wrote:
Especially given that z/OS is actually several years older than VMS
and is still going very strongly indeed.

I don't believe it's as strong as you believe. Perhaps the Z platform
is, but z/OS is pretty much limited to traditional big banks and
airline reservation systems. These systems are all much larger than
most VMS systems so migration away is harder and riskier. The
hundreds of SMEs that once had a small IBM/370 like the 43xx or 9370
have gone.
IBM as a whole has been losing money for years, and laying off staff
left and right. That's not exactly the sign of a platform "going strongly", is it. The only recent bright spot in the company, that I
know of, is its Red Hat acquisition.
Another notable feature of Z hardware is the virtualisation
technology inherent in the "hardware". So it all comes with multiple
Logical PARtitions or LPARs which despite their name are more like
physical partitioning of the hardware, and zVM which uses the "Start
Interpretive Execution" (SIE) instruction to create Virtual
Machines.
Does that sound like there are a limited number of slots for
instantiating virtual machines? Modern virtualization architectures
aren't limited like that.
.. let's face it, the competition such as PR1MOS, HP-UX, Solaris, GCOS6
are all in similar states of decline...
Are new installations of any of those still being sold? Somehow I don't think so ...
On 16/09/2025 18:46, Simon Clubley wrote:
On 2025-09-12, bill <bill.gunshannon@gmail.com> wrote:
I, too, like VMS (contrary to what a lot of people here think :-) and
I personally know of a couple of niche markets VMS used to be strong
in (maybe not dominate, but held a good position). I never really
understood why they lost those markets and I would love to see them
back. But the more I read and see the more it seems to me that there
is no desire to actually grow the VMS market and the majority (including
those who actually control it) are perfectly happy to just let things
slide slowly down a black hole from which nothing ever returns.
Especially given that z/OS is actually several years older than VMS
and is still going very strongly indeed.
I don't believe it's as strong as you believe. Perhaps the Z platform is,
but z/OS is pretty much limited to traditional big banks and airline
reservation systems. These systems are all much larger than most VMS
systems so migration away is harder and riskier. The hundreds of SMEs
that once had a small IBM/370 like the 43xx or 9370 have gone. It sold
its x86 server, laptop and desktop business to Lenovo. I think IBM may be
regretting this. These SME customers would have been the type for whom
the cloud made sense, but they have all gone x86 and its cloud business
is not the success it hoped for.
They require compliance with the Payment Card Industry Data Security
Standard (PCI DSS). This requires supported software, so IBM uses this
to drive the hardware/software cycle. Typically each generation of
hardware only supports two releases of software, and only the current +
previous release is supported.
Just as there were two prices for Alpha boxes, there are two prices for
Z: a high one if you run z/OS, a lower one if you run zLinux. Z boxes
are big, but you pay for what you use. So if you have z/OS you probably
have some spare CPUs you can turn on for minimal cost...
Another notable feature of Z hardware is the virtualisation technology
inherent in the "hardware". So it all comes with multiple Logical
PARtitions or LPARs which despite their name are more like physical
partitioning of the hardware, and zVM which uses the "Start Interpretive
Execution" (SIE) instruction to create Virtual Machines.
DEC never had anything like this.
Could VMS still have been as strong to this day if different decisions
and paths in the past had been taken ?
I don't think so. Whilst I feel it would have been wonderful to have had
a VLC on my desk in the 1990s, the pricing precluded that. Perhaps if the
VLC had arrived at the same time, and for the same price, as the PS/2,
and DEC had kept binary compatibility with VAX rather than going Alpha
and then Itanium..
.. let's face it, the competition such as PR1MOS, HP-UX, Solaris, GCOS6
are all in similar states of decline...
It's interesting you say "modern virtualisation" because most of the
various "tweaks and tricks" modern X64 virtualisations use were
developed by IBM in the 1970s and '80s for VM/XA & VM/ESA. X86 and AMD
CPUs didn't get these until 2005/6. zVM is really slick... but
expensive.
Are new installations of VMS still being sold?
So you can buy Solaris and I think HP-UX ...
On Wed, 17 Sep 2025 00:25:32 +0100, David Wade wrote:
Its interesting you say "modern virtualisation" because most of the
various "tweaks and tricks" modern X64 virtualisations use were
developed by IBM in the 1970s an 80s for VM/XA & VM/ESA. X86 and AMD
CPUs didn't get these until 2005/6. zVM is really slick... but
expensive.
IBM invented virtualization, in the beginning to run multiple instances of CMS. This was their attempt to compete with interactive timesharing
systems from DEC and other vendors.
Trouble is, unlike those others, which
had multiuser support built-in, CMS was single-user only. So as a quick
hack, the "CP" (later "VM") hypervisor was introduced. Each user
effectively had their own (virtual) machine. Sounds like a neat idea,
until you realize that communication and sharing of info between machines
(i.e. between different users) wouldn't have been so easy.
Did IBM ever address that problem of communication between machines?
<https://www.libvirt.org/manpages/virsh.html>
Are new installations of VMS still being sold?
Probably not.
So you can buy Solaris and I think HP-UX ...
And of course macOS, but that's not in the server/enterprise league. But
it's still likely, far and away, the most popular OS that can legally call itself "Unix".
(Not that many people care about that any more.)
On 17/09/2025 07:08, Lawrence D'Oliveiro wrote:
On Wed, 17 Sep 2025 00:25:32 +0100, David Wade wrote:
Its interesting you say "modern virtualisation" because most of the
various "tweaks and tricks" modern X64 virtualisations use were
developed by IBM in the 1970s an 80s for VM/XA & VM/ESA. X86 and AMD
CPUs didn't get these until 2005/6. zVM is really slick... but
expensive.
IBM invented virtualization, in the beginning to run multiple instances
of CMS. This was their attempt to compete with interactive timesharing
systems from DEC and other vendors.
I am sorry, but it was really because their own products, TSO and TSS
didn't work. IBM really disliked VM and has tried to kill it several
times.
So the original VM work was done on the 360/40 & 67, special 360
models with virtual memory. The original 370 announcement did not
include virtual memory support; this cost them a lot of money as they
ended up retro-fitting it to several CPUs. The XA architecture does not
satisfy the
<https://en.wikipedia.org/wiki/Popek_and_Goldberg_virtualization_requirements>
so the hypervisor had to be re-written to use the SIE microcode....
They failed because the MVS team needed it to develop MVS, now z/OS..
Trouble is, unlike those others, which
had multiuser support built-in, CMS was single-user only. So as a quick
hack, the "CP" (later "VM") hypervisor was introduced. Each user
effectively had their own (virtual) machine. Sounds like a neat idea,
until you realize that communication and sharing of info between machines
(i.e. between different users) wouldn't have been so easy.
Not really. They were developed at the same time.
Did IBM ever address that problem of communication between machines?
Depends what you mean by communications?
The spool can be used to exchange files, so for example for e-mail via virtual readers, punches and printers...
From virtually day 1 there was the Virtual Machine Communication
Facility (VMCF), then IUCV (Inter-User Communications Vehicle). TCP/IP
can be layered on top of these.
You can use these protocols to implement "Service Machines", virtual machines which run a server program.
For example the IBM Office Automation System PROFS, later Office Vision,
used "service machines" with which the user communicates via IUCV to
manage Document Storage, Diary Management and Messaging.
I think around the late 1970s IBM included the Shared File System which
finally allowed several users to have write access to the same file at
the same time...
.. so yes communications is not a problem.
<https://www.libvirt.org/manpages/virsh.html>
Are new installations of VMS still being sold?
Probably not.
So you can buy Solaris and I think HP-UX ...
And of course macOS, but that's not in the server/enterprise league. But
it's still likely, far and away, the most popular OS that can legally call
itself "Unix".
(Not that many people care about that any more.)
I don't believe that it can legally be called UNIX, but yes, it's derived
from BSD. But Apple no longer make what we call servers...
In article <10ae172$33ukj$1@dont-email.me>,
David Wade <g4ugm@dave.invalid> wrote:
On 17/09/2025 07:08, Lawrence D'Oliveiro wrote:
On Wed, 17 Sep 2025 00:25:32 +0100, David Wade wrote:
Its interesting you say "modern virtualisation" because most of the
<https://en.wikipedia.org/wiki/Popek_and_Goldberg_virtualization_requirements>
I'm not sure about the specifics here, which is not to say that
I don't believe you, but I'd love to see a source. 370 is known
to meet the P&G requirements, and XA extended the architecture
with some new features for supporting virtual machines; do you
recall what they added that _violated_ the P&G requirements?
so the hypervisor had to be re-written to use the SIE microcode....
They failed because the MVS team needed it to develop MVS now zOS..
Sounds like you've got some inside baseball info here; I'd love
to see some sources if you can share them!
Trouble is, unlike those others, which
had multiuser support built-in, CMS was single-user only. So as a quick
hack, the "CP" (later "VM") hypervisor was introduced. Each user
effectively had their own (virtual) machine. Sounds like a neat idea,
until you realize that communication and sharing of info between machines
(i.e. between different users) wouldn't have been so easy.
Not really. They were developed at the same time.
Lol. The troll really has no idea what he's talking about.
Did IBM ever address that problem of communication between machines?
Depends what you mean by communications?
The spool can be used to exchange files, so for example for e-mail via
virtual readers, punches and printers...
From virtually day 1 there was the Virtual Machine Communications
Facility (VMCF) , then IUCV - Inter User Communication Facility. TCP/IP
can be layered on top of these.
You can use these protocols to implement "Service Machines", virtual
machines which run a server program.
For example the IBM Office Automation System PROFS later Office Vision
used "service machines" with which the user communications via IUCV to
manage Document Storage, Diary Management and Messaging.
I think around the late 1970's IBM included the Shared File System which
finally allowed several users to have write access to the same file at
the same time...
.. so yes communications is not a problem.
You should tell him that CMS stands for, "Conversational Monitor
System". VM is all about communications, as most timeshared
systems are.
<https://www.libvirt.org/manpages/virsh.html>
Are new installations of VMS still being sold?
Probably not.
So you can buy Solaris and I think HP-UX ...
And of course macOS, but that's not in the server/enterprise league. But
it's still likely, far and away, the most popular OS that can legally call
itself "Unix".
(Not that many people care about that any more.)
I don't believe that it can legally be called UNIX, but yes its derived
from BSD but Apple no longer make what we call servers...
macOS is one of the few that _can_ legally be called Unix. The
full list is here: https://www.opengroup.org/openbrand/register/
- Dan C.
On 17/09/2025 12:52, Dan Cross wrote:
In article <10ae172$33ukj$1@dont-email.me>,
David Wade <g4ugm@dave.invalid> wrote:
On 17/09/2025 07:08, Lawrence D'Oliveiro wrote:
On Wed, 17 Sep 2025 00:25:32 +0100, David Wade wrote:
Its interesting you say "modern virtualisation" because most of the
<https://en.wikipedia.org/wiki/Popek_and_Goldberg_virtualization_requirements>
I'm not sure about the specifics here, which is not to say that
I don't believe you, but I'd love to see a source. 370 is known
to meet the P&G requirements, and XA extended the architecture
with some new features for supporting virtual machines; do you
recall what they added that _violated_ the P&G requirements?
It is the same issue that differentiates a 68000 and the 68010 and which prevented the VAX having a hypervisor without microcode changes...
The VM/370 hypervisor relies on running the virtual machines in "problem
state" or "user mode" even if the VM thinks it's running in "Supervisor
State" or "privileged mode". So for example CMS thinks it's running in
real memory, whereas in fact it is running in virtual.
In order for this to work any instruction which discloses the system
state needs to be a privileged instruction. This is true on S/370 but
this generates a huge overhead when running non-virtual machines.
So on XA and later there are ways to examine the system state from a non-privileged program.
I found the paper on Virtualising VAX that was linked interesting ...
https://www.cs.cmu.edu/~15811/papers/vax_vmm.pdf
So they modified the VAX microcode to get round this problem; however,
the VAX has additional challenges as it has four protection states,
not two like a S/370. There are additional issues with VAX covered in
this paper....
so the hypervisor had to be re-written to use the SIE microcode....
They failed because the MVS team needed it to develop MVS now zOS..
Sounds like you've got some inside baseball info here; I'd love
to see some sources if you can share them!
I think it's widely "put about". For example in Melinda Varian's paper
from 1991:-
"VM AND THE VM COMMUNITY: Past, Present, and Future"
https://www.leeandmelindavarian.com/Melinda/neuvm.pdf
bottom of page 55 in the PDF:-
There is a widely believed (but possibly apocryphal) story that
anti-VM, pro-MVS forces at one point nearly succeeded in convincing the
company to kill VM, but the President of IBM, upon learning how heavily
the MVS developers depended upon VM, said simply, "If it's good enough
for you, it's good enough for the customers."
[snip]
macOS is one of the few that _can_ legally be called Unix. The
full list is here: https://www.opengroup.org/openbrand/register/
oh thanks for that...
In article <10aekoc$3a62f$1@dont-email.me>,
David Wade <g4ugm@dave.invalid> wrote:
On 17/09/2025 12:52, Dan Cross wrote:
In article <10ae172$33ukj$1@dont-email.me>,
David Wade <g4ugm@dave.invalid> wrote:
On 17/09/2025 07:08, Lawrence D'Oliveiro wrote:
On Wed, 17 Sep 2025 00:25:32 +0100, David Wade wrote:
Its interesting you say "modern virtualisation" because most of the
<https://en.wikipedia.org/wiki/Popek_and_Goldberg_virtualization_requirements>
I'm not sure about the specifics here, which is not to say that
I don't believe you, but I'd love to see a source. 370 is known
to meet the P&G requirements, and XA extended the architecture
with some new features for supporting virtual machines; do you
recall what they added that _violated_ the P&G requirements?
It is the same issue that differentiates a 68000 and the 68010 and which
prevented the VAX having a hypervisor without microcode changes...
Or the inverse? The issue with the 68000 was that it noted the
processor privilege mode, interrupt level, and debugging trace
control in the status register, and reading that register was
unprivileged. The 68010 simply made the instruction reading the
entire SR privileged, and added an unprivileged instruction to
read just the condition codes.
Sounds like IBM took an already classically virtualizable machine
and made it not so for efficiency reasons, adding in new
sensitive and yet unprivileged instructions, but also a
compatibility hack via microcode and a new instruction to switch
to that?
The VM/370 hypervisor relies on running the virtual machines in "problem
state" or "user mode" even if the VM thinks its running in "Supervisor
State" or "privileged mode". So for example CMS thinks its running in
real memory, where as in fact it running in virtual.
In order for this to work any instruction which discloses the system
state needs to be a privileged instruction. This is true on S/370 but
this generates a huge overhead when running non-virtual machines.
Yup. This is pretty much theorem 1 from P&G's 1974 CACM paper.
P&G would classify instructions that expose that kind of state as
"sensitive". Their criterion is that all sensitive instructions
must be a subset of the set of privileged instructions, so that
they can be trapped (and emulated, usually) by the hypervisor.
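As a rough illustration of that criterion (a toy C sketch of my own, not drawn from the P&G paper or from any real ISA manual): tag each instruction as sensitive and/or privileged, then check whether the sensitive set is contained in the privileged set. The 68000/68010 entries below mirror the MOVE-from-SR example discussed in this post.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical instruction descriptions, for illustration only.
       "sensitive"  = reveals or changes machine state (mode, mapping, ...)
       "privileged" = traps when executed outside supervisor state         */
    struct insn {
        const char *name;
        bool sensitive;
        bool privileged;
    };

    /* Toy tables loosely modelled on the 68000/68010 case above:
       MOVE from SR is sensitive but did not trap on the 68000.            */
    static const struct insn m68000[] = {
        { "MOVE from SR",  true,  false },  /* sensitive yet unprivileged  */
        { "MOVE to SR",    true,  true  },
        { "ADD",           false, false },
    };
    static const struct insn m68010[] = {
        { "MOVE from SR",  true,  true  },  /* now traps, can be emulated  */
        { "MOVE from CCR", false, false },  /* unprivileged, not sensitive */
        { "MOVE to SR",    true,  true  },
        { "ADD",           false, false },
    };

    /* P&G theorem 1, informally: trap-and-emulate works if every
       sensitive instruction is also privileged.                           */
    static bool classically_virtualizable(const struct insn *set, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            if (set[i].sensitive && !set[i].privileged)
                return false;
        return true;
    }

    int main(void)
    {
        bool a = classically_virtualizable(m68000, sizeof m68000 / sizeof m68000[0]);
        bool b = classically_virtualizable(m68010, sizeof m68010 / sizeof m68010[0]);
        printf("68000-like: %s\n", a ? "classically virtualizable" : "not");
        printf("68010-like: %s\n", b ? "classically virtualizable" : "not");
        return 0;
    }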
So on XA and later there are ways to examine the system state from a
non-privileged program.
Interesting. I guess I'm curious what they changed; perhaps the
address mode bit? My reading suggests that they added some
enhancements to improve VM performance, but it's unclear to me
what they did that made XA unvirtualizable.
I found the paper on Virtualising VAX that was linked interesting ...
https://www.cs.cmu.edu/~15811/papers/vax_vmm.pdf
Thanks! I thought it was interesting.
So they modified the VAX microcode to get round this problem, however
the VAX has an additional challenges as it has four protection states,
not two like a S/370. There are additional issues with VAX covered in
this paper....
Critically, P&G never considered virtual memory beyond a single
relocation register. VM invented shadow paging to make it cope.
I imagine the paging scheme on the VAX would require similar
techniques.
https://homes.cs.aau.dk/~kleist/Courses/nds-e05/papers/virtual-vax.pdf
goes into some detail here. The ring compression thing is
interesting.
so the hypervisor had to be re-written to use the SIE microcode....
They failed because the MVS team needed it to develop MVS now zOS..
Sounds like you've got some inside baseball info here; I'd love
to see some sources if you can share them!
I think its widely "put about. For example in Melinda Varian's paper
from 1991:-
"VM AND THE VM COMMUNITY: Past, Present, and Future"
https://www.leeandmelindavarian.com/Melinda/neuvm.pdf
bottom of page 55 in the PDF:-
There is a widely believed (but possibly apocryphal) story that
anti-VM, pro-MVS forces at one point nearly succeeded in convincing the
company to kill VM, but the President of IBM, upon learning how heavily
the MVS developers depended upon VM, said simply, "If it's good enough
for you, it's good enough for the customers."
My problem with Varian's paper is that every time I sit down to
read just a part, I get sucked into it and an hour or two goes
by. It's just too good!
[snip]
macOS is one of the few that _can_ legally be called Unix. The
full list is here: https://www.opengroup.org/openbrand/register/
oh thanks for that...
Sure thing!
- Dan C.
In article <10ae172$33ukj$1@dont-email.me>,
David Wade <g4ugm@dave.invalid> wrote:
I think around the late 1970's IBM included the Shared File System which
finally allowed several users to have write access to the same file at
the same time...
.. so yes communications is not a problem.
You should tell him that CMS stands for, "Conversational Monitor
System". VM is all about communications, as most timeshared
systems are.
Dan, that's actually a retronym. It was originally called the
"Cambridge Monitor System". I think the renaming occurred when
it moved off the modified 360/40 to the 360/67, but I could be
hallucinating like an LLM.
On 17/09/2025 21:23, Dan Cross wrote:
In article <10aekoc$3a62f$1@dont-email.me>,
David Wade <g4ugm@dave.invalid> wrote:
On 17/09/2025 12:52, Dan Cross wrote:
In article <10ae172$33ukj$1@dont-email.me>,
David Wade <g4ugm@dave.invalid> wrote:
On 17/09/2025 07:08, Lawrence D'Oliveiro wrote:
On Wed, 17 Sep 2025 00:25:32 +0100, David Wade wrote:
Its interesting you say "modern virtualisation" because most of the
<https://en.wikipedia.org/wiki/Popek_and_Goldberg_virtualization_requirements>
I'm not sure about the specifics here, which is not to say that
I don't believe you, but I'd love to see a source. 370 is known
to meet the P&G requirements, and XA extended the architecture
with some new features for supporting virtual machines; do you
recall what they added that _violated_ the P&G requirements?
It is the same issue that differentiates a 68000 and the 68010 and which
prevented the VAX having a hypervisor without microcode changes...
Or the inverse? The issue with the 68000 was that it noted the
processor privilege mode, interrupt level, and debugging trace
control in the status register, and reading that register was
unprivileged. The 68010 simply made the instruction reading the
entire SR privileged, and added an unprivileged instruction to
read just the condition codes.
Sounds like IBM took an already clasically virtualizable machine
and made it not so for efficiency reasons, adding in new
sensitive and yet unprivileged instructions, but also a
compatibility hack via microcode and a new instruction to switch
to that?
I think it's to do with switching between 24 and 31 bit addressing...
.. SIE or Start Interpretive Execution creates a virtual environment
that the microcode manages.
As I am sure you know many of the earlier 370 class machines had similar
facilities in that ECPS:VM implemented some of the functions normally
carried out in the Hypervisor in the CPU microcode. I found this
free-to-download paper on it:-
https://dl.acm.org/doi/abs/10.1145/1096532.1096534
in many ways SIE is an extension of these assists...
Especially given that z/OS is actually several years older than VMS
and is still going very strongly indeed.
Could VMS still have been as strong to this day if different
decisions and paths in the past had been taken ?
... the IBM Z instruction set only has three instruction lengths -
2, 4 and 6 bytes, which has not changed since System/360 - and you
can always discover the length of each instruction from its first
two bytes. That makes it easier to decode multiple instructions
simultaneously, which is a bottleneck in x86 and x86-64, the
other long-lasting CISC instruction set.
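That two-byte property comes down to the two high-order bits of the first opcode byte, which have encoded the instruction length since System/360: 00 means 2 bytes, 01 or 10 means 4 bytes, 11 means 6 bytes. A small sketch of a length decoder (my own illustration, not lifted from any IBM manual):

    #include <stdint.h>
    #include <stdio.h>

    /* Instruction length on S/360 descendants (including today's IBM Z),
       taken from the two high-order bits of the first opcode byte:
       00 -> 2 bytes, 01 or 10 -> 4 bytes, 11 -> 6 bytes.                  */
    static unsigned insn_length(uint8_t first_byte)
    {
        switch (first_byte >> 6) {
        case 0:  return 2;
        case 1:
        case 2:  return 4;
        default: return 6;
        }
    }

    int main(void)
    {
        /* A few classic opcodes (first byte only):
           0x1A = AR (add register, RR), 0x5A = A (add, RX),
           0xD2 = MVC (move character, SS).                                */
        const uint8_t samples[] = { 0x1A, 0x5A, 0xD2 };
        for (unsigned i = 0; i < sizeof samples; i++)
            printf("opcode 0x%02X -> %u-byte instruction\n",
                   samples[i], insn_length(samples[i]));
        return 0;
    }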
I talked to a colleague, who returned to my employer after a
takeover, and remembers our business in the early 1980s. He's
perfectly clear that VMS was a far better OS for technical computing
than any of the proprietary minicomputer OSes of the time, all of
which are dead. But VAX couldn't match the performance of high-end
68000 Unix machines, followed by the RISCs, and the rest is history.
On Sat, 20 Sep 2025 21:13 +0100 (BST), John Dallman wrote:
... the IBM Z instruction set only has three instruction lengths -
2, 4 and 6 bytes, which has not changed since System/360 - and you
can always discover the length of each instruction from its first
two bytes. That makes having multiple instructions being decoded
simultaneously easier, which is a bottleneck in x86 and x86-64, the
other long-lasting CISC instruction set.
Mainframes were never designed for high CPU performance.
Look at the current Top500 list of the world's fastest machines; what
architectures do you see? IBM POWER offers a few contenders; also ARM,
I think MIPS, and of course the most common is x86-64. At some point
no doubt a RISC-V machine is likely to make an appearance.
No IBM Z. Not before, not now, not ever.
The VAX instruction set is quite nice in some ways and quite horrible in
others. Some of those made it hard to make it run very fast.
On 9/20/2025 4:13 PM, John Dallman wrote:
The VAX instruction set is quite nice in some ways and quite horrible in
others. Some of those made it hard to make run very fast.
The extremely variable-length instructions are a prime example.
CASEx is probably the worst.
Example of >100 bytes long:
50 bytes long
On 9/20/2025 8:51 PM, Arne Vajhøj wrote:
On 9/20/2025 4:13 PM, John Dallman wrote:
The VAX instruction set is quite nice in some ways and quite horrible in
others. Some of those made it hard to make run very fast.
The extremely variable-length instructions are a prime example.
CASEx is probably the worst.
Example of >100 bytes long:
Correction:
50 bytes long
On 9/20/2025 7:40 PM, Lawrence D'Oliveiro wrote:
Look at the current Top500 list of the world's fastest machines; what
architectures do you see? IBM POWER offers a few contenders; also ARM,
I think MIPS, and of course the most common is x86-64. At some point no
doubt a RISC-V machine is likely to make an appearance.
No IBM Z. Not before, not now, not ever.
Not now.
But once upon a time.
IBM 3090 with integrated vector facility and the equivalent and
compatible Amdahl vector.
Mainframes were never designed for high CPU performance.
He's perfectly clear that VMS was a far better OS for technical
computing than any of the proprietary minicomputer OSes of the time

Presumably your colleague was talking only about non-Unix systems?
Correction:
50 bytes long

But it is possible to make a 100 byte instruction.
If reusing jump destinations I guess it would be possible
to create a 32 KB instruction.
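The reason CASEx grows like that is that the jump table is part of the instruction encoding: after the opcode and the selector/base/limit operand specifiers comes one 16-bit displacement per case. A back-of-the-envelope sketch (the one-byte operand specifiers are an assumption for the simplest register/literal forms; this is not a general VAX encoder):

    #include <stdio.h>

    /* Rough size of a VAX CASEB/CASEW/CASEL encoding: 1 opcode byte,
       three operand specifiers (assumed 1 byte each here), and one
       16-bit displacement per case. A sketch, not an assembler.          */
    static unsigned casex_bytes(unsigned ncases)
    {
        return 1 + 3 + 2 * ncases;
    }

    int main(void)
    {
        printf("23 cases  -> about %u bytes\n", casex_bytes(23));   /* ~50  */
        printf("48 cases  -> about %u bytes\n", casex_bytes(48));   /* ~100 */
        printf("256 cases -> about %u bytes\n", casex_bytes(256));  /* ~516 */
        return 0;
    }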
On Sat, 20 Sep 2025 21:13 +0100 (BST), John Dallman wrote:
... the IBM Z instruction set only has three instruction lengths -
2, 4 and 6 bytes, which has not changed since System/360 - and you
can always discover the length of each instruction from its first
two bytes. That makes having multiple instructions being decoded
simultaneously easier, which is a bottleneck in x86 and x86-64, the
other long-lasting CISC instruction set.
Mainframes were never designed for high CPU performance.
Look at the current Top500 list of the world's fastest machines; what
architectures do you see? IBM POWER offers a few contenders; also ARM,
I think MIPS, and of course the most common is x86-64. At some point
no doubt a RISC-V machine is likely to make an appearance.
No IBM Z. Not before, not now, not ever.
On Sat, 20 Sep 2025 20:09:53 -0400, Arne Vajhøj wrote:
On 9/20/2025 7:40 PM, Lawrence D'Oliveiro wrote:
Look at the current Top500 list of the world's fastest machines; what
architectures do you see? IBM POWER offers a few contenders; also ARM,
I think MIPS, and of course the most common is x86-64. At some point no
doubt a RISC-V machine is likely to make an appearance.
No IBM Z. Not before, not now, not ever.
Not now.
But once upon a time.
IBM 3090 with integrated vector facility and the equivalent and
compatible Amdahl vector.
Was it ever competitive?
No. That's why it was abandoned.
In article <68cf5518$0$718$14726298@news.sunsite.dk>, arne@vajhoej.dk
(Arne Vajhøj) wrote:
Correction:
50 bytes long
But it is possible to make a 100 byte instruction.
If reusing jump destinations I guess it would be possible
to create a 32 KB instruction.
Ouch!
Register masks are another thing that make fast implementation difficult.
On 9/20/2025 9:40 PM, Lawrence D'Oliveiro wrote:
On Sat, 20 Sep 2025 20:09:53 -0400, Arne Vajhøj wrote:
On 9/20/2025 7:40 PM, Lawrence D'Oliveiro wrote:
Look at the current Top500 list of the world's fastest machines; what
architectures do you see? IBM POWER offers a few contenders; also ARM,
I think MIPS, and of course the most common is x86-64. At some point no
doubt a RISC-V machine is likely to make an appearance.
No IBM Z. Not before, not now, not ever.
Not now.
But once upon a time.
IBM 3090 with integrated vector facility and the equivalent and
compatible Amdahl vector.
Was it ever competitive?
No. That's why it was abandoned.
It was produced and sold for a number of years. In competition
with Cray, NEC, Fujitsu etc..
Production and sale stopped when the entire class
(single super computers with vector aggregate) went
away (and was replaced by distributed super computers).
On 9/20/2025 9:40 PM, Lawrence D'Oliveiro wrote:
On Sat, 20 Sep 2025 20:09:53 -0400, Arne Vajhøj wrote:
IBM 3090 with integrated vector facility and the equivalent and
compatible Amdahl vector.
Was it ever competitive?
No. That's why it was abandoned.
It was produced and sold for a number of years.
In article <10ane0m$1dl6v$4@dont-email.me>, ldo@nz.invalid (Lawrence D'Oliveiro) wrote:
Mainframes were never designed for high CPU performance.
IBM certainly intended them to be, and the IBM 360 Model 91 was the
first ever computer to use Tomasulo's algorithm, which is now ubiquitous
in fast microprocessors.
Modern IBM Z is not CPU-competitive with fast systems, but it is much
faster than the originals ...
On 21/09/2025 00:40, Lawrence D'Oliveiro wrote:
No, but these machines are all special purpose.
No IBM Z [in supercomputer rankings]. Not before, not now, not ever.
The real advantage of the 360/370 etc. architecture was the way it did
I/O. The original channel with its own dedicated processor and 8-bit bus
running at 1 MHz, yielding 8 Mbits/sec, was rapid for its era.
Then the use of block mode terminals so the management of input fields
was all done in the terminal controller. The Mainframe never saw an
interrupt until a complete form was filled in.
I think DEC (or was it HP?) forgot this with the Alpha.
I remember looking at Alpha for Microsoft Exchange on Windows/NT. It was really hard to justify using an Alpha because Exchange is very IO
intensive. You couldn't get enough RAID to use the CPU.
But we digress, I don't believe the techniques IBM use to perpetuate the
use of Z would have worked with VMS.
On Sun, 21 Sep 2025 10:56:33 +0100, David Wade wrote:
On 21/09/2025 00:40, Lawrence D'Oliveiro wrote:
No, but these machines are all special purpose.
No IBM Z [in supercomputer rankings]. Not before, not now, not ever.
My point exactly.
The real advantage of the 360/370 etc. architecture was the way it did
IO. The original channel with its own dedicated processor and 8-bit bus
running at 1Mhz yielding 8 Mbits/sec was rapid for its era.
Then the use of block mode terminals so the management of input fields
was all done in the terminal controller. The Mainframe never saw an
interrupt until a complete form was filled in.
In other words, mainframes are, and were, all about high I/O throughput
and efficient batch operation. Notice that they are *not* about low I/O latency, which is important for interactive and real-time work.
Imagine trying to run a full-screen text editor on those block-mode
terminals -- TECO, TPU/EVE, Emacs ... a few dozen users interrupting the
CPU on every keystroke would probably bring a big, multi-million-dollar
IBM system to its knees.
I think DEC or was it HP forgot this with the Alpha.
No they didn't. DEC machines were all about interactivity, right from
the original PDP-1. That meant low latency, even at the expense of high
throughput. That's why they were able to run circles around far more
expensive (and complex) IBM hardware in the interactive timesharing
market.
Remember machines in the various PDP families were quite popular in lab/ factory situations, doing monitoring, data collection and process control
in real time.
I remember looking at Alpha for Microsoft Exchange on Windows/NT. It was
really hard to justify using an Alpha because Exchange is very IO
intensive. You couldn't get enough RAID to use the CPU.
Or maybe Windows NT (and Exchange) were just too inefficient. Did you
compare performance with DEC Unix on the same hardware? Linux was also
starting to build a reputation for offering higher performance on the
vendor's own hardware than the vendor-supplied OS.
But we digress, I don't believe the techniques IBM use to perpetuate the
use of Z would have worked with VMS.
Correct. VMS, again, followed in that DEC tradition of being primarily an interactive, not a batch, OS.
On 21/09/2025 21:31, Lawrence D'Oliveiro wrote:
On Sun, 21 Sep 2025 10:56:33 +0100, David Wade wrote:
I remember looking at Alpha for Microsoft Exchange on Windows/NT. It was
really hard to justify using an Alpha because Exchange is very IO
intensive. You couldn't get enough RAID to use the CPU.
Or maybe Windows NT (and Exchange) were just too inefficient. Did you
compare performance with DEC Unix on the same hardware? Linux was also
starting to build a reputation for offering higher performance on the
vendor's own hardware than the vendor-supplied OS.
That's crap. Exchange is very efficient in terms of CPU use. It just
hammers the disks. So how could adding an Alpha CPU increase
performance? The Alpha is simply overkill. You could get the same
performance, running the same OS, on much cheaper, lower-performance
(in CPU terms) boxes. You just need a mirror set for every 250 users...

If you were Microsoft at the time you wanted Exchange, which only runs on
Windows, so other OSs were not an option.
On 21/09/2025 21:31, Lawrence D'Oliveiro wrote:
Imagine trying to run a full-screen text editor on those block-mode
terminals -- TECO, TPU/EVE, Emacs ... a few dozen users interrupting
the CPU on every keystroke would probably bring a big,
multi-million-dollar IBM system to its knees.
You actually can't write an editor that works like that, and you don't
need it. IBM's XEDIT is just as powerful as EMACS in its own way ...
... with the whole screen being multiple, editable fields.
You have to leverage what you have. I still prefer XEDIT to TECO or
Emacs.
I think DEC or was it HP forgot this with the Alpha.
No they didn't. DEC machines were all about interactivity, right from
the original PDP-1. That meant low latency, even at the expense of high
throughput. That's why they were able to run circles around far more
expensive (and complex) IBM hardware in the interactive timesharing
market.
Then why did they try and sell them as database servers or Exchange
servers?
In fact the converse applies. I well remember sharing a drink
with a friend who was rolling out office automation in a big bank.
At the time the VAX servers he had for All-In-One would not scale to all
the users he needed to deliver OA to. So senior managers and directors
got All-In-One, but the plebs got IBM's Office Vision because the
mainframe scaled better with large numbers of screens, with sub-second
response.
Remember machines in the various PDP families were quite popular in
lab/ factory situations, doing monitoring, data collection and process
control in real time.

We must have had hundreds of 11s running CAMAC crates, but there is
usually no random database access on such systems. Bang the data to tape
or floppy disk. Send to mainframe for analysis..
Exchange is very efficient in terms of CPU use. It just hammers the
disks.
If you were Microsoft at the time you wanted Exchange which only runs on Windows so other OSs not an option.
Correct. VMS, again, followed in that DEC tradition of being primarily
an interactive, not a batch, OS.
Well yes, but it degrades terribly when you get short of RAM and hit the
dreaded type-behind. I remember some of my users coming back from a VMS
introduction and saying there was no way they were having a VAX; how
could we get an IBM 4381? I told them and they were very happy...
For databases the argument was that 64 bit allowed for larger address
space and more memory and more caching would increase performance.
I don't know if that applies to Exchange as well.
On Sun, 21 Sep 2025 19:20:20 -0400, Arne Vajhøj wrote:
For databases the argument was that 64 bit allowed for larger address
space and more memory and more caching would increase performance.
I don't know if that applies to Exchange as well.
Even if it did, it would have been moot. Windows NT remained resolutely 32-bit, even on 64-bit machines like Alpha, right into the '00s.
But a 64 bit version was supposed to happen. People were expecting it.
MS dragged their feet and eventually pulled the plug on Alpha.
And then HP did the same and we got Itanium. And MS added Windows
support for that (64 bit that is).
On 9/21/2025 7:22 PM, Lawrence D'Oliveiro wrote:
On Sun, 21 Sep 2025 19:20:20 -0400, Arne Vajhoj wrote:
For databases the argument was that 64 bit allowed for larger address
space and more memory and more caching would increase performance.
I don't know if that applies to Exchange as well.
Even if it did, it would have been moot. Windows NT remained resolutely 32-bit, even on 64-bit machines like Alpha, right into the '00s.
Relevant point.
But a 64 bit version was supposed to happen. People were
expecting it. MS dragged their feet and eventually
pulled the plug on Alpha.
On Sun, 21 Sep 2025 19:48:30 -0400, Arne Vajhoj wrote:
But a 64 bit version was supposed to happen. People were expecting it.
MS dragged their feet and eventually pulled the plug on Alpha.
Obviously it was just too hard for Windows NT to support a mix of 32-bit
and 64-bit architectures. So much for portability ...
And then HP did the same and we got Itanium. And MS added Windows
support for that (64 bit that is).
Itanium was a very high-profile, big-budget project. I suppose it's
possible that HP and Intel contributed some of the costs for Microsoft to create 64-bit NT for that.
On 9/21/2025 6:35 PM, David Wade wrote:
On 21/09/2025 21:31, Lawrence D'Oliveiro wrote:
On Sun, 21 Sep 2025 10:56:33 +0100, David Wade wrote:
I remember looking at Alpha for Microsoft Exchange on Windows/NT. It
was
really hard to justify using an Alpha because Exchange is very IO
intensive. You couldn't get enough RAID to use the CPU.
Or maybe Windows NT (and Exchange) were just too inefficient. Did you
compare performance with DEC Unix on the same hardware? Linux was also
starting to build a reputation for offering higher performance on the
vendor's own hardware than the vendor-supplied OS.
Thats crap. Exchange is very efficient in terms of CPU use. It just
hammers the disks. So how could adding an Alpha CPU increase
performance. The alpha that is simply overkill. You could get the same
performance, running the same OS on much cheaper, lower performance,
in CPU terms boxes. You just need a mirror set for every 250 users...
If you were Microsoft at the time you wanted Exchange which only runs
on Windows so other OSs not an option.
For databases the argument was that 64 bit allowed for larger address
space and more memory and more caching would increase performance.
I don't know if that applies to Exchange as well.
Arne
[snip]
Modern IBM Z is not CPU-competitive with fast systems,
Microsoft was still committed to doing 64bit Windows for Itanium
though, and Itanium hardware wasn't ready yet. As they still had
plenty of Alphas lying around, they continued working on the 64bit
Alpha port internally until Itanium hardware was ready in
sufficient quantities.
So I think it's a bit disingenuous to claim Windows NT wasn't portable.
Windows 2000 was to introduce new VLM APIs that allow 32bit applications
on Alpha to access very large amounts of memory.
On Mon, 22 Sep 2025 19:57:42 +1200, David Goodwin wrote:
Windows 2000 was to introduce new VLM APIs that allow 32bit applications
on Alpha to access very large amounts of memory.
There's a reason the API is still called "Win32", not "Win64". Instead of
using POSIX-style symbolic type names like size_t, time_t and off_t, they explicitly use 32-bit types.
This leads to craziness like, when getting the size of a file, it returns
the high half and low half in separate 32-bit quantities, even on a 64-bit system, with native 64-bit integer support!
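For example, the classic Win32 call really does hand back the two halves separately; a minimal sketch of stitching them together (the file name is just a placeholder, and error handling is abbreviated):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Open an arbitrary file; "test.dat" is only a placeholder name.  */
        HANDLE h = CreateFileA("test.dat", GENERIC_READ, FILE_SHARE_READ,
                               NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL,
                               NULL);
        if (h == INVALID_HANDLE_VALUE)
            return 1;

        /* GetFileSize returns the low 32 bits and writes the high 32 bits
           through a pointer, even in a 64-bit build.                      */
        DWORD high = 0;
        DWORD low  = GetFileSize(h, &high);
        if (low == INVALID_FILE_SIZE && GetLastError() != NO_ERROR) {
            CloseHandle(h);
            return 1;
        }

        unsigned long long size = ((unsigned long long)high << 32) | low;
        printf("size: %llu bytes\n", size);

        CloseHandle(h);
        return 0;
    }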
On 9/22/2025 7:03 PM, Lawrence D'Oliveiro wrote:
On Mon, 22 Sep 2025 19:57:42 +1200, David Goodwin wrote:
Windows 2000 was to introduce new VLM APIs that allow 32bit applications >>> on Alpha to access very large amounts of memory.
There's a reason the API is still called "Win32", not "Win64". Instead of
using POSIX-style symbolic type names like size_t, time_t and off_t, they
explicitly use 32-bit types.
This leads to craziness like, when getting the size of a file, it returns
the high half and low half in separate 32-bit quantities, even on a
64-bit
system, with native 64-bit integer support!
There are two aspects here.

1) types that have different sizes on different
   platforms/compilers/configs vs types that have
   same sizes on all platforms/compilers/configs

Experience shows that the latter is better than the
former, because it makes it easier to write portable
code with well defined behavior.

off_t is a signed integer of unknown size.
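A small illustration of the difference (plain standard C, nothing Windows- or VMS-specific): the fixed-width types are the same size everywhere, while the width of off_t depends on the platform and build options.

    #include <stdio.h>
    #include <stdint.h>
    #include <sys/types.h>

    int main(void)
    {
        /* Fixed-width types: the same size on every conforming platform.  */
        printf("int32_t : %zu bytes\n", sizeof(int32_t));
        printf("int64_t : %zu bytes\n", sizeof(int64_t));

        /* off_t: a signed integer type, but its width is platform- and
           build-dependent (e.g. _FILE_OFFSET_BITS=64 on 32-bit Linux).    */
        printf("off_t   : %zu bytes\n", sizeof(off_t));
        return 0;
    }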
The fact that many of the ports you mention never made it to
production release, and even the ones (other than x86) that did are
now defunct, I think reinforces my point. The ports were difficult
and expensive to create, and difficult and expensive to maintain.
In the end they were all just abandoned.
Even the concept of a portable OS seems to have gone from Windows
nowadays. It has taken Microsoft a lot of trouble to come up with
the ARM port, for example, and I don't think the compatibility
issues have entirely been worked out, even after all these years.
A RISC-V Windows port will likely never happen.
Microsoft is a commercial organisation, and has to pay staff for all
the work done on Windows. This increases costs compared to
open-source work that doesn't show up in the costs for Linux, or the
BSDs.
I've worked on thoroughly portable application software for Windows
NT (and Unixes) since 1995.
In the mid-1990s, MIPS R3000 and R4000 were only available in
expensive workstations from MIPS, DEC and SGI. SGI had an ongoing
internal disagreement over embracing Windows NT or sticking with
Irix. The only NT machines they ever sold were Intel-based.
There was a company - NetPower - that planned to sell R4000-based
machines in the high-end PC market, and we had one of their
prototypes for porting. They had not launched the machines when the
Pentium Pro completely destroyed MIPS' performance advantage over
x86. NetPower switched to x86.
x86 was the usual platform for Windows NT.
The saying my team coined was "If you don't know about processor architectures, you want Intel. If you want the fastest CPU and can
cope with a lot of software not being available, you want Alpha. If
you really, really believe in IBM's strategy and are prepared to pay
at least three times as much to stick with it, you want PowerPC ..."
Alpha was killed by Compaq.
PowerPC was abandoned by Microsoft.
Itanium was an expensive fiasco in the general computing market. Its
sole benefit to Windows was that it taught Microsoft a lot about
doing 64-bit.
32-bit ARM was part of one of Microsoft's less good ideas. There
appears to be a widespread opinion within the company that the
Windows GUI is intrinsically and obviously superior to any other.
There is no single best GUI, IMHO.
[Microsoft] had obnoxiously cut-down versions of Windows which made
it very hard to test software unless you worked in the exact way
that Microsoft had prepared for.
A RISC-V Windows port will likely never happen.
Quite likely not, because RISC-V is suffering from an ongoing
failure to produce cores fast enough for desktops, or even mobile
devices. This has lasted long enough that I'm becoming doubtful it
will ever happen.
On Mon, 22 Sep 2025 20:13:12 +1200, David Goodwin wrote:
So I think its a bit disingenuous to claim Windows NT wasn't portable.
The fact that many of the ports you mention never made it to production release, and even the ones (other than x86) that did are now defunct, I think reinforces my point. The ports were difficult and expensive to
create, and difficult and expensive to maintain. In the end they were all just abandoned.
Even the concept of a portable OS seems to have gone from Windows
nowadays. It has taken Microsoft a lot of trouble to come up with the ARM port, for example, and I don't think the compatibility issues have
entirely been worked out, even after all these years.
A RISC-V Windows port will likely never happen.
In article <10asked$2lq0s$3@dont-email.me>, ldo@nz.invalid says...
On Mon, 22 Sep 2025 20:13:12 +1200, David Goodwin wrote:
So I think its a bit disingenuous to claim Windows NT wasn't
portable.
The fact that many of the ports you mention never made it to
production release, and even the ones (other than x86) that did are
now defunct, I think reinforces my point. The ports were difficult
and expensive to create, and difficult and expensive to maintain.
In the end they were all just abandoned.
What makes you think they were difficult or expensive?
There are plenty of other reasons why Microsoft, a for-profit
company, might choose to discontinue them.
[lots of other discussion of exactly how difficult and expensive it
is to maintain a cross-platform proprietary OS omitted]
Linux is not immune to this either.
Linux no longer supports Itanium for the same reason Windows no
longer supports Itanium: the costs started to outweigh the
benefits.
Even the concept of a portable OS seems to have gone from Windows
nowadays. It has taken Microsoft a lot of trouble to come up with
the ARM port, for example, and I don't think the compatibility
issues have entirely been worked out, even after all these years.
A lot of trouble? They made some (obviously) bad decisions with
Windows RT, but that doesn't imply the port was especially
difficult.
A RISC-V Windows port will likely never happen.
That of course depends on if it will ever look like a *profitable*
platform to sell Windows on.
In article <10asked$2lq0s$3@dont-email.me>, ldo@nz.invalid (Lawrence D'Oliveiro) wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
The fact that many of the ports you mention never made it to
production release, and even the ones (other than x86) that did are
now defunct, I think reinforces my point. The ports were difficult
and expensive to create, and difficult and expensive to maintain.
In the end they were all just abandoned.
Microsoft is a commercial organisation, and has to pay staff for all the
work done on Windows. This increases costs compared to open-source work
that doesn't show up in the costs for Linux, or the BSDs. I've worked on thoroughly portable application software for Windows NT (and Unixes)
since 1995. My employers have at least considered porting to every
Windows NT platform available. I've been involved with those decisions
and done the more recent ports.
i860 never appeared in machines people could buy.
On Di 23 Sep 2025 at 20:49, jgd@cix.co.uk (John Dallman) wrote:
i860 never appeared in machines people could buy.
My PPOE bought an Alliant machine - I think it was FX/2800 - that was
built with just such chips.
On Sun, 28 Sep 2025 22:21:58 +0200, Andreas Eder wrote:
On Di 23 Sep 2025 at 20:49, jgd@cix.co.uk (John Dallman) wrote:
i860 never appeared in machines people could buy.
My PPOE bought an Aliaint machine - i think it was FX/2800 - that was
built with just such chips.
According to Da Wiki, the FX/2800 range appeared in 1990 <https://en.wikipedia.org/wiki/Alliant_Computer_Systems#1990s>, so
Windows NT was still a (vapourware) glint in Dave Cutler's eye at that point.
Presumably it was running some kind of Unix system.
In article <87bjmufhuh.fsf@eder.anydns.info>, a_eder_muc@web.de (Andreas Eder) wrote:
On Di 23 Sep 2025 at 20:49, jgd@cix.co.uk (John Dallman) wrote:
i860 never appeared in machines people could buy.
My PPOE bought an Alliant machine - I think it was FX/2800 - that
was built with just such chips.
OK, I'm wrong. I am pretty sure that the i860 machines Microsoft used for early NT development were never sold.
On Sun, 28 Sep 2025 22:21:58 +0200, Andreas Eder wrote:
On Di 23 Sep 2025 at 20:49, jgd@cix.co.uk (John Dallman) wrote:
i860 never appeared in machines people could buy.
My PPOE bought an Alliant machine - I think it was FX/2800 - that was
built with just such chips.
According to Da Wiki, the FX/2800 range appeared in 1990 <https://en.wikipedia.org/wiki/Alliant_Computer_Systems#1990s>, so
Windows NT was still a (vapourware) glint in Dave Cutler's eye at that point.
Presumably it was running some kind of Unix system.
On So 28 Sep 2025 at 22:57, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sun, 28 Sep 2025 22:21:58 +0200, Andreas Eder wrote:
On Di 23 Sep 2025 at 20:49, jgd@cix.co.uk (John Dallman) wrote:
i860 never appeared in machines people could buy.
My PPOE bought an Alliant machine - I think it was FX/2800 - that was
built with just such chips.
According to Da Wiki, the FX/2800 range appeared in 1990 <https://en.wikipedia.org/wiki/Alliant_Computer_Systems#1990s>, so
Windows NT was still a (vapourware) glint in Dave Cutler's eye at that point.
Presumably it was running some kind of Unix system.
Yes, of course it was. But it was a machine people could buy with i860s inside.
The issue was that [i860] turned out to be not as good as expected, so Microsoft built some new hardware using MIPS instead and switched
development to that.
On Tue, 30 Sep 2025 15:28:34 +1300, David Goodwin wrote:
The issue was that [i860] turned out to be not as good as expected, so Microsoft built some new hardware using MIPS instead and switched development to that.
Didn't seem to help, though, did it?
The MIPS version of NT didn't last long, either.
IIRC the *reason* for the i860 port first, and then the MIPS port,
was to ensure that the operating system was developed from the start
with portability in mind.
So the MIPS port achieved its purpose. Once the job was done,
Microsoft sold their hardware designs to MIPS Technologies who used
it as a basis for a line of workstations until SGI bought the
company.
The MIPS version of NT didn't last long, either.
The MIPS version was never very popular to begin with - today the
hardware is flying pigs rare.
By late 1996 no one was buying Windows NT for MIPS systems anymore, so Microsoft stopped maintaining it.
On Wed, 1 Oct 2025 12:11:43 +1300, David Goodwin wrote:
IIRC the *reason* for the i860 port first, and then the MIPS port,
was to ensure that the operating system was developed from the start
with portability in mind.
We already know that one of the design goals for Windows NT from the beginning was "portability in mind". The question was whether it
achieved that. Ultimately, it did not.
So the MIPS port achieved its purpose. Once the job was done,
Microsoft sold their hardware designs to MIPS Technologies who used
it as a basis for a line of workstations until SGI bought the
company.
So, having done the port and climbed that mountain, it was realized
that climbing portability mountains in a proprietary OS is hard, and
so the Windows NT team soft-pedalled that particular design goal from
that point on ... ?
The MIPS version of NT didn't last long, either.
The MIPS version was never very popular to begin with - today the
hardware is flying pigs rare.
I already mentioned that MIPS processors outship x86 by about 3:1,
last I checked. You wouldn't call x86 "flying pigs rare", would you?
By late 1996 no one was buying Windows NT for MIPS systems anymore, so Microsoft stopped maintaining it.
People still buy them and run Linux on them, which is why Linux still continues to support them.
You've yet to give a good reason to believe [Windows NT] isn't
portable. The fact it has been released on six architectures and
publicly demonstrated on a seventh would suggest you are wrong.
As you have previously established, Microsoft is a for-profit
company. Their goal is to make profit, not to support as many
platforms as possible for as long as possible whether or not there
is worthwhile demand for Windows on those platforms.
I already mentioned that MIPS processors outship x86 by about 3:1,
last I checked. You wouldn't call x86 "flying pigs rare", would
you?
Set top boxes and routers were not the target market for Windows in
the 90s, and they are clearly not a market Microsoft is interested
in pursuing today.
This is fine as profit is not the goal and "for fun" is a good
enough motivation. Microsoft clearly has other goals and
motivations.
In the 90s Windows NT was only released for IBM PC compatibles, and
platforms which conformed (to varying degrees) to the ARC standard.
Later, from around 2000, after ARC ceased to be relevant, EFI was
adopted as the new standard.
On 10/1/2025 12:52 AM, David Goodwin wrote:
In the 90s Windows NT was only released for IBM PC compatibles, and platforms which conformed (to varying degrees) to the ARC standard.
Later, from around 2000, after ARC ceased to be relevant, EFI was
adopted as the new standard.
I seem to remember NT coming with a list of supported hardware,
and if you had anything else MS did not guarantee it would run at
all, much less perform acceptably.
In article <10bhqe4$uqv$3@dont-email.me>, ldo@nz.invalid says...
The MIPS version of NT didn't last long, either.
The MIPS version was never very popular to begin with - today the
hardware is flying pigs rare.
I already mentioned that MIPS processors outship x86 by about 3:1,
last I checked. You wouldn't call x86 "flying pigs rare", would you?
Set top boxes and routers were not the target market for Windows in the
90s, and they are clearly not a market Microsoft is interested in
pursuing today.
In article <MPG.43477ac440ab7840989764@news.zx.net.nz>,
David Goodwin <david+usenet@zx.net.nz> wrote:
In article <10bhqe4$uqv$3@dont-email.me>, ldo@nz.invalid says...
In general, arguing with Lawrence is like trying to reason with
a leaking pen: it doesn't change and just gets ink all over your
fingers.
The MIPS version of NT didn't last long, either.
The MIPS version was never very popular to begin with - today the
hardware is flying pigs rare.
I already mentioned that MIPS processors outship x86 by about 3:1,
last I checked. You wouldn't call x86 "flying pigs rare", would
you?
Set top boxes and routers were not the target market for Windows in
the 90s, and they are clearly not a market Microsoft is interested
in pursuing today.
Moreover, 99.9% of those MIPS CPUs that are outselling x86 are
embedded microcontrollers that just happen to use the MIPS
instruction set. If they run any OS at all, it's way more than
likely to be some kind of RTOS.
For that matter, ARM Cortex-M0 CPUs are _incredibly_ common, in
all sorts of things that many people are unaware even have a
microcontroller inside, but Linux isn't running on them.
There are cute hacks like uCLinux designed to run on constrained
systems, but I doubt that more than a tiny fraction of those
CPUs are running it, and besides,
it's not being used for
general-purpose compute, which is what Windows targets.
Bottom line: pointing to the number of MIPS CPUs shipped versus
x86 as some kind of "evidence" for the non-portability of
Windows is similar to pointing to the number of pineapples shipped
versus cars as evidence that cars don't grow on trees.
- Dan C.
You're thinking in terms of low-margin products using MIPS, aren't you? While that may be partially true, there are also some pretty high-margin
ones indeed.
In article <MPG.43477ac440ab7840989764@news.zx.net.nz>,
David Goodwin <david+usenet@zx.net.nz> wrote:
In article <10bhqe4$uqv$3@dont-email.me>, ldo@nz.invalid says...
In general, arguing with Lawrence is like trying to reason with
a leaking pen: it doesn't change and just gets ink all over your
fingers.
Set top boxes and routers were not the target market for Windows in the 90s, and they are clearly not a market Microsoft is interested in
pursuing today.
Moreover, 99.9% of those MIPS CPUs that are outselling x86 are
embedded microcontrollers that just happen to use the MIPS
instruction set. If they run any OS at all, it's way more than
likely to be some kind of RTOS.
On 2025-10-02, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
In article <MPG.43477ac440ab7840989764@news.zx.net.nz>,
David Goodwin <david+usenet@zx.net.nz> wrote:
Set top boxes and routers were not the target market for Windows in the
90s, and they are clearly not a market Microsoft is interested in
pursuing today.
Moreover, 99.9% of those MIPS CPUs that are outselling x86 are
embedded microcontrollers that just happen to use the MIPS
instruction set. If they run any OS at all, it's way more than
likely to be some kind of RTOS.
Here is one example at the lower end (which is also available in hobbyist friendly packaging):
https://uk.farnell.com/microchip/pic32mx250f128b-i-sp/mcu-32bit-pic32-40mhz-spdip-28/dp/2097773
On Wed, 1 Oct 2025 17:52:38 +1300, David Goodwin wrote:
You've yet to give a good reason to believe [Windows NT] isn't
portable. The fact it has been released on six architectures and
publicly demonstrated on a seventh would suggest you are wrong.
The fact that none of them survived reinforces my point. The ports
survived only long enough for Microsoft to claim bragging rights, and
then expired not long after.
In article <mk59hoFkf86U2@mid.individual.net>, bill.gunshannon@gmail.com says...
On 10/1/2025 12:52 AM, David Goodwin wrote:
In the 90s Windows NT was only released for IBM PC compatibles, and
platforms which conformed (to varying degrees) to the ARC standard.
Later, from around 2000, after ARC ceased to be relevant, EFI was
adopted as the new standard.
I seem to remember NT coming with a list of supported hardware
and if you had otherwise MS did not guarantee it would run at
all, much less perform acceptably.
Yeah, the Hardware Compatibility List (HCL) told you which machines (or other hardware) Windows NT was *known* to be compatible with - it had been
tested and should work fine. Anything not on the list came down to
whether the vendor had written drivers for it since the version of
Windows NT you're running came out. It took a while for some vendors to
start building NT drivers, and not all bothered until it started to
become more widespread with Windows 2000 and XP.
For RISC machines, the HCL mattered more. Rather than aiming to
standardise hardware under the ARC standard as PC vendors did under the
"IBM PC compatible" de facto standard, a lot of RISC vendors just relied
on using Windows NT's Hardware Abstraction Layer to paper over any
deviations from the ARC standard or prior machines they may have
produced. Each new machine got a new HAL module, and without one of
those Windows NT probably wouldn't even boot.
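To make the idea concrete, here is a purely illustrative sketch in C of the pattern a HAL provides - invented names, not the real NT HAL interface: the portable code only calls through a table of platform routines, and each machine supplies its own table.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical "HAL-style" table of platform hooks (names invented). */
    struct platform_ops {
        void     (*mask_irq)(unsigned irq);
        uint64_t (*read_timer)(void);
        void     (*console_putc)(char c);
    };

    /* A trivial stub standing in for one board's module. */
    static void     stub_mask_irq(unsigned irq) { (void)irq; }
    static uint64_t stub_read_timer(void)       { return 0; }
    static void     stub_putc(char c)           { putchar(c); }

    static const struct platform_ops stub_board_ops = {
        stub_mask_irq, stub_read_timer, stub_putc
    };

    /* The portable code only ever calls through the pointer, so supporting a
       new machine means supplying a new ops table, not changing this code. */
    static const struct platform_ops *hal = &stub_board_ops;

    int main(void)
    {
        for (const char *msg = "booting...\n"; *msg; msg++)
            hal->console_putc(*msg);
        return 0;
    }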
On 10/1/25 06:05, Lawrence D'Oliveiro wrote:
On Wed, 1 Oct 2025 17:52:38 +1300, David Goodwin wrote:
You've yet to give a good reason to believe [Windows NT] isn't
portable. The fact it has been released on six architectures and
publicly demonstrated on a seventh would suggest you are wrong.
The fact that none of them survived reinforces my point. The ports
survived only long enough for Microsoft to claim bragging rights, and
then expired not long after.
FWIR, the discussion is about NT portability, or not. Not whether other architectures survived, boxes sold etc. Classic deflection.
The fact that it was ported to so many other architectures reflects the
fact that it was designed with a HAL to enable just that ability.
Quite profound for its time, even if you hate Windows in general.
On 10/3/2025 8:12 AM, Simon Clubley wrote:
On 2025-10-02, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
In article <MPG.43477ac440ab7840989764@news.zx.net.nz>,
David Goodwin <david+usenet@zx.net.nz> wrote:
Set top boxes and routers were not the target market for Windows in the 90s, and they are clearly not a market Microsoft is interested in
pursuing today.
Moreover, 99.9% of those MIPS CPUs that are outselling x86 are
embedded microcontrollers that just happen to use the MIPS
instruction set. If they run any OS at all, it's way more than
likely to be some kind of RTOS.
Here is one example at the lower end (which is also available in hobbyist
friendly packaging):
https://uk.farnell.com/microchip/pic32mx250f128b-i-sp/mcu-32bit-pic32-40mhz-spdip-28/dp/2097773
I think this part of the spec illustrates the target market:
<quote>
MIPS32® M4K® core with MIPS16e® mode for up to 40% smaller code size
</quote>
Switching to 16 bit mode to reduce application size is not where
Microsoft is with Windows today.
On 2025-10-02, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
In article <MPG.43477ac440ab7840989764@news.zx.net.nz>,
David Goodwin <david+usenet@zx.net.nz> wrote:
In general, arguing with Lawrence is like trying to reason with
a leaking pen: it doesn't change and just gets ink all over your
fingers.
Do you maintain a fortune file of these comparisons to cycle through ? :-)
Set top boxes and routers were not the target market for Windows in the 90s, and they are clearly not a market Microsoft is interested in pursuing today.
Moreover, 99.9% of those MIPS CPUs that are outselling x86 are
embedded microcontrollers that just happen to use the MIPS
instruction set. If they run any OS at all, it's way more than
likely to be some kind of RTOS.
Here is one example at the lower end (which is also available in hobbyist friendly packaging):
https://uk.farnell.com/microchip/pic32mx250f128b-i-sp/mcu-32bit-pic32-40mhz-spdip-28/dp/2097773
In article <68dfc38f$0$673$14726298@news.sunsite.dk>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 10/3/2025 8:12 AM, Simon Clubley wrote:
On 2025-10-02, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
In article <MPG.43477ac440ab7840989764@news.zx.net.nz>,
David Goodwin <david+usenet@zx.net.nz> wrote:
Set top boxes and routers were not the target market for Windows in the 90s, and they are clearly not a market Microsoft is interested in
pursuing today.
Moreover, 99.9% of those MIPS CPUs that are outselling x86 are
embedded microcontrollers that just happen to use the MIPS
instruction set. If they run any OS at all, it's way more than
likely to be some kind of RTOS.
Here is one example at the lower end (which is also available in hobbyist friendly packaging):
https://uk.farnell.com/microchip/pic32mx250f128b-i-sp/mcu-32bit-pic32-40mhz-spdip-28/dp/2097773
I think this part of the spec illustrates the target market:
<quote>
MIPS32® M4K® core with MIPS16e® mode for up to 40% smaller code size
</quote>
Switching to 16 bit mode to reduce application size is not where
Microsoft is with Windows today.
a) MSFT isn't running Windows on that core, but Linux isn't
running on it, either.
b) MIPS16e is to MIPS 32 as Thumb or Thumb-2 is to ARM, or as
the RISC-V compressed ISA is to RISC-V.
c) Windows on ARM does use Thumb-2: https://devblogs.microsoft.com/oldnewthing/20210531-00/?p=105265
On 10/4/2025 10:14 PM, Dan Cross wrote:
In article <68dfc38f$0$673$14726298@news.sunsite.dk>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 10/3/2025 8:12 AM, Simon Clubley wrote:
On 2025-10-02, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
In article <MPG.43477ac440ab7840989764@news.zx.net.nz>,
David Goodwin <david+usenet@zx.net.nz> wrote:
Set top boxes and routers were not the target market for Windows in the 90s, and they are clearly not a market Microsoft is interested in
pursuing today.
Moreover, 99.9% of those MIPS CPUs that are outselling x86 are
embedded microcontrollers that just happen to use the MIPS
instruction set. If they run any OS at all, it's way more than
likely to be some kind of RTOS.
Here is one example at the lower end (which is also available in hobbyist friendly packaging):
https://uk.farnell.com/microchip/pic32mx250f128b-i-sp/mcu-32bit-pic32-40mhz-spdip-28/dp/2097773
I think this part of the spec illustrates the target market:
<quote>
MIPS32® M4K® core with MIPS16e® mode for up to 40% smaller code size
</quote>
Switching to 16 bit mode to reduce application size is not where
Microsoft is with Windows today.
a) MSFT isn't running Windows on that core, but Linux isn't
running on it, either.
b) MIPS16e is to MIPS 32 as Thumb or Thumb-2 is to ARM, or as
the RISC-V compressed ISA is to RISC-V.
c) Windows on ARM does use Thumb-2:
https://devblogs.microsoft.com/oldnewthing/20210531-00/?p=105265
So MIPS16e is not 16 bit in the traditional sense (16 bit registers,
16 bit address space etc.) but just shorter (16 bit) instruction encodings?
c) Windows on ARM does use Thumb-2: https://devblogs.microsoft.com/oldnewthing/20210531-00/?p=105265
In article <10bsk9q$svt$1@reader2.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
c) Windows on ARM does use Thumb-2:
https://devblogs.microsoft.com/oldnewthing/20210531-00/?p=105265
Interesting, thanks. 64-bit ARM Windows code does not use any form of
Thumb; it was left out of the 64-bit ARM ISA, which is very different
from the classic 32-bit ISA.
Most ARM64 cores also support A32 and T32
but if Windows is only using A64 it doesn't matter.
I mean, Pr1mos is basically gone. There's an emulator, but I
don't think (new) hardware has been sold for decades, since
Pr1me went under.
Solaris and HP-UX are on their last legs.
Is GCOS6 even still available, or is it just legacy support?
In article <10ad18c$2d4$1@reader1.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
I mean, Pr1mos is basically gone. There's an emulator, but I
don't think (new) hardware has been sold for decades, since
Pr1me went under.
Emulator here: <https://github.com/prirun/p50em>,
not to be confused with a version of (obsolete) Android for PCs
with the same name.
No new hardware since the early 1990s.
Solaris and HP-UX are on their last legs.
Oracle still say they're supporting Solaris 11.4 with mainstream support until 2031 and offering extended support until 2037, but that's 20 years after the final CPU model, the M8, was released.
HP-UX support from HPE ends at the end of 2025. The hardware stopped
being sold in 2021.
Is GCOS6 even still available, or is it just legacy support?
Seems to be all emulation now.
In article <10c32cu$ahp$2@reader2.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
Most ARM64 cores also support A32 and T32
That is changing, reasonably quickly. ARM stopped releasing new cores
that could do A32 or T32 in 2023, having been phasing them out since 2021. Apple's recent cores and Qualcomm's Oryons are likewise 64-bit only.
but if Windows is only using A64 it doesn't matter.
Microsoft supply compilers that can target 32-bit code, and run-time libraries for 32-bit programs. I've never tried building anything on ARM Windows for 32-bit so I don't know how well they work. I don't know if
ARM Windows 11, which is always a 64-bit OS, will notice that the
hardware is incapable of running A32/T32, but I hope to have appropriate hardware fairly soon.
In article <10ad18c$2d4$1@reader1.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
Solaris and HP-UX are on their last legs.
Oracle still say they're supporting Solaris 11.4 with mainstream support until 2031 and offering extended support until 2037, but that's 20 years after the final CPU model, the M8, was released.
HP-UX support from HPE ends at the end of 2025. The hardware stopped
being sold in 2021.
In article <memo.20251008170323.10624a@jgd.cix.co.uk>,
John Dallman <jgd@cix.co.uk> wrote:
In article <10ad18c$2d4$1@reader1.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
Solaris and HP-UX are on their last legs.
Oracle still say they're supporting Solaris 11.4 with mainstream support
until 2031 and offering extended support until 2037, but that's 20 years
after the final CPU model, the M8, was released.
I wonder what percentage of Solaris installations are on SPARC
and what are x86 at this point. 2037 is only 12 years away.
On 10/8/2025 3:45 PM, Dan Cross wrote:
In article <memo.20251008170323.10624a@jgd.cix.co.uk>,
John Dallman <jgd@cix.co.uk> wrote:
In article <10ad18c$2d4$1@reader1.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
Solaris and HP-UX are on their last legs.
Oracle still say they're supporting Solaris 11.4 with mainstream
support until 2031 and offering extended support until 2037, but
that's 20 years after the final CPU model, the M8, was released.
I wonder what percentage of Solaris installations are on SPARC
and what are x86 at this point. 2037 is only 12 years away.
Back in the Sun days Solaris/SPARC was way more common than
Solaris/x86-64 (and Solaris/x86 before that).
And I doubt it has changed. I don't recall a time where
Solaris/SPARC was considered dead and Solaris/x86-64 was
considered to have a bright future. And one migration Solaris/SPARC->Linux/x86-64 is cheaper than two migrations Solaris/SPARC->Solaris/x86-64->Linux/x86-64.
Arne
On Wed, 8 Oct 2025 17:00:31 -0400
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 10/8/2025 3:45 PM, Dan Cross wrote:
In article <memo.20251008170323.10624a@jgd.cix.co.uk>,
John Dallman <jgd@cix.co.uk> wrote:
In article <10ad18c$2d4$1@reader1.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
Solaris and HP-UX are on their last legs.
Oracle still say they're supporting Solaris 11.4 with mainstream
support until 2031 and offering extended support until 2037, but
that's 20 years after the final CPU model, the M8, was released.
I wonder what percentage of Solaris installations are on SPARC
and what are x86 at this point. 2037 is only 12 years away.
Back in the Sun days Solaris/SPARC was way more common than
Solaris/x86-64 (and Solaris/x86 before that).
And I doubt it has changed. I don't recall a time where
Solaris/SPARC was considered dead and Solaris/x86-64 was
considered to have a bright future. And one migration
Solaris/SPARC->Linux/x86-64 is cheaper than two migrations
Solaris/SPARC->Solaris/x86-64->Linux/x86-64.
Arne
If we believe that submission of benchmark results is an indicator of interest then it looks like Oracle lost interest in Solaris for x86-64 approximately in 2012H2, i.e. a few years earlier than they finally
decided to stop development of Solaris for SPARC.
Looks like the Linux releases that support Sparc are more recent than
the Solaris builds...
If we believe that submission of benchmark results is an indicator
of interest then it looks like Oracle lost interest in Solaris for
x86-64 approximately in 2012H2, i.e. few years earlier than they
finally decided to stop development of Solaris for SPARC.
On 10/8/2025 3:45 PM, Dan Cross wrote:
In article <memo.20251008170323.10624a@jgd.cix.co.uk>,
John Dallman <jgd@cix.co.uk> wrote:
In article <10ad18c$2d4$1@reader1.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
Solaris and HP-UX are on their last legs.
Oracle still say they're supporting Solaris 11.4 with mainstream support until 2031 and offering extended support until 2037, but that's 20 years after the final CPU model, the M8, was released.
I wonder what percentage of Solaris installations are on SPARC
and what are x86 at this point. 2037 is only 12 years away.
Back in the Sun days Solaris/SPARC was way more common than
Solaris/x86-64 (and Solaris/x86 before that).
And I doubt it has changed. I don't recall a time where
Solaris/SPARC was considered dead and Solaris/x86-64 was
considered to have a bright future.
And one migration
Solaris/SPARC->Linux/x86-64 is cheaper than two migrations Solaris/SPARC->Solaris/x86-64->Linux/x86-64.
In article <20251009010203.000044ac@yahoo.com>, already5chosen@yahoo.com (Michael S) wrote:
If we believe that submission of benchmark results is an indicator
of interest then it looks like Oracle lost interest in Solaris for
x86-64 approximately in 2012H2, i.e. few years earlier than they
finally decided to stop development of Solaris for SPARC.
That's about right. Sun would occasionally ask my employers to support Solaris on x86-64 (we'd supported it on SPARC for many years) but they
were never able to demonstrate any customer demand. After the Oracle takeover, the requests stopped: Oracle wanted to sell proprietary
hardware, until they lost interest in Solaris in favour of cloud.
Interesting point. So LPARs are physical partitioning. I guess almost a
type-0 hypervisor. You can't over-commit. However it's part of the
hardware so basically "free". Given you get a minimum of 68 cores in any
current Z box it isn't usually a problem. If you need to over-commit
then you can buy zVM, a type-1 hypervisor, which is really a re-badged
VM/XA from the 1970s.
My understanding was that LPARs as configured using PR/SM are logical resources in terms of CPU, managed using a derivative of VM
integrated at firmware level, hence not physical partitioning as I'd understand it (such as how a Sun E10K manages this).
Oracle wanted to sell proprietary hardware, until they lost interest
in Solaris in favour of cloud.
On Fri, 10 Oct 2025 08:14 +0100 (BST), John Dallman wrote:
Oracle wanted to sell proprietary hardware, until they lost interest
in Solaris in favour of cloud.
I'm sure the fans of the various OpenSolaris offshoots would love to see Solaris open-sourced again. Surely it would be no loss to Oracle to do
this now.
In article <10c6jdf$1sato$2@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 10/8/2025 3:45 PM, Dan Cross wrote:
I wonder what percentage of Solaris installations are on SPARC
and what are x86 at this point. 2037 is only 12 years away.
Back in the Sun days Solaris/SPARC was way more common than
Solaris/x86-64 (and Solaris/x86 before that).
Yup.
And I doubt it has changed. I don't recall a time where
Solaris/SPARC was considered dead and Solaris/x86-64 was
considered to have a bright future.
Within Sun a lot of senior engineers realized by the mid-1990s
that SPARC was going to be a dead end. They just weren't going
to be able to compete against Intel, and the realization within
(at least) the Solaris kernel team was that if Sun didn't pivot
to x86, they'd be doomed. And those folks were largely correct.
But Sun just didn't want to give up that high margin business
and compete against the likes of Dell on volume.
Yes.
And one migration
Solaris/SPARC->Linux/x86-64 is cheaper than two migrations
Solaris/SPARC->Solaris/x86-64->Linux/x86-64.
OTOH, if someone is still stuck with Solaris for some reason,
they can still buy modern hardware from Dell, HPE, or Lenovo and
there's a good chance Solaris 11.4 will work on it.
I don't think [Oracle] were ever interested in Sun's earlier,
traditional markets: workstations and so forth were uninteresting.
I also don't think they took Linux seriously enough, and by the
time they did, it was too late: had OpenSolaris happened 8 years
earlier, maybe it could have been a viable alternative, but as
it was, it was too little, too late.
On 10/10/2025 9:30 AM, Lawrence D'Oliveiro wrote:
I'm sure the fans of the various OpenSolaris offshoots would love to
see Solaris open-sourced again. Surely it would be no loss to Oracle to
do this now.
I doubt it would make a difference.
They got a copy years ago. They could not make it a success.
In article <10can8q$dop$1@reader2.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
I don't think [Oracle] were ever interested in Sun's earlier,
traditional markets: workstations and so forth were uninteresting.
By the time of the takeover, Sun wasn't very interested in SPARC workstations, because their market share was close to zero. x86-64
Windows and Linux had demolished all the traditional Unix workstations
by then. The Sun server business was still going, but losing money
pretty fast.
The vast majority of Solaris system revenue was made after that. And questionable whether they could have made the same revenue on x86
due to the competition.
But it still does not make sense to do a migration that will require
another migration later compared to just do one migration to
something with a future.
Those guys came back after over a year with a huge pile of changes
to the Solaris kernel that made it capable of running a RHEL3.0 x86
32-bit userland. But only that, not any other distro.
The Solaris kernel people weren't willing to take on a load of
changes that weren't done to their standards, and after a lot of
arguing, the whole job was abandoned.
Open Solaris seemed to be based on the idea that Linux people would
prefer to work on Solaris, which is a terrible failure in
understanding their motivations.
In article <10can8q$dop$1@reader2.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
I don't think [Oracle] were ever interested in Sun's earlier,
traditional markets: workstations and so forth were uninteresting.
By the time of the takeover, Sun wasn't very interested in SPARC workstations, because their market share was close to zero. x86-64
Windows and Linux had demolished all the traditional Unix workstations by then. The Sun server business was still going, but losing money pretty
fast.
On 10/10/2025 9:30 AM, Lawrence D'Oliveiro wrote:
On Fri, 10 Oct 2025 08:14 +0100 (BST), John Dallman wrote:
Oracle wanted to sell proprietary hardware, until they lost interest
in Solaris in favour of cloud.
I'm sure the fans of the various OpenSolaris offshoots would love to see Solaris open-sourced again. Surely it would be no loss to Oracle to do
this now.
I doubt it would make a difference.
They got a copy years ago. They could not make it a success.
There is no reason to believe that getting a copy again would make
it a success.
On 10/10/2025 6:14 AM, Dan Cross wrote:
In article <10c6jdf$1sato$2@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 10/8/2025 3:45 PM, Dan Cross wrote:
I wonder what percentage of Solaris installations are on SPARC
and what are x86 at this point. 2037 is only 12 years away.
Back in the Sun days Solaris/SPARC was way more common than
Solaris/x86-64 (and Solaris/x86 before that).
Yup.
And I doubt it has changed. I don't recall a time where
Solaris/SPARC was considered dead and Solaris/x86-64 was
considered to have a bright future.
Within Sun a lot of senior engineers realized by the mid-1990s
that SPARC was going to be a dead end. They just weren't going
to be able to compete against Intel, and the realization within
(at least) the Solaris kernel team was that if Sun didn't pivot
to x86, they'd be doomed. And those folks were largely correct.
But Sun just didn't want to give up that high margin business
and compete against the likes of Dell on volume.
Good decision. The vast majority of Solaris system revenue was
made after that. And questionable whether they could have made
the same revenue on x86 due to the competition.
And one migration
Solaris/SPARC->Linux/x86-64 is cheaper than two migrations
Solaris/SPARC->Solaris/x86-64->Linux/x86-64.
OTOH, if someone is still stuck with Solaris for some reason,
they can still buy modern hardware from Dell, HPE, or Lenovo and
there's a good chance Solaris 11.4 will work on it.
Yes.
But it still does not make sense to do a migration that will
require another migration later compared to just do one
migration to something with a future.
In article <10camac$nch$1@reader2.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
I also don't think they took Linux seriously enough, and by the
time they did, it was too late: had OpenSolaris happened 8 years
earlier, maybe it could have been a viable alternative, but as
it was, it was too little, too late.
They wasted several years on a fiasco. Since the Linux system calls were somewhat Solaris-like in those days, they had the idea of making Solaris
x86 capable of running Linux binaries. So they hired a bunch of Linux
people - apparently not very good ones - and set them to work. Those guys came back after over a year with a huge pile of changes to the Solaris
kernel that made it capable of running a RHEL3.0 x86 32-bit userland. But only that, not any other distro. The Solaris kernel people weren't
willing to take on a load of changes that weren't done to their standards, and after a lot of arguing, the whole job was abandoned.
Open Solaris seemed to be based on the idea that Linux people would
prefer to work on Solaris, which is a terrible failure in understanding
their motivations.
In article <memo.20251007223453.10624Y@jgd.cix.co.uk>,
John Dallman <jgd@cix.co.uk> wrote:
That is changing, reasonably quickly. ARM stopped releasing new
cores that could do A32 or T32 in 2023, having been phasing them
out since 2021.
I wonder if this suggests that they'll introduce a compressed
instruction set a la Thumb for 64 bit mode; -M profile seems to
top out at ARMv8.1; and according to the ARMv8-M ARM, only
supports T32.
Presumably at some point they'll introduce an ARMv9 core for
the embedded market and this will become an issue.
Or maybe they won't. We could be in a world of 32-bit embedded
cores in that space for a very long time indeed.
In article <memo.20251010201326.10624g@jgd.cix.co.uk>,
John Dallman <jgd@cix.co.uk> wrote:
They wasted several years on a fiasco.
Do you mean the LX-branded zone stuff? Or something else?
I'd argue that Sun more or less abandoned the workstation market
when they switched to SVR4 and away from BSD with the move to
Solaris from SunOS 4.
I think also the focus shifted dramatically once Java came onto
the scene; Sun seemed to move away from its traditional computer
business in order to focus more fully on Java and its ecosystem.
I'm sure the fans of the various OpenSolaris offshoots would love to see Solaris open-sourced again.
The action is at illumos now. I'm on various IRC channels about Solaris,
In article <10cdflq$5c0$2@reader2.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
I think also the focus shifted dramatically once Java came onto the
scene; Sun seemed to move away from its traditional computer business
in order to focus more full on java and its ecosystem.
They tried that on us, but were deeply unconvincing.
[Sun's] initial success was because they built the computer that
they themselves wanted to use, and came up with a computer a
bunch of other people wanted to use, too. It was a joy to use a
Sun workstation at the time. But then they stopped doing that.
In article <10cb3rt$1hmm$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 10/10/2025 6:14 AM, Dan Cross wrote:
Within Sun a lot of senior engineers realized by the mid-1990s
that SPARC was going to be a dead end. They just weren't going
to be able to compete against Intel, and the realization within
(at least) the Solaris kernel team was that if Sun didn't pivot
to x86, they'd be doomed. And those folks were largely correct.
But Sun just didn't want to give up that high margin business
and compete against the likes of Dell on volume.
Good decision. The vast majority of Solaris system revenue was
made after that. And questionable whether they could have made
the same revenue on x86 due to the competition.
Good in the short term, perhaps, but bad in the long term.
And one migration
Solaris/SPARC->Linux/x86-64 is cheaper than two migrations
Solaris/SPARC->Solaris/x86-64->Linux/x86-64.
OTOH, if someone is still stuck with Solaris for some reason,
they can still buy modern hardware from Dell, HPE, or Lenovo and
there's a good chance Solaris 11.4 will work on it.
Yes.
But it still does not make sense to do a migration that will
require another migration later compared to just do one
migration to something with a future.
One can't really make a categorical statement like that. It
depends too much on the application, and how much it leveraged
the Solaris environment. For instance, something that makes
heavy use of zones, SMF, ZFS, doors, the management stuff, etc,
might be much easier to move to Solaris x86 than Linux.
For
that matter, it may be easier to move to illumos rather than
Linux.
In article <memo.20251010201326.10624f@jgd.cix.co.uk>,
John Dallman <jgd@cix.co.uk> wrote:
In article <10can8q$dop$1@reader2.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
I don't think [Oracle] were ever interested in Sun's earlier,
traditional markets: workstations and so forth were uninteresting.
By the time of the takeover, Sun wasn't very interested in SPARC
workstations, because their market share was close to zero. x86-64
Windows and Linux had demolished all the traditional Unix workstations by
then. The Sun server business was still going, but losing money pretty
fast.
Oh totally. Sun was a shell of its former self by then. Oracle
didn't care; everything was done through a web browser anyway.
I'd argue that Sun more or less abandoned the workstation market
when they switched to SVR4 and away from BSD with the move to
Solaris from SunOS 4. I think also the focus shifted
dramatically once Java came onto the scene; Sun seemed to move
away from its traditional computer business in order to focus
more fully on Java and its ecosystem.
Sun did not make money on Java and did not even have potential
for making money on Java.
There was not much money in Java SE. The money was in Java EE.
Sun's Java EE products sucked big time.
But moving to Illumos is not moving to a well supported platform with a highly likely future.
On Sun, 12 Oct 2025 21:11:49 -0400, Arne Vajhøj wrote:
Sun did not make money on Java and did not even have potential
for making money on Java.
There was not much money in Java SE. The money was in Java EE.
Sun's Java EE products sucked big time.
I thought Oracle acquired Sun for one reason and one reason only: to get control of Java.
Seeing a good long term business for selling proprietary Unix
for x86-64 requires a very good imagination.
Arne
On 13/10/2025 02:07, Arne Vajhøj wrote:
Seeing a good long term business for selling proprietary Unix
for x86-64 requires a very good imagination.
Arne
Red Hat do well out of it, although not quite proprietary, not quite open source...
On 13/10/2025 11:57, Chris Townley wrote:
On 13/10/2025 02:07, Arne Vajhøj wrote:
Seeing a good long term business for selling proprietary Unix
for x86-64 requires a very good imagination.
Red Hat do well out of it, although not quite proprietary, not quite open
source...
RedHat have worked hard to make it impossible to use their Linux without paying. In addition they do well because in order to comply with many security policies you need supported software.
So unless you are the French Gendarmerie, who have their own Linux
Distro, you need to pay RedHat for support. It's not cheap
They were expecting us to be impressed that they'd done JNI wrappers of
about ten functions from our 500+ function API. We said "Presumably you
have tools to generate this stuff automatically?" and they didn't
understand what we were talking about.
BTW, RHEL 10 appears to have completely dropped 32-bit application
support (this is different from RHEL itself having a 32-bit RHEL
version, which got dropped around RHEL 7).
If true, this means all your 32-bit legacy applications will stop
working on RHEL 10. Goodness knows what they were thinking when
they did that.
On 13/10/2025 02:07, Arne Vajhøj wrote:
Seeing a good long term business for selling proprietary Unix
for x86-64 requires a very good imagination.
Red Hat do well out of it, although not quite proprietary, not quite open source...
On 13/10/2025 11:57, Chris Townley wrote:
On 13/10/2025 02:07, Arne Vajhøj wrote:
Seeing a good long term business for selling proprietary Unix
for x86-64 requires a very good imagination.
Red Hat do well out of it, although not quite proprietary, not quite
open source...
RedHat have worked hard to make it impossible to use their Linux without paying. In addition they do well because in order to comply with many security policies you need supported software.
So unless you are the French Gendarmerie, who have their own Linux
Distro, you need to pay RedHat for support. It's not cheap
On 2025-10-11, John Dallman <jgd@cix.co.uk> wrote:
They were expecting us to be impressed that they'd done JNI wrappers of
about ten functions from our 500+ function API. We said "Presumably you
have tools to generate this stuff automatically?" and they didn't
understand what we were talking about.
Bloody &*&^* stupid JNI. :-( As someone who writes some programs for
personal use on Android, containing a mixture of Java and C code,
I bloody well _hate_ that interface. :-(
I believe I may have mentioned this once or twice before. :-)
In article <10cdflq$5c0$2@reader2.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
I think also the focus shifted dramatically once Java came onto
the scene; Sun seemed to move away from its traditional computer
business in order to focus more full on java and its ecosystem.
They tried that on us, but were deeply unconvincing.
They were expecting us to be impressed that they'd done JNI wrappers of
about ten functions from our 500+ function API. We said "Presumably you
have tools to generate this stuff automatically?" and they didn't
understand what we were talking about.
In general: avoid JNI unless you really need it.
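For anyone who hasn't had the pleasure, the per-function boilerplate looks roughly like this - a hand-written sketch for one hypothetical native function exposed as com.example.NativeLib.solve(); multiply by 500+ functions to see why you want a generator (SWIG, or jextract these days) rather than doing it by hand:

    #include <jni.h>

    /* Hypothetical native routine being exposed to Java. */
    static double solve(double x) { return x * x; }

    /* One stub like this per exported function; the symbol name must match the
       Java side exactly (here a hypothetical
       package com.example; class NativeLib { static native double solve(double x); }). */
    JNIEXPORT jdouble JNICALL
    Java_com_example_NativeLib_solve(JNIEnv *env, jclass cls, jdouble x)
    {
        (void)env; (void)cls;               /* unused for a plain scalar call */
        return (jdouble)solve((double)x);
    }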
So unless you are the French Gendarmerie, who have their own Linux
Distro, you need to pay RedHat for support. It's not cheap
Support is easy. If you need support you pay.
On 10/13/2025 6:41 AM, David Wade wrote:
On 13/10/2025 11:57, Chris Townley wrote:
On 13/10/2025 02:07, Arne Vajhøj wrote:
Seeing a good long term business for selling proprietary Unix
for x86-64 requires a very good imagination.
Red Hat do well out of it, although not quite proprietary, not quite
open source...
RedHat have worked hard to make it impossible to use their Linux
without paying. In addition they do well because in order to comply
with many security policies you need supported software.
So unless you are the French Gendarmerie, who have their own Linux
Distro, you need to pay RedHat for support. It's not cheap
RHEL product management is getting squeezed. The IBM bean counters
want higher profit. And sales are dropping due to companies moving
their Linux workload from on-prem RHEL to cloud non-RHEL. So they
have done some "crazy" stuff to make it harder for RHEL clones.
But RHEL clones still exist. Rocky, Alma, Oracle, Amazon etc..
Redhat's changes may have reduced compatibility from 100%
to 99.95%, but my impression is that the industry in general
consider the compatibility acceptable.
Support is easy. If you need support you pay. Redhat is still
an obvious choice in that case. But few make that choice, because
most only provide containers and let the cloud vendor provide
the host Linux. And they don't want to pay Redhat.
Arne
On 13/10/2025 21:38, Arne Vajhøj wrote:
On 10/13/2025 6:41 AM, David Wade wrote:
On 13/10/2025 11:57, Chris Townley wrote:
On 13/10/2025 02:07, Arne Vajhøj wrote:
Seeing a good long term business for selling proprietary Unix
for x86-64 requires a very good imagination.
Red Hat do well out of it, although not quite proprietary, not quite
open source...
RedHat have worked hard to make it impossible to use their Linux
without paying. In addition they do well because in order to comply
with many security policies you need supported software.
So unless you are the French Gendarmerie, who have their own Linux
Distro, you need to pay RedHat for support. It's not cheap
RHEL product management is getting squeezed. The IBM bean counters
want higher profit. And sales are dropping due to companies moving
their Linux workload from on-prem RHEL to cloud non-RHEL. So they
have done some "crazy" stuff to make it harder for RHEL clones.
But RHEL clones still exist. Rocky, Alma, Oracle, Amazon etc..
Redhat's changes may have reduced compatibility from 100%
to 99.95%, but my impression is that the industry in general
consider the compatibility acceptable.
Support is easy. If you need support you pay. Redhat is still
an obvious choice in that case. But few make that choice, because
most only provide containers and let the cloud vendor provide
the host Linux. And they don't want to pay Redhat.
My former company would only use RHEL
On Mon, 13 Oct 2025 16:38:38 -0400, Arne Vajhøj wrote:
Support is easy. If you need support you pay.
The thing is, expertise in a non-proprietary product is not confined to
the company that makes that product. There is plenty of Open Source
expertise available in the community that you can hire. If you rely on an outside company, particularly a large one, you know that inevitably their interests align with their shareholders, and sooner or later will come
into conflict with yours (as happens with Microsoft, for example). If you rely on your own employees, that can't happen.
On 10/13/2025 5:53 PM, Chris Townley wrote:
On 13/10/2025 21:38, Arne Vajhøj wrote:
On 10/13/2025 6:41 AM, David Wade wrote:
On 13/10/2025 11:57, Chris Townley wrote:
On 13/10/2025 02:07, Arne Vajhøj wrote:
Seeing a good long term business for selling proprietary Unix
for x86-64 requires a very good imagination.
Red Hat do well out of it, although not quite proprietary, not quite
open source...
RedHat have worked hard to make it impossible to use their Linux
without paying. In addition they do well because in order to comply
with many security policies you need supported software.
So unless you are the French Gendarmerie, who have their own Linux
Distro, you need to pay RedHat for support. It's not cheap
RHEL product management is getting squeezed. The IBM bean counters
want higher profit. And sales are dropping due to companies moving
their Linux workload from on-prem RHEL to cloud non-RHEL. So they
have done some "crazy" stuff to make it harder for RHEL clones.
But RHEL clones still exist. Rocky, Alma, Oracle, Amazon etc..
Redhat's changes may have reduced compatibility from 100%
to 99.95%, but my impression is that the industry in general
consider the compatibility acceptable.
Support is easy. If you need support you pay. Redhat is still
an obvious choice in that case. But few make that choice, because
most only provide containers and let the cloud vendor provide
the host Linux. And they don't want to pay Redhat.
My former company would only use RHEL
On-prem I assume?
Because paying the cloud vendor for VMs, installing
RHEL and Kubernetes (in the form of OpenShift for a Redhat
shop) instead of just using EKS/AKS/GKE would be
"unusual".
On 10/13/2025 5:36 PM, Lawrence D'Oliveiro wrote:
On Mon, 13 Oct 2025 16:38:38 -0400, Arne Vajhøj wrote:
Support is easy. If you need support you pay.
The thing is, expertise in a non-proprietary product is not
confined to the company that makes that product. There is plenty of
Open Source expertise available in the community that you can hire.
If you rely on an outside company, particularly a large one, you
know that inevitably their interests align with their shareholders,
and sooner or later will come into conflict with yours (as happens
with Microsoft, for example). If you rely on your own employees,
that can't happen.
Enterprises with a need to document support cannot just hire a
random consultant when the need arises.
On Mon, 13 Oct 2025 19:26:56 -0400, Arne Vajhøj wrote:
On 10/13/2025 5:36 PM, Lawrence D'Oliveiro wrote:
On Mon, 13 Oct 2025 16:38:38 -0400, Arne Vajhøj wrote:
Support is easy. If you need support you pay.
The thing is, expertise in a non-proprietary product is not
confined to the company that makes that product. There is plenty of
Open Source expertise available in the community that you can hire.
If you rely on an outside company, particularly a large one, you
know that inevitably their interests align with their shareholders,
and sooner or later will come into conflict with yours (as happens
with Microsoft, for example). If you rely on your own employees,
that can't happen.
Enterprises with a need to document support cannot just hire a
random consultant when the need arises.
If something is mission-critical and core to their entire business,
they want a staff they can rely on, completely, to manage that
properly.
In article <10cdflq$5c0$2@reader2.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
I'd argue that Sun more or less abandoned the workstation market
when they switched to SVR4 and away from BSD with the move to
Solaris from SunOS 4.
That doesn't match my experience. Solaris was first released in 1992 and
had taken over by 1996. Sun released the Blade workstations in 2000, and
new Ultra workstations in 2006, and didn't discontinue them until 2008.
Until at least 2005, we had customers doing serious work on SPARC workstations, although nobody was switching to them from other platforms.
Our stuff does gain significantly from 64-bit addressing; I could believe fields that didn't need 64-bit gave up on Sun earlier.
I think also the focus shifted dramatically once Java came onto
the scene; Sun seemed to move away from its traditional computer
business in order to focus more full on java and its ecosystem.
They tried that on us, but were deeply unconvincing.
They were expecting us to be impressed that they'd done JNI wrappers of
about ten functions from our 500+ function API. We said "Presumably you
have tools to generate this stuff automatically?" and they didn't
understand what we were talking about.
[Sun's] initial success was because they built the computer that
they themselves wanted to use, and came up with a computer a
bunch of other people wanted to use, too. It was a joy to use a
Sun workstation at the time. But then they stopped doing that.
Remember that the original SUN-1 board was designed by Andy Bechtolsheim from a
specification given to him by Ralph Gorin, director of the Stanford academic computing facility (LOTS), who envisioned a 4M system (1M memory, 1MIPS, 1M pixels on the screen, 1Mbps network, based on the first Ethernet at PARC).
SUN stood for "Stanford University Network"...
The same board was used in the original routers and terminal interface processors (TIPs) on the Stanford network, designed by Len Bosack of Cisco and XKL fame.
Khosla and Bechtolsheim, et al., didn't "build the computer they wanted to use",
they built the one they thought would make money when they took the design from
Stanford.
On 10/13/2025 8:20 PM, Lawrence D'Oliveiro wrote:
On Mon, 13 Oct 2025 19:26:56 -0400, Arne Vajhøj wrote:
Enterprises with a need to document support can not just hire a
random consultant when the need arrive.
If something is mission-critical and core to their entire business,
they want a staff they can rely on, completely, to manage that
properly.
Few/no CIOs want to support the hundreds of millions of lines
of open source code their business relies on themselves.
On 10/11/2025 7:50 AM, Dan Cross wrote:
In article <10cb3rt$1hmm$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 10/10/2025 6:14 AM, Dan Cross wrote:
Within Sun a lot of senior engineers realized by the mid-1990s
that SPARC was going to be a dead end. They just weren't going
to be able to compete against Intel, and the realization within
(at least) the Solaris kernel team was that if Sun didn't pivot
to x86, they'd be doomed. And those folks were largely correct.
But Sun just didn't want to give up that high margin business
and compete against the likes of Dell on volume.
Good decision. The vast majority of Solaris system revenue was
made after that. And questionable whether they could have made
the same revenue on x86 due to the competition.
Good in the short term, perhaps, but bad in the long term.
Seeing a good long term business for selling proprietary Unix
for x86-64 requires a very good imagination.
And one migration
Solaris/SPARC->Linux/x86-64 is cheaper than two migrations
Solaris/SPARC->Solaris/x86-64->Linux/x86-64.
OTOH, if someone is still stuck with Solaris for some reason,
they can still buy modern hardware from Dell, HPE, or Lenovo and
there's a good chance Solaris 11.4 will work on it.
Yes.
But it still does not make sense to do a migration that will
require another migration later compared to just do one
migration to something with a future.
One can't really make a categorical statement like that. It
depends too much on the application, and how much it leveraged
the Solaris environment. For instance, something that makes
heavy use of zones, SMF, ZFS, doors, the management stuff, etc,
might be much easier to move to Solaris x86 than Linux.
Did you read what you replied to??
For
that matter, it may be easier to move to illumos rather than
Linux.
Sure.
But moving to Illumos is not moving to a well supported platform
with a highly likely future.
On 10/13/2025 5:36 PM, Lawrence D'Oliveiro wrote:
On Mon, 13 Oct 2025 16:38:38 -0400, Arne Vajhøj wrote:
Support is easy. If you need support you pay.
The thing is, expertise in a non-proprietary product is not confined to
the company that makes that product. There is plenty of Open Source
expertise available in the community that you can hire. If you rely on an
outside company, particularly a large one, you know that inevitably their
interests align with their shareholders, and sooner or later will come
into conflict with yours (as happens with Microsoft, for example). If you
rely on your own employees, that can't happen.
Enterprises with a need to document support cannot just hire a random consultant when the need arises.
They need an ongoing contract with a company with an SLA and a reputation
that indicates they can deliver in case it is needed.
In article <10c6irh$er0$1@reader2.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
In article <memo.20251007223453.10624Y@jgd.cix.co.uk>,
John Dallman <jgd@cix.co.uk> wrote:
That is changing, reasonably quickly. ARM stopped releasing new
cores that could do A32 or T32 in 2023, having been phasing them
out since 2021.
I should have said "ARM stopped releasing new _A-profile_ cores that
could do A32 or T32 in 2023 ..."
I wonder if this suggests that they'll introduce a compressed
instruction set a la Thumb for 64 bit mode; -M profile seems to
top out at ARMv8.1; and according to the ARMv8-M ARM, only
supports T32.
ARM v8-M does not have 64-bit registers or instructions, or virtual
memory. It has an optional, simple, memory protection system.
The
additions at ARMv8.1M are not the same as the ones in ARM v8.1A.
Presumably at some point they'll introduce an ARMv9 core for
the embedded market and this will become an issue.
Or maybe they won't. We could be in a world of 32-bit embedded
cores in that space for a very long time indeed.
It depends what you're doing, really. Qualcomm cellphone-derived SoCs
with 64-bit Cortex-A cores are already widely used in robotics and
similar kinds of "embedded" uses. But there's no need at all for 64-bit
in tiny microcontrollers.
OS data that you found a few years ago claims that the vast majority of
companies use Linux distributions for which the support contract would
be with a third party.
Red Hat seem to be used by a relatively small percentage of companies
using Linux.
In article <memo.20251011151314.10624m@jgd.cix.co.uk>,
John Dallman <jgd@cix.co.uk> wrote:
Our stuff does gain significantly from 64-bit addressing; I could
believe fields that didn't need 64-bit gave up on Sun earlier.
I can see that. Personally, I really liked Tru64 nee DEC Unix
nee OSF/1 AXP on Alpha. OSF/1 felt like it was a much better
system overall if one had to go swimming in Unix waters,
while Solaris felt underbaked.
Of course, Solaris was still better than AIX, HP-UX, or even
Irix, but it was a real disappointment when none of the other
OSF members followed through on actually adopting OSF/1.
"Oppose Sun Forever!"
I never quite got the business play behind Java from Sun's
perspective. It seemed to explode in popularity overnight, but
they never quite figured out how to monetize it; I remember
hearing from some Sun folks that they wanted to set standards
and be at the center of the ecosystem, but were content to let
other players actually build the production infrastructure.
I thought Microsoft really ran circles around them with Java on
the client side, and on the server side, it made less sense. A
bytecode language makes some sense in a fractured and extremely
heterogenous client environment; less so in more controlled
server environments. I'll grant that the _language_ was better
than many of the alternatives, but the JRE felt like more of an
impediment for where Java ultimately landed.
But RHEL clones still exist. Rocky, Alma, Oracle, Amazon etc..
Redhat's changes may have reduced compatibility from 100%
to 99.95%, but my impression is that the industry in general
consider the compatibility acceptable.
Well, then I suppose they'll either split their product line or
introduce a 32-bit M profile for V9.
I do not entirely agree with that assessment re: 64-bit in
MCUs, however: a lot of work is going into cryptographically
signed secure boot stacks and hardware attestation for firmware;
64-bit registers can make implementing cryptography primitives
with large key sizes much easier.
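(A minimal illustrative sketch, not from the thread: in a bignum inner loop,
one 64x64 -> 128 widening multiply replaces four 32x32 -> 64 partial products.
Java has exposed the high half as Math.multiplyHigh since JDK 9; the class and
variable names below are made up.)

    import java.math.BigInteger;

    public class LimbDemo {
        // One 64x64 -> 128-bit partial product, the building block of bignum
        // multiplication. On a 64-bit core this is a single widening multiply;
        // a 32-bit core has to synthesize it from four 32x32 -> 64 multiplies
        // plus carry handling.
        static long[] mul64x64(long a, long b) {
            long lo = a * b;                    // low 64 bits of the product
            long hi = Math.multiplyHigh(a, b);  // high 64 bits (JDK 9+, signed,
                                                // so keep operands non-negative here)
            return new long[] { hi, lo };
        }

        public static void main(String[] args) {
            long a = 0x0123456789ABCDEFL;       // both positive as signed longs
            long b = 0x0FEDCBA987654321L;
            long[] p = mul64x64(a, b);
            // Cross-check against BigInteger.
            BigInteger expect = BigInteger.valueOf(a).multiply(BigInteger.valueOf(b));
            BigInteger got = BigInteger.valueOf(p[0]).shiftLeft(64)
                    .add(new BigInteger(Long.toUnsignedString(p[1])));
            System.out.println(expect.equals(got));   // prints: true
        }
    }

Halve the limb width and the number of partial products (and carries to
propagate) quadruples, which is the point about wide registers and big keys.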
The main uses for server-side Java, as I understand it, are:
It happened to have the right idioms for writing server front-ends
that could distribute requests to the backend efficiently.
In article <mddh5w4gba0.fsf@panix5.panix.com>,
Rich Alderson <news@alderson.users.panix.com> wrote:
cross@spitfire.i.gajendra.net (Dan Cross) writes:
[Sun's] initial success was because they built the computer that
they themselves wanted to use, and came up with a computer a
bunch of other people wanted to use, too. It was a joy to use a
Sun workstation at the time. But then they stopped doing that.
Remember that the original SUN-1 board was designed by Andy Bechtolsheim
from a specification given to him by Ralph Gorin, director of the Stanford
academic computing facility (LOTS), who envisioned a 4M system (1M memory,
1MIPS, 1M pixels on the screen, 1Mbps network, based on the first Ethernet
at PARC).
SUN stood for "Stanford University Network"...
The same board was used in the original routers and terminal interface
processors (TIPs) on the Stanford network, designed by Len Bosack of Cisco
and XKL fame.
Khosla and Bechtolsheim, et al., didn't "build the computer they wanted to
use", they built the one they thought would make money when they took the
design from Stanford.
Khosla was out within what, 4 or 5 years? And he wasn't an engineer.
The "building the computer they wanted to use" bit comes first-hand from engineers with single-digit Sun employee numbers. It wasn't just the hardware, but the software as well, of course.
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 10/13/2025 5:36 PM, Lawrence D'Oliveiro wrote:
On Mon, 13 Oct 2025 16:38:38 -0400, Arne Vajhøj wrote:
Support is easy. If you need support you pay.
The thing is, expertise in a non-proprietary product is not confined to
the company that makes that product. There is plenty of Open Source
expertise available in the community that you can hire. If you rely on an
outside company, particularly a large one, you know that inevitably their
interests align with their shareholders, and sooner or later will come
into conflict with yours (as happens with Microsoft, for example). If you
rely on your own employees, that can't happen.
Enterprises with a need to document support cannot just hire a random
consultant when the need arises.
They need an ongoing contract with a company with an SLA and a reputation
that indicates they can deliver if it is needed.
If a company needs the paper then they have to pay for it. They can still
choose a smaller company as the source of support.
OS data that you found a few years ago claims that the vast majority of
companies use Linux distributions for which the support contract would
be with a third party. Red Hat seems to be used by a relatively small
percentage of companies using Linux.
It depends a lot on how you are counting.
1) [RHEL] is the market that generates most of Linux revenue
On Mon, 13 Oct 2025 21:20:43 -0400, Arne Vajhøj wrote:
On 10/13/2025 8:20 PM, Lawrence D'Oliveiro wrote:
On Mon, 13 Oct 2025 19:26:56 -0400, Arne Vajhøj wrote:
Enterprises with a need to document support cannot just hire a
random consultant when the need arises.
If something is mission-critical and core to their entire business,
they want a staff they can rely on, completely, to manage that
properly.
Few, if any, CIOs want to support the hundreds of millions of lines
of open source code their business relies on themselves.
The whole point of having all that code is that they didn't need to write it themselves.
You have to take responsibility for your own business, don't you?
In article <10chjc5$1s2mr$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 10/11/2025 7:50 AM, Dan Cross wrote:
For
that matter, it may be easier to move to illumos rather than
Linux.
Sure.
But moving to Illumos is not moving to a well supported platform
with a highly likely future.
Again, it really depends on the customer. illumos is open
source; if a customer has deep enough pockets and really wants
to stick to that world, they can pay someone to maintain it or
do it themselves.
That's not appropriate for every organization, of course, but it
is not totally unreasonable for those that can and want to do
it.
In article <memo.20251011151314.10624m@jgd.cix.co.uk>,
John Dallman <jgd@cix.co.uk> wrote:
In article <10cdflq$5c0$2@reader2.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
I think also the focus shifted dramatically once Java came onto
the scene; Sun seemed to move away from its traditional computer
business in order to focus more fully on Java and its ecosystem.
They tried that on us, but were deeply unconvincing.
They were expecting us to be impressed that they'd done JNI wrappers of
about ten functions from our 500+ function API. We said "Presumably you
have tools to generate this stuff automatically?" and they didn't
understand what we were talking about.
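(For anyone who hasn't had to do it: each C entry point needs a Java-side
native declaration plus a matching C stub, so a 500+ function API is exactly
the kind of mechanical boilerplate you script a generator for. A hypothetical
hand-written binding for one function might look like the sketch below - the
names are invented, not Sun's or the poster's actual API.)

    // Hypothetical JNI binding for one function out of a 500+ function C API.
    public class GeomKernel {
        static {
            // Loads libgeomkernel.so / geomkernel.dll, which must contain the
            // matching Java_GeomKernel_curveLength C stub.
            System.loadLibrary("geomkernel");
        }

        // Maps to something like:  double gk_curve_length(long curve_handle);
        public static native double curveLength(long curveHandle);
    }

Multiply that, plus the C stub and the argument marshalling, by 500 and
writing it by hand ten functions at a time is clearly the wrong approach.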
I never quite got the business play behind Java from Sun's
perspective. It seemed to explode in popularity overnight, but
they never quite figured out how to monetize it; I remember
hearing from some Sun folks that they wanted to set standards
and be at the center of the ecosystem, but were content to let
other players actually build the production infrastructure.
I thought Microsoft really ran circles around them with Java on
the client side, and on the server side, it made less sense. A
bytecode language makes some sense in a fractured and extremely
heterogeneous client environment; less so in more controlled
server environments. I'll grant that the _language_ was better
than many of the alternatives, but the JRE felt like more of an
impediment for where Java ultimately landed.
In article <10ckadi$7dr$1@reader2.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
I thought Microsoft really ran circles around them with Java on
the client side, and on the server side, it made less sense. A
bytecode language makes some sense in a fractured and extremely
heterogeneous client environment; less so in more controlled
server environments. I'll grant that the _language_ was better
than many of the alternatives, but the JRE felt like more of an
impediment for where Java ultimately landed.
The main uses for server-side Java, as I understand it, are:
It happened to have the right idioms for writing server front-ends that
could distribute requests to the backend efficiently.
Being able to do
this the same way, within the parts of the JRE that are effectively an OS,
on all the different host platforms, was more efficient in developer time than writing a bunch of different implementations. Developer time is
really expensive.
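(Roughly the idiom in question, sketched with nothing but the standard
library - a hypothetical front-end, not anyone's production code: one
listener accepts connections, a pool fans the requests out to the backend,
and the same class file runs on any host with a JRE.)

    import java.io.*;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Hypothetical front-end: accept requests, fan them out to a worker pool
    // that calls the backend. Identical bytecode runs on any JRE host.
    public class FrontEnd {
        public static void main(String[] args) throws IOException {
            ExecutorService pool = Executors.newFixedThreadPool(64);
            try (ServerSocket listener = new ServerSocket(8080)) {
                while (true) {
                    Socket client = listener.accept();
                    pool.submit(() -> handle(client));
                }
            }
        }

        private static void handle(Socket client) {
            try (client;
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                String request = in.readLine();           // read one request line
                out.println(dispatchToBackend(request));  // reply with backend's answer
            } catch (IOException e) {
                // drop the connection on error
            }
        }

        // Stand-in for the real backend call (RPC, JDBC, etc.).
        private static String dispatchToBackend(String request) {
            return "OK: " + request;
        }
    }

Production systems of that era typically sat behind a servlet container
rather than a raw socket, but the shape is the same.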
On 10/13/2025 10:03 PM, Lawrence D'Oliveiro wrote:
On Mon, 13 Oct 2025 21:20:43 -0400, Arne Vajhøj wrote:
On 10/13/2025 8:20 PM, Lawrence D'Oliveiro wrote:
On Mon, 13 Oct 2025 19:26:56 -0400, Arne Vajhøj wrote:
Enterprises with a need to document support cannot just hire a
random consultant when the need arises.
If something is mission-critical and core to their entire business,
they want a staff they can rely on, completely, to manage that
properly.
Few, if any, CIOs want to support the hundreds of millions of lines of open
source code their business relies on themselves.
The whole point of having all that code is that they didn't need to
write it themselves.
Yes. But they want free beer more than free speech.
You have to take responsibility for your own business, don't you?
They don't want to write or maintain their own OS.
They don't want to write or maintain their own platform software
(web/app servers, database servers, message queue servers, cache servers etc.).
They don't want to write or maintain their own tools (compilers, build
tools, IDE's, source control, unit test frameworks etc.).
None of that stuff is their business.
They want to focus on their business: the applications that help them
produce and sell whatever products or services they offer.
On 10/13/2025 10:04 PM, Dan Cross wrote:
In article <10chjc5$1s2mr$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 10/11/2025 7:50 AM, Dan Cross wrote:
For
that matter, it may be easier to move to illumos rather than
Linux.
Sure.
But moving to Illumos is not moving to a well supported platform
with a highly likely future.
Again, it really depends on the customer. illumos is open
source; if a customer has deep enough pockets and really wants
to stick to that world, they can pay someone to maintain it or
do it themselves.
That's not appropriate for every organization, of course, but it
is not totally unreasonable for those that can and want to do
it.
It is not appropriate for most organizations.
Maintaining an OS (whether in-house or some consulting
company) is not what CIO's are looking for.
In article <10ckadi$7dr$1@reader2.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
In article <memo.20251011151314.10624m@jgd.cix.co.uk>,
John Dallman <jgd@cix.co.uk> wrote:
Our stuff does gain significantly from 64-bit addressing; I could
believe fields that didn't need 64-bit gave up on Sun earlier.
I can see that. Personally, I really liked Tru64 nee DEC Unix
nee OSF/1 AXP on Alpha. OSF/1 felt like it was a much better
system overall if one had to go swimming in Unix waters,
while Solaris felt underbaked.
I was happy with it, but a very experienced Unix chap of my acquaintance
reckoned "It doesn't run - it just lurches!", regarding it as a
Frankenstein job of parts stitched together.
Of course, Solaris was still better than AIX, HP-UX, or even
Irix, but it was a real disappointment when none of the other
OSF members followed through on actually adopting OSF/1.
"Oppose Sun Forever!"
Time was when we supported AIX, HP-UX, Irix, OSF1, and Solaris. We
probably supported them all simultaneously on 32-bit (except Tru64) and
64-bit for a while, along with HP-UX Itanium, although we got rid of that
faster than HP-UX PA-RISC.
I never quite got the business play behind Java from Sun's
perspective. It seemed to explode in popularity overnight, but
they never quite figured out how to monetize it; I remember
hearing from some Sun folks that they wanted to set standards
and be at the center of the ecosystem, but were content to let
other players actually build the production infrastructure.
The trick with monetising something like that is to price it so that
customers find it far cheaper to pay than to write their own. However,
you still need to be able to make money on it. I've seen this done with a
sliding royalty scale.
However, this kind of scheme definitely would have clashed with the
desire Sun had to make Java a standard piece of client software. It may
have been doomed to unprofitability by the enthusiasm of its creators.
I thought Microsoft really ran circles around them with Java on
the client side, and on the server side, it made less sense. A
bytecode language makes some sense in a fractured and extremely
heterogeneous client environment; less so in more controlled
server environments. I'll grant that the _language_ was better
than many of the alternatives, but the JRE felt like more of an
impediment for where Java ultimately landed.
The main uses for server-side Java, as I understand it, are:
It happened to have the right idioms for writing server front-ends that
could distribute requests to the backend efficiently. Being able to do
this the same way, within the parts of the JRE that are effectively an OS,
on all the different host platforms, was more efficient in developer time than writing a bunch of different implementations. Developer time is
really expensive.
The hardware resources it soaks up at runtime are beneficial for hardware
vendors, as they get to sell more hardware.
In article <mddh5w4gba0.fsf@panix5.panix.com>,
Rich Alderson <news@alderson.users.panix.com> wrote:
cross@spitfire.i.gajendra.net (Dan Cross) writes:
[Sun's] initial success was because they built the computer that
they themselves wanted to use, and came up with a computer a
bunch of other people wanted to use, too. It was a joy to use a
Sun workstation at the time. But then they stopped doing that.
Remember that the original SUN-1 board was designed by Andy Bechtolsheim
from a specification given to him by Ralph Gorin, director of the Stanford
academic computing facility (LOTS), who envisioned a 4M system (1M memory,
1MIPS, 1M pixels on the screen, 1Mbps network, based on the first Ethernet
at PARC).
SUN stood for "Stanford University Network"...
The same board was used in the original routers and terminal interface
processors (TIPs) on the Stanford network, designed by Len Bosack of Cisco
and XKL fame.
Khosla and Bechtolsheim, et al., didn't "build the computer they wanted to
use", they built the one they thought would make money when they took the
design from Stanford.
Khosla was out within what, 4 or 5 years? And he wasn't an engineer.
Indeed. He was Andy's friend from the Graduate School of Business, and
probably the one who said "this thing could make money!!!!" and started the
search that led to Scott McNealy. Andy would be the one to bring in Bill Joy.
The "building the computer they wanted to use" bit comes first-hand from
engineers with single-digit Sun employee numbers. It wasn't just the
hardware, but the software as well, of course.
That's probably what they were told, but that's not what the VCs were told.
On 10/13/2025 10:03 PM, Lawrence D'Oliveiro wrote:
On Mon, 13 Oct 2025 21:20:43 -0400, Arne Vajhøj wrote:
On 10/13/2025 8:20 PM, Lawrence D'Oliveiro wrote:
On Mon, 13 Oct 2025 19:26:56 -0400, Arne Vajhøj wrote:
Enterprises with a need to document support cannot just hire a
random consultant when the need arises.
If something is mission-critical and core to their entire business,
they want a staff they can rely on, completely, to manage that
properly.
Few, if any, CIOs want to support the hundreds of millions of lines
of open source code their business relies on themselves.
The whole point of having all that code is that they didn't need to write
it themselves.
Yes. But they want free beer more than free speech.
You have to take responsibility for your own business, don't you?
They don't want to write or maintain their own OS.
They don't want to write or maintain their own platform
software (web/app servers, database servers, message queue
servers, cache servers etc.).
They don't want to write or maintain their own tools
(compilers, build tools, IDE's, source control, unit
test frameworks etc.).
None of that stuff is their business.
They want to focus on their business: the applications
that help them produce and sell whatever products
or services they offer.
In article <10ckbq2$7dr$4@reader2.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
Well, then I suppose they'll either split their product line or
introduce a 32-bit M profile for V9.
They are effectively in the process of splitting the product line. The
current instruction sets with a future are T32 and A64. A32 is on the way
out.
I do not entirely agree with that assessment re: 64-bit in
MCUs, however: a lot of work is going into cryptographically
signed secure boot stacks and hardware attestation for firmware;
64-bit registers can make implementing cryptography primitives
with large key sizes much easier.
Fair point. The question would then be if it's worth creating a T64 or
just a simplified A64. It seems likely ARM is discussing that internally
or maybe even with some customers under NDA.
In article <10cmovf$3a740$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 10/13/2025 10:03 PM, Lawrence D'Oliveiro wrote:
On Mon, 13 Oct 2025 21:20:43 -0400, Arne Vajhøj wrote:
On 10/13/2025 8:20 PM, Lawrence D'Oliveiro wrote:
On Mon, 13 Oct 2025 19:26:56 -0400, Arne Vajhøj wrote:
Enterprises with a need to document support cannot just hire a
random consultant when the need arises.
If something is mission-critical and core to their entire business,
they want a staff they can rely on, completely, to manage that
properly.
Few, if any, CIOs want to support the hundreds of millions of lines
of open source code their business relies on themselves.
The whole point of having all that code is that they didn't need to write
it themselves.
Yes. But they want free beer more than free speech.
You have to take responsibility for your own business, don't you?
They don't want to write or maintain their own OS.
They don't want to write or maintain their own platform
software (web/app servers, database servers, message queue
servers, cache servers etc.).
They don't want to write or maintain their own tools
(compilers, build tools, IDE's, source control, unit
test frameworks etc.).
None of that stuff is their business.
They want to focus on their business: the applications
that help them produce and sell whatever products
or services they offer.
Every single one of the FAANG companies does all of those things.
At Google, we used to joke that, "not only does Google reinvent
the wheel, we vulcanize the rubber for the tires." Spanner,
Piper/Fig/Jujutsu, Prodkernel/ChromeOS/Android, CitC, gunit, Go
(not to mention the work on LLVM/Clang), Blaze/Bazel/Skylark,
etc, are all examples of the things you mentioned above. And
that's not even to mention all the custom hardware.
For organizations working at hyperscale, there comes a point
where the off-the-shelf solutions simply cannot scale to meet
the load you're putting on them.
At that point, you have no choice but to do it yourself.
You're kinda going in circles here by arguing that very big companies
whose business is to make their own technology need to make their own
technology. I believe Arne's point was the fairly obvious one that a retail
chain or a hospital chain does not need to and cannot afford to
maintain, for example, their own operating system.
On Wed, 15 Oct 2025 15:33:20 -0500, Craig A. Berry wrote:
I believe Arne's point was the fairly obvious one that a retail
chain or a hospital chain does not need to and cannot afford to
maintain, for example, their own operating system.
Do you think that is hard to do?
In article <10cmovf$3a740$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 10/13/2025 10:03 PM, Lawrence D'Oliveiro wrote:
On Mon, 13 Oct 2025 21:20:43 -0400, Arne Vajhøj wrote:
Few, if any, CIOs want to support the hundreds of millions of lines
of open source code their business relies on themselves.
The whole point of having all that code is that they didn't need to write
it themselves.
Yes. But they want free beer more than free speech.
You have to take responsibility for your own business, don't you?
They don't want to write or maintain their own OS.
They don't want to write or maintain their own platform
software (web/app servers, database servers, message queue
servers, cache servers etc.).
They don't want to write or maintain their own tools
(compilers, build tools, IDE's, source control, unit
test frameworks etc.).
None of that stuff is their business.
They want to focus on their business: the applications
that help them produce and sell whatever products
or services they offer.
Every single one of the FAANG companies does all of those things.
At Google, we used to joke that, "not only does Google reinvent
the wheel, we vulcanize the rubber for the tires." Spanner, Piper/Fig/Jujutsu, Prodkernel/ChromeOS/Android, CitC, gunit, Go
(not to mention the work on LLVM/Clang), Blaze/Bazel/Skylark,
etc, are all examples of the things you mentioned above. And
that's not even to mention all the custom hardware.
For organizations working at hyperscale, there comes a point
where the off-the-shelf solutions simply cannot scale to meet
the load you're putting on them.
At that point, you have no choice but to do it yourself.
Few companies are like Google.
In article <memo.20251014170713.10624x@jgd.cix.co.uk>,
John Dallman <jgd@cix.co.uk> wrote:
In article <10ckadi$7dr$1@reader2.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
In article <memo.20251011151314.10624m@jgd.cix.co.uk>,
John Dallman <jgd@cix.co.uk> wrote:
Our stuff does gain significantly from 64-bit addressing; I could
believe fields that didn't need 64-bit gave up on Sun earlier.
I can see that. Personally, I really liked Tru64 nee DEC Unix
nee OSF/1 AXP on Alpha. OSF/1 felt like it was a much better
system overall if one had to go swimming in Unix waters,
while Solaris felt underbaked.
I was happy with it, but a very experienced Unix chap of my acquaintance
reckoned "It doesn't run - it just lurches!" regarding it as a
Frankenstein job of parts stitched together.
Ha! I can sort of see why they'd say that. It definitely had
odd bits of Mach and System V seemingly bolted onto it. Overall
though I thought it was a good system.
To bring it back to VMS (and to sheepishly admit that a good bunch of
the recent drift is my own): we had an Alpha running OpenVMS AXP
1.2, or whatever one of the earlier versions was;
On 10/15/2025 8:16 AM, Dan Cross wrote:
In article <10cmovf$3a740$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 10/13/2025 10:03 PM, Lawrence D'Oliveiro wrote:
On Mon, 13 Oct 2025 21:20:43 -0400, Arne Vajhøj wrote:
Few, if any, CIOs want to support the hundreds of millions of lines
of open source code their business relies on themselves.
The whole point of having all that code is that they didn't need to write
it themselves.
Yes. But they want free beer more than free speech.
You have to take responsibility for your own business, don't you?
They don't want to write or maintain their own OS.
They don't want to write or maintain their own platform
software (web/app servers, database servers, message queue
servers, cache servers etc.).
They don't want to write or maintain their own tools
(compilers, build tools, IDE's, source control, unit
test frameworks etc.).
None of that stuff is their business.
They want to focus on their business: the applications
that help them produce and sell whatever products
or services they offer.
Every single one of the FAANG companies does all of those things.
At Google, we used to joke that, "not only does Google reinvent
the wheel, we vulcanize the rubber for the tires." Spanner,
Piper/Fig/Jujutsu, Prodkernel/ChromeOS/Android, CitC, gunit, Go
(not to mention the work on LLVM/Clang), Blaze/Bazel/Skylark,
etc, are all examples of the things you mentioned above. And
that's not even to mention all the custom hardware.
For organizations working at hyperscale, there comes a point
where the off-the-shelf solutions simply cannot scale to meet
the load you're putting on them.
At that point, you have no choice but to do it yourself.
Few companies are like Google.
For a few reasons:
[snip]
3) Google is not just a company using IT to produce
products/services - Google is also a company doing
IT for others.
Google Search is an IT user where it is not a given
that they want their own distro.
But Android and ChromeOS are Google delivering an
OS to others. The OS is their business in that case.
And one facet of GCP is that Google is taking
over OS support from Redhat/Canonical/SUSE when
companies move their workloads from on-prem to
GCP managed services. Linux support is their
business.
My napkin calculation / RNG says you will need more than a million
Linux instances for the math to work. Google has that. Most
companies do not.
In article <memo.20251014170713.10624x@jgd.cix.co.uk>,
John Dallman <jgd@cix.co.uk> wrote:
In article <10ckadi$7dr$1@reader2.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
I never quite got the business play behind Java from Sun's
perspective. It seemed to explode in popularity overnight, but
they never quite figured out how to monetize it; I remember
hearing from some Sun folks that they wanted to set standards
and be at the center of the ecosystem, but were content to let
other players actually build the production infrastructure.
The trick with monetising something like that is to price it so that
customers find it far cheaper to pay than to write their own. However,
you still need to be able to make money on it. I've seen this done with a
sliding royalty scale.
However, this kind of scheme definitely would have clashed with the
desire Sun had to make Java a standard piece of client software. It may
have been doomed to unprofitability by the enthusiasm of its creators.
I think that's a really insightful way to put it.
My sense was that they overplayed their hand, and did so
prematurely relative to the actual value they were holding onto.
I mentioned Microsoft and Java on the client side: I believe
that they were largely responsible for the failure of Java desktop
applications (and the supporting ecosystem) to take root. As I
recall, at the time, MSFT tried to license Java from Sun: Sun
said no, and I'm quite sure that McNealy was positively giddy
about it as well. However, I think in doing so, Sun gravely
underestimated Gates-era MSFT, because then Microsoft very
publicly said, "we're going to wait and see whether the industry
adopts Java on the desktop." But, since Microsoft was the
biggest player in that space, the rest of the industry waited to
see what Microsoft would do and whether they would support it on
Windows: the result was that no one adopted it, and so Java
never saw widespread client-side adoption.
Oh sure, it had some
adoption in mobile phone type applications, but until Android
(which tried to skirt the licensing issues with Dalvik) that
was pretty limited.
Anyway, while Microsoft stalled, they did
C# in the background, and when it was ready, they no longer had
any real need for Java on the client side.
The framing that the web rendered Java on desktops obsolete is
incomplete. Certainly, that was true for _many_ applications,
as the web rendered much of the client-side ecosystem obsolete,
but consider things in Microsoft's portfolio like Word, Excel,
PowerPoint, and so on. Those remained solidly desktop focused
until 365;
one never saw credible competitors to that in Java,
which was something Sun very much wanted (recall McNealy's
writing at this time about a "new" style of development based
around open source and Java).
Similarly, investment in C# shows
that they weren't quite ready to move everything to the web;
On 10/15/2025 7:01 PM, Lawrence D'Oliveiro wrote:
On Wed, 15 Oct 2025 15:33:20 -0500, Craig A. Berry wrote:
I believe Arne's point was the fairly obvious one that a retail chain
or a hospital chain does not need to and cannot afford to maintain,
for example, their own operating system.
Do you think that is hard to do?
Hire enough experts to have people that know the code base of every
critical part: Linux kernel, glibc, etc. - probably 50-100 million lines of
code. Bloody expensive. We are talking hundreds of engineers - and not just
any engineers but top engineers.
In article <10cpc9g$191j$2@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
And one facet of GCP is that Google is taking
over OS support from Redhat/Canonical/SUSE when
companies move their workloads from on-prem to
GCP managed services. Linux support is their
business.
Do you mean ContainerOS? That's just a distro.
On 10/15/25 7:16 AM, Dan Cross wrote:
In article <10cmovf$3a740$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 10/13/2025 10:03 PM, Lawrence D'Oliveiro wrote:
On Mon, 13 Oct 2025 21:20:43 -0400, Arne Vajhøj wrote:
On 10/13/2025 8:20 PM, Lawrence D'Oliveiro wrote:
On Mon, 13 Oct 2025 19:26:56 -0400, Arne Vajhøj wrote:
Enterprises with a need to document support cannot just hire a
random consultant when the need arises.
If something is mission-critical and core to their entire business,
they want a staff they can rely on, completely, to manage that
properly.
Few, if any, CIOs want to support the hundreds of millions of lines
of open source code their business relies on themselves.
The whole point of having all that code is that they didn't need to write
it themselves.
Yes. But they want free beer more than free speech.
You have to take responsibility for your own business, don't you?
They don't want to write or maintain their own OS.
They don't want to write or maintain their own platform
software (web/app servers, database servers, message queue
servers, cache servers etc.).
They don't want to write or maintain their own tools
(compilers, build tools, IDE's, source control, unit
test frameworks etc.).
None of that stuff is their business.
They want to focus on their business: the applications
that help them produce and sell whatever products
or services they offer.
Every single one of the FAANG companies does all of those things.
In other words, hardly anyone.
At Google, we used to joke that, "not only does Google reinvent
the wheel, we vulcanize the rubber for the tires." Spanner,
Piper/Fig/Jujutsu, Prodkernel/ChromeOS/Android, CitC, gunit, Go
(not to mention the work on LLVM/Clang), Blaze/Bazel/Skylark,
etc, are all examples of the things you mentioned above. And
that's not even to mention all the custom hardware.
For organizations working at hyperscale, there comes a point
where the off-the-shelf solutions simply cannot scale to meet
the load you're putting on them.
At that point, you have no choice but to do it yourself.
You're kinda going in circles here by arguing that very big companies
whose business is to make their own technology need to make their own
technology.
I believe Arne's point was the fairly obvious one that a
retail chain or a hospital chain does not need to and cannot afford to
maintain, for example, their own operating system.
On 10/15/2025 8:26 PM, Dan Cross wrote:
In article <10cpc9g$191j$2@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
And one facet of GCP is that Google is taking
over OS support from Redhat/Canonical/SUSE when
companies move their workloads from on-prem to
GCP managed services. Linux support is their
business.
Do you mean ContainerOS? That's just a distro.
I am talking about how, like 10 years ago, a company
would run:
their application + their database server
RHEL [paying Redhat for Linux support]
ESXi
on-prem HW
but now they may run (assuming they are a Google customer):
their application in GKE + database as GCP managed service
whatever Linux Google wants to use [paying Google for Linux support as
part of what they pay for the cloud services]
Linux with KVM
Google HW
Amazon, Microsoft and Google are taking revenue away
from Redhat (IBM). They have de facto gotten into
the Linux support business.
On 10/15/2025 7:58 AM, Dan Cross wrote:
[snip]
Oh sure, it had some
adoption in mobile phone type applications, but until Android
(which tried to skirt the licensing issues with Dalvik) that
was pretty limited.
Almost all of the 3 million apps available for the 3 billion
Android phones are written in Java or Kotlin. Not particularly limited.
Anyway, while Microsoft stalled, they did
C# in the background, and when it was ready, they no longer had
any real need for Java on the client side.
MS started .NET and C# after they were forced to drop their
Java.
Anders Hejlsberg was actually headhunted from Borland to
do MS Java. And when that was no longer a thing he moved
on to creating .NET and C#.
The framing that the web rendered Java on desktops obsolete is
incomplete. Certainly, that was true for _many_ applications,
as the web rendered much of the client-side ecosystem obsolete,
but consider things in Microsoft's portfolio like Word, Excel,
PowerPoint, and so on. Those remained solidly desktop focused
until 365;
What moved to web in the early 00's were all the internal
business app frontends. The stuff that used to be done on
VB6, Delphi, Jyacc, etc.
Mostly trivial stuff but millions of applications requiring
millions of developers.
MS Office and other MSVC++ MFC apps may have been difficult to
port to the web at the time, but it would also have been difficult
to come up with a business case for it - that first showed up
when MS had a cloud and could charge customers per user per month
for it.
one never saw credible competitors to that in Java,
which was something Sun very much wanted (recall McNealy's
writing at this time about a "new" style of development based
around open source and Java).
OpenOffice, owned by Sun at the time, actually did implement
some stuff in Java.
But neither OpenOffice as an office package nor Java as a language
for desktop apps ever took off.
Similarly, investment in C# shows
that they weren't quite ready to move everything to the web;
????
One of the main areas for C# is web applications (ASP.NET), and
was so from day 1.
(not everybody may like ASP.NET web forms, but that is
another discussion)
In article <10cpeu9$26ht$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 10/15/2025 8:26 PM, Dan Cross wrote:
In article <10cpc9g$191j$2@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
And one facet of GCP is that Google is taking
over OS support from Redhat/Canonical/SUSE when
companies move their workloads from on-prem to
GCP managed services. Linux support is their
business.
Do you mean ContainerOS? That's just a distro.
I am talking about how, like 10 years ago, a company
would run:
their application + their database server
RHEL [paying Redhat for Linux support]
ESXi
on-prem HW
but now they may run (assuming they are a Google customer):
their application in GKE + database as GCP managed service
whatever Linux Google wants to use [paying Google for Linux support as
part of what they pay for the cloud services]
Linux with KVM
Google HW
Not quite how the stack is structured.
Amazon, Microsoft and Google are taking revenue away
from Redhat (IBM). They have de facto gotten into
the Linux support business.
Not really. They're taking revenue away from Broadcom/VMWare,
perhaps, and probably from Dell, HPE, and Lenovo. But if you
want to run RHEL on a VM on Google's cloud, they won't stop you.
https://cloud.google.com/compute/docs/images/os-details
In article <10cpebq$26b5$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 10/15/2025 7:58 AM, Dan Cross wrote:
[snip]
Oh sure, it had some
adoption in mobile phone type applications, but until Android
(which tried to skirt the licensing issues with Dalvik) that
was pretty limited.
Almost all of the 3 million apps available for the 3 billion
Android phones are written in Java or Kotlin. Not particularly limited.
...but not running on the JVM or using the JRE.
On 10/15/2025 9:01 PM, Dan Cross wrote:
In article <10cpebq$26b5$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 10/15/2025 7:58 AM, Dan Cross wrote:
[snip]
Oh sure, it had some
adoption in mobile phone type applications, but until Android
(which tried to skirt the licensing issues with Dalvik) that
was pretty limited.
Almost all of the 3 million apps available for the 3 billion
Android phones are written in Java or Kotlin. Not particularly limited.
...but not running on the JVM or using the JRE.
True.
But the difference is not that big.
[snip]
On 10/15/2025 8:51 PM, Dan Cross wrote:
In article <10cpeu9$26ht$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 10/15/2025 8:26 PM, Dan Cross wrote:
In article <10cpc9g$191j$2@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
And one facet of GCP is that Google is taking
over OS support from Redhat/Canonical/SUSE when
companies move their workloads from on-prem to
GCP managed services. Linux support is their
business.
Do you mean ContainerOS? That's just a distro.
I am talking about how, like 10 years ago, a company
would run:
their application + their database server
RHEL [paying Redhat for Linux support]
ESXi
on-prem HW
but now they may run (assuming they are a Google customer):
their application in GKE + database as GCP managed service
whatever Linux Google wants to use [paying Google for Linux support as
part of what they pay for the cloud services]
Linux with KVM
Google HW
Not quite how the stack is structured.
Amazon, Microsoft and Google are taking revenue away
from Redhat (IBM). They have de facto gotten into
the Linux support business.
Not really. They're taking revenue away from Broadcom/VMWare,
perhaps, and probably from Dell, HPE, and Lenovo. But if you
want to run RHEL on a VM on Google's cloud, they won't stop you.
https://cloud.google.com/compute/docs/images/os-details
If someone has a strong desire to do cloud like they did 10
years ago, then buying GCE instances, installing RHEL,
installing OpenShift, installing the database, installing the
application and managing everything themselves is certainly still an option.
But I was very explicit above talking about managed services.
Managed Kubernetes and managed database. GKE, not GCE.
Again I wonder if you read what you are replying to.