• VMS previous DEC/CPQ/HP[E] decisions and paths

    From Simon Clubley@clubley@remove_me.eisner.decus.org-Earth.UFP to comp.os.vms on Tue Sep 16 17:46:26 2025
    From Newsgroup: comp.os.vms

    On 2025-09-12, bill <bill.gunshannon@gmail.com> wrote:

    I, too, like VMS (contrary to what a lot of people here think :-) and
    I personally know of a couple of niche markets VMS used to be strong
    in (maybe not dominate, but held a good position). I never really
    understood why they lost those markets and I would love to see them
    back. But the more I read and see the more it seems to me that there
    is no desire to actually grow the VMS market and the majority (including those who actually control it) are perfectly happy to just let things
    slide slowly down a black hole from which nothing ever returns.


    Especially given that z/OS is actually several years older than VMS
    and is still going very strongly indeed.

    Could VMS still have been as strong to this day if different decisions
    and paths in the past had been taken ?

    Simon.
    --
    Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
    Walking destinations on a map are further away than they appear.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Tue Sep 16 16:40:15 2025
    From Newsgroup: comp.os.vms

    On 9/16/2025 1:46 PM, Simon Clubley wrote:
    On 2025-09-12, bill <bill.gunshannon@gmail.com> wrote:
    I, too, like VMS (contrary to what a lot of people here think :-) and
    I personally know of a couple of niche markets VMS used to be strong
    in (maybe not dominate, but held a good position). I never really
    understood why they lost those markets and I would love to see them
    back. But the more I read and see the more it seems to me that there
    is no desire to actually grow the VMS market and the majority (including
    those who actually control it) are perfectly happy to just let things
    slide slowly down a black hole from which nothing ever returns.

    Especially given that z/OS is actually several years older than VMS
    and is still going very strongly indeed.

    Could VMS still have been as strong to this day if different decisions
    and paths in the past had been taken ?

    Of course.

    But I don't think there is much point.

    Many/most declining companies/products could have avoided
    decline if, at the time the decisions had to be made, they
    had known what everybody knows 10/20/30/40 years later.

    I could have won the lottery if I had known which
    numbers would be drawn.

    :-)

    Arne







    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Wade@g4ugm@dave.invalid to comp.os.vms on Tue Sep 16 21:47:08 2025
    From Newsgroup: comp.os.vms

    On 16/09/2025 18:46, Simon Clubley wrote:
    On 2025-09-12, bill <bill.gunshannon@gmail.com> wrote:

    I, too, like VMS (contrary to what a lot of people here think :-) and
    I personally know of a couple of niche markets VMS used to be strong
    in (maybe not dominate, but held a good position). I never really
    understood why they lost those markets and I would love to see them
    back. But the more I read and see the more it seems to me that there
    is no desire to actually grow the VMS market and the majority (including
    those who actually control it) are perfectly happy to just let things
    slide slowly down a black hole from which nothing ever returns.


    Especially given that z/OS is actually several years older than VMS
    and is still going very strongly indeed.


    I don't believe it's as strong as you believe. Perhaps the Z platform is,
    but z/OS is pretty much limited to traditional big banks and airline
    reservation systems. These systems are all much larger than most VMS
    systems so migration away is harder and riskier. The hundreds of SMEs
    that once had a small IBM/370 like the 43xx or 9370 have gone. It sold
    its X86 server, laptop and desktop business to Lenovo. I think IBM may be
    regretting this. These SME customers would have been the type for whom
    the cloud made sense, but they have all gone X86 and its cloud business
    is not the success it hoped for.

    They require compliance with the Payment Card Industry Data Security
    Standard (PCI DSS). This requires supported software, so IBM uses this
    to drive the hardware/software cycle. Typically each generation of
    hardware only supports two releases of software, and only the current +
    previous release is supported.

    Just as there were two prices for Alpha boxes, there are two prices for
    Z. One high one if you run z/OS, one lower one if you run zLinux. Z boxes
    are big, but you pay for what you use. So if you have z/OS you probably
    have some spare CPUs you can turn on for minimal cost...

    Another notable feature of Z hardware is the virtualisation technology
    inherent in the "hardware". So it all comes with multiple Logical
    PARtitions or LPARs, which despite their name are more like physical
    partitioning of the hardware, and zVM, which uses the "Start Interpretive
    Execution" (SIE) instruction to create Virtual Machines.

    DEC never had anything like this.


    Could VMS still have been as strong to this day if different decisions
    and paths in the past had been taken ?


    I don't think so. Whilst I feel it would have been wonderful to have had
    a VLC on my desk in the 1990s, the pricing precluded that. Perhaps if the
    VLC had arrived at the same time, and for the same price, as the PS/2,
    and you had kept binary compatibility with VAX rather than going Alpha
    and then Itanium...

    ... let's face it, the competition such as pr1mos, hp-ux, Solaris, GCOS6
    are all in similar states of decline...


    Simon.


    Dave
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Tue Sep 16 17:00:43 2025
    From Newsgroup: comp.os.vms

    On 9/16/2025 4:47 PM, David Wade wrote:
    On 16/09/2025 18:46, Simon Clubley wrote:
    Especially given that z/OS is actually several years older than VMS
    and is still going very strongly indeed.

    I don't believe its as strong as you believe. Perhaps the Z platform is,
    but z/OS is pretty much limited to traditional big banks and airline reservation systems. These systems are all much larger that most VMS
    systems so migration away is harder and riskier.

    I am not sure that the attrition rate for z/OS is less than for VMS.

    But they started at a way higher point, so they are still at a higher
    point.

    Another notable feature of Z hardware is the virtualisation technology inherent in the "hardware". So it all comes with multiple Logical
    PARtitions or LPARs which despite their name are more like physical partitioning of the hardware, and zVM which uses the "Start Interpretive Execution" (SIE) instruction to create Virtual Machines.

    DEC never had anything like this.

    I always considered Alpha Galaxy to be somewhat similar to LPAR.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Wade@g4ugm@dave.invalid to comp.os.vms on Tue Sep 16 22:38:41 2025
    From Newsgroup: comp.os.vms

    On 16/09/2025 22:00, Arne Vajhøj wrote:
    On 9/16/2025 4:47 PM, David Wade wrote:
    On 16/09/2025 18:46, Simon Clubley wrote:
    Especially given that z/OS is actually several years older than VMS
    and is still going very strongly indeed.

    I don't believe its as strong as you believe. Perhaps the Z platform
    is, but z/OS is pretty much limited to traditional big banks and
    airline reservation systems. These systems are all much larger that
    most VMS systems so migration away is harder and riskier.

    I am not sure that the attrition rate for z/OS is less than for VMS.

    But they started at a way higher point, so they are still at a higher
    point.

    Another notable feature of Z hardware is the virtualisation technology
    inherent in the "hardware". So it all comes with multiple Logical
    PARtitions or LPARs which despite their name are more like physical
    partitioning of the hardware, and zVM which uses the "Start
    Interpretive Execution" (SIE) instruction to create Virtual Machines.

    DEC never had anything like this.

    I always considered Alpha Galaxy to be somewhat similar to LPAR.


    Isn't that the layer that translates VAX instructions? LPARs allow
    multiple operating systems to be run. Could you ever run VMS and ULTRIX
    at the same time on the same Alpha box?

    Arne


    Dave

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Tue Sep 16 21:49:31 2025
    From Newsgroup: comp.os.vms

    On Tue, 16 Sep 2025 21:47:08 +0100, David Wade wrote:

    Especially given that z/OS is actually several years older than VMS
    and is still going very strongly indeed.

    I don't believe its as strong as you believe. Perhaps the Z platform
    is, but z/OS is pretty much limited to traditional big banks and
    airline reservation systems. These systems are all much larger that
    most VMS systems so migration away is harder and riskier. The
    hundreds of SMEs that once had a small IBM/370 like the 43xx or 9370
    have gone.

    IBM as a whole has been losing money for years, and laying off staff
    left and right. That's not exactly the sign of a platform "going
    strongly", is it. The only recent bright spot in the company, that I
    know of, is its Red Hat acquisition.

    Another notable feature of Z hardware is the virtualisation
    technology inherent in the "hardware". So it all comes with multiple
    Logical PARtitions or LPARs which despite their name are more like
    physical partitioning of the hardware, and zVM which uses the "Start Interpretive Execution" (SIE) instruction to create Virtual
    Machines.

    Does that sound like there are a limited number of slots for
    instantiating virtual machines? Modern virtualization architectures
    aren't limited like that.

    ... let's face it, the competition such as pr1mos, hp-ux, Solaris, GCOS6
    are all in similar states of decline...

    Are new installations of any of those still being sold? Somehow I don't
    think so ...
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Tue Sep 16 19:05:04 2025
    From Newsgroup: comp.os.vms

    On 9/16/2025 5:38 PM, David Wade wrote:
    On 16/09/2025 22:00, Arne Vajhøj wrote:
    On 9/16/2025 4:47 PM, David Wade wrote:
    On 16/09/2025 18:46, Simon Clubley wrote:
    Especially given that z/OS is actually several years older than VMS
    and is still going very strongly indeed.

    I don't believe its as strong as you believe. Perhaps the Z platform
    is, but z/OS is pretty much limited to traditional big banks and
    airline reservation systems. These systems are all much larger that
    most VMS systems so migration away is harder and riskier.

    I am not sure that the attrition rate for z/OS is less than for VMS.

    But they started at a way higher point, so they are still at a higher
    point.

    Another notable feature of Z hardware is the virtualisation
    technology inherent in the "hardware". So it all comes with multiple
    Logical PARtitions or LPARs which despite their name are more like
    physical partitioning of the hardware, and zVM which uses the "Start
    Interpretive Execution" (SIE) instruction to create Virtual Machines.

    DEC never had anything like this.

    I always considered Alpha Galaxy to be somewhat similar to LPAR.

    Isn't that the layer that translates Vax instructions?

    No - that was VEST.

    LPARs allow multiple operating systems to be run. Could you ever run VMS and ULTRIX
    at the same time on the same Alpha box.

    Galaxy allowed you to run multiple instances of VMS
    on the same Alpha (GS and ES only).

    But I believe there was both hard and soft partitioning
    and soft partitioning was VMS only, but hard partitioning
    supported both VMS and Tru64 (Ultrix was VAX and MIPS only).

    Disclaimer: long time ago and I may misremember some of it.

    Arne




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Tue Sep 16 19:18:18 2025
    From Newsgroup: comp.os.vms

    On 9/16/2025 5:49 PM, Lawrence D'Oliveiro wrote:
    On Tue, 16 Sep 2025 21:47:08 +0100, David Wade wrote:
    Especially given that z/OS is actually several years older than VMS
    and is still going very strongly indeed.

    I don't believe its as strong as you believe. Perhaps the Z platform
    is, but z/OS is pretty much limited to traditional big banks and
    airline reservation systems. These systems are all much larger that
    most VMS systems so migration away is harder and riskier. The
    hundreds of SMEs that once had a small IBM/370 like the 43xx or 9370
    have gone.

    IBM as a whole has been losing money for years,

    No.

    IBM has made a profit every year for many many years.

    It is possible they make their money on consulting and the z, i and p
    businesses are losing money, but the overall result is in the black.

    and laying off staff
    left and right. That's not exactly the sign of a platform "going
    strongly", is it.

    It has become quite common for companies to make layoffs
    even though they are profitable.

    The only recent bright spot in the company, that I
    know of, is its Red Hat acquisition.

    RedHat was making a ton of money for many years. But they have problems
    today.

    RHEL was *the* Linux distro for enterprise on-prem. But the enterprises
    are moving to cloud and Amazon/Microsoft/Google/Oracle do not want to
    pay RedHat (they make their own RHEL clones).

    And JBoss EAP has been mostly replaced by SpringBoot, Quarkus etc..

    Another notable feature of Z hardware is the virtualisation
    technology inherent in the "hardware". So it all comes with multiple
    Logical PARtitions or LPARs which despite their name are more like
    physical partitioning of the hardware, and zVM which uses the "Start
    Interpretive Execution" (SIE) instruction to create Virtual
    Machines.

    Does that sound like there are a limited number of slots for
    instantiating virtual machines? Modern virtualization architectures
    aren't limited like that.

    Hardware partitioning is different from virtualization.

    But yes less flexible.

    Arne



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Wade@g4ugm@dave.invalid to comp.os.vms on Wed Sep 17 00:25:32 2025
    From Newsgroup: comp.os.vms

    On 16/09/2025 22:49, Lawrence D'Oliveiro wrote:
    On Tue, 16 Sep 2025 21:47:08 +0100, David Wade wrote:

    Especially given that z/OS is actually several years older than VMS
    and is still going very strongly indeed.

    I don't believe its as strong as you believe. Perhaps the Z platform
    is, but z/OS is pretty much limited to traditional big banks and
    airline reservation systems. These systems are all much larger that
    most VMS systems so migration away is harder and riskier. The
    hundreds of SMEs that once had a small IBM/370 like the 43xx or 9370
    have gone.

    IBM as a whole has been losing money for years, and laying off staff
    left and right. That's not exactly the sign of a platform "going
    strongly", is it. The only recent bright spot in the company, that I
    know of, is its Red Hat acquisition.


    Why hasn't it gone bust? It's paying a dividend.


    Another notable feature of Z hardware is the virtualisation
    technology inherent in the "hardware". So it all comes with multiple
    Logical PARtitions or LPARs which despite their name are more like
    physical partitioning of the hardware, and zVM which uses the "Start
    Interpretive Execution" (SIE) instruction to create Virtual
    Machines.

    Does that sound like there are a limited number of slots for
    instantiating virtual machines? Modern virtualization architectures
    aren't limited like that.


    Interesting point. So LPARs are physical partitioning. I guess almost a
    type-0 hypervisor. You can't over-commit. However it's part of the
    hardware so basically "free". Given you get a minimum of 68 cores in any
    current Z box it isn't usually a problem. If you need to over-commit
    then you can buy zVM, a type-1 hypervisor, which is really a re-badged
    VM/XA from the 1970s.

    It's interesting you say "modern virtualisation" because most of the
    various "tweaks and tricks" modern X64 virtualisations use were
    developed by IBM in the 1970s and 80s for VM/XA & VM/ESA. Intel and AMD
    x86 CPUs didn't get these until 2005/6. zVM is really slick... but
    expensive.


    ... let's face it, the competition such as pr1mos, hp-ux, Solaris, GCOS6
    are all in similar states of decline...

    Are new installations of any of those still being sold? Somehow I don't
    think so ...

    Are new installations of VMS still being sold? So you can buy Solaris
    and I think HP-UX; not sure about GCOS6, but from a conversation I had at
    the weekend about a DPS6 at a local computer museum, I understand it's
    still in use in nuclear power stations, apparently because Digital no
    longer wanted that business.

    Dave
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Wed Sep 17 01:01:00 2025
    From Newsgroup: comp.os.vms

    In article <10acicc$2jbit$1@dont-email.me>,
    David Wade <g4ugm@dave.invalid> wrote:
    On 16/09/2025 18:46, Simon Clubley wrote:
    On 2025-09-12, bill <bill.gunshannon@gmail.com> wrote:

    I, too, like VMS (contrary to what a lot of people here think :-) and
    I personally know of a couple of niche markets VMS used to be strong
    in (maybe not dominate, but held a good position). I never really
    understood why they lost those markets and I would love to see them
    back. But the more I read and see the more it seems to me that there
    is no desire to actually grow the VMS market and the majority (including
    those who actually control it) are perfectly happy to just let things
    slide slowly down a black hole from which nothing ever returns.


    Especially given that z/OS is actually several years older than VMS
    and is still going very strongly indeed.


    I don't believe it's as strong as you believe. Perhaps the Z platform is,
    but z/OS is pretty much limited to traditional big banks and airline
    reservation systems. These systems are all much larger than most VMS
    systems so migration away is harder and riskier. The hundreds of SMEs
    that once had a small IBM/370 like the 43xx or 9370 have gone. It sold
    its X86 server, laptop and desktop business to Lenovo. I think IBM may be
    regretting this. These SME customers would have been the type for whom
    the cloud made sense, but they have all gone X86 and its cloud business
    is not the success it hoped for.

    They require compliance with the Payment Card Industry Data Security
    Standard (PCI DSS). This requires supported software, so IBM uses this
    to drive the hardware/software cycle. Typically each generation of
    hardware only supports two releases of software, and only the current +
    previous release is supported.

    Just as there were two prices for Alpha boxes there are two prices for
    Z. One high one if you run zOS, one lower one if you run zLinux. z boxes
    are big, but you pay for what you use. So if you have zOS you probably
    have some spare CPUs you can turn on for minimal cost...

    Another notable feature of Z hardware is the virtualisation technology
    inherent in the "hardware". So it all comes with multiple Logical
    PARtitions or LPARs, which despite their name are more like physical
    partitioning of the hardware, and zVM, which uses the "Start Interpretive
    Execution" (SIE) instruction to create Virtual Machines.

    DEC never had anything like this.

    There _were_ hypervisors on e.g. VAX hardware. For instance, the
    "VAX Security Kernel" VMM, which could run multiple guest OSes
    on a single physical VAX (including both VMS and Ultrix/32).
    https://www.cs.cmu.edu/~15811/papers/vax_vmm.pdf

    Not exactly LPARs, but certainly a VMM a la z/VM.

    Could VMS still have been as strong to this day if different decisions
    and paths in the past had been taken ?

    I don't think so. Whilst I feel it would have been wonderful to have had
    a VLC on my desk in 1990s the pricing precluded that. Perhaps if the
    price of the VLC had arrived at the same time, and for the same price as
    the PS/2 and you had kept binary compatibility with VAX rather than
    going Alpha and then Itanium..

    ... let's face it, the competition such as pr1mos, hp-ux, Solaris, GCOS6
    are all in similar states of decline...

    I've heard the story about when Bell Telephone broke up, freeing
    AT&T to compete in the computer business. They intended to push
    Unix ("we own it!") and brought in a bunch of vendors to discuss
    terms; Gates was there representing Microsoft, which at the time
    had a major line of Unix business selling Xenix. Apparently he
    lost it at some point, and started pounding the table with his
    fist: "You guys don't get it!" he shouted, "it's all about
    volume!" A real "Nikita Khrushchev banging his shoe on the UN
    podium" moment, to be sure, but in hindsight he was, of course,
    correct.

    A reasonably configured VLC was, what, about USD $4k in the
    mid-/late-1990s? I don't think DEC was ever comfortable with high-volume/low-margin business in the way the PC vendors were.
    Really, that was true of all of the minicomputer/workstation
    vendors.

    I mean, Pr1mos is basically gone. There's an emulator, but I
    don't think (new) hardware has been sold for decades, since
    Pr1me went under. Solaris and HP-UX are on their last legs.
    Is GCOS6 even still available, or is it just legacy support?

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Wed Sep 17 06:08:38 2025
    From Newsgroup: comp.os.vms

    On Wed, 17 Sep 2025 00:25:32 +0100, David Wade wrote:

    Its interesting you say "modern virtualisation" because most of the
    various "tweaks and tricks" modern X64 virtualisations use were
    developed by IBM in the 1970s an 80s for VM/XA & VM/ESA. X86 and AMD
    CPUs didn't get these until 2005/6. zVM is really slick... but
    expensive.

    IBM invented virtualization, in the beginning to run multiple instances
    of CMS. This was their attempt to compete with interactive timesharing
    systems from DEC and other vendors. Trouble is, unlike those others, which
    had multiuser support built-in, CMS was single-user only. So as a quick
    hack, the "CP" (later "VM") hypervisor was introduced. Each user
    effectively had their own (virtual) machine. Sounds like a neat idea,
    until you realize that communication and sharing of info between machines
    (i.e. between different users) wouldn't have been so easy.

    Did IBM ever address that problem of communication between machines?

    <https://www.libvirt.org/manpages/virsh.html>

    Are new installations of VMS still being sold?

    Probably not.

    So you can buy Solaris and I think HP-UX ...

    And of course macOS, but that's not in the server/enterprise league. But
    it is still likely, far and away, the most popular OS that can legally
    call itself "Unix".

    (Not that many people care about that any more.)
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Wade@g4ugm@dave.invalid to comp.os.vms on Wed Sep 17 11:06:26 2025
    From Newsgroup: comp.os.vms

    On 17/09/2025 07:08, Lawrence D'Oliveiro wrote:
    On Wed, 17 Sep 2025 00:25:32 +0100, David Wade wrote:

    Its interesting you say "modern virtualisation" because most of the
    various "tweaks and tricks" modern X64 virtualisations use were
    developed by IBM in the 1970s an 80s for VM/XA & VM/ESA. X86 and AMD
    CPUs didn't get these until 2005/6. zVM is really slick... but
    expensive.

    IBM invented virtualization, in the beginning to run multiple instances of CMS. This was their attempt to compete with interactive timesharing
    systems from DEC and other vendors.

    I am sorry, but it was really because their own products, TSO and TSS,
    didn't work. IBM really disliked VM and has tried to kill it several
    times. So the original VM work was done on the 360/40 & 67, special 360
    models with virtual memory. The original 370 announcement did not
    include Virtual Memory support; this cost them a lot of money as they
    ended up retro-fitting it to several CPUs. The XA architecture does not
    satisfy the

    <https://en.wikipedia.org/wiki/Popek_and_Goldberg_virtualization_requirements>

    so the hypervisor had to be re-written to use the SIE microcode....

    Those attempts to kill it failed because the MVS team needed VM to
    develop MVS, now z/OS.


    Trouble is, unlike those others, which
    had multiuser support built-in, CMS was single-user only. So as a quick
    hack, the "CP" (later "VM") hypervisor was introduced. Each user
    effectively had their own (virtual) machine. Sounds like a neat idea,
    until you realize that communication and sharing of info between machines
    (i.e. between different users) wouldn't have been so easy.

    Not really. They were developed at the same time.


    Did IBM ever address that problem of communication between machines?


    Depends what you mean by communications?

    The spool can be used to exchange files, so for example for e-mail via
    virtual readers, punches and printers...

    From virtually day 1 there was the Virtual Machine Communications
    Facility (VMCF), then IUCV - the Inter-User Communications Vehicle. TCP/IP
    can be layered on top of these.

    You can use these protocols to implement "Service Machines", virtual
    machines which run a server program.

    For example, the IBM office automation system PROFS, later OfficeVision,
    used "service machines" with which the user communicates via IUCV to
    manage Document Storage, Diary Management and Messaging.

    I think around the late 1970s IBM included the Shared File System, which
    finally allowed several users to have write access to the same file at
    the same time...

    .. so yes communications is not a problem.
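    As a purely conceptual sketch of that "service machine" pattern (Python,
    not the real IUCV or VMCF interfaces - a queue stands in for the message
    path between users and the server virtual machine):

        import queue
        import threading

        requests = queue.Queue()   # stand-in for a message path to the service machine

        def service_machine():
            # Runs "forever" in its own virtual machine, serving other users.
            while True:
                user, request = requests.get()
                if request == "STOP":
                    break
                print(f"service machine: handling {request!r} for {user}")

        worker = threading.Thread(target=service_machine)
        worker.start()

        # Two interactive users hand work to the shared service machine.
        requests.put(("ALICE", "store document"))
        requests.put(("BOB", "check diary"))
        requests.put(("OPERATOR", "STOP"))
        worker.join()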


    <https://www.libvirt.org/manpages/virsh.html>

    Are new installations of VMS still being sold?

    Probably not.

    So you can buy Solaris and I think HP-UX ...

    And of course macOS, but that's not in the server/enterprise league. But
    it still likely, far and away, the most popular OS that can legally call
    itself "Unix".

    (Not that many people care about that any more.)

    I don't believe that it can legally be called UNIX, but yes it's derived
    from BSD, but Apple no longer makes what we call servers...

    Dave


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Wed Sep 17 11:52:05 2025
    From Newsgroup: comp.os.vms

    In article <10ae172$33ukj$1@dont-email.me>,
    David Wade <g4ugm@dave.invalid> wrote:
    On 17/09/2025 07:08, Lawrence D'Oliveiro wrote:
    On Wed, 17 Sep 2025 00:25:32 +0100, David Wade wrote:

    Its interesting you say "modern virtualisation" because most of the
    various "tweaks and tricks" modern X64 virtualisations use were
    developed by IBM in the 1970s an 80s for VM/XA & VM/ESA. X86 and AMD
    CPUs didn't get these until 2005/6. zVM is really slick... but
    expensive.

    IBM invented virtualization, in the beginning to run multiple instances of
    CMS. This was their attempt to compete with interactive timesharing
    systems from DEC and other vendors.

    I am sorry, but it was really because their own products, TSO and TSS
    didn't work. IBM really disliked VM and has tried to kill it several
    times.

    Arguing with Lawrence is like trying to explain physics to a
    brick. Known troll; best not to feed; all that good stuff. He
    likes to distort reality so that it confirms his sophomoric "lol
    everything should be Linux" worldview, and will just double down
    or move the goal posts when he's shown to be objectively wrong.

    The irony here is that he's in a DEC newsgroup asserting that VM
    was created as a product in response to DEC to support
    timesharing, when in fact a) timesharing was invented on IBM
    machines, and b) it was a skunkworks project that was far more
    heavily influenced by Multics than anything else (the people who
    worked on VM were in the same physical _building_ as the
    Multics people and they knew each other well).

    So the original VM work was done on the 360/40 & 67, special 360
    models with virtual memory. The original 370 announcement did not
    include Virtual Memory support; this cost them a lot of money as they
    ended up retro-fitting it to several CPUs. The XA architecture does not
    satisfy the

    Fun fact: a year or two ago, I asked Doug McIlroy whether they
    had looked at a low-end IBM 360 for Unix (I thought a low-end
    machine like the 360/30 might have been attractive for a number
    of reasons, but it doesn't seem like something they considered
    before going with the PDP-11).

    But I was a bit ambiguous and he thought I meant Multics, for
    which he said that they had, quite seriously, but IBM didn't
    want to budge on adding virtual memory to the architecture, so
    they went with the GE 645 instead, as GE was willing to make the
    hardware modifications they wanted. The 360/67 work was
    happening at the time, but it was a skunkworks project itself,
    and they were unaware of it.

    <https://en.wikipedia.org/wiki/Popek_and_Goldberg_virtualization_requirements>

    I'm not sure about the specifics here, which is not to say that
    I don't believe you, but I'd love to see a source. 370 is known
    to meet the P&G requirements, and XA extended the architecture
    with some new features for supporting virtual machines; do you
    recall what they added that _violated_ the P&G requirements?

    so the hypervisor had to be re-written to use the SIE microcode....

    They failed because the MVS team needed it to develop MVS now zOS..

    Sounds like you've got some inside baseball info here; I'd love
    to see some sources if you can share them!

    Trouble is, unlike those others, which
    had multiuser support built-in, CMS was single-user only. So as a quick
    hack, the "CP" (later "VM") hypervisor was introduced. Each user
    effectively had their own (virtual) machine. Sounds like a neat idea,
    until you realize that communication and sharing of info between machines
    (i.e. between different users) wouldn't have been so easy.

    Not really. They were developed at the same time.

    Lol. The troll really has no idea what he's talking about.

    Did IBM ever address that problem of communication between machines?

    Depends what you mean by communications?

    The spool can be used to exchange files, so for example for e-mail via
    virtual readers, punches and printers...

    From virtually day 1 there was the Virtual Machine Communications
    Facility (VMCF) , then IUCV - Inter User Communication Facility. TCP/IP
    can be layered on top of these.

    You can use these protocols to implement "Service Machines", virtual
    machines which run a server program.

    For example the IBM Office Automation System PROFS later Office Vision
    used "service machines" with which the user communications via IUCV to
    manage Document Storage, Diary Management and Messaging.

    I think around the late 1970's IBM included the Shared File System which
    finally allowed several users to have write access to the same file at
    the same time...

    .. so yes communications is not a problem.

    You should tell him that CMS stands for, "Conversational Monitor
    System". VM is all about communications, as most timeshared
    systems are.

    <https://www.libvirt.org/manpages/virsh.html>

    Are new installations of VMS still being sold?

    Probably not.

    So you can buy Solaris and I think HP-UX ...

    And of course macOS, but that's not in the server/enterprise league. But
    it still likely, far and away, the most popular OS that can legally call
    itself "Unix".

    (Not that many people care about that any more.)

    I don't believe that it can legally be called UNIX, but yes its derived
    from BSD but Apple no longer make what we call servers...

    macOS is one of the few that _can_ legally be called Unix. The
    full list is here: https://www.opengroup.org/openbrand/register/

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Craig A. Berry@craigberry@nospam.mac.com to comp.os.vms on Wed Sep 17 06:53:38 2025
    From Newsgroup: comp.os.vms

    On 9/17/25 5:06 AM, David Wade wrote:
    On 17/09/2025 07:08, Lawrence D'Oliveiro wrote:

    And of course macOS, but that's not in the server/enterprise league. But
    it still likely, far and away, the most popular OS that can legally call
    itself "Unix".

    I don't believe that it can legally be called UNIX, but yes its derived
    from BSD but Apple no longer make what we call servers...

    Apparently it can:

    https://www.opengroup.org/openbrand/register/brand3725.htm
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Wade@g4ugm@dave.invalid to comp.os.vms on Wed Sep 17 16:39:55 2025
    From Newsgroup: comp.os.vms

    On 17/09/2025 12:52, Dan Cross wrote:
    In article <10ae172$33ukj$1@dont-email.me>,
    David Wade <g4ugm@dave.invalid> wrote:
    On 17/09/2025 07:08, Lawrence D'Oliveiro wrote:
    On Wed, 17 Sep 2025 00:25:32 +0100, David Wade wrote:

    Its interesting you say "modern virtualisation" because most of the
    <https://en.wikipedia.org/wiki/Popek_and_Goldberg_virtualization_requirements>

    I'm not sure about the specifics here, which is not to say that
    I don't believe you, but I'd love to see a source. 370 is known
    to meet the P&G requirements, and XA extended the architecture
    with some new features for supporting virtual machines; do you
    recall what they added that _violated_ the P&G requirements?


    It is the same issue that differentiates the 68000 and the 68010, and
    which prevented the VAX from having a hypervisor without microcode
    changes...

    The VM/370 hypervisor relies on running the virtual machines in "problem
    state" or "user mode" even if the VM thinks it's running in "Supervisor
    State" or "privileged mode". So for example CMS thinks it's running in
    real memory, whereas in fact it is running in virtual.

    In order for this to work, any instruction which discloses the system
    state needs to be a privileged instruction. This is true on S/370, but
    this generates a huge overhead when running non-virtual machines.
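    As a rough sketch of that trap-and-emulate model (Python, not real CP
    code; the instruction names are a tiny illustrative subset of S/370
    privileged operations and the "hardware" is simulated): the guest always
    runs unprivileged, so every privileged instruction it issues traps to
    the hypervisor, which applies it to the guest's virtual state instead of
    the real machine.

        PRIVILEGED = {"LPSW", "SSK", "SIO"}   # illustrative subset of S/370 ops

        class GuestState:
            def __init__(self):
                self.virtual_psw = 0          # what the guest believes is the real PSW

        def emulate(guest, op, operand):
            # Hypervisor's handler for a privileged-operation trap.
            if op == "LPSW":                  # guest loads "the" PSW -> update its virtual copy
                guest.virtual_psw = operand
            elif op == "SIO":                 # guest starts "real" I/O -> redirect to a virtual device
                print(f"start I/O on virtual device {operand:#05x}")
            # every other sensitive instruction must also land here, or the scheme breaks

        def run(guest, program):
            for op, operand in program:
                if op in PRIVILEGED:
                    emulate(guest, op, operand)   # trap taken: guest is in problem state
                # unprivileged instructions would execute directly on the hardware

        run(GuestState(), [("LPSW", 0x00080000), ("SIO", 0x191)])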

    So on XA and later there are ways to examine the system state from a non-privileged program.

    I found the paper on Virtualising VAX that was linked interesting ...

    https://www.cs.cmu.edu/~15811/papers/vax_vmm.pdf

    So they modified the VAX microcode to get round this problem; however
    the VAX has additional challenges as it has four protection states,
    not two like a S/370. There are additional issues with VAX covered in
    this paper....


    so the hypervisor had to be re-written to use the SIE microcode....

    They failed because the MVS team needed it to develop MVS now zOS..

    Sounds like you've got some inside baseball info here; I'd love
    to see some sources if you can share them!


    I think it's widely "put about". For example, in Melinda Varian's paper
    from 1991:-

    "VM AND THE VM COMMUNITY: Past, Present, and Future"

    https://www.leeandmelindavarian.com/Melinda/neuvm.pdf

    bottom of page 55 in the PDF:-

    There is a widely believed (but possibly apocryphal) story that
    anti-VM, pro-MVS forces at one point nearly succeeded in convincing the
    company to kill VM, but the President of IBM, upon learning how heavily
    the MVS developers depended upon VM, said simply, "If it's good enough
    for you, it's good enough for the customers."


    Trouble is, unlike those others, which
    had multiuser support built-in, CMS was single-user only. So as a quick
    hack, the "CP" (later "VM") hypervisor was introduced. Each user
    effectively had their own (virtual) machine. Sounds like a neat idea,
    until you realize that communication and sharing of info between machines
    (i.e. between different users) wouldn't have been so easy.

    Not really. They were developed at the same time.

    Lol. The troll really has no idea what he's talking about.

    Did IBM ever address that problem of communication between machines?

    Depends what you mean by communications?

    The spool can be used to exchange files, so for example for e-mail via
    virtual readers, punches and printers...

    From virtually day 1 there was the Virtual Machine Communications
    Facility (VMCF) , then IUCV - Inter User Communication Facility. TCP/IP
    can be layered on top of these.

    You can use these protocols to implement "Service Machines", virtual
    machines which run a server program.

    For example the IBM Office Automation System PROFS later Office Vision
    used "service machines" with which the user communications via IUCV to
    manage Document Storage, Diary Management and Messaging.

    I think around the late 1970's IBM included the Shared File System which
    finally allowed several users to have write access to the same file at
    the same time...

    .. so yes communications is not a problem.

    You should tell him that CMS stands for, "Conversational Monitor
    System". VM is all about communications, as most timeshared
    systems are.

    <https://www.libvirt.org/manpages/virsh.html>

    Are new installations of VMS still being sold?

    Probably not.

    So you can buy Solaris and I think HP-UX ...

    And of course macOS, but that's not in the server/enterprise league. But
    it still likely, far and away, the most popular OS that can legally call
    itself "Unix".

    (Not that many people care about that any more.)

    I don't believe that it can legally be called UNIX, but yes its derived
    from BSD but Apple no longer make what we call servers...

    macOS is one of the few that _can_ legally be called Unix. The
    full list is here: https://www.opengroup.org/openbrand/register/



    oh thanks for that...


    - Dan C.


    Dave
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Rich Alderson@news@alderson.users.panix.com to comp.os.vms on Wed Sep 17 15:15:01 2025
    From Newsgroup: comp.os.vms

    cross@spitfire.i.gajendra.net (Dan Cross) writes:

    In article <10ae172$33ukj$1@dont-email.me>,
    David Wade <g4ugm@dave.invalid> wrote:

    I think around the late 1970's IBM included the Shared File System which
    finally allowed several users to have write access to the same file at
    the same time...

    .. so yes communications is not a problem.

    You should tell him that CMS stands for, "Conversational Monitor
    System". VM is all about communications, as most timeshared
    systems are.

    Dan, that's actually a retronym. It was originally called the
    "Cambridge Monitor System". I think the renaming occurred when
    it moved off the modified 360/40 to the 360/67, but I could be
    hallucinating like an LLM.
    --
    Rich Alderson news@alderson.users.panix.com
    Audendum est, et veritas investiganda; quam etiamsi non assequamur,
    omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
    --Galen
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Wed Sep 17 20:23:55 2025
    From Newsgroup: comp.os.vms

    In article <10aekoc$3a62f$1@dont-email.me>,
    David Wade <g4ugm@dave.invalid> wrote:
    On 17/09/2025 12:52, Dan Cross wrote:
    In article <10ae172$33ukj$1@dont-email.me>,
    David Wade <g4ugm@dave.invalid> wrote:
    On 17/09/2025 07:08, Lawrence DrCOOliveiro wrote:
    On Wed, 17 Sep 2025 00:25:32 +0100, David Wade wrote:

    Its interesting you say "modern virtualisation" because most of the
    <https://en.wikipedia.org/wiki/Popek_and_Goldberg_virtualization_requirements>

    I'm not sure about the specifics here, which is not to say that
    I don't believe you, but I'd love to see a source. 370 is known
    to meet the P&G requirements, and XA extended the architecture
    with some new features for supporting virtual machines; do you
    recall what they added that _violated_ the P&G requirements?

    It is the same issue that differentiates a 68000 and the 68010 and which
    prevented the VAX having a hypervisor without microcode changes...

    Or the inverse? The issue with the 68000 was that it noted the
    processor privilege mode, interrupt level, and debugging trace
    control in the status register, and reading that register was
    unprivileged. The 68010 simply made the instruction reading the
    entire SR privileged, and added an unprivileged instruction to
    read just the condition codes.

    Sounds like IBM took an already classically virtualizable machine
    and made it not so for efficiency reasons, adding in new
    sensitive and yet unprivileged instructions, but also a
    compatibility hack via microcode and a new instruction to switch
    to that?

    The VM/370 hypervisor relies on running the virtual machines in "problem
    state" or "user mode" even if the VM thinks its running in "Supervisor
    State" or "privileged mode". So for example CMS thinks its running in
    real memory, where as in fact it running in virtual.

    In order for this to work any instruction which discloses the system
    state needs to be a privileged instruction. This is true on S/370 but
    this generates a huge overhead when running non-virtual machines.

    Yup. This is pretty much theorem 1 from P&G's 1974 CACM paper.

    P&G would classify instructions that expose that kind of state as
    "sensitive". Their criterion is that all sensitive instructions
    must be a subset of the set of privileged instructions, so that
    they can be trapped (and emulated, usually) by the hypervisor.
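    As a toy rendering of that criterion (Python, with the 68000/68010
    example from above; the instruction name is just a label for
    illustration): a machine is classically virtualizable in the P&G sense
    only if its sensitive instructions are a subset of its privileged ones.

        def virtualizable(sensitive: set, privileged: set) -> bool:
            # Popek & Goldberg, Theorem 1: sensitive must be a subset of privileged.
            return sensitive <= privileged

        # 68000-like: reading the whole SR is sensitive but unprivileged -> fails.
        print(virtualizable({"MOVE_FROM_SR"}, set()))                 # False

        # 68010-like: the same instruction made privileged -> passes.
        print(virtualizable({"MOVE_FROM_SR"}, {"MOVE_FROM_SR"}))      # True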

    So on XA and later there are ways to examine the system state from a
    non-privileged program.

    Interesting. I guess I'm curious what they changed; perhaps the
    address mode bit? My reading suggests that they added some
    enhancements to improve VM performance, but it's unclear to me
    what they did that made XA unvirtualizable.

    I found the paper on Virtualising VAX that was linked interesting ...

    https://www.cs.cmu.edu/~15811/papers/vax_vmm.pdf

    Thanks! I thought it was interesting.

    So they modified the VAX microcode to get round this problem, however
    the VAX has an additional challenges as it has four protection states,
    not two like a S/370. There are additional issues with VAX covered in
    this paper....

    Critically, P&G never considered virtual memory beyond a single
    relocation register. VM invented shadow paging to make it cope.
    I imagine the paging scheme on the VAX would require similar
    techniques.
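    A minimal sketch of the shadow-paging idea, with made-up frame numbers:
    the hypervisor composes the guest's page table (guest-virtual to
    guest-"real") with its own map of guest-"real" to host-physical frames,
    and the resulting shadow table is what the real MMU walks.

        guest_page_table = {0: 5, 1: 7}       # guest-virtual page -> guest-real frame
        host_map         = {5: 100, 7: 212}   # guest-real frame   -> host-physical frame

        def build_shadow(guest_pt, host_map):
            shadow = {}
            for gva, gra in guest_pt.items():
                if gra in host_map:           # only frames the hypervisor has backed
                    shadow[gva] = host_map[gra]
                # otherwise leave the entry invalid so the access faults to the hypervisor
            return shadow

        print(build_shadow(guest_page_table, host_map))   # {0: 100, 1: 212}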

    https://homes.cs.aau.dk/~kleist/Courses/nds-e05/papers/virtual-vax.pdf
    goes into some detail here. The ring compression thing is
    interesting.

    so the hypervisor had to be re-written to use the SIE microcode....

    They failed because the MVS team needed it to develop MVS now zOS..

    Sounds like you've got some inside baseball info here; I'd love
    to see some sources if you can share them!

    I think its widely "put about. For example in Melinda Varian's paper
    from 1991:-

    "VM AND THE VM COMMUNITY: Past, Present, and Future"

    https://www.leeandmelindavarian.com/Melinda/neuvm.pdf

    bottom of page 55 in the PDF:-

    There is a widely believed (but possibly apocryphal) story that
    anti-VM, pro-MVS forces at one point nearly succeeded in convincing the
    company to kill VM, but the President of IBM, upon learning how heavily
    the MVS developers depended upon VM, said simply, "If it's good enough
    for you, it's good enough for the customers."

    My problem with Varian's paper is that every time I sit down to
    read just a part, I get sucked into it and an hour or two goes
    by. It's just too good!

    [snip]
    macOS is one of the few that _can_ legally be called Unix. The
    full list is here: https://www.opengroup.org/openbrand/register/

    oh thanks for that...

    Sure thing!

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Wade@g4ugm@dave.invalid to comp.os.vms on Wed Sep 17 21:57:40 2025
    From Newsgroup: comp.os.vms

    On 17/09/2025 21:23, Dan Cross wrote:
    In article <10aekoc$3a62f$1@dont-email.me>,
    David Wade <g4ugm@dave.invalid> wrote:
    On 17/09/2025 12:52, Dan Cross wrote:
    In article <10ae172$33ukj$1@dont-email.me>,
    David Wade <g4ugm@dave.invalid> wrote:
    On 17/09/2025 07:08, Lawrence D'Oliveiro wrote:
    On Wed, 17 Sep 2025 00:25:32 +0100, David Wade wrote:

    Its interesting you say "modern virtualisation" because most of the
    <https://en.wikipedia.org/wiki/Popek_and_Goldberg_virtualization_requirements>

    I'm not sure about the specifics here, which is not to say that
    I don't believe you, but I'd love to see a source. 370 is known
    to meet the P&G requirements, and XA extended the architecture
    with some new features for supporting virtual machines; do you
    recall what they added that _violated_ the P&G requirements?

    It is the same issue that differentiates a 68000 and the 68010 and which
    prevented the VAX having a hypervisor without microcode changes...

    Or the inverse? The issue with the 68000 was that it noted the
    processor privilege mode, interrupt level, and debugging trace
    control in the status register, and reading that register was
    unprivileged. The 68010 simply made the instruction reading the
    entire SR privileged, and added an unprivileged instruction to
    read just the condition codes.

    Sounds like IBM took an already classically virtualizable machine
    and made it not so for efficiency reasons, adding in new
    sensitive and yet unprivileged instructions, but also a
    compatibility hack via microcode and a new instruction to switch
    to that?

    I think it's to do with switching between 24- and 31-bit addressing...
    .. SIE, or Start Interpretive Execution, creates a virtual environment
    that the microcode manages.

    As I am sure you know, many of the earlier 370-class machines had similar
    facilities, in that ECPS:VM implemented some of the functions normally
    carried out in the hypervisor in the CPU microcode. I found this
    free-to-download paper on it:-

    https://dl.acm.org/doi/abs/10.1145/1096532.1096534

    in many ways SIE is an extension of these assists...



    The VM/370 hypervisor relies on running the virtual machines in "problem
    state" or "user mode" even if the VM thinks its running in "Supervisor
    State" or "privileged mode". So for example CMS thinks its running in
    real memory, where as in fact it running in virtual.

    In order for this to work any instruction which discloses the system
    state needs to be a privileged instruction. This is true on S/370 but
    this generates a huge overhead when running non-virtual machines.

    Yup. This is pretty much theorem 1 from P&G's 1974 CACM paper.

    P&G would classify instructions that expose that kind state as
    "sensitive". Their criteria is that all sensitive instructions
    must be a subset of the set of privileged instructions, so that
    they can be trapped (and emulated, usually) by the hypervisor.

    So on XA and later there are ways to examine the system state from a
    non-privileged program.

    Interesting. I guess I'm curious what they changed; perhaps the
    address mode bit? My reading suggests that they added some
    enhancements to improve VM performance, but it's unclear to me
    what they did that made XA unvirtualizable.

    I found the paper on Virtualising VAX that was linked interesting ...

    https://www.cs.cmu.edu/~15811/papers/vax_vmm.pdf

    Thanks! I thought it was interesting.

    So they modified the VAX microcode to get round this problem, however
    the VAX has an additional challenges as it has four protection states,
    not two like a S/370. There are additional issues with VAX covered in
    this paper....

    Critically, P&G never considered virtual memory beyond a single
    relocation register. VM invented shadow paging to make it cope.
    I imagine the paging scheme on the VAX would require similar
    techniques.

    https://homes.cs.aau.dk/~kleist/Courses/nds-e05/papers/virtual-vax.pdf
    goes into some detail here. The ring compression thing is
    interesting.

    so the hypervisor had to be re-written to use the SIE microcode....

    They failed because the MVS team needed it to develop MVS now zOS..

    Sounds like you've got some inside baseball info here; I'd love
    to see some sources if you can share them!

    I think its widely "put about. For example in Melinda Varian's paper
    from 1991:-

    "VM AND THE VM COMMUNITY: Past, Present, and Future"

    https://www.leeandmelindavarian.com/Melinda/neuvm.pdf

    bottom of page 55 in the PDF:-

    There is a widely believed (but possibly apocryphal) story that
    anti-VM, pro-MVS forces at one point nearly succeeded in convincing the
    company to kill VM, but the President of IBM, upon learning how heavily
    the MVS developers depended upon VM, said simply, "If it's good enough
    for you, it's good enough for the customers."

    My problem with Varian's paper is that every time I sit down to
    read just a part, I get sucked into it and an hour or two goes
    by. It's just too good!

    [snip]
    macOS is one of the few that _can_ legally be called Unix. The
    full list is here: https://www.opengroup.org/openbrand/register/

    oh thanks for that...

    Sure thing!

    - Dan C.


    Dave
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Wed Sep 17 22:03:19 2025
    From Newsgroup: comp.os.vms

    In article <mddecs451sa.fsf@panix3.panix.com>,
    Rich Alderson <news@alderson.users.panix.com> wrote:
    cross@spitfire.i.gajendra.net (Dan Cross) writes:

    In article <10ae172$33ukj$1@dont-email.me>,
    David Wade <g4ugm@dave.invalid> wrote:

    I think around the late 1970's IBM included the Shared File System which
    finally allowed several users to have write access to the same file at
    the same time...

    .. so yes communications is not a problem.

    You should tell him that CMS stands for, "Conversational Monitor
    System". VM is all about communications, as most timeshared
    systems are.

    Dan, that's actually a retronym. It was originally called the
    "Cambridge Monitor System". I think the renaming occurred when
    it moved off the modified 360/40 to the 360/67, but I could be
    hallucinating like an LLM.

    Thanks, Rich. I was hoping Varian might pin down the date of
    that change. She does mention that the initial name was
    "Cambridge Monitor System", and on page 57 says that,'VM/370 was
    announced with two components, CP, the "Control Program", and
    CMS, which was now to be called the "Conversational Monitor
    System".'

    So it seems that the name stuck throughout the lifetime of CP on
    the 360.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Wed Sep 17 22:13:39 2025
    From Newsgroup: comp.os.vms

    In article <10af7c4$3ae3g$1@dont-email.me>,
    David Wade <g4ugm@dave.invalid> wrote:
    On 17/09/2025 21:23, Dan Cross wrote:
    In article <10aekoc$3a62f$1@dont-email.me>,
    David Wade <g4ugm@dave.invalid> wrote:
    On 17/09/2025 12:52, Dan Cross wrote:
    In article <10ae172$33ukj$1@dont-email.me>,
    David Wade <g4ugm@dave.invalid> wrote:
    On 17/09/2025 07:08, Lawrence D'Oliveiro wrote:
    On Wed, 17 Sep 2025 00:25:32 +0100, David Wade wrote:

    Its interesting you say "modern virtualisation" because most of the
    <https://en.wikipedia.org/wiki/Popek_and_Goldberg_virtualization_requirements>

    I'm not sure about the specifics here, which is not to say that
    I don't believe you, but I'd love to see a source. 370 is known
    to meet the P&G requirements, and XA extended the architecture
    with some new features for supporting virtual machines; do you
    recall what they added that _violated_ the P&G requirements?

    It is the same issue that differentiates a 68000 and the 68010 and which
    prevented the VAX having a hypervisor without microcode changes...

    Or the inverse? The issue with the 68000 was that it noted the
    processor privilege mode, interrupt level, and debugging trace
    control in the status register, and reading that register was
    unprivileged. The 68010 simply made the instruction reading the
    entire SR privileged, and added an unprivileged instruction to
    read just the condition codes.

    Sounds like IBM took an already classically virtualizable machine
    and made it not so for efficiency reasons, adding in new
    sensitive and yet unprivileged instructions, but also a
    compatibility hack via microcode and a new instruction to switch
    to that?

    I think its to do with switching between 24 and 31 bit addressing...
    .. SIE or Start Interpretive Execution creates a virtual environment
    that the microcode manages.

    _nod_ makes sense.

    As I am sure you know many of the earlier 370 class machines had similar
    facilities in that ECPS:VM implemented some of the functions normally
    carried out in the Hypervisor in the CPU microcode. I found this free to
    download paper on it :-

    https://dl.acm.org/doi/abs/10.1145/1096532.1096534

    in many ways SIE is an extension of these assists...

    Ooo, that's very interesting. Thanks for the reference!

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.os.vms on Sat Sep 20 21:14:40 2025
    From Newsgroup: comp.os.vms

    In article <10ac7ph$2nnj7$1@dont-email.me>, clubley@remove_me.eisner.decus.org-Earth.UFP (Simon Clubley) wrote:

    Especially given that z/OS is actually several years older than VMS
    and is still going very strongly indeed.

    Could VMS still have been as strong to this day if different
    decisions and paths in the past had been taken ?

    Possibly, but one can't be certain what different decisions would have
    that effect, and there's a substantial element of luck in these matters.
    Here's my bid:

    The VAX instruction set is quite nice in some ways and quite horrible in
    others. Some of those made it hard to make it run very fast.

    The extremely variable-length instructions are a prime example. In
    contrast to VAX, the IBM Z instruction set only has three instruction
    lengths - 2, 4 and 6 bytes, which has not changed since System/360 - and
    you can always discover the length of each instruction from its first two
    bytes. That makes decoding multiple instructions simultaneously easier,
    which is a bottleneck in x86 and x86-64, the other long-lasting CISC
    instruction set.
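    A small sketch of that property (Python): on S/360 and its successors the
    top two bits of the first opcode byte encode the instruction length, so a
    decoder can find instruction boundaries without decoding anything else.
    The example opcodes are the standard AR, A and MVC.

        def instruction_length(first_byte: int) -> int:
            top_two = (first_byte >> 6) & 0b11
            if top_two == 0b00:
                return 2      # e.g. RR-format instructions
            if top_two in (0b01, 0b10):
                return 4      # e.g. RX-, RS- and SI-format instructions
            return 6          # 0b11: e.g. SS-format instructions

        print(instruction_length(0x1A))   # AR  (RR) -> 2
        print(instruction_length(0x5A))   # A   (RX) -> 4
        print(instruction_length(0xD2))   # MVC (SS) -> 6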

    The relative simplicity of the IBM Z instruction set probably derives
    from the greater abstraction in its design process. IBM tried to design
    it without considering implementation very much, because they were
    producing five initial implementations. These had a speed range of 30:1,
    using quite varied technology.

    There's one place where IBM paid too much attention to implementation:
    the hexadecimal floating point. It was picked because it allowed a
    simpler shifter for normalisation, but caused excessive loss of precision.
    That was a bad idea, no matter how much IBM tried to hide the problems,
    and limited their mainframes in technical computing. They added IEEE floating-point in System/390, but that was far too late.

    In contrast, the VAX instruction set was likely designed in parallel with
    the 11/780, and is pretty much built around the concept of a microcode implementation running one instruction at a time. It did get floating
    point right, though.

    VAX was replaced by Alpha because there was no way to make VAX fast
    enough to compete with the RISCs of the late 1980s and early 1990s. A
    different VAX that could use out-of-order execution effectively might
    have been able to compete. If so, that would have enabled DEC to stay in
    its technical computing niche, instead of switching to the commercial
    data processing market in time to be slaughtered by Wintel.

    Our alternate history VAX would then have had to be extended to 64-bit.
    This was possible for System/390 and x86, so it might well have been
    possible for alt-VAX. If DEC had still been financially healthy, there
    could have been a proper 64-bit VMS API, rather than the half-done
    mixture that was implemented in our history.

    I talked to a colleague, who returned to my employer after a takeover,
    and remembers our business in the early 1980s. He's perfectly clear that
    VMS was a far better OS for technical computing than any of the
    proprietary minicomputer OSes of the time, all of which are dead. But VAX couldn't match the performance of high-end 68000 Unix machines, followed
    by the RISCs, and the rest is history.

    John
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Sat Sep 20 23:40:07 2025
    From Newsgroup: comp.os.vms

    On Sat, 20 Sep 2025 21:13 +0100 (BST), John Dallman wrote:

    ... the IBM Z instruction set only has three instruction lengths -
    2, 4 and 6 bytes, which has not changed since System/360 - and you
    can always discover the length of each instruction from its first
    two bytes. That makes having multiple instructions being decoded simultaneously easier, which is a bottleneck in x86 and x86-64, the
    other long-lasting CISC instruction set.

    Mainframes were never designed for high CPU performance.

    Look at the current Top500 list of the world's fastest machines; what architectures do you see? IBM POWER offers a few contenders; also ARM,
    I think MIPS, and of course the most common is x86-64. At some point
    no doubt a RISC-V machine is likely to make an appearance.

    No IBM Z. Not before, not now, not ever.

    I talked to a colleague, who returned to my employer after a
    takeover, and remembers our business in the early 1980s. He's
    perfectly clear that VMS was a far better OS for technical computing
    than any of the proprietary minicomputer OSes of the time, all of
    which are dead. But VAX couldn't match the performance of high-end
    68000 Unix machines, followed by the RISCs, and the rest is history.

    Presumably your colleague was talking only about non-Unix systems?
    After all, what were "Unix workstations" if not platforms for
    "technical computing"?

    And while official "Unix" did last into the 1990s and early '00s
    before (mostly) expiring, the spirit lives on in Linux today, to the
    extent that both Microsoft and Apple have decided that they need to
    add support for running actual Linux kernels on their respective
    proprietary platforms.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Sat Sep 20 20:09:53 2025
    From Newsgroup: comp.os.vms

    On 9/20/2025 7:40 PM, Lawrence D'Oliveiro wrote:
    On Sat, 20 Sep 2025 21:13 +0100 (BST), John Dallman wrote:

    ... the IBM Z instruction set only has three instruction lengths -
    2, 4 and 6 bytes, which has not changed since System/360 - and you
    can always discover the length of each instruction from its first
    two bytes. That makes having multiple instructions being decoded
    simultaneously easier, which is a bottleneck in x86 and x86-64, the
    other long-lasting CISC instruction set.

    Mainframes were never designed for high CPU performance.

    Look at the current Top500 list of the world's fastest machines; what architectures do you see? IBM POWER offers a few contenders; also ARM,
    I think MIPS, and of course the most common is x86-64. At some point
    no doubt a RISC-V machine is likely to make an appearance.

    No IBM Z. Not before, not now, not ever.

    Not now.

    But once upon a time.

    IBM 3090 with integrated vector facility and the
    equivalent and compatible Amdahl vector.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Sat Sep 20 20:51:21 2025
    From Newsgroup: comp.os.vms

    On 9/20/2025 4:13 PM, John Dallman wrote:
    The VAX instruction set is quite nice in some ways and quite horrible in others. Some of those made it hard to make run very fast.

    The extremely variable-length instructions are a prime example.

    CASEx is probably the worst.

    Example of >100 bytes long:

    .title longinst
    .psect $CODE quad,pic,con,lcl,shr,exe,nowrt
    .entry letter,^m<>
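    ; one CASEL instruction: selector @4(ap), base #1, limit #25,
    ; followed by 26 word displacements that are all operands of
    ; this same instruction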
    casel @4(ap), #1, #<26 - 1>
    100$: .word 201$ - 100$
    .word 202$ - 100$
    .word 203$ - 100$
    .word 204$ - 100$
    .word 205$ - 100$
    .word 206$ - 100$
    .word 207$ - 100$
    .word 208$ - 100$
    .word 209$ - 100$
    .word 210$ - 100$
    .word 211$ - 100$
    .word 212$ - 100$
    .word 213$ - 100$
    .word 214$ - 100$
    .word 215$ - 100$
    .word 216$ - 100$
    .word 217$ - 100$
    .word 218$ - 100$
    .word 219$ - 100$
    .word 220$ - 100$
    .word 221$ - 100$
    .word 222$ - 100$
    .word 223$ - 100$
    .word 224$ - 100$
    .word 225$ - 100$
    .word 226$ - 100$
    201$: movl #<64+1>, r0
    brb 300$
    202$: movl #<64+2>, r0
    brb 300$
    203$: movl #<64+3>, r0
    brb 300$
    204$: movl #<64+4>, r0
    brb 300$
    205$: movl #<64+5>, r0
    brb 300$
    206$: movl #<64+6>, r0
    brb 300$
    207$: movl #<64+7>, r0
    brb 300$
    208$: movl #<64+8>, r0
    brb 300$
    209$: movl #<64+9>, r0
    brb 300$
    210$: movl #<64+10>, r0
    brb 300$
    211$: movl #<64+11>, r0
    brb 300$
    212$: movl #<64+12>, r0
    brb 300$
    213$: movl #<64+13>, r0
    brb 300$
    214$: movl #<64+14>, r0
    brb 300$
    215$: movl #<64+15>, r0
    brb 300$
    216$: movl #<64+16>, r0
    brb 300$
    217$: movl #<64+17>, r0
    brb 300$
    218$: movl #<64+18>, r0
    brb 300$
    219$: movl #<64+19>, r0
    brb 300$
    220$: movl #<64+20>, r0
    brb 300$
    221$: movl #<64+21>, r0
    brb 300$
    222$: movl #<64+22>, r0
    brb 300$
    223$: movl #<64+23>, r0
    brb 300$
    224$: movl #<64+24>, r0
    brb 300$
    225$: movl #<64+25>, r0
    brb 300$
    226$: movl #<64+26>, r0
    300$: ret
    .end

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Sat Sep 20 21:10:51 2025
    From Newsgroup: comp.os.vms

    On 9/20/2025 8:51 PM, Arne Vajhøj wrote:
    On 9/20/2025 4:13 PM, John Dallman wrote:
    The VAX instruction set is quite nice in some ways and quite horrible in
    others. Some of those made it hard to make run very fast.

    The extremely variable-length instructions are a prime example.

    CASEx is probably the worst.

    Example of >100 bytes long:

    Correction:

    50 bytes long

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Sat Sep 20 21:30:00 2025
    From Newsgroup: comp.os.vms

    On 9/20/2025 9:10 PM, Arne Vajhøj wrote:
    On 9/20/2025 8:51 PM, Arne Vajhøj wrote:
    On 9/20/2025 4:13 PM, John Dallman wrote:
    The VAX instruction set is quite nice in some ways and quite horrible in others. Some of those made it hard to make run very fast.

    The extremely variable-length instructions are a prime example.

    CASEx is probably the worst.

    Example of >100 bytes long:

    Correction:

    50 bytes long

    But it is possible to make a 100 byte instruction.

    If reusing jump destinations I guess it would be possible
    to create a 32 KB instruction.
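
    Back-of-the-envelope on those numbers (assuming the selector keeps a 2-byte byte-displacement specifier and base/limit stay 1-byte short literals; a large limit would need a longer immediate operand):

    #include <stdio.h>

    /* Approximate encoded size of a VAX CASEL with n word displacements:
       1 opcode byte + ~2-byte selector specifier + 1-byte base + 1-byte
       limit + 2*n bytes of displacement table. */
    static unsigned long casel_bytes(unsigned long n)
    {
        return 1 + 2 + 1 + 1 + 2 * n;
    }

    int main(void)
    {
        printf("   48 entries -> %lu bytes (past 100)\n", casel_bytes(48));
        printf("16384 entries -> %lu bytes (about 32 KB)\n", casel_bytes(16384));
        return 0;
    }

    The word displacements only reach about +/-32 KB, which is why the targets would have to be reused for a table that big.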

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Sun Sep 21 01:40:34 2025
    From Newsgroup: comp.os.vms

    On Sat, 20 Sep 2025 20:09:53 -0400, Arne Vajhøj wrote:

    On 9/20/2025 7:40 PM, Lawrence D'Oliveiro wrote:

    Look at the current Top500 list of the world's fastest machines; what
    architectures do you see? IBM POWER offers a few contenders; also ARM,
    I think MIPS, and of course the most common is x86-64. At some point no
    doubt a RISC-V machine is likely to make an appearance.

    No IBM Z. Not before, not now, not ever.

    Not now.

    But once upon a time.

    IBM 3090 with integrated vector facility and the equivalent and
    compatible Amdahl vector.

    Was it ever competitive?

    No. That's why it was abandoned.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.os.vms on Sun Sep 21 10:57:40 2025
    From Newsgroup: comp.os.vms

    In article <10ane0m$1dl6v$4@dont-email.me>, ldo@nz.invalid (Lawrence D_Oliveiro) wrote:

    Mainframes were never designed for high CPU performance.

    IBM certainly intended them to be, and the IBM 360 Model 91 was the first
    ever computer to use Tomasulo's algorithm, which is now ubiquitous in
    fast microprocessors.

    <https://en.wikipedia.org/wiki/IBM_System/360_Model_91> <https://en.wikipedia.org/wiki/Tomasulo%27s_algorithm>

    Modern IBM Z is not CPU-competitive with fast systems, but it is much
    faster than the originals, via most of the same methods as current fast systems. The instruction set has coped with that fairly well. I'm not
    claiming it's a great one, but it has been more amenable to changing implementations than VAX was.

    He's perfectly clear that VMS was a far better OS for technical
    computing than any of the proprietary minicomputer OSes of the time
    Presumably your colleague was talking only about non-Unix systems?

    That's why I said "proprietary minicomputer OSes." The product I work on
    was originally developed on VAX/VMS, and after having supported and
    dropped many OSes, now runs on Android, iOS, Linux, macOS and Windows. Amusingly, it now runs on more ARM64 platforms than x86-family ones,
    although x86-64 Windows and Linux are where most of the volume is.

    John
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.os.vms on Sun Sep 21 10:57:40 2025
    From Newsgroup: comp.os.vms

    In article <68cf5518$0$718$14726298@news.sunsite.dk>, arne@vajhoej.dk
    (Arne Vajhoj) wrote:

    Correction:
    50 bytes long
    But it is possible to make a 100 byte instruction.

    If reusing jump destinations I guess it would be possible
    to create a 32 KB instruction.

    Ouch!

    Register masks are another thing that makes fast implementation difficult.


    John
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Wade@g4ugm@dave.invalid to comp.os.vms on Sun Sep 21 10:56:33 2025
    From Newsgroup: comp.os.vms

    On 21/09/2025 00:40, Lawrence D'Oliveiro wrote:
    On Sat, 20 Sep 2025 21:13 +0100 (BST), John Dallman wrote:

    ... the IBM Z instruction set only has three instruction lengths -
    2, 4 and 6 bytes, which has not changed since System/360 - and you
    can always discover the length of each instruction from its first
    two bytes. That makes having multiple instructions being decoded
    simultaneously easier, which is a bottleneck in x86 and x86-64, the
    other long-lasting CISC instruction set.

    Mainframes were never designed for high CPU performance.

    Look at the current Top500 list of the world's fastest machines; what architectures do you see? IBM POWER offers a few contenders; also ARM,
    I think MIPS, and of course the most common is x86-64. At some point
    no doubt a RISC-V machine is likely to make an appearance.

    No IBM Z. Not before, not now, not ever.


    No, but these machines are all special purpose. Look at some of the Z technical documents; the way the system is built is fascinating.

    The real advantage of the 360/370 etc. architecture was the way it did
    IO. The original channel with its own dedicated processor and 8-bit bus running at 1 MHz yielding 8 Mbit/s was rapid for its era.

    Then the use of block mode terminals so the management of input fields
    was all done in the terminal controller. The Mainframe never saw an
    interrupt until a complete form was filled in.

    I think DEC or was it HP forgot this with the Alpha. I remember looking
    at Alpha for Microsoft Exchange on Windows/NT. It was really hard to
    justify using an Alpha because Exchange is very IO intensive. You
    couldn't get enough RAID to use the CPU.

    But we digress, I don't believe the techniques IBM use to perpetuate the
    use of Z would have worked with VMS. Remember IBM too has had its
    failures. No one runs AIX on Z or X86 these days but at one time it was flavour of the month in IBM. I think OS/2 is in a similar position to
    VMS. Now on its third owner/manager after IBM...

    Dave
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Sun Sep 21 13:08:34 2025
    From Newsgroup: comp.os.vms

    On 9/20/2025 9:40 PM, Lawrence D'Oliveiro wrote:
    On Sat, 20 Sep 2025 20:09:53 -0400, Arne Vajhøj wrote:
    On 9/20/2025 7:40 PM, Lawrence D'Oliveiro wrote:
    Look at the current Top500 list of the world's fastest machines; what architectures do you see? IBM POWER offers a few contenders; also ARM,
    architectures do you see? IBM POWER offers a few contenders; also ARM,
    I think MIPS, and of course the most common is x86-64. At some point no
    doubt a RISC-V machine is likely to make an appearance.

    No IBM Z. Not before, not now, not ever.

    Not now.

    But once upon a time.

    IBM 3090 with integrated vector facility and the equivalent and
    compatible Amdahl vector.

    Was it ever competitive?

    No. That's why it was abandoned.

    It was produced and sold for a number of years. In competition
    with Cray, NEC, Fujitsu etc..

    Production and sale stopped when the entire class
    (single super computers with vector aggregate) went
    away (and was replaced by distributed super computers).

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Sun Sep 21 13:15:50 2025
    From Newsgroup: comp.os.vms

    On 9/21/2025 5:56 AM, John Dallman wrote:
    In article <68cf5518$0$718$14726298@news.sunsite.dk>, arne@vajhoej.dk
    (Arne Vajhøj) wrote:
    Correction:
    50 bytes long
    But it is possible to make a 100 byte instruction.

    If reusing jump destinations I guess it would be possible
    to create a 32 KB instruction.

    Ouch!

    Register masks are another thing that make fast implementation difficult.

    The memory access in calls/callg and ret was a problem.

    I once read somewhere that those accounted for 25% of time
    spent in some programs.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Sun Sep 21 15:20:55 2025
    From Newsgroup: comp.os.vms

    On 9/21/2025 1:08 PM, Arne Vajhøj wrote:
    On 9/20/2025 9:40 PM, Lawrence D'Oliveiro wrote:
    On Sat, 20 Sep 2025 20:09:53 -0400, Arne Vajhøj wrote:
    On 9/20/2025 7:40 PM, Lawrence D'Oliveiro wrote:
    Look at the current Top500 list of the world's fastest machines; what architectures do you see? IBM POWER offers a few contenders; also ARM, I think MIPS, and of course the most common is x86-64. At some point no doubt a RISC-V machine is likely to make an appearance.

    No IBM Z. Not before, not now, not ever.

    Not now.

    But once upon a time.

    IBM 3090 with integrated vector facility and the equivalent and
    compatible Amdahl vector.

    Was it ever competitive?

    No. That's why it was abandoned.

    It was produced and sold for a number of years. In competition
    with Cray, NEC, Fujitsu etc..

    Production and sale stopped when the entire class
    (single super computers with vector aggregate) went
    away (and was replaced by distributed super computers).

    I could of course also have included VAX 9000 with
    Vector Option.

    But I believe those were very rare.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Sun Sep 21 20:12:42 2025
    From Newsgroup: comp.os.vms

    On Sun, 21 Sep 2025 13:08:34 -0400, Arne Vajhøj wrote:

    On 9/20/2025 9:40 PM, Lawrence D'Oliveiro wrote:

    On Sat, 20 Sep 2025 20:09:53 -0400, Arne Vajhøj wrote:

    IBM 3090 with integrated vector facility and the equivalent and
    compatible Amdahl vector.

    Was it ever competitive?

    No. ThatrCOs why it was abandoned.

    It was produced and sold for a number of years.

    Only during the era when IBM's marketing machine had peak credibility. As that waned, and actual engineering became more important, so did IBM's fortunes.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Sun Sep 21 20:20:38 2025
    From Newsgroup: comp.os.vms

    On Sun, 21 Sep 2025 10:56 +0100 (BST), John Dallman wrote:

    In article <10ane0m$1dl6v$4@dont-email.me>, ldo@nz.invalid (Lawrence D_Oliveiro) wrote:

    Mainframes were never designed for high CPU performance.

    IBM certainly intended them to be, and the IBM 360 Model 91 was the
    first ever computer to use Tomasulo's algorithm, which is now ubiquitous
    in fast microprocessors.

    Or rather, IBM claimed they would offer high CPU performance. Remember,
    this machine was vapourware for the longest time; the 360/90 project was something IBM created, to begin with, just to try to dissuade potential customers from buying CDC's 6000-family machines. By the time it finally shipped as the 360/91, it fell far short of those built-up expectations.

    IBM overpromised and underdelivered, just as they did earlier with the
    7030.

    Modern IBM Z is not CPU-competitive with fast systems, but it is much
    faster than the originals ...

    I'm sure it is, but it still wouldn't be anywhere near the "supercomputer"
    class.

    Mainframes are all about high I/O throughput. They are not about high CPU performance, and they are not about low I/O latency (needed for
    interactive or real-time operation) either.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Sun Sep 21 20:31:13 2025
    From Newsgroup: comp.os.vms

    On Sun, 21 Sep 2025 10:56:33 +0100, David Wade wrote:

    On 21/09/2025 00:40, Lawrence D'Oliveiro wrote:

    No IBM Z [in supercomputer rankings]. Not before, not now, not ever.

    No, but these machines are all special purpose.

    My point exactly.

    The real advantage of the 360/370 etc. architecture was the way it did
    IO. The original channel with its own dedicated processor and 8-bit bus running at 1Mhz yielding 8 Mbits/sec was rapid for its era.

    Then the use of block mode terminals so the management of input fields
    was all done in the terminal controller. The Mainframe never saw an
    interrupt until a complete form was filled in.

    In other words, mainframes are, and were, all about high I/O throughput
    and efficient batch operation. Notice that they are *not* about low I/O latency, which is important for interactive and real-time work.

    Imagine trying to run a full-screen text editor on those block-mode
    terminals -- TECO, TPU/EVE, Emacs ... a few dozen users interrupting the
    CPU on every keystroke would probably bring a big, multi-million-dollar
    IBM system to its knees.

    I think DEC or was it HP forgot this with the Alpha.

    No they didn't. DEC machines were all about interactivity, right from the original PDP-1. That meant low latency, even at the expense of high throughput. That's why they were able to run circles around far more expensive (and complex) IBM hardware in the interactive timesharing
    market.

    Remember machines in the various PDP families were quite popular in lab/ factory situations, doing monitoring, data collection and process control
    in real time.

    I remember looking at Alpha for Microsoft Exchange on Windows/NT. It was really hard to justify using an Alpha because Exchange is very IO
    intensive. You couldn't get enough RAID to use the CPU.

    Or maybe Windows NT (and Exchange) were just too inefficient. Did you
    compare performance with DEC Unix on the same hardware? Linux was also starting to build a reputation for offering higher performance on the vendor's own hardware than the vendor-supplied OS.

    But we digress, I don't believe the techniques IBM use to perpetuate the
    use of Z would have worked with VMS.

    Correct. VMS, again, followed in that DEC tradition of being primarily an interactive, not a batch, OS.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Wade@g4ugm@dave.invalid to comp.os.vms on Sun Sep 21 23:35:43 2025
    From Newsgroup: comp.os.vms

    On 21/09/2025 21:31, Lawrence D'Oliveiro wrote:
    On Sun, 21 Sep 2025 10:56:33 +0100, David Wade wrote:

    On 21/09/2025 00:40, Lawrence D'Oliveiro wrote:

    No IBM Z [in supercomputer rankings]. Not before, not now, not ever.

    No, but these machines are all special purpose.

    My point exactly.

    The real advantage of the 360/370 etc. architecture was the way it did
    IO. The original channel with its own dedicated processor and 8-bit bus
    running at 1Mhz yielding 8 Mbits/sec was rapid for its era.

    Then the use of block mode terminals so the management of input fields
    was all done in the terminal controller. The Mainframe never saw an
    interrupt until a complete form was filled in.

    In other words, mainframes are, and were, all about high I/O throughput
    and efficient batch operation. Notice that they are *not* about low I/O latency, which is important for interactive and real-time work.

    Imagine trying to run a full-screen text editor on those block-mode
    terminals -- TECO, TPU/EVE, Emacs ... a few dozen users interrupting the
    CPU on every keystroke would probably bring a big, multi-million-dollar
    IBM system to its knees.

    You actually can't write an editor that works like that, and you don't
    need it. IBM's XEDIT is just as powerful as EMACS in its own way, with
    the whole screen being multiple, editable fields. You have to leverage
    what you have. I still prefer xedit to teco or emacs.


    I think DEC or was it HP forgot this with the Alpha.

    No they didn't. DEC machines were all about interactivity, right from the original PDP-1. That meant low latency, even at the expense of high throughput. That's why they were able to run circles around far more expensive (and complex) IBM hardware in the interactive timesharing
    market.

    Then why did they try and sell them as Database Servers or Exchange
    Servers? In fact the converse applies. I well remember sharing a drink
    with a friend who was rolling out office automation in a big bank.

    At the time the VAX servers he had for All-In-One would not scale to all
    the users he needed to deliver OA to. So senior managers and directors
    got All-In-One, but the plebs got IBM's Office Vision because the
    mainframe scaled better with large numbers of screens, with sub-second response.


    Remember machines in the various PDP families were quite popular in lab/ factory situations, doing monitoring, data collection and process control
    in real time.


    We must have had hundreds of PDP-11s running CAMAC crates, but there is
    usually no random database access on such systems. Bang the data to tape
    or floppy disk. Send to mainframe for analysis...


    I remember looking at Alpha for Microsoft Exchange on Windows/NT. It was
    really hard to justify using an Alpha because Exchange is very IO
    intensive. You couldn't get enough RAID to use the CPU.

    Or maybe Windows NT (and Exchange) were just too inefficient. Did you
    compare performance with DEC Unix on the same hardware? Linux was also starting to build a reputation for offering higher performance on the vendorrCOs own hardware than the vendor-supplied OS.

    That's crap. Exchange is very efficient in terms of CPU use. It just
    hammers the disks. So how could adding an Alpha CPU increase
    performance? The Alpha is simply overkill. You could get the same
    performance, running the same OS, on much cheaper, lower-performance (in
    CPU terms) boxes. You just need a mirror set for every 250 users...

    If you were a Microsoft shop at the time, you wanted Exchange, which only runs on Windows, so other OSes were not an option.


    But we digress, I don't believe the techniques IBM use to perpetuate the
    use of Z would have worked with VMS.

    Correct. VMS, again, followed in that DEC tradition of being primarily an interactive, not a batch, OS.

    Well yes, but it degrades terribly when you get short of RAM and hit the dreaded type-behind. I remember some of my users coming back from a VMS introduction and saying there was no way they were having a VAX; how
    could we get an IBM 4381? I told them, and they were very happy...

    Dave
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Sun Sep 21 19:20:20 2025
    From Newsgroup: comp.os.vms

    On 9/21/2025 6:35 PM, David Wade wrote:
    On 21/09/2025 21:31, Lawrence D'Oliveiro wrote:
    On Sun, 21 Sep 2025 10:56:33 +0100, David Wade wrote:
    I remember looking at Alpha for Microsoft Exchange on Windows/NT. It was really hard to justify using an Alpha because Exchange is very IO
    intensive. You couldn't get enough RAID to use the CPU.

    Or maybe Windows NT (and Exchange) were just too inefficient. Did you
    compare performance with DEC Unix on the same hardware? Linux was also
    starting to build a reputation for offering higher performance on the
    vendor's own hardware than the vendor-supplied OS.

    Thats crap. Exchange is very efficient in terms of CPU use. It just
    hammers the disks. So how could adding an Alpha CPU increase
    performance. The alpha that is simply overkill. You could get the same performance, running the same OS on much cheaper, lower performance, in
    CPU terms boxes. You just need a mirror set for every 250 users...

    If you were Microsoft at the time you wanted Exchange which only runs on Windows so other OSs not an option.

    For databases the argument was that 64 bit allowed for larger address
    space and more memory and more caching would increase performance.

    I don't know if that applies to Exchange as well.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Sun Sep 21 23:21:03 2025
    From Newsgroup: comp.os.vms

    On Sun, 21 Sep 2025 23:35:43 +0100, David Wade wrote:

    On 21/09/2025 21:31, Lawrence D'Oliveiro wrote:

    Imagine trying to run a full-screen text editor on those block-mode
    terminals -- TECO, TPU/EVE, Emacs ... a few dozen users interrupting
    the CPU on every keystroke would probably bring a big,
    multi-million-dollar IBM system to its knees.

    You actually can't write an editor that works like that, and you don't
    need it. IBMs XEDIT is just as powerful as EMACS in its own way ...

    I like that "in its own way" -- it's such a ... versatile ... phrase ...

    ... with the while screen being multiple, editable fields.

    You can do that in Emacs easily enough. The point being, you don't have to work that way if you're just doing basic file editing.

    But if you want to see the advanced stuff in action, just check out its
    help system.

    You have to leverage what you have. I still prefer xedit to teco or
    emacs.

    Funny how those two sentences are somewhat at odds with each other ...

    I think DEC or was it HP forgot this with the Alpha.

    No they didn't. DEC machines were all about interactivity, right from
    the original PDP-1. That meant low latency, even at the expense of high
    throughput. That's why they were able to run circles around far more
    expensive (and complex) IBM hardware in the interactive timesharing
    market.

    Then why did they try and sell them as Database Servers or Exchange
    Server.

    Remember that DEC was on its way down at this point, while Microsoft was
    still on the way up. Call it desperation: trying to find any kind of
    market at all, to try to maximize the sales of Alpha servers.

    This was the point where Jon "maddog" Hall was able to persuade his boss to send a brand spanking new Alpha to some young Comp Sci student in
    Finland, so that the latter could port this new OS kernel he was working
    on to something other than x86.

    Unlike Windows NT (or even OpenVMS), he even made it a full 64-bit port.

    In fact the converse applies. I well remember sharing a drink
    with a friend who was rolling out office automation in a big bank.

    At the time the VAX servers he had for All-In-One would not scale to all
    the users he needed to deliver OA too. So senior managers and directors
    got all-in-one, but the plebs got IBMs Office Vision because the
    mainframe scaled better with large numbers of screens, with sub-second response.

    Was this "Office Vision" thing based on fields on block-mode screens? If so, I rest my case.

    Remember machines in the various PDP families were quite popular in
    lab/ factory situations, doing monitoring, data collection and process
    control in real time.

    We must have had hundreds of 11-s running CAMAC crates, but there is
    usually no random database access on such systems. Bang the data to tape
    or floppy disk. Send to mainframe for analysis..

    That was because in those days, a "database" (in the sense of "something more complex than ISAM files, with its own query language") needed big
    iron to run. That restriction didn't really go away until the 1980s.

    As an example of the opposite extreme nowadays, look at the data
    collection for experiments on CERN's Large Hadron Collider: much of the sensor input is discarded as noise or otherwise unimportant at a low level close to its source, before passing the choicer parts on for higher-level analysis. That reduces petabytes of raw data to mere terabytes. ;)

    Exchange is very efficient in terms of CPU use. It just hammers the
    disks.

    I don't know why it would need to. It's only email, for goshsakes. It's not even the most data-intensive thing a typical company would need to
    deal with.

    The most resource-intensive thing an email server needs to deal with
    nowadays is scans for viruses and other malware. That needs fair amounts
    of RAM and CPU.

    (Speaking from experience.)

    If you were Microsoft at the time you wanted Exchange which only runs on Windows so other OSs not an option.

    So much for "open standards", eh? They were very much able to make themselves the "new IBM", at least for a while.

    Thankfully that "while" is over.

    Correct. VMS, again, followed in that DEC tradition of being primarily
    an interactive, not a batch, OS.

    well yes, but it degrades terribly when you get short of RAM and hit the dreaded type-behind. I remember some of my users coming back from a VMS introduction and saying there was no way they were having a VAX how
    could we get an IBM 4381. I told them and they were very happy...

    Did they not try a RISC Unix machine? Those were the ones that left the
    VAX in the dust.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Sun Sep 21 23:22:18 2025
    From Newsgroup: comp.os.vms

    On Sun, 21 Sep 2025 19:20:20 -0400, Arne Vajhøj wrote:

    For databases the argument was that 64 bit allowed for larger address
    space and more memory and more caching would increase performance.

    I don't know if that applies to Exchange as well.

    Even if it did, it would have been moot. Windows NT remained resolutely 32-bit, even on 64-bit machines like Alpha, right into the '00s.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Sun Sep 21 19:48:30 2025
    From Newsgroup: comp.os.vms

    On 9/21/2025 7:22 PM, Lawrence D'Oliveiro wrote:
    On Sun, 21 Sep 2025 19:20:20 -0400, Arne Vajhøj wrote:
    For databases the argument was that 64 bit allowed for larger address
    space and more memory and more caching would increase performance.

    I don't know if that applies to Exchange as well.

    Even if it did, it would have been moot. Windows NT remained resolutely 32-bit, even on 64-bit machines like Alpha, right into the '00s.

    Relevant point.

    But a 64 bit version was supposed to happen. People were
    expecting it. MS dragged their feet and eventually
    pulled the plug on Alpha.

    And then HP did the same and we got Itanium. And MS added
    Windows support for that (64 bit that is).

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Mon Sep 22 01:17:15 2025
    From Newsgroup: comp.os.vms

    On Sun, 21 Sep 2025 19:48:30 -0400, Arne Vajhøj wrote:

    But a 64 bit version was supposed to happen. People were expecting it.
    MS dragged their feet and eventually pulled the plug on Alpha.

    Obviously it was just too hard for Windows NT to support a mix of 32-bit
    and 64-bit architectures. So much for portability ...

    And then HP did the same and we got Itanium. And MS added Windows
    support for that (64 bit that is).

    Itanium was a very high-profile, big-budget project. I suppose it's
    possible that HP and Intel contributed some of the costs for Microsoft to create 64-bit NT for that.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Goodwin@david+usenet@zx.net.nz to comp.os.vms on Mon Sep 22 19:57:42 2025
    From Newsgroup: comp.os.vms

    In article <10aq2se$21gcf$2@dont-email.me>, arne@vajhoej.dk says...

    On 9/21/2025 7:22 PM, Lawrence D'Oliveiro wrote:
    On Sun, 21 Sep 2025 19:20:20 -0400, Arne Vajhoj wrote:
    For databases the argument was that 64 bit allowed for larger address
    space and more memory and more caching would increase performance.

    I don't know if that applies to Exchange as well.

    Even if it did, it would have been moot. Windows NT remained resolutely 32-bit, even on 64-bit machines like Alpha, right into the '00s.

    Relevant point.

    Windows 2000 was to introduce new VLM APIs that allow 32bit applications
    on Alpha to access very large amounts of memory.

    But a 64 bit version was supposed to happen. People were
    expecting it. MS dragged their feet and eventually
    pulled the plug on Alpha.

    Nah, that was 100% Compaq's doing.

    Windows 2000 RC2 came out on Alpha, and the 64bit port was well underway
    when Compaq announced in mid-1999 that they were not going to support
    Windows on Alpha anymore and that they were going to lay off all of the
    people working on Alpha platform support with Microsoft once NT 4.0 SP6
    was out the door.

    As future Alphas weren't going to support Windows, there was little
    point in Microsoft continuing to release new versions of Windows for old
    and increasingly obsolete models by themselves so Win2k RC2 was the
    final release for Alpha.

    Microsoft was still committed to doing 64bit Windows for Itanium though,
    and Itanium hardware wasn't ready yet. As they still had plenty of
    Alphas lying around, they continued working on the 64bit Alpha port
    internally until Itanium hardware was ready in sufficient quantities.

    And the 64bit port for Alpha did get a reasonable way through. It boots,
    it runs, has networking and internet explorer. The build I have is
    certainly not ready for release, but it feels like they could have had
    it out the door in 2001 if it wasn't for Compaq. I was able to do a
    64bit build of Kermit 95 for Alpha and take this screenshot on my trusty AlphaServer 800:
    https://davidrg.github.io/ckwin/images/win2k64-alpha.png
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Goodwin@david+usenet@zx.net.nz to comp.os.vms on Mon Sep 22 20:13:12 2025
    From Newsgroup: comp.os.vms

    In article <10aq82r$22nft$3@dont-email.me>, ldo@nz.invalid says...

    On Sun, 21 Sep 2025 19:48:30 -0400, Arne Vajhoj wrote:

    But a 64 bit version was supposed to happen. People were expecting it.
    MS dragged their feet and eventually pulled the plug on Alpha.

    Obviously it was just too hard for Windows NT to support a mix of 32-bit
    and 64-bit architectures. So much for portability ...

    Windows NT was ported to:
    * i860 (never released as it turned out to not be very powerful)
    * MIPS R3000 (never released as it became obsolete, but we have a video
    from DEC of it running on a DECstation)
    * MIPS R4000
    * Clipper (publicly demonstrated but never released as Intergraph gave
    up on the architecture)
    * x86
    * Alpha
    * PowerPC
    * Alpha, 64bit (never released as Compaq gave up on the architecture,
    but I have it running on a machine)
    * Itanium
    * AMD64
    * 32bit ARM
    * 64bit ARM

    I've heard HP privately demonstrated Windows NT running on PA-RISC at
    one point in the mid 90s too, though if it ever happened it would
    have needed special little or bi-endian hardware and it was certainly
    never released.

    In more recent history people have got the PowerPC version of Windows NT running unmodified on Power Macintoshes, Nintendo GameCubes and Nintendo
    Wiis by supplying a custom ARC bootloader, HAL and drivers.

    So I think it's a bit disingenuous to claim Windows NT wasn't portable.

    And then HP did the same and we got Itanium. And MS added Windows
    support for that (64 bit that is).

    Itanium was a very high-profile, big-budget project. I suppose it's
    possible that HP and Intel contributed some of the costs for Microsoft to create 64-bit NT for that.

    It's certainly possible, though I wouldn't be surprised if Microsoft was
    doing it of their own accord too. The whole industry thought Itanium was
    the future, and it was to be Intel's 64bit platform going forward. With
    Alpha out of the picture, if Microsoft wanted 64bit Windows then it was
    going to be Itanium. Even IBM ported AIX to Itanium.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Gary R. Schmidt@grschmidt@acm.org to comp.os.vms on Mon Sep 22 19:05:06 2025
    From Newsgroup: comp.os.vms

    On 22/9/25 09:20, Arne Vajhøj wrote:
    On 9/21/2025 6:35 PM, David Wade wrote:
    On 21/09/2025 21:31, Lawrence D'Oliveiro wrote:
    On Sun, 21 Sep 2025 10:56:33 +0100, David Wade wrote:
    I remember looking at Alpha for Microsoft Exchange on Windows/NT. It
    was
    really hard to justify using an Alpha because Exchange is very IO
    intensive. You couldn't get enough RAID to use the CPU.

    Or maybe Windows NT (and Exchange) were just too inefficient. Did you
    compare performance with DEC Unix on the same hardware? Linux was also
    starting to build a reputation for offering higher performance on the
    vendor's own hardware than the vendor-supplied OS.

    Thats crap. Exchange is very efficient in terms of CPU use. It just
    hammers the disks. So how could adding an Alpha CPU increase
    performance. The alpha that is simply overkill. You could get the same
    performance, running the same OS on much cheaper, lower performance,
    in CPU terms boxes. You just need a mirror set for every 250 users...

    If you were Microsoft at the time you wanted Exchange which only runs
    on Windows so other OSs not an option.

    For databases the argument was that 64 bit allowed for larger address
    space and more memory and more caching would increase performance.

    I don't know if that applies to Exchange as well.

    Arne

    It doesn't matter - Windows on the Alpha was 32-bit, as was Exchange.

    Cheers,
    Gary B-)
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Mon Sep 22 20:39:06 2025
    From Newsgroup: comp.os.vms

    In article <memo.20250921105625.10624S@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    [snip]
    Modern IBM Z is not CPU-competitive with fast systems,

    I'm not sure I agree with this. Modern Z is extremely
    impressive.

    Telum II has 8 cores per chip with 36 MiB of L2 cache per core,
    clock rates max out at 5.5 GHz, it has an on-chip DPU and
    32-core AI accelerator, and is implemented on Samsung's 5nm
    process.

    OTOH, x86 and ARM core counts per socket appear to be higher.
    But I bet IBM gives them a run for their money with this thing.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.os.vms on Mon Sep 22 21:54:40 2025
    From Newsgroup: comp.os.vms

    In article <MPG.433bc89eda9fea6d98975d@news.zx.net.nz>,
    david+usenet@zx.net.nz (David Goodwin) wrote:

    Microsoft was still committed to doing 64bit Windows for Itanium
    though, and Itanium hardware wasn't ready yet. As they still had
    plenty of Alphas lying around, they continued working on the 64bit
    Alpha port internally until Itanium hardware was ready in
    sufficient quantities.

    I used the Itanium simulator that ran on x86-32, and it was /extremely/
    slow, because of the painful nature of managing a simulated 64-bit
    address space on a 32-bit machine. I suggested to my Intel FAE ("Field Application Engineer" not "Fuel-Air Explosive"), who helped ISVs with
    porting and was ex-DEC, that running the simulator on Alpha would be more satisfactory. "Yes. But we wouldn't do that, would we?"

    John
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Mon Sep 22 23:00:29 2025
    From Newsgroup: comp.os.vms

    On Mon, 22 Sep 2025 20:13:12 +1200, David Goodwin wrote:

    So I think its a bit disingenuous to claim Windows NT wasn't portable.

    The fact that many of the ports you mention never made it to production release, and even the ones (other than x86) that did are now defunct, I
    think reinforces my point. The ports were difficult and expensive to
    create, and difficult and expensive to maintain. In the end they were all
    just abandoned.

    Even the concept of a portable OS seems to have gone from Windows
    nowadays. It has taken Microsoft a lot of trouble to come up with the ARM port, for example, and I donrCOt think the compatibility issues have
    entirely been worked out, even after all these years.

    A RISC-V Windows port will likely never happen.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Mon Sep 22 23:03:33 2025
    From Newsgroup: comp.os.vms

    On Mon, 22 Sep 2025 19:57:42 +1200, David Goodwin wrote:

    Windows 2000 was to introduce new VLM APIs that allow 32bit applications
    on Alpha to access very large amounts of memory.

    There's a reason the API is still called "Win32", not "Win64". Instead of
    using POSIX-style symbolic type names like size_t, time_t and off_t, they explicitly use 32-bit types.

    This leads to craziness like, when getting the size of a file, it returns
    the high half and low half in separate 32-bit quantities, even on a 64-bit system, with native 64-bit integer support!
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Mon Sep 22 20:46:03 2025
    From Newsgroup: comp.os.vms

    On 9/22/2025 7:03 PM, Lawrence D'Oliveiro wrote:
    On Mon, 22 Sep 2025 19:57:42 +1200, David Goodwin wrote:
    Windows 2000 was to introduce new VLM APIs that allow 32bit applications
    on Alpha to access very large amounts of memory.

    There's a reason the API is still called "Win32", not "Win64". Instead of
    using POSIX-style symbolic type names like size_t, time_t and off_t, they explicitly use 32-bit types.

    This leads to craziness like, when getting the size of a file, it returns
    the high half and low half in separate 32-bit quantities, even on a 64-bit system, with native 64-bit integer support!

    There are two aspects here.

    1) types that have different sizes on different
    platforms/compilers/configs vs types that have
    same sizes on all platforms/compilers/configs

    Experience shows that the latter is better than the
    former, because it makes it easier to write portable
    code with well defined behavior.

    off_t is a signed integer of unknown size.

    DWORD is always 32 bit.

    2) use of two 32 bit integers vs one 64 bit integer
    on a platform/compiler that supports 64 bit integers

    Obviously it is nicer to have one 64 bit integer.

    Win32 API GetFileSizeEx return one 64 bit integer (and
    two 32 bit integers via a union), but GetFileAttributesEx
    return two 32 bit integers.

    The last one may have been tricky to fix because they got
    the two 32 bit integers in the wrong order: high before low.
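
    For example, something like this shows the two shapes (the path is just a placeholder):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Two DWORDs, high half before low half, from GetFileAttributesEx. */
        WIN32_FILE_ATTRIBUTE_DATA fad;
        if (GetFileAttributesExA("C:\\Windows\\notepad.exe",
                                 GetFileExInfoStandard, &fad)) {
            unsigned long long size =
                ((unsigned long long)fad.nFileSizeHigh << 32) | fad.nFileSizeLow;
            printf("GetFileAttributesEx: %llu bytes\n", size);
        }

        /* One 64-bit value (a LARGE_INTEGER union) from GetFileSizeEx. */
        HANDLE h = CreateFileA("C:\\Windows\\notepad.exe", GENERIC_READ,
                               FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
        if (h != INVALID_HANDLE_VALUE) {
            LARGE_INTEGER li;
            if (GetFileSizeEx(h, &li))
                printf("GetFileSizeEx:       %lld bytes\n", (long long)li.QuadPart);
            CloseHandle(h);
        }
        return 0;
    }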

    Arne



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Mon Sep 22 20:50:58 2025
    From Newsgroup: comp.os.vms

    On 9/22/2025 8:46 PM, Arne Vajhøj wrote:
    On 9/22/2025 7:03 PM, Lawrence D'Oliveiro wrote:
    On Mon, 22 Sep 2025 19:57:42 +1200, David Goodwin wrote:
    Windows 2000 was to introduce new VLM APIs that allow 32bit applications on Alpha to access very large amounts of memory.

    There's a reason the API is still called "Win32", not "Win64". Instead of
    using POSIX-style symbolic type names like size_t, time_t and off_t, they
    explicitly use 32-bit types.

    This leads to craziness like, when getting the size of a file, it returns
    the high half and low half in separate 32-bit quantities, even on a
    64-bit
    system, with native 64-bit integer support!

    There are two aspects here.

    1) types that have different sizes on different
       platforms/compilers/configs vs types that have
       same sizes on all platforms/compilers/configs

    Experience shows that the latter is better than the
    former, because it makes it easier to write portable
    code with well defined behavior.

    off_t is a signed integer of unknown size.

    On VMS it is 32 or 64 bit depending on whether
    _LARGEFILE is defined.
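
    Something like this, built once with /DEFINE=_LARGEFILE and once without, shows the difference:

    #include <sys/types.h>
    #include <stdio.h>

    int main(void)
    {
        /* off_t is 32 or 64 bit depending on the _LARGEFILE setting. */
        printf("sizeof(off_t) = %u bytes\n", (unsigned)sizeof(off_t));
        return 0;
    }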

    xab.XAB$L_EBK and xab.XAB$W_FFB may not be a pretty
    interface, but we know we got 32 bit and 16 bit.

    Arne


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Johnny Billquist@bqt@softjar.se to comp.os.vms on Tue Sep 23 15:20:25 2025
    From Newsgroup: comp.os.vms

    On 2025-09-21 02:51, Arne Vajhøj wrote:
    On 9/20/2025 4:13 PM, John Dallman wrote:
    The VAX instruction set is quite nice in some ways and quite horrible in
    others. Some of those made it hard to make run very fast.

    The extremely variable-length instructions are a prime example.

    CASEx is probably the worst.

    Example of >100 bytes long:

    I think that's a bad example. The size of case is basically just a displacement table, which does not need to be read in at all for the
    execution of the instruction. It's just that at some point, *one* of the elements of that displacement table needs to be read in in order to
    adjust the PC. The rest you can ignore, and from a pipelining point of
    view, it's similar to any kind of conditional branching.

    Potentially performance killing is that it would be very hard to do speculative fetching or execution on a case instruction, so you'd
    probably stall on a memory fetch. But conditional branches always are a
    bit of a hiccup for performance, case might be a bit worse, but nowhere
    near a real killer.

    So no, I don't think case is a good example of something making VAX hard
    to get fast. What is much more painful is just the general arguments processing for any instruction, which is all rather variable in size,
    and which cannot be predicted, and which *have* to be read in before you
    can make much progress on any instruction. Displacement with indexing
    for maybe 5 arguments. *That's* a headache.
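
    Roughly, per operand specifier, the decoder has to do something like the following before it can even find the next specifier; note that immediate mode needs the operand's data type, and indexed mode is a prefix byte followed by a whole second specifier:

    #include <stdio.h>

    /* Approximate length in bytes of one VAX operand specifier, given its
       first byte and the operand's data length (needed for immediate mode). */
    static int specifier_bytes(unsigned char spec, int datalen)
    {
        unsigned mode = spec >> 4, reg = spec & 0x0f;
        switch (mode) {
        case 0: case 1: case 2: case 3:      /* short literal                    */
        case 5: case 6: case 7:              /* Rn, (Rn), -(Rn)                  */
            return 1;
        case 4:                              /* index prefix: 1 byte, then a     */
            return 1;                        /* complete base specifier follows  */
        case 8:                              /* (Rn)+; with PC it is immediate   */
            return reg == 0x0f ? 1 + datalen : 1;
        case 9:                              /* @(Rn)+; with PC it is absolute   */
            return reg == 0x0f ? 1 + 4 : 1;
        case 0xa: case 0xb: return 1 + 1;    /* byte displacement [deferred]     */
        case 0xc: case 0xd: return 1 + 2;    /* word displacement [deferred]     */
        default:            return 1 + 4;    /* long displacement [deferred]     */
        }
    }

    int main(void)
    {
        printf("@4(ap) specifier:               %d bytes\n", specifier_bytes(0xBC, 4));
        printf("#1 short literal:               %d bytes\n", specifier_bytes(0x01, 4));
        printf("immediate longword ((PC)+ 0x8F): %d bytes\n", specifier_bytes(0x8F, 4));
        return 0;
    }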

    Johnny

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.os.vms on Tue Sep 23 20:50:40 2025
    From Newsgroup: comp.os.vms

    In article <10asked$2lq0s$3@dont-email.me>, ldo@nz.invalid (Lawrence D_Oliveiro) wrote:

    Lawrence D_Oliveiro <ldo@nz.invalid> wrote:
    The fact that many of the ports you mention never made it to
    production release, and even the ones (other than x86) that did are
    now defunct, I think reinforces my point. The ports were difficult
    and expensive to create, and difficult and expensive to maintain.
    In the end they were all just abandoned.

    Microsoft is a commercial organisation, and has to pay staff for all the
    work done on Windows. This increases costs compared to open-source work
    that doesn't show up in the costs for Linux, or the BSDs. I've worked on thoroughly portable application software for Windows NT (and Unixes)
    since 1995. My employers have at least considered porting to every
    Windows NT platform available. I've been involved with those decisions
    and done the more recent ports.

    i860 never appeared in machines people could buy.

    In the mid-1990s, MIPS R3000 and R4000 were only available in expensive workstations from MIPS, DEC and SGI. SGI had an ongoing internal
    disagreement over embracing Windows NT or sticking with Irix. The only NT machines they ever sold were Intel-based.

    There was a company - NetPower - that planned to sell R4000-based
    machines in the high-end PC market, and we had one of their prototypes
    for porting. They had not launched the machines when the Pentium Pro
    completely destroyed MIPS' performance advantage over x86. NetPower
    switched to x86.

    Intergraph Clipper was abandoned by the manufacturer before any machines
    were sold for Windows NT.

    x86 was the usual platform for Windows NT. The saying my team coined was
    "If you don't know about processor architectures, you want Intel. If you
    want the fastest CPU and can cope with a lot of software not being
    available, you want Alpha. If you really, really believe in IBM's
    strategy and are prepared to pay at least three times as much to stick
    with it, you want PowerPC. There isn't a reason that good to want MIPS."

    Alpha was killed by Compaq. This was announced about a week after a
    prototype Merced had first booted Windows. Compaq didn't see the point in continuing to fund Alpha development when Itanium was going to be great.
    Or so they thought.

    Alpha 64-bit was continued by Microsoft for a while when Itanium hardware wasn't readily available, as already discussed.

    PowerPC was abandoned by Microsoft. We considered supporting it, but the
    IBM RS/6000 hardware it needed was very expensive, and we weren't getting
    any requests to support it. If the PowerPC Common Hardware Reference
    Platform project had been adopted it might have had a future, but IBM and
    Apple both had motive to prevent that happening.

    Itanium was an expensive fiasco in the general computing market. Its sole benefit to Windows was that it taught Microsoft a lot about doing 64-bit.
    They made a good decision in dropping it early on.

    AMD64 is the main Windows platform now. My experience of porting to it
    was that it took less than 5% of the work required for Itanium. According
    to AMD, Microsoft were responsible for Intel building AMD-compatible
    x86-64. Their original plan was to use a different instruction encoding,
    to force software vendors to do separate builds for AMD and Intel. They
    hoped that many would not bother with AMD, and thus drive them out of the market. Microsoft said if Intel did that, they wouldn't have Windows for
    it, and Intel had to back down.

    32-bit ARM was part of one of Microsoft's less good ideas. There appears
    to be a widespread opinion within the company that the Windows GUI is intrinsically and obviously superior to any other. There is no single
    best GUI, IMHO. In any case, Microsoft's reaction to the iPad was to
    create several generations of "iPad killer" tablets, none of which got anywhere.

    They had obnoxiously cut-down versions of Windows which made it very hard
    to test software unless you worked in the exact way that Microsoft had
    prepared for. I severed relations with the Microsoft person who was
    trying to get me to support one of these after he'd told us an important
    wrong fact - people make mistakes - but not told us when he learned it
    was wrong, and refused to apologise for the omission. I'd done several
    weeks of work on the basis of that claim, proved it false and asked him
    "What the hell?"

    32-bit ARM is dying anyway, because the 32- and 64-bit ISAs are very
    different, far more so than for any other architecture I know, and modern
    core designs are 64-bit-only. Leaving out 32-bit execution makes the
    cores smaller and cheaper, so it is disappearing.

    64-bit ARM is where full-strength Windows with all the tools appeared on
    ARM. Its final success is not yet decided, but it's already much better
    than any other non-x86 Windows.

    Even the concept of a portable OS seems to have gone from Windows
    nowadays. It has taken Microsoft a lot of trouble to come up with
    the ARM port, for example, and I don't think the compatibility
    issues have entirely been worked out, even after all these years.

    From interacting with them on this quite a bit, the trouble seems to have
    been accepting that they needed to do the job thoroughly, and provide development tools. I contributed to this in a small way, by running the
    Visual Studio command-line tools that targeted ARM64 under the x86
    emulator. This allowed me to use our custom build environment, making the porting job far simpler. The IDE would not run that way, which affects me
    about as much as the rainfall in the Gobi Desert. After a while,
    Microsoft started producing native ARM64 tools.

    A RISC-V Windows port will likely never happen.

    Quite likely not, because RISC-V is suffering from an ongoing failure to produce cores fast enough for desktops, or even mobile devices. This has
    lasted long enough that I'm becoming doubtful it will ever happen.

    John
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Tue Sep 23 22:14:27 2025
    From Newsgroup: comp.os.vms

    On Tue, 23 Sep 2025 20:49 +0100 (BST), John Dallman wrote:

    Microsoft is a commercial organisation, and has to pay staff for all
    the work done on Windows. This increases costs compared to
    open-source work that doesn't show up in the costs for Linux, or the
    BSDs.

    There are lots of commercial organizations making money off the Linux ecosystem, and even the BSD ones as well.

    Microsoft's costs are higher, simply because it is a proprietary
    platform, without the economies of scale that benefit the Open Source
    world.

    I've worked on thoroughly portable application software for Windows
    NT (and Unixes) since 1995.

    "Unix" (as in the trademark licensees) offered some portability, but
    it was always limited by the desire of the Unix companies to stick in
    a competitive edge somewhere. That led to fragmentation, which
    Microsoft exploited to barge its way right in and take over.

    Today, the BSDs still suffer from that fragmentation, too. But Linux,
    for the most part, does not. Even though Linux distros outnumber
    BSD variants by something on the order of 50:1, it
    is much easier to move among that vast array of Linux distros (and
    take your work with you) than among that smaller number of BSD
    variants. "Distro-hopping" is a popular phenomenon among Linux users,
    one that has never happened with any other kind of *nix system before.

    In the mid-1990s, MIPS R3000 and R4000 were only available in
    expensive workstations from MIPS, DEC and SGI. SGI had an ongoing
    internal disagreement over embracing Windows NT or sticking with
    Irix. The only NT machines they ever sold were Intel-based.

    And about 3× the price of comparable Windows NT hardware from other
    vendors, as I recall from the ads of the time. A last-ditch attempt to
    broaden their market beyond Unix/Irix, which failed.

    There was a company - NetPower - that planned to sell R4000-based
    machines in the high-end PC market, and we had one of their
    prototypes for porting. They had not launched the machines when the
    Pentium Pro completely destroyed MIPS' performance advantage over
    x86. NetPower switched to x86.

    Nevertheless, MIPS chips are still available, and outship x86 by
    something like 3:1. And Linux still supports them.

    x86 was the usual platform for Windows NT.

    I recall some boast that the original development platform for Windows
    NT was MIPS. This point was made to big-up its portability cred or
    something. I guess that didn't last long ...

    The saying my team coined was "If you don't know about processor architectures, you want Intel. If you want the fastest CPU and can
    cope with a lot of software not being available, you want Alpha. If
    you really, really believe in IBM's strategy and are prepared to pay
    at least three times as much to stick with it, you want PowerPC ..."

    PowerPC/POWER, too, is still around and making money for IBM. You'll
    find a few POWER-based machines lurking around the upper parts of the
    Top500 supercomputer list, so obviously their performance is not too
    shabby.

    Alpha was killed by Compaq.

    Even though no one makes Alpha chips any more, the Linux kernel still
    supports it.

    So think about it: support for Alpha has lasted longer than support
    for Itanium.

    PowerPC was abandoned by Microsoft.

    Yet another reason, I guess why the entirety of the Top500 list runs
    Linux, and nothing else. What happened to Windows Server HPC?
    Disappeared without a trace.

    I suspect Microsoft had to pay users to run it, anyway.

    Itanium was an expensive fiasco in the general computing market. Its
    sole benefit to Windows was that it taught Microsoft a lot about
    doing 64-bit.

    If that was their main exposure to 64-bit architectures, no wonder
    they've been having trouble ...

    I remember some executive -- might have been at Intel -- saying that
    the first OS they got booting on Itanium was Linux.

    And it has been I think the last OS to drop support for that
    architecture, long after Microsoft had given up.

    32-bit ARM was part of one of Microsoft's less good ideas. There
    appears to be a widespread opinion within the company that the
    Windows GUI is intrinsically and obviously superior to any other.
    There is no single best GUI, IMHO.

    I would agree. But both Microsoft's and Apple's platforms were born
    out of the assumption, popular in the 1990s, that the GUI had to be
    tied inextricably into the OS kernel.

    This did offer a performance advantage on the hardware of the time,
    compared to the separate X11-based GUI layer on Unix machines (and
    carried over to Linux and the BSDs).

    But that performance advantage has long gone. Nowadays, we have the
    greater flexibility of a GUI layer which is modular and endlessly
    configurable and replaceable. Or you can run a Linux or BSD system
    with no GUI at all, if you wish.

    [Microsoft] had obnoxiously cut-down versions of Windows which made
    it very hard to test software unless you worked in the exact way
    that Microsoft had prepared for.

    They still work that way. Look at their "Windows IoT Edition" offering
    for the Raspberry Pi, for example, which is hopelessly crippled
    compared to the full-function Linux offering. Windows development
    requires a separate full-cost Windows PC, while Linux allows the Pi to self-host its entire development and deployment stack.

    They can't escape the mindset of their revenue model: each version of
    Windows must be carefully targeted at a particular market segment,
    which means it must be functionally crippled to minimize the risk of
    it cannibalizing sales from a version intended for some other market
    segment. Every potential customer must fit into some pigeonhole. And
    so they miss the gaps between the pigeonholes, and something like
    Linux can swoop in and take over a new market segment.

    A RISC-V Windows port will likely never happen.

    Quite likely not, because RISC-V is suffering from an ongoing
    failure to produce cores fast enough for desktops, or even mobile
    devices. This has lasted long enough that I'm becoming doubtful it
    will ever happen.

    I'm not so sure about that. Android will already run on RISC-V.
    They're targeting a similar market to ARM, and there have been ARM
    chips powerful enough to take up respectable positions on the Top500
    list. That's why I think it's only a matter of time before we see a
    RISC-V machine in there as well.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Goodwin@david+usenet@zx.net.nz to comp.os.vms on Fri Sep 26 12:58:51 2025
    From Newsgroup: comp.os.vms

    In article <10asked$2lq0s$3@dont-email.me>, ldo@nz.invalid says...

    On Mon, 22 Sep 2025 20:13:12 +1200, David Goodwin wrote:

    So I think it's a bit disingenuous to claim Windows NT wasn't portable.

    The fact that many of the ports you mention never made it to production release, and even the ones (other than x86) that did are now defunct, I think reinforces my point. The ports were difficult and expensive to
    create, and difficult and expensive to maintain. In the end they were all just abandoned.

    What makes you think they were difficult or expensive? There are plenty
    of other reasons why Microsoft, a for-profit company, might choose to discontinue them.

    Even if development costs were zero, maintaining the port would still
    come with *some* cost. If sales aren't enough to cover costs, however
    small they may be, Microsoft would be losing money. As a for-profit
    company, Microsoft is likely to stop doing things that cost money rather
    than generate profit.

    And as long as costs are non-zero, there is still opportunity cost to
    contend with. Even if maintaining a particular port is profitable, that doesn't mean there isn't something else *more* profitable Microsoft
    could dedicate those resources towards.

    Linux is not immune to this either. Even when profit is not a concern,
    every feature and every port still comes with a maintenance cost that
    must be justified in some way. Linux no longer supports Itanium for the
    same reason Windows no longer supports Itanium: the costs started to outweigh the benefits.

    Even the concept of a portable OS seems to have gone from Windows
    nowadays. It has taken Microsoft a lot of trouble to come up with the ARM port, for example, and I don't think the compatibility issues have
    entirely been worked out, even after all these years.

    A lot of trouble? They made some (obviously) bad decisions with Windows
    RT, but that doesn't imply the port was especially difficult. At the
    time it came out, Windows was still running on x86, x86-64 and Itanium
    so I doubt adding a fourth architecture was unusually difficult.

    Arm64 windows seems to work just fine currently - most users probably
    wouldn't notice any difference. Porting my app was pretty trivial. I'm
    not sure what compatibility issues you might be talking about?

    A RISC-V Windows port will likely never happen.

    That of course depends on if it will ever look like a *profitable*
    platform to sell Windows on. Right now the demand for RISC-V powered
    Windows devices is probably very low which would suggest that there are
    other more profitable things Microsoft could be spending their developer resources on.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Goodwin@david+usenet@zx.net.nz to comp.os.vms on Fri Sep 26 12:58:51 2025
    From Newsgroup: comp.os.vms

    In article <10askk5$2lq0s$4@dont-email.me>, ldo@nz.invalid says...

    On Mon, 22 Sep 2025 19:57:42 +1200, David Goodwin wrote:

    Windows 2000 was to introduce new VLM APIs that allow 32bit applications
    on Alpha to access very large amounts of memory.

    There's a reason the API is still called "Win32", not "Win64". Instead of using POSIX-style symbolic type names like size_t, time_t and off_t, they explicitly use 32-bit types.

    On 64bit Windows, pointers that Win32 APIs consume are all 64bit. The
    API wasn't renamed because it wasn't really a useful thing to do - while
    the underlying types may have changed, the API was still largely the
    same. Building the same code for both 32bit and 64bit Windows is easy
    enough.

    This leads to craziness like, when getting the size of a file, it returns the high half and low half in separate 32-bit quantities, even on a 64-bit system, with native 64-bit integer support!

    There is a good reason for this: Windows NT gained support for large
    files before Unix did. The very first release of Windows NT in 1993
    supported files larger than 4GB, but Microsoft's compiler at the time
    didn't support 64bit integers, so a different solution was required.
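
    As a rough sketch of the difference (the file name below is just an
    example, not anything from the thread), the original GetFileSize call
    hands the 64-bit size back as two 32-bit halves, while the later
    GetFileSizeEx returns it as a single LARGE_INTEGER:

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            /* Hypothetical file, purely for illustration. */
            HANDLE h = CreateFileA("example.dat", GENERIC_READ, FILE_SHARE_READ,
                                   NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
            if (h == INVALID_HANDLE_VALUE)
                return 1;

            /* NT-era interface: the 64-bit size comes back in two 32-bit
               halves, because the compilers of 1993 had no 64-bit type. */
            DWORD hi = 0;
            DWORD lo = GetFileSize(h, &hi);

            /* Later interface: one call, one 64-bit LARGE_INTEGER. */
            LARGE_INTEGER size;
            GetFileSizeEx(h, &size);

            printf("high/low: %lu/%lu  combined: %lld\n",
                   (unsigned long)hi, (unsigned long)lo, (long long)size.QuadPart);
            CloseHandle(h);
            return 0;
        }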

    Other oddities in the Win32 API are usually explained by a desire to
    make porting applications from Win16 as easy as possible.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Fri Sep 26 22:51:50 2025
    From Newsgroup: comp.os.vms

    On Fri, 26 Sep 2025 12:58:51 +1200, David Goodwin wrote:

    In article <10asked$2lq0s$3@dont-email.me>, ldo@nz.invalid says...

    On Mon, 22 Sep 2025 20:13:12 +1200, David Goodwin wrote:

    So I think it's a bit disingenuous to claim Windows NT wasn't
    portable.

    The fact that many of the ports you mention never made it to
    production release, and even the ones (other than x86) that did are
    now defunct, I think reinforces my point. The ports were difficult
    and expensive to create, and difficult and expensive to maintain.
    In the end they were all just abandoned.

    What makes you think they were difficult or expensive?

    The fact that you admit as much in your very next sentence:

    There are plenty of other reasons why Microsoft, a for-profit
    company, might choose to discontinue them.

    Only one that matters: profit.

    [lots of other discussion of exactly how difficult and expensive it
    is to maintain a cross-platform proprietary OS omitted]

    Again, just reinforcing my point.

    Linux is not immune to this either.

    It does seem to manage portability much more easily. It seems like,
    every time somebody creates a new processor nowadays, the first thing
    they get running on it is Linux.

    Linux no longer supports Itanium for the same reason Windows no
    longer supports Itanium: the costs started to outweigh the
    benefits.

    The point being, Linux was able to continue supporting Itanium long
    after Microsoft started winding down the addition of new features
    to its Itanium port.

    Even now, as I mentioned before, Linux continues to support Alpha,
    decades after Microsoft completely gave up on it. That was an
    architecture that died before Itanium!

    The fact remains, the cost of porting Linux to every architecture
    under the sun is a lot lower than for Windows.

    Even the concept of a portable OS seems to have gone from Windows
    nowadays. It has taken Microsoft a lot of trouble to come up with
    the ARM port, for example, and I don't think the compatibility
    issues have entirely been worked out, even after all these years.

    A lot of trouble? They made some (obviously) bad decisions with
    Windows RT, but that doesn't imply the port was especially
    difficult.

    The fact that they have needed so many tries to actually create a
    semi-usable Windows-on-ARM port indicates otherwise.

    A RISC-V Windows port will likely never happen.

    That of course depends on if it will ever look like a *profitable*
    platform to sell Windows on.

    It's already plenty profitable for lots of companies selling
    RISC-V-based products, as witness the growth in same. RISC-V CPUs are
    already shipping in the billions of units per year, as compared to
    x86, which at its peak was only about a third of a billion, and has
    since fallen back from that.

    Think of it this way: profits from x86-based products are declining.
    This includes Windows. Future growth requires Microsoft to look to
    other hardware platforms. But Windows is just too difficult and
    expensive to port to non-x86 platforms -- it has just about managed
    ARM, after a great deal of trouble; to have to spend a great deal to
    move, yet again, to something else, while profits continue to decline,
    is just out of the question.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Andreas Eder@a_eder_muc@web.de to comp.os.vms on Sun Sep 28 22:21:58 2025
    From Newsgroup: comp.os.vms

    On Di 23 Sep 2025 at 20:49, jgd@cix.co.uk (John Dallman) wrote:

    In article <10asked$2lq0s$3@dont-email.me>, ldo@nz.invalid (Lawrence D'Oliveiro) wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    The fact that many of the ports you mention never made it to
    production release, and even the ones (other than x86) that did are
    now defunct, I think reinforces my point. The ports were difficult
    and expensive to create, and difficult and expensive to maintain.
    In the end they were all just abandoned.

    Microsoft is a commercial organisation, and has to pay staff for all the
    work done on Windows. This increases costs compared to open-source work
    that doesn't show up in the costs for Linux, or the BSDs. I've worked on thoroughly portable application software for Windows NT (and Unixes)
    since 1995. My employers have at least considered porting to every
    Windows NT platform available. I've been involved with those decisions
    and done the more recent ports.

    i860 never appeared in machines people could buy.

    What do you mean by that?
    My PPOE bought an Alliant machine - I think it was FX/2800 - that was built with just such chips.


    'Andreas
    --
    ceterum censeo redmondinem esse delendam
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.os.vms on Sun Sep 28 22:11:40 2025
    From Newsgroup: comp.os.vms

    In article <87bjmufhuh.fsf@eder.anydns.info>, a_eder_muc@web.de (Andreas
    Eder) wrote:
    On Di 23 Sep 2025 at 20:49, jgd@cix.co.uk (John Dallman) wrote:
    i860 never appeared in machines people could buy.
    My PPOE bought an Alliant machine - i think it was FX/2800 - that
    was built with just such chips.

    OK, I'm wrong. I am pretty sure that the i860 machines Microsoft used for
    early NT development were never sold.

    John
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Sun Sep 28 22:57:05 2025
    From Newsgroup: comp.os.vms

    On Sun, 28 Sep 2025 22:21:58 +0200, Andreas Eder wrote:

    On Di 23 Sep 2025 at 20:49, jgd@cix.co.uk (John Dallman) wrote:

    i860 never appeared in machines people could buy.

    My PPOE bought an Alliant machine - I think it was FX/2800 - that was
    built with just such chips.

    According to Da Wiki, the FX/2800 range appeared in 1990 <https://en.wikipedia.org/wiki/Alliant_Computer_Systems#1990s>, so
    Windows NT was still a (vapourware) glint in Dave Cutler's eye at that
    point.

    Presumably it was running some kind of Unix system.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Sun Sep 28 19:09:34 2025
    From Newsgroup: comp.os.vms

    On 9/28/2025 6:57 PM, Lawrence D'Oliveiro wrote:
    On Sun, 28 Sep 2025 22:21:58 +0200, Andreas Eder wrote:
    On Di 23 Sep 2025 at 20:49, jgd@cix.co.uk (John Dallman) wrote:

    i860 never appeared in machines people could buy.

    My PPOE bought an Alliant machine - I think it was FX/2800 - that was
    built with just such chips.

    According to Da Wiki, the FX/2800 range appeared in 1990 <https://en.wikipedia.org/wiki/Alliant_Computer_Systems#1990s>, so
    Windows NT was still a (vapourware) glint in Dave Cutler's eye at that point.

    Presumably it was running some kind of Unix system.

    https://www.paralogos.com/DeadSuper/Alliant/index.html

    has a bit more detail than Wikipedia.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Mon Sep 29 10:47:52 2025
    From Newsgroup: comp.os.vms

    In article <memo.20250928221052.10624C@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <87bjmufhuh.fsf@eder.anydns.info>, a_eder_muc@web.de (Andreas Eder) wrote:
    On Di 23 Sep 2025 at 20:49, jgd@cix.co.uk (John Dallman) wrote:
    i860 never appeared in machines people could buy.
    My PPOE bought an Alliant machine - i think it was FX/2800 - that
    was built with just such chips.

    OK, I'm wrong. I am pretty sure that the i860 machines Microsoft used for early NT development were never sold.

    I believe that's correct. Engineers who worked on the Windows
    kernel tell me those machines were basically prototypes.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Andreas Eder@a_eder_muc@web.de to comp.os.vms on Mon Sep 29 20:16:11 2025
    From Newsgroup: comp.os.vms

    On So 28 Sep 2025 at 22:57, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Sun, 28 Sep 2025 22:21:58 +0200, Andreas Eder wrote:

    On Di 23 Sep 2025 at 20:49, jgd@cix.co.uk (John Dallman) wrote:

    i860 never appeared in machines people could buy.

    My PPOE bought an Alliant machine - I think it was FX/2800 - that was
    built with just such chips.

    According to Da Wiki, the FX/2800 range appeared in 1990 <https://en.wikipedia.org/wiki/Alliant_Computer_Systems#1990s>, so
    Windows NT was still a (vapourware) glint in Dave Cutler's eye at that point.

    Presumably it was running some kind of Unix system.

    Yes, of course it was. But it was a machine people could buy with i860s inside.

    'Andreas
    --
    ceterum censeo redmondinem esse delendam
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Goodwin@david+usenet@zx.net.nz to comp.os.vms on Tue Sep 30 15:28:34 2025
    From Newsgroup: comp.os.vms

    In article <877bxhf7kk.fsf@eder.anydns.info>, a_eder_muc@web.de says...

    On So 28 Sep 2025 at 22:57, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Sun, 28 Sep 2025 22:21:58 +0200, Andreas Eder wrote:

    On Di 23 Sep 2025 at 20:49, jgd@cix.co.uk (John Dallman) wrote:

    i860 never appeared in machines people could buy.

    My PPOE bought an Alliant machine - I think it was FX/2800 - that was
    built with just such chips.

    According to Da Wiki, the FX/2800 range appeared in 1990 <https://en.wikipedia.org/wiki/Alliant_Computer_Systems#1990s>, so
    Windows NT was still a (vapourware) glint in Dave Cutler's eye at that point.

    Presumably it was running some kind of Unix system.

    Yes, of course it was. But it was a machine people could buy with i860s inside.

    I don't think the issue was ever that you couldn't buy them - Olivetti
    even sold some.

    The issue was that they turned out to be not as good as expected, so
    Microsoft built some new hardware using MIPS instead and switched
    development to that.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Tue Sep 30 02:45:45 2025
    From Newsgroup: comp.os.vms

    On Tue, 30 Sep 2025 15:28:34 +1300, David Goodwin wrote:

    The issue was that [i860] turned out to be not as good as expected, so Microsoft built some new hardware using MIPS instead and switched
    development to that.

    Didn't seem to help, though, did it? The MIPS version of NT didn't last long, either.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Goodwin@david+usenet@zx.net.nz to comp.os.vms on Wed Oct 1 12:11:43 2025
    From Newsgroup: comp.os.vms

    In article <10bfg8o$3d9e6$1@dont-email.me>, ldo@nz.invalid says...

    On Tue, 30 Sep 2025 15:28:34 +1300, David Goodwin wrote:

    The issue was that [i860] turned out to be not as good as expected, so Microsoft built some new hardware using MIPS instead and switched development to that.

    Didn't seem to help, though, did it?

    IIRC the *reason* for the i860 port first, and then the MIPS port, was
    to ensure that the operating system was developed from the start with portability in mind. Dependencies on the way PCs work couldn't creep in
    if everything also had to work on some other non-PC platform.

    So the MIPS port achieved its purpose. Once the job was done, Microsoft
    sold their hardware designs to MIPS Technologies who used it as a basis
    for a line of workstations until SGI bought the company.

    The MIPS version of NT didn't last long, either.

    The MIPS version was never very popular to begin with - today the
    hardware is flying pigs rare. I assume the only people buying it were
    those who *needed* Windows and more performance than an x86 could
    supply, but couldn't afford an Alpha. Probably not a very big market.

    The MIPS NT workstations appear to have disappeared from the market at
    around the same time the Pentium Pro appeared. I assume at this point
    x86 was now fast enough and cheaper than MIPS, so there was no reason
    for anyone to buy the MIPS option anymore. At least some builders of
    MIPS workstations ended up switching to building Pentium Pro systems - NeTpower announced their switch in February 1996.

    By late 1996 no one was buying Windows NT for MIPS systems anymore, so Microsoft stopped maintaining it.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Tue Sep 30 23:51:32 2025
    From Newsgroup: comp.os.vms

    On Wed, 1 Oct 2025 12:11:43 +1300, David Goodwin wrote:

    IIRC the *reason* for the i860 port first, and then the MIPS port,
    was to ensure that the operating system was developed from the start
    with portability in mind.

    We already know that one of the design goals for Windows NT from the
    beginning was "portability in mind". The question was whether it
    achieved that. Ultimately, it did not.

    So the MIPS port achieved its purpose. Once the job was done,
    Microsoft sold their hardware designs to MIPS Technologies who used
    it as a basis for a line of workstations until SGI bought the
    company.

    So, having done the port and climbed that mountain, it was realized
    that climbing portability mountains in a proprietary OS is hard, and
    so the Windows NT team soft-pedalled that particular design goal from
    that point on ... ?

    The MIPS version of NT didn't last long, either.

    The MIPS version was never very popular to begin with - today the
    hardware is flying pigs rare.

    I already mentioned that MIPS processors outship x86 by about 3:1,
    last I checked. You wouldn't call x86 "flying pigs rare", would you?

    By late 1996 no one was buying Windows NT for MIPS systems anymore, so Microsoft stopped maintaining it.

    People still buy them and run Linux on them, which is why Linux still
    continues to support them.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Goodwin@david+usenet@zx.net.nz to comp.os.vms on Wed Oct 1 17:52:38 2025
    From Newsgroup: comp.os.vms

    In article <10bhqe4$uqv$3@dont-email.me>, ldo@nz.invalid says...

    On Wed, 1 Oct 2025 12:11:43 +1300, David Goodwin wrote:

    IIRC the *reason* for the i860 port first, and then the MIPS port,
    was to ensure that the operating system was developed from the start
    with portability in mind.

    We already know that one of the design goals for Windows NT from the beginning was "portability in mind". The question was whether it
    achieved that. Ultimately, it did not.

    You've yet to give a good reason to believe it isn't portable. The fact
    it has been released on six architectures and publicly demonstrated on a seventh would suggest you are wrong.

    So the MIPS port achieved its purpose. Once the job was done,
    Microsoft sold their hardware designs to MIPS Technologies who used
    it as a basis for a line of workstations until SGI bought the
    company.

    So, having done the port and climbed that mountain, it was realized
    that climbing portability mountains in a proprietary OS is hard, and
    so the Windows NT team soft-pedalled that particular design goal from
    that point on ... ?

    As you have previously established, Microsoft is a for-profit company.
    Their goal is to make profit, not to support as many platforms as
    possible for as long as possible whether or not there is worthwhile
    demand for Windows on those platforms.

    Selling a product for a hardware platform that can no longer be
    purchased is not a path to profit, so spending any amount, however
    trivial, on that platform is to act counter to the company's goals.

    The MIPS version of NT didn't last long, either.

    The MIPS version was never very popular to begin with - today the
    hardware is flying pigs rare.

    I already mentioned that MIPS processors outship x86 by about 3:1,
    last I checked. You wouldn't call x86 "flying pigs rare", would you?

    Set top boxes and routers were not the target market for Windows in the
    90s, and they are clearly not a market Microsoft is interested in
    pursuing today.

    In the 90s Windows NT was only released for IBM PC compatibles, and
    platforms which conformed (to varying degrees) to the ARC standard.
    Later, from around 2000, after ARC ceased to be relevant, EFI was adopted as
    a new standard.

    By late 1996 no one was buying Windows NT for MIPS systems anymore, so Microsoft stopped maintaining it.

    People still buy them and run Linux on them, which is why Linux still continues to support them.

    Yes, but no one is being paid to maintain Linux support for early 90s
    MIPS workstations, and no one is buying Linux for these platforms
    either.

    This is fine, as profit is not the goal and "for fun" is a good enough motivation. Microsoft clearly has other goals and motivations.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Wed Oct 1 05:05:55 2025
    From Newsgroup: comp.os.vms

    On Wed, 1 Oct 2025 17:52:38 +1300, David Goodwin wrote:

    You've yet to give a good reason to believe [Windows NT] isn't
    portable. The fact it has been released on six architectures and
    publicly demonstrated on a seventh would suggest you are wrong.

    The fact that none of them survived reinforces my point. The ports
    survived only long enough for Microsoft to claim bragging rights, and
    then expired not long after.

    Linux long ago surpassed that score, several times over. And that's
    just counting ports that continue to be maintained today.

    As you have previously established, Microsoft is a for-profit
    company. Their goal is to make profit, not to support as many
    platforms as possible for as long as possible whether or not there
    is worthwhile demand for Windows on those platforms.

    See, there you go, apologizing for Windows' lack of portability while
    at the same time still trying to claim it really is portable.

    I already mentioned that MIPS processors outship x86 by about 3:1,
    last I checked. You wouldn't call x86 "flying pigs rare", would
    you?

    Set top boxes and routers were not the target market for Windows in
    the 90s, and they are clearly not a market Microsoft is interested
    in pursuing today.

    See, there you go, conceding my point again. While still strenuously
    trying to deny it.

    You're thinking in terms of low-margin products using MIPS, aren't
    you? While that may be partially true, there are also some pretty
    high-margin ones indeed.

    This is fine, as profit is not the goal and "for fun" is a good
    enough motivation. Microsoft clearly has other goals and
    motivations.

    "For fun" may be an excuse for the survival of the Alpha port. But
    Linux support exists for many of those platforms (including MIPS)
    precisely because companies are making a profit from it.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bill@bill.gunshannon@gmail.com to comp.os.vms on Wed Oct 1 14:17:22 2025
    From Newsgroup: comp.os.vms

    On 10/1/2025 12:52 AM, David Goodwin wrote:


    In the 90s Windows NT was only released for IBM PC compatibles, and
    platforms which conformed (to varying degrees) to the ARC standard.
    Later, from around 2000, after ARC ceased to be relevant, EFI was adopted as
    a new standard.


    I seem to remember NT coming with a list of supported hardware
    and if you had otherwise MS did not guarantee it would run at
    all, much less perform acceptably.

    bill

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Goodwin@david+usenet@zx.net.nz to comp.os.vms on Thu Oct 2 08:56:41 2025
    From Newsgroup: comp.os.vms

    In article <mk59hoFkf86U2@mid.individual.net>, bill.gunshannon@gmail.com says...

    On 10/1/2025 12:52 AM, David Goodwin wrote:


    In the 90s Windows NT was only released for IBM PC compatibles, and platforms which conformed (to varying degrees) to the ARC standard.
    Later, from around 2000, after ARC ceased to be relevant, EFI was adopted as
    a new standard.


    I seem to remember NT coming with a list of supported hardware
    and if you had otherwise MS did not guarantee it would run at
    all, much less perform acceptably.

    Yeah, the Hardware Compatibility List (HCL) told you machines (or other hardware) Windows NT was *known* to be compatible with - it had been
    tested and should work fine. Anything not on the list came down to
    whether the vendor had written drivers for it since the version of
    Windows NT you're running came out. It took a while for some vendors to
    start building NT drivers, and not all bothered until it started to
    become more widespread with Windows 2000 and XP.

    For RISC machines, the HCL mattered more. Rather than aiming to
    standardise hardware under the ARC standard as PC vendors did under the
    "IBM PC compatible" de facto standard, a lot of RISC vendors just relied
    on using Windows NT's Hardware Abstraction Layer to paper over any
    deviations from the ARC standard or prior machines they may have
    produced. Each new machine got a new HAL module, and without one of
    those Windows NT probably wouldn't even boot.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Thu Oct 2 12:29:49 2025
    From Newsgroup: comp.os.vms

    In article <MPG.43477ac440ab7840989764@news.zx.net.nz>,
    David Goodwin <david+usenet@zx.net.nz> wrote:
    In article <10bhqe4$uqv$3@dont-email.me>, ldo@nz.invalid says...

    In general, arguing with Lawrence is like trying to reason with
    a leaking pen: it doesn't change and just gets ink all over your
    fingers.

    The MIPS version of NT didn't last long, either.

    The MIPS version was never very popular to begin with - today the
    hardware is flying pigs rare.

    I already mentioned that MIPS processors outship x86 by about 3:1,
    last I checked. You wouldn't call x86 "flying pigs rare", would you?

    Set top boxes and routers were not the target market for Windows in the
    90s, and they are clearly not a market Microsoft is interested in
    pursuing today.

    Moreover, 99.9% of those MIPS CPUs that are outselling x86 are
    embedded microcontrollers that just happen to use the MIPS
    instruction set. If they run any OS at all, it's way more than
    likely to be some kind of RTOS.

    For that matter, ARM Cortex-M0 CPUs are _incredibly_ common, in
    all sorts of things that many people are unaware even have a
    microcontroller inside them, but Linux isn't running on them.

    There are cute hacks like uCLinux designed to run on constrained
    systems, but I doubt that more than a tiny fraction of those
    CPUs are running it, and besides, it's not being used for
    general-purpose compute, which is what Windows targets.

    Bottom line: pointing to the number of MIPS CPUs shipped versus
    x86 as some kind of "evidence" for the non-portability of
    Windows is similar to pointing to the number of pineapples shipped
    versus cars as evidence that cars don't grow on trees.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.os.vms on Thu Oct 2 20:24:09 2025
    From Newsgroup: comp.os.vms

    On Thu, 2 Oct 2025 12:29:49 -0000 (UTC)
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    In article <MPG.43477ac440ab7840989764@news.zx.net.nz>,
    David Goodwin <david+usenet@zx.net.nz> wrote:
    In article <10bhqe4$uqv$3@dont-email.me>, ldo@nz.invalid says...

    In general, arguing with Lawrence is like trying to reason with
    a leaking pen: it doesn't change and just gets ink all over your
    fingers.

    The MIPS version of NT didn't last long, either.

    The MIPS version was never very popular to begin with - today the
    hardware is flying pigs rare.

    I already mentioned that MIPS processors outship x86 by about 3:1,
    last I checked. You wouldn't call x86 "flying pigs rare", would
    you?

    Set top boxes and routers were not the target market for Windows in
    the 90s, and they are clearly not a market Microsoft is interested
    in pursuing today.

    Moreover, 99.9% of those MIPS CPUs that are outselling x86 are
    embedded microcontrollers that just happen to use the MIPS
    instruction set. If they run any OS at all, it's way more than
    likely to be some kind of RTOS.


    Most likely, Lawrence is citing statistics from 15-20 years ago.
    Right now MIPS is very close to dead. It's very unlikely that it
    still outsells x86.

    For that matter, ARM Cortex-M0 CPUs are _incredibly_ common, in
    all sorts of things that many people are unaware even has a
    microcontroller inside of it, but Linux isn't running on them.


    Is it?
    We are pretty heavy users of ARM MCUs. Either all of them or all but
    one had an M4 core. Zero with M0.
    M0 is an odd bird in the Cortex-M line. For example, it does not comply with
    the ARMv7-M ISA definition. For me that alone is sufficient reason to
    never touch it.

    There are cute hacks like uCLinux designed to run on constrained
    systems, but I doubt that more than a tiny fraction of those
    CPUs are running it, and besides,

    Well, that's not the same as MIPS. Running Linux on Cortex-M is
    technically hard and mostly stupid.
    Running Linux on something like Microchip PIC32M is technically easy.
    It just rarely happens to be the best solution to any particular
    design requirements. But sometimes it is.

    it's not being used for
    general-purpose compute, which is what Windows targets.

    Exactly. Unlike Windows CE, on which MIPS was supported for a rather long
    time, but always played 3rd, 4th or 5th fiddle to Arm, x86, Hitachi SH
    and PPC.


    Bottom line: pointing to the number of MIPS CPUs shipped versus
    x86 as some kind of "evidence" for the non-portability of
    Windows is similar to pointing to the number of pineapples shipped
    versus cars as evidence that cars don't grow on trees.

    - Dan C.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Thu Oct 2 22:22:33 2025
    From Newsgroup: comp.os.vms

    On Wed, 1 Oct 2025 05:05:55 -0000 (UTC), I wrote:

    You're thinking in terms of low-margin products using MIPS, aren't you? While that may be partially true, there are also some pretty high-margin
    ones indeed.

    As an example, I have with me a second-hand Cisco 3850 switch, which I
    have been learning about for a customer. I have no idea what they cost new
    -- no doubt something substantial. I think they started making them in
    2013 -- long after Windows NT for MIPS and all the other non-x86
    architectures had gone defunct.

    When you power it up, it says it has a "Cavium Octeon II" processor. It is running "Open IOS XE", which is a version of Cisco's well-known IOS network-management platform. It's built on a Linux kernel, and makes use
    of Linux's container capabilities to allow the customer to run custom
    Python code on the switch.

    So that's some pretty beefy compute capabilities, beyond those of some
    dinky little embedded controller, wouldn't you say?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Goodwin@david+usenet@zx.net.nz to comp.os.vms on Fri Oct 3 13:17:15 2025
    From Newsgroup: comp.os.vms

    In article <10blr7t$9co$1@reader2.panix.com>,
    cross@spitfire.i.gajendra.net says...

    In article <MPG.43477ac440ab7840989764@news.zx.net.nz>,
    David Goodwin <david+usenet@zx.net.nz> wrote:
    In article <10bhqe4$uqv$3@dont-email.me>, ldo@nz.invalid says...

    In general, arguing with Lawrence is like trying to reason with
    a leaking pen: it doesn't change and just gets ink all over your
    fingers.

    Yeah, he has made these same assertions in the past and replying to them
    today yields a response no different from last time. I've given up
    trying to have a discussion with someone who doesn't make arguments in
    good faith. "Don't feed the trolls", etc.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Simon Clubley@clubley@remove_me.eisner.decus.org-Earth.UFP to comp.os.vms on Fri Oct 3 12:12:55 2025
    From Newsgroup: comp.os.vms

    On 2025-10-02, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    In article <MPG.43477ac440ab7840989764@news.zx.net.nz>,
    David Goodwin <david+usenet@zx.net.nz> wrote:

    In general, arguing with Lawrence is like trying to reason with
    a leaking pen: it doesn't change and just gets ink all over your
    fingers.


    Do you maintain a fortune file of these comparisons to cycle through ? :-)


    Set top boxes and routers were not the target market for Windows in the 90s, and they are clearly not a market Microsoft is interested in
    pursuing today.

    Moreover, 99.9% of those MIPS CPUs that are outselling x86 are
    embedded microcontrollers that just happen to use the MIPS
    instruction set. If they run any OS at all, it's way more than
    likely to be some kind of RTOS.


    Here is one example at the lower end (which is also available in hobbyist friendly packaging):

    https://uk.farnell.com/microchip/pic32mx250f128b-i-sp/mcu-32bit-pic32-40mhz-spdip-28/dp/2097773

    Simon.
    --
    Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
    Walking destinations on a map are further away than they appear.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Fri Oct 3 08:37:35 2025
    From Newsgroup: comp.os.vms

    On 10/3/2025 8:12 AM, Simon Clubley wrote:
    On 2025-10-02, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    In article <MPG.43477ac440ab7840989764@news.zx.net.nz>,
    David Goodwin <david+usenet@zx.net.nz> wrote:
    Set top boxes and routers were not the target market for Windows in the
    90s, and they are clearly not a market Microsoft is interested in
    pursuing today.

    Moreover, 99.9% of those MIPS CPUs that are outselling x86 are
    embedded microcontrollers that just happen to use the MIPS
    instruction set. If they run any OS at all, it's way more than
    likely to be some kind of RTOS.

    Here is one example at the lower end (which is also available in hobbyist friendly packaging):

    https://uk.farnell.com/microchip/pic32mx250f128b-i-sp/mcu-32bit-pic32-40mhz-spdip-28/dp/2097773

    I think this part of the spec illustrate the target market:

    <quote>
    MIPS32 M4K core with MIPS16e mode for up to 40% smaller code size
    </quote>

    Switching to 16 bit mode to reduce application size is not where
    Microsoft is with Windows today.

    Arne


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From chrisq@syseng@gfsys.co.uk to comp.os.vms on Sat Oct 4 23:28:41 2025
    From Newsgroup: comp.os.vms

    On 10/1/25 06:05, Lawrence D'Oliveiro wrote:
    On Wed, 1 Oct 2025 17:52:38 +1300, David Goodwin wrote:

    You've yet to give a good reason to believe [Windows NT] isn't
    portable. The fact it has been released on six architectures and
    publicly demonstrated on a seventh would suggest you are wrong.

    The fact that none of them survived reinforces my point. The ports
    survived only long enough for Microsoft to claim bragging rights, and
    then expired not long after.

    FWIR, the discussion is about NT portability, or not. Not whether
    other architectures survived, boxes sold, etc. Classic deflection...

    The fact that it was ported to so many other architectures reflects
    the fact that it was designed with a HAL to enable just that ability.
    Quite profound for its time, even if you hate Windows in general.


    Please try to keep up.

    Chris


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From chrisq@syseng@gfsys.co.uk to comp.os.vms on Sat Oct 4 23:42:41 2025
    From Newsgroup: comp.os.vms

    On 10/1/25 20:56, David Goodwin wrote:
    In article <mk59hoFkf86U2@mid.individual.net>, bill.gunshannon@gmail.com says...

    On 10/1/2025 12:52 AM, David Goodwin wrote:


    In the 90s Windows NT was only released for IBM PC compatibles, and
    platforms which conformed (to varying degrees) to the ARC standard.
    Later, from around 2000, after ARC ceased to be relevant, EFI was adopted as
    a new standard.


    I seem to remember NT coming with a list of supported hardware
    and if you had otherwise MS did not guarantee it would run at
    all, much less perform acceptably.

    Yeah, the Hardware Compatibility List (HCL) told you machines (or other hardware) Windows NT was *known* to be compatible with - it had been
    tested and should work fine. Anything not on the list came down to
    whether the vendor had written drivers for it since the version of
    Windows NT you're running came out. It took a while for some vendors to
    start building NT drivers, and not all bothered until it started to
    become more widespread with Windows 2000 and XP.

    You could argue that the HCL has now been subsumed into the driver and
    kernel layers. That is only made possible by a strictly layered OS design.

    In the old days, hardware compatibility lists were common, but the real achievement of open source, Linux, FreeBSD, etc., is that you can now plug
    in any old card and the OS will find and fully support it. That sort
    of thing has taken decades of support and development, but it makes life much easier.

    Chris


    For RISC machines, the HCL mattered more. Rather than aiming to
    standardise hardware under the ARC standard as PC vendors did under the
    "IBM PC compatible" de facto standard, a lot of RISC vendors just relied
    on using Windows NT's Hardware Abstraction Layer to paper over any
    deviations from the ARC standard or prior machines they may have
    produced. Each new machine got a new HAL module, and without one of
    those Windows NT probably wouldn't even boot.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Sat Oct 4 22:49:21 2025
    From Newsgroup: comp.os.vms

    On Sat, 4 Oct 2025 23:28:41 +0100, chrisq wrote:

    On 10/1/25 06:05, Lawrence D'Oliveiro wrote:

    On Wed, 1 Oct 2025 17:52:38 +1300, David Goodwin wrote:

    You've yet to give a good reason to believe [Windows NT] isn't
    portable. The fact it has been released on six architectures and
    publicly demonstrated on a seventh would suggest you are wrong.

    The fact that none of them survived reinforces my point. The ports
    survived only long enough for Microsoft to claim bragging rights, and
    then expired not long after.

    FWIR, the discussion is about NT portability, or not. Not whether other architectures survived, boxes sold, etc. Classic deflection...

    I wasn't the one trying to offer excuses for why particular NT ports survived or not, based on the supposed popularity (or not) of the hardware
    in question. I pointed out that Linux continued to support architectures
    that were less successful in the marketplace, like Alpha and Itanium, long after Microsoft had to abandon them. And it supports ones that are still popular, like MIPS and POWER, again long after Microsoft had to admit
    defeat.

    So the common factor for the failure of NT on these architectures is not whether they were successful in the marketplace or not; the common factor
    was NT itself.

    To repeat what I said further back:

    The [NT] ports were difficult and expensive to create, and difficult and expensive to maintain. In the end they were all just abandoned.

    Also (with regard to NT not taking advantage of 64-bit Alpha):

    Obviously it was just too hard for Windows NT to support a mix of 32-bit
    and 64-bit architectures. So much for portability ...

    The fact that it was ported to so many other architectures reflects the
    fact that it was designed with a HAL to enable just that ability.
    Quite profound for its time, even if you hate Windows in general.

    Maybe the HAL was part of the problem? I've been trying to find a HAL-equivalent in the Linux kernel, and there doesn't seem to be one.

    Consider the question: does a "hardware abstraction layer" abstract away from device drivers as well? So drivers are supposed to be hidden away
    below the HAL, not visible above it? But on Linux you have portable device drivers, which can be compiled for different processor architectures to
    access the same peripheral hardware, which saves having to write different drivers for those peripherals for different architectures. Where does a "HAL" fit into this?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Sun Oct 5 02:14:18 2025
    From Newsgroup: comp.os.vms

    In article <68dfc38f$0$673$14726298@news.sunsite.dk>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/3/2025 8:12 AM, Simon Clubley wrote:
    On 2025-10-02, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    In article <MPG.43477ac440ab7840989764@news.zx.net.nz>,
    David Goodwin <david+usenet@zx.net.nz> wrote:
    Set top boxes and routers were not the target market for Windows in the 90s, and they are clearly not a market Microsoft is interested in
    pursuing today.

    Moreover, 99.9% of those MIPS CPUs that are outselling x86 are
    embedded microcontrollers that just happen to use the MIPS
    instruction set. If they run any OS at all, it's way more than
    likely to be some kind of RTOS.

    Here is one example at the lower end (which is also available in hobbyist
    friendly packaging):

    https://uk.farnell.com/microchip/pic32mx250f128b-i-sp/mcu-32bit-pic32-40mhz-spdip-28/dp/2097773

    I think this part of the spec illustrate the target market:

    <quote>
    MIPS32 M4K core with MIPS16e mode for up to 40% smaller code size
    </quote>

    Switching to 16 bit mode to reduce application size is not where
    Microsoft is with Windows today.

    a) MSFT isn't running Windows on that core, but Linux isn't
    running on it, either.

    b) MIPS16e is to MIPS 32 as Thumb or Thumb-2 is to ARM, or as
    the RISC-V compressed ISA is to RISC-V.

    c) Windows on ARM does use Thumb-2: https://devblogs.microsoft.com/oldnewthing/20210531-00/?p=105265

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Sun Oct 5 02:17:12 2025
    From Newsgroup: comp.os.vms

    In article <10boek6$1nure$1@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    On 2025-10-02, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    In article <MPG.43477ac440ab7840989764@news.zx.net.nz>,
    David Goodwin <david+usenet@zx.net.nz> wrote:

    In general, arguing with Lawrence is like trying to reason with
    a leaking pen: it doesn't change and just gets ink all over your
    fingers.


    Do you maintain a fortune file of these comparisons to cycle through ? :-)

    Nah. I just make 'em up as I go along.

    Set top boxes and routers were not the target market for Windows in the 90s, and they are clearly not a market Microsoft is interested in pursuing today.

    Moreover, 99.9% of those MIPS CPUs that are outselling x86 are
    embedded microcontrollers that just happen to use the MIPS
    instruction set. If they run any OS at all, it's way more than
    likely to be some kind of RTOS.

    Here is one example at the lower end (which is also available in hobbyist friendly packaging):

    https://uk.farnell.com/microchip/pic32mx250f128b-i-sp/mcu-32bit-pic32-40mhz-spdip-28/dp/2097773

    Yup. That class of CPU probably outsells the sort of thing that
    runs regular Linux by 100:1; it probably outsells anything that
    people are putting uCLinux on by 20:1. But the troll won't
    understand that.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Sat Oct 4 22:40:26 2025
    From Newsgroup: comp.os.vms

    On 10/4/2025 10:14 PM, Dan Cross wrote:
    In article <68dfc38f$0$673$14726298@news.sunsite.dk>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/3/2025 8:12 AM, Simon Clubley wrote:
    On 2025-10-02, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    In article <MPG.43477ac440ab7840989764@news.zx.net.nz>,
    David Goodwin <david+usenet@zx.net.nz> wrote:
    Set top boxes and routers were not the target market for Windows in the 90s, and they are clearly not a market Microsoft is interested in
    pursuing today.

    Moreover, 99.9% of those MIPS CPUs that are outselling x86 are
    embedded microcontrollers that just happen to use the MIPS
    instruction set. If they run any OS at all, it's way more than
    likely to be some kind of RTOS.

    Here is one example at the lower end (which is also available in hobbyist friendly packaging):

    https://uk.farnell.com/microchip/pic32mx250f128b-i-sp/mcu-32bit-pic32-40mhz-spdip-28/dp/2097773

    I think this part of the spec illustrate the target market:

    <quote>
    MIPS32 M4K core with MIPS16e mode for up to 40% smaller code size
    </quote>

    Switching to 16 bit mode to reduce application size is not where
    Microsoft is with Windows today.

    a) MSFT isn't running Windows on that core, but Linux isn't
    running on it, either.

    b) MIPS16e is to MIPS 32 as Thumb or Thumb-2 is to ARM, or as
    the RISC-V compressed ISA is to RISC-V.

    c) Windows on ARM does use Thumb-2: https://devblogs.microsoft.com/oldnewthing/20210531-00/?p=105265

    So MIPS16e is not 16 bit in traditional sense (16 bit registers,
    16 bit address space etc.) but just shorter instructions (16 bit)?

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Sun Oct 5 05:03:19 2025
    From Newsgroup: comp.os.vms

    In article <68e1da98$0$677$14726298@news.sunsite.dk>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/4/2025 10:14 PM, Dan Cross wrote:
    In article <68dfc38f$0$673$14726298@news.sunsite.dk>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/3/2025 8:12 AM, Simon Clubley wrote:
    On 2025-10-02, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    In article <MPG.43477ac440ab7840989764@news.zx.net.nz>,
    David Goodwin <david+usenet@zx.net.nz> wrote:
    Set top boxes and routers were not the target market for Windows in the 90s, and they are clearly not a market Microsoft is interested in
    pursuing today.

    Moreover, 99.9% of those MIPS CPUs that are outselling x86 are
    embedded microcontrollers that just happen to use the MIPS
    instruction set. If they run any OS at all, it's way more than
    likely to be some kind of RTOS.

    Here is one example at the lower end (which is also available in hobbyist friendly packaging):

    https://uk.farnell.com/microchip/pic32mx250f128b-i-sp/mcu-32bit-pic32-40mhz-spdip-28/dp/2097773

    I think this part of the spec illustrate the target market:

    <quote>
    MIPS32 M4K core with MIPS16e mode for up to 40% smaller code size
    </quote>

    Switching to 16 bit mode to reduce application size is not where
    Microsoft is with Windows today.

    a) MSFT isn't running Windows on that core, but Linux isn't
    running on it, either.

    b) MIPS16e is to MIPS 32 as Thumb or Thumb-2 is to ARM, or as
    the RISC-V compressed ISA is to RISC-V.

    c) Windows on ARM does use Thumb-2:
    https://devblogs.microsoft.com/oldnewthing/20210531-00/?p=105265

    So MIPS16e is not 16 bit in traditional sense (16 bit registers,
    16 bit address space etc.) but just shorter instructions (16 bit)?

    Correct. It's just a denser encoding, using 16-bits for an
    instruction instead of the 32-bits of standard MIPS. For
    embedded applications, which usually use Harvard architectures
    but have limited flash or other space for program text (and even
    smaller amounts of SRAM for data), this can be a big win.

    The tradeoff is some limitations versus the standard 32-bit
    encoding: for example, it can only address 8 registers instead
    of the full register file, and immediate displacements/values
    are smaller, since there are fewer bits in which to encode them.
    But the address space is not limited to 16 bits or anything like
    that, and registers remain their normal size. In this sense, it
    is wholly unlike x86's real mode, for example. A compiler can
    freely emit the two variants in the same code stream, and the
    processor can switch between them on a jump or branch
    instruction.
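
    As a rough illustration of that last point (the function names and
    build line below are made up, assuming a MIPS GCC cross-toolchain),
    GCC lets individual functions opt in to or out of the compressed
    encoding via its mips16/nomips16 function attributes, so both
    encodings can coexist in one program:

        /* Illustrative only; built with something like
           mips-linux-gnu-gcc -mips16 -O2 -c encodings.c */

        /* Hot loop: keep the full 32-bit MIPS32 encoding, with access to
           the whole register file and larger immediate fields. */
        __attribute__((nomips16))
        long sum(const long *v, int n)
        {
            long total = 0;
            for (int i = 0; i < n; i++)
                total += v[i];
            return total;
        }

        /* Rarely-called code: use the denser 16-bit MIPS16e encoding to
           save program-text space. The CPU switches encodings on the
           call and return, as described above. */
        __attribute__((mips16))
        long sum_twice(const long *v, int n)
        {
            return 2 * sum(v, n);
        }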

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.os.vms on Sun Oct 5 17:49:40 2025
    From Newsgroup: comp.os.vms

    In article <10bsk9q$svt$1@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    c) Windows on ARM does use Thumb-2: https://devblogs.microsoft.com/oldnewthing/20210531-00/?p=105265

    Interesting, thanks. 64-bit ARM Windows code does not use any form of
    Thumb; it was left out of the 64-bit ARM ISA, which is very different
    from the classic 32-bit ISA.

    John
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Tue Oct 7 12:51:42 2025
    From Newsgroup: comp.os.vms

    In article <memo.20251005174851.10624P@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <10bsk9q$svt$1@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    c) Windows on ARM does use Thumb-2:
    https://devblogs.microsoft.com/oldnewthing/20210531-00/?p=105265

    Interesting, thanks. 64-bit ARM Windows code does not use any form of
    Thumb; it was left out of the 64-bit ARM ISA, which is very different
    from the classic 32-bit ISA.

    Yes, and good point. Most ARM64 cores also support A32 and T32,
    but if Windows is only using A64 it doesn't matter.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.os.vms on Tue Oct 7 22:35:40 2025
    From Newsgroup: comp.os.vms

    In article <10c32cu$ahp$2@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    Most ARM64 cores also support A32 and T32

    That is changing, reasonably quickly. ARM stopped releasing new cores
    that could do A32 or T32 in 2023, having been phasing them out since 2021. Apple's recent cores and Qualcomm's Oryons are likewise 64-bit only.

    but if Windows is only using A64 it doesn't matter.

    Microsoft supply compilers that can target 32-bit code, and run-time
    libraries for 32-bit programs. I've never tried building anything on ARM Windows for 32-bit so I don't know how well they work. I don't know if
    ARM Windows 11, which is always a 64-bit OS, will notice that the
    hardware is incapable of running A32/T32, but I hope to have appropriate hardware fairly soon.

    John
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.os.vms on Wed Oct 8 17:04:40 2025
    From Newsgroup: comp.os.vms

    In article <10ad18c$2d4$1@reader1.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    I mean, Pr1mos is basically gone. There's an emulator, but I
    don't think (new) hardware has been sold for decades, since
    Pr1me went under.

    Emulator here: <https://github.com/prirun/p50em>, not to be confused with
    a version of (obsolete) Android for PCs with the same name. No new
    hardware since the early 1990s.

    Solaris and HP-UX are on their last legs.

    Oracle still say they're supporting Solaris 11.4 with mainstream support
    until 2031 and offering extended support until 2037, but that's 20 years
    after the final CPU model, the M8, was released.

    HP-UX support from HPE ends at the end of 2025. The hardware stopped
    being sold in 2021.

    Is GCOS6 even still available, or is it just legacy support?

    Seems to be all emulation now.

    John
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Wed Oct 8 19:45:49 2025
    From Newsgroup: comp.os.vms

    In article <memo.20251008170323.10624a@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <10ad18c$2d4$1@reader1.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    I mean, Pr1mos is basically gone. There's an emulator, but I
    don't think (new) hardware has been sold for decades, since
    Pr1me went under.

    Emulator here: <https://github.com/prirun/p50em>,

    That's the one.

    not to be confused with a version of (obsolete) Android for PCs
    with the same name.

    I'll bet there's a band with that name, too....

    No new hardware since the early 1990s.

    And so it goes.

    Solaris and HP-UX are on their last legs.

    Oracle still say they're supporting Solaris 11.4 with mainstream support until 2031 and offering extended support until 2037, but that's 20 years after the final CPU model, the M8, was released.

    I wonder what percentage of Solaris installations are on SPARC
    and what are x86 at this point. 2037 is only 12 years away.

    HP-UX support from HPE ends at the end of 2025. The hardware stopped
    being sold in 2021.


    Oh, how the mighty have fallen.

    Is GCOS6 even still available, or is it just legacy support?

    Seems to be all emulation now.

    And I presume that's all for existing customers.

    To bring this back to VMS, this all worries me a bit.
    Biological monocultures tend to be susceptible to single points
    of failure; I don't think software is particularly different.

    Putting all our eggs in one Linux basket may not be the best
    idea from a resilience point of view, which is why it's nice
    that there are alternatives.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Wed Oct 8 20:50:57 2025
    From Newsgroup: comp.os.vms

    In article <memo.20251007223453.10624Y@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <10c32cu$ahp$2@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    Most ARM64 cores also support A32 and T32

    That is changing, reasonably quickly. ARM stopped releasing new cores
    that could do A32 or T32 in 2023, having been phasing them out since 2021. Apple's recent cores and Qualcomm's Oryons are likewise 64-bit only.

    I thought I had heard something to that effect.

    I wonder if this suggests that they'll introduce a compressed
    instruction set a la Thumb for 64 bit mode; -M profile seems to
    top out at ARMv8.1; and according to the ARMv8-M ARM, only
    supports T32. Presumably at some point they'll introduce an
    ARMv9 core for the embedded market and this will become an
    issue.

    Or maybe they won't. We could be in a world of 32-bit embedded
    cores in that space for a very long time indeed.

    but if Windows is only using A64 it doesn't matter.

    Microsoft supply compilers that can target 32-bit code, and run-time libraries for 32-bit programs. I've never tried building anything on ARM Windows for 32-bit so I don't know how well they work. I don't know if
    ARM Windows 11, which is always a 64-bit OS, will notice that the
    hardware is incapable of running A32/T32, but I hope to have appropriate hardware fairly soon.

    Interesting. I'm curious how that experiment ends.

    I was also curious how code density stacks up. Waterman's
    dissertation has a chart comparing RV64C to other 64-bit ISAs,
    and A64 is about 25% less dense than RV64C. (https://people.eecs.berkeley.edu/~krste/papers/EECS-2016-1.pdf,
    page 62)

    I don't see a particularly good comparison against 32-bit ISAs,
    though.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Wed Oct 8 16:53:35 2025
    From Newsgroup: comp.os.vms

    On 10/8/2025 12:03 PM, John Dallman wrote:
    In article <10ad18c$2d4$1@reader1.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:
    Solaris and HP-UX are on their last legs.

    Oracle still say they're supporting Solaris 11.4 with mainstream support until 2031 and offering extended support until 2037, but that's 20 years after the final CPU model, the M8, was released.

    HP-UX support from HPE ends at the end of 2025. The hardware stopped
    being sold in 2021.

    Note though that 11.4 is from 2018. And there will never
    be an 11.5.

    Oracle's Solaris 11.4 support seems to be like HP/HPE's
    VMS 8.4 support. And that is not positive.

    Arne


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Wed Oct 8 17:00:31 2025
    From Newsgroup: comp.os.vms

    On 10/8/2025 3:45 PM, Dan Cross wrote:
    In article <memo.20251008170323.10624a@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <10ad18c$2d4$1@reader1.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:
    Solaris and HP-UX are on their last legs.

    Oracle still say they're supporting Solaris 11.4 with mainstream support
    until 2031 and offering extended support until 2037, but that's 20 years
    after the final CPU model, the M8, was released.

    I wonder what percentage of Solaris installations are on SPARC
    and what are x86 at this point. 2037 is only 12 years away.

    Back in the Sun days Solaris/SPARC was way more common than
    Solaris/x86-64 (and Solaris/x86 before that).

    And I doubt it has changed. I don't recall a time where
    Solaris/SPARC was considered dead and Solaris/x86-64 was
    considered to have a bright future. And one migration Solaris/SPARC->Linux/x86-64 is cheaper than two migrations Solaris/SPARC->Solaris/x86-64->Linux/x86-64.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.os.vms on Thu Oct 9 01:02:03 2025
    From Newsgroup: comp.os.vms

    On Wed, 8 Oct 2025 17:00:31 -0400
    Arne Vajhoj <arne@vajhoej.dk> wrote:
    On 10/8/2025 3:45 PM, Dan Cross wrote:
    In article <memo.20251008170323.10624a@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <10ad18c$2d4$1@reader1.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:
    Solaris and HP-UX are on their last legs.

    Oracle still say they're supporting Solaris 11.4 with mainstream
    support until 2031 and offering extended support until 2037, but
    that's 20 years after the final CPU model, the M8, was released.

    I wonder what percentage of Solaris installations are on SPARC
    and what are x86 at this point. 2037 is only 12 years away.

    Back in the Sun days Solaris/SPARC was way more common than
    Solaris/x86-64 (and Solaris/x86 before that).

    And I doubt it has changed. I don't recall a time where
    Solaris/SPARC was considered dead and Solaris/x86-64 was
    considered to have a bright future. And one migration Solaris/SPARC->Linux/x86-64 is cheaper than two migrations Solaris/SPARC->Solaris/x86-64->Linux/x86-64.

    Arne

    If we believe that submission of benchmark results is an indicator of
    interest then it looks like Oracle lost interest in Solaris for x86-64 approximately in 2012H2, i.e. a few years earlier than they finally
    decided to stop development of Solaris for SPARC.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Wade@dave@g4ugm.invalid to comp.os.vms on Thu Oct 9 00:47:20 2025
    From Newsgroup: comp.os.vms

    On 09/10/2025 00:02, Michael S wrote:
    On Wed, 8 Oct 2025 17:00:31 -0400
    Arne Vajhøj <arne@vajhoej.dk> wrote:

    On 10/8/2025 3:45 PM, Dan Cross wrote:
    In article <memo.20251008170323.10624a@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <10ad18c$2d4$1@reader1.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:
    Solaris and HP-UX are on their last legs.

    Oracle still say they're supporting Solaris 11.4 with mainstream
    support until 2031 and offering extended support until 2037, but
    that's 20 years after the final CPU model, the M8, was released.

    I wonder what percentage of Solaris installations are on SPARC
    and what are x86 at this point. 2037 is only 12 years away.

    Back in the Sun days Solaris/SPARC was way more common than
    Solaris/x86-64 (and Solaris/x86 before that).

    And I doubt it has changed. I don't recall a time where
    Solaris/SPARC was considered dead and Solaris/x86-64 was
    considered to have a bright future. And one migration
    Solaris/SPARC->Linux/x86-64 is cheaper than two migrations
    Solaris/SPARC->Solaris/x86-64->Linux/x86-64.

    Arne


    If we believe that submission of benchmark results is an indicator of interest then it looks like Oracle lost interest in Solaris for x86-64 approximately in 2012H2, i.e. few years earlier than they finally
    decided to stop development of Solaris for SPARC.

    Looks like the Linux releases that support Sparc are more recent than
    the Solaris builds...

    Save
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Thu Oct 9 00:27:18 2025
    From Newsgroup: comp.os.vms

    On Thu, 9 Oct 2025 00:47:20 +0200, David Wade wrote:

    Looks like the Linux releases that support Sparc are more recent than
    the Solaris builds...

    Yet another case where Linux continues to support a CPU architecture long after the proprietary OSes have given up.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.os.vms on Fri Oct 10 08:15:40 2025
    From Newsgroup: comp.os.vms

    In article <20251009010203.000044ac@yahoo.com>, already5chosen@yahoo.com (Michael S) wrote:

    If we believe that submission of benchmark results is an indicator
    of interest then it looks like Oracle lost interest in Solaris for
    x86-64 approximately in 2012H2, i.e. few years earlier than they
    finally decided to stop development of Solaris for SPARC.

    That's about right. Sun would occasionally ask my employers to support
    Solaris on x86-64 (we'd supported it on SPARC for many years) but they
    were never able to demonstrate any customer demand. After the Oracle
    takeover, the requests stopped: Oracle wanted to sell proprietary
    hardware, until they lost interest in Solaris in favour of cloud.

    John
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Fri Oct 10 10:14:36 2025
    From Newsgroup: comp.os.vms

    In article <10c6jdf$1sato$2@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/8/2025 3:45 PM, Dan Cross wrote:
    In article <memo.20251008170323.10624a@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <10ad18c$2d4$1@reader1.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:
    Solaris and HP-UX are on their last legs.

    Oracle still say they're supporting Solaris 11.4 with mainstream support until 2031 and offering extended support until 2037, but that's 20 years after the final CPU model, the M8, was released.

    I wonder what percentage of Solaris installations are on SPARC
    and what are x86 at this point. 2037 is only 12 years away.

    Back in the Sun days Solaris/SPARC was way more common than
    Solaris/x86-64 (and Solaris/x86 before that).

    Yup.

    And I doubt it has changed. I don't recall a time where
    Solaris/SPARC was considered dead and Solaris/x86-64 was
    considered to have a bright future.

    Within Sun a lot of senior engineers realized by the mid-1990s
    that SPARC was going to be a dead end. They just weren't going
    to be able to compete against Intel, and the realization within
    (at least) the Solaris kernel team was that if Sun didn't pivot
    to x86, they'd be doomed. And those folks were largely correct.
    But Sun just didn't want to give up that high margin business
    and compete against the likes of Dell on volume.

    I also don't think they took Linux seriously enough, and by the
    time they did, it was too late: had OpenSolaris happened 8 years
    earlier, maybe it could have been a viable alternative, but as
    it was, it was too little, too late.

    Perhaps even 2000 would have been too late; it's really striking
    how they didn't open up Solaris until Linux had been on the
    scene for _16 years_. They should have listened to Larry McVoy
    in 1993: https://www.landley.net/history/mirror/unix/srcos.html

    And one migration
    Solaris/SPARC->Linux/x86-64 is cheaper than two migrations Solaris/SPARC->Solaris/x86-64->Linux/x86-64.

    OTOH, if someone is still stuck with Solaris for some reason,
    they can still buy modern hardware from Dell, HPE, or Lenovo and
    there's a good chance Solaris 11.4 will work on it.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Fri Oct 10 10:30:50 2025
    From Newsgroup: comp.os.vms

    In article <memo.20251010081421.10624e@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <20251009010203.000044ac@yahoo.com>, already5chosen@yahoo.com (Michael S) wrote:

    If we believe that submission of benchmark results is an indicator
    of interest then it looks like Oracle lost interest in Solaris for
    x86-64 approximately in 2012H2, i.e. few years earlier than they
    finally decided to stop development of Solaris for SPARC.

    That's about right. Sun would occasionally ask my employers to support Solaris on x86-64 (we'd supported it on SPARC for many years) but they
    were never able to demonstrate any customer demand. After the Oracle takeover, the requests stopped: Oracle wanted to sell proprietary
    hardware, until they lost interest in Solaris in favour of cloud.

    Oracle wanted to be IBM: a single vendor that gives you a
    soup-to-nuts "enterprise" solution with everything included,
    from hardware up through services (and the attendant recurring
    revenue). I'm surprised they didn't do their own networking
    gear (true story: Sun's first revenue generating product was a
    3Mbit Ethernet board: https://akapugs.blog/2022/05/17/681/). I
    don't think they were ever interested in Sun's earlier,
    traditional markets: workstations and so forth were
    uninteresting.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Andrew Back@andrew@carrierdetect.com to comp.os.vms on Fri Oct 10 12:11:17 2025
    From Newsgroup: comp.os.vms

    On 17/09/2025 00:25, David Wade wrote:

    Interesting point. So LPARs are physical partitioning. I guess almost a type-0 hypervisor. You can't over-commit. However it's part of the
    hardware so basically "free". Given you get a minimum of 68 cores in any current Z box it isn't usually a problem. If you need to over-commit
    then you can buy zVM, a type-1 hypervisor which is really a re-badged VM/
    XA from the 1970s.

    My understanding was that LPARs as configured using PR/SM are logical resources in terms of CPU, managed using a derivative of VM
    integrated at firmware level, hence not physical partitioning as I'd understand it (such as how a Sun E10K manages this).

    A quick search turned up IBM documentation which has the line "Shared
    Logical CPUs assigned to LPARs":


    https://www.ibm.com/docs/en/zp-and-ca/3.1.0?topic=simulation-cecs-prsm-lpars

    Well, just the name "Logical" suggests not physical partitioning.

    Happy to be corrected. It's not my area of expertise and keen to improve
    my understanding.

    Andrew

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Fri Oct 10 13:27:44 2025
    From Newsgroup: comp.os.vms

    On Fri, 10 Oct 2025 12:11:17 +0100, Andrew Back wrote:

    My understanding was that LPARs as configured using PR/SM are logical resources in terms of CPU, managed using a derivative of VM
    integrated at firmware level, hence not physical partitioning as I'd understand it (such as how a Sun E10K manages this).

    Linux does management of resources like CPU, RAM etc via its "cgroups" mechanism. You don't need to be using full virtualization to take
    advantage of this; it works with containers, too.
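
    A minimal sketch of the cgroup v2 interface being referred to, in C. It assumes the unified hierarchy is mounted at /sys/fs/cgroup and that the process may create groups there; the group name "demo" and the particular limits are made up for illustration.

        #include <stdio.h>
        #include <unistd.h>
        #include <sys/stat.h>

        /* Write one value into a cgroup control file. */
        static void put(const char *path, const char *val)
        {
            FILE *f = fopen(path, "w");
            if (f == NULL) { perror(path); return; }
            fputs(val, f);
            fclose(f);
        }

        int main(void)
        {
            char pid[32];

            /* Create a group and cap it at half a CPU and 256 MiB of RAM. */
            mkdir("/sys/fs/cgroup/demo", 0755);
            put("/sys/fs/cgroup/demo/cpu.max", "50000 100000");
            put("/sys/fs/cgroup/demo/memory.max", "268435456");

            /* Move the current process into the group; whatever it runs or
             * forks afterwards inherits the limits -- no virtualization,
             * and this is what container runtimes do underneath. */
            snprintf(pid, sizeof pid, "%d\n", (int)getpid());
            put("/sys/fs/cgroup/demo/cgroup.procs", pid);
            return 0;
        }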
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Fri Oct 10 13:30:02 2025
    From Newsgroup: comp.os.vms

    On Fri, 10 Oct 2025 08:14 +0100 (BST), John Dallman wrote:

    Oracle wanted to sell proprietary hardware, until they lost interest
    in Solaris in favour of cloud.

    I'm sure the fans of the various OpenSolaris offshoots would love to see Solaris open-sourced again. Surely it would be no loss to Oracle to do
    this now.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Fri Oct 10 09:55:05 2025
    From Newsgroup: comp.os.vms

    On 10/10/2025 9:30 AM, Lawrence D'Oliveiro wrote:
    On Fri, 10 Oct 2025 08:14 +0100 (BST), John Dallman wrote:
    Oracle wanted to sell proprietary hardware, until they lost interest
    in Solaris in favour of cloud.

    I'm sure the fans of the various OpenSolaris offshoots would love to see Solaris open-sourced again. Surely it would be no loss to Oracle to do
    this now.

    I doubt it would make a difference.

    They got a copy years ago. They could not make it a success.

    There is no reason to believe that getting a copy again would make
    it a success.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Fri Oct 10 10:05:49 2025
    From Newsgroup: comp.os.vms

    On 10/10/2025 6:14 AM, Dan Cross wrote:
    In article <10c6jdf$1sato$2@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/8/2025 3:45 PM, Dan Cross wrote:
    I wonder what percentage of Solaris installations are on SPARC
    and what are x86 at this point. 2037 is only 12 years away.

    Back in the Sun days Solaris/SPARC was way more common than
    Solaris/x86-64 (and Solaris/x86 before that).

    Yup.

    And I doubt it has changed. I don't recall a time where
    Solaris/SPARC was considered dead and Solaris/x86-64 was
    considered to have a bright future.

    Within Sun a lot of senior engineers realized by the mid-1990s
    that SPARC was going to be a dead end. They just weren't going
    to be able to compete against Intel, and the realization within
    (at least) the Solaris kernel team was that if Sun didn't pivot
    to x86, they'd be doomed. And those folks were largely correct.
    But Sun just didn't want to give up that high margin business
    and compete against the likes of Dell on volume.

    Good decision. The vast majority of Solaris system revenue was
    made after that. And questionable whether they could have made
    the same revenue on x86 due to the competition.

    And one migration
    Solaris/SPARC->Linux/x86-64 is cheaper than two migrations
    Solaris/SPARC->Solaris/x86-64->Linux/x86-64.

    OTOH, if someone is still stuck with Solaris for some reason,
    they can still buy modern hardware from Dell, HPE, or Lenovo and
    there's a good chance Solaris 11.4 will work on it.
    Yes.
    But it still does not make sense to do a migration that will
    require another migration later compared to just do one
    migration to something with a future.
    Arne


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.os.vms on Fri Oct 10 20:14:40 2025
    From Newsgroup: comp.os.vms

    In article <10can8q$dop$1@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    I don't think [Oracle] were ever interested in Sun's earlier,
    traditional markets: workstations and so forth were uninteresting.

    By the time of the takeover, Sun wasn't very interested in SPARC
    workstations, because their market share was close to zero. x86-64
    Windows and Linux had demolished all the traditional Unix workstations by
    then. The Sun server business was still going, but losing money pretty
    fast.

    John
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.os.vms on Fri Oct 10 20:14:40 2025
    From Newsgroup: comp.os.vms

    In article <10camac$nch$1@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    I also don't think they took Linux seriously enough, and by the
    time they did, it was too late: had OpenSolaris happened 8 years
    earlier, maybe it could have been a viable alternative, but as
    it was, it was too little, too late.

    They wasted several years on a fiasco. Since the Linux system calls were somewhat Solaris-like in those days, they had the idea of making Solaris
    x86 capable of running Linux binaries. So they hired a bunch of Linux
    people - apparently not very good ones - and set them to work. Those guys
    came back after over a year with a huge pile of changes to the Solaris
    kernel that made it capable of running a RHEL3.0 x86 32-bit userland. But
    only that, not any other distro. The Solaris kernel people weren't
    willing to take on a load of changes that weren't done to their standards,
    and after a lot of arguing, the whole job was abandoned.

    Open Solaris seemed to be based on the idea that Linux people would
    prefer to work on Solaris, which is a terrible failure in understanding
    their motivations.

    John
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Fri Oct 10 21:58:11 2025
    From Newsgroup: comp.os.vms

    On Fri, 10 Oct 2025 09:55:05 -0400, Arne Vajhøj wrote:

    On 10/10/2025 9:30 AM, Lawrence D'Oliveiro wrote:

    I'm sure the fans of the various OpenSolaris offshoots would love to
    see Solaris open-sourced again. Surely it would be no loss to Oracle to
    do this now.

    I doubt it would make a difference.

    They got a copy years ago. They could not make it a success.

    Don't know what you mean by "success". Open-source projects live or die, not by the sheer number of users they can attract, but by the level of contributions from an active community.

    In other words, as long as projects continue to attract contributors, they
    are not (yet) in danger of fading out.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Fri Oct 10 22:02:06 2025
    From Newsgroup: comp.os.vms

    On Fri, 10 Oct 2025 20:13 +0100 (BST), John Dallman wrote:

    In article <10can8q$dop$1@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    I don't think [Oracle] were ever interested in Sun's earlier,
    traditional markets: workstations and so forth were uninteresting.

    By the time of the takeover, Sun wasn't very interested in SPARC workstations, because their market share was close to zero. x86-64
    Windows and Linux had demolished all the traditional Unix workstations
    by then. The Sun server business was still going, but l[o]sing money
    pretty fast.

    Sun continued to be reasonably profitable for a while after the collapse
    of the entire Unix workstation market, as I recall, because SPARC machines were the platform of choice for setting up these newfangled "Internet" servers.

    Linux was around and growing, but information about it could only spread
    by word of mouth, not because any big company was behind it. Eventually
    that did overcome Sun's sheer marketing visibility, but it didn't happen overnight.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Fri Oct 10 22:04:47 2025
    From Newsgroup: comp.os.vms

    On Fri, 10 Oct 2025 10:05:49 -0400, Arne Vajhøj wrote:

    The vast majority of Solaris system revenue was made after that. And questionable whether they could have made the same revenue on x86
    due to the competition.

    In my experience, companies with high-volume, low-margin products have
    a better chance of success taking over low-volume, high-margin markets
    than those trying to go the other way.

    But it still does not make sense to do a migration that will require
    another migration later compared to just do one migration to
    something with a future.

    Depending on the costs involved, cashflow considerations might dictate otherwise.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Fri Oct 10 22:11:12 2025
    From Newsgroup: comp.os.vms

    On Fri, 10 Oct 2025 20:13 +0100 (BST), John Dallman wrote:

    Those guys came back after over a year with a huge pile of changes
    to the Solaris kernel that made it capable of running a RHEL3.0 x86
    32-bit userland. But only that, not any other distro.

    Maybe not so surprising, given the prevailing mentality that a market
    must be dominated by one vendor -- which was true in the proprietary
    world, but didn't carry over to the open-source world. Red Hat were
    perhaps the most visible Linux company at the time, and more than one
    party was guilty of assuming that "Linux" was going to become
    synonymous with "Red Hat".

    The Solaris kernel people weren't willing to take on a load of
    changes that weren't done to their standards, and after a lot of
    arguing, the whole job was abandoned.

    Respect to them for *having* standards. ;)

    Open Solaris seemed to be based on the idea that Linux people would
    prefer to work on Solaris, which is a terrible failure in
    understanding their motivations.

    OpenSolaris is an open-source project. Like any open-source project,
    it doesn't depend for its success on attracting large numbers of
    people who will simply passively use it but not contribute anything
    back. The project lives and dies by the level of active contributions
    from the community.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Sat Oct 11 11:39:38 2025
    From Newsgroup: comp.os.vms

    In article <memo.20251010201326.10624f@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <10can8q$dop$1@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    I don't think [Oracle] were ever interested in Sun's earlier,
    traditional markets: workstations and so forth were uninteresting.

    By the time of the takeover, Sun wasn't very interested in SPARC workstations, because their market share was close to zero. x86-64
    Windows and Linux had demolished all the traditional Unix workstations by then. The Sun server business was still going, but losing money pretty
    fast.

    Oh totally. Sun was a shell of its former self by then. Oracle
    didn't care; everything was done through a web browser anyway.

    I'd argue that Sun more or less abandoned the workstation market
    when they switched to SVR4 and away from BSD with the move to
    Solaris from SunOS 4. I think also the focus shifted
    dramatically once Java came onto the scene; Sun seemed to move
    away from its traditional computer business in order to focus
    more fully on Java and its ecosystem.

    Their initial success was because they built the computer that
    they themselves wanted to use, and came up with a computer a
    bunch of other people wanted to use, too. It was a joy to use a
    Sun workstation at the time. But then they stopped doing that.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Sat Oct 11 11:47:54 2025
    From Newsgroup: comp.os.vms

    In article <10cb37p$1hml$2@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/10/2025 9:30 AM, Lawrence D'Oliveiro wrote:
    On Fri, 10 Oct 2025 08:14 +0100 (BST), John Dallman wrote:
    Oracle wanted to sell proprietary hardware, until they lost interest
    in Solaris in favour of cloud.

    I'm sure the fans of the various OpenSolaris offshoots would love to see Solaris open-sourced again. Surely it would be no loss to Oracle to do
    this now.

    I doubt it would make a difference.

    They got a copy years ago. They could not make it a success.

    There is no reason to believe that getting a copy again would make
    it a success.

    It never _really_ went away; illumos is still around, though at
    this point distinct from Solaris itself, which occasionally
    causes problems: an issue came up just recently where certain
    ELF sections were not merged in the linker because LLVM was
    changed to emit a bit defined for Solaris `ld` but absent from
    illumos `ld`, even though LLVM treats the two platforms as the same.
    The result was two distinct sections with the same name emitted
    into the linked object, so that Rust programs (and presumably
    C++ programs, too) failed to initialize static globals. Of
    course, the GNU linker has its own version of that bit, though
    the Solaris one predates GNU here.

    Anyway, despite things like that, there's some good engineering
    in there; it's actually a very pleasant code base to work in.
    But of course engineering doesn't matter beyond the minimum,
    hence Linux's domination.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Sat Oct 11 11:50:57 2025
    From Newsgroup: comp.os.vms

    In article <10cb3rt$1hmm$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/10/2025 6:14 AM, Dan Cross wrote:
    In article <10c6jdf$1sato$2@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/8/2025 3:45 PM, Dan Cross wrote:
    I wonder what percentage of Solaris installations are on SPARC
    and what are x86 at this point. 2037 is only 12 years away.

    Back in the Sun days Solaris/SPARC was way more common than
    Solaris/x86-64 (and Solaris/x86 before that).

    Yup.

    And I doubt it has changed. I don't recall a time where
    Solaris/SPARC was considered dead and Solaris/x86-64 was
    considered to have a bright future.

    Within Sun a lot of senior engineers realized by the mid-1990s
    that SPARC was going to be a dead end. They just weren't going
    to be able to compete against Intel, and the realization within
    (at least) the Solaris kernel team was that if Sun didn't pivot
    to x86, they'd be doomed. And those folks were largely correct.
    But Sun just didn't want to give up that high margin business
    and compete against the likes of Dell on volume.

    Good decision. The vast majority of Solaris system revenue was
    made after that. And questionable whether they could have made
    the same revenue on x86 due to the competition.

    Good in the short term, perhaps, but bad in the long term.

    And one migration
    Solaris/SPARC->Linux/x86-64 is cheaper than two migrations
    Solaris/SPARC->Solaris/x86-64->Linux/x86-64.

    OTOH, if someone is still stuck with Solaris for some reason,
    they can still buy modern hardware from Dell, HPE, or Lenovo and
    there's a good chance Solaris 11.4 will work on it.

    Yes.
    But it still does not make sense to do a migration that will
    require another migration later compared to just do one
    migration to something with a future.

    One can't really make a categorical statement like that. It
    depends too much on the application, and how much it leveraged
    the Solaris environment. For instance, something that makes
    heavy use of zones, SMF, ZFS, doors, the management stuff, etc,
    might be much easier to move to Solaris x86 than Linux. For
    that matter, it may be easier to move to illumos rather than
    Linux.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Sat Oct 11 11:59:24 2025
    From Newsgroup: comp.os.vms

    In article <memo.20251010201326.10624g@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <10camac$nch$1@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    I also don't think they took Linux seriously enough, and by the
    time they did, it was too late: had OpenSolaris happened 8 years
    earlier, maybe it could have been a viable alternative, but as
    it was, it was too little, too late.

    They wasted several years on a fiasco. Since the Linux system calls were somewhat Solaris-like in those days, they had the idea of making Solaris
    x86 capable of running Linux binaries. So they hired a bunch of Linux
    people - apparently not very good ones - and set them to work. Those guys came back after over a year with a huge pile of changes to the Solaris
    kernel that made it capable of running a RHEL3.0 x86 32-bit userland. But only that, not any other distro. The Solaris kernel people weren't
    willing to take on a load of changes that weren't done to their standards, and after a lot of arguing, the whole job was abandoned.

    Do you mean the LX-branded zone stuff? Or something else?

    Open Solaris seemed to be based on the idea that Linux people would
    prefer to work on Solaris, which is a terrible failure in understanding
    their motivations.

    I think that was one of the ideas. The other is that there is
    more value in code being open source than in it being closed.
    I think McVoy's memo was a lot more influential than he's given
    credit for here.

    But yeah, it's hard not to see it presenting as, "ok kids, fine,
    you got us; we'll let you get out of the kiddie pool now and
    come into the grownup pool with us if you stop being so fussy."

    At any rate, the developer community they expected to attract
    never actually materialized: it was almost all Sun engineers
    working on it.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.os.vms on Sat Oct 11 13:41:40 2025
    From Newsgroup: comp.os.vms

    In article <10c6irh$er0$1@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:
    In article <memo.20251007223453.10624Y@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:

    That is changing, reasonably quickly. ARM stopped releasing new
    cores that could do A32 or T32 in 2023, having been phasing them
    out since 2021.

    I should have said "ARM stopped releasing new _A-profile_ cores that
    could do A32 or T32 in 2023 ..."

    I wonder if this suggests that they'll introduce a compressed
    instruction set a la Thumb for 64 bit mode; -M profile seems to
    top out at ARMv8.1; and according to the ARMv8-M ARM, only
    supports T32.

    ARM v8-M does not have 64-bit registers or instructions, or virtual
    memory. It has an optional, simple, memory protection system. The
    additions at ARMv8.1M are not the same as the ones in ARM v8.1A.

    Presumably at some point they'll introduce an ARMv9 core for
    the embedded market and this will become an issue.

    Or maybe they won't. We could be in a world of 32-bit embedded
    cores in that space for a very long time indeed.

    It depends what you're doing, really. Qualcomm cellphone-derived SoCs
    with 64-bit Cortex-A cores are already widely used in robotics and
    similar kinds of "embedded" uses. But there's no need at all for 64-bit
    in tiny microcontrollers.

    John
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.os.vms on Sat Oct 11 15:14:40 2025
    From Newsgroup: comp.os.vms

    In article <10cdgqs$5c0$5@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:
    In article <memo.20251010201326.10624g@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:

    They wasted several years on a fiasco.
    Do you mean the LX-branded zone stuff? Or something else?

    Something else, which predated Zones. It never got released.

    John
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.os.vms on Sat Oct 11 15:14:40 2025
    From Newsgroup: comp.os.vms

    In article <10cdflq$5c0$2@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    I'd argue that Sun more or less abandoned the workstation market
    when they switched to SVR4 and away from BSD with the move to
    Solaris from SunOS 4.

    That doesn't match my experience. Solaris was first released in 1992 and
    had taken over by 1996. Sun released the Blade workstations in 2000, and
    new Ultra workstations in 2006, and didn't discontinue them until 2008.
    Until at least 2005, we had customers doing serious work on SPARC
    workstations, although nobody was switching to them from other platforms.


    Our stuff does gain significantly from 64-bit addressing; I could believe fields that didn't need 64-bit gave up on Sun earlier.

    I think also the focus shifted dramatically once Java came onto
    the scene; Sun seemed to move away from its traditional computer
    business in order to focus more full on java and its ecosystem.

    They tried that on us, but were deeply unconvincing.

    They were expecting us to be impressed that they'd done JNI wrappers of
    about ten functions from our 500+ function API. We said "Presumably you
    have tools to generate this stuff automatically?" and they didn't
    understand what we were talking about.

    John
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jayjwa@jayjwa@atr2.ath.cx.invalid to comp.os.vms on Sat Oct 11 14:44:35 2025
    From Newsgroup: comp.os.vms

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    I'm sure the fans of the various OpenSolaris offshoots would love to see Solaris open-sourced again.
    The action is at illumos now. I'm on various IRC channels about Solaris,
    Omni OS, and Open Indiana. They get way more traffic than Oracle
    Solaris.

    The default Omni OS install is about 300 MB, which makes it quick to get SSH
    and a server or virtual machine environment up. Open Indiana has a full
    desktop w/Mate. I have it in bhyve on Omni OS. ZFS-on-root, zones,
    crossbow, and bhyve are great. Unfortunately, the hardware support is
    nowhere near what Linux has, but if it runs for you it runs.

    There's several other projects under the illumos umbrella.
    --
    PGP Key ID: 781C A3E2 C6ED 70A6 B356 7AF5 B510 542E D460 5CAE
    "The Internet should always be the Wild West!"
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Sat Oct 11 19:42:48 2025
    From Newsgroup: comp.os.vms

    On Sat, 11 Oct 2025 15:13 +0100 (BST), John Dallman wrote:

    In article <10cdflq$5c0$2@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    I think also the focus shifted dramatically once Java came onto the
    scene; Sun seemed to move away from its traditional computer business
    in order to focus more full on java and its ecosystem.

    They tried that on us, but were deeply unconvincing.

    Corel, I think it was, announced a project to rewrite their entire office suite in Java.

    They got far enough into the project to realize that it was a massive step backwards in performance. At that point, they gave up.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Rich Alderson@news@alderson.users.panix.com to comp.os.vms on Sat Oct 11 21:27:03 2025
    From Newsgroup: comp.os.vms

    cross@spitfire.i.gajendra.net (Dan Cross) writes:

    [Sun's] initial success was because they built the computer that
    they themselves wanted to use, and came up with a computer a
    bunch of other people wanted to use, too. It was a joy to use a
    Sun workstation at the time. But then they stopped doing that.

    Remember that the original SUN-1 board was designed by Andy Bechtolsheim from a specification given to him by Ralph Gorin, director of the Stanford academic computing facility (LOTS), who envisioned a 4M system (1M memory, 1MIPS, 1M pixels on the screen, 1Mbps network, based on the first Ethernet at PARC).

    SUN stood for "Stanford University Network"...

    The same board was used in the original routers and terminal interface processors (TIPs) on the Stanford network, designed by Len Bosack of Cisco and XKL fame.

    Khosla and Bechtolsheim, et al., didn't "build the computer they wanted to use",
    they built the one they thought would make money when they took the design from Stanford.
    --
    Rich Alderson news@alderson.users.panix.com
    Audendum est, et veritas investiganda; quam etiamsi non assequamur,
    omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
    --Galen --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Sun Oct 12 21:07:17 2025
    From Newsgroup: comp.os.vms

    On 10/11/2025 7:50 AM, Dan Cross wrote:
    In article <10cb3rt$1hmm$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/10/2025 6:14 AM, Dan Cross wrote:
    Within Sun a lot of senior engineers realized by the mid-1990s
    that SPARC was going to be a dead end. They just weren't going
    to be able to compete against Intel, and the realization within
    (at least) the Solaris kernel team was that if Sun didn't pivot
    to x86, they'd be doomed. And those folks were largely correct.
    But Sun just didn't want to give up that high margin business
    and compete against the likes of Dell on volume.

    Good decision. The vast majority of Solaris system revenue was
    made after that. And questionable whether they could have made
    the same revenue on x86 due to the competition.

    Good in the short term, perhaps, but bad in the long term.

    Seeing a good long term business for selling proprietary Unix
    for x86-64 requires a very good imagination.

    And one migration
    Solaris/SPARC->Linux/x86-64 is cheaper than two migrations
    Solaris/SPARC->Solaris/x86-64->Linux/x86-64.

    OTOH, if someone is still stuck with Solaris for some reason,
    they can still buy modern hardware from Dell, HPE, or Lenovo and
    there's a good chance Solaris 11.4 will work on it.

    Yes.
    But it still does not make sense to do a migration that will
    require another migration later compared to just do one
    migration to something with a future.

    One can't really make a categorical statement like that. It
    depends too much on the application, and how much it leveraged
    the Solaris environment. For instance, something that makes
    heavy use of zones, SMF, ZFS, doors, the management stuff, etc,
    might be much easier to move to Solaris x86 than Linux.

    Did you read what you replied to??

    For
    that matter, it may be easier to move to illumos rather than
    Linux.

    Sure.

    But moving to Illumos is not moving to a well supported platform
    with a highly likely future.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Sun Oct 12 21:11:49 2025
    From Newsgroup: comp.os.vms

    On 10/11/2025 7:39 AM, Dan Cross wrote:
    In article <memo.20251010201326.10624f@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <10can8q$dop$1@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    I don't think [Oracle] were ever interested in Sun's earlier,
    traditional markets: workstations and so forth were uninteresting.

    By the time of the takeover, Sun wasn't very interested in SPARC
    workstations, because their market share was close to zero. x86-64
    Windows and Linux had demolished all the traditional Unix workstations by
    then. The Sun server business was still going, but losing money pretty
    fast.

    Oh totally. Sun was a shell of its former self by then. Oracle
    didn't care; everything was done through a web browser anyway.

    I'd argue that Sun more or less abandoned the workstation market
    when they switched to SVR4 and away from BSD with the move to
    Solaris from SunOS 4. I think also the focus shifted
    dramatically once Java came onto the scene; Sun seemed to move
    away from its traditional computer business in order to focus
    more full on java and its ecosystem.

    Sun did not make money on Java and did not even have potential
    for making money on Java.

    There was not much money in Java SE. The money was in Java EE.

    Sun's Java EE products sucked big time.

    So it was IBM, BEA (acquired by Oracle in 2008), JBoss (acquired
    by Redhat in 2006), Oracle, SAP etc. that made all the money
    on Java.

    Arne



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Mon Oct 13 01:21:17 2025
    From Newsgroup: comp.os.vms

    On Sun, 12 Oct 2025 21:11:49 -0400, Arne Vajhøj wrote:

    Sun did not make money on Java and did not even have potential
    for making money on Java.

    There was not much money in Java SE. The money was in Java EE.

    Sun's Java EE products sucked big time.

    I thought Oracle acquired Sun for one reason and one reason only: to get control of Java.

    They also tried to make money off J2ME. Then Google found a way to take
    the open-source version of J2SE and build a mobile smartphone platform
    that left J2ME in the dust. Naturally Ellison was livid about that, and
    found an excuse, any excuse, to sue.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Mon Oct 13 01:23:07 2025
    From Newsgroup: comp.os.vms

    On Sun, 12 Oct 2025 21:07:17 -0400, Arne Vajhøj wrote:

    But moving to Illumos is not moving to a well supported platform with a highly likely future.

    Moving to an open-source platform at least leaves your options open, for taking control of your own destiny. Particularly when the future of the proprietary platform is looking more and more iffy.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Sun Oct 12 21:42:25 2025
    From Newsgroup: comp.os.vms

    On 10/12/2025 9:21 PM, Lawrence D'Oliveiro wrote:
    On Sun, 12 Oct 2025 21:11:49 -0400, Arne Vajhøj wrote:
    Sun did not make money on Java and did not even have potential
    for making money on Java.

    There was not much money in Java SE. The money was in Java EE.

    Sun's Java EE products sucked big time.

    I thought Oracle acquired Sun for one reason and one reason only: to get control of Java.

    It was probably a very important factor.

    But it was to get control of Sun Java SE, which Sun was not making
    money on, but was the foundation for a lot of Oracle stuff that Oracle
    was making truckloads of money on (the middleware stuff acquired from
    BEA, the ERP and CRM stuff where they change the name every other year
    acquired from various other companies).

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris Townley@news@cct-net.co.uk to comp.os.vms on Mon Oct 13 10:57:09 2025
    From Newsgroup: comp.os.vms

    On 13/10/2025 02:07, Arne Vajhøj wrote:


    Seeing a good long term business for selling proprietary Unix
    for x86-64 require a very good imagination.


    Arne

    Red Hat do well out of it, although not quite proprietary, not quite open source...
    --
    Chris
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Wade@dave@g4ugm.invalid to comp.os.vms on Mon Oct 13 12:41:58 2025
    From Newsgroup: comp.os.vms

    On 13/10/2025 11:57, Chris Townley wrote:
    On 13/10/2025 02:07, Arne Vajhøj wrote:


    Seeing a good long term business for selling proprietary Unix
    for x86-64 require a very good imagination.


    Arne

    Red Hat do well out of it, although not quite propriety, not quite open source...


    RedHat have worked hard to make it impossible to use their Linux without paying. In addition they do well because in order to comply with many
    security policies you need supported software.

    So unless you are the French Gendarmerie, who have their own Linux
    Distro, you need to pay RedHat for support. It's not cheap.

    Dave
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Simon Clubley@clubley@remove_me.eisner.decus.org-Earth.UFP to comp.os.vms on Mon Oct 13 12:29:10 2025
    From Newsgroup: comp.os.vms

    On 2025-10-13, David Wade <dave@g4ugm.invalid> wrote:
    On 13/10/2025 11:57, Chris Townley wrote:
    On 13/10/2025 02:07, Arne Vajhoj wrote:


    Seeing a good long term business for selling proprietary Unix
    for x86-64 require a very good imagination.


    Red Hat do well out of it, although not quite propriety, not quite open
    source...


    RedHat have worked hard to make it impossible to use their Linux without paying. In addition they do well because in order to comply with many security policies you need supported software.

    So unless you are the French Gendarmerie, who have their own Linux
    Distro, you need to pay RedHat for support. Its not cheap


    Have a look at Oracle Linux which is a RHEL rebuild with some Oracle enhancements (such as an optional custom kernel). Unlike the rest of
    Oracle, there are no tricks (so far) and patches flow at regular
    intervals. Oracle also make patches available for previous versions
    of Oracle Linux as well.

    The most annoying thing about RHEL these days is that it's supposed to
    be an _Enterprise_ operating system which has traditionally meant
    applications being able to maintain backwards compatibility with
    previous versions. Yet, RedHat seem to change things around at will.

    BTW, RHEL 10 appears to have completely dropped 32-bit application
    support (this is different from RHEL itself having a 32-bit RHEL
    version, which got dropped around RHEL 7).

    If true, this means all your 32-bit legacy applications will stop
    working on RHEL 10. Goodness knows what they were thinking when
    they did that.

    Simon.
    --
    Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
    Walking destinations on a map are further away than they appear.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Simon Clubley@clubley@remove_me.eisner.decus.org-Earth.UFP to comp.os.vms on Mon Oct 13 12:36:02 2025
    From Newsgroup: comp.os.vms

    On 2025-10-11, John Dallman <jgd@cix.co.uk> wrote:

    They were expecting us to be impressed that they'd done JNI wrappers of
    about ten functions from our 500+ function API. We said "Presumably you
    have tools to generate this stuff automatically?" and they didn't
    understand what we were talking about.


    Bloody &*&^* stupid JNI. :-( As someone who writes some programs for
    personal use on Android, containing a mixture of Java and C code,
    I bloody well _hate_ that interface. :-(

    I believe I may have mentioned this once or twice before. :-)

    BTW, Google's latest stupidity is that they are going to stop you
    from being able to sideload even your own Android applications unless
    you register as a developer with Google. The whole point of Android
    is that it is supposed to be an open environment.

    Simon.
    --
    Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
    Walking destinations on a map are further away than they appear.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.os.vms on Mon Oct 13 16:57:40 2025
    From Newsgroup: comp.os.vms

    In article <10ciram$26jkd$1@dont-email.me>, clubley@remove_me.eisner.decus.org-Earth.UFP (Simon Clubley) wrote:

    BTW, RHEL 10 appears to have completely dropped 32-bit application
    support (this is different from RHEL itself having a 32-bit RHEL
    version, which got dropped around RHEL 7).

    If true, this means all your 32-bit legacy applications will stop
    working on RHEL 10. Goodness knows what they were thinking when
    they did that.

    It is true. No 32-bit libraries are provided for RHEL10. Therefore,
    32-bit applications cannot be run.

    However, RHEL9 runs 32-bit applications happily, and will be supported
    until May 2032, or May 2035 if you pay Red Hat some more money.

    If that isn't long enough, Tuxcare will provide patches for CVEs for as
    long as you're prepared to keep paying. Since they are the maintainers
    for the AlmaLinux work-alike of RHEL, they are decently credible. My
    employers used them for a year's patches for CentOS 7.9, and they were
    fine.

    <https://tuxcare.com/endless-lifecycle-support/>

    <https://tuxcare.com/procomputers-extended-lifecycle-support/> has more details.

    <https://www.suse.com/products/multi-linux-support/> SUSE also provide
    support for EoL RHEL.

    Personally, having a decade to replace 32-bit applications seems like
    plenty, but I work in a software development shop where we have to stay
    on supported Linuxes and tools. We stopped releasing 32-bit Linux
    software a decade ago, and had no pushback at all from customers. The
    customers continue to demand 32-bit Windows software, somewhat to my discontent.

    John
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Mon Oct 13 16:22:50 2025
    From Newsgroup: comp.os.vms

    On 10/13/2025 5:57 AM, Chris Townley wrote:
    On 13/10/2025 02:07, Arne Vajhøj wrote:
    Seeing a good long term business for selling proprietary Unix
    for x86-64 require a very good imagination.

    Red Hat do well out of it, although not quite propriety, not quite open source...

    Redhat do (or at least did) very well by selling support for
    a bundle of open source software. Almost everything in RHEL is
    open source and most of it is not written or maintained by Redhat.

    Different business model.

    But despite RHEL becoming the de facto standard for
    enterprise Linux, Redhat never made as much money as
    Sun did back in the late 90's and very early 00's.

    The open source nature of the code base had huge
    advantages for Redhat:
    * they got it for free
    * compatibility with other Linux distros
    but also with a downside:
    * it is possible to create RHEL clones

    That means wide usage but also cap on the price tag.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Mon Oct 13 16:38:38 2025
    From Newsgroup: comp.os.vms

    On 10/13/2025 6:41 AM, David Wade wrote:
    On 13/10/2025 11:57, Chris Townley wrote:
    On 13/10/2025 02:07, Arne Vajhøj wrote:
    Seeing a good long term business for selling proprietary Unix
    for x86-64 require a very good imagination.

    Red Hat do well out of it, although not quite propriety, not quite
    open source...

    RedHat have worked hard to make it impossible to use their Linux without paying. In addition they do well because in order to comply with many security policies you need supported software.

    So unless you are the French Gendarmerie, who have their own Linux
    Distro, you need to pay RedHat for support. Its not cheap

    RHEL product management is getting squeezed. The IBM bean counters
    want higher profit. And sales are dropping due to companies moving
    their Linux workload from on-prem RHEL to cloud non-RHEL. So they
    have done some "crazy" stuff to make it harder for RHEL clones.

    But RHEL clones still exist. Rocky, Alma, Oracle, Amazon etc..
    Redhat's changes may have reduced compatibility from 100%
    to 99.95%, but my impression is that the industry in general
    consider the compatibility acceptable.

    Support is easy. If you need support you pay. Redhat is still
    an obvious choice in that case. But few make that choice, because
    most only provide containers and let the cloud vendor provide
    the host Linux. And they don't want to pay Redhat.

    Arne


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Mon Oct 13 16:45:37 2025
    From Newsgroup: comp.os.vms

    On 10/13/2025 8:36 AM, Simon Clubley wrote:
    On 2025-10-11, John Dallman <jgd@cix.co.uk> wrote:
    They were expecting us to be impressed that they'd done JNI wrappers of
    about ten functions from our 500+ function API. We said "Presumably you
    have tools to generate this stuff automatically?" and they didn't
    understand what we were talking about.

    Bloody &*&^* stupid JNI. :-( As someone who writes some programs for
    personal use on Android, containing a mixture of Java and C code,
    I bloody well _hate_ that interface. :-(

    I believe I may have mentioned this once or twice before. :-)

    JNI is a very low level and very primitive interface. Very
    1990'ish. It can be a PITA to work with.

    It has been replaced by the Foreign Function & Memory API,
    introduced as an incubator in Java 16 (2021) and finalized in
    Java 22 (2024).
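
    As an aside, a minimal sketch of what the FFM replacement looks like.
    This assumes a Java 22 or later JDK and a platform where strlen is
    visible through the linker's default lookup (e.g. Linux/x86-64):

        import java.lang.foreign.*;
        import java.lang.invoke.MethodHandle;

        public class StrlenDemo {
            public static void main(String[] args) throws Throwable {
                Linker linker = Linker.nativeLinker();
                // Look up strlen in the libraries the native linker knows by default
                MemorySegment strlenAddr =
                    linker.defaultLookup().find("strlen").orElseThrow();
                // size_t strlen(const char *s), described as (ADDRESS) -> JAVA_LONG
                MethodHandle strlen = linker.downcallHandle(strlenAddr,
                    FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));
                try (Arena arena = Arena.ofConfined()) {
                    // Copy a Java string into native memory as a NUL-terminated C string
                    MemorySegment cString = arena.allocateFrom("Hello from the FFM API");
                    long len = (long) strlen.invokeExact(cString);
                    System.out.println("strlen returned " + len);
                }
            }
        }

    No hand-written C stubs, no header generation, no native method
    registration - which is exactly the boilerplate that makes JNI painful.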

    Several libraries have been created that provide an easier
    interface on top of JNI.

    Most famous is JNA.
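
    Roughly, the JNA style looks like this. JNA is a third-party library
    (net.java.dev.jna); the sketch assumes JNA 5.x on a Unix-like system
    where getpid lives in libc:

        import com.sun.jna.Library;
        import com.sun.jna.Native;

        public class JnaDemo {
            // Declare only the C functions you need; JNA builds the binding at runtime.
            public interface CLib extends Library {
                CLib INSTANCE = Native.load("c", CLib.class);
                int getpid();
            }

            public static void main(String[] args) {
                System.out.println("pid = " + CLib.INSTANCE.getpid());
            }
        }

    No generated headers and no C glue code, at the cost of some per-call
    runtime overhead compared to hand-written JNI.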

    If we focus on VMS, then I have also created such
    a library: VMSCall. It speaks the VMS calling convention
    from Java.

    In general: avoid JNI unless you really need it.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Mon Oct 13 16:52:32 2025
    From Newsgroup: comp.os.vms

    On 10/11/2025 10:13 AM, John Dallman wrote:
    In article <10cdflq$5c0$2@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:
    I think also the focus shifted dramatically once Java came onto
    the scene; Sun seemed to move away from its traditional computer
    business in order to focus more full on java and its ecosystem.

    They tried that on us, but were deeply unconvincing.

    They were expecting us to be impressed that they'd done JNI wrappers of
    about ten functions from our 500+ function API. We said "Presumably you
    have tools to generate this stuff automatically?" and they didn't
    understand what we were talking about.

    You may have been lucky.

    :-)

    The concept of:

    Java code---(millions of low level function calls via JNI)--->native code

    is not good.

    Java code---(thousands of high level service calls via JNI)--->native code

    may work OK.

    Moving data between managed code and unmanaged code is in general
    tricky and costs a lot of CPU cycles.

    And Java JNI is not even a good implementation of that.

    .NET did much better with InteropServices/DllImport and C++/CLI.

    Arne


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Mon Oct 13 21:27:13 2025
    From Newsgroup: comp.os.vms

    On Mon, 13 Oct 2025 16:45:37 -0400, Arne Vajhøj wrote:

    In general: avoid JNI unless you really need it.

    My main encounter with Java programming so far has been on Android, and I
    did find the need to use JNI once or twice. I came up with a way to make
    it slightly more palatable <https://github.com/ldo/JNIGlue>.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Mon Oct 13 21:32:17 2025
    From Newsgroup: comp.os.vms

    On Mon, 13 Oct 2025 12:41:58 +0200, David Wade wrote:

    So unless you are the French Gendarmerie, who have their own Linux
    Distro, you need to pay RedHat for support. Its not cheap

    Lots of other organizations have their own custom distros too. I'm
    sure there are even consultancies who would specialize in creating
    such things for you.

    <https://www.linuxjournal.com/content/how-build-custom-distributions-scratch>
    <https://www.yoctoproject.org/>
    <https://idalko.com/build-linux-distribution/>
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Mon Oct 13 21:36:24 2025
    From Newsgroup: comp.os.vms

    On Mon, 13 Oct 2025 16:38:38 -0400, Arne Vajhøj wrote:

    Support is easy. If you need support you pay.

    The thing is, expertise in a non-proprietary product is not confined to
    the company that makes that product. There is plenty of Open Source
    expertise available in the community that you can hire. If you rely on an outside company, particularly a large one, you know that inevitably their interests align with their shareholders, and sooner or later will come
    into conflict with yours (as happens with Microsoft, for example). If you
    rely on your own employees, that can't happen.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris Townley@news@cct-net.co.uk to comp.os.vms on Mon Oct 13 22:53:28 2025
    From Newsgroup: comp.os.vms

    On 13/10/2025 21:38, Arne Vajhøj wrote:
    On 10/13/2025 6:41 AM, David Wade wrote:
    On 13/10/2025 11:57, Chris Townley wrote:
    On 13/10/2025 02:07, Arne Vajhøj wrote:
    Seeing a good long term business for selling proprietary Unix
    for x86-64 require a very good imagination.

    Red Hat do well out of it, although not quite propriety, not quite
    open source...

    RedHat have worked hard to make it impossible to use their Linux
    without paying. In addition they do well because in order to comply
    with many security policies you need supported software.

    So unless you are the French Gendarmerie, who have their own Linux
    Distro, you need to pay RedHat for support. Its not cheap

    RHEL product management is getting squeezed. The IBM bean counters
    want higher profit. And sale is dropping due to companies moving
    their Linux workload from on-prem RHEL to cloud non-RHEL. So they
    have done some "crazy" stuff to make it harder for RHEL clones.

    But RHEL clones still exist. Rocky, Alma, Oracle, Amazon etc..
    Redhat's changes may have reduced compatibility from 100%
    to 99.95%, but my impression is that the industry in general
    consider the compatibility acceptable.

    Support is easy. If you need support you pay. Redhat is still
    an obvious choice in that case. But few make that choice, because
    most only provide containers and let the cloud vendor provide
    the host Linux. And they don't want to pay Redhat.

    Arne

    My former company would only use RHEL
    --
    Chris
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Mon Oct 13 19:23:03 2025
    From Newsgroup: comp.os.vms

    On 10/13/2025 5:53 PM, Chris Townley wrote:
    On 13/10/2025 21:38, Arne Vajhøj wrote:
    On 10/13/2025 6:41 AM, David Wade wrote:
    On 13/10/2025 11:57, Chris Townley wrote:
    On 13/10/2025 02:07, Arne Vajhøj wrote:
    Seeing a good long term business for selling proprietary Unix
    for x86-64 require a very good imagination.

    Red Hat do well out of it, although not quite propriety, not quite
    open source...

    RedHat have worked hard to make it impossible to use their Linux
    without paying. In addition they do well because in order to comply
    with many security policies you need supported software.

    So unless you are the French Gendarmerie, who have their own Linux
    Distro, you need to pay RedHat for support. Its not cheap

    RHEL product management is getting squeezed. The IBM bean counters
    want higher profit. And sale is dropping due to companies moving
    their Linux workload from on-prem RHEL to cloud non-RHEL. So they
    have done some "crazy" stuff to make it harder for RHEL clones.

    But RHEL clones still exist. Rocky, Alma, Oracle, Amazon etc..
    Redhat's changes may have reduced compatibility from 100%
    to 99.95%, but my impression is that the industry in general
    consider the compatibility acceptable.

    Support is easy. If you need support you pay. Redhat is still
    an obvious choice in that case. But few make that choice, because
    most only provide containers and let the cloud vendor provide
    the host Linux. And they don't want to pay Redhat.

    My former company would only use RHEL

    On-prem I assume?

    Because paying the cloud vendor for VM's, installing
    RHEL and Kubernetes (in the form of OpenShift for a Red Hat
    shop) instead of just using EKS/AKS/GKE would be
    "unusual".

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Mon Oct 13 19:26:56 2025
    From Newsgroup: comp.os.vms

    On 10/13/2025 5:36 PM, Lawrence D'Oliveiro wrote:
    On Mon, 13 Oct 2025 16:38:38 -0400, Arne Vajhøj wrote:
    Support is easy. If you need support you pay.

    The thing is, expertise in a non-proprietary product is not confined to
    the company that makes that product. There is plenty of Open Source
    expertise available in the community that you can hire. If you rely on an outside company, particularly a large one, you know that inevitably their interests align with their shareholders, and sooner or later will come
    into conflict with yours (as happens with Microsoft, for example). If you rely on your own employees, that can't happen.

    Enterprises with a need to document support cannot just hire a random
    consultant when the need arises.

    They need an ongoing contract with a company with an SLA and a reputation
    that indicates they can deliver in case it is needed.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris Townley@news@cct-net.co.uk to comp.os.vms on Tue Oct 14 00:58:18 2025
    From Newsgroup: comp.os.vms

    On 14/10/2025 00:23, Arne Vajhøj wrote:
    On 10/13/2025 5:53 PM, Chris Townley wrote:
    On 13/10/2025 21:38, Arne Vajhøj wrote:
    On 10/13/2025 6:41 AM, David Wade wrote:
    On 13/10/2025 11:57, Chris Townley wrote:
    On 13/10/2025 02:07, Arne Vajhøj wrote:
    Seeing a good long term business for selling proprietary Unix
    for x86-64 require a very good imagination.

    Red Hat do well out of it, although not quite propriety, not quite
    open source...

    RedHat have worked hard to make it impossible to use their Linux
    without paying. In addition they do well because in order to comply
    with many security policies you need supported software.

    So unless you are the French Gendarmerie, who have their own Linux
    Distro, you need to pay RedHat for support. Its not cheap

    RHEL product management is getting squeezed. The IBM bean counters
    want higher profit. And sale is dropping due to companies moving
    their Linux workload from on-prem RHEL to cloud non-RHEL. So they
    have done some "crazy" stuff to make it harder for RHEL clones.

    But RHEL clones still exist. Rocky, Alma, Oracle, Amazon etc..
    Redhat's changes may have reduced compatibility from 100%
    to 99.95%, but my impression is that the industry in general
    consider the compatibility acceptable.

    Support is easy. If you need support you pay. Redhat is still
    an obvious choice in that case. But few make that choice, because
    most only provide containers and let the cloud vendor provide
    the host Linux. And they don't want to pay Redhat.

    My former company would only use RHEL

    On-prem I assume?

    Because paying the cloud vendor for VM's, installing
    RHEL and Kubernetes (in form of Openshift for a Redhat
    shop) instead of just using EKS/AKS/GKE would be
    "unusual".

    Correct - that rather beat the cloud!
    --
    Chris
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Tue Oct 14 00:20:14 2025
    From Newsgroup: comp.os.vms

    On Mon, 13 Oct 2025 19:26:56 -0400, Arne Vajhøj wrote:

    On 10/13/2025 5:36 PM, Lawrence D'Oliveiro wrote:

    On Mon, 13 Oct 2025 16:38:38 -0400, Arne Vajhøj wrote:
    Support is easy. If you need support you pay.

    The thing is, expertise in a non-proprietary product is not
    confined to the company that makes that product. There is plenty of
    Open Source expertise available in the community that you can hire.
    If you rely on an outside company, particularly a large one, you
    know that inevitably their interests align with their shareholders,
    and sooner or later will come into conflict with yours (as happens
    with Microsoft, for example). If you rely on your own employees,
    that can't happen.

    Enterprises with a need to document support can not just hire a
    random consultant when the need arrive.

    If something is mission-critical and core to their entire business,
    they want a staff they can rely on, completely, to manage that
    properly.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Mon Oct 13 21:20:43 2025
    From Newsgroup: comp.os.vms

    On 10/13/2025 8:20 PM, Lawrence D'Oliveiro wrote:
    On Mon, 13 Oct 2025 19:26:56 -0400, Arne Vajhøj wrote:
    On 10/13/2025 5:36 PM, Lawrence D'Oliveiro wrote:
    On Mon, 13 Oct 2025 16:38:38 -0400, Arne Vajhøj wrote:
    Support is easy. If you need support you pay.

    The thing is, expertise in a non-proprietary product is not
    confined to the company that makes that product. There is plenty of
    Open Source expertise available in the community that you can hire.
    If you rely on an outside company, particularly a large one, you
    know that inevitably their interests align with their shareholders,
    and sooner or later will come into conflict with yours (as happens
    with Microsoft, for example). If you rely on your own employees,
    that can't happen.

    Enterprises with a need to document support can not just hire a
    random consultant when the need arrive.

    If something is mission-critical and core to their entire business,
    they want a staff they can rely on, completely, to manage that
    properly.

    Few/no CIO's want to support the hundreds of millions of lines
    of open source code their business relies on in-house.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Tue Oct 14 01:52:50 2025
    From Newsgroup: comp.os.vms

    In article <memo.20251011151314.10624m@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <10cdflq$5c0$2@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    I'd argue that Sun more or less abandoned the workstation market
    when they switched to SVR4 and away from BSD with the move to
    Solaris from SunOS 4.

    That doesn't match my experience. Solaris was first released in 1992 and
    had taken over by 1996. Sun released the Blade workstations in 2000, and
    new Ultra workstations in 2006, and didn't discontinue them until 2008.
    Until at least 2005, we had customers doing serious work on SPARC workstations, although nobody was switching to them from other platforms.

    I remember well the transition from 4.2/4.3BSD/SunOS 4.1.3U1 and
    4.1.4, to Solaris 2.5/2.5.1/2.6. On the same hardware, Solaris
    was significantly slower, but beyond that, the user experience
    just wasn't what it had been on SunOS 4. Really basic stuff
    that, in retrospect, seems like nitpicking but at the time felt
    indicitive of a change in overall focus, were as simple as which
    version of the interpreter the `awk` command ran: on SunOS 4,
    this had been `nawk`; on Solaris it was the old, pre-book `awk`
    from 7th Edition Unix.

    Things like that added up, and really made it feel like Sun had
    abandoned their traditional engineer/CS/scientist userbase in favor
    of more business/finance style applications. This was no
    longer the hacker's box; it was now a machine for doing Serious
    Work. Apparently, even the guy who was charged with doing the
    SVR4 bringup on SPARC at Sun (after the AT&T deal) was dismayed
    by just how much of a step backwards it really was.

    By 1997 or so, I had fully switched to either Linux or FreeBSD
    on Intel, as had a lot of people. A few years later everything
    was 64 bit anyway.

    Our stuff does gain significantly from 64-bit addressing; I could believe fields that didn't need 64-bit gave up on Sun earlier.

    I can see that. Personally, I really liked Tru64 nee DEC Unix
    nee OSF/1 AXP on Alpha. OSF/1 felt like it was a much better
    system overall if one had to go swimming in Unix waters,
    while Solaris felt underbaked.

    Of course, Solaris was still better than AIX, HP-UX, or even
    Irix, but it was a real disappointment when none of the other
    OSF members followed through on actually adopting OSF/1.
    "Oppose Sun Forever!"

    I think also the focus shifted dramatically once Java came onto
    the scene; Sun seemed to move away from its traditional computer
    business in order to focus more full on java and its ecosystem.

    They tried that on us, but were deeply unconvincing.

    They were expecting us to be impressed that they'd done JNI wrappers of
    about ten functions from our 500+ function API. We said "Presumably you
    have tools to generate this stuff automatically?" and they didn't
    understand what we were talking about.

    I never quite got the business play behind Java from Sun's
    perspective. It seemed to explode in popularity overnight, but
    they never quite figured out how to monetize it; I remember
    hearing from some Sun folks that they wanted to set standards
    and be at the center of the ecosystem, but were content to let
    other players actually build the production infrastructure. I
    thought Microsoft really ran circles around them with Java on
    the client side, and on the server side, it made less sense. A
    bytecode language makes some sense in a fractured and extremely
    heterogeneous client environment; less so in more controlled
    server environments. I'll grant that the _language_ was better
    than many of the alternatives, but the JRE felt like more of an
    impediment for where Java ultimately landed.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Tue Oct 14 01:56:31 2025
    From Newsgroup: comp.os.vms

    In article <mddh5w4gba0.fsf@panix5.panix.com>,
    Rich Alderson <news@alderson.users.panix.com> wrote:
    cross@spitfire.i.gajendra.net (Dan Cross) writes:

    [Sun's] initial success was because they built the computer that
    they themselves wanted to use, and came up with a computer a
    bunch of other people wanted to use, too. It was a joy to use a
    Sun workstation at the time. But then they stopped doing that.

    Remember that the original SUN-1 board was designed by Andy Bechtolsheim from a
    specification given to him by Ralph Gorin, director of the Stanford academic
    computing facility (LOTS), who envisioned a 4M system (1M memory, 1MIPS, 1M
    pixels on the screen, 1Mbps network, based on the first Ethernet at PARC).

    SUN stood for "Stanford University Network"...

    The same board was used in the original routers and terminal interface
    processors (TIPs) on the Stanford network, designed by Len Bosack of Cisco and
    XKL fame.

    Khosla and Bechtolsheim, et al., didn't "build the computer they wanted to use",
    they built the one they thought would make money when they took the design from
    Stanford.

    Khosla was out within what, 4 or 5 years? And he wasn't an
    engineer.

    The "building the computer they wanted to use" bit comes
    first-hand from engineers with single-digit Sun employee
    numbers. It wasn't just the hardware, but the software as well,
    of course.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Tue Oct 14 02:03:27 2025
    From Newsgroup: comp.os.vms

    On Mon, 13 Oct 2025 21:20:43 -0400, Arne Vajhøj wrote:

    On 10/13/2025 8:20 PM, Lawrence D'Oliveiro wrote:

    On Mon, 13 Oct 2025 19:26:56 -0400, Arne Vajhøj wrote:

    Enterprises with a need to document support can not just hire a
    random consultant when the need arrive.

    If something is mission-critical and core to their entire business,
    they want a staff they can rely on, completely, to manage that
    properly.

    Few/no CIO's want to support the hundreds of millions of lines
    of open source code their business rely on themselves.

    The whole point of having all that code is that they didn't need to write
    it themselves.

    You have to take responsibility for your own business, don't you?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Tue Oct 14 02:04:14 2025
    From Newsgroup: comp.os.vms

    In article <10chjc5$1s2mr$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/11/2025 7:50 AM, Dan Cross wrote:
    In article <10cb3rt$1hmm$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/10/2025 6:14 AM, Dan Cross wrote:
    Within Sun a lot of senior engineers realized by the mid-1990s
    that SPARC was going to be a dead end. They just weren't going
    to be able to compete against Intel, and the realization within
    (at least) the Solaris kernel team was that if Sun didn't pivot
    to x86, they'd be doomed. And those folks were largely correct.
    But Sun just didn't want to give up that high margin business
    and compete against the likes of Dell on volume.

    Good decision. The vast majority of Solaris system revenue was
    made after that. And questionable whether they could have made
    the same revenue on x86 due to the competition.

    Good in the short term, perhaps, but bad in the long term.

    Seeing a good long term business for selling proprietary Unix
    for x86-64 require a very good imagination.

    Maybe. It doesn't take much imagination at all to see that a
    business built on selling proprietary Unix on SPARC doesn't have
    a long term future. Don't forget that Sun dove off a cliff.

    And one migration
    Solaris/SPARC->Linux/x86-64 is cheaper than two migrations
    Solaris/SPARC->Solaris/x86-64->Linux/x86-64.

    OTOH, if someone is still stuck with Solaris for some reason,
    they can still buy modern hardware from Dell, HPE, or Lenovo and
    there's a good chance Solaris 11.4 will work on it.

    Yes.
    But it still does not make sense to do a migration that will
    require another migration later compared to just do one
    migration to something with a future.

    One can't really make a categorical statement like that. It
    depends too much on the application, and how much it leveraged
    the Solaris environment. For instance, something that makes
    heavy use of zones, SMF, ZFS, doors, the management stuff, etc,
    might be much easier to move to Solaris x86 than Linux.

    Did you read what you replied to??

    Yes. Did you?

    |One can't really make a categorical statement like that.

    That is, if a customer has a big investment in Solaris, which
    has strong binary compatibility guarantees across versions and
    guarantees about compatibility at the source level across
    architectures, then it actually _may_ be more cost effective to
    pivot to Solaris x86_64 than to Linux.

    This is not rocket science.

    For
    that matter, it may be easier to move to illumos rather than
    Linux.

    Sure.

    But moving to Illumos is not moving to a well supported platform
    with a highly likely future.

    Again, it really depends on the customer. illumos is open
    source; if a customer has deep enough pockets and really wants
    to stick to that world, they can pay someone to maintain it or
    do it themselves.

    That's not appropriate for every organization, of course, but it
    is not totally unreasonable for those that can and want to do
    it.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.os.vms on Tue Oct 14 02:12:47 2025
    From Newsgroup: comp.os.vms

    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/13/2025 5:36 PM, Lawrence D'Oliveiro wrote:
    On Mon, 13 Oct 2025 16:38:38 -0400, Arne Vajhøj wrote:
    Support is easy. If you need support you pay.

    The thing is, expertise in a non-proprietary product is not confined to
    the company that makes that product. There is plenty of Open Source
    expertise available in the community that you can hire. If you rely on an
    outside company, particularly a large one, you know that inevitably their
    interests align with their shareholders, and sooner or later will come
    into conflict with yours (as happens with Microsoft, for example). If you
    rely on your own employees, that can't happen.

    Enterprises with a need to document support can not just hire a random consultant when the need arrive.

    They need an ongoing contract with a company with a SLA and a reputation
    that indicates they can deliver in case it is needed.

    If a company needs the paper then they have to pay for it. They can
    still choose a smaller company as the source of support.

    The OS data that you found a few years ago claims that the vast
    majority of companies use Linux distributions for which a support
    contract would be with a third party. Red Hat seems to be used by a
    relatively small percentage of companies using Linux.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Tue Oct 14 02:16:34 2025
    From Newsgroup: comp.os.vms

    In article <memo.20251011134009.10624j@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <10c6irh$er0$1@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:
    In article <memo.20251007223453.10624Y@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:

    That is changing, reasonably quickly. ARM stopped releasing new
    cores that could do A32 or T32 in 2023, having been phasing them
    out since 2021.

    I should have said "ARM stopped releasing new _A-profile_ cores that
    could do A32 or T32 in 2023 ..."

    I wonder if this suggests that they'll introduce a compressed
    instruction set a la Thumb for 64 bit mode; -M profile seems to
    top out at ARMv8.1; and according to the ARMv8-M ARM, only
    supports T32.

    ARM v8-M does not have 64-bit registers or instructions, or virtual
    memory. It has an optional, simple, memory protection system.

    Yes. As I mentioned, ARMv8.1 in M profile is T32 only,
    according to the ARM (Arch. Ref. Manual in this context---leave
    it up to ARM to overload the acronym ARM so many different
    ways).

    There is no M profile for ARMv9 at present. My question earlier
    was, if there were, one wonders whether they would use 64-bit
    registers and introduce a compressed instruction set.

    The
    additions at ARMv8.1M are not the same as the ones in ARM v8.1A.

    Presumably at some point they'll introduce an ARMv9 core for
    the embedded market and this will become an issue.

    Or maybe they won't. We could be in a world of 32-bit embedded
    cores in that space for a very long time indeed.

    It depends what you're doing, really. Qualcomm cellphone-derived SoCs
    with 64-bit Cortex-A cores are already widely used in robotics and
    similar kinds of "embedded" uses. But there's no need at all for 64-bit
    in tiny microcontrollers.

    Well, then I suppose they'll either split their product line or
    introduce a 32-bit M profile for V9.

    I do not entirely agree with that assessment re: 64-bit in
    MCUs, however: a lot of work is going into cryptographically
    signed secure boot stacks and hardware attestation for firmware;
    64-bit registers can make implementing cryptography primitives
    with large key sizes much easier.

    E.g., the PSP on AMD EPYC is a Cortex-A5 right now (presumably
    because they wanted a real MMU, and not just an MPU), but I
    could imagine that changing over time.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Tue Oct 14 02:43:58 2025
    From Newsgroup: comp.os.vms

    On Tue, 14 Oct 2025 02:12:47 -0000 (UTC), Waldek Hebisch wrote:

    OS data that you found few years ago claims that vast majority of
    companies use Linux distributions for which support contract would
    be with third party.

    Or even in-house staff. Much more convenient for enterprise-level
    deployments, I would say. Look at how big outfits like Google/Alphabet, Facebook/Meta, and even Microsoft, create their own distros. It's not even that hard to do.

    Red Hat seem to be used by relatively small percentage of companies
    using Linux.

    You see that all too often in this group and others, the mentality among
    those more used to proprietary platforms (i.e. Microsoft) that one company
    must totally dominate, therefore "Linux" must be synonymous with "Red Hat".
    It's not, never has been, never will be.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.os.vms on Tue Oct 14 17:08:40 2025
    From Newsgroup: comp.os.vms

    In article <10ckadi$7dr$1@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    In article <memo.20251011151314.10624m@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    Our stuff does gain significantly from 64-bit addressing; I could
    believe fields that didn't need 64-bit gave up on Sun earlier.

    I can see that. Personally, I really liked Tru64 nee DEC Unix
    nee OSF/1 AXP on Alpha. OSF/1 felt like it was a much better
    system overall if one had to go if swimming in Unix waters,
    while Solaris felt underbaked.

    I was happy with it, but a very experienced Unix chap of my acquaintance reckoned "It doesn't run - it just lurches!" regarding it as a
    Frankenstein job of parts stitched together.

    Of course, Solaris was still better than AIX, HP-UX, or even
    Irix, but it was a real disappointment when none of the other
    OSF members followed through on actually adopting OSF/1.
    "Oppose Sun Forever!"

    Time was when we supported AIX, HP-UX, Irix, OSF1, and Solaris. We
    probably supported them all simultaneously on 32-bit (except Tru64) and
    64-bit for a while, along with HP-UX Itanium, although we got rid of that faster than HP-UX PA-RISC.

    I never quite got the business play behind Java from Sun's
    perspective. It seemed to explode in popularity overnight, but
    they never quite figured out how to monetize it; I remember
    hearing from some Sun folks that they wanted to set standards
    and be at the center of the ecosystem, but were content to let
    other players actually build the production infrastructure.

    The trick with monetising something like that is to price it so that
    customers find it far cheaper to pay than to write their own. However,
    you still need to be able to make money on it. I've seen this done with a sliding royalty scale.

    However, this kind of scheme definitely would have clashed with the
    desire Sun had to make Java a standard piece of client software. It may
    have been doomed to unprofitability by the enthusiasm of its creators.

    I thought Microsoft really ran circles around them with Java on
    the client side, and on the server side, it made less sense. A
    bytecode language makes some sense in a fractured and extremely
    heterogenous client environment; less so in more controlled
    server environments. I'll grant that the _language_ was better
    than many of the alternatives, but the JRE felt like more of an
    impediment for where Java ultimately landed.

    The main uses for server-side Java, as I understand it, are:

    It happened to have the right idioms for writing server front-ends that
    could distribute requests to the backend efficiently. Being able to do
    this the same way, within the parts of the JRE that are effectively an OS,
    on all the different host platforms, was more efficient in developer time
    than writing a bunch of different implementations. Developer time is
    really expensive.

    The hardware resources it soaks up at runtime are beneficial for hardware vendors, as they get to sell more hardware.

    John
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.os.vms on Tue Oct 14 17:08:40 2025
    From Newsgroup: comp.os.vms

    In article <10cjo0e$2f7tf$2@dont-email.me>, arne@vajhoej.dk (Arne Vajhoj) wrote:

    But RHEL clones still exist. Rocky, Alma, Oracle, Amazon etc..
    Redhat's changes may have reduced compatibility from 100%
    to 99.95%, but my impression is that the industry in general
    consider the compatibility acceptable.

    Yup, it is. If you're selling closed-source binary software, which is
    still big business in some sectors, then if you sell for Linux, you must support RHEL _and_ its work-alikes. Doing this isn't actually too hard.

    You run your central build machines on real RHEL, and pay for support on
    that. Your development and test machines run work-alikes. You make sure everything you ship was built on the central build machines.

    This works well. Compatibility from RHEL onto the work-alikes gets tested
    far more than the other way around, by the work-alike producers, and the
    ISVs who work this way. It allows your customers to standardise on a
    work-alike and be supported.

    RHEL (and work-alikes) have other advantages. They have long and
    predictable support lives, and excellent compatibility of software built
    on older point releases onto newer point releases (e.g., the current RHEL
    9.6 and the 9.7 due in November). You can update your development, build
    and test machines onto new point releases as they appear (after
    confirmatory testing, of course!) and your build machines remain on a
    supported version, with no need to isolate them or use weird security
    methods. You can thus encourage your customers to update, so that they
    stay secure.

    RHEL freezes its glibc version when a new major version ships. They apply bug-fix and security updates to it, of course. Glibc has strong forwards compatibility commitments: every symbol is versioned, and behaviour
    changes require new versions of affected symbols. This means that
    anything built against a given version of glibc will run on any later
    version (for the same architecture, pointer size, etc).

    Red Hat offers optional "GCC Toolsets" for supported RHEL. These give you
    a newer version of gcc, installed as an optional extra, with a path of
    its own and not affecting the system compiler. This also comes with
    static versions of libstdc++ and libgcc, the support libraries for the compiler, and some scripts for Gnu ld. The effect of all this is that
    when you build software on, say, RHEL8, which comes with GCC 8.x, using GCC
    Toolset 11, which provides GCC 11, you get binaries which will run on an ordinary RHEL8 system without any special tools installed. The GCC 11
    support library functions that aren't in GCC 8's support libraries get statically linked into your binaries.

    The effect of that is that you can build C and C++ code on an RHEL system
    with a GCC Toolset, and as far as the languages are concerned, your code
    will run correctly on any Linux with equal or later glibc and gcc to the
    RHEL you used.

    That sounds terrifying, but it works. It probably stays working because
    Red Hat, who are significant contributors to both glibc and gcc, make
    sure that it stays working.

    That means that with a single x86-64 build on RHEL 8.10, I can support
    all the RHEL work-alikes, plus SLES, Ubuntu, and Debian. I also know it's
    extremely likely to work on any Linux with glibc 2.28 (the version RHEL8
    uses) or later and GCC 8 (ditto) or later. This is great from my point of
    view. Doing extra builds would be more expensive, and testing them
    thoroughly would be much more expensive.

    I don't know if Red Hat set out to produce the best Linux for building closed-source binary software on, but they achieved it.

    John
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.os.vms on Tue Oct 14 17:08:40 2025
    From Newsgroup: comp.os.vms

    In article <10ckbq2$7dr$4@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    Well, then I suppose they'll either split their product line or
    introduce a 32-bit M profile for V9.

    They are effectively in the process of splitting the product line. The
    current instruction sets with a future are T32 and A64. A32 is on the way
    out.

    I'm do not entirely agree with that assessment re: 64-bit in
    MCUs, however: a lot of work is going into cryptographically
    signed secure boot stacks and hardware attestation for firmware;
    64-bit registers can make implementing cryptography primitives
    with large key sizes much easier.

    Fair point. The question would then be if it's worth creating a T64 or
    just a simplified A64. It seems likely ARM is discussing that internally
    or maybe even with some customers under NDA.

    John
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Tue Oct 14 19:38:11 2025
    From Newsgroup: comp.os.vms

    On Tue, 14 Oct 2025 17:07 +0100 (BST), John Dallman wrote:

    The main uses for sever-side Java, as I understand it, are:

    It happened to have the right idioms for writing server front-ends
    that could distribute requests to the backend efficiently.

    Java had threading built right into the core of the language from the beginning. Trouble is, threading is not necessarily the most efficient
    way to do everything.

    If you want to use poll(2)-type calls, the Java wrapper makes that
    unbelievably convoluted to do. Compare the Java API <https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/nio/channels/Selector.html>
    with the Python one <https://docs.python.org/3/library/select.html>.
    Note that the Python API is complete on that page, while the Java page
    is just the beginning of your API journey ...
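
    To illustrate the point, a bare-bones sketch of a poll-style echo
    server using the NIO Selector API (the port number and buffer size
    are arbitrary; a real server would also track partial writes):

        import java.io.IOException;
        import java.net.InetSocketAddress;
        import java.nio.ByteBuffer;
        import java.nio.channels.*;
        import java.util.Iterator;

        public class SelectorEcho {
            public static void main(String[] args) throws IOException {
                Selector selector = Selector.open();
                ServerSocketChannel server = ServerSocketChannel.open();
                server.bind(new InetSocketAddress(8080));
                server.configureBlocking(false);       // selectable channels must be non-blocking
                server.register(selector, SelectionKey.OP_ACCEPT);

                while (true) {
                    selector.select();                 // block until at least one channel is ready
                    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next();
                        it.remove();                   // keys are not removed automatically
                        if (key.isAcceptable()) {
                            SocketChannel client = server.accept();
                            client.configureBlocking(false);
                            client.register(selector, SelectionKey.OP_READ);
                        } else if (key.isReadable()) {
                            SocketChannel client = (SocketChannel) key.channel();
                            ByteBuffer buf = ByteBuffer.allocate(1024);
                            if (client.read(buf) < 0) { client.close(); continue; }
                            buf.flip();
                            client.write(buf);         // naive echo back to the sender
                        }
                    }
                }
            }
        }

    Even this toy version needs the Selector, SelectionKey, channel
    registration and key-set bookkeeping; the equivalent with Python's
    select/selectors modules is a handful of lines.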
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Rich Alderson@news@alderson.users.panix.com to comp.os.vms on Tue Oct 14 15:49:08 2025
    From Newsgroup: comp.os.vms

    cross@spitfire.i.gajendra.net (Dan Cross) writes:

    In article <mddh5w4gba0.fsf@panix5.panix.com>,
    Rich Alderson <news@alderson.users.panix.com> wrote:
    cross@spitfire.i.gajendra.net (Dan Cross) writes:

    [Sun's] initial success was because they built the computer that
    they themselves wanted to use, and came up with a computer a
    bunch of other people wanted to use, too. It was a joy to use a
    Sun workstation at the time. But then they stopped doing that.

    Remember that the original SUN-1 board was designed by Andy Bechtolsheim
    from a specification given to him by Ralph Gorin, director of the Stanford
    academic computing facility (LOTS), who envisioned a 4M system (1M memory,
    1MIPS, 1M pixels on the screen, 1Mbps network, based on the first Ethernet
    at PARC).

    SUN stood for "Stanford University Network"...

    The same board was used in the original routers and terminal interface
    processors (TIPs) on the Stanford network, designed by Len Bosack of Cisco
    and XKL fame.

    Khosla and Bechtolsheim, et al., didn't "build the computer they wanted to
    use", they built the one they thought would make money when they took the
    design from Stanford.

    Khosla was out within what, 4 or 5 years? And he wasn't an engineer.

    Indeed. He was Andy's friend from the Graduate school of Business, and probably the one who said "this thing could make money!!!!" and started the search that led to Scott McNealy. Andy would be the one to bring in Bill Joy.

    The "building the computer they wanted to use" bit comes first-hand from engineers with single-digit Sun employee numbers. It wasn't just the hardware, but the software as well, of course.

    That's probably what they were told, but that's not what the VCs were told.
    --
    Rich Alderson news@alderson.users.panix.com
    Audendum est, et veritas investiganda; quam etiamsi non assequamur,
    omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
    --Galen
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Tue Oct 14 16:47:12 2025
    From Newsgroup: comp.os.vms

    On 10/13/2025 10:12 PM, Waldek Hebisch wrote:
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/13/2025 5:36 PM, Lawrence D'Oliveiro wrote:
    On Mon, 13 Oct 2025 16:38:38 -0400, Arne Vajhøj wrote:
    Support is easy. If you need support you pay.

    The thing is, expertise in a non-proprietary product is not confined to
    the company that makes that product. There is plenty of Open Source
    expertise available in the community that you can hire. If you rely on an
    outside company, particularly a large one, you know that inevitably their
    interests align with their shareholders, and sooner or later will come
    into conflict with yours (as happens with Microsoft, for example). If you
    rely on your own employees, that can't happen.

    Enterprises with a need to document support can not just hire a random
    consultant when the need arrive.

    They need an ongoing contract with a company with a SLA and a reputation
    that indicates they can deliver in case it is needed.

    If company need paper then they have to pay for it. They still can
    choose smaller company as source of support.

    Sure.

    But that smaller company needs to have a reputation that makes
    the paper look credible in the eyes of internal and external
    auditors.

    OS data that you found few years ago claims that vast majority of
    companies use Linux distributions for which support contract would
    be with third party. Red Hat seem to be used by relatively small
    percentage of companies using Linux.

    It depends a lot on how you are counting.

    number RHEL instances / number Linux servers is not that big.

    number on-prem RHEL instances / number on-prem Linux servers with paid
    service by distro creator is pretty big. Likely over 50%.

    So on one side you can say that RHEL is only big in a small
    part of the overall Linux server market, but its importance
    goes far beyond what that indicates, because:
    1) it is the market that generates most of the Linux revenue
    2) it has made RHEL *the* enterprise Linux distro - the
    one that others copy

    A lot of the servers not running RHEL do run either
    commercial RHEL clones or free RHEL clones.
    The latter being CentOS in the old days (before Redhat
    messed that up) and today RockyLinux, AlmaLinux etc..

    Arne





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Tue Oct 14 22:10:28 2025
    From Newsgroup: comp.os.vms

    On Tue, 14 Oct 2025 16:47:12 -0400, Arne Vajhøj wrote:

    1) [RHEL] is the market that generates most of Linux revenue

    Maybe not. Maybe a lot of that revenue is invisible, simply because it involves small businesses (like myself and my clients).
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Tue Oct 14 20:13:35 2025
    From Newsgroup: comp.os.vms

    On 10/13/2025 10:03 PM, Lawrence D'Oliveiro wrote:
    On Mon, 13 Oct 2025 21:20:43 -0400, Arne Vajhøj wrote:
    On 10/13/2025 8:20 PM, Lawrence D'Oliveiro wrote:
    On Mon, 13 Oct 2025 19:26:56 -0400, Arne Vajhøj wrote:
    Enterprises with a need to document support can not just hire a
    random consultant when the need arrive.

    If something is mission-critical and core to their entire business,
    they want a staff they can rely on, completely, to manage that
    properly.

    Few/no CIO's want to support the hundreds of millions of lines
    of open source code their business rely on themselves.

    The whole point of having all that code is that they didn't need to write it themselves.

    Yes. But they want free beer more than free speech.

    You have to take responsibility for your own business, don't you?

    They don't want to write or maintain their own OS.

    They don't want to write or maintain their own platform
    software (web/app servers, database servers, message queue
    servers, cache servers etc.).

    They don't want to write or maintain their own tools
    (compilers, build tools, IDE's, source control, unit
    test frameworks etc.).

    None of that stuff is their business.

    They want to focus on their business: the applications
    that help them produce and sell whatever products
    or services they offer.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Tue Oct 14 20:35:03 2025
    From Newsgroup: comp.os.vms

    On 10/13/2025 10:04 PM, Dan Cross wrote:
    In article <10chjc5$1s2mr$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/11/2025 7:50 AM, Dan Cross wrote:
    For
    that matter, it may be easier to move to illumos rather than
    Linux.

    Sure.

    But moving to Illumos is not moving to a well supported platform
    with a highly likely future.

    Again, it really depends on the customer. illumos is open
    source; if a customer has deep enough pockets and really wants
    to stick to that world, they can pay someone to maintain it or
    do it themselves.

    That's not appropriate for every organization, of course, but it
    is not totally unreasonable for those that can and want to do
    it.

    It is not appropriate for most organizations.

    Maintaining an OS (whether in-house or some consulting
    company) is not what CIO's are looking for.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Tue Oct 14 21:17:20 2025
    From Newsgroup: comp.os.vms

    On 10/13/2025 9:52 PM, Dan Cross wrote:
    In article <memo.20251011151314.10624m@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <10cdflq$5c0$2@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:
    I think also the focus shifted dramatically once Java came onto
    the scene; Sun seemed to move away from its traditional computer
    business in order to focus more full on java and its ecosystem.

    They tried that on us, but were deeply unconvincing.

    They were expecting us to be impressed that they'd done JNI wrappers of
    about ten functions from our 500+ function API. We said "Presumably you
    have tools to generate this stuff automatically?" and they didn't
    understand what we were talking about.

    I never quite got the business play behind Java from Sun's
    perspective. It seemed to explode in popularity overnight, but
    they never quite figured out how to monetize it; I remember
    hearing from some Sun folks that they wanted to set standards
    and be at the center of the ecosystem, but were content to let
    other players actually build the production infrastructure.

    Sun did not have any success making money off Java.
    But they made so much money off their server business that they
    apparently thought they could spend some money on Java. Until
    a decade later when their server market dried up - Java
    development was in slow motion from around 2006 until
    Oracle restarted it.

    Sun did build J2EE/JavaEE app servers. One miserable failure
    after another. IBM, BEA and the others made the money from
    that market. And in all fairness they also did a lot of the
    work. For one of the early releases of J2EE (1.2 or 1.3) it is said
    that IBM did 80% of the spec work.

    I
    thought Microsoft really ran circles around them with Java on
    the client side, and on the server side, it made less sense. A
    bytecode language makes some sense in a fractured and extremely heterogenious client environment; less so in more controlled
    server environments. I'll grant that the _language_ was better
    than many of the alternatives, but the JRE felt like more of an
    impediment for where Java ultimately landed.

    Java on desktop never took off. AWT, Swing, SWT, JavaFX none
    of them. Bad timing. The world was switching to web UI's and
    there were many well established technologies in the market:
    VB6, Delphi, MSVC++ with MFC etc..

    (Java applets had a good run in browsers, because they could do
    a lot of interesting stuff, but that stopped when it became
    clear that those capabilities were leading to hundreds of
    security vulnerabilities)

    (and Java GUI is probably the world's most used GUI today
    in Android phones, but that is a different thing)

    In some ways JIT compilation lends itself better to server
    side than client side. The relative startup overhead is huge for
    a client app being used 5 minutes. But not a problem for a server
    app running for a month between restarts.

    Also note that WORA does make some sense for servers, especially
    for less prioritized OSes. I am not so sure that we would have
    ActiveMQ and Tomcat on VMS if it was not for the fact that they just
    run as is.

    But bytecode and JIT are a lot more than just WORA. They provide
    really rich metadata that can be used with annotations and
    reflection. There is a reason why Microsoft for WinRT decided
    to use the .NET metadata format even though it is a native
    COM based technology. It also makes optimization of dynamically
    generated code easy - just generate the bytecode, load it,
    run it and the JIT compiler takes care of it like it does
    any other code.
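
    A small sketch of what that metadata buys you - the @Route annotation
    and the handler names here are made up purely for illustration:

        import java.lang.annotation.Retention;
        import java.lang.annotation.RetentionPolicy;
        import java.lang.reflect.Method;

        public class MetadataDemo {
            // Runtime-visible annotation; its values are stored in the class file metadata.
            @Retention(RetentionPolicy.RUNTIME)
            @interface Route {
                String path();
            }

            static class Handlers {
                @Route(path = "/orders")
                public void listOrders() { System.out.println("listing orders"); }

                @Route(path = "/invoices")
                public void listInvoices() { System.out.println("listing invoices"); }
            }

            public static void main(String[] args) throws Exception {
                Handlers h = new Handlers();
                // Framework-style scan: discover and dispatch based purely on metadata.
                for (Method m : Handlers.class.getDeclaredMethods()) {
                    Route r = m.getAnnotation(Route.class);
                    if (r != null) {
                        System.out.println(r.path() + " -> " + m.getName());
                        m.invoke(h);
                    }
                }
            }
        }

    This kind of annotation scanning is how much of the Java EE / Spring
    style of framework wiring works under the hood.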

    For those who want native code, there are
    options today. Oracle Java 9-16 did support AOT compilation.
    AOT was then moved to the GraalVM product.

    (And now there are talk about moving it back into OpenJDK)

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Tue Oct 14 21:29:48 2025
    From Newsgroup: comp.os.vms

    On 10/14/2025 12:07 PM, John Dallman wrote:
    In article <10ckadi$7dr$1@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:
    I thought Microsoft really ran circles around them with Java on
    the client side, and on the server side, it made less sense. A
    bytecode language makes some sense in a fractured and extremely
    heterogenous client environment; less so in more controlled
    server environments. I'll grant that the _language_ was better
    than many of the alternatives, but the JRE felt like more of an
    impediment for where Java ultimately landed.

    The main uses for sever-side Java, as I understand it, are:

    It happened to have the right idioms for writing server front-ends that
    could distribute requests to the backend efficiently.

    Servlet/JSP looking up remote EJB in JNDI??

    Being able to do
    this the same way, within the parts of the JRE that are effectively an OS,
    on all the different host platforms, was more efficient in developer time than writing a bunch of different implementations. Developer time is
    really expensive.

    I believe the success of J2EE/Java EE/Jakarta EE relates to:
    1) Money. A lot of resources were thrown into it by
    Sun, IBM, BEA, Oracle, SAP, Borland etc.. They simply added more
    functionality than any other platform.
    2) Because it was a multi vendor thing, then the model became
    vendor independent API's - not just in theory but in practice
    as well. It is actually possible to switch vendor.
    3) A well working cooperation between the commercial vendors and
    the open source community.

    Arne




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Wed Oct 15 01:30:31 2025
    From Newsgroup: comp.os.vms

    On Tue, 14 Oct 2025 20:13:35 -0400, Arne Vajhøj wrote:

    On 10/13/2025 10:03 PM, Lawrence D'Oliveiro wrote:

    On Mon, 13 Oct 2025 21:20:43 -0400, Arne Vajhøj wrote:

    On 10/13/2025 8:20 PM, Lawrence D'Oliveiro wrote:

    On Mon, 13 Oct 2025 19:26:56 -0400, Arne Vajhøj wrote:

    Enterprises with a need to document support can not just hire a
    random consultant when the need arrive.

    If something is mission-critical and core to their entire business,
    they want a staff they can rely on, completely, to manage that
    properly.

    Few/no CIO's want to support the hundreds of millions of lines of open
    source code their business rely on themselves.

    The whole point of having all that code is that they didn't need to
    write it themselves.

    Yes. But they want free beer more than free speech.

    They want to cut costs, in particular compliance costs. They don't want to be nickel-and-dimed to death over things like CALs every time they want to
    do a new deployment. They want the flexibility to be able to adapt to
    changing business circumstances.

    That's the mindset of a successful business.

    You have to take responsibility for your own business, don't you?

    They don't want to write or maintain their own OS.

    They don't want to write or maintain their own platform software
    (web/app servers, database servers, message queue servers, cache servers etc.).

    They don't want to write or maintain their own tools (compilers, build
    tools, IDE's, source control, unit test frameworks etc.).

    None of that stuff is their business.

    Open Source doesn't give you turnkey black boxes. You have to have some expertise in at least configuring the underlying layers, and sometimes in
    how to patch them. That goes with the territory.

    They want to focus on their business the applications that help them
    produce and sell whatever products or services.

    Sure they do. But all abstractions are leaky: no matter how much you
    pretend otherwise, there will be issues to do with the lower layers that
    you will need to be mindful of.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Wed Oct 15 03:19:46 2025
    From Newsgroup: comp.os.vms

    In article <10cmq7n$3a740$2@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/13/2025 10:04 PM, Dan Cross wrote:
    In article <10chjc5$1s2mr$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/11/2025 7:50 AM, Dan Cross wrote:
    For
    that matter, it may be easier to move to illumos rather than
    Linux.

    Sure.

    But moving to Illumos is not moving to a well supported platform
    with a highly likely future.

    Again, it really depends on the customer. illumos is open
    source; if a customer has deep enough pockets and really wants
    to stick to that world, they can pay someone to maintain it or
    do it themselves.

    That's not appropriate for every organization, of course, but it
    is not totally unreasonable for those that can and want to do
    it.

    It is not appropriate for most organizations.

    Most != All.

    Maintaining an OS (whether in-house or some consulting
    company) is not what CIO's are looking for.

    I doubt you've ever worked at a company that maintains their own
    OS. I have. In fact, doing OS development work is my job.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Wed Oct 15 11:58:03 2025
    From Newsgroup: comp.os.vms

    In article <memo.20251014170713.10624x@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <10ckadi$7dr$1@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    In article <memo.20251011151314.10624m@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    Our stuff does gain significantly from 64-bit addressing; I could
    believe fields that didn't need 64-bit gave up on Sun earlier.

    I can see that. Personally, I really liked Tru64 nee DEC Unix
    nee OSF/1 AXP on Alpha. OSF/1 felt like it was a much better
    system overall if one had to go if swimming in Unix waters,
    while Solaris felt underbaked.

    I was happy with it, but a very experienced Unix chap of my acquaintance
    reckoned "It doesn't run - it just lurches!" regarding it as a
    Frankenstein job of parts stitched together.

    Ha! I can sort of see why they'd say that. It definitely had
    odd bits of Mach and System V seemingly bolted onto it. Overall
    though I thought it was a good system.

    To bring it back to VMS (and sheepishly admit a good bunch of
    the recent drift is my own) We had an Alpha running OpenVMS AXP
    1.2, or whatever one of the earlier versions was; it was obvious
    this was the future over VAX.

    Of course, Solaris was still better than AIX, HP-UX, or even
    Irix, but it was a real disappointment when none of the other
    OSF members followed through on actually adopting OSF/1.
    "Oppose Sun Forever!"

    Time was when we supported AIX, HP-UX, Irix, OSF1, and Solaris. We
    probably supported them all simultaneously on 32-bit (except Tru64) and
    64-bit for a while, along with HP-UX Itanium, although we got rid of that
    faster than HP-UX PA-RISC.

    I've said this before in this group, but the homogeneity of
    modern computing does not strike me as a universally good thing.
    There are economies of scale one can leverage, to be sure, but
    just as monocultures aren't robust against external threats in
    biological systems, I can't help but think that the same is true
    of computing systems.

    It felt like there was a time when we had built heterogeneous
    systems that were at least reasonable to manage; these days, I
    think we'd know how to do much better. But the diversity of
    systems and platforms common 30 years ago is mostly gone, and
    we're left with essentially three buckets: Windows, Linux, and
    a small sliver of "everything else". Not great.

    I never quite got the business play behind Java from Sun's
    perspective. It seemed to explode in popularity overnight, but
    they never quite figured out how to monetize it; I remember
    hearing from some Sun folks that they wanted to set standards
    and be at the center of the ecosystem, but were content to let
    other players actually build the production infrastructure.

    The trick with monetising something like that is to price it so that
    customers find it far cheaper to pay than to write their own. However,
    you still need to be able to make money on it. I've seen this done with a
    sliding royalty scale.

    However, this kind of scheme definitely would have clashed with the
    desire Sun had to make Java a standard piece of client software. It may
    have been doomed to unprofitability by the enthusiasm of its creators.

    I think that's a really insightful way to put it.

    My sense was that they overplayed their hand, and did so
    prematurely relative to the actual value they were holding onto.

    I mentioned Microsoft and Java on the client side: I believe
    that they were largely responsible for failure of Java desktop
    applications (and the supporting ecosystem) to take root. As I
    recall, at the time, MSFT tried to license Java from Sun: Sun
    said no, and I'm quite sure that McNealy was positively giddy
    about it as well. However, I think in doing so, Sun gravely
    underestimated Gates-era MSFT, because then Microsoft very
    publicly said, "we're going to wait and see whether the industry
    adopts Java on the desktop." But, since Microsoft was the
    biggest player in that space, the rest of the industry waited to
    see what Microsoft would do and whether they would support it on
    Windows: the result was that no one adopted Java, and so it
    never saw widespread client-side adoption. Oh sure, it had some
    adoption in mobile phone type applications, but until Android
    (which tried to skirt the licensing issues with Dalvik) that
    was pretty limited. Anyway, while Microsoft stalled, they did
    C# in the background, and when it was ready, they no longer had
    any real need for Java on the client side.

    The framing that the web rendered Java on desktops obsolete is
    incomplete. Certainly, that was true for _many_ applications,
    as the web rendered much of the client-side ecosystem obsolete,
    but consider things in Microsoft's portfolio like Word, Excel,
    PowerPoint, and so on. Those remained solidly desktop focused
    until Office 365; one never saw credible competitors to that in Java,
    which was something Sun very much wanted (recall McNealy's
    writing at this time about a "new" style of development based
    around open source and Java). Similarly, investment in C# shows
    that they weren't quite ready to move everything to the web; I
    think it just took time for the browser ecosystem to reach the
    level of maturity where it would reasonably support the sorts of
    rich graphical and highly interactive applications we had
    formerly seen running natively on desktop (and to some lesser
    extent, timesharing) machines. We often forget how much the
    early web looked like a souped-up 3270.

    I thought Microsoft really ran circles around them with Java on
    the client side, and on the server side, it made less sense. A
    bytecode language makes some sense in a fractured and extremely
    heterogeneous client environment; less so in more controlled
    server environments. I'll grant that the _language_ was better
    than many of the alternatives, but the JRE felt like more of an
    impediment for where Java ultimately landed.

    The main uses for server-side Java, as I understand it, are:

    It happened to have the right idioms for writing server front-ends that
    could distribute requests to the backend efficiently. Being able to do
    this the same way, within the parts of the JRE that are effectively an OS,
    on all the different host platforms, was more efficient in developer time
    than writing a bunch of different implementations. Developer time is
    really expensive.

    That makes sense, though I would argue that the same is true of
    languages that provide a managed runtime, regardless of whether
    they compile to byte code or native instructions.

    The hardware resources it soaks up at runtime are beneficial for hardware
    vendors, as they get to sell more hardware.

    Indeed, I now recall a BYTE interview with McNealy where he says
    much the same thing. Sun wanted to sell hardware; you needed
    hardware to run the JVM.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Wed Oct 15 12:10:37 2025
    From Newsgroup: comp.os.vms

    In article <mddplape023.fsf@panix5.panix.com>,
    Rich Alderson <news@alderson.users.panix.com> wrote:
    cross@spitfire.i.gajendra.net (Dan Cross) writes:

    In article <mddh5w4gba0.fsf@panix5.panix.com>,
    Rich Alderson <news@alderson.users.panix.com> wrote:
    cross@spitfire.i.gajendra.net (Dan Cross) writes:

    [Sun's] initial success was because they built the computer that
    they themselves wanted to use, and came up with a computer a
    bunch of other people wanted to use, too. It was a joy to use a
    Sun workstation at the time. But then they stopped doing that.

    Remember that the original SUN-1 board was designed by Andy Bechtolsheim
    from a specification given to him by Ralph Gorin, director of the Stanford
    academic computing facility (LOTS), who envisioned a 4M system (1M memory,
    1MIPS, 1M pixels on the screen, 1Mbps network, based on the first Ethernet
    at PARC).

    SUN stood for "Stanford University Network"...

    The same board was used in the original routers and terminal interface
    processors (TIPs) on the Stanford network, designed by Len Bosack of Cisco
    and XKL fame.

    Khosla and Bechtolsheim, et al., didn't "build the computer they wanted to
    use", they built the one they thought would make money when they took the
    design from Stanford.

    Khosla was out within what, 4 or 5 years? And he wasn't an engineer.

    Indeed. He was Andy's friend from the Graduate School of Business, and
    probably the one who said "this thing could make money!!!!" and started the
    search that led to Scott McNealy. Andy would be the one to bring in Bill Joy.

    The "building the computer they wanted to use" bit comes first-hand from
    engineers with single-digit Sun employee numbers. It wasn't just the
    hardware, but the software as well, of course.

    That's probably what they were told, but that's not what the VCs were told.

    I'm sure that's very true. But that doesn't mean that the
    engineers were not incentivized to build machines that they,
    themselves, actually wanted to use: in this sense, Sun benefited
    from a brief shining moment where that aligned with business
    interests.

    This isn't hard to imagine. The Motorola 68000 was clearly a
    chip that one could build a reasonable system around (once they
    figured out how to do fault handling properly, anyway, so that
    they could restart an instruction after a page fault); people
    wanted graphics; and networking was obviously going to be
    important; and Unix people wanted Unix. If you could get that
    on a machine that wasn't groaning under the weight of double
    digit numbers of timesharing users? Sounds wonderful. As an
    engineer I _still_ want all of that on my desktop machine.

    Salus wrote an article a decade ago, borrowing from his Daemon,
    Gnu and Penguin book, that articulates what I was driving at far
    better than I can: https://www.usenix.org/system/files/login/articles/login_apr15_17_salus.pdf

    To quote:
    |In late 1987, AT&T announced that it had purchased a large
    |percentage of Sun Microsystems and that Sun would receive
    |preferential treatment as AT&T/UNIX Systems Labs developed new
    |software. Sun announced that its next system would not be a
    |further extension of SunOS (which had been based on Berkeley
    |UNIX) but would be derived from AT&T's System V, Revision 4. A
    |shiver ran through the UNIX world: the scientific community
    |felt that Sun was turning its back on them, and the other
    |vendors felt that the "special arrangement" would mean that Sun
    |would get the jump on them.

    I think this idea that "the scientific community felt that Sun
    was turning its back on them" is the bit I wanted to hammer
    home.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Wed Oct 15 12:16:35 2025
    From Newsgroup: comp.os.vms

    In article <10cmovf$3a740$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/13/2025 10:03 PM, Lawrence D'Oliveiro wrote:
    On Mon, 13 Oct 2025 21:20:43 -0400, Arne Vajhøj wrote:
    On 10/13/2025 8:20 PM, Lawrence D'Oliveiro wrote:
    On Mon, 13 Oct 2025 19:26:56 -0400, Arne Vajhøj wrote:
    Enterprises with a need to document support cannot just hire a
    random consultant when the need arises.

    If something is mission-critical and core to their entire business,
    they want a staff they can rely on, completely, to manage that
    properly.

    Few/no CIO's want to support the hundreds of millions of lines
    of open source code their business rely on themselves.

    The whole point of having all that code is that they didn't need to write
    it themselves.

    Yes. But they want free beer more than free speech.

    You have to take responsibility for your own business, don't you?

    They don't want to write or maintain their own OS.

    They don't want to write or maintain their own platform
    software (web/app servers, database servers, message queue
    servers, cache servers etc.).

    They don't want to write or maintain their own tools
    (compilers, build tools, IDE's, source control, unit
    test frameworks etc.).

    None of that stuff is their business.

    They want to focus on their business the applications
    that help them produce and sell whatever products
    or services.

    Every single one of the FAANG companies does all of those things.
    At Google, we used to joke that, "not only does Google reinvent
    the wheel, we vulcanize the rubber for the tires." Spanner,
    Piper/Fig/Jujutsu, Prodkernel/ChromeOS/Android, CitC, gunit, Go
    (not to mention the work on LLVM/Clang), Blaze/Bazel/Skylark,
    etc, are all examples of the things you mentioned above. And
    that's not even to mention all the custom hardware.

    For organizations working at hyperscale, there comes a point
    where the off-the-shelf solutions simply cannot scale to meet
    the load you're putting on them.

    At that point, you have no choice but to do it yourself.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Wed Oct 15 12:20:01 2025
    From Newsgroup: comp.os.vms

    In article <memo.20251014170713.10624z@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <10ckbq2$7dr$4@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:
    Well, then I suppose they'll either split their product line or
    introduce a 32-bit M profile for V9.

    They are effectively in the process of splitting the product line. The
    current instruction sets with a future are T32 and A64. A32 is on the way
    out.

    100% agreed here.

    I do not entirely agree with that assessment re: 64-bit in
    MCUs, however: a lot of work is going into cryptographically
    signed secure boot stacks and hardware attestation for firmware;
    64-bit registers can make implementing cryptography primitives
    with large key sizes much easier.

    Fair point. The question would then be if it's worth creating a T64 or
    just a simplified A64. It seems likely ARM is discussing that internally
    or maybe even with some customers under NDA.

    Indeed; this is my question. I can well imagine that, at some
    point, they would like to move away from T32 but continue with
    supporting M-profile CPUs for the embedded market. At that
    point, some sort of T64 seems like a good idea.

    I'm sure you're right that they're at least exploring it
    internally; it will be interesting to see what they come up with
    and how it compares against RV64C.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Craig A. Berry@craigberry@nospam.mac.com to comp.os.vms on Wed Oct 15 15:33:20 2025
    From Newsgroup: comp.os.vms


    On 10/15/25 7:16 AM, Dan Cross wrote:
    In article <10cmovf$3a740$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/13/2025 10:03 PM, Lawrence D'Oliveiro wrote:
    On Mon, 13 Oct 2025 21:20:43 -0400, Arne Vajhøj wrote:
    On 10/13/2025 8:20 PM, Lawrence D'Oliveiro wrote:
    On Mon, 13 Oct 2025 19:26:56 -0400, Arne Vajhøj wrote:
    Enterprises with a need to document support cannot just hire a
    random consultant when the need arises.

    If something is mission-critical and core to their entire business,
    they want a staff they can rely on, completely, to manage that
    properly.

    Few/no CIO's want to support the hundreds of millions of lines
    of open source code their business rely on themselves.

    The whole point of having all that code is that they didn't need to write
    it themselves.

    Yes. But they want free beer more than free speech.

    You have to take responsibility for your own business, don't you?

    They don't want to write or maintain their own OS.

    They don't want to write or maintain their own platform
    software (web/app servers, database servers, message queue
    servers, cache servers etc.).

    They don't want to write or maintain their own tools
    (compilers, build tools, IDE's, source control, unit
    test frameworks etc.).

    None of that stuff is their business.

    They want to focus on their business the applications
    that help them produce and sell whatever products
    or services.

    Every single one of the FAANG companies do all of those things.

    In other words, hardly anyone.

    At Google, we used to joke that, "not only does Google reinvent
    the wheel, we vulcanize the rubber for the tires." Spanner, Piper/Fig/Jujutsu, Prodkernel/ChromeOS/Android, CitC, gunit, Go
    (not to mention the work on LLVM/Clang), Blaze/Bazel/Skylark,
    etc, are all examples of the things you mentioned above. And
    that's not even to mention all the custom hardware.

    For organizations working at hyperscale, there comes a point
    where the off-the-shelf solutions simply cannot scale to meet
    the load you're putting on them.

    At that point, you have no choice but to do it yourself.
    You're kinda going in circles here by arguing that very big companies
    whose business is to make their own technology need to make their own
    technology. I believe Arne's point was the fairly obvious one that a
    retail chain or a hospital chain does not need to and cannot afford to
    maintain, for example, their own operating system.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Wed Oct 15 23:01:44 2025
    From Newsgroup: comp.os.vms

    On Wed, 15 Oct 2025 15:33:20 -0500, Craig A. Berry wrote:

    I believe Arne's point was the fairly obvious one that a retail
    chain or a hospital chain does not need to and cannot afford to
    maintain, for example, their own operating system.

    Do you think that is hard to do?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Wed Oct 15 19:40:51 2025
    From Newsgroup: comp.os.vms

    On 10/15/2025 7:01 PM, Lawrence D'Oliveiro wrote:
    On Wed, 15 Oct 2025 15:33:20 -0500, Craig A. Berry wrote:
    I believe Arne's point was the fairly obvious one that a retail
    chain or a hospital chain does not need to and cannot afford to
    maintain, for example, their own operating system.

    Do you think that is hard to do?

    Clone an existing distro and change name and logo: easy.

    Hire a couple of "experts" that in case of a problem
    can post on the internet and hope somebody else can
    come up with a fix and then apply the fix: easy.

    Hire a couple of "experts" that know C and
    in case of a problem plan to start read code
    and hope they will be able to find a solution: easy.

    Hire enough experts to have people that know
    the code base of every critical part: Linux
    kernel, glibc etc. probably 50-100 million
    lines of code: bloody expensive. We are talking
    hundreds of engineers - and not just any engineers
    but top engineers.

    Redhat, Canonical, SUSE etc. have them.

    Amazon, Microsoft, Google etc. have them.

    The vast majority of companies don't have them.

    Arne



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Wed Oct 15 19:55:28 2025
    From Newsgroup: comp.os.vms

    On 10/15/2025 8:16 AM, Dan Cross wrote:
    In article <10cmovf$3a740$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/13/2025 10:03 PM, Lawrence D'Oliveiro wrote:
    On Mon, 13 Oct 2025 21:20:43 -0400, Arne Vajhøj wrote:
    Few/no CIO's want to support the hundreds of millions of lines
    of open source code their business rely on themselves.

    The whole point of having all that code is that they didn't need to write
    it themselves.

    Yes. But they want free beer more than free speech.

    You have to take responsibility for your own business, don't you?

    They don't want to write or maintain their own OS.

    They don't want to write or maintain their own platform
    software (web/app servers, database servers, message queue
    servers, cache servers etc.).

    They don't want to write or maintain their own tools
    (compilers, build tools, IDE's, source control, unit
    test frameworks etc.).

    None of that stuff is their business.

    They want to focus on their business the applications
    that help them produce and sell whatever products
    or services.

    Every single one of the FAANG companies do all of those things.
    At Google, we used to joke that, "not only does Google reinvent
    the wheel, we vulcanize the rubber for the tires." Spanner, Piper/Fig/Jujutsu, Prodkernel/ChromeOS/Android, CitC, gunit, Go
    (not to mention the work on LLVM/Clang), Blaze/Bazel/Skylark,
    etc, are all examples of the things you mentioned above. And
    that's not even to mention all the custom hardware.

    For organizations working at hyperscale, there comes a point
    where the off-the-shelf solutions simply cannot scale to meet
    the load you're putting on them.

    At that point, you have no choice but to do it yourself.
    Few companies are like Google.

    For a few reasons:

    1) As you mention they may have special customization needs
    due to their scale.

    2) But even if they did not need that, then their numbers
    are special. If cost of creating a competent Linux team
    that can deliver support at Redhat level is less than
    number of Linux instances multiplied with what Redhat
    would charge per instance (and I am sure that Google would
    get a gigantic discount if they asked), then it makes
    financial sense to DIY. But it requires a huge number
    of Linux instances.

    My napkin calculation / RNG says you will need more
    than a million Linux instances for the math to work.
    Google has that. Most companies do not.

    3) Google is not just a company using IT to produce
    products/services - Google is also a company doing
    IT for others.

    Google Search is an IT user where it is not a given
    that they want their own distro.

    But Android and ChromeOS are Google delivering an
    OS to others. The OS is their business in that case.

    And one facet of GCP is that Google is taking
    over OS support from Redhat/Canonical/SUSE when
    companies move their workload from on-prem to
    GCP managed services. Linux support is their
    business.

    Arne




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Wed Oct 15 19:59:39 2025
    From Newsgroup: comp.os.vms

    On 10/15/2025 7:58 AM, Dan Cross wrote:
    In article <memo.20251014170713.10624x@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <10ckadi$7dr$1@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:
    In article <memo.20251011151314.10624m@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    Our stuff does gain significantly from 64-bit addressing; I could
    believe fields that didn't need 64-bit gave up on Sun earlier.

    I can see that. Personally, I really liked Tru64 nee DEC Unix

    Compaq Tru64, Digital Unix, DEC OSF/1.

    nee OSF/1 AXP on Alpha. OSF/1 felt like it was a much better
    system overall if one had to go swimming in Unix waters,
    while Solaris felt underbaked.

    I was happy with it, but a very experienced Unix chap of my acquaintance
    reckoned "It doesn't run - it just lurches!" regarding it as a
    Frankenstein job of parts stitched together.

    Ha! I can sort of see why they'd say that. It definitely had
    odd bits of Mach and System V seemingly bolted onto it. Overall
    though I thought it was a good system.

    To bring it back to VMS (and sheepishly admit a good bunch of
    the recent drift is my own) We had an Alpha running OpenVMS AXP
    1.2, or whatever one of the earlier versions was;

    Probably 1.5.

    VMS Alpha went 1.0 -> 1.5 -> 6.1.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Thu Oct 16 00:26:10 2025
    From Newsgroup: comp.os.vms

    In article <10cpc9g$191j$2@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/15/2025 8:16 AM, Dan Cross wrote:
    In article <10cmovf$3a740$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/13/2025 10:03 PM, Lawrence D'Oliveiro wrote:
    On Mon, 13 Oct 2025 21:20:43 -0400, Arne Vajhøj wrote:
    Few/no CIO's want to support the hundreds of millions of lines
    of open source code their business rely on themselves.

    The whole point of having all that code is that they didn't need to write
    it themselves.

    Yes. But they want free beer more than free speech.

    You have to take responsibility for your own business, don't you?

    They don't want to write or maintain their own OS.

    They don't want to write or maintain their own platform
    software (web/app servers, database servers, message queue
    servers, cache servers etc.).

    They don't want to write or maintain their own tools
    (compilers, build tools, IDE's, source control, unit
    test frameworks etc.).

    None of that stuff is their business.

    They want to focus on their business the applications
    that help them produce and sell whatever products
    or services.

    Every single one of the FAANG companies do all of those things.
    At Google, we used to joke that, "not only does Google reinvent
    the wheel, we vulcanize the rubber for the tires." Spanner,
    Piper/Fig/Jujutsu, Prodkernel/ChromeOS/Android, CitC, gunit, Go
    (not to mention the work on LLVM/Clang), Blaze/Bazel/Skylark,
    etc, are all examples of the things you mentioned above. And
    that's not even to mention all the custom hardware.

    For organizations working at hyperscale, there comes a point
    where the off-the-shelf solutions simply cannot scale to meet
    the load you're putting on them.

    At that point, you have no choice but to do it yourself.

    Few companies are like Google.

    Yup.

    For a few reasons:
    [snip]

    3) Google is not just a company using IT to produce
    products/services - Google is also a company doing
    IT for other.

    Google Search is an IT user where it is not a given
    that they want their own distro.

    Actually, a disproportionate amount of kernel effort was put
    into place specifically for search, but there is a dedicated
    team that does kernel work for production specifically.

    Internal Google IT, i.e. the people who staff the helpdesk and
    manage desktop workstations, provision laptops, and so on, had
    their own Linux distro that was based on Debian, and (mostly)
    unrelated to the production OS. They did almost no kernel work,
    however.

    But Android and ChromeOS is Google delivering an
    OS to other. The OS is their business in that case.

    And one facet of GCP is that Google is taking
    over OS support from Redhat/Canonical/SUSE when
    companies moves their workload from on-prem to
    GCP managed services. Linux support is their
    business.

    Do you mean ContainerOS? That's just a distro.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Thu Oct 16 00:28:09 2025
    From Newsgroup: comp.os.vms

    On Wed, 15 Oct 2025 19:55:28 -0400, Arne Vajhøj wrote:

    My napkin calculation / RNG says you will need more than a million
    Linux instances for the math to work. Google has that. Most
    companies do not.

    <https://linuxfromscratch.org/>
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Wed Oct 15 20:30:48 2025
    From Newsgroup: comp.os.vms

    On 10/15/2025 7:58 AM, Dan Cross wrote:
    In article <memo.20251014170713.10624x@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <10ckadi$7dr$1@reader2.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:
    I never quite got the business play behind Java from Sun's
    perspective. It seemed to explode in popularity overnight, but
    they never quite figured out how to monetize it; I remember
    hearing from some Sun folks that they wanted to set standards
    and be at the center of the ecosystem, but were content to let
    other players actually build the production infrastructure.

    The trick with monetising something like that is to price it so that
    customers find it far cheaper to pay than to write their own. However,
    you still need to be able to make money on it. I've seen this done with a
    sliding royalty scale.

    However, this kind of scheme definitely would have clashed with the
    desire Sun had to make Java a standard piece of client software. It may
    have been doomed to unprofitability by the enthusiasm of its creators.

    I think that's a really insightful way to put it.

    My sense was that they overplayed their hand, and did so
    prematurely relative to the actual value they were holding onto.

    I mentioned Microsoft and Java on the client side: I believe
    that they were largely responsible for failure of Java desktop
    applications (and the supporting ecosystem) to take root. As I
    recall, at the time, MSFT tried to license Java from Sun: Sun
    said no, and I'm quite sure that McNealy was positively giddy
    about it as well. However, I think in doing so, Sun gravely
    underestimated Gates-era MSFT, because then Microsoft very
    publicly said, "we're going to wait and see whether the industry
    adopts Java on the desktop." But, since Microsoft was the
    biggest player in that space, the rest of the industry waited to
    see what Microsoft would do and whether they would support it on
    Windows: the result was that no one adopted Java, and so it
    never saw widespread client-side adoption.

    Not quite what happened.

    Sun did license Java to MS.

    But MS violated the license conditions and their J++ had a couple
    of incompatibilities (they replaced JNI with something better
    and they replaced RMI with COM that fitted better with Windows).

    Sun sued and MS had to pay 20 M$ (peanuts for MS) and ditch the product.

    So MS ditched their Java and Sun delivered Java to Windows
    users that wanted it. And in the early 00's that was most Windows
    users, because everybody needed applet support in their browsers.

    Until applets died out and Java stopped being needed/wanted
    by most ordinary users.

    And the world changed and in recent years MS created their
    own Java again based on OpenJDK.

    Oh sure, it had some
    adoption in mobile phone type applications, but until Android
    (which tried to skirt the licensing issues with Dalvik) that
    was pretty limited.

    Almost all the 3 million apps available for the 3 billion
    Android phones are written in Java or Kotlin. Not particularly limited.

    Anyway, while Microsoft stalled, they did
    C# in the background, and when it was ready, they no longer had
    any real need for Java on the client side.

    MS started .NET and C# after they were forced to drop their
    Java.

    Anders Hejlsberg was actually headhunted from Borland to
    do MS Java. And when that was no longer a thing he moved
    on to creating .NET and C#.

    The framing that the web rendered Java on desktops obsolete is
    incomplete. Certainly, that was true for _many_ applications,
    as the web rendered much of the client-side ecosystem obsolete,
    but consider things in Microsoft's portfolio like Word, Excel,
    PowerPoint, and so on. Those remained solidly desktop focused
    until Office 365;

    What moved to web in the early 00's were all the internal
    business app frontends. The stuff that used to be done on
    VB6, Delphi, Jyacc etc..

    Mostly trivial stuff but millions of applications requiring
    millions of developers.

    MS Office and other MSVC++ MFC apps may have been difficult to
    port to web at the time, but it would also have been difficult
    to come up with a business case for it - that first showed up
    when MS had a cloud and could charge customer per user per month
    for it.

    one never saw credible competitors to that in Java,
    which was something Sun very much wanted (recall McNealy's
    writing at this time about a "new" style of development based
    around open source and Java).

    OpenOffice owned by Sun at the time actually did implement
    some stuff in Java.

    But neither OpenOffice as an office package nor Java as a language
    for desktop apps ever took off.

    Similarly, investment in C# shows
    that they weren't quite ready to move everything to the web;

    ????

    One of the main areas for C# is web applications with ASP.NET, and
    it was so from day 1.

    (not everybody may like ASP.NET web forms, but that is
    another discussion)

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Thu Oct 16 00:31:33 2025
    From Newsgroup: comp.os.vms

    On Wed, 15 Oct 2025 19:40:51 -0400, Arne Vajhøj wrote:

    On 10/15/2025 7:01 PM, Lawrence D'Oliveiro wrote:

    On Wed, 15 Oct 2025 15:33:20 -0500, Craig A. Berry wrote:

    I believe Arne's point was the fairly obvious one that a retail chain
    or a hospital chain does not need to and cannot afford to maintain,
    for example, their own operating system.

    Do you think that is hard to do?

    Hire enough experts to have people that know the code base of every
    critical part: Linux kernel, glibc etc. probably 50-100 million lines of
    code: bloody expensive. We are talking hundreds of engineers - and not just
    any engineers but top engineers.

    When the Raspberry Pi was first released, there was a Debian version
    available for it, but it was not optimized for the Pi's unusual
    combination of hardware floating point + an older version of the ARM
    instruction set.

    Two guys decided to take it upon themselves to rebuild the whole of
    Debian, from source, optimized for the Pi hardware. It took them six
    weeks.

    They called their distro "Raspbian". These days I think the Raspberry Pi
    foundation has taken on official support for it, and are calling it
    "Raspberry Pi OS".
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Wed Oct 15 20:40:42 2025
    From Newsgroup: comp.os.vms

    On 10/15/2025 8:26 PM, Dan Cross wrote:
    In article <10cpc9g$191j$2@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    And one facet of GCP is that Google is taking
    over OS support from Redhat/Canonical/SUSE when
    companies moves their workload from on-prem to
    GCP managed services. Linux support is their
    business.

    Do you mean ContainerOS? That's just a distro.

    I am talking about the fact that 10 years ago a company
    would typically run:

    their application + their database server
    RHEL [paying Redhat for Linux support]
    ESXi
    on-prem HW

    but now they may run as (assuming Google customer):

    their application in GKE + database as GCP managed service
    whatever Linux Google want to use [paying Google for Linux support as
    part of what they pay for the cloud services]
    Linux with KVM
    Google HW

    Amazon, Microsoft and Google are taking revenue away
    from Redhat (IBM). They have de facto gotten into
    the Linux support business.

    Arne



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Thu Oct 16 00:43:51 2025
    From Newsgroup: comp.os.vms

    In article <10cp0eg$3tqik$1@dont-email.me>,
    Craig A. Berry <craigberry@nospam.mac.com> wrote:
    On 10/15/25 7:16 AM, Dan Cross wrote:
    In article <10cmovf$3a740$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/13/2025 10:03 PM, Lawrence D'Oliveiro wrote:
    On Mon, 13 Oct 2025 21:20:43 -0400, Arne Vajhøj wrote:
    On 10/13/2025 8:20 PM, Lawrence D'Oliveiro wrote:
    On Mon, 13 Oct 2025 19:26:56 -0400, Arne Vajhøj wrote:
    Enterprises with a need to document support cannot just hire a
    random consultant when the need arises.

    If something is mission-critical and core to their entire business,
    they want a staff they can rely on, completely, to manage that
    properly.

    Few/no CIO's want to support the hundreds of millions of lines
    of open source code their business rely on themselves.

    The whole point of having all that code is that they didn't need to write
    it themselves.

    Yes. But they want free beer more than free speech.

    You have to take responsibility for your own business, don't you?

    They don't want to write or maintain their own OS.

    They don't want to write or maintain their own platform
    software (web/app servers, database servers, message queue
    servers, cache servers etc.).

    They don't want to write or maintain their own tools
    (compilers, build tools, IDE's, source control, unit
    test frameworks etc.).

    None of that stuff is their business.

    They want to focus on their business the applications
    that help them produce and sell whatever products
    or services.

    Every single one of the FAANG companies do all of those things.

    In other words, hardly anyone.

    I wonder what percentage of professional software engineers work
    or have worked at one of those companies at this point.

    But Arne's statements were categorical and near absolute. "They
    don't..." "Few/no..." "None of that is their business...", and
    so on. It wasn't Meta's business to do any of that stuff,
    either, until they reached a point where they had to.

    At Google, we used to joke that, "not only does Google reinvent
    the wheel, we vulcanize the rubber for the tires." Spanner,
    Piper/Fig/Jujutsu, Prodkernel/ChromeOS/Android, CitC, gunit, Go
    (not to mention the work on LLVM/Clang), Blaze/Bazel/Skylark,
    etc, are all examples of the things you mentioned above. And
    that's not even to mention all the custom hardware.

    For organizations working at hyperscale, there comes a point
    where the off-the-shelf solutions simply cannot scale to meet
    the load you're putting on them.

    At that point, you have no choice but to do it yourself.

    You're kinda going in circles here by arguing that very big companies
    whose business is to make their own technology need to make their own
    technology.

    Really? I thought I was providing a counter-example to Arne's
    assertions.

    And none of those companies started out big; with the exception
    of Apple and Microsoft, which both started at the dawn of the
    personal computer era, it was not part of the mission for
    Google, Meta, Amazon, Netflix, etc, to do any of the things that
    Arne mentioned most organizations don't want to do. The FAANGs
    didn't want to do them, either, honestly, but they do so out of
    business necessity, which is the point: most companies don't
    have to do those things because they never reach the point where
    it's required. That's not necessarily a feature.

    It's an oversimplification to assert that people and businesses
    won't do the things that are essential to their core business;
    history shows that they can and will---once it actually becomes
    a necessity.

    I believe Arne's point was the fairly obvious one that a
    retail chain or a hospital chain does not need to and cannot afford to
    maintain, for example, their own operating system.

    Of course, but those aren't technology companies. Most farmers
    don't need to maintain their own OS either, though I know at
    least two who do just for fun. But just saying that most child
    daycare centers don't need their own in-house IT stack is a non
    sequitur, because they're not reliant on their technical
    infrastructure the way that organizations for which it is
    essential are.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Thu Oct 16 00:51:43 2025
    From Newsgroup: comp.os.vms

    In article <10cpeu9$26ht$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/15/2025 8:26 PM, Dan Cross wrote:
    In article <10cpc9g$191j$2@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    And one facet of GCP is that Google is taking
    over OS support from Redhat/Canonical/SUSE when
    companies moves their workload from on-prem to
    GCP managed services. Linux support is their
    business.

    Do you mean ContainerOS? That's just a distro.

    I am talking about that like 10 years ago a company
    would run like:

    their application + their database server
    RHEL [paying Redhat for Linux support]
    ESXi
    on-prem HW

    but now they may run as (assuming Google customer):

    their application in GKE + database as GCP managed service
    whatever Linux Google want to use [paying Google for Linux support as
    part of what they pay for the cloud services]
    Linux with KVM
    Google HW

    Not quite how the stack is structured.

    Amazon, Microsoft and Google are taking revenue away
    from Redhat (IBM). They have de facto gotten into
    the Linux support business.

    Not really. They're taking revenue away from Broadcom/VMWare,
    perhaps, and probably from Dell, HPE, and Lenovo. But if you
    want to run RHEL on a VM on Google's cloud, they won't stop you.
    https://cloud.google.com/compute/docs/images/os-details

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Thu Oct 16 01:01:52 2025
    From Newsgroup: comp.os.vms

    In article <10cpebq$26b5$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/15/2025 7:58 AM, Dan Cross wrote:
    [snip]
    Oh sure, it had some
    adoption in mobile phone type applications, but util Android
    (which tried to skirt the licensing issues with Dalvik) that
    was pretty limited.

    Almost all the 3 millions apps available for the 3 billion
    Android phones are written in Java or Kotlin. Not particular limited.

    ...but not running on the JVM or using the JRE.

    Anyway, while Microsoft stalled, they did
    C# in the background, and when it was ready, they no longer had
    any real need for Java on the client side.

    MS started .NET and C# after they were forced to drop their
    Java.

    Be careful: it is precisely this forcing event that I am
    referring to. Could MSFT have come into compliance with the
    Java licensing terms instead of doing C#? I'm quite sure they
    could have, but this was the era of MSFT "Embrace and Extend",
    where they'd de facto take over a standard ("embrace") and make
    their extended version the de facto standard ("extend"). Sun
    very much did not want to let them do that to Java, and did not.

    Anders Hejlsberg was actually headhunted from Borland to
    do MS Java. And when that was no longer a thing he moved
    on to creating .NET and C#.

    See above.

    The framing that the web rendered Java on desktops obsolete is
    incomplete. Certainly, that was true for _many_ applications,
    as the web rendered much of the client-side ecosystem obsolete,
    but consider things in Microsoft's portfolio like Word, Except,
    PowerPoint, and so on. Those remained solidly desktop focused
    until 360;

    What moved to web in the early 00's were all the internal
    business app frontends. The stuff that used to be done on
    VB6, Delphi, Jyacc etc..

    Mostly trivial stuff but millions of applications requiring
    millions of developers.

    MS Office and other MSVC++ MFC apps may have been difficult to
    port to web at the time, but it would also have been difficult
    to come up with a business case for it - that first showed up
    when MS had a cloud and could charge customer per user per month
    for it.

    They didn't need a "cloud": they needed a large, Internet-scale
    server architecture and data center presence, and they had such
    things pretty quickly: remember when they bought Hotmail?

    They could have easily charged subscription fees.

    one never saw credible competitors to that in Java,
    which was something Sun very much wanted (recall McNealy's
    writing at this time about a "new" style of development based
    around open source and Java).

    OpenOffice owned by Sun at the time actually did implement
    some stuff in Java.

    Right. So no credible competitors.

    But neither as OpenOffice as office package nor Java as language
    for desktop apps ever took off.

    Similarly, investment in C# shows
    that they weren't quite ready to move everything to the web;

    ????

    The whole point of CLR languages on Windows desktops is that
    they run locally.

    One of the main areas for C# is web applications ASP.NET and
    was so from day 1.

    (not everybody may like ASP.NET web forms, but that is
    another discussion)

    I'm not saying it wasn't a use-case; I'm saying that investing
    in the client-side infrastructure to be able to write rich
    applications that run locally shows that they weren't ready,
    organizationally, business-wise, or technologically, to move
    everything to the web.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Thu Oct 16 08:18:26 2025
    From Newsgroup: comp.os.vms

    On 10/15/2025 8:51 PM, Dan Cross wrote:
    In article <10cpeu9$26ht$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/15/2025 8:26 PM, Dan Cross wrote:
    In article <10cpc9g$191j$2@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    And one facet of GCP is that Google is taking
    over OS support from Redhat/Canonical/SUSE when
    companies moves their workload from on-prem to
    GCP managed services. Linux support is their
    business.

    Do you mean ContainerOS? That's just a distro.

    I am talking about that like 10 years ago a company
    would run like:

    their application + their database server
    RHEL [paying Redhat for Linux support]
    ESXi
    on-prem HW

    but now they may run as (assuming Google customer):

    their application in GKE + database as GCP managed service
    ^^^                              ^^^^^^^^^^^^^^^^^^^
    whatever Linux Google want to use [paying Google for Linux support as
    part of what they pay for the cloud services]
    Linux with KVM
    Google HW

    Not quite how the stack is structured.

    Amazon, Microsoft and Google are taking revenue away
    from Redhat (IBM). They have de facto gotten into
    the Linux support business.

    Not really. They're taking revenue away from Broadcom/VMWare,
    perhaps, and probably from Dell, HPE, and Lenovo. But if you
    want to run RHEL on a VM on Google's cloud, they won't stop you.
    https://cloud.google.com/compute/docs/images/os-details

    If someone has a strong desire to do cloud like they did 10
    years ago, then buying GCE instances, installing RHEL,
    installing OpenShift, installing database, installing
    application and managing everything is certainly still an option.

    But I was very explicit above talking about managed services.
    Managed Kubernetes and managed database. GKE not GCE.

    Again I wonder if you read what you are replying to.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Thu Oct 16 09:24:26 2025
    From Newsgroup: comp.os.vms

    On 10/15/2025 9:01 PM, Dan Cross wrote:
    In article <10cpebq$26b5$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/15/2025 7:58 AM, Dan Cross wrote:
    [snip]
    Oh sure, it had some
    adoption in mobile phone type applications, but util Android
    (which tried to skirt the licensing issues with Dalvik) that
    was pretty limited.

    Almost all the 3 millions apps available for the 3 billion
    Android phones are written in Java or Kotlin. Not particular limited.

    ...but not running on the JVM or using the JRE.

    True.

    But the difference is not that big.

    A) library

    Mostly:

    Android SDK =
    standard Java library
    - desktop GUI (Swing)
    + Android GUI
    + phone stuff

    So the Java developer needs to learn a new GUI (but few will
    miss Swing) and learn some phone specific API's (which
    is sort of unavoidable) - there is a small sketch of the two
    GUI flavors further down.

    But mostly the usual API. Which was the core of Oracle's
    lawsuit against Google.

    They can use most of the third party Java libraries that they
    know from elsewhere. If they make sense on a phone and are
    not too heavy.

    B) VM

    It is different:

    Java source
    --(javac)-->
    Java byte code (stack based)

    JVM

    (Hotspot JVM is JIT only, GraalVM is JIT or AOT, J9 is JIT with
    cache between runs which results in some sort of AOT)

    vs

    Java source
    --(javac)-->
    Java byte code (stack based)
    --(Android tool)-->
    Android byte code (register based)

    Android VM

    (Dalvik in Android 2-4 is JIT, ART in Android 5-6 is AOT,
    ART in Android 7- is a hybrid between AOT and JIT)

    And while I am sure the different byte code formats and
    AOT vs JIT can create very heated arguments among VM writers,
    the average Java developer does not care. They write their
    Java code, they compile it with javac and then it gets deployed
    and it somehow runs.

    Arne




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Thu Oct 16 17:06:54 2025
    From Newsgroup: comp.os.vms

    In article <10cqrma$c7a9$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/15/2025 9:01 PM, Dan Cross wrote:
    In article <10cpebq$26b5$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/15/2025 7:58 AM, Dan Cross wrote:
    [snip]
    Oh sure, it had some
    adoption in mobile phone type applications, but util Android
    (which tried to skirt the licensing issues with Dalvik) that
    was pretty limited.

    Almost all the 3 millions apps available for the 3 billion
    Android phones are written in Java or Kotlin. Not particular limited.

    ...but not running on the JVM or using the JRE.

    True.

    But the difference is not that big.
    [snip]

    ...anyway, the point I was making earlier was that Java never
    achieved widespread adoption on the desktop.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Thu Oct 16 17:18:45 2025
    From Newsgroup: comp.os.vms

    In article <10cqnqi$c7a8$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/15/2025 8:51 PM, Dan Cross wrote:
    In article <10cpeu9$26ht$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/15/2025 8:26 PM, Dan Cross wrote:
    In article <10cpc9g$191j$2@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    And one facet of GCP is that Google is taking
    over OS support from Redhat/Canonical/SUSE when
    companies moves their workload from on-prem to
    GCP managed services. Linux support is their
    business.

    Do you mean ContainerOS? That's just a distro.

    I am talking about that like 10 years ago a company
    would run like:

    their application + their database server
    RHEL [paying Redhat for Linux support]
    ESXi
    on-prem HW

    but now they may run as (assuming Google customer):

    their application in GKE + database as GCP managed service
    ^^^                              ^^^^^^^^^^^^^^^^^^^
    whatever Linux Google want to use [paying Google for Linux support as
    part of what they pay for the cloud services]
    Linux with KVM
    Google HW

    Not quite how the stack is structured.
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    Amazon, Microsoft and Google are taking revenue away
    from Redhat (IBM). They have de facto gotten into
    the Linux support business.

    Not really. They're taking revenue away from Broadcom/VMWare,
    perhaps, and probably from Dell, HPE, and Lenovo. But if you
    want to run RHEL on a VM on Google's cloud, they won't stop you.
    https://cloud.google.com/compute/docs/images/os-details

    If someone has a strong desire to do cloud like they did 10
    years ago, then buying GCE instances, installing RHEL,
    installing OpenShift, installing database, installing
    application and manage everything is certainly still an option.

    But I was very explicit above talking about managed services.
    Managed Kubernetes and managed database. GKE not GCE.

    Have you ever used any GCP services?

    Again I wonder if you read what you are replying to.

    Did you?

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2