• And so? (VMS/XDE)

    From gcalliet@gerard.calliet@pia-sofer.fr to comp.os.vms on Wed Oct 29 15:48:06 2025

    Hello,

    We now have VMS/XDE (https://products.vmssoftware.com/vms-xde-beta). You
    can develop VMS applications on GNU/Linux.

    (I learned the term recently, and I apologize for the rudeness.) WTF?

    It seems to be a very good technical effort, so perhaps some investment
    went into it. I cannot understand the (business) goal. But it seems
    investment is possible - just not for bare metal :(.

    It seems fun to use. But I don't see for whom this effort is
    done - apart from hobbyist enthusiasts.

    Again, I'll grumble. Is it a real way to reach the new generations of
    developers, the Open Source world? With a non-open package that you get
    free for n months, and after that have to buy?

    I need more clarity about all that. Please.

    As usual, instructive things from Arne:
    (https://forum.vmssoftware.com/viewtopic.php?f=45&t=9622&sid=a0364371ebeaeaa5038908bfeb92b4da)

    Gérard Calliet
  • From Chris Townley@news@cct-net.co.uk to comp.os.vms on Wed Oct 29 15:19:28 2025

    On 29/10/2025 14:48, gcalliet wrote:
    Hello,

    We now have VMS/XDE (https://products.vmssoftware.com/vms-xde-beta). You
    can develop VMS applications on GNU/Linux.

    (I learned the term recently, and I apologize for the rudeness.) WTF?

    It seems to be a very good technical effort, so perhaps some investment
    went into it. I cannot understand the (business) goal. But it seems
    investment is possible - just not for bare metal :(.

    It seems fun to use. But I don't see for whom this effort is
    done - apart from hobbyist enthusiasts.

    Again, I'll grumble. Is it a real way to reach the new generations of
    developers, the Open Source world? With a non-open package that you get
    free for n months, and after that have to buy?

    I need more clarity about all that. Please.

    As usual, instructive things from Arne:
    (https://forum.vmssoftware.com/viewtopic.php?f=45&t=9622&sid=a0364371ebeaeaa5038908bfeb92b4da)

    Gérard Calliet

    I am interested, if only to see how it works, so will give the beta a
    try. Shame it doesn't support aarch64 - I did think of running it on a
    modern Pi!

    BTW my spell checker insists on changing your username to Metallica - I
    trust you like Rock!
    --
    Chris
  • From drb@drb@ihatespam.msu.edu (Dennis Boone) to comp.os.vms on Wed Oct 29 16:13:53 2025

    It seems to be a very good technical effort, so perhaps some investment
    went into it. I cannot understand the (business) goal. But it seems
    investment is possible - just not for bare metal :(.

    It seems fun to use. But I don't see for whom this effort is
    done - apart from hobbyist enthusiasts.

    This sort of thing seems to have worked pretty well for IBM and
    development for z/OS. It seems that even developers for such
    non-mainstream environments still want modern creature comforts.

    De
  • From Simon Clubley@clubley@remove_me.eisner.decus.org-Earth.UFP to comp.os.vms on Wed Oct 29 18:48:24 2025

    On 2025-10-29, Dennis Boone <drb@ihatespam.msu.edu> wrote:
    It seems to be a very good technical effort, so perhaps some investment
    went into it. I cannot understand the (business) goal. But it seems
    investment is possible - just not for bare metal :(.

    The (potential) business goal is obvious if you take a wide enough
    viewpoint and not just a VMS-specific one.


    It seems fun to use. But I don't see for whom this effort is
    done - apart from hobbyist enthusiasts.

    Actually, this kind of approach seems a perfectly normal option if you
    have any knowledge of embedded systems development. The main difference
    is that you are developing applications to run on top of that embedded
    system instead of pushing a system image via a JTAG port (for example).

    I've long thought VMS systems should be considered as some kind of
    a higher-level embedded system where applications are developed locally
    and then packaged up and pushed onto the target VMS system. It looks
    like VSI are moving in that same direction as well.

    And VSI would not be putting the effort into this unless customers
    had indicated interest for such an approach.

    This is exactly the kind of thing that VSI should be doing. Well done
    to them for actively exploring this approach.


    This sort of thing seems to have worked pretty well for IBM and
    development for z/OS. It seems that even developers for such
    non-mainstream environments still want modern creature comforts.


    This is the exact example I was going to use until you beat me to it. :-)

    When was the last time a 3270-class terminal was acceptable to developers
    as the only option for serious z/OS software development?

    And it's not just about creature comforts; it's about being able to do
    things more efficiently and quickly than is possible on the target
    system itself.

    Simon.
    --
    Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
    Walking destinations on a map are further away than they appear.
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Wed Oct 29 15:52:47 2025

    On 10/29/2025 2:48 PM, Simon Clubley wrote:
    On 2025-10-29, Dennis Boone <drb@ihatespam.msu.edu> wrote:
    It seems fun to use. But I don't see for whom this effort is
    done - apart from hobbyist enthusiasts.

    Actually, this kind of approach seems a perfectly normal option if you
    have any knowledge of embedded systems development. The main difference
    is that you are developing applications to run on top of that embedded
    system instead of pushing a system image via a JTAG port (for example).

    I've long thought VMS systems should be considered as some kind of
    a higher-level embedded system where applications are developed locally
    and then packaged up and pushed onto the target VMS system. It looks
    like VSI are moving in that same direction as well.

    This sort of thing seems to have worked pretty well for IBM and
    development for z/OS. It seems that even developers for such
    non-mainstream environments still want modern creature comforts.

    This is the exact example I was going to use until you beat me to it. :-)

    When was the last time a 3270-class terminal was acceptable to developers
    as the only option for serious z/OS software development?

    Developing on a different OS than the target is totally standard
    today.

    Yes - the 1/3 of development that is native code has some issues
    that need solutions.

    But the 2/3 of development that is non-native code (Java, .NET,
    Python, JavaScript, PHP etc.) just does it.

    The most common setup today must be development on Windows
    targeting Linux servers.

    Arne
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Wed Oct 29 15:59:14 2025

    On 10/29/2025 2:48 PM, Simon Clubley wrote:
    On 2025-10-29, Dennis Boone <drb@ihatespam.msu.edu> wrote:
    It seems to be a very good technical effort, so perhaps some investment
    went into it. I cannot understand the (business) goal. But it seems
    investment is possible - just not for bare metal :(.

    The (potential) business goal is obvious if you take a wide enough
    viewpoint and not just a VMS-specific one.

    And VSI would not be putting the effort into this unless customers
    had indicated interest for such an approach.

    Yes.

    I have never considered it a problem to transfer files
    between PC and VMS and to do some work on VMS (DCL
    commands, EVE editor) in a terminal window.

    If someone really wants a GUI, then DECwindows still
    works (even though the look and feel is 30+ years old).

    But what people who learned VMS when a real VT220
    or VT320 was "it" consider easy is not so relevant.

    I assume VSI must have heard from customers and ISVs
    that the ability to develop on a PC is important.

    Otherwise the investment in first the VMS IDE and
    now XDE would not make sense.

    Arne

  • From Chris Townley@news@cct-net.co.uk to comp.os.vms on Wed Oct 29 21:45:30 2025

    On 29/10/2025 19:52, Arne Vajhøj wrote:
    On 10/29/2025 2:48 PM, Simon Clubley wrote:
    On 2025-10-29, Dennis Boone <drb@ihatespam.msu.edu> wrote:
    It seems fun to use. But I don't see for whom this effort is
    done - apart from hobbyist enthusiasts.

    Actually, this kind of approach seems a perfectly normal option if you
    have any knowledge of embedded systems development. The main difference
    is that you are developing applications to run on top of that embedded
    system instead of pushing a system image via a JTAG port (for example).

    I've long thought VMS systems should be considered as some kind of
    a higher-level embedded system where applications are developed locally
    and then packaged up and pushed onto the target VMS system. It looks
    like VSI are moving in that same direction as well.

    This sort of thing seems to have worked pretty well for IBM and
    development for z/OS. It seems that even developers for such
    non-mainstream environments still want modern creature comforts.

    This is the exact example I was going to use until you beat me to it. :-)

    When was the last time a 3270-class terminal was acceptable to developers
    as the only option for serious z/OS software development?

    Developing on a different OS than the target is totally standard
    today.

    Yes - the 1/3 of development that is native code has some issues
    that need solutions.

    But the 2/3 of development that is non-native code (Java, .NET,
    Python, JavaScript, PHP etc.) just does it.

    The most common setup today must be development on Windows
    targeting Linux servers.

    Arne

    I have used PC editors (various) to edit source code on Linux for years,
    but with VMS I always used LSE for proper code. I never got LSE working
    well enough for DCL, so I normally used either EVE or sometimes a PC
    editor.

    I am looking forward to trying this; although my current VM host isn't
    yet up to it, I have just bought another mini PC to try it out.
    --
    Chris
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Wed Oct 29 19:03:07 2025

    On 10/29/2025 11:19 AM, Chris Townley wrote:
    On 29/10/2025 14:48, gcalliet wrote:
    We now have VMS/XDE (https://products.vmssoftware.com/vms-xde-beta).
    You can develop VMS applications on GNU/Linux.

    I am interested, if only to see how it works, so will give the beta a
    try. Shame it doesn't support aarch64 - I did think of running it on a modern Pi!

    I think given the architecture that would require VMS ARM64, which
    does not exist. Yet.

    Arne

  • From Chris Townley@news@cct-net.co.uk to comp.os.vms on Wed Oct 29 23:16:34 2025

    On 29/10/2025 23:03, Arne Vajhøj wrote:
    On 10/29/2025 11:19 AM, Chris Townley wrote:
    On 29/10/2025 14:48, gcalliet wrote:
    We now have VMS/XDE (https://products.vmssoftware.com/vms-xde-beta).
    You can develop VMS applications on GNU/Linux.

    I am interested, if only to see how it works, so will give the beta a
    try. Shame it doesn't support aarch64 - I did think of running it on a
    modern Pi!

    I think given the architecture that would require VMS ARM64, which
    does not exist. Yet.

    Arne


    Yep, I realised that, but misread the first bit of the PR - mea culpa.
    --
    Chris
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Wed Oct 29 19:25:26 2025

    On 10/29/2025 7:16 PM, Chris Townley wrote:
    On 29/10/2025 23:03, Arne Vajhøj wrote:
    On 10/29/2025 11:19 AM, Chris Townley wrote:
    On 29/10/2025 14:48, gcalliet wrote:
    We now have VMS/XDE (https://products.vmssoftware.com/vms-xde-beta).
    You can develop VMS applications on GNU/Linux.

    I am interested, if only to see how it works, so will give the beta a
    try. Shame it doesn't support aarch64 - I did think of running it on
    a modern Pi!

    I think given the architecture that would require VMS ARM64, which
    does not exist. Yet.

    Yep I realised that, but misread the first bit of PR - mea culpa

    That PR text is rather information-free.

    But Aleksandr has explained a little about how it works.

    Arne
  • From gcalliet@gerard.calliet@pia-sofer.fr to comp.os.vms on Thu Oct 30 09:19:05 2025

    On 29/10/2025 at 19:48, Simon Clubley wrote:
    The (potential) business goal is obvious if you take a wide enough
    viewpoint and not just a VMS-specific one.

    That's the point, Simon. And somehow Chris says the same thing comparing
    development for VMS and for z/OS.

    And again, if we accept your view of VMS as a kind of rich embedded
    OS, then VMS/XDE is worth it.

    And again and again, my view is and has always been VMS-specific. VMS as
    a distinctive general-purpose OS, indeed.

    It seems now that, because the strategy used by VSI or its investor has
    for ten years been copied from strategies for legacy OSes (like
    z/OS...), the option of a VMS revival as an alternative OS solution is
    almost dead.

    And so VMS/XDE is a good way of doing business for five or six years
    before the real death of VMS. (Because in my opinion there is no future
    for an embedded VMS: not its real market, and not competitive in the
    embedded market.)

    I heard at Malmö about "and sometime there will be a new VMS". As a
    Wine-like layer on Linux, and an interface to the Oracle cloud, I
    understand that the best new VMS is just business as usual with no VMS.

    As you can read, I'm a little bit upset, because as usual VSI makes
    strategic decisions without any consultation or explanation. For almost
    ten years, for example, we have explained that (real) open source
    integration (or bare metal) is important. And for ten years we have had
    the same answer: "the investment is not worth it". And now there is a
    big investment in invading the Linux world with non-open solutions (the
    VMS added value).

    Perhaps it's cool to develop something for VMS on Linux. But because the
    licensing is the same hostage-like licensing as for legacy systems, I'm
    not sure we'll get any interest from the new generations of developers.

    Gérard Calliet (the grumpy dwarf).
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Thu Oct 30 11:30:53 2025

    In article <10du7p7$38rht$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/29/2025 7:16 PM, Chris Townley wrote:
    On 29/10/2025 23:03, Arne Vajh|+j wrote:
    On 10/29/2025 11:19 AM, Chris Townley wrote:
    On 29/10/2025 14:48, gcalliet wrote:
    We now have VMS/XDE (https://products.vmssoftware.com/vms-xde-beta).
    You can develop VMS applications on GNU/Linux.

    I am interested, if only to see how it works, so will give the beta a
    try. Shame it doesn't support aarch64 - I did think of running it on
    a modern Pi!

    I think given the architecture that would require VMS ARM64, which
    does not exist. Yet.

    Yep I realised that, but misread the first bit of PR - mea culpa

    That PR text is rather information free.

    But Aleksandr has explained a little about how it works.

    I wonder how they implement system calls.

    I imagine this is mostly done with shared libraries; those bits
    that require access to the privileged instruction set are just
    "normal" functions that set a bit somewhere and do a jump, as
    opposed to a "SYSCALL" instruction or similar. For VMS this is
    actually reasonable, but I'm mildly surprised that they haven't
    done something like Dune or gVisor.
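    To make that concrete, here's a minimal sketch of the idea in C++
    built for Linux. All the names (xde_dispatch, SRV_GETTIM, the service
    number) are invented for illustration - this is speculation about the
    approach, not VSI's actual interface:

        // Hypothetical illustration only - all names are invented.
        // A "system service" built as an ordinary user-mode function in
        // a shared library: it tags the request and calls an emulation
        // dispatcher instead of executing a SYSCALL instruction.
        #include <cstdint>
        #include <cstdio>

        using vms_status = std::uint32_t;
        constexpr vms_status SS_NORMAL = 1;   // VMS success status

        constexpr int SRV_GETTIM = 1;         // invented service number

        // Stand-in for the emulation runtime's dispatcher.
        static vms_status xde_dispatch(int service, void *args)
        {
            std::printf("service %d requested\n", service);
            (void)args;
            return SS_NORMAL;                 // pretend the service succeeded
        }

        // What the application links against: it looks like SYS$GETTIM,
        // but it is just a normal function call - no privilege transition.
        extern "C" vms_status sys_gettim_stub(std::int64_t *timbuf)
        {
            return xde_dispatch(SRV_GETTIM, timbuf);
        }

        int main()
        {
            std::int64_t t = 0;
            return sys_gettim_stub(&t) == SS_NORMAL ? 0 : 1;
        }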

    - Dan C.

  • From Simon Clubley@clubley@remove_me.eisner.decus.org-Earth.UFP to comp.os.vms on Thu Oct 30 13:12:30 2025

    On 2025-10-30, gcalliet <gerard.calliet@pia-sofer.fr> wrote:
    On 29/10/2025 at 19:48, Simon Clubley wrote:
    The (potential) business goal is obvious if you take a wide enough
    viewpoint and not just a VMS-specific one.

    That's the point, Simon. And somehow Chris says the same thing comparing
    development for VMS and for z/OS.

    And again, if we accept your view of VMS as a kind of rich embedded
    OS, then VMS/XDE is worth it.

    And again and again, my view is and has always been VMS-specific. VMS as
    a distinctive general-purpose OS, indeed.


    You keep thinking about the world as it was 20 to 30 years ago, not how
    it is today. If VMS is to have any part in today's world, it needs to be
    in terms of how the world is today, not a quarter of a century ago.

    It seems now that, because the strategy used by VSI or its investor has
    for ten years been copied from strategies for legacy OSes (like
    z/OS...), the option of a VMS revival as an alternative OS solution is
    almost dead.


    z/OS is responsible for keeping a good portion of today's world running.
    I would hardly call that a legacy OS.

    And so VMS/XDE is a good way of doing business for five or six years
    before the real death of VMS. (Because in my opinion there is no future
    for an embedded VMS: not its real market, and not competitive in the
    embedded market.)


    Embedded refers to the development method, not the target market.
    Giving people the development tools they are asking for extends the
    life of VMS instead of reducing it.

    How many people still develop for z/OS directly on a 3270-class terminal
    instead of from a local PC?

    Simon.
    --
    Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
    Walking destinations on a map are further away than they appear.
  • From gcalliet@gerard.calliet@pia-sofer.fr to comp.os.vms on Thu Oct 30 20:05:06 2025

    On 30/10/2025 at 14:12, Simon Clubley wrote:
    On 2025-10-30, gcalliet <gerard.calliet@pia-sofer.fr> wrote:
    On 29/10/2025 at 19:48, Simon Clubley wrote:
    The (potential) business goal is obvious if you take a wide enough
    viewpoint and not just a VMS-specific one.

    That's the point, Simon. And somehow Chris says the same thing comparing
    development for VMS and for z/OS.

    And again, if we accept your view of VMS as a kind of rich embedded
    OS, then VMS/XDE is worth it.

    And again and again, my view is and has always been VMS-specific. VMS as
    a distinctive general-purpose OS, indeed.


    You keep thinking about the world as it was 20 to 30 years ago, not how
    it is today. If VMS is to have any part in today's world, it needs to be
    in terms of how the world is today, not a quarter of a century ago.

    I understand your way of thinking. But on my side I'm not thinking
    about today, but about tomorrow.

    In 2013 I would not have committed to VMS if I hadn't believed in a
    large future for VMS. I was thinking, as I am today, about the future.

    On my side I do think the issues of sustainability, sober energy
    consumption, and reusability will be very important. And it was because
    of the very long past of VMS, its sustainability qualities (think about
    backward compatibility, for example: a kind of intrinsic LTS), that I
    foresaw a very large future for VMS.

    VMS could have been a new way of thinking about legacies. I wrote
    something about that in a LinkedIn VMS group, and my post was pinned.
    So I think I'm not alone in thinking VMS-specific qualities could open
    another way of working with legacies. Summary ( :) ): the next
    quarter-century opened thanks to the previous quarter-century.

    But the way VSI has acted for VMS seems to be a very classic way of
    working with legacies: a centralized offer extracting the last possible
    energy while giving palliative care. The large number of sites
    abandoning VMS is a sign the strategy is not so good. And the way it
    has been impossible to integrate real open source work makes it
    impossible to attract new enthusiasts for VMS. The cloud offer may look
    like keeping up with today, but it is also a way of forgetting (local
    or multi-site) clustering sustainability. Chasing after today is a
    losing battle, when we could have invented the other today, linking the
    past and the future.

    I hope you are right, and that we'll be able to survive like z/OS. Even
    if it's difficult to resemble IBM when we used to be the alternative to
    IBM :).



    It seems now that, because the strategy used by VSI or its investor has
    for ten years been copied from strategies for legacy OSes (like
    z/OS...), the option of a VMS revival as an alternative OS solution is
    almost dead.


    z/OS is responsible for keeping a good portion of today's world running.
    I would hardly call that a legacy OS.

    And so VMS/XDE is a good way of doing business for five or six years
    before the real death of VMS. (Because in my opinion there is no future
    for an embedded VMS: not its real market, and not competitive in the
    embedded market.)


    Embedded refers to the development method, not the target market.
    Giving people the development tools they are asking for extends the
    life of VMS instead of reducing it.

    How many people still develop for z/OS directly on a 3270-class terminal
    instead of from a local PC?

    Simon.


  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Thu Oct 30 15:52:07 2025

    On 10/30/2025 4:19 AM, gcalliet wrote:
    That's the point, Simon. And somehow Chris says the same thing comparing
    development for VMS and for z/OS.

    And again, if we accept your view of VMS as a kind of rich embedded
    OS, then VMS/XDE is worth it.

    And again and again, my view is and has always been VMS-specific. VMS as
    a distinctive general-purpose OS, indeed.

    It seems now that, because the strategy used by VSI or its investor has
    for ten years been copied from strategies for legacy OSes (like
    z/OS...), the option of a VMS revival as an alternative OS solution is
    almost dead.

    And so VMS/XDE is a good way of doing business for five or six years
    before the real death of VMS. (Because in my opinion there is no future
    for an embedded VMS: not its real market, and not competitive in the
    embedded market.)

    Companies do not have computers (virtual or not) to run an OS.

    They have computers to run applications.

    Applications are created by developers.

    It does not matter if it is:
    * in-house developers
    * COTS developers at an ISV
    * open source developers

    (it is my impression that VMS very much relies on the first category
    for VMS sales)

    Remember when Steve Ballmer a couple of decades ago shouted "developers,
    developers, developers" at a .NET conference?

    He was laughed at, but he actually had a point.

    Developers are important for an OS!

    So if customers and potential ISVs are telling VSI that
    developers do not want to work on VMS but want to work
    on a PC, then VSI has to listen.

    It seems fair to assume that is what has happened.

    I am personally fine writing code in EVE in a VT emulator,
    or writing code on a PC and FTPing a ZIP up to VMS to build
    and test. I am sure you are fine with that as well. But the
    future is not with gray-haired people. The future is with
    the 20-40 year old developers.

    And if they want to use Eclipse or one of the JetBrains IDEs,
    then that is it.

    VMS is better off having developers writing code for VMS
    on a PC than having companies drop VMS because
    applications are not being developed.

    That said, I am not even sure that it is a technical
    thing - it may be a managerial thing. I don't think the current
    VMS usage model is difficult - I expect any young developer
    above the hopeless level to be able to use PuTTY,
    learn a few DCL commands, and FTP files between PC
    and VMS. But if management has a perception that it is
    a problem, then it is a business problem for VSI.

    Perhaps it's cool to develop something for VMS on Linux. But because the
    licensing is the same hostage-like licensing as for legacy systems, I'm
    not sure we'll get any interest from the new generations of developers.

    As I understand it, it is not about saving license costs, but
    about tool availability.

    VSI supports VS Code (VMS IDE), but developers are pretty
    diverse when it comes to favorite IDEs and editors. They
    want Eclipse, CLion, PyCharm, PhpStorm, GNU Emacs, Notepad++,
    Cursor, Zed etc. With this model developers can use their
    favorite tool.

    Arne


  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Thu Oct 30 15:59:22 2025

    On 10/30/2025 9:12 AM, Simon Clubley wrote:
    On 2025-10-30, gcalliet <gerard.calliet@pia-sofer.fr> wrote:
    It seems now that, because the strategy used by VSI or its investor has
    for ten years been copied from strategies for legacy OSes (like
    z/OS...), the option of a VMS revival as an alternative OS solution is
    almost dead.

    z/OS is responsible for keeping a good portion of today's world running.
    I would hardly call that a legacy OS.

    z/OS is still used for a lot of very important systems.

    But it is also an OS that companies are actively
    moving away from.

    Arne
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Thu Oct 30 22:26:34 2025

    On Thu, 30 Oct 2025 15:52:07 -0400, Arne Vajhøj wrote:

    Developers are important for an OS!

    Users attract developers, not so much the other way round.

    Look at iPhone versus Android: Apple's platform was seen as way cooler,
    and attracted more of the cool developers. So it got more apps. But
    Android offered a wider range of choice and out-of-the-box functionality.
    That attracted the users. It took years for Android to close the app gap;
    nevertheless, that wasn't enough to keep iPhone dominant.

    Remember Windows Phone? Microsoft was actually paying developers to put
    apps on its platform. But in its user experience it was trying too much to
    ape Apple, which is why it lost out to Android.
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Thu Oct 30 22:28:00 2025

    On Thu, 30 Oct 2025 15:59:22 -0400, Arne Vajhøj wrote:

    z/OS is still used for a lot of very important systems.

    But it is also an OS that companies are actively moving away from.

    Those z/OS systems will disappear, one way or the other: companies that
    persist in sticking with them will go out of business just that little
    bit more quickly ...
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Thu Oct 30 22:29:52 2025

    On Thu, 30 Oct 2025 09:19:05 +0100, gcalliet wrote:

    I heard at Malmö about "and sometime there will be a new VMS". As a
    Wine-like layer on Linux, and an interface to the Oracle cloud, I
    understand that the best new VMS is just business as usual with no VMS.

    If VSI had started on that path from the beginning -- not bother with a
    full native VMS port to x86-64 at all, but build an emulation layer on top
    of Linux -- they could have had a functional product a few years earlier,
    at much less cost, to offer to a larger customer base than remains now,
    for consequently greater profit.
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.os.vms on Sat Nov 1 16:40:00 2025

    In article <10e0omq$n2t$14@dont-email.me>, ldo@nz.invalid (Lawrence
    D'Oliveiro) wrote:

    Remember Windows Phone? Microsoft was actually paying developers to
    put apps on its platform. But in its user experience it was trying
    too much to ape Apple, which is why it lost out to Android.

    That wasn't the problem. The difficulty was that people didn't actually
    want to use Windows Phone.

    Microsoft wanted the user experience to be like desktop Windows, but
    since doing that directly was clearly impractical, they changed Windows
    (at Windows 8) to be their idea of a phone OS. And everyone hated the
    Windows 8 user interface and was thus put off Windows Phone.

    Microsoft tried to get my employer to offer our toolkit libraries for
    WinRT and Windows RT. We use a domain-specific language that compiles to
    C, not C++. It didn't appear to be possible to compile C for WinRT (or
    later, for Windows Store apps). The compiler options for that didn't work
    with C files. Microsoft insisted it was possible, but could never tell us
    how. We gave up on them, and stuck to producing ordinary Windows DLLs,
    Linux .so libraries and macOS dylibs.

    John
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Sat Nov 1 13:04:03 2025

    On 11/1/2025 12:40 PM, John Dallman wrote:
    In article <10e0omq$n2t$14@dont-email.me>, ldo@nz.invalid (Lawrence
    D'Oliveiro) wrote:
    Remember Windows Phone? Microsoft was actually paying developers to
    put apps on its platform. But in its user experience it was trying
    too much to ape Apple, which is why it lost out to Android.

    That wasn't the problem. The difficulty was that people didn't actually
    want to use Windows Phone.

    Microsoft wanted the user experience to be like desktop Windows, but
    since doing that directly was clearly impractical, they changed Windows
    (at Windows 8) to be their idea of a phone OS. And everyone hated the
    Windows 8 user interface and was thus put off Windows Phone.

    Everybody hated Windows 8.

    But there were actually people who liked WP.

    A phone UI works better on a phone than on a desktop.

    Microsoft tried to get my employer to offer our toolkit libraries for
    WinRT and Windows RT. We use a domain-specific language that compiles to
    C, not C++. It didn't appear to be possible to compile C for WinRT (or
    later, for Windows Store apps). The compiler options for that didn't
    work with C files. Microsoft insisted it was possible, but could never
    tell us how. We gave up on them, and stuck to producing ordinary
    Windows DLLs, Linux .so libraries and macOS dylibs.

    I think you would need to wrap:

    app --(call)--> C++/CX wrapper component (WinRT API in a
    .winmd file) --(call)--> C Win32 DLL
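    Something like this hedged sketch - ToolkitRT and tk_version are
    invented stand-ins, not the actual toolkit API. The wrapper is C++/CX
    (compiled with /ZW in a Windows Runtime component project), and the
    metadata for the ref class ends up in the component's .winmd file:

        // toolkit.h - the existing plain C Win32 DLL surface (stand-in):
        // extern "C" __declspec(dllimport) int tk_version(void);
        #include "toolkit.h"

        // wrapper.cpp - C++/CX Windows Runtime component. It publishes a
        // WinRT-visible class; the method body just forwards to the C DLL.
        namespace ToolkitRT
        {
            public ref class Toolkit sealed
            {
            public:
                int Version() { return tk_version(); }
            };
        }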

    Arne

  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Sat Nov 1 20:14:03 2025

    On Sat, 1 Nov 2025 16:40 +0000 (GMT Standard Time), John Dallman wrote:

    In article <10e0omq$n2t$14@dont-email.me>, ldo@nz.invalid (Lawrence
    D'Oliveiro) wrote:

    Remember Windows Phone? Microsoft was actually paying developers to put
    apps on its platform. But in its user experience it was trying too much
    to ape Apple, which is why it lost out to Android.

    That wasn't the problem. The difficulty was that people didn't actually
    want to use Windows Phone.

    And why was that? Because of the user experience.

    Microsoft wanted the user experience to be like desktop Windows, but
    since doing that directly was clearly impractical, they changed Windows
    (at Windows 8) to be their idea of a phone OS.

    The problem was, it wasn't even a very good "phone OS".

    Microsoft tried to get my employer to offer our toolkit libraries for
    WinRT and Windows RT.

    The two were different things, you realize.
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Sat Nov 1 20:18:47 2025

    On Sat, 1 Nov 2025 13:04:03 -0400, Arne Vajhøj wrote:

    But there were actually people that liked [Windows Phone].

    A phone UI works better on a phone than on a desktop.

    It didn't even work well there.

    Microsoft had a clever idea in "tiles", which were like a cross between
    regular "icons" and actual content-showing "windows". But as happens all
    too commonly with them, they botched the execution.

    I remember a clip of a Nokia executive demonstrating one of their new
    models. The screen had these "tiles" as usual, and as the exec talked,
    every now and then one of them (the mail app, I particularly remember)
    would do a backflip or some such animation.

    That's one way for an app to draw attention to itself, if it needs user
    attention. Trouble is, the mail app was doing this animation *even when
    no new messages had come in*. So it was just a gratuitous animation,
    serving no purpose.

    That kind of thing, plus all the limitations in OS functionality, just put
    the users off.
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Sat Nov 1 17:44:02 2025

    On 10/30/2025 6:26 PM, Lawrence D'Oliveiro wrote:
    On Thu, 30 Oct 2025 15:52:07 -0400, Arne Vajhøj wrote:
    Developers are important for an OS!

    Users attract developers, not so much the other way round.

    No applications mean no users. Nobody is interested in a platform
    with no applications.

    If a platform attracts enough developers to get enough applications
    that it sells well, then it becomes much easier to get even more
    developers, because they face a bigger market.

    Success creates more success. But there is an initial hurdle.

    Look at iPhone versus Android: Apple's platform was seen as way cooler,
    and attracted more of the cool developers. So it got more apps. But
    Android offered a wider range of choice and out-of-the-box functionality.
    That attracted the users. It took years for Android to close the app gap;
    nevertheless, that wasn't enough to keep iPhone dominant.

    It took some years before Android had more millions of apps
    than iOS.

    But having the most millions of apps does not matter. What matters
    is that the platform has the apps that are important.

    And it did not take long before most of the important
    apps supported both Android and iOS.

    Remember Windows Phone? Microsoft was actually paying developers to put
    apps on its platform. But in its user experience it was trying too much
    to ape Apple, which is why it lost out to Android.

    There were multiple reasons for WP's failure. But the most
    important was probably lack of apps.

    Lots of people did buy a WP device. Sales topped around 35
    million/year. Still way behind Android and iOS, but not bad.
    Problem was that they switched back to iOS and Android after
    1 or 2 WP devices.

    Reason was rarely that they did not like the UI. They had seen
    that and tried it before they bought the device. The reason was
    typically that they were missing the important apps.

    Companies decided to support iOS and Android. And when WP
    arrived there was little appetite to add a third. Each
    platform cost money (cross-platform like Cordova etc. never
    really caught on) and a frequent opinion was that
    2 was OK but 3-10 was too much.

    So on WP the app was either missing, or was an unsupported third-party
    app utilizing an API reverse engineered from Android/iOS, or the
    company did provide it but a few months later than on
    Android/iOS.

    People got tired of that. Even if they liked the phone
    as such.

    Arne
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Sat Nov 1 22:13:08 2025

    On Sat, 1 Nov 2025 17:44:02 -0400, Arne Vajhøj wrote:

    On 10/30/2025 6:26 PM, Lawrence D'Oliveiro wrote:

    On Thu, 30 Oct 2025 15:52:07 -0400, Arne Vajhøj wrote:

    Developers are important for an OS!

    Users attract developers, not so much the other way round.

    No applications mean no users. Nobody is interested in a platform
    with no applications.

    And yet Android succeeded, even as the leading developers turned up
    their noses at it. They much preferred Apple's platform.

    Look at iPhone versus Android: Apple's platform was seen as way
    cooler, and attracted more of the cool developers. So it got more
    apps. But Android offered a wider range of choice and
    out-of-the-box functionality. That attracted the users. It took
    years for Android to close the app gap; nevertheless, that wasn't
    enough to keep iPhone dominant.

    It took some years before Android had more millions of apps than
    iOS.

    But having the most millions of apps does not matter. What matters is
    that the platform has the apps that are important.

    Which ones were important in the beginning? The big ones on iPhone
    were simply not available on Android.

    And it did not take long before most of the important
    apps supported both Android and iOS.

    Remember Windows Phone? But in its user experience it was trying
    too much to ape Apple, which is why it lost out to Android.

    There were multiple reasons for WP's failure. But the most important
    was probably lack of apps.

    Microsoft was paying major developers to put apps on its platform.
    It didn't help.

    Lots of people did buy a WP device. Sales topped around 35
    million/year. Still way behind Android and iOS, but not bad.

    Why was Nokia, the leading Windows Phone device maker, losing money so
    badly, then?

    Companies decided to support iOS and Android.

    Initially it was only iOS. They only added Android *after* it became
    popular.
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Sat Nov 1 20:02:30 2025

    On 11/1/2025 4:14 PM, Lawrence D'Oliveiro wrote:
    On Sat, 1 Nov 2025 16:40 +0000 (GMT Standard Time), John Dallman wrote:
    Microsoft tried to get my employer to offer our toolkit libraries for
    WinRT and Windows RT.

    The two were different things, you realize.

    But deploying on one requires coding for the other, so ...

    Arne

  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Sat Nov 1 20:14:12 2025

    On 11/1/2025 6:13 PM, Lawrence D'Oliveiro wrote:
    On Sat, 1 Nov 2025 17:44:02 -0400, Arne Vajhøj wrote:
    But having the most millions of apps does not matter. What matters is
    that the platform has the apps that are important.

    Which ones were important in the beginning? The big ones on iPhone
    were simply not available on Android.

    Companies decided to support iOS and Android.

    Initially it was only iOS. They only added Android *after* it became
    popular.

    That is not reality.

    Companies started supporting Android very quickly.

    Numbers say: 10,000 apps after 1 year, 100,000 apps after 2 years.

    A half year after the app store launched, it was obvious that
    a company wanting to be on smartphones needed to support both.

    Arne

  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Sun Nov 2 01:30:26 2025

    On Sat, 1 Nov 2025 20:14:12 -0400, Arne Vajhøj wrote:

    On 11/1/2025 6:13 PM, Lawrence D'Oliveiro wrote:

    On Sat, 1 Nov 2025 17:44:02 -0400, Arne Vajhøj wrote:

    But having the most millions of apps does not matter. What matters is
    that the platform has the apps that are important.

    Which ones were important in the beginning? The big ones on iPhone
    were simply not available on Android.

    Companies decided to support iOS and Android.

    Initially it was only iOS. They only added Android *after* it became
    popular.

    That is not reality.

    Companies started supporting Android very quickly.

    No, they started supporting Windows Phone very quickly. Those
    professional pundits like Gartner and IDC predicted that it would
    dominate the smartphone market after a couple more years. That was the
    general expectation, for quite a long while.

    It took that long before reporters stopped treating Microsoft's new
    mobile announcements with the deepest respect, and started to realize
    that they weren't worth the electrons they were written on.

    Numbers say: 10,000 apps after 1 year, 100,000 apps after 2 years.

    Compared to how many for iOS?

    A half year after the app store launched, it was obvious that
    a company wanting to be on smartphones needed to support both.

    It took much longer than that.
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Sun Nov 2 01:34:06 2025

    On Sat, 1 Nov 2025 20:02:30 -0400, Arne Vajhøj wrote:

    On 11/1/2025 4:14 PM, Lawrence D'Oliveiro wrote:

    On Sat, 1 Nov 2025 16:40 +0000 (GMT Standard Time), John Dallman wrote:
    Microsoft tried to get my employer to offer our toolkit libraries for
    WinRT and Windows RT.

    The two were different things, you realize.

    But deploying on one requires coding for the other, so ...

    Did you know there is no mention of Windows RT in the Wikipedia article
    on WinRT <https://en.wikipedia.org/wiki/Windows_Runtime>?
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Sat Nov 1 21:44:05 2025

    On 11/1/2025 9:34 PM, Lawrence D'Oliveiro wrote:
    On Sat, 1 Nov 2025 20:02:30 -0400, Arne Vajhøj wrote:
    On 11/1/2025 4:14 PM, Lawrence D'Oliveiro wrote:
    On Sat, 1 Nov 2025 16:40 +0000 (GMT Standard Time), John Dallman wrote:
    Microsoft tried to get my employer to offer our toolkit libraries for
    WinRT and Windows RT.

    The two were different things, you realize.

    But deploying on one requires coding for the other, so ...

    Did you know there is no mention of Windows RT in the Wikipedia article
    on WinRT <https://en.wikipedia.org/wiki/Windows_Runtime>?

    That is true.

    But there are references in the other direction.

    https://en.wikipedia.org/wiki/Windows_RT links to
    https://en.wikipedia.org/wiki/Windows_Runtime.

    Arne


  • From Simon Clubley@clubley@remove_me.eisner.decus.org-Earth.UFP to comp.os.vms on Mon Nov 3 13:31:08 2025

    On 2025-10-30, Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/30/2025 9:12 AM, Simon Clubley wrote:
    On 2025-10-30, gcalliet <gerard.calliet@pia-sofer.fr> wrote:
    It seems now that, because the strategy used by VSI or its investor has
    for ten years been copied from strategies for legacy OSes (like
    z/OS...), the option of a VMS revival as an alternative OS solution is
    almost dead.

    z/OS is responsible for keeping a good portion of today's world running.
    I would hardly call that a legacy OS.

    z/OS is still used for a lot of very important systems.

    But it is also an OS that companies are actively
    moving away from.


    Interesting. I can see how some people on the edges might be considering
    such a move, but at the very core of the z/OS world are companies that
    I thought such a move would be absolutely impossible to consider.

    What are they moving to, and how are they satisfying the extremely high
    constraints both on software and hardware availability, failure
    detection, and recovery that z/OS and its underlying hardware provides?

    z/OS has a unique set of capabilities when it comes to the absolutely
    critical "this _MUST_ continue working or the country/company dies" area.

    In the VMS world, VMS disaster-tolerant clusters were literally a
    generation ahead of what everyone else had, as it took 20 years for
    rivals to be able to match the fully shared-everything disaster-tolerant
    functionality that VMS has.

    Likewise, to replace z/OS, any replacement hardware and software must
    also have the same unique capabilities that z/OS, and the hardware it
    runs on, has. What is the general ecosystem, at both the software and
    hardware level, that these people are moving to?

    Simon.
    --
    Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
    Walking destinations on a map are further away than they appear.
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Mon Nov 3 15:18:57 2025

    On 11/3/2025 8:31 AM, Simon Clubley wrote:
    On 2025-10-30, Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/30/2025 9:12 AM, Simon Clubley wrote:
    On 2025-10-30, gcalliet <gerard.calliet@pia-sofer.fr> wrote:
    It seems now that, because the strategy used by VSI or its investor has
    for ten years been copied from strategies for legacy OSes (like
    z/OS...), the option of a VMS revival as an alternative OS solution is
    almost dead.

    z/OS is responsible for keeping a good portion of today's world running.
    I would hardly call that a legacy OS.

    z/OS is still used for a lot of very important systems.

    But it is also an OS that companies are actively
    moving away from.


    Interesting. I can see how some people on the edges might be considering
    such a move, but at the very core of the z/OS world are companies that
    I thought such a move would be absolutely impossible to consider.

    Everybody is considering it.

    Most have started the migration process.

    Few have completed the migration process (the only major bank
    to have done so is probably Capital One).

    But to illustrate the general sentiment, see these.

    https://www.datacenterdynamics.com/en/news/jpmorgan-spent-2bn-on-new-data-centers-in-2021-and-plans-to-spend-more/

    Jamie Dimon 2022:

    <quote>
    Asked to give more detail on the technology expenditure, Dimon said the
    company's credit card business runs applications on a mainframe in an
    old data center which are going to be moved to the cloud: "Card runs a
    mainframe, which is quite good," he said. The mainframe handles 60
    million accounts efficiently and economically, and has been updated
    recently, he said: "But it's a mainframe system in the old data center."

    Moving to the cloud is not about savings, he said: "When it gets
    modernized, to the cloud, the cost savings by running that and
    marginalizing it will be $30 million or $40 million a year. I want the
    $30 million. [But] that isn't the reason we're doing it."

    In the cloud, the data can be fed to applications looking at its risk,
    marketing, fraud, and real-time offers. These can be added more
    rapidly than on a mainframe, which can only be modified occasionally:
    "You touch a mainframe system, you've got to be a little careful when
    you go into it to make some modifications. In the old days, you used to
    modify that mainframe system four times a year. Now you can go in and
    modernize a little piece of it every week, every day."
    </quote>

    https://www.jpmorganchase.com/ir/annual-report/2023/ar-ceo-letters

    Jamie Dimon 2023:

    <quote>
    OUR JOURNEY TO THE CLOUD

    Getting our technology to the cloud - whether the public cloud or the
    private cloud - is essential to fully maximize all of our capabilities,
    including the power of our data. The cloud offers many benefits: 1) it
    accelerates the speed of delivery of new services; 2) it simultaneously
    reduces the cost of compute power and enables, when needed, an
    extraordinary amount of compute capability - called burst computing; 3)
    it provides that compute capability across all of our data; and 4) it
    allows us to be able to constantly and quickly adopt new technologies
    because updated cloud services are continually being added - more so in
    the public cloud, where we benefit from the innovation that all cloud
    providers create, than in the private cloud, where innovation is only
    our own.

    Of course, we are learning a lot along the way. For example, we know we
    should carefully pick which applications and which data go to the
    public cloud versus the private cloud because of the expense, security
    and capabilities required. In addition, it is critical that we
    eventually use multiple clouds to avoid lock-in. And we intend to
    maintain our own expertise so that we're never reliant on the expertise
    of others even if that requires additional money.

    We invested approximately $2 billion to build four new, modern, private
    cloud-based, highly reliable and efficient data centers in the United
    States (we have 32 data centers globally). To date, about 50% of our
    applications run a large part of their processing in the public or
    private cloud. Approximately 70% of our data is now running in the
    public or private cloud. By the end of 2024, we aim to have 70% of
    applications and 75% of data moved to the public or private cloud. The
    new data centers are around 30% more efficient than our existing legacy
    data centers. Going to the public cloud can provide 30% additional
    efficiency if done correctly (efficiency improves when your data and
    applications have been modified, or "refactored," to enable new cloud
    services). We have been constantly updating most of our global data
    centers, and by the end of this year, we can start closing some that
    are larger, older and less efficient.
    </quote>

    The financial world is changing.

    It is practically guaranteed that many migration projects
    will get delayed.

    But they are migrating.

    Arne


  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Mon Nov 3 15:28:12 2025

    On 11/3/2025 8:31 AM, Simon Clubley wrote:
    On 2025-10-30, Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/30/2025 9:12 AM, Simon Clubley wrote:
    z/OS is responsible for keeping a good portion of today's world running.
    I would hardly call that a legacy OS.

    z/OS is still used for a lot of very important systems.

    But it is also an OS that companies are actively
    moving away from.

    Interesting. I can see how some people on the edges might be considering
    such a move, but at the very core of the z/OS world are companies that
    I thought such a move would be absolutely impossible to consider.

    What are they moving to, and how are they satisfying the extremely high
    constraints both on software and hardware availability, failure
    detection, and recovery that z/OS and its underlying hardware provides?

    z/OS has a unique set of capabilities when it comes to the absolutely
    critical "this _MUST_ continue working or the country/company dies" area.

    Likewise, to replace z/OS, any replacement hardware and software must
    also have the same unique capabilities that z/OS, and the hardware it
    runs on, has. What is the general ecosystem, at both the software and
    hardware level, that these people are moving to?

    Mainframes were unique in the last century regarding integrity,
    availability and performance, but not today.

    A standard distributed environment: load-sharing (horizontally scaling)
    applications, standard RDBMSes with transaction and XA transaction
    support, auto-scaling VM or container solutions, and massively
    scalable NoSQL databases.

    It can be made to work.

    It can also be made not to work, but ....

    Arne


  • From Simon Clubley@clubley@remove_me.eisner.decus.org-Earth.UFP to comp.os.vms on Tue Nov 4 13:59:59 2025

    On 2025-11-03, Arne Vajhøj <arne@vajhoej.dk> wrote:

    Mainframes were unique in the last century regarding integrity,
    availability and performance, but not today.

    A standard distributed environment: load-sharing (horizontally scaling)
    applications, standard RDBMSes with transaction and XA transaction
    support, auto-scaling VM or container solutions, and massively
    scalable NoSQL databases.

    It can be made to work.


    It can also be made to _appear_ to work. And probably will, at least in
    the short term.

    It can also be made not to work, but ....


    I was aware this was going on, but not to this level. So, in the name of
    {short term whatever}, yet another chunk of the critical infrastructure
    that keeps this planet running is in the process of being added to the
    massive monoculture that is a single point of failure when a vulnerability
    or flaw is discovered. :-(

    People thought the public cloud service failures were bad. That's going
    to be nothing compared to what happens if an enemy (state level or
    otherwise) decides to cripple our way of life and now has massive nice
    juicy targets to take down, all of which are running the same
    technology infrastructure.

    These people are thinking about how they can make profit for their
    companies in the short term. I'm thinking that perhaps society should
    instead be forcing them to design things so that they can keep society
    running even when they are under attack.

    A society that allows critical systems to move towards a single
    monoculture without any backup systems or other redundancy is a society
    that has lost the plot.

    When the STS computers were being designed, NASA went through a massive
    formal process to validate and verify them. Even after all that, they
    _still_ added a 5th computer system, designed by a different team, in
    case something happened to the primary systems that they had missed.

    If you are important enough to provide services that help keep society
    running, then you should be forced to do the same. The question isn't
    about how much this extra infrastructure costs, but is instead about the
    cost to society if you don't do it.

    I've been thinking quite a bit recently about just how bad monocultures
    and short-term thinking can be from the point of view of a society
    being able to continue functioning. Just look at the massive damage
    done by attacks on major companies here in the UK over the last year,
    all of which should not have had single points of failure like that. :-(

    Simon.
    --
    Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
    Walking destinations on a map are further away than they appear.
  • From Subcommandante XDelta@vlf@star.enet.dec.com to comp.os.vms on Wed Nov 5 07:57:31 2025

    On 5/11/2025 12:59 am, Simon Clubley wrote:
    On 2025-11-03, Arne Vajhøj <arne@vajhoej.dk> wrote:

    Mainframes were unique in the last century regarding integrity,
    availability and performance, but not today.

    A standard distributed environment: load-sharing (horizontally scaling)
    applications, standard RDBMSes with transaction and XA transaction
    support, auto-scaling VM or container solutions, and massively
    scalable NoSQL databases.

    It can be made to work.


    It can also be made to _appear_ to work. And probably will, at least in
    the short term.

    It can also be made not to work, but ....

    :
    :

    I've been thinking quite a bit recently about just how bad monocultures
    and short-term thinking can be from the point of view of keeping society
    functioning. Just look at the massive damage done by attacks on major
    companies here in the UK over the last year, none of which should have
    had single points of failure like that. :-(

    Simon.


    Steady on, old chap, going on like that, about the cloud-computing
    clown-car, will get you setting up a chapter, cluster node of the VMS
    Generations group, tout de suite, stat! :-)
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.os.vms on Tue Nov 4 22:17:00 2025
    From Newsgroup: comp.os.vms

    In article <10e5ei3$1dacc$1@dont-email.me>, arne@vajhoej.dk (Arne Vajhoj) wrote:

    I think you would need to wrap:

    --(call)-->[WinRT API in .winmd file]C++/CX wrapper
    component--(call)-->C Win32 DLL

    If they'd told us that, we'd have considered it. But they just insisted
    we could compile directly, without telling us how.

    For a long time, you couldn't use the full C/C++ run-time in a Windows
    Store app. They eventually changed that, and at the same time allowed
    ordinary WIN32 apps into the store. So all interest in producing apps
    that complied with the Windows Store limitations vanished.

    C++/CX may have been the dialect that one of their consultants insisted
    we had to support, but could not tell us why, except that customers would
    want it. Since the customer who wanted the product in question didn't
    want it, we were sceptical. Eventually he admitted that he got bonuses
    for getting ISVs to do this, so we stopped listening to him.

    John
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Tue Nov 4 22:25:56 2025
    From Newsgroup: comp.os.vms

    On Tue, 4 Nov 2025 22:17 +0000 (GMT Standard Time), John Dallman wrote:

    For a long time, you couldn't use the full C/C++ run-time in a Windows
    Store app.

    The story of Microsoft: every important-seeming new idea turns out to have
    odd limitations that discourage users and developers from embracing it. By
    the time they fix those limitations (if ever), the punters have lost
    interest.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Tue Nov 4 19:13:36 2025
    From Newsgroup: comp.os.vms

    On 11/4/2025 5:17 PM, John Dallman wrote:
    In article <10e5ei3$1dacc$1@dont-email.me>, arne@vajhoej.dk (Arne Vajhøj) wrote:
    I think you would need to wrap:

    --(call)-->[WinRT API in .winmd file]C++/CX wrapper
    component--(call)-->C Win32 DLL

    If they'd told us that, we'd have considered it. But they just insisted
    we could compile directly, without telling us how.

    For a long time, you couldn't use the full C/C++ run-time in a Windows
    Store app. They eventually changed that, and at the same time allowed ordinary WIN32 apps into the store. So all interest in producing apps
    that complied with the Windows Store limitations vanished.

    C++/CX may have been the dialect that one of their consultants insisted
    we had to support, but could not tell us why, except that customers would want it. Since the customer who wanted the product in question didn't
    want it, we were sceptical. Eventually he admitted that he got bonuses
    for getting ISVs to do this, so we stopped listening to him.

    C++/CX is a C++ extension with builtin support for WinRT just like
    C++/CLI is a C++ extension with support for .NET.

    Supposedly they share a lot of syntax in the extended part - just
    do different things.

    Neither is widely used.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Tue Nov 4 19:34:38 2025
    From Newsgroup: comp.os.vms

    On 11/4/2025 8:59 AM, Simon Clubley wrote:
    I was aware this was going on, but not to this level. So, in the name of
    {short term whatever}, yet another chunk of the critical infrastructure
    that keeps this planet running is in the process of being added to the
    massive monoculture that is a single point of failure when a vulnerability
    or flaw is discovered. :-(

    People thought the public cloud service failures were bad. That's going
    to be nothing compared to what happens if an enemy (state level or
    otherwise) decides to cripple our way of life and now has massive, juicy
    targets to take down, all of which are running the same technology
    infrastructure.

    These people are thinking about how they can make profit for their
    companies in the short term. I'm thinking that perhaps society should
    instead be forcing them to design things so that they can keep society
    running even when they are under attack.

    A society that allows critical systems to move towards a single
    monoculture without any backup systems or other redundancy is a society
    that has lost the plot.

    When the STS computers were being designed, NASA went through a massive
    formal process to validate and verify them. Even after all that, they
    _still_ added a fifth computer system, designed by a different team, in
    case something they had missed happened to the primary systems.

    If you are important enough to provide services that help keep society
    running, then you should be forced to do the same. The question isn't
    about how much this extra infrastructure costs, but is instead about the
    cost to society if you don't do it.

    I've been thinking quite a bit recently about just how bad monocultures
    and short-term thinking can be from the point of view of keeping society
    functioning. Just look at the massive damage done by attacks on major
    companies here in the UK over the last year, none of which should have
    had single points of failure like that. :-(

    When the fixed part of the cost for an instance of a type of
    product increases relative to the total market revenue for
    that type of product, then the number of instances of that
    type of product goes down. The reality of market economics.

    It has hit the lower levels of tech stacks pretty hard.
    No real monopolies but not that many options.

    Main players:

    cloud vendors: AWS, Azure, GCP, OCI
    servers: Dell, HPE, Lenovo
    CPU: x86-64, ARM64
    OS: Linux, Windows
    Virtualization: ESXi, KVM, Hyper-V
    Containers: Kubernetes, Docker Swarm

    More options when we go to the higher levels in the
    tech stacks.

    The lower levels do have security vulnerabilities. Usually
    harder to exploit than the higher level ones, but for a
    state actor ready to do something like Stuxnet, then ...

    I believe JPM is spreading out a bit with both private cloud
    and multiple public cloud vendors.

    But I am sure that VSI would be happy if JPM decided
    to run some VMS systems as part of OS diversification.

    :-)

    And a Spring Boot micro-service should run fine on VMS
    (but not Spring Boot Native, as GraalVM does not support VMS).
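
    A minimal sketch of such a service - ordinary JVM Spring Boot, assuming
    only the usual spring-boot-starter-web dependency; the class and endpoint
    names are made up for illustration:

    // Plain JVM bytecode - no GraalVM native-image involved, which is
    // why the ordinary (non-Native) flavour is the one that can work on VMS.
    package demo;

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @SpringBootApplication
    @RestController
    public class HelloService {
        @GetMapping("/hello")
        public String hello() {
            return "hello from VMS\n";
        }

        public static void main(String[] args) {
            SpringApplication.run(HelloService.class, args);
        }
    }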

    Arne



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Simon Clubley@clubley@remove_me.eisner.decus.org-Earth.UFP to comp.os.vms on Wed Nov 5 13:25:30 2025
    From Newsgroup: comp.os.vms

    On 2025-11-04, Subcommandante XDelta <vlf@star.enet.dec.com> wrote:

    Steady on, old chap, going on like that, about the cloud-computing
    clown-car, will get you setting up a chapter, cluster node of the VMS
    Generations group, tout de suite, stat! :-)

    Cloud computing has its place, in some situations at least, provided
    it is only a _part_ of a larger ecosystem and _if_ there are disaster
    recovery procedures in place for when it becomes unavailable.

    It is not a method by which organisations can offload their responsibility
    to run secure and highly available computing services.

    BTW, in terms of new types of attacks, I am just waiting to see how long
    it takes for the current US administration to realise they can force
    compliance in some other countries by issuing a threat to disable the
    US-based computing infrastructure those other countries use unless they
    comply with US demands.

    For example, do any of the South American countries which are
    currently a US target have a US-based cloud infrastructure ?

    Given the way other countries have capitulated recently after initially standing up to the US, I suspect there are now people within the US administration emboldened enough to think they could actually get away
    with that. :-(

    Simon.
    --
    Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
    Walking destinations on a map are further away than they appear.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bill@bill.gunshannon@gmail.com to comp.os.vms on Thu Nov 6 08:44:53 2025
    From Newsgroup: comp.os.vms

    On 11/5/2025 8:25 AM, Simon Clubley wrote:
    On 2025-11-04, Subcommandante XDelta <vlf@star.enet.dec.com> wrote:

    Steady on, old chap, going on like that, about the cloud-computing
    clown-car, will get you setting up a chapter, cluster node of the VMS
    Generations group, tout de suite, stat! :-)

    Cloud computing has its place, in some situations at least, provided
    it is only a _part_ of a larger ecosystem and _if_ there are disaster
    recovery procedures in place for when it becomes unavailable.


    Cloud computing. That system where you hand all your data over to
    someone you have absolutely no basis to trust.

    When you give your data to someone else it is no longer your data.

    "Two people can keep a secret, as long as one of them is dead."

    bill


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Fri Nov 7 14:01:48 2025
    From Newsgroup: comp.os.vms

    In article <10eaaqr$2sqg0$1@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    On 2025-10-30, Arne Vajhoj <arne@vajhoej.dk> wrote:
    On 10/30/2025 9:12 AM, Simon Clubley wrote:
    On 2025-10-30, gcalliet <gerard.calliet@pia-sofer.fr> wrote:
    It seems now, because the strategy used by VSI or its investor has been
    for ten years a strategy copied from strategies for legacy OSes (like
    z/os...), the option of a VMS revival as an alternate OS solution is
    almost dead.

    z/OS is responsible for keeping a good portion of today's world running.
    I would hardly call that a legacy OS.

    z/OS is still used for a lot of very important systems.

    But it is also an OS that companies are actively
    moving away from.


    Interesting. I can see how some people on the edges might be considering
    such a move, but at the very core of the z/OS world are companies that
    I thought such a move would be absolutely impossible to consider.

    What are they moving to, and how are they satisfying the extremely high
    constraints both on software and hardware availability, failure detection,
    and recovery that z/OS and its underlying hardware provides ?

    z/OS has a unique set of capabilities when it comes to the absolutely
    critical this _MUST_ continue working or the country/company dies area.

    I'm curious: what, in your view, are those capabilities?

    In the VMS world, VMS disaster tolerant clusters were literally a generation
    ahead of what everyone else had as it took 20 years for rivals to be able
    to match the fully shared-everything disaster tolerant functionality that
    VMS has.

    I adore the VMS model, but at this point, I think it is fair to
    say that it comes from an era where providing those capabilities
    at the OS layer was critical. Now, the OS has effectively
    become a commodity (as has the hardware, for that matter) while
    those capabilities are provided at the application and
    infrastructure layer. In that world, these things being
    integrated at the OS layer matters much less.

    I remember being somewhat
    aghast when I realized that all of the infrastructure we'd
    built out for distributed authentication and authorization was
    totally irrelevant to the web applications people were building
    on those systems. Similarly, all of the distributed filesystem
    infrastructure and so forth just didn't matter anymore, because
    the way people consumed and used data as mediated by a browser
    was fundamentally different than the host-based interactive
    environment.

    Likewise, to replace z/OS, any replacement hardware and software must also
    have the same unique capabilities that z/OS, and the hardware it runs on,
    has. What is the general ecosystem, at both software and hardware level,
    that these people are moving to ?

    I think a bigger issue is lock-in. We _know_ how to build
    performant, reliable, distributed systems. What we don't seem
    able to collectively do is migrate away from 50 years of history
    with proprietary technology. Mainframes work, they're reliable,
    and they're low-risk. It's dealing with the ISAM, CICS, VTAM,
    DB2, COBOL extensions, etc, etc, etc, that are slowing migration
    off of them because that's migrating to a fundamentally
    different model, which is both hard and high-risk.

    As for the cloud, the number of organizations moving back
    on-prem for very good reasons shouldn't be discounted.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Simon Clubley@clubley@remove_me.eisner.decus.org-Earth.UFP to comp.os.vms on Mon Nov 10 14:12:14 2025
    From Newsgroup: comp.os.vms

    On 2025-11-07, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    In article <10eaaqr$2sqg0$1@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    On 2025-10-30, Arne Vajhoj <arne@vajhoej.dk> wrote:
    On 10/30/2025 9:12 AM, Simon Clubley wrote:
    On 2025-10-30, gcalliet <gerard.calliet@pia-sofer.fr> wrote:
    It seems now, because the strategy used by VSI or its investor has been
    for ten years a strategy copied from strategies for legacy OSes (like
    z/os...), the option of a VMS revival as an alternate OS solution is
    almost dead.

    z/OS is responsible for keeping a good portion of today's world running.
    I would hardly call that a legacy OS.

    z/OS is still used for a lot of very important systems.

    But it is also an OS that companies are actively
    moving away from.


    Interesting. I can see how some people on the edges might be considering
    such a move, but at the very core of the z/OS world are companies that
    I thought such a move would be absolutely impossible to consider.

    What are they moving to, and how are they satisfying the extremely high
    constraints both on software and hardware availability, failure detection,
    and recovery that z/OS and its underlying hardware provides ?

    z/OS has a unique set of capabilities when it comes to the absolutely
    critical this _MUST_ continue working or the country/company dies area.

    I'm curious: what, in your view, are those capabilities?


    That's a good question. I am hard pressed to identify one single feature,
    but I can identify a range of features that, when combined, help to
    produce a solid, robust system for mission-critical computing.

    For example, I like the predictive failure analysis capabilities (and I wish VMS had something like that).

    I like the multiple levels of hardware failure detection and automatic
    recovery without system downtime.

    I like the way the hardware and z/OS and layered products software are
    tightly integrated into a coherent whole.

    I like the way the software was designed with a very tight single-minded
    focus on providing certain capabilities in highly demanding environments instead of in some undirected rambling evolution.

    I like the way the hardware and software have evolved, in a designed way,
    to address business needs, without becoming bloated (unlike modern software
    stacks). A lean system has many fewer failure points and fewer points of
    vulnerability than a bloated system.

    I like the whole CICS transaction functionality and failure recovery model.

    Likewise, to replace z/OS, any replacement hardware and software must also
    have the same unique capabilities that z/OS, and the hardware it runs on,
    has. What is the general ecosystem, at both software and hardware level,
    that these people are moving to ?

    I think a bigger issue is lock-in. We _know_ how to build
    performant, reliable, distributed systems. What we don't seem
    able to collectively do is migrate away from 50 years of history
    with proprietary technology. Mainframes work, they're reliable,
    and they're low-risk. It's dealing with the ISAM, CICS, VTAM,
    DB2, COBOL extensions, etc, etc, etc, that are slowing migration
    off of them because that's migrating to a fundamentally
    different model, which is both hard and high-risk.


    Question: are they low-risk because they were designed to do one thing
    and to do it very well in extremely demanding environments ?

    Are the replacements higher-risk because they are more of a generic infrastructure and the mission critical workloads need to be force-fitted
    into them ?

    BTW, what is the general replacement for CICS transaction processing and
    how does the replacement functionality compare to CICS ?

    As for the cloud, the number of organizations moving back
    on-prem for very good reasons shouldn't be discounted.


    Yes, and I hope the latest batch of critical system movers do not
    repeat those same mistakes.

    Simon.
    --
    Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
    Walking destinations on a map are further away than they appear.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bill@bill.gunshannon@gmail.com to comp.os.vms on Mon Nov 10 10:19:46 2025
    From Newsgroup: comp.os.vms

    On 11/10/2025 9:12 AM, Simon Clubley wrote:


    Question: are they low-risk because they were designed to do one thing
    and to do it very well in extremely demanding environments ?

    Are the replacements higher-risk because they are more of a generic infrastructure and the mission critical workloads need to be force-fitted into them ?


    And here you finally hit the crux of the matter.
    People wonder why I am still a strong supporter of COBOL.
    The reason is simple. It was a language designed to do
    a particular task and it does it well. Now we have this
    desire to replace it with something generic. I feel this
    is a bad idea.

    Think of IBM as the same problem, only on a much grander scale.
    Not just a language but a whole system with a target in mind.
    And today you have people suggesting they replace that system
    with something totally generic. Why would that be a good idea?

    And then we get back to the cloud. When you hand your data over
    to a third party, it is no longer your data. The term Zero Trust
    is bandied about all the time. And yet people agree to trust all
    their data and even their business itself in the hands of someone
    with no earned trust and nothing to lose in the event of failure.

    Not all change from the past is progress.

    bill



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Mon Nov 10 15:35:48 2025
    From Newsgroup: comp.os.vms

    On 11/10/2025 9:12 AM, Simon Clubley wrote:
    In article <10eaaqr$2sqg0$1@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    z/OS has a unique set of capabilities when it comes to the absolutely
    critical this _MUST_ continue working or the country/company dies area.

    I like the whole CICS transaction functionality and failure recovery model.

    BTW, what is the general replacement for CICS transaction processing and
    how does the replacement functionality compare to CICS ?

    (if someone really likes CICS then TXSeries should provide a lot of the
    CICS functionality for AIX/Linux/Windows)

    CICS is basically just an application server with a transaction monitor supporting transactional components.

    At the very high level VMS ACMS had (has) a similar role.

    Once upon a time that was state of the art functionality.

    But times have changed.

    Lots of options to deploy transactional components today.

    Various platform software comes with transactional support: practically
    all relational databases, many NoSQL databases (including MongoDB and BDB),
    message queue servers (RabbitMQ, ActiveMQ/ArtemisMQ, Kafka, etc.) and more.

    What that means is that it is trivial for practically any language
    (including script languages like PHP and Python!) to do transactions
    for a single data source.
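
    For example, a minimal single-data-source transaction in JDBC - the URL,
    credentials, and account table are made up for illustration:

    import java.math.BigDecimal;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class SingleSourceTx {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/test", "user", "secret")) {
                con.setAutoCommit(false);            // start the transaction
                try (PreparedStatement debit = con.prepareStatement(
                         "UPDATE account SET balance = balance - ? WHERE id = ?");
                     PreparedStatement credit = con.prepareStatement(
                         "UPDATE account SET balance = balance + ? WHERE id = ?")) {
                    debit.setBigDecimal(1, new BigDecimal("100.00"));
                    debit.setInt(2, 1);
                    debit.executeUpdate();
                    credit.setBigDecimal(1, new BigDecimal("100.00"));
                    credit.setInt(2, 2);
                    credit.executeUpdate();
                    con.commit();                    // both updates or neither
                } catch (Exception e) {
                    con.rollback();                  // one source, one rollback
                    throw e;
                }
            }
        }
    }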

    For multiple data sources XA transactions have been standardized, so with
    data sources and client libraries supporting XA transactions (the above
    list minus SQLite, MongoDB and Kafka) plus a transaction monitor, it works.

    It is not even difficult to do in languages like Java, C# etc..

    For those that do not like XA transactions, or whose platform software
    does not support them, there is the SAGA pattern and its compensating
    transaction model.

    Yes, that requires some coding, but it is a model that has been implemented
    hundreds of thousands of times, so developers (working on that type of
    application) know how to do it. A toy sketch follows.
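
    The Step interface and names below are invented, not from any particular
    framework: run each local transaction in order, and on failure run the
    compensations for the completed steps in reverse.

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class SagaSketch {
        interface Step {
            void execute();     // local transaction on one data source
            void compensate();  // semantic undo of that local transaction
        }

        static void runSaga(Step... steps) {
            Deque<Step> done = new ArrayDeque<>();
            try {
                for (Step s : steps) {
                    s.execute();
                    done.push(s);              // remember for compensation
                }
            } catch (RuntimeException e) {
                while (!done.isEmpty()) {
                    done.pop().compensate();   // undo in reverse order
                }
                throw e;
            }
        }
    }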

    Arne


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Mon Nov 10 15:37:27 2025
    From Newsgroup: comp.os.vms

    On 11/10/2025 9:12 AM, Simon Clubley wrote:
    On 2025-11-07, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    As for the cloud, the number of organizations moving back
    on-prem for very good reasons shouldn't be discounted.

    Yes, and I hope the latest batch of critical system movers do not
    repeat those same mistakes.

    Lots of companies migrate off the cloud, and yet really no companies
    migrate off the cloud.

    It depends on exactly what we are talking about.

    The traditional model from 20 years ago - a data center with:
    * a mainframe
    * some commercial Unixes on a RISC platform
    * some relatively unique Linux and Windows servers on x86-64 or
    ESXi on x86-64

    The new model with:
    * Linux containers deployed to a Kubernetes cluster
    * standardized Linux VMs

    Nobody is going back from the new model to the traditional model.

    Everybody is staying with the new model.

    But a significant number of companies are migrating either fully or
    partially from public cloud to private cloud.

    Still the new model. But the companies own the hardware instead of Amazon/Microsoft/Google/Oracle owning it. And the companies manage their
    own platform software instead of using managed services.

    The reasons are not technical but:
    1) Cost - if the company has a relatively stable workload and the
    necessary in-house IT expertise, then it is often lower cost.
    2) Regulatory issues (data cannot leave the state/country) or political
    reasons (dependency on another country).

    But the practical change is often small.

    One of the most public migrations is that of 37signals. Not because they
    are a huge company, but because DHH is a very public person. They moved off
    AWS and onto their own servers. Servers that they paid for, but servers
    that are hosted in a colocation data center and were installed by a third
    party company. It is quite possible that no 37signals employee has ever
    been in the same room as their servers. From a technical perspective there
    is not so big a difference between an AWS data center and a colocation data
    center, but they save money by paying Dell once instead of Amazon monthly!

    Arne



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Mon Nov 10 15:43:50 2025
    From Newsgroup: comp.os.vms

    On 11/10/2025 10:19 AM, bill wrote:
    On 11/10/2025 9:12 AM, Simon Clubley wrote:
    Question: are they low-risk because they were designed to do one thing
    and to do it very well in extremely demanding environments ?

    Are the replacements higher-risk because they are more of a generic
    infrastructure and the mission critical workloads need to be force-fitted
    into them ?

    And here you finally hit the crux of the matter.
    People wonder why I am still a strong supporter of COBOL.
    The reason is simple. It was a language designed to do
    a particular task and it does it well. Now we have this
    desire to replace it with something generic. I feel this
    is a bad idea.

    Think of IBM as the same problem, only on a much grander scale.
    Not just a language but a whole system with a target in mind.
    And today you have people suggesting they replace that system
    with something totally generic. Why would that be a good idea?

    I cannot follow your argument.

    You have a problem that requires feature A.

    You have X with feature A and Y with features A and B.

    Y is not less suited for the problem just because it has
    extra features.

    Y is more expensive to develop and maintain.

    Y is harder to learn.

    I am all for simplicity, but for different reasons.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.os.vms on Tue Nov 11 14:47:39 2025
    From Newsgroup: comp.os.vms

    Simon Clubley <clubley@remove_me.eisner.decus.org-earth.ufp> wrote:
    On 2025-11-07, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    In article <10eaaqr$2sqg0$1@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    On 2025-10-30, Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 10/30/2025 9:12 AM, Simon Clubley wrote:
    On 2025-10-30, gcalliet <gerard.calliet@pia-sofer.fr> wrote:
    It seems now, because the strategy used by VSI or its investor has been
    for ten years a strategy copied from strategies for legacy OSes (like
    z/os...), the option of a VMS revival as an alternate OS solution is
    almost dead.

    z/OS is responsible for keeping a good portion of today's world running.
    I would hardly call that a legacy OS.

    z/OS is still used for a lot of very important systems.

    But it is also an OS that companies are actively
    moving away from.


    Interesting. I can see how some people on the edges might be considering
    such a move, but at the very core of the z/OS world are companies that
    I thought such a move would be absolutely impossible to consider.

    What are they moving to, and how are they satisfying the extremely high
    constraints both on software and hardware availability, failure detection,
    and recovery that z/OS and its underlying hardware provides ?

    z/OS has a unique set of capabilities when it comes to the absolutely
    critical this _MUST_ continue working or the country/company dies area.

    I'm curious: what, in your view, are those capabilities?


    That's a good question. I am hard pressed to identify one single feature,
    but I can identify a range of features that, when combined, help to
    produce a solid, robust system for mission-critical computing.

    For example, I like the predictive failure analysis capabilities (and I wish VMS had something like that).

    I like the multiple levels of hardware failure detection and automatic recovery without system downtime.

    I like the way the hardware and z/OS and layered products software are tightly integrated into a coherent whole.

    I like the way the software was designed with a very tight single-minded focus on providing certain capabilities in highly demanding environments instead of in some undirected rambling evolution.

    I like the way the hardware and software have evolved, in a designed way,
    to address business needs, without becoming bloated (unlike modern software
    stacks). A lean system has many fewer failure points and fewer points of
    vulnerability than a bloated system.

    Sorry, your claim about a "designed way" looks out of place. z/OS is
    a descendant of OS/360. OS/360 attempted to support all possible
    uses and consequently put in a lot of complexity and bloat. It
    quickly turned out that actual needs were different, so the system
    evolved. Its designers realised that it was a rather bad fit for
    some use cases, so they concentrated on its traditional uses. But it
    still carries things which make no sense in modern times, but are
    there just to support old applications that expect old interfaces.

    In a modern system there is no need for "below the line" (or "below
    the bar"). IIUC IBM still pretends that disks have CKD organization.
    IBM did a lot of things differently from the rest of the industry and
    I am afraid that there is a lot of code to support old applications
    working in the IBM way.

    I like the whole CICS transaction functionality and failure recovery model.

    Likewise, to replace z/OS, any replacement hardware and software must also
    have the same unique capabilities that z/OS, and the hardware it runs on,
    has. What is the general ecosystem, at both software and hardware level,
    that these people are moving to ?

    I think a bigger issue is lock-in. We _know_ how to build
    performant, reliable, distributed systems. What we don't seem
    able to collectively do is migrate away from 50 years of history
    with proprietary technology. Mainframes work, they're reliable,
    and they're low-risk. It's dealing with the ISAM, CICS, VTAM,
    DB2, COBOL extensions, etc, etc, etc, that are slowing migration
    off of them because that's migrating to a fundamentally
    different model, which is both hard and high-risk.


    Question: are they low-risk because they were designed to do one thing
    and to do it very well in extremely demanding environments ?

    Are the replacements higher-risk because they are more of a generic infrastructure and the mission critical workloads need to be force-fitted into them ?

    No. Mainframes are low risk because change is risky. That is,
    if you wanted to port some modern software to z/OS, then there
    would be risk. Of course, mainframes have the advantage of long
    experience with "enterprise" data processing, but current
    Linux vendors also have a lot of experience. And there are
    kinds of processing that never were popular on mainframes,
    so actually the alternatives may offer more experience.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.os.vms on Tue Nov 11 15:23:29 2025
    From Newsgroup: comp.os.vms

    bill <bill.gunshannon@gmail.com> wrote:
    On 11/10/2025 9:12 AM, Simon Clubley wrote:


    Question: are they low-risk because they were designed to do one thing
    and to do it very well in extremely demanding environments ?

    Are the replacements higher-risk because they are more of a generic
    infrastructure and the mission critical workloads need to be force-fitted
    into them ?


    And here you finally hit the crux of the matter.
    People wonder why I am still a strong supporter of COBOL.
    The reason is simple. It was a language designed to do
    a particular task and it does it well. Now we have this
    desire to replace it with something generic. I feel this
    is a bad idea.

    Well, Cobol represents the practices of 1960s business data
    processing. At that time it was state of the art.
    But the state of the art changed. Cobol somewhat adapted,
    but it is slow to do so. So your claim of "does it well"
    does not look true, unless by "it" you mean
    "replicating Cobol data processing from the sixties".

    To expand a bit more, Cobol has an essentially unfixable problem
    with verbosity. Defining a function needs several lines of
    overhead code. Function calls are more verbose than in other
    languages. There are fixable problems, which however may
    appear when dealing with real Cobol code. In particular,
    Cobol supports old control structures. In a new program you
    can use the new control structures, but converting uses of the old
    control structures to the new ones needs effort, and it is likely
    that a bit more effort would be enough to convert the whole
    program to a different language.

    BTW: the VSI Cobol manual documents the old
    constructs in reasonable detail, but leaves the new features
    almost undocumented.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Tue Nov 11 10:50:57 2025
    From Newsgroup: comp.os.vms

    On 11/3/2025 8:31 AM, Simon Clubley wrote:
    What are they moving to, and how are they satisfying the extremely high constraints both on software and hardware availability, failure detection, and recovery that z/OS and its underlying hardware provides ?

    z/OS has a unique set of capabilities when it comes to the absolutely critical this _MUST_ continue working or the country/company dies area.

    Note that even though z/OS and mainframes generally have a
    good track record regarding availability, it is not
    a magic solution - they can also have problems.

    Banks having mainframe problems are rare but far from
    unheard of.

    UK Barclays January 2025.

    https://www.forbes.com/sites/barrycollins/2025/03/08/barclays-down-again-as-uk-banking-pain-continues/

    <quote>
    The company told MPs that the failure in January resulted in just over
    half of online payments failing. The failure was attributed to a "severe
    degradation" in the performance of the company's mainframe computer.
    </quote>

    Denmark Danske Bank March 2003.

    https://danskebank.com/news-and-insights/news-archive/press-releases/2003/pr03042003

    Short version:
    * during routine HW maintenance the disk system for DB2 lost
    power
    * the crash left data in an unexpected state, triggering
    several previously undetected software errors in DB2
    * as a result many of the bank's systems were unavailable
    for most of a week [Danske Bank is Denmark's biggest bank!]
    * and panic was close - the Danish central bank sent
    out an extra billion dollars in liquidity to cover
    any risk of bank runs [the above link does not tell that
    story, but it was elsewhere in the Danish IT press at
    the time]

    Arne



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Tue Nov 11 20:54:46 2025
    From Newsgroup: comp.os.vms

    On Tue, 11 Nov 2025 14:47:39 -0000 (UTC), Waldek Hebisch wrote:

    Mainfraimes are low risk because change is risky. That is, if you
    wanted to port some modern software to z/OS, then there would be
    risk.

    zSeries machines run Linux, too! Officially supported by IBM itself!
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Tue Nov 11 20:57:18 2025
    From Newsgroup: comp.os.vms

    On Tue, 11 Nov 2025 10:50:57 -0500, Arne Vajhøj wrote:

    Note that even though z/OS and mainframes generally have a good
    track recording regarding availability, then it is not a magic
    solution - they can also have problems.

    Mainframes were never designed for high availability. It was normal to
    run them 24/7, simply to try to get as much as possible out of them
    because they are/were so expensive to buy. But it was no big deal if
    they had to be taken down for, say, an hour a week for "preventive
    maintenance" or to switch OSes or whatever.

    Back in the 1980s, you had to reboot an IBM machine just to switch
    daylight saving on or off. Not sure if that's been fixed yet ...
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Tue Nov 11 20:59:53 2025
    From Newsgroup: comp.os.vms

    On Tue, 11 Nov 2025 15:23:29 -0000 (UTC), Waldek Hebisch wrote:

    Well, Cobol represents practices of 1960 business data processing.
    At that time it was state of the art. But state of the art changed.
    Cobol somewhat adapted but it slow to this.

    The example I like to mention is the rise of the SQL DBMS. These
    became very important for "business data processing" use in the 1980s.
    But the best way to interface to one of these is by dynamically
    generating SQL command strings. And guess what: dynamic string
    handling is something that was specifically left out of COBOL, because
    it was not seen as important for "business" use.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Tue Nov 11 18:56:53 2025
    From Newsgroup: comp.os.vms

    On 11/11/2025 3:57 PM, Lawrence D'Oliveiro wrote:
    On Tue, 11 Nov 2025 10:50:57 -0500, Arne Vajhøj wrote:
    Note that even though z/OS and mainframes generally have a good
    track recording regarding availability, then it is not a magic
    solution - they can also have problems.

    Mainframes were never designed for high availability. It was normal to
    run them 24/7, simply to try to get as much as possible out of them
    because they are/were so expensive to buy. But it was no big deal if
    they had to be taken down for, say, an hour a week for "preventive
    maintenance" or to switch OSes or whatever.

    24x7 vs 16x5 is not about HA - HA is about whether the system
    can continue to serve users in case part of a box or an entire
    box fails - 24x7 vs 16x5 is about architecture.

    Once upon a time it was common to shut down an application
    at night and run various batch jobs, do backups etc. - z/OS
    or VMS or Unix.

    Many of the old applications still work that way. And
    it is one of the reasons why nobody wants to do a 1:1
    conversion from Cobol or PL/I to Java or C# or whatever -
    the application needs to be rearchitected to work a different
    way.

    Arne





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Tue Nov 11 19:57:54 2025
    From Newsgroup: comp.os.vms

    On 11/11/2025 3:59 PM, Lawrence D'Oliveiro wrote:
    On Tue, 11 Nov 2025 15:23:29 -0000 (UTC), Waldek Hebisch wrote:
    Well, Cobol represents practices of 1960 business data processing.
    At that time it was state of the art. But state of the art changed.
    Cobol somewhat adapted but it slow to this.

    The example I like to mention is the rise of the SQL DBMS. These
    became very important for "business data processing" use in the 1980s.

    Yes.

    And the preferred languages were Cobol and PL/I.

    But the best way to interface to one of these is by dynamically
    generating SQL command strings.

    If you are writing a hobby program the math looks like:

    dynamic SQL strings : 2 minutes of work to write code

    the right way : 30 minutes of work to write code

    If you are writing a program for doing account operations in
    a bank expect:

    dynamic SQL strings : 2 minutes of work to write code + 60 minutes
    review time for each of 5 senior engineers

    the right way : 30 minutes of work to write code
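
    In JDBC terms the two styles look like this (table and column names are
    made up for illustration) - the concatenating version is the one that
    needs the five senior reviewers:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class QueryStyles {
        // Dynamic SQL string: the value becomes part of the SQL text
        // itself, so a crafted value can rewrite the query (injection).
        static ResultSet dynamic(Connection con, String f2val)
                throws Exception {
            Statement stmt = con.createStatement();
            return stmt.executeQuery(
                "SELECT f1, f2 FROM t1 WHERE f2 = '" + f2val + "'");
        }

        // The right way: the value is bound as data via a parameter
        // marker and never parsed as SQL.
        static ResultSet parameterized(Connection con, String f2val)
                throws Exception {
            PreparedStatement stmt = con.prepareStatement(
                "SELECT f1, f2 FROM t1 WHERE f2 = ?");
            stmt.setString(1, f2val);
            return stmt.executeQuery();
        }
    }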

    And guess what: dynamic string handling is something that was specifically
    left out of COBOL, because it was not seen as important for "business" use.

    Nonsense.

    Cobol does dynamic string handling just fine.

    Not as good as Java, Python, PHP and other newer languages.

    But better than Fortran, C and many other common languages
    back then.

    (and I believe we have told you so before)

    Arne


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Tue Nov 11 20:02:13 2025
    From Newsgroup: comp.os.vms

    On 11/11/2025 7:57 PM, Arne Vajhøj wrote:
    On 11/11/2025 3:59 PM, Lawrence D'Oliveiro wrote:
    On Tue, 11 Nov 2025 15:23:29 -0000 (UTC), Waldek Hebisch wrote:
    Well, Cobol represents practices of 1960 business data processing.
    At that time it was state of the art. But state of the art changed.
    Cobol somewhat adapted but it slow to this.

    The example I like to mention is the rise of the SQL DBMS. These
    became very important for "business data processing" use in the 1980s.

    And guess what: dynamic string
    handling is something that was specifically left out of COBOL, because
    it was not seen as important for "business" use.

    Nonsense.

    Cobol does dynamic string handling just fine.

    Not as good as Java, Python, PHP and other newer languages.

    But better than Fortran, C and many other common languages
    back then.

    (and I believe we have told you so before)

    Demo:

    $ type dynsql.eco
           IDENTIFICATION DIVISION.
           PROGRAM-ID. DYNSQL.

           ENVIRONMENT DIVISION.
           CONFIGURATION SECTION.
           SPECIAL-NAMES.
               ARGUMENT-VALUE IS COMMAND-LINE-ARGUMENT.
           DATA DIVISION.
           WORKING-STORAGE SECTION.
          * Host variables shared with the embedded SQL preprocessor.
           EXEC SQL INCLUDE SQLCA END-EXEC.
           EXEC SQL BEGIN DECLARE SECTION END-EXEC.
           01 CON PIC X(255).
           01 USR PIC X(255).
           01 PWD PIC X(255).
           01 SQLSTR PIC X(255).
           01 F1 PIC S9(9) BINARY.
           01 F2 PIC X(50).
           EXEC SQL END DECLARE SECTION END-EXEC.
           01 TEMP PIC 9(9) DISPLAY.
           01 F2VAL PIC X(50).

           PROCEDURE DIVISION.
           MAIN-PARAGRAPH.
               MOVE "" TO F2VAL
               ACCEPT F2VAL FROM COMMAND-LINE-ARGUMENT
               MOVE "test" TO CON
               MOVE "SYSADM" TO USR
               MOVE "hemmeligt" TO PWD
               EXEC SQL CONNECT TO :CON USER :USR USING :PWD END-EXEC
          * The command line argument is concatenated straight into the
          * SQL text - this STRING statement is the injection point.
               IF F2VAL = ""
                   MOVE "SELECT f1,f2 FROM t1" TO SQLSTR
               ELSE
                   STRING "SELECT f1,f2 FROM t1 WHERE f2='"
                          F2VAL
                          "'" DELIMITED BY SIZE INTO SQLSTR
               END-IF
               EXEC SQL PREPARE 'mystmt' FROM :SQLSTR END-EXEC
               EXEC SQL ALLOCATE 'mycurs' CURSOR FOR 'mystmt' END-EXEC
               EXEC SQL OPEN 'mycurs' END-EXEC
               MOVE 0 TO SQLCODE
               PERFORM UNTIL NOT SQLCODE = 0
                   EXEC SQL FETCH 'mycurs' INTO :f1, :f2 END-EXEC
                   IF SQLCODE = 0 THEN
                       MOVE F1 TO TEMP
                       DISPLAY TEMP " " F2
                   END-IF
               END-PERFORM
               EXEC SQL CLOSE 'mycurs' END-EXEC
               STOP RUN.
    $ esql/cobol dynsql

    Mimer SQL Embedded SQL Preprocessor Version 11.0.8E
    Copyright (C) Mimer Information Technology AB. All rights reserved.

    dynsql.eco

    $ cobol/ansi dynsql
    $ link dynsql + mimer$lib:mimer$sql/opt
    $ mcr []dynsql
    000000001 A
    000000002 BB
    000000003 CCC
    $ mcr []dynsql BB
    000000002 BB
    $ mcr []dynsql "BB' OR 'X'='X"
    000000001 A
    000000002 BB
    000000003 CCC

    Voila. A Cobol program using embedded SQL vulnerable to
    SQL injection. That is extremely rare!!

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris Townley@news@cct-net.co.uk to comp.os.vms on Wed Nov 12 01:50:05 2025
    From Newsgroup: comp.os.vms

    On 12/11/2025 00:57, Arne Vajhøj wrote:
    On 11/11/2025 3:59 PM, Lawrence D'Oliveiro wrote:
    On Tue, 11 Nov 2025 15:23:29 -0000 (UTC), Waldek Hebisch wrote:
    Well, Cobol represents practices of 1960 business data processing.
    At that time it was state of the art. But state of the art changed.
    Cobol somewhat adapted but it slow to this.

    The example I like to mention is the rise of the SQL DBMS. These
    became very important for "business data processing" use in the 1980s.

    Yes.

    And the preferred languages was Cobol and PL/I.

    But the best way to interface to one of these is by dynamically
    generating SQL command strings.

    If you are writing a hobby program the math looks like:

    dynamic SQL strings : 2 minutes of work to write code

    the right way : 30 minutes of work to write code

    If you are writing a program for doing account operations in
    a bank expect:

    dynamic SQL strings : 2 minutes of work to write code + 60 minutes
    review time for each of 5 senior engineers

    the right way : 30 minutes of work to write code

    And guess what: dynamic string
    handling is something that was specifically left out of COBOL, because
    it was not seen as important for "business" use.

    Nonsense.

    Cobol does dynamic string handling just fine.

    Not as good as Java, Python, PHP and other newer languages.

    But better than Fortran, C and many other common languages
    back then.

    (and I believe we have told you so before)

    Basic does it fairly well
    --
    Chris
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Tue Nov 11 21:00:46 2025
    From Newsgroup: comp.os.vms

    On 11/11/2025 8:50 PM, Chris Townley wrote:
    On 12/11/2025 00:57, Arne Vajhøj wrote:
    On 11/11/2025 3:59 PM, Lawrence D'Oliveiro wrote:
    And guess what: dynamic string
    handling is something that was specifically left out of COBOL, because
    it was not seen as important for "business" use.

    Nonsense.

    Cobol does dynamic string handling just fine.

    Not as good as Java, Python, PHP and other newer languages.

    But better than Fortran, C and many other common languages
    back then.

    (and I believe we have told you so before)

    Basic does it fairly well

    Basic and Pascal must be the two most modern languages
    when it comes to strings among the traditional VMS
    languages.

    The absolute worst language for strings must be
    Fortran 66.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Wed Nov 12 03:48:27 2025
    From Newsgroup: comp.os.vms

    On Tue, 11 Nov 2025 19:57:54 -0500, Arne Vajhøj wrote:

    Cobol does dynamic string handling just fine.

    Try using it to construct an ad-hoc SQL query based on a set of fields
    that a user might or might not fill in (i.e. omitting the ones left
    blank), and you'll see what I mean.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Wed Nov 12 03:51:23 2025
    From Newsgroup: comp.os.vms

    On Tue, 11 Nov 2025 20:02:13 -0500, Arne Vajhøj wrote:

    Voila. A Cobol program using embedded SQL vulnerable to SQL injection.

    And it took you so much code to achieve that result, too. You managed to
    make PHP look concise!
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Wed Nov 12 03:56:05 2025
    From Newsgroup: comp.os.vms

    On Tue, 11 Nov 2025 18:56:53 -0500, Arne Vajhøj wrote:

    HA is about whether the system can continue to serve users in case part
    of a box or an entire box fail - 24x7 vs 16x5 is about architecture.

    High availability is measured in "nines" -- e.g. five nines, six nines
    ... even seven nines.
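
    For scale, the arithmetic - purely illustrative, any language would do:

    public class Nines {
        public static void main(String[] args) {
            double minutesPerYear = 365.25 * 24 * 60;
            for (int nines = 3; nines <= 7; nines++) {
                double unavailability = Math.pow(10, -nines);
                System.out.printf("%d nines: %.2f minutes/year downtime%n",
                    nines, minutesPerYear * unavailability);
            }
        }
    }

    Five nines allows about 5.3 minutes of downtime a year; seven nines,
    about 3 seconds.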

    How do big enterprises (like Google) achieve that? By not using
    mainframes. They set up data centres full of off-the-shelf PC hardware --
    one article I remember from over a decade ago said that Google, at that
    time, had 460,000 servers.

    All the hardware is obtained as cheaply as possible, except one component:
    the power supply. They buy quality for that, for power-efficiency reasons.
    As for the rest, it doesn't matter if a box falls over every minute, or a
    hard drive crashes every few minutes; they have higher-level redundancy
    and recovery procedures that can routinely recover from all those
    failures, without the users ever noticing.

    No mainframe can match that.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Wed Nov 12 13:43:58 2025
    From Newsgroup: comp.os.vms

    In article <10f10gl$16kvh$5@dont-email.me>,
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Tue, 11 Nov 2025 18:56:53 -0500, Arne Vajhøj wrote:

    HA is about whether the system can continue to serve users in case part
    of a box or an entire box fail - 24x7 vs 16x5 is about architecture.

    High availability is measured in "nines" -- e.g. five nines, six nines ...
    even seven nines.

    I don't normally reply to the troll, but, in this case multiple
    factual misstatements deserve to be corrected.

    How do big enterprises (like Google) achieve that? By not using
    mainframes.

    This has absolutely nothing to do with it. Google used COTS x86
    gear because of cost, period. The software was then architected
    to make this work well, and reliably.

    Google achieves high availability because its internal systems
    have been architected that way. But doing so is incredibly
    expensive, in multiple dimensions, and the solutions are unique
    to Google.

    They set up data centres full of off-the-shelf PC hardware --

    This has not been true for two decades. Google designs and
    manufactures its own computers for its datacenters. They are
    nowhere close to COTS systems anymore.

    one article I remember from over a decade ago said that Google, at that
    time, had 460,000 servers.

    Google has O(10^7) CPUs in O(10^6) computers spread across
    O(10^2) data centers, distributed globally. There are multiple
    layers of redundancy and load balancing spreading traffic around
    and routing around problems (which pop up regularly at that
    scale). It also has automated monitoring, some automated
    recovery, and a small army of SREs and data center
    technicians keeping everything running. It's not magic.

    All the hardware is obtained as cheaply as possible,

    This has not been true for 15+ years. There was a time, early
    in Google's life, when this was true, but those days are long
    gone. Google has a highly developed, _highly_ skilled, internal
    platforms team that designs and builds its own hardware at
    nearly all levels of the stack. Very little is off the shelf
    anymore, and none of it is "cheap".

    except one component:
    the power supply. They buy quality for that, for power-efficiency reasons.
    As for the rest, it doesn't matter if a box falls over every minute, or a
    hard drive crashes every few minutes; they have higher-level redundancy
    and recovery procedures that can routinely recover from all those
    failures, without the users ever noticing.

    This is true, but also nearly unique to the workloads Google
    puts on its systems.

    No mainframe can match that.

    Totally an apples and oranges comparison.

    Google doesn't run workloads that look anything at all like what
    traditionally runs on a mainframe.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Wed Nov 12 17:04:17 2025
    From Newsgroup: comp.os.vms

    In article <10esrru$1qu6$1@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    On 2025-11-07, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    In article <10eaaqr$2sqg0$1@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    On 2025-10-30, Arne Vajhoj <arne@vajhoej.dk> wrote:
    On 10/30/2025 9:12 AM, Simon Clubley wrote:
    On 2025-10-30, gcalliet <gerard.calliet@pia-sofer.fr> wrote:
    It seems now, because the strategy used by VSI or its investor has been
    for ten years a strategy copied from strategies for legacy OSes (like
    z/os...), the option of a VMS revival as an alternate OS solution is
    almost dead.

    z/OS is responsible for keeping a good portion of today's world running.
    I would hardly call that a legacy OS.

    z/OS is still used for a lot of very important systems.

    But it is also an OS that companies are actively
    moving away from.


    Interesting. I can see how some people on the edges might be considering
    such a move, but at the very core of the z/OS world are companies that
    I thought such a move would be absolutely impossible to consider.

    What are they moving to, and how are they satisfying the extremely high
    constraints both on software and hardware availability, failure detection,
    and recovery that z/OS and its underlying hardware provides ?

    z/OS has a unique set of capabilities when it comes to the absolutely
    critical this _MUST_ continue working or the country/company dies area.

    I'm curious: what, in your view, are those capabilities?

    That's a good question. I am hard pressed to identify one single feature,
    but I can identify a range of features that, when combined, help to
    produce a solid, robust system for mission-critical computing.

    For example, I like the predictive failure analysis capabilities (and I
    wish VMS had something like that).

    This is certainly an area where other systems lag behind, but as
    x86 systems (for example) increase support for RAS and MCA/MCAX,
    they are rapidly catching up. The SoCs and interconnects have
    the capability at the hardware level, but the software is not
    there (Linux in particular was lagging the last time I looked
    closely).

    I like the multiple levels of hardware failure detection and automatic
    recovery without system downtime.

    Fair, but this is not unique to IBM or even mainframes; most
    server-grade systems support auto offlining storage devices and
    hotplug; some also support this for CPUs and/or DRAM.

    However, I would argue that this speaks to a system view that
    was becoming obsolete, but is (perhaps ironically) coming back
    into fashion.

    A couple of decades ago, the realization was that, for certain
    workloads, you were better off providing availability by
    horizontal scaling and by building availability in software at
    the application level. If a machine fell over and took out a
    job, oh well; just restart it on another node. No need for the
    complexity of handling that on a single node.

    Google, for instance, did this somewhat famously for web search,
    where regularly indexing (essentially) the entire world wide web
    was required. The MapReduce framework put the self-healing into
    the job/sharding layer: if a shard was being slow, MR just
    restarted it. This ended up pervading the software stack, to
    the point that regular maintenance jobs (for instance, to update
    software) would just restart the machine regardless of what jobs
    were running on it; the borg scheduler would just spin them up
    elsewhere, and whatever framework was being used by them would
    coordinate things appropriately.
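
    A toy sketch of that restart-the-slow-shard idea - invented names, and
    nothing like Google's actual implementation: give the first attempt a
    deadline, and if it blows the deadline, cancel it and run the same task
    again.

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    public class ShardRetry {
        static <T> T runWithRetry(ExecutorService pool, Callable<T> shard,
                                  long deadlineSeconds) throws Exception {
            Future<T> attempt = pool.submit(shard);
            try {
                return attempt.get(deadlineSeconds, TimeUnit.SECONDS);
            } catch (TimeoutException slow) {
                attempt.cancel(true);             // give up on the slow worker
                return pool.submit(shard).get();  // rerun "elsewhere"
            }
        }

        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(2);
            System.out.println(runWithRetry(pool, () -> "shard result", 5));
            pool.shutdown();
        }
    }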

    But note that web search is an embarrassingly parallel problem,
    which is amenable to such things. Many other workloads are not;
    this really broke down for e.g. GCP, where you can't just knock
    over a customer VM and restart it somewhere else with no
    coordination.

    Furthermore, as core counts are increasing significantly, now
    regularly exceeding 255 on high end parts, this is becoming more
    expensive. With so many different things running on a single
    node, "just reboot" as a means of fixing things doesn't scale.

    I like the way the hardware and z/OS and layered products software are
    tightly integrated into a coherent whole.

    I like the way the software was designed with a very tight single-minded
    focus on providing certain capabilities in highly demanding environments
    instead of in some undirected rambling evolution.

    I like the way the hardware and software have evolved, in a designed way,
    to address business needs, without becoming bloated (unlike modern software
    stacks). A lean system has many fewer failure points and fewer points of
    vulnerability than a bloated system.

    I dunno, I always felt that mainframe software was bloated and
    baroque. VTAM, ISAM, JCL...ick.

    The hardware/software co-design advantage is very real however.
    That's one reason we do hardware/software codesign at work.

    I like the whole CICS transaction functionality and failure recovery model.

    As has been pointed out, this exists outside of the CICS system
    as well. XA is pretty well standard at this point.
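
    Since XA keeps coming up: the heart of it is two-phase commit across
    independent resource managers. A toy sketch of the protocol shape in
    Python (Participant and its methods are invented for illustration;
    the real XA interface is a C API between a transaction manager and
    resource managers):

        # Phase 1: every participant stages the work and votes.
        # Phase 2: commit everywhere only if every vote was yes.
        class Participant:
            def __init__(self, name):
                self.name = name
                self.staged = None

            def prepare(self, work):      # phase 1: stage and vote
                self.staged = work
                return True               # vote yes

            def commit(self):             # phase 2a: make it durable
                print(self.name, "committed:", self.staged)

            def rollback(self):           # phase 2b: undo the staging
                self.staged = None
                print(self.name, "rolled back")

        def two_phase_commit(participants, work):
            if all(p.prepare(work) for p in participants):
                for p in participants:
                    p.commit()
            else:
                for p in participants:
                    p.rollback()

        two_phase_commit([Participant("db"), Participant("queue")], "debit $10")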

    Likewise, to replace z/OS, any replacement hardware and software must also have the same unique capabilities that z/OS, and the hardware it runs on, has. What is the general ecosystem, at both software and hardware level, that these people are moving to?

    I think a bigger issue is lock-in. We _know_ how to build
    performant, reliable, distributed systems. What we don't seem
    able to collectively do is migrate away from 50 years of history
    with proprietary technology. Mainframes work, they're reliable,
    and they're low-risk. It's dealing with the ISAM, CICS, VTAM,
    DB2, COBOL extensions, etc, etc, etc, that are slowing migration
    off of them because that's migrating to a fundamentally
    different model, which is both hard and high-risk.

    Question: are they low-risk because they were designed to do one thing
    and to do it very well in extremely demanding environments?

    Are the replacements higher-risk because they are more of a generic infrastructure and the mission-critical workloads need to be force-fitted into them?

    I think it's low-risk because those applications have been
    running in production for many years, in some cases, decades;
    they're well-tested and debugged, and the rate of change is very
    low.

    The alternatives are higher-risk because it's not just the
    underlying OS or hardware that's changing, but the entire
    application model.

    It's my sense that so many migrate-off-the-mainframe projects
    fail not because the mainframe is so singularly unmatched, but
    because those projects are world-shifts, in which _everything_
    changes: the hardware and host OS, but also the application code
    itself, the user interface, database, etc. I suspect that if it
    were feasible to just lift the code and data off of the
    mainframe and plop it onto something else, most would work just
    fine. But that's essentially never what happens. The mainframe
    is, at this point, so alien in the larger scheme of things that
    it's impossible to "just" move with a recompile.

    OTOH, I suspect that for _many_ projects if you were running on,
    say, Solaris, it's pretty straight-forward to recompile for
    Linux and run with more or less the same stack. Of course there
    is a lot of heavy lifting one must do in terms of testing and
    qualification, but that is qualitatively different than having
    no choice other than rewriting everything from the ground up.

    BTW, what is the general replacement for CICS transaction processing and
    how does the replacement functionality compare to CICS?

    This is outside of my wheelhouse, but back in the day things
    like BEA Tuxedo could do this and more.

    As for the cloud, the number of organizations moving back
    on-prem for very good reasons shouldn't be discounted.

    Yes, and I hope the latest batch of critical system movers do not
    repeat those same mistakes.

    I'm not sure what mistakes you're referring to, but let's hope
    that system maintainers make fewer mistakes generally. :-D

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Wed Nov 12 14:54:13 2025
    From Newsgroup: comp.os.vms

    On 11/11/2025 10:56 PM, Lawrence D'Oliveiro wrote:
    On Tue, 11 Nov 2025 18:56:53 -0500, Arne Vajhøj wrote:
    HA is about whether the system can continue to serve users in case part
    of a box or an entire box fails - 24x7 vs 16x5 is about architecture.

    High availability is measured in "nines" -- e.g. five nines, six nines ...
    even seven nines.

    How do big enterprises (like Google) achieve that? By not using
    mainframes. They set up data centres full of off-the-shelf PC hardware --
    one article I remember from over a decade ago said that Google, at that
    time, had 460,000 servers.

    All the hardware is obtained as cheaply as possible, except one component: the power supply. They buy quality for that, for power-efficiency reasons.
    As for the rest, it doesn't matter if a box falls over every minute, or a hard drive crashes every few minutes; they have higher-level redundancy
    and recovery procedures that can routinely recover from all those
    failures, without the users ever noticing.

    No mainframe can match that.

    Of course mainframes can match that.

    The fundamental mechanism is the same for mainframes and
    for what we may call modern distributed environments.

    You need N systems running to handle load. There is
    a probability Pd of one system becoming unavailable.
    You want Pr probability of handling the load.

    You can calculate how many systems M you need to
    achieve that.

    N is smaller, Pd is smaller and the cost of a
    system is much bigger for mainframes than for
    x86-64 servers.

    But the formula is the same. You can do the math.

    IBM mainframes use OS clustering (like VMS) called
    SysPlex. The modern distributed environments use
    pure application level clustering. But that is
    the "how" not the "what".

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Wed Nov 12 15:12:40 2025
    From Newsgroup: comp.os.vms

    On 11/11/2025 10:48 PM, Lawrence D'Oliveiro wrote:
    On Tue, 11 Nov 2025 19:57:54 -0500, Arne Vajhøj wrote:
    Cobol does dynamic string handling just fine.

    Try using it to construct an ad-hoc SQL query based on a set of fields
    that a user might or might not fill in (i.e. omitting the ones left
    blank), and you'll see what I mean.

    Somehow I think you are missing something very fundamental
    about programming.

    To build dynamic SQL strings you need support for a few
    basic features:
    * loops
    * conditional blocks
    * string concatenation

    Cobol does support that.

    That support does not depend on whether there is 1 or 100
    optional values.

    100 just require more lines than 1.

    I showed an example with 1.

    Doing 100 would not really show more.

    (strictly speaking 100 would show a loop, which 1 did
    not, but believe me Cobol does loops)
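
    For concreteness, here is the pattern under discussion in outline
    (sketched in Python rather than Cobol purely for brevity; the
    customers table and its columns are hypothetical, and the filter
    values go through placeholders rather than being pasted into the
    string):

        # Build a query from whichever filter fields the user filled in.
        # Column names come from program code, never from user input;
        # user-supplied values are bound via placeholders.
        def build_query(filters):
            sql = "SELECT id, name FROM customers"
            clauses, params = [], []
            for column, value in filters.items():
                if value is not None:          # skip fields left blank
                    clauses.append(f"{column} = ?")
                    params.append(value)
            if clauses:
                sql += " WHERE " + " AND ".join(clauses)
            return sql, params

        sql, params = build_query({"city": "Boston", "status": None})
        # -> "SELECT id, name FROM customers WHERE city = ?", ["Boston"]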

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Wed Nov 12 21:01:20 2025
    From Newsgroup: comp.os.vms

    On Wed, 12 Nov 2025 15:12:40 -0500, Arne Vajhøj wrote:

    To build dynamic SQL strings you need support for a few
    basic features:
    * loops
    * conditional blocks
    * string concatenation

    Cobol does support that.

    But not arbitrary-length dynamic strings.

    And not functional constructs that let you put the loops and
    conditionals inside the string-construction expression.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Wed Nov 12 21:02:39 2025
    From Newsgroup: comp.os.vms

    On Wed, 12 Nov 2025 14:54:13 -0500, Arne Vajhøj wrote:

    On 11/11/2025 10:56 PM, Lawrence D'Oliveiro wrote:

    No mainframe can match that.

    Of course mainframes can match that.

    Nobody can afford to buy enough mainframes to match that.

    IBM mainframes use OS clustering (like VMS) called
    SysPlex.

    Do either of those scale to 460,000 nodes?

    No, they don't.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Wed Nov 12 16:06:56 2025
    From Newsgroup: comp.os.vms

    On 11/12/2025 4:01 PM, Lawrence D'Oliveiro wrote:
    On Wed, 12 Nov 2025 15:12:40 -0500, Arne Vajhøj wrote:
    To build dynamic SQL strings you need support for a few
    basic features:
    * loops
    * conditional blocks
    * string concatenation

    Cobol does support that.

    But not arbitrary-length dynamic strings.

    And not functional constructs that let you put the loops and
    conditionals inside the string-construction expression.

    True.

    But that does not impact whether you can do it in Cobol.

    It just impacts how many lines of code you need to do it.

    Arne



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Wed Nov 12 16:10:39 2025
    From Newsgroup: comp.os.vms

    On 11/12/2025 4:02 PM, Lawrence D'Oliveiro wrote:
    On Wed, 12 Nov 2025 14:54:13 -0500, Arne Vajhøj wrote:
    On 11/11/2025 10:56 PM, Lawrence D'Oliveiro wrote:
    No mainframe can match that.

    Of course mainframes can match that.

    Nobody can afford to buy enough mainframes to match that.

    IBM mainframes use OS clustering (like VMS) called
    SysPlex.

    Do either of those scale to 460,000 nodes?

    No, they don't.

    I believe the topic was whether mainframes can achieve
    the availability - the required number of nines. They
    can.

    Whether mainframes can do web scale like Google Search
    and Facebook is a totally different question. I don't
    think they can - they are not designed for that level
    of scalability.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Wade@g4ugm@dave.invalid to comp.os.vms on Wed Nov 12 21:43:40 2025
    From Newsgroup: comp.os.vms

    On 12/11/2025 21:02, Lawrence D'Oliveiro wrote:
    On Wed, 12 Nov 2025 14:54:13 -0500, Arne Vajhøj wrote:

    On 11/11/2025 10:56 PM, Lawrence D'Oliveiro wrote:

    No mainframe can match that.

    Of course mainframes can match that.

    Nobody can afford to buy enough mainframes to match that.


    They are competitively priced, provided you only run Linux...
    ... and a hypervisor.

    IBM mainframes use OS clustering (like VMS) called
    SysPlex.

    Do either of those scale to 460,000 nodes?

    No, they don't.

    Why won't it scale to 460,000 nodes? Why would you need that many nodes,
    well, unless you are Google?

    .. if you need that many nodes you could borrow a SpiNNaker machine from Manchester uni...

    https://www.scieng.manchester.ac.uk/tomorrowlabs/spinnaker/

    https://en.wikipedia.org/wiki/SpiNNaker

    .. modern "mainframes" are not "mainframes" in the traditional sense,
    they are virtual clusters, similar in technology to Intel clusters, so
    you get the same scalability over the same underlying connectivity, i.e.
    fibre, as you get with Intel clusters.

    They have some innovative features in the areas of NUMA, cache management
    and instruction sets optimised for the execution of "C" code.


    Dave

    p.s. No one should assume the world stands still. A virtual Intel/X64
    cluster has nothing in common with a PC from the 1990s. A current IBM mainframe has little in common with an S/360 from the 1960s EXCEPT that the
    modern mainframe will run user-mode 24-bit code from the start of time.

    A modern Wintel cluster WON'T run 16-bit DOS or Windows code except
    under emulation.

    pps I know, don't feed the troll



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Wed Nov 12 16:57:38 2025
    From Newsgroup: comp.os.vms

    On 11/12/2025 4:43 PM, David Wade wrote:
    On 12/11/2025 21:02, Lawrence D'Oliveiro wrote:
    On Wed, 12 Nov 2025 14:54:13 -0500, Arne Vajhøj wrote:
    IBM mainframes use OS clustering (like VMS) called
    SysPlex.

    Do either of those scale to 460,000 nodes?

    No, they don't.

    Why won't it scale to 460,000 nodes?

    Maybe because IBM only supports SysPlex up to 32 nodes.

    :-)

    (VMS does up to 96!!)

    Why would you need that many nodes,
    well, unless you are Google?

    Nobody needs that many nodes for what mainframes are actually
    used for.

    There are companies that need web scale. Also other companies
    than Google.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Thu Nov 13 02:45:19 2025
    From Newsgroup: comp.os.vms

    On Wed, 12 Nov 2025 21:43:40 +0000, David Wade wrote:

    Why won't it scale to 460,000 nodes?

    Because a cluster on the order of tens of machines (like your SysPlex
    and VMScluster) can depend on algorithms with polynomial complexity that would no longer be practicable when you have hundreds of thousands of
    nodes.

    Why would you need that many nodes, well, unless you are Google?

    All the hyperscalers are running clusters of that sort of size.

    And not just them. Supercomputers are now built out of millions of nodes,
    with the added twist of having a high-speed interconnect.

    Let's see you build a SysPlex or VMScluster on that sort of scale ...

    p.s. No one should assume the world stands still. A virtual Intel/X64
    cluster has nothing in common with a PC from the 1990s. A current IBM mainframe has little in common with an S/360 from the 1960s EXCEPT that the
    modern mainframe will run user-mode 24-bit code from the start of time.

    Did you know that when Debian boots on an IBM mainframe, it has to pretend it's getting punched cards from a card reader?

    "World doesn't stand still" and "little in common with the 1960s" my bum ...
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Wade@g4ugm@dave.invalid to comp.os.vms on Thu Nov 13 10:36:17 2025
    From Newsgroup: comp.os.vms

    On 13/11/2025 02:45, Lawrence D'Oliveiro wrote:
    On Wed, 12 Nov 2025 21:43:40 +0000, David Wade wrote:

    Why won't it scale to 460,000 nodes?

    Because a cluster on the order of tens of machines (like your SysPlex
    and VMScluster) can depend on algorithms with polynomial complexity that would no longer be practicable when you have hundreds of thousands of
    nodes.

    Why would you need that many nodes, well, unless you are Google?

    All the hyperscalers are running clusters of that sort of size.


    Are they really tightly coupled clusters, or load-balanced front ends...


    And not just them. Supercomputers are now built out of millions of nodes, with the added twist of having a high-speed interconnect.

    Let's see you build a SysPlex or VMScluster on that sort of scale ...


    Very specialist hardware...

    p.s. No one should assume the world stands still. A virtual Intel/X64
    cluster has nothing in common with a PC from the 1990s. A current IBM
    mainframe has little in common with an S/360 from the 1960s EXCEPT that the
    modern mainframe will run user-mode 24-bit code from the start of time.

    Did you know that when Debian boots on an IBM mainframe, it has to pretend it's getting punched cards from a card reader?


    It does not "have to" pretend its cards, its just convenient to do so.
    How is this different from a VMWare cluster having to pretend its
    booting from a CD?

    I guess it is a bit of a strange concept, booting straight into a 64-bit
    OS with GB of real storage from fixed length 80-byte records...


    "World doesn't stand still" and "little in common with the 1960s" my
    bum ...

    There is a world of difference between "backwards compatibility" and "standing still"..

    Dave
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Simon Clubley@clubley@remove_me.eisner.decus.org-Earth.UFP to comp.os.vms on Thu Nov 13 13:42:37 2025
    From Newsgroup: comp.os.vms

    On 2025-11-12, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    In article <10esrru$1qu6$1@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    On 2025-11-07, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    As for the cloud, the number of organizations moving back
    on-prem for very good reasons shouldn't be discounted.

    Yes, and I hope the latest batch of critical system movers do not
    repeat those same mistakes.

    I'm not sure what mistakes you're referring to, but let's hope
    that system maintainers make fewer mistakes generally. :-D


    I was referring to the mistake of getting rid of your local systems,
    and local systems knowledge, in favour of moving everything into the
    public clouds and outsourcing your local systems knowledge and
    development to third party vendors.

    This works for some people, but not for others, and there appears to have
    been quite a drive by senior management in general of inappropriate
    movement away from local control and knowledge so that it "becomes someone else's problem".

    The problem is that it isn't someone else's problem, it's still their
    problem, as more than a few people have found out the hard way, promptly followed by spending more money to move things back in house again.

    Simon.
    --
    Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
    Walking destinations on a map are further away than they appear.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Thu Nov 13 16:13:06 2025
    From Newsgroup: comp.os.vms

    On 11/13/2025 5:36 AM, David Wade wrote:
    On 13/11/2025 02:45, Lawrence D'Oliveiro wrote:
    On Wed, 12 Nov 2025 21:43:40 +0000, David Wade wrote:
    Why won't it scale to 460,000 nodes?

    Because a cluster on the order of tens of machines (like your SysPlex
    and VMScluster) can depend on algorithms with polynomial complexity that
    would no longer be practicable when you have hundreds of thousands of
    nodes.

    Why would you need that many nodes, well unless you are google?

    All the hyperscalers are running clusters of that sort of size.

    Are they really tightly couple clusters, or load balanced front ends...

    Note that the hundreds of thousands of nodes form many clusters,
    not one cluster.

    But none of the clusters will be OS clusters like z/OS
    SysPlex or VMS cluster. Some form of application cluster.

    Companies of that size are all unique, but let us invent
    a hypothetical company SuperBigBiz.

    Transaction side:

    customers
    |
    v
    load balancer with sticky sessions
    |
    v
    2000 node.js instances stateless + 2000 readonly copies of static content
    |
    v
    load balancer
    |
    v
    5000 Java SpringBoot micro-service instances stateless
    smart client libraries
    |
    v
    100 sharded Redis instances + 1000 sharded PostgreSQL instances

    Analytical side:

    Python loaders running on all nodes
    smart client library
    |
    v
    10 sharded Kafka instances
    ^
    |
    100 Python loader instances
    smart client library
    |
    v
    5000 sharded Cassandra instances
    ^
    |
    smart client library
    1000 Spark instances splitting work
    ^
    |
    smart client library
    1 PC querying data

    That gives 14210 instances (VM or container).

    Much smaller than Google, Facebook etc., but still bigger than
    what you would do with z/OS or VMS.

    Also most large companies have multiple different transactional flows.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Fri Nov 14 05:47:09 2025
    From Newsgroup: comp.os.vms

    On Wed, 12 Nov 2025 16:10:39 -0500, Arne Vajhøj wrote:

    On 11/12/2025 4:02 PM, Lawrence D'Oliveiro wrote:

    On Wed, 12 Nov 2025 14:54:13 -0500, Arne Vajhøj wrote:

    On 11/11/2025 10:56 PM, Lawrence D'Oliveiro wrote:

    No mainframe can match that.

    Of course mainframes can match that.

    Nobody can afford to buy enough mainframes to match that.

    IBM mainframes use OS clustering (like VMS) called SysPlex.

    Do either of those scale to 460,000 nodes?

    No, they don't.

    I believe the topic was whether mainframes can achieve the availability
    - the required number of nines. They can.

    No they can't. Mainframes were never designed for high availability.

    How many nines does IBM offer?

    Hint: look at this intro from IBM itself <https://www.ibm.com/think/topics/high-availability>. Do they mention
    their own mainframes? No. Do they mention cloud and Linux companies? Yes.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Fri Nov 14 16:49:49 2025
    From Newsgroup: comp.os.vms

    On 11/14/2025 12:47 AM, Lawrence D'Oliveiro wrote:
    On Wed, 12 Nov 2025 16:10:39 -0500, Arne Vajhøj wrote:
    I believe the topic was whether mainframes can achieve the availability
    - the required number of nines. They can.

    No they can't. Mainframes were never designed for high availability.

    How many nines does IBM offer?

    Hint: look at this intro from IBM itself <https://www.ibm.com/think/topics/high-availability>. Do they mention
    their own mainframes? No. Do they mention cloud and Linux companies? Yes.

    Better hint - their page about z resiliency:

    https://www.ibm.com/products/z/resiliency

    <quote>
    For clients running z/OS v3.1 or higher with a configured high
    availability IBM software stack on IBM z16 or IBM z17, users can expect
    up to 99.999999% availability or 315.58 milliseconds of downtime per
    year when using a GDPS 4.7 Continuous Availability (CA) configuration
    and workloads.
    </quote>

    That is a lot of nines.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Fri Nov 14 16:58:44 2025
    From Newsgroup: comp.os.vms

    On 11/12/2025 2:54 PM, Arne Vajhøj wrote:
    On 11/11/2025 10:56 PM, Lawrence D'Oliveiro wrote:
    As for the rest, it doesn't matter if a box falls over every minute, or a hard drive crashes every few minutes; they have higher-level redundancy
    and recovery procedures that can routinely recover from all those
    failures, without the users ever noticing.

    No mainframe can match that.

    Of course mainframes can match that.

    The fundamental mechanism is the same for mainframes and
    for what we may call modern distributed environments.

    You need N systems running to handle load. There is
    a probability Pd of one system becoming unavailable.
    You want Pr probability of handling the load.

    You can calculate how many systems M you need to
    achieve that.

    N is smaller, Pd is smaller and the cost of a
    system is much bigger for mainframes than for
    x86-64 servers.

    But the formula is the same. You can do the math.

    If we take the simple case of N = 1 then:

    M = ceil(log(1 - Pr) / log(Pd))

    systems make it happen.

    If Pr = 0.99999 then:

    Pd = 0.1 => M = 5
    Pd = 0.01 => M = 3
    Pd = 0.001 => M = 2
    Pd = 0.0001 => M = 2
    Pd = 0.00001 => M = 1
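
    The calculation is small enough to sketch directly in Python; it
    reproduces the table above under the stated assumption (independent
    failures, N = 1):

        # M = number of systems needed so that at least one is up
        # with probability Pr, given per-system unavailability Pd.
        from math import ceil, log

        def systems_needed(pr, pd):
            return ceil(log(1 - pr) / log(pd))

        for pd in (0.1, 0.01, 0.001, 0.0001, 0.00001):
            print(f"Pd = {pd} => M = {systems_needed(0.99999, pd)}")
        # prints M = 5, 3, 2, 2, 1 -- matching the table above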

    Arne


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Fri Nov 14 23:35:46 2025
    From Newsgroup: comp.os.vms

    On Wed, 12 Nov 2025 16:06:56 -0500, Arne Vajhøj wrote:

    On 11/12/2025 4:01 PM, Lawrence D'Oliveiro wrote:

    On Wed, 12 Nov 2025 15:12:40 -0500, Arne Vajhøj wrote:

    To build dynamic SQL strings you need support for a few basic
    features:
    * loops
    * conditional blocks
    * string concatenation

    Cobol does support that.

    But not arbitrary-length dynamic strings.

    And not functional constructs that let you put the loops and
    conditionals inside the string-construction expression.

    True.

    But that does not impact whether you can do it in Cobol.

    It just impacts how many lines of code you need to do it.

    More code means more work to write and maintain, and more chance for bugs
    to get in.

    Remember, this stuff is already a well-known source of security vulnerabilities. The last thing you need is more maintenance headaches.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Fri Nov 14 23:39:25 2025
    From Newsgroup: comp.os.vms

    On Thu, 13 Nov 2025 10:36:17 +0000, David Wade wrote:

    On 13/11/2025 02:45, Lawrence DrCOOliveiro wrote:

    Did you know that when Debian boots on an IBM mainframe, it has to
    pretend itrCOs getting punched cards from a card reader?

    It does not "have to" pretend its cards, its just convenient to do so.
    How is this different from a VMWare cluster having to pretend its
    booting from a CD?

    DonrCOt know, donrCOt care. I donrCOt use that proprietary Broadcom crap.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.os.vms on Sat Nov 15 00:24:04 2025
    From Newsgroup: comp.os.vms

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Wed, 12 Nov 2025 16:06:56 -0500, Arne Vajhøj wrote:

    On 11/12/2025 4:01 PM, Lawrence D'Oliveiro wrote:

    On Wed, 12 Nov 2025 15:12:40 -0500, Arne Vajhøj wrote:

    To build dynamic SQL strings you need support for a few basic
    features:
    * loops
    * conditional blocks
    * string concatenation

    Cobol does support that.

    But not arbitrary-length dynamic strings.

    And not functional constructs that let you put the loops and
    conditionals inside the string-construction expression.

    True.

    But that does not impact whether you can do it in Cobol.

    It just impacts how many lines of code you need to do it.

    More code means more work to write and maintain, and more chance for bugs
    to get in.

    Remember, this stuff is already a well-known source of security vulnerabilities. The last thing you need is more maintenance headaches.

    Well, Cobol is essentially not good for any code. But for routine
    database queries I want fixed query structure with data filling
    slots. Which is provided by embedded SQL and several alternatives.
    I do not want arbitrary strings as queries: with fixed query
    structure correctness is not hard, with dynamic strings one
    needs to consider a lot of weird corner cases.

    Of course, for ad hoc queries you need dynamic query structure,
    but the ability to specify query structure should be limited to trusted
    users.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Sat Nov 15 02:41:52 2025
    From Newsgroup: comp.os.vms

    On Sat, 15 Nov 2025 00:24:04 -0000 (UTC), Waldek Hebisch wrote:

    But for routine database queries I want fixed query structure with
    data filling slots. Which is provided by embedded SQL and several alternatives. I do not want arbitrary strings as queries: with fixed
    query structure correctness is not hard, with dynamic strings one
    needs to consider a lot of weird corner cases.

    True enough. Fine for canned reports, standard batch processing runs
    etc. Except COBOL never had any official standard, did it, for these
    "EXEC SQL" templates.

    Of course, for ad hoc queries you need dynamic query structure,
    but ability to specify query structure should be limited to trusted
    users.

    Not if the query is written correctly, which is not hard to do. I
    posted example Python code for this a few times in this group over the
    years ... I could probably dig it up and post it again ...
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Fri Nov 14 22:18:22 2025
    From Newsgroup: comp.os.vms

    On 11/14/2025 9:41 PM, Lawrence D'Oliveiro wrote:
    On Sat, 15 Nov 2025 00:24:04 -0000 (UTC), Waldek Hebisch wrote:
    But for routine database queries I want fixed query structure with
    data filling slots. Which is provided by embedded SQL and several
    alternatives. I do not want arbitrary strings as queries: with fixed
    query structure correctness is not hard, with dynamic strings one
    needs to consider a lot of weird corner cases.

    True enough. Fine for canned reports, standard batch processing runs
    etc. Except COBOL never had any official standard, did it, for these
    "EXEC SQL" templates.

    ISO 9075 part 2

    Of course, for ad hoc queries you need dynamic query structure,
    but ability to specify query structure should be limited to trusted
    users.

    Not if the query is written correctly, which is not hard to do.

    C programs do not have memory leaks or out-of-bounds array accesses
    if written correctly.

    But developers occasionally make mistakes.

    Injection is still in the top 5 of the OWASP Top 10.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Sat Nov 15 06:00:40 2025
    From Newsgroup: comp.os.vms

    On Fri, 14 Nov 2025 22:18:22 -0500, Arne Vajhøj wrote:

    On 11/14/2025 9:41 PM, Lawrence D'Oliveiro wrote:

    On Sat, 15 Nov 2025 00:24:04 -0000 (UTC), Waldek Hebisch wrote:

    But for routine database queries I want fixed query structure with
    data filling slots. Which is provided by embedded SQL and several
    alternatives. I do not want arbitrary strings as queries: with
    fixed query structure correctness is not hard, with dynamic
    strings one needs to consider a lot of weird corner cases.

    True enough. Fine for canned reports, standard batch processing
    runs etc. Except COBOL never had any official standard, did it, for
    these "EXEC SQL" templates.

    ISO 9075 part 2

    Something about "data type correspondences"? Not, as I was expecting, "language constructs for COBOL"? (i.e. not sure what the relevance
    is.)

    Of course, for ad hoc queries you need dynamic query structure,
    but ability to specify query structure should be limited to
    trusted users.

    Not if the query is written correctly, which is not hard to do.

    C programs do not have memory leaks or out-of-bounds array accesses if
    written correctly.

    As you may have noticed, it wasn't C I was recommending for this.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Andreas Eder@a_eder_muc@web.de to comp.os.vms on Sat Nov 15 12:15:32 2025
    From Newsgroup: comp.os.vms

    On Fr 14 Nov 2025 at 22:18, Arne Vajhøj <arne@vajhoej.dk> wrote:

    C programs do not have memory leaks or out-of-bounds array accesses
    if written correctly.

    That is true for any language, isn't it?

    'Andreas
    --
    ceterum censeo redmondinem esse delendam
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Sat Nov 15 08:27:31 2025
    From Newsgroup: comp.os.vms

    On 11/15/2025 6:15 AM, Andreas Eder wrote:
    On Fr 14 Nov 2025 at 22:18, Arne Vajhøj <arne@vajhoej.dk> wrote:
    C programs do not have memory leaks or out-of-bounds array accesses
    if written correctly.

    That is true for any language, isn't it?

    Some languages use GC and some languages actually
    catch out of bounds array access when they happen.

    My point was that the argument of "doing XYZ is ok
    if done correctly" is weird, because problems
    only happen when something is not done correctly.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Sat Nov 15 09:22:33 2025
    From Newsgroup: comp.os.vms

    On 11/15/2025 1:00 AM, Lawrence D'Oliveiro wrote:
    On Fri, 14 Nov 2025 22:18:22 -0500, Arne Vajhøj wrote:
    On 11/14/2025 9:41 PM, Lawrence D'Oliveiro wrote:
    On Sat, 15 Nov 2025 00:24:04 -0000 (UTC), Waldek Hebisch wrote:
    But for routine database queries I want fixed query structure with
    data filling slots. Which is provided by embedded SQL and several
    alternatives. I do not want arbitrary strings as queries: with
    fixed query structure correctness is not hard, with dynamic
    strings one needs to consider a lot of weird corner cases.

    True enough. Fine for canned reports, standard batch processing
    runs etc. Except COBOL never had any official standard, did it, for
    these "EXEC SQL" templates.

    ISO 9075 part 2

    Something about "data type correspondences"? Not, as I was expecting, "language constructs for COBOL"? (i.e. not sure what the relevance
    is.)

    Embedded SQL is not a language construct, but a preprocessor construct.

    So it is:

    Cobol code with EXEC SQL---(preprocessor)--->plain Cobol code---(Cobol compiler)--->object code

    EXEC SQL ... END-EXEC is in itself very simple.

    The tricky part is the mapping between SQL data types and
    Cobol data types.

    And the handling of errors.
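
    To make that shape concrete: the fixed-statement-plus-host-variables
    idea, expressed in Python DB-API terms rather than EXEC SQL (sqlite3
    is used here purely as a stand-in, and the emp table is hypothetical):

        # The statement text is fixed; only the bound value varies.
        # This is what EXEC SQL's host variables buy you.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE emp (empno INTEGER, ename TEXT)")
        conn.execute("INSERT INTO emp VALUES (?, ?)", (7839, "KING"))

        empno = 7839                  # plays the role of :EMPNO
        row = conn.execute(
            "SELECT ename FROM emp WHERE empno = ?", (empno,)
        ).fetchone()
        print(row[0])                 # -> KING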

    Of course, for ad hoc queries you need dynamic query structure,
    but ability to specify query structure should be limited to
    trusted users.

    Not if the query is written correctly, which is not hard to do.

    C programs do not have memory leaks or out-of-bounds array accesses if
    written correctly.

    As you may have noticed, it wasn't C I was recommending for this.

    The point is that all problems arise because something is not
    written correctly.

    If everything were written correctly there would not be
    any software bugs at all.

    But we have many decades of experience showing that people do
    not always write code correctly.

    Best practice is not just to write code correctly, but to
    do things in a way that makes it more difficult not to
    write code correctly.

    C was just an example of that. An obvious example due to
    the ongoing debate about memory safe languages. Few are
    buying the argument "C is fine because the programmers can just
    write the code correctly".

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Sat Nov 15 22:16:45 2025
    From Newsgroup: comp.os.vms

    On Sat, 15 Nov 2025 09:22:33 -0500, Arne Vajhøj wrote:

    On 11/15/2025 1:00 AM, Lawrence D'Oliveiro wrote:

    On Fri, 14 Nov 2025 22:18:22 -0500, Arne Vajhøj wrote:

    On 11/14/2025 9:41 PM, Lawrence D'Oliveiro wrote:

    Except COBOL never had any official standard, did it, for these
    "EXEC SQL" templates.

    ISO 9075 part 2

    Something about "data type correspondences"? Not, as I was expecting,
    "language constructs for COBOL"? (i.e. not sure what the relevance is.)

    Embedded SQL is not a language construct, but a preprocessor construct.

    But COBOL doesn't have a standard preprocessor. Or a standard definition
    for "Embedded SQL", whether in this ISO spec or any other.

    The tricky part is the mapping between SQL data types and Cobol data
    types.

    Much easier in a dynamic language with a modern-style assortment of
    standard types, like Python.

    And the handling of errors.

    I just let the default exception handling report malformed SQL errors, and treat them like program bugs. I.e. I have to fix my code to *not* generate malformed SQL.

    The only time so far I've needed to explicitly catch an SQL error is with "IntegrityError"-type exceptions, which can occur if you try to insert a record with a duplicate value for a unique key. I only do so where this reflects a user error.
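
    A minimal sketch of that policy, assuming Python's sqlite3 module
    (the users table is made up):

        # Catch only the duplicate-key case and treat it as user error;
        # anything else propagates, because malformed SQL is a bug to fix.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (email TEXT PRIMARY KEY)")
        conn.execute("INSERT INTO users VALUES ('a@example.com')")

        try:
            conn.execute("INSERT INTO users VALUES ('a@example.com')")
        except sqlite3.IntegrityError:
            print("that email is already registered")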

    The point is that all problems arise because something is not written correctly.

    The point is that some languages are better suited to this sort of problem than others. Trying to wrestle your way through with an antiquated
    language is not a recipe for producing quality code.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Sat Nov 15 18:12:19 2025
    From Newsgroup: comp.os.vms

    On 11/15/2025 5:16 PM, Lawrence D'Oliveiro wrote:
    On Sat, 15 Nov 2025 09:22:33 -0500, Arne Vajhøj wrote:
    On 11/15/2025 1:00 AM, Lawrence D'Oliveiro wrote:
    On Fri, 14 Nov 2025 22:18:22 -0500, Arne Vajhøj wrote:
    On 11/14/2025 9:41 PM, Lawrence D'Oliveiro wrote:

    Except COBOL never had any official standard, did it, for these
    "EXEC SQL" templates.

    ISO 9075 part 2

    Something about "data type correspondences"? Not, as I was expecting, "language constructs for COBOL"? (i.e. not sure what the relevance is.)
    Embedded SQL is not a language construct, but a preprocessor construct.

    But COBOL doesn't have a standard preprocessor. Or a standard definition for "Embedded SQL", whether in this ISO spec or any other.

    The embedded SQL pre-processor typically comes from the database vendor.

    The ISO SQL standard (part 2 covers the native languages, part 10 covers
    Java and possibly other object-oriented languages) and industry
    practices make it work fine.

    The tricky part is the mapping between SQL data types and Cobol data
    types.

    Much easier in a dynamic language with a modern-style assortment of
    standard types, like Python.

    The basic types have not changed since the time of Cobol.

    But obviously a dynamically typed language does not have the problem
    of having to declare query result variables of the correct type.

    And the handling of errors.

    I just let the default exception handling report malformed SQL errors, and treat them like program bugs. I.e. I have to fix my code to *not* generate malformed SQL.

    The only time so far I've needed to explicitly catch an SQL error is with "IntegrityError"-type exceptions, which can occur if you try to insert a record with a duplicate value for a unique key. I only do so where this reflects a user error.

    Most languages used for embedded SQL do not use exceptions, so
    that is not an option.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Sun Nov 16 02:00:25 2025
    From Newsgroup: comp.os.vms

    In article <10f4n8c$25lkk$1@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    On 2025-11-12, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    In article <10esrru$1qu6$1@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    On 2025-11-07, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    As for the cloud, the number of organizations moving back
    on-prem for very good reasons shouldn't be discounted.

    Yes, and I hope the latest batch of critical system movers do not
    repeat those same mistakes.

    I'm not sure what mistakes you're referring to, but let's hope
    that system maintainers make fewer mistakes generally. :-D

    I was referring to the mistake of getting rid of your local systems,
    and local systems knowledge, in favour of moving everything into the
    public clouds and outsourcing your local systems knowledge and
    development to third party vendors.

    This works for some people, but not for others, and there appears to have been quite a drive by senior management in general of inappropriate
    movement away from local control and knowledge so that it "becomes someone else's problem".

    The problem is that it isn't someone else's problem, it's still their problem, as more than a few people have found out the hard way, promptly followed by spending more money to move things back in house again.

    Ah, ok. Yes, I agree; discarding local domain knowledge is
    rarely --- if ever --- a good idea.

    It seems axiomatic that movement of systems should be done with
    care, and only after evaluating whether such a move is a good
    idea holistically. Clearly, a lot of people moved "to the
    cloud" who either did no such analysis, or did not account for a
    number of variables if they did.

    On the one hand, I kind of get that: there are a lot of unknowns
    when doing such things, and those may not be discovered until
    after the fact. On the other hand, once you've been around for
    a while, you know this is the case and should anticipate it.

    Among the arguments for the cloud are that provisioning,
    building, and maintaining datacenters is one of the core
    competencies of the hyperscalers. And that is true; the Googles
    and Amazons and Microsofts of the world do this better than
    anybody else. So you get tremendous economies of scale with
    respect to hardware and its maintenance if you leverage renting
    capacity from them. Further, you get to skip all of the capital
    expenses of building out your own infrastructure. Yay! And
    elasticity is attractive: you don't have to provision for (read:
    always pay for) your peak usage; you can adjust over time and
    that can save.

    But your workload is _your_ workload. The cloud provider
    doesn't have any insight into your requirements there, really,
    and if you're not a sufficiently large customer, they won't
    really care all that much either. At Google, we certainly made
    a good-faith effort, but some things just weren't worth it from
    the perspective of deciding where to spend engineering
    resources. I used to joke that we were sort of like the spacing
    guild from "Dune": the Atreides and Harkonnen's could have jobs
    running on the same machine and never know it. However, none of
    that is an excuse to throw away knowledge of your own workload.

    But just like renting instead of owning a home, you're subject
    to the landlord raising the rent on you. And once you hit a
    certain scale, the economies of scale argument begins to break
    down. Hence, re-homing back on-prem in a lot of cases.

    Provided you still know how to run your own stuff, of course.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Tue Nov 18 07:25:45 2025
    From Newsgroup: comp.os.vms

    On Sat, 15 Nov 2025 18:12:19 -0500, Arne Vajhøj wrote:

    On 11/15/2025 5:16 PM, Lawrence D'Oliveiro wrote:
    On Sat, 15 Nov 2025 09:22:33 -0500, Arne Vajhøj wrote:
    On 11/15/2025 1:00 AM, Lawrence D'Oliveiro wrote:
    On Fri, 14 Nov 2025 22:18:22 -0500, Arne Vajhøj wrote:
    On 11/14/2025 9:41 PM, Lawrence D'Oliveiro wrote:

    Except COBOL never had any official standard, did it, for these
    "EXEC SQL" templates.

    ISO 9075 part 2

    Something about "data type correspondences"? Not, as I was expecting, "language constructs for COBOL"? (i.e. not sure what the relevance is.)

    Embedded SQL is not a language construct, but a preprocessor
    construct.

    But COBOL doesn't have a standard
    definition for "Embedded SQL", whether in this ISO spec or any other.

    The embedded SQL pre-processor typically comes from the database vendor.

    But there is no specification in the language standard for how it should
    work. So your code ends up being non-portable -- defeating much of the
    point of using COBOL.

    The ISO SQL standard (part 2 cover the native languages, part 10 cover
    Java and possible other object oriented languages) and industry
    practices makes it work fine.

    There was nothing in there that I could see about the syntax of SQL
    embedding, though.

    The tricky part is the mapping between SQL data types and Cobol data
    types.

    Much easier in a dynamic language with a modern-style assortment of
    standard types, like Python.

    The basic types have not changed since the time of Cobol.

    That's the trouble. But Python includes handy things like dynamic lists/tuples, dictionaries and sets, which are very handy for collecting data
    from SQL databases, and for putting data into SQL databases. And
    iterators, so you don't have to retrieve the entire query result set into memory at once; you can pull in just as much as you can deal with at once.
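
    A small sketch of the iterator point, again using sqlite3 as a
    stand-in (the orders table is hypothetical):

        # A cursor is an iterator: rows are pulled as the loop asks
        # for them, not materialized into one big list first.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (order_id INTEGER, total REAL)")
        conn.executemany("INSERT INTO orders VALUES (?, ?)",
                         [(1, 9.99), (2, 24.50)])

        for order_id, total in conn.execute("SELECT order_id, total FROM orders"):
            print(order_id, total)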

    But obviously a dynamically typed language does not have the problem of
    having to declare query result variables of the correct type.

    Another advantage when dealing with ad-hoc queries!

    And the handling of errors.

    I just let the default exception handling report malformed SQL errors,
    and treat them like program bugs. I.e. I have to fix my code to *not*
    generate malformed SQL.

    The only time so far I've needed to explicitly catch an SQL error is
    with "IntegrityError"-type exceptions, which can occur if you try to
    insert a record with a duplicate value for a unique key. I only do so
    where this reflects a user error.

    Most languages used for embedded SQL do not use exceptions, so that is
    not an option.

    Which ones? It's not just Python that has them, C++ and Java do, too. Why would you use a language that didn't have exceptions to work with SQL?
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Tue Nov 18 07:29:42 2025
    From Newsgroup: comp.os.vms

    On Fri, 14 Nov 2025 16:49:49 -0500, Arne Vajhøj wrote:

    On 11/14/2025 12:47 AM, Lawrence D'Oliveiro wrote:

    On Wed, 12 Nov 2025 16:10:39 -0500, Arne Vajhøj wrote:

    I believe the topic was whether mainframes can achieve the
    availability - the required number of nines. They can.

    No they can't. Mainframes were never designed for high availability.

    How many nines does IBM offer?

    Hint: look at this intro from IBM itself
    <https://www.ibm.com/think/topics/high-availability>. Do they mention
    their own mainframes? No. Do they mention cloud and Linux companies?
    Yes.

    Better hint - their page about z resiliency:

    https://www.ibm.com/products/z/resiliency

    <quote>
    For clients running z/OS v3.1 or higher with a configured high
    availability IBM software stack on IBM z16 or IBM z17, users can expect
    up to 99.999999% availability or 315.58 milliseconds of downtime per
    year when using a GDPS 4.7 Continuous Availability (CA) configuration
    and workloads.
    </quote>

    That is a lot of nines.

    Did you notice the footnote?

    <https://www.ibm.com/products/z/resiliency#footnote>:

    1 IBM z17 systems, with GDPS, IBM DS8000 series storage with
    HyperSwap, and running a Red Hat OpenShift Container Platform
    environment, are designed to deliver 99.999999% availability.

    Now, what has Red Hat got to do with mainframe resiliency? In fact, the mainframe doesn't really have anything to do with it, does it? It's all down to Linux-based high-availability technologies, like OpenShift. All
    the resiliency is effectively coming from that.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Tue Nov 18 09:19:35 2025
    From Newsgroup: comp.os.vms

    On 11/18/2025 2:25 AM, Lawrence D'Oliveiro wrote:
    On Sat, 15 Nov 2025 18:12:19 -0500, Arne Vajhøj wrote:
    On 11/15/2025 5:16 PM, Lawrence D'Oliveiro wrote:
    On Sat, 15 Nov 2025 09:22:33 -0500, Arne Vajhøj wrote:
    On 11/15/2025 1:00 AM, Lawrence D'Oliveiro wrote:
    On Fri, 14 Nov 2025 22:18:22 -0500, Arne Vajhøj wrote:
    On 11/14/2025 9:41 PM, Lawrence D'Oliveiro wrote:

    Except COBOL never had any official standard, did it, for these
    "EXEC SQL" templates.

    ISO 9075 part 2

    Something about "data type correspondences"? Not, as I was expecting, "language constructs for COBOL"? (i.e. not sure what the relevance is.)

    Embedded SQL is not a language construct, but a preprocessor
    construct.

    But COBOL doesn't have a standard
    definition for "Embedded SQL", whether in this ISO spec or any other.
    The embedded SQL pre-processor typical comes from the database vendor.

    But there is no specification in the language standard for how it should work.

    The language compiler does not see any embedded SQL - the embedded SQL pre-processor outputs plain Cobol (or C or whatever).

    So your code ends up being non-portable

    If the SQL used is database specific, then it only works with that
    database.

    If the non-embedded Cobol code is VMS (or other platform) specific,
    then it only works there.

    An embedded SQL application may very well be non-portable, but
    not due to the use of embedded SQL.

    The ISO SQL standard (part 2 covers the native languages, part 10 covers
    Java and possibly other object-oriented languages) and industry
    practices make it work fine.

    There was nothing in there that I could see about the syntax of SQL embedding, though.

    The SQL language is the same SQL language as the database offers
    via other APIs.

    The mapping to host language variables is the tricky part.

    Then you just need to wrap it.

    Cobol: EXEC SQL ... END-EXEC
    C, Pascal etc.: EXEC SQL ...;
    Fortran: EXEC SQL ...
    Java: #sql ...;

    The tricky part is the mapping between SQL data types and Cobol data
    types.

    Much easier in a dynamic language with a modern-style assortment of
    standard types, like Python.

    The basic types have not changed since the time of Cobol.

    That's the trouble. But Python includes handy things like dynamic lists/tuples, dictionaries and sets, which are very handy for collecting data
    from SQL databases, and for putting data into SQL databases.

    Things progress. Cobol is from 1960. Some progress has been
    made since then.

    :-)

    And iterators, so you don't have to retrieve the entire query result set into memory at once; you can pull in just as much as you can deal with at once.

    That works fine in old languages as well.

    And the handling of errors.

    I just let the default exception handling report malformed SQL errors,
    and treat them like program bugs. I.e. I have to fix my code to *not*
    generate malformed SQL.

    The only time so far I've needed to explicitly catch an SQL error is
    with "IntegrityError"-type exceptions, which can occur if you try to insert a record with a duplicate value for a unique key. I only do so
    where this reflects a user error.

    Most languages used for embedded SQL do not use exceptions, so that is
    not an option.

    Which ones? It's not just Python that has them, C++ and Java do, too. Why would you use a language that didn't have exceptions to work with SQL?

    The two biggest languages for embedded SQL must be Cobol and C.

    Neither has exceptions.

    Newer languages rarely use embedded SQL.

    Embedded SQL got defined for Java - I assume IBM and Oracle pushed hard
    for it - but nobody is using it.

    Arne





    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Tue Nov 18 13:52:44 2025
    From Newsgroup: comp.os.vms

    On 11/18/2025 2:29 AM, Lawrence D'Oliveiro wrote:
    On Fri, 14 Nov 2025 16:49:49 -0500, Arne Vajhøj wrote:
    Better hint - their page about z resiliency:

    https://www.ibm.com/products/z/resiliency

    <quote>
    For clients running z/OS v3.1 or higher with a configured high
    availability IBM software stack on IBM z16 or IBM z17, users can expect
    up to 99.999999% availability or 315.58 milliseconds of downtime per
    year when using a GDPS 4.7 Continuous Availability (CA) configuration
    and workloads.
    </quote>

    That is a lot of nines.

    Did you notice the footnote?

    <https://www.ibm.com/products/z/resiliency#footnote>:

    1 IBM z17 systems, with GDPS, IBM DS8000 series storage with
    HyperSwap, and running a Red Hat OpenShift Container Platform
    environment, are designed to deliver 99.999999% availability.

    Now, what has Red Hat got to do with mainframe resiliency? In fact, the mainframe doesn't really have anything to do with it, does it? It's all down to Linux-based high-availability technologies, like OpenShift. All
    the resiliency is effectively coming from that.

    You need to read it all.

    They can do that uptime for different software stacks:

    MongoDB on k8s on Linux on z/VM on mainframe
    DB2 on z/OS on mainframe
    IMS on z/OS on mainframe

    But even for the MongoDB k8s case the mainframe
    contributes to the expected uptime due to the low
    number of active physical boxes.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From gcalliet@gerard.calliet@pia-sofer.fr to comp.os.vms on Thu Nov 20 13:05:15 2025
    From Newsgroup: comp.os.vms

    On 04/11/2025 at 21:57, Subcommandante XDelta wrote:
    On 5/11/2025 12:59 am, Simon Clubley wrote:
    On 2025-11-03, Arne Vajhøj <arne@vajhoej.dk> wrote:

    Mainframes were unique in the last century regarding integrity, availability and performance, but not today.

    Standard distributed environment, load sharing (horizontal scaling)
    applications, standard RDBMS with transaction and XA transaction
    support, auto scaling VM or container solutions, massive scaling
    capable NoSQL databases.

    It can be made to work.


    It can also be made to _appear_ to work. And probably will, at least in
    the short term.

    It can also be made not to work, but ....

    :
    :

    I've been thinking quite a bit recently about just how bad monocultures
    and short term thinking can be from a society being able to continue
    functioning point of view. Just look at the massive damage done by
    attacks on major companies here in the UK over the last year, all of
    which should not have had single points of failure like that. :-(

    Simon.


    Steady on, old chap, going on like that, about the cloud-computing
    clown-car, will get you setting up a chapter, cluster node of the VMS Generations group, tout de suite, stat! :-)

    Because, as a French person, I'm proud that VMS Generations has been quoted, and because I see a very interesting thread, I'll give another taste of the
    thread.

    DEC's good luck was bound up with a simple concept: locality. At that
    time minicomputers were an alternative to mainframes thanks - but not
    only - to locality: you could get computing power next door to your
    department or research laboratory.

    Being local, or being centralized: it seems we are forever alternating
    between choosing locality because it's better and choosing centrality
    because it's better, again and again.

    I think VMS could help in choosing hybrid solutions. The more critical
    operations locally, the less critical operations somewhere in a cloud, or
    in one or two centralized data centers, backboned by locally managed VMS
    clusters.

    Another key to the success of VMS has been mastery. I think VMS as a general alternative OS can help in regaining mastery.

    So:
    1) a general OS can be booted on bare-metal hardware
    2) the VM-plus-cloud-only solution that VSI proposes as the future of VMS
    is not a way to real hybrid solutions
    3) a non-general OS, pushed from Windows or Linux, cannot give a real
    future to VMS
    4) a general OS is supported by a real community, otherwise it cannot be
    seen as a long-term solution - the way VSI is centralizing the offering,
    not opening collaboration on open-source development, and the decisions
    (made in 2014) not to support user groups and not to do any marketing, are
    not good choices for opening a future to a VMS community

    (My opinions)

    Gérard Calliet
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Thu Nov 20 23:07:21 2025
    From Newsgroup: comp.os.vms

    On Tue, 18 Nov 2025 09:19:35 -0500, Arne Vajhøj wrote:

    On 11/18/2025 2:25 AM, Lawrence D'Oliveiro wrote:

    The language compiler does not see any embedded SQL - the embedded SQL pre-processor outputs plain Cobol (or C or whatever).

    So your code ends up being non-portable

    If the SQL used is database specific, then it only works with that
    database.

    It's quite common to have applications in a range of languages
    all accessing the same database.

    It's not so common to have different compilers for what is supposed to
    be the same language require different syntax for embedding that SQL.

    Then you just need to wrap it.

    Cobol: EXEC SQL ... END-EXEC

    But there is no standard in COBOL for how to do this wrapping.

    And iterators, so you don't have to retrieve the entire query
    result set into memory at once, you can pull in just as much as you
    can deal with at once.

    That works fine in old languages as well.

    Those old languages don't have iterators.

    Remember that, in a language like Python, iterators are not
    specifically a mechanism for database queries; they have other uses as
    well.
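
    A minimal sketch of that point (pymysql and the t1 table are borrowed
    from the demo later in the thread; the connection parameters are
    placeholders). A Python generator turns the usual fetchmany() loop
    into an iterator:

    import pymysql

    def rows(con, sql, params=(), batch=100):
        # A generator: yields rows one at a time while pulling them
        # from the server in small batches via fetchmany().
        cur = con.cursor()
        try:
            cur.execute(sql, params)
            while True:
                chunk = cur.fetchmany(batch)
                if not chunk:
                    break
                for row in chunk:
                    yield row
        finally:
            cur.close()

    # Usage (placeholders):
    # con = pymysql.connect(host='...', user='...', password='...', db='Test')
    # for f1, f2 in rows(con, "SELECT f1, f2 FROM t1"):
    #     print(f1, f2)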

    The two biggest languages for embedded SQL must be Cobol and C.

    Neither has exceptions.

    And neither is suited to ad-hoc queries. Their mindset is that of
    templated queries for doing a limited set of queries for bulk data
    processing.

    Newer languages rarely use embedded SQL.

    Precisely.

    Embedded SQL got defined for Java - I assume IBM and Oracle pushed
    hard for it - but nobody is using it.

    Funny, that ...
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Thu Nov 20 23:09:34 2025
    From Newsgroup: comp.os.vms

    On Tue, 18 Nov 2025 13:52:44 -0500, Arne Vajhøj wrote:

    On 11/18/2025 2:29 AM, Lawrence D'Oliveiro wrote:

    On Fri, 14 Nov 2025 16:49:49 -0500, Arne Vajhøj wrote:

    Better hint - their page about z resiliency:

    https://www.ibm.com/products/z/resiliency

    <quote>
    For clients running z/OS v3.1 or higher with a configured high
    availability IBM software stack on IBM z16 or IBM z17, users can
    expect up to 99.999999% availability or 315.58 milliseconds of
    downtime per year when using a GDPS 4.7 Continuous Availability
    (CA) configuration and workloads.
    </quote>

    That is a lot of nines.

    Did you notice the footnote?

    <https://www.ibm.com/products/z/resiliency#footnote>:

    1 IBM z17 systems, with GDPS, IBM DS8000 series storage with
    HyperSwap, and running a Red Hat OpenShift Container Platform
    environment, are designed to deliver 99.999999% availability.

    Now, what has Red Hat got to do with mainframe resiliency? In
    fact, the mainframe doesn't really have anything to do with it,
    does it? It's all down to Linux-based high-availability
    technologies, like OpenShift. All the resiliency is effectively
    coming from that.

    You need to read it all.

    They can do that uptime for different software stacks:

    MongoDB on k8s on Linux on z/VM on mainframe
    DB2 on z/OS on mainframe
    IMS on z/OS on mainframe

    There is no actual mention on that page of being able to achieve such
    a high level of nines without Linux. None.

    But even for the MongoDB k8s case the mainframe contributes to the
    expected uptime due to the low number of active physical boxes.

    No they don't. They don't make any contribution to the nines at all;
    all that is coming from the Linux stack.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Thu Nov 20 19:31:39 2025
    From Newsgroup: comp.os.vms

    On 11/20/2025 6:09 PM, Lawrence D'Oliveiro wrote:
    On Tue, 18 Nov 2025 13:52:44 -0500, Arne Vajhøj wrote:
    On 11/18/2025 2:29 AM, Lawrence D'Oliveiro wrote:
    <https://www.ibm.com/products/z/resiliency#footnote>:

    1 IBM z17 systems, with GDPS, IBM DS8000 series storage with
    HyperSwap, and running a Red Hat OpenShift Container Platform
    environment, are designed to deliver 99.999999% availability.

    Now, what has Red Hat got to do with mainframe resiliency? In
    fact, the mainframe doesn't really have anything to do with it,
    does it? It's all down to Linux-based high-availability
    technologies, like OpenShift. All the resiliency is effectively
    coming from that.

    You need to read it all.

    They can do that uptime for different software stacks:

    MongoDB on k8s on Linux on z/VM on mainframe
    DB2 on z/OS on mainframe
    IMS on z/OS on mainframe

    There is no actual mention on that page of being able to achieve such
    a high level of nines without Linux. None.

    Let us have a little pop quiz.

    We have 3 tech stacks:

    1)
    MongoDB on k8s on Linux on z/VM on mainframe
    2)
    DB2 on z/OS on mainframe
    3)
    IMS on z/OS on mainframe

    We have 2 configs:

    A)
    <quote>
    IBM z17 systems, with GDPS, IBM DS8000 series storage with HyperSwap,
    and running a Red Hat OpenShift Container Platform environment, are
    designed to deliver 99.999999% availability.

    DISCLAIMER: IBM internal data based on measurements and projections was
    used in calculating the expected value. Necessary components include IBM
    z17; IBM z/VM V7.3 systems or above collected in a Single System Image,
    each running RHOCP 4.14 or above; IBM Operations Manager; GDPS 4.6 or
    above for management of data recovery and virtual machine recovery
    across metro distance systems and storage, including Metro Multi-site
    workload and GDPS Global; and IBM DS8000 series storage with IBM
    HyperSwap. A MongoDB v4.4 workload was used. Necessary resiliency
    technology must be enabled, including z/VM Single System Image
    clustering, GDPS xDR Proxy for z/VM, and Red Hat OpenShift Data
    Foundation (ODF) 4.14 or above for management of local storage devices. Application-induced outages are not included in the above measurements.
    Other configurations (hardware or software) may provide different
    availability characteristics.
    </quote>

    B)
    <quote>
    For clients running z/OS v3.1 or higher with a configured high
    availability IBM software stack on IBM z16 or IBM z17, users can expect
    up to 99.999999% availability or 315.58 milliseconds of downtime per
    year when using a GDPS 4.7 Continuous Availability (CA) configuration
    and workloads.

    DISCLAIMER: The claim is based on IBM internal data and a GDPS CA
    three-site configuration, 2 active Sysplex sites and 1 Disaster Recovery
    (DR) site, consisting of z/OS 3.1 or higher with a Recovery Time
    objective (RTO) of 2 minutes or less, one of the required GDPS CA IBM middleware stack workloads and replication products running on IBM z16
    or IBM z17. GDPS CA includes resiliency features such as Parallel
    Sysplex enabled data sharing applications, GDPS Metro Mirror replication (Hyperswap), software replication, and other CA configuration documented
    high availability features. A supported GDPS CA middleware stack could
    include CICS v6.2, IMS v15.5, MQ v9.4, and Db2 v13 or at later releases. Clients must follow maintenance, configuration, capacity planning and
    testing best practices for the entire software stack and hardware configuration. This includes enabling all the resiliency technology for
    their workloads as defined by GDPS CA, z/OS, and workload related
    software products. Other configurations may have different availability characteristics.
    </quote>

    Who can match tech stacks to configs?

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Thu Nov 20 19:36:15 2025
    From Newsgroup: comp.os.vms

    On 11/20/2025 6:09 PM, Lawrence D'Oliveiro wrote:
    On Tue, 18 Nov 2025 13:52:44 -0500, Arne Vajhøj wrote:
    But even for the MongoDB k8s case the mainframe contributes to the
    expected uptime due to the low number of active physical boxes.

    No they don't. They don't make any contribution to the nines at all;
    all that is coming from the Linux stack.

    You need to do the math.

    If we say Pn means uptime for n redundant systems, then:

    Pn = 1 - (1 - P1)**n

    P1 = 1 - (1 - Pn)**(1/n)

    and combined with:

    P1 = MIN(P1hardware, P1os, P1app)

    then we know that:

    P1hardware >= 1 - (1 - Pn)**(1/n)

    Or more specifically, for n=2 we know that
    P2 = 8 nines means that P1hardware >= 4 nines.
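
    A quick check of that arithmetic in Python (nothing assumed beyond
    the formulas above):

    P2 = 0.99999999                 # eight nines for the redundant pair
    n = 2
    P1 = 1 - (1 - P2) ** (1 / n)    # lower bound on per-system availability
    print(P1)                       # 0.9999 -> at least four nines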

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Thu Nov 20 19:41:59 2025
    From Newsgroup: comp.os.vms

    On 11/20/2025 6:07 PM, Lawrence D'Oliveiro wrote:
    On Tue, 18 Nov 2025 09:19:35 -0500, Arne Vajhøj wrote:

    On 11/18/2025 2:25 AM, Lawrence D'Oliveiro wrote:

    The language compiler does not see any embedded SQL - the embedded SQL
    pre-processor outputs plain Cobol (or C or whatever).

    So your code ends up being non-portable

    If the SQL used is database specific, then it only works with that
    database.

    It's quite common to have applications in a range of languages
    all accessing the same database.

    It's not so common to have different compilers for what is supposed to
    be the same language, require different syntax for embedding that SQL.

    Sounds true.

    But that has nothing to do with the fact that one of the most
    common ways to make embedded SQL non-portable between databases
    is to use database-specific SQL.

    Then you just need to wrap it.

    Cobol: EXEC SQL ... END-EXEC

    But there is no standard in COBOL for how to do this wrapping.

    As explained twice, the Cobol compiler does not see those.

    But people have no problem putting EXEC SQL in front
    of their SQL and END-EXEC after it.
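
    To make the "compiler never sees it" point concrete, here is a toy
    Python sketch (not any real pre-processor) that finds the bracketed
    statements a pre-processor would replace with library calls:

    import re

    # EXEC SQL ... END-EXEC brackets, possibly spanning several lines
    EXEC_SQL = re.compile(r"EXEC\s+SQL\s+(.*?)\s+END-EXEC",
                          re.DOTALL | re.IGNORECASE)

    source = """
        MOVE 1 TO WS-F2.
        EXEC SQL
            SELECT F1 INTO :WS-F1 FROM T1 WHERE F2 = :WS-F2
        END-EXEC.
        DISPLAY WS-F1.
    """

    # The host compiler only ever sees what the pre-processor emits;
    # here we just list what would be replaced.
    for stmt in EXEC_SQL.findall(source):
        print(stmt)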

    And iterators, so you don't have to retrieve the entire query
    result set into memory at once, you can pull in just as much as you
    can deal with at once.

    That works fine in old languages as well.

    Those old languages don't have iterators.

    They still fetch rows conceptually one at a time (on the wire
    likely in small bundles).

    That does not require an iterator.

    In fact many database APIs do not even have the option
    of fetching all rows into memory.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Thu Nov 20 19:55:21 2025
    From Newsgroup: comp.os.vms

    Let me demo SQLPY "Embedded SQL for Python".

    :-) :-) :-)

    Mostly based on SQLJ.

    $ type test.sqlpy
    import pymysql

    *sql context ConCtx

    *sql iterator T1Iter

    def t1_get_one(con, f2):
        ctx = ConCtx(con)
        it = T1Iter()
        *sql [ctx] it = { SELECT f1 FROM t1 WHERE f2 = :f2}
        *sql { fetch :it INTO :f1 }
        return f1

    def t1_get_all(con):
        ctx = ConCtx(con)
        it = T1Iter()
        *sql [ctx] it = { SELECT f1,f2 FROM t1 }
        res = []
        while not it.endfetch():
            *sql { fetch :it INTO :f1, :f2 }
            res.append([f1, f2])
        return res

    def t1_put(con, f1, f2):
        ctx = ConCtx(con)
        *sql [ctx] { INSERT INTO t1 VALUES(:f1,:f2) }

    def t1_remove(con, f1):
        ctx = ConCtx(con)
        *sql [ctx] { DELETE FROM t1 WHERE f1 = :f1 }

    def t1_display(data):
        for row in data:
            print('%d %s' % (row[0], row[1]))

    con = pymysql.connect(host='arnepc5',user='arne',password='hemmeligt',db='Test')
    f1 = t1_get_one(con, 'BB')
    print(f1)
    data = t1_get_all(con)
    t1_display(data)
    t1_put(con, 999, 'XXX')
    data = t1_get_all(con)
    t1_display(data)
    t1_remove(con, 999)
    data = t1_get_all(con)
    t1_display(data)
    con.commit()
    con.close()

    $ python sqlpy.py TEST.sqlpy TEST.py
    $ python TEST.py
    2
    1 A
    2 BB
    3 CCC
    1 A
    2 BB
    3 CCC
    999 XXX
    1 A
    2 BB
    3 CCC

    Arne


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Fri Nov 21 01:32:51 2025
    From Newsgroup: comp.os.vms

    On Thu, 20 Nov 2025 19:31:39 -0500, Arne Vajhøj wrote:

    On 11/20/2025 6:09 PM, Lawrence D'Oliveiro wrote:

    There is no actual mention on that page of being able to achieve such
    a high level of nines without Linux. None.


    A)
    <quote>
    A MongoDB v4.4 workload was used.

    MongoDB is Linux-only. Doesn't run under z/OS.

    B)
    <quote>
    ... one of the required GDPS CA IBM middleware stack workloads ...

    I wonder what these are, and if any of them come *without* Linux
    somewhere in the mix?

    Who can match tech stacks to configs?

    Any hyperscaler can, much more cost-effectively, without IBM
    mainframes being involved.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Fri Nov 21 19:09:48 2025
    From Newsgroup: comp.os.vms

    On 11/20/2025 7:55 PM, Arne Vajhøj wrote:
    Let me demo SQLPY "Embedded SQL for Python".

    :-) :-) :-)

    Mostly based on SQLJ.

    $ type test.sqlpy

    $ python sqlpy.py TEST.sqlpy TEST.py
    $ python TEST.py

    And for the curious, here is sqlpy.py (it is just a
    bunch of regexes):

    import sys
    import re

    PAT_CONTEXT = r"^\*sql context (\w+)$"
    PAT_ITERATOR = r"^\*sql iterator (\w+)$"
    PAT_QUERY = r"^(\W*)\*sql \[(\w+)\] (\w+) = {(.*)}$"
    PAT_UPDATE = r"^(\W*)\*sql \[(\w+)\] {(.*)}$"
    PAT_PARAM = r":(\w+)"
    PAT_FETCH = r"^(\W*)\*sql { fetch :(\w+) INTO (.*) }$"

    def params(sqlstr):
        return re.findall(PAT_PARAM, sqlstr)

    sqlpy = open(sys.argv[1], "r")
    py = open(sys.argv[2], "w")
    for line in sqlpy:
        pureline = line.rstrip()
        pat_context = re.match(PAT_CONTEXT, pureline)
        pat_iterator = re.match(PAT_ITERATOR, pureline)
        pat_query = re.match(PAT_QUERY, pureline)
        pat_fetch = re.match(PAT_FETCH, pureline)
        pat_update = re.match(PAT_UPDATE, pureline)
        if pat_context:
            py.write("class %s:\n" % (pat_context.group(1)))
            py.write("    def __init__(self, con):\n")
            py.write("        self.con = con\n")
        elif pat_iterator:
            py.write("class %s:\n" % (pat_iterator.group(1)))
            py.write("    def prepare(self, cur):\n")
            py.write("        self.cur = cur\n")
            py.write("    def execute(self, sql, p = ()):\n")
            py.write("        self.cur.execute(sql, p)\n")
            py.write("        self.row = self.cur.fetchone()\n")
            py.write("    def fetchone(self):\n")
            py.write("        res = self.row\n")
            py.write("        self.row = self.cur.fetchone()\n")
            py.write("        if self.row == None:\n")
            py.write("            self.cur.close()\n")
            py.write("        return res\n")
            py.write("    def endfetch(self):\n")
            py.write("        return self.row == None\n")
        elif pat_query:
            py.write("%s%s.prepare(%s.con.cursor())\n" %
                     (pat_query.group(1), pat_query.group(3), pat_query.group(2)))
            sql = pat_query.group(4).lstrip().rstrip()
            p = params(sql)
            if len(p) == 0:
                py.write("%s%s.execute('%s')\n" %
                         (pat_query.group(1), pat_query.group(3), sql))
            elif len(p) == 1:
                sql = re.sub(PAT_PARAM, "%s", sql)
                py.write("%s%s.execute('%s', (%s,))\n" %
                         (pat_query.group(1), pat_query.group(3), sql, p[0]))
            else:
                sql = re.sub(PAT_PARAM, "%s", sql)
                py.write("%s%s.execute('%s', (%s))\n" %
                         (pat_query.group(1), pat_query.group(3), sql, ",".join(p)))
        elif pat_fetch:
            p = params(pat_fetch.group(3))
            if len(p) == 1:
                py.write("%s%s = %s.fetchone()\n" %
                         (pat_fetch.group(1), p[0] + ",", pat_fetch.group(2)))
            else:
                py.write("%s%s = %s.fetchone()\n" %
                         (pat_fetch.group(1), ",".join(p), pat_fetch.group(2)))
        elif pat_update:
            py.write("%sc = %s.con.cursor()\n" %
                     (pat_update.group(1), pat_update.group(2)))
            sql = pat_update.group(3).lstrip().rstrip()
            p = params(sql)
            if len(p) == 0:
                py.write("%sc.execute('%s')\n" % (pat_update.group(1), sql))
            elif len(p) == 1:
                sql = re.sub(PAT_PARAM, "%s", sql)
                py.write("%sc.execute('%s', (%s,))\n" %
                         (pat_update.group(1), sql, p[0]))
            else:
                sql = re.sub(PAT_PARAM, "%s", sql)
                py.write("%sc.execute('%s', (%s))\n" %
                         (pat_update.group(1), sql, ",".join(p)))
            py.write("%sc.close()\n" % (pat_update.group(1)))
        else:
            py.write(line)
    py.close()
    sqlpy.close()

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Dave Froble@davef@tsoft-inc.com to comp.os.vms on Sat Nov 29 19:49:23 2025
    From Newsgroup: comp.os.vms

    On 11/11/2025 10:23 AM, Waldek Hebisch wrote:
    bill <bill.gunshannon@gmail.com> wrote:
    On 11/10/2025 9:12 AM, Simon Clubley wrote:


    Question: are they low-risk because they were designed to do one thing
    and to do it very well in extremely demanding environments ?

    Are the replacements higher-risk because they are more of a generic
    infrastructure and the mission critical workloads need to be
    force-fitted into them ?


    And here you finally hit the crux of the matter.
    People wonder why I am still a strong supporter of COBOL.
    The reason is simple. It was a language designed to do
    a particular task and it does it well. Now we have this
    desire to replace it with something generic. I feel this
    is a bad idea.

    Well, Cobol represents practices of 1960 business data
    processing.

    Sometimes things don't really change. You count to 10 the same way now as in 1960. (Trivial example)

    At that time it was state of the art.
    But the state of the art changed. Cobol somewhat adapted,
    but it was slow to do so. So your claim of "does it well"
    does not look true, unless by "it" you mean
    "replicating Cobol data processing from the sixties".

    To expand a bit more, Cobol has an essentially unfixable problem
    with verbosity.

    Now this is opinion, and really a poor argument. While I detest the verbosity in most things, that is my choice, not the problem you claim.

    Defining a function needs several lines of
    overhead code. Function calls are more verbose than in other
    languages. There are fixable problems, which however may
    appear when dealing with real Cobol code. In particular
    Cobol supports old control structures. In a new program you
    can use new control structures, but converting uses of old
    control structures to new ones needs effort, and it is likely
    that a bit more effort would be enough to convert the whole
    program to a different language.

    I apologize in advance, but that is idiotic. Any re-write of any
    non-trivial application in another language will never be complete.
    There will be errors and things will be lost. IT WILL HAPPEN !!!
    And when done, what will be the gains in a sideways move?
    --
    David Froble Tel: 724-529-0450
    Dave Froble Enterprises, Inc. E-Mail: davef@tsoft-inc.com
    DFE Ultralights, Inc.
    170 Grimplin Road
    Vanderbilt, PA 15486
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.os.vms on Sun Nov 30 05:44:11 2025
    From Newsgroup: comp.os.vms

    Dave Froble <davef@tsoft-inc.com> wrote:
    On 11/11/2025 10:23 AM, Waldek Hebisch wrote:
    bill <bill.gunshannon@gmail.com> wrote:
    On 11/10/2025 9:12 AM, Simon Clubley wrote:


    Question: are they low-risk because they were designed to do one thing
    and to do it very well in extremely demanding environments ?

    Are the replacements higher-risk because they are more of a generic
    infrastructure and the mission critical workloads need to be
    force-fitted into them ?


    And here you finally hit the crux of the matter.
    People wonder why I am still a strong supporter of COBOL.
    The reason is simple. It was a language designed to do
    a particular task and it does it well. Now we have this
    desire to replace it with something generic. I feel this
    is a bad idea.

    Well, Cobol represents practices of 1960 business data
    processing.

    Sometimes things don't really change. You count to 10 the same way now as in
    1960. (Trivial example)

    Sometimes you may be able to do your data processing as in 1960.
    But it is very unlikely to be a good way now.

    At that time it was state of the art.
    But the state of the art changed. Cobol somewhat adapted,
    but it was slow to do so. So your claim of "does it well"
    does not look true, unless by "it" you mean
    "replicating Cobol data processing from the sixties".

    To expand a bit more, Cobol has an essentially unfixable problem
    with verbosity.

    Now this is opinion, and really a poor argument. While I detest the verbosity
    in most things, that is my choice, not the problem you claim.

    Once you have chosen Cobol you cannot avoid verbosity (you can
    make it even more verbose if you want, but you cannot avoid it, due
    to the way the language works). And while I may sometimes choose a
    more verbose way of expressing things if I think that it helps
    clarity, Cobol verbosity is essentially worthless: Cobol forces you
    to write more code for frequently used constructs that could be
    written more concisely in other languages, and, importantly, the
    more concise way does not cause any confusion.

    Defining a function needs several lines of
    overhead code. Function calls are more verbose than in other
    languages. There are fixable problems, which however may
    appear when dealing with real Cobol code. In particular
    Cobol supports old control structures. In a new program you
    can use new control structures, but converting uses of old
    control structures to new ones needs effort, and it is likely
    that a bit more effort would be enough to convert the whole
    program to a different language.

    I apologize in advance, but that is idiotic. Any re-write of any non-trivial
    application in another language will never be complete. There will be errors
    and things will be lost. IT WILL HAPPEN !!! And when done, what will be
    the gains in a sideways move?

    The point is that a change to new control structures is not unlike a
    change to another language. If done by hand on a non-trivial
    application it may lead to errors. Concerning your "never",
    successful conversions to a different language did happen. A sensible
    conversion is likely to be a semiautomatic process: doing changes by
    hand is too labor intensive and risks errors, a fully automatic
    process may be impossible, but a combination may work. Concerning
    gains, the goal is easier maintenance. If a program works fine and
    there is little demand for modifications/additions, then the cost and
    risk of changes is likely to dominate and the program will be kept as
    is, meaning that ongoing maintenance will have to deal with
    old style code and related problems.

    The usual trap is that people used to old ways, especially ones using
    a single language, have trouble programming well in newer languages.
    And new people prefer writing new code to working with old code.
    Which leads to rewrites losing features/solutions present in
    old code, cost overruns when the people doing the rewrite realize
    that the problem is harder than they expected, etc.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Sun Nov 30 06:33:06 2025
    From Newsgroup: comp.os.vms

    On Sun, 30 Nov 2025 05:44:11 -0000 (UTC), Waldek Hebisch wrote:

    Sometimes you may be able to do your data processing as in 1960. But
    it is very unlikely to be a good way now.

    I was watching this mini-doco on the history of MUMPS <https://www.youtube.com/watch?v=7g1K-tLEATw>. That achieved quite a
    bit of its success from being more productive than COBOL.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Sun Nov 30 14:21:39 2025
    From Newsgroup: comp.os.vms

    On 11/30/2025 1:33 AM, Lawrence D'Oliveiro wrote:
    On Sun, 30 Nov 2025 05:44:11 -0000 (UTC), Waldek Hebisch wrote:
    Sometimes you may be able to do your data processing as in 1960. But
    it is very unlikely to be a good way now.

    I was watching this mini-doco on the history of MUMPS <https://www.youtube.com/watch?v=7g1K-tLEATw>. That achieved quite a
    bit of its success from being more productive than COBOL.

    Or DIBOL (Synergy DBL today) or various Digital Basic (including
    VMS Basic).

    I would deem MUMPS obsolete as well today even though it is still
    used in healthcare and a little bit in finance.

    Note though that VMS support is being dropped.

    2017.1 was last version of Intersystems Cache to support VMS.

    6.2 was last version of GT.M to support VMS.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Sun Nov 30 14:23:04 2025
    From Newsgroup: comp.os.vms

    On 11/30/2025 2:21 PM, Arne Vajhøj wrote:
    Note though that VMS support is being dropped.

    2017.1 was last version of Intersystems Cache to support VMS.

    VMS Alpha and VMS Itanium

    6.2 was last version of GT.M to support VMS.

    VMS Alpha.

    Note: freely available for download from SF!!

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Sun Nov 30 16:04:49 2025
    From Newsgroup: comp.os.vms

    On 11/30/2025 2:21 PM, Arne Vajhøj wrote:
    I would deem MUMPS obsolete as well today even though it is still
    used in healthcare and a little bit in finance.

    This is sort of OK:

    test()
     for i=1:1:3 do
     . write "Hi from Mumps!",!
     quit

    But if we abbreviate commands, as Mumps allows, then it becomes
    practically unreadable for those not knowing Mumps:

    test2()
     f i=1:1:3 d
     . w "Hi from Mumps!",!
     q

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Sun Nov 30 16:09:46 2025
    From Newsgroup: comp.os.vms

    On 11/30/2025 4:04 PM, Arne Vajhøj wrote:
    On 11/30/2025 2:21 PM, Arne Vajhøj wrote:
    I would deem MUMPS obsolete as well today even though it is still
    used in healthcare and a little bit in finance.

    This is sort of OK:

    test()
     for i=1:1:3 do
     . write "Hi from Mumps!",!
     quit

    But if we abbreviate commands, as Mumps allows, then it becomes
    practically unreadable for those not knowing Mumps:

    test2()
     f i=1:1:3 d
     . w "Hi from Mumps!",!
     q

    The selling point is the automatic persistence of global
    variables:

    $ type globinit.m
    globinit()
     set ^v=0
     set ^m("xyz")="ABC"
     quit
    $ mumps globinit
    $ link globinit
    $ run globinit
    $ type glob.m
    glob()
     set ^v=^v+1
     write ^v,!
     set ^m("xyz")=^m("xyz")_"."
     write ^m("xyz"),!
     quit
    $ mumps glob
    $ link glob
    $ run glob
    1
    ABC.

    $ run glob
    2
    ABC..

    $ run glob
    3
    ABC...
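
    For those without an Alpha at hand, a rough analogy in Python using
    the standard shelve module (an analogy only - MUMPS globals are
    hierarchical and shared between processes, a shelf is neither):

    import shelve

    # Like glob.m: the state survives between runs because it is on disk.
    with shelve.open("glob") as g:
        g["v"] = g.get("v", 0) + 1
        g["m_xyz"] = g.get("m_xyz", "ABC") + "."
        print(g["v"])
        print(g["m_xyz"])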

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Sun Nov 30 16:11:05 2025
    From Newsgroup: comp.os.vms

    On 11/30/2025 4:09 PM, Arne Vajhøj wrote:
    The selling point is the automatic persistence of global
    variables:

    If someone wants to try it on their Alpha, then to set up the
    persistence:

    $ del mumps.gld;*
    $ del mumps.dat;*
    $ del data.dat;*
    $ run gtm$dist:gde
    add/name me /region=here
    add/region here /dynamic=data
    add/segment data /file=data
    exit
    $ run gtm$dist:mupip
    create
    $

    (it took me a little time to figure that out)

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris Townley@news@cct-net.co.uk to comp.os.vms on Sun Nov 30 21:49:20 2025
    From Newsgroup: comp.os.vms

    On 30/11/2025 21:09, Arne Vajhøj wrote:
    On 11/30/2025 4:04 PM, Arne Vajhøj wrote:

    < snip >
    The selling point is the automatic persistence of global
    variables:

    < snip >


    Arne


    Why would you want that?
    --
    Chris
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.os.vms on Sun Nov 30 22:18:07 2025
    From Newsgroup: comp.os.vms

    Chris Townley <news@cct-net.co.uk> wrote:
    On 30/11/2025 21:09, Arne Vajhøj wrote:
    On 11/30/2025 4:04 PM, Arne Vajhøj wrote:

    < snip >
    The selling point is the automatic persistence of global
    variables:

    < snip >


    Arne


    Why would you want that?

    Think database. MUMPS globals really are a non-relational database.
    A non-persistent database would be of limited use.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Sun Nov 30 17:44:29 2025
    From Newsgroup: comp.os.vms

    On 11/30/2025 4:49 PM, Chris Townley wrote:
    On 30/11/2025 21:09, Arne Vajhøj wrote:
    On 11/30/2025 4:04 PM, Arne Vajhøj wrote:
    The selling point is the automatic persistence of global
    variables:

    Why would you want that?

    It is pretty smart for persistence - instead of various
    API calls doing SQL or ORM or whatever, you just let
    the variable name start with ^ and the system handles
    both retrieving and storing the data.

    But I guess you wonder about calling them global variables.

    Well - this is before my time, so this is just guessing.

    When MUMPS was invented, RAM was very expensive. According
    to Wikipedia, MUMPS first ran on a PDP-7 and PDP-9. And
    a PDP-9 came standard with 8192 words of RAM. Yuck.

    So having one big program with data in global variables in
    RAM and calling a bunch of subroutines to work on the data
    may not have fit into RAM.

    Instead having the global variables on disk and having
    multiple small standalone programs working on the data
    may fit better.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Mon Dec 1 13:37:27 2025
    From Newsgroup: comp.os.vms

    In article <10gg48s$3srom$1@dont-email.me>,
    Dave Froble <davef@tsoft-inc.com> wrote:
    On 11/11/2025 10:23 AM, Waldek Hebisch wrote:
    bill <bill.gunshannon@gmail.com> wrote:
    On 11/10/2025 9:12 AM, Simon Clubley wrote:


    Question: are they low-risk because they were designed to do one thing
    and to do it very well in extremely demanding environments ?

    Are the replacements higher-risk because they are more of a generic
    infrastructure and the mission critical workloads need to be
    force-fitted into them ?


    And here you finally hit the crux of the matter.
    People wonder why I am still a strong supporter of COBOL.
    The reason is simple. It was a language designed to do
    a particular task and it does it well. Now we have this
    desire to replace it with something generic. I feel this
    is a bad idea.

    Well, Cobol represents practices of 1960 business data
    processing.

    Sometimes things don't really change. You count to 10 the same way now
    as in 1960. (Trivial example)

    At that time it was state of the art.
    But the state of the art changed. Cobol somewhat adapted,
    but it was slow to do so. So your claim of "does it well"
    does not look true, unless by "it" you mean
    "replicating Cobol data processing from the sixties".

    To expand a bit more, Cobol has an essentially unfixable problem
    with verbosity.

    Now this is opinion, and really a poor argument. While I detest the verbosity
    in most things, that is my choice, not the problem you claim.

    Defining a function needs several lines of
    overhead code. Function calls are more verbose than in other
    languages. There are fixable problems, which however may
    appear when dealing with real Cobol code. In particular
    Cobol supports old control structures. In a new program you
    can use new control structures, but converting uses of old
    control structures to new ones needs effort, and it is likely
    that a bit more effort would be enough to convert the whole
    program to a different language.

    I apologize in advance, but that is idiotic. Any re-write of any
    non-trivial application in another language will never be complete.
    There will be errors and things will be lost. IT WILL HAPPEN !!!
    And when done, what will be the gains in a sideways move?

    I got the impression Waldek was referring to updating programs
    written to old versions of COBOL to use facilities introduced in
    newer versions of COBOL, though perhaps I am mistaken.

    Regardless, this raises an interesting point: the latest version
    of COBOL is, I believe, COBOL 2023. But that language is rather
    different than the original 1960 COBOL. So even simply updating
    a COBOL program is akin to rewriting it in another language.

    I've long suspected (but I admit I have no evidence to support
    this) that one of the reasons there is so much COBOL code in the
    world is because, when making non-trivial changes, programmers
    first _copy_ large sections of the program and then modify the
    copy, to avoid introducing bugs into existing functionality.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Simon Clubley@clubley@remove_me.eisner.decus.org-Earth.UFP to comp.os.vms on Mon Dec 1 13:49:58 2025
    From Newsgroup: comp.os.vms

    On 2025-11-29, Dave Froble <davef@tsoft-inc.com> wrote:

    Sometimes things don't really change. You count to 10 the same way now as in
    1960. (Trivial example)


    Are you sure ? I thought maths teaching was heading in a new direction
    in multiple parts of your country as shown by this example (which is way
    too close to actually being realistic, especially with the "support" infrastructure from the people around the teacher):

    https://www.youtube.com/watch?v=Zh3Yz3PiXZw


    Now this is opinion, and really a poor argument. While I detest the verbosity
    in most things, that is my choice, not the problem you claim.


    Back on topic, COBOL is very verbose, but I also hate way too concise
    languages where the language designers don't even allow words like
    "function" to be spelt out in full. You read code many more times than
    you write it and having cryptic syntax makes that a lot harder to achieve.

    Something like Ada was designed for readability, and I wish all other
    languages followed that example.

    Just waiting for the moment when a newcomer designs a new language which
    has syntax resembling TECO... :-)

    Simon.
    --
    Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
    Walking destinations on a map are further away than they appear.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Mon Dec 1 18:59:13 2025
    From Newsgroup: comp.os.vms

    In article <10gk6e6$1bcst$3@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    On 2025-11-29, Dave Froble <davef@tsoft-inc.com> wrote:

    Sometimes things don't really change. You count to 10 the same way now as in
    1960. (Trivial example)

    Are you sure ? I thought maths teaching was heading in a new direction
    in multiple parts of your country as shown by this example (which is way
    too close to actually being realistic, especially with the "support"
    infrastructure from the people around the teacher):

    https://www.youtube.com/watch?v=Zh3Yz3PiXZw

    You know, Simon, I recall you posting that you were against the
    opposition candidate in our last election because you disliked
    her laugh. In your own country, Nigel Farage and his party seem disconcertingly close to power, with their ill-advised "Empire
    2.0" aspirations; might I remind you that most of the former
    members of Empire 1.0 are still trying to recover?

    It's very easy to throw stones, but not terribly advisable when
    you yourself are in a glass house.

    At least stop adding these things as parentheticals onto posts
    that _also_ carry technical content.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Simon Clubley@clubley@remove_me.eisner.decus.org-Earth.UFP to comp.os.vms on Mon Dec 1 19:58:22 2025
    From Newsgroup: comp.os.vms

    On 2025-12-01, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    In article <10gk6e6$1bcst$3@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    On 2025-11-29, Dave Froble <davef@tsoft-inc.com> wrote:

    Sometimes things don't really change. You count to 10 the same way now as in
    1960. (Trivial example)

    Are you sure ? I thought maths teaching was heading in a new direction
    in multiple parts of your country as shown by this example (which is
    way too close to actually being realistic, especially with the
    "support" infrastructure from the people around the teacher):

    https://www.youtube.com/watch?v=Zh3Yz3PiXZw

    You know, Simon, I recall you posting that you were against the
    opposition candidate in our last election because you disliked
    her laugh. In your own country, Nigel Farage and his party seem disconcertingly close to power, with their ill-advised "Empire
    2.0" aspirations; might I remind you that most of the former
    members of Empire 1.0 are still trying to recover?


    I was trying to refer, in a good-natured way, to the following
    articles discussing the maths problem in the US, which I became aware
    of recently, and which drove me to post the above.

    https://www.theatlantic.com/ideas/2025/11/math-decline-ucsd/684973/
    https://www.latimes.com/california/story/2025-09-23/low-tests-scores-show-math-crisis-began-a-decade-ago-and-worsened
    https://apnews.com/article/math-scores-china-security-b60b740c480270d552d750c15ed287b6

    How on earth can someone not know how to divide a fraction by two ?

    Oh, and the laugh was only a part of it. It was her inability to
    act in a way expected of a US president. I believe the phrase I used
    at the time was a lack of gravitas, plus her inability to conduct
    serious interviews without collapsing into word salad.

    It appears some people are beginning to see through Reform, and we also
    have the first past the post system. I am hoping that's enough to stop
    him from gaining a majority, but our traditional parties (all of them)
    need to _seriously_ up their game.

    It's very easy to throw stones, but not terribly advisable when
    you yourself are in a glass house.

    At least stop adding these things as parentheticals onto posts
    that _also_ carry technical content.


    Did you read the rest of the posting Dan ?

    Simon.
    --
    Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
    Walking destinations on a map are further away than they appear.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Mon Dec 1 16:02:12 2025
    From Newsgroup: comp.os.vms

    On 12/1/2025 8:37 AM, Dan Cross wrote:
    In article <10gg48s$3srom$1@dont-email.me>,
    Dave Froble <davef@tsoft-inc.com> wrote:
    On 11/11/2025 10:23 AM, Waldek Hebisch wrote:
    Defining a function needs several lines of
    overhead code. Function calls are more verbose than in other
    languages. There are fixable problems, which however may
    appear when dealing with real Cobol code. In particular
    Cobol supports old control structures. In a new program you
    can use new control structures, but converting uses of old
    control structures to new ones needs effort, and it is likely
    that a bit more effort would be enough to convert the whole
    program to a different language.

    I apologize in advance, but that is idiotic. Any re-write of any non-trivial
    application in another language will never be complete. There will be errors
    and things will be lost. IT WILL HAPPEN !!! And when done, what will be
    the gains in a sideways move?

    I got the impression Waldek was referring to updating programs
    written to old versions of COBOL to use facilities introduced in
    newer versions of COBOL, though perhaps I am mistaken.

    Regardless, this raises an interesting point: the latest version
    of COBOL is, I believe, COBOL 2023. But that language is rather
    different than the original 1960 COBOL. So even simply updating
    a COBOL program is akin to rewriting it in another language.

    The Cobol standard has been continuously updated over
    the decades. But very few are using the new stuff added
    in the last 25 years.

    For good reasons.

    Let us say that a company:
    * has a big Cobol application
    * wants to add a significant chunk of new functionality
    * that new functionality could be implemented using
    features from recent versions of the Cobol standard

    Options:
    A) implement it in Cobol using features from recent
    versions of Cobol standard and have the team learn
    the new stuff
    B) implement it in old style Cobol, because that is what
    the team knows
    C) implement it in some other language where the functionality is
    common and call it from Cobol
    D) implement it in some other language where the functionality is
    common and put it in a separate service in middleware tier and
    keep the old Cobol application untouched
    E) say NO - can't do it

    Few will choose #A. #B, #C and #D are simply more attractive.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Mon Dec 1 21:23:10 2025
    From Newsgroup: comp.os.vms

    In article <10gk6e6$1bcst$3@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    Now this is opinion, and really a poor argument. While I detest the verbosity
    in most things, that is my choice, not the problem you claim.

    Back on topic, COBOL is very verbose, but I also hate way too concise
    languages where the language designers don't even allow words like
    "function" to be spelt out in full. You read code many more times than
    you write it and having cryptic syntax makes that a lot harder to achieve.

    Excessive verbosity can be a hindrance to readability, but
    finding a balance with concision is more art than science. I
    don't feel the need to spell out "function" when there's an
    acceptable abbreviation that means the same thing ("fn"/"fun"/
    etc). That said, a lot of early Unix code that omitted vowels
    for brevity was utterly abstruse.

    Something like Ada was designed for readability, and I wish all other
    languages followed that example.

    Unfortunately, what's considered "readable" is both subjective
    and depends on the audience. Personally, I don't find Ada more
    readable because it forces me to write `function` instead
    of `fn` or `procedure` instead of `proc`. If anything, I find
    the split between two types of subprograms less readable, no
    matter how it's presented syntactically. Similarly, I don't find
    the use of `begin` and `end` keywords more readable than `{` and
    `}`, or similar lexical glyphs. I understand that others feel
    differently.

    If anything, I find it less readable since it is less visually
    distinct (perhaps, if my eyesight were even worse than it
    already is, I would feel differently).

    Just waiting for the moment when a newcomer designs a new language which
    has syntax resembling TECO... :-)

    Or APL.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Mon Dec 1 21:26:05 2025
    From Newsgroup: comp.os.vms

    In article <10gkvoj$1me0l$2@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/1/2025 8:37 AM, Dan Cross wrote:
    In article <10gg48s$3srom$1@dont-email.me>,
    Dave Froble <davef@tsoft-inc.com> wrote:
    On 11/11/2025 10:23 AM, Waldek Hebisch wrote:
    Defining a function needs several lines of
    overhead code. Function calls are more verbose than in other
    languages. There are fixable problems, which however may
    appear when dealing with real Cobol code. In particular
    Cobol supports old control structures. In a new program you
    can use new control structures, but converting uses of old
    control structures to new ones needs effort, and it is likely
    that a bit more effort would be enough to convert the whole
    program to a different language.

    I apologize in advance, but that is idiotic. Any re-write of any non-trivial
    application in another language will never be complete. There will be errors
    and things will be lost. IT WILL HAPPEN !!! And when done, what will be
    the gains in a sideways move?

    I got the impression Waldek was referring to updating programs
    written to old versions of COBOL to use facilities introduced in
    newer versions of COBOL, though perhaps I am mistaken.

    Regardless, this raises an interesting point: the latest version
    of COBOL is, I believe, COBOL 2023. But that language is rather
    different than the original 1960 COBOL. So even simply updating
    a COBOL program is akin to rewriting it in another language.

    The Cobol standard has been continuously updated over
    the decades. But very few are using the new stuff added
    in the last 25 years.

    For good reasons.

    Let us say that a company:
    * has a big Cobol application
    * wants to add a significant chunk of new functionality
    * that new functionality could be implemented using
    features from recent versions of the Cobol standard

    Options:
    A) implement it in Cobol using features from recent
    versions of Cobol standard and have the team learn
    the new stuff
    B) implement it in old style Cobol, because that is what
    the team knows
    C) implement it in some other language where the functionality is
    common and call it from Cobol
    D) implement it in some other language where the functionality is
    common and put it in a separate service in middleware tier and
    keep the old Cobol application untouched
    E) say NO - can't do it

    Few will choose #A. #B, #C and #D are simply more attractive.

    Yup. This is the thing that few COBOL fans seem to admit (hi,
    Bill): they like to point out that most of the complaints about
    COBOL are about very old versions of the language, and that most
    of them have been addressed in recent revisions. Ok, fair point
    maybe, but irrelevant if the code base one is working in has not
    been modernized to take advantage of those new facilities.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bill@bill.gunshannon@gmail.com to comp.os.vms on Mon Dec 1 17:46:55 2025
    From Newsgroup: comp.os.vms

    On 12/1/2025 4:02 PM, Arne Vajhøj wrote:
    On 12/1/2025 8:37 AM, Dan Cross wrote:
    In article <10gg48s$3srom$1@dont-email.me>,
    Dave Froble <davef@tsoft-inc.com> wrote:
    On 11/11/2025 10:23 AM, Waldek Hebisch wrote:
    Defining a function needs several lines of
    overhead code. Function calls are more verbose than in other
    languages. There are fixable problems, which however may
    appear when dealing with real Cobol code. In particular
    Cobol supports old control structures. In a new program you
    can use new control structures, but converting uses of old
    control structures to new ones needs effort, and it is likely
    that a bit more effort would be enough to convert the whole
    program to a different language.

    I apologize in advance, but that is idiotic. Any re-write of any
    non-trivial application in another language will never be complete.
    There will be errors and things will be lost. IT WILL HAPPEN !!!
    And when done, what will be the gains in a sideways move?

    I got the impression Waldek was referring to updating programs
    written to old versions of COBOL to use facilities introduced in
    newer versions of COBOL, though perhaps I am mistaken.

    Regardless, this raises an interesting point: the latest version
    of COBOL is, I believe, COBOL 2023. But that language is rather
    different than the original 1960 COBOL. So even simply updating
    a COBOL program is akin to rewriting it in another language.

    The Cobol standard has been continuously updated over
    the decades. But very few are using the new stuff added
    in the last 25 years.

    For good reasons.

    Let us say that a company:
    * has a big Cobol application
    * wants to add a significant chunk of new functionality
    * that new functionality could be implemented using
      features from recent versions of the Cobol standard

    Options:
    A) implement it in Cobol using features from recent
       versions of Cobol standard and have the team learn
       the new stuff
    B) implement it in old style Cobol, because that is what
       the team knows
    C) implement it in some other language where the functionality is
       common and call it from Cobol
    D) implement it in some other language where the functionality is
       common and put it in a separate service in middleware tier and
       keep the old Cobol application untouched
    E) say NO - can't do it

    Few will choose #A. #B, #C and #D are simply more attractive.

    Not really true. The only thing COBOL professionals have, for
    the most part, refused to use is the OOP stuff. Some of the
    other changes that are within the COBOL model were very welcome
    additions. Like EVALUATE. Got rid of a lot of multiple page
    IF-THEN-ELSE monstrosities.
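
    For non-Cobol readers: EVALUATE is a multi-way branch, roughly what
    match/case (Python 3.10+) gives Python. An analogy only; the function
    and the cases here are made up:

    def shipping_cost(region, weight):
        # One multi-way branch instead of nested if/else - the same
        # readability win EVALUATE brought over nested IFs.
        match region:
            case "domestic":
                return 5 if weight < 1.0 else 10
            case "europe":
                return 15
            case _:
                return 30

    print(shipping_cost("europe", 2.5))   # 15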

    bill


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bill@bill.gunshannon@gmail.com to comp.os.vms on Mon Dec 1 17:50:08 2025
    From Newsgroup: comp.os.vms

    On 12/1/2025 4:23 PM, Dan Cross wrote:
    In article <10gk6e6$1bcst$3@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    Now this is opinion, and really a poor argument. While I detest the verbosity
    in most things, that is my choice, not the problem you claim.

    Back on topic, COBOL is very verbose, but I also hate way too concise
    languages where the language designers don't even allow words like
    "function" to be spelt out in full. You read code many more times than
    you write it and having cryptic syntax makes that a lot harder to achieve.

    Excessive verbosity can be a hindrance to readability, but
    finding a balance with concision is more art than science. I
    don't feel the need to spell out "function" when there's an
    acceptable abbreviation that means the same thing ("fn"/"fun"/
    etc). That said, a lot of early Unix code that omitted vowels
    for brevity was utterly abstruse.

    Something like Ada was designed for readability, and I wish all other
    languages followed that example.

    Unfortunately, what's considered "readable" is both subjective
    and depends on the audience. Personally, I don't find Ada more
    readable because it forces me to write `function` instead
    of `fn` or `procedure` instead of `proc`. If anything, I find
    the split between two types of subprograms less readable, no
    matter how it's presented syntactically. Similarly, I don't find
    the use of `begin` and `end` keywords more readable than `{` and
    `}`, or similar lexical glyphs. I understand that others feel
    differently.

    If anything, I find it less readable since it is less visually
    distinct (perhaps, if my eyesight were even worse than it
    already is, I would feel differently).

    Just waiting for the moment when a newcomer designs a new language which
    has syntax resembling TECO... :-)

    Or APL.

    Nothing wrong with APL, if the task is within the language's domain.
    But then, I am one of the last advocates for domain-specific rather
    than generic languages.

    bill


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Mon Dec 1 18:50:57 2025
    From Newsgroup: comp.os.vms

    On 12/1/2025 5:46 PM, bill wrote:
    On 12/1/2025 4:02 PM, Arne Vajhøj wrote:
    On 12/1/2025 8:37 AM, Dan Cross wrote:
    I got the impression Waldek was referring to updating programs
    written to old versions of COBOL to use facilities introduced in
    newer versions of COBOL, though perhaps I am mistaken.

    Regardless, this raises an interesting point: the latest version
    of COBOL is, I believe, COBOL 2023. But that language is rather
    different than the original 1960 COBOL. So even simply updating
    a COBOL program is akin to rewriting it in another language.

    The Cobol standard has been continuously updated over
    the decades. But very few are using the new stuff added
    in the last 25 years.

    Not really true. The only thing COBOL professionals have, for
    the most part, refused to use is the OOP stuff. Some of the
    other changes that are within the COBOL model were very welcome
    additions. Like EVALUATE. Got rid of a lot of multiple page
    IF-THEN-ELSE monstrosities.

    EVALUATE came with COBOL 85. That is not within the
    last 25 years.

    New features within the last 25 years besides OOP include:
    * recursion support
    * unicode support
    * pointers and dynamic memory allocation
    * XML support
    * collection classes

    Have you seen COBOL code using those?

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Mon Dec 1 20:06:29 2025
    From Newsgroup: comp.os.vms

    On 12/1/2025 8:37 AM, Dan Cross wrote:
    I've long suspected (but I admit I have no evidence to support
    this) that one of the reasons there is so much COBOL code in the
    world is because, when making non-trivial changes, programmers
    first _copy_ large sections of the program and then modify the
    copy, to avoid introducing bugs into existing functionality.

    Copying and modifying code instead of creating reusable libraries
    has been used by bad programmers in all languages.

    But last century Cobol and Basic were the two easiest
    languages to learn, and Cobol was one of the languages with
    the most jobs. So it seems likely that a large number of bad
    programmers picked Cobol, bringing bad habits with them.

    Today I would expect that crowd to pick client side JavaScript
    and server side PHP.

    There is also something in the Cobol language.

    Large files with one data division, lots of paragraphs
    and lots of perform's are easy to code, but they are also
    bad for reusable code.

    It is sort of the same as having large C or Pascal files
    with all variables global and all functions/procedures
    without arguments.

    It is possible to do it right, but when people have
    to choose between the easy way and the right way, then ...

    Arne




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Mon Dec 1 20:14:15 2025
    From Newsgroup: comp.os.vms

    On 12/1/2025 8:06 PM, Arne Vajhøj wrote:
    There is also something in the Cobol language.

    Large files with one data division, lots of paragraphs
    and lots of perform's are easy to code, but they are also
    bad for reusable code.

    It is sort of the same as having large C or Pascal files
    with all variables global and all functions/procedures
    without arguments.

    It is possible to do it right, but when people have
    to choose between the easy way and the right way, then ...

    And a long post to illustrate.

    $ type m1.cob
    identification division.
    program-id.m1.
    *
    data division.
    working-storage section.
    01 ia.
       03 ia-elm pic 9(8) comp occurs 5 times.
    01 bia.
       03 bia-elm pic 9(8) comp occurs 7 times.
    01 xa.
       03 xa-elm comp-2 occurs 5 times.
    01 i pic 9(8) comp.
    01 j pic 9(8) comp.
    01 startj pic 9(8) comp.
    01 temp-ia-elm pic 9(8) display.
    01 temp-bia-elm pic 9(8) display.
    01 temp-xa-elm pic 9(8)v9(2) display.
    01 temp-i pic 9(8) display.
    01 temp-ia pic 9(8) comp.
    01 temp-bia pic 9(8) comp.
    01 temp-xa comp-2.
    *
    procedure division.
    main-paragraph.
        move 3 to ia-elm(1)
        move 5 to ia-elm(2)
        move 7 to ia-elm(3)
        move 6 to ia-elm(4)
        move 4 to ia-elm(5)
        display "Before:"
        perform print-ia
        perform sort-ia
        display "After:"
        perform print-ia
        move 3 to bia-elm(1)
        move 5 to bia-elm(2)
        move 7 to bia-elm(3)
        move 6 to bia-elm(4)
        move 4 to bia-elm(5)
        move 2 to bia-elm(6)
        move 8 to bia-elm(7)
        display "Before:"
        perform print-bia
        perform sort-bia
        display "After:"
        perform print-bia
        move 3.3 to xa-elm(1)
        move 5.5 to xa-elm(2)
        move 7.7 to xa-elm(3)
        move 6.6 to xa-elm(4)
        move 4.4 to xa-elm(5)
        display "Before:"
        perform print-xa
        perform sort-xa
        display "After:"
        perform print-xa
        stop run.
    sort-ia.
        perform varying i from 1 by 1 until i >= 5
            compute startj = i + 1
            perform varying j from startj by 1 until j > 5
                if ia-elm(j) < ia-elm(i) then
                    move ia-elm(j) to temp-ia
                    move ia-elm(i) to ia-elm(j)
                    move temp-ia to ia-elm(i)
                end-if
            end-perform
        end-perform.
    print-ia.
        perform varying i from 1 by 1 until i > 5
            move i to temp-i
            move ia-elm(i) to temp-ia-elm
            display temp-i " : " temp-ia-elm
        end-perform.
    sort-bia.
        perform varying i from 1 by 1 until i >= 7
            compute startj = i + 1
            perform varying j from startj by 1 until j > 7
                if bia-elm(j) < bia-elm(i) then
                    move bia-elm(j) to temp-bia
                    move bia-elm(i) to bia-elm(j)
                    move temp-bia to bia-elm(i)
                end-if
            end-perform
        end-perform.
    print-bia.
        perform varying i from 1 by 1 until i > 7
            move i to temp-i
            move bia-elm(i) to temp-bia-elm
            display temp-i " : " temp-bia-elm
        end-perform.
    sort-xa.
        perform varying i from 1 by 1 until i >= 5
            compute startj = i + 1
            perform varying j from startj by 1 until j > 5
                if xa-elm(j) < xa-elm(i) then
                    move xa-elm(j) to temp-xa
                    move xa-elm(i) to xa-elm(j)
                    move temp-xa to xa-elm(i)
                end-if
            end-perform
        end-perform.
    print-xa.
        perform varying i from 1 by 1 until i > 5
            move i to temp-i
            move xa-elm(i) to temp-xa-elm
            display temp-i " : " temp-xa-elm
        end-perform.
    $ cob M1
    $ link M1
    $ run M1
    Before:
    00000001 : 00000003
    00000002 : 00000005
    00000003 : 00000007
    00000004 : 00000006
    00000005 : 00000004
    After:
    00000001 : 00000003
    00000002 : 00000004
    00000003 : 00000005
    00000004 : 00000006
    00000005 : 00000007
    Before:
    00000001 : 00000003
    00000002 : 00000005
    00000003 : 00000007
    00000004 : 00000006
    00000005 : 00000004
    00000006 : 00000002
    00000007 : 00000008
    After:
    00000001 : 00000002
    00000002 : 00000003
    00000003 : 00000004
    00000004 : 00000005
    00000005 : 00000006
    00000006 : 00000007
    00000007 : 00000008
    Before:
    00000001 : 0000000330
    00000002 : 0000000550
    00000003 : 0000000770
    00000004 : 0000000660
    00000005 : 0000000440
    After:
    00000001 : 0000000330
    00000002 : 0000000440
    00000003 : 0000000550
    00000004 : 0000000660
    00000005 : 0000000770
    $ type lib2.cob
    identification division.
    program-id. sort-i.

    data division.
    working-storage section.
    01 i pic 9(8) comp.
    01 j pic 9(8) comp.
    01 startj pic 9(8) comp.
    01 temp-ia pic 9(8) comp.
    linkage section.
    01 n-ia pic 9(8) comp.
    01 ia.
       03 ia-elm pic 9(8) comp occurs 0 to 1000 times depending on n-ia.

    procedure division using n-ia, ia.
    main-paragraph.
        perform varying i from 1 by 1 until i >= n-ia
            compute startj = i + 1
            perform varying j from startj by 1 until j > n-ia
                if ia-elm(j) < ia-elm(i) then
                    move ia-elm(j) to temp-ia
                    move ia-elm(i) to ia-elm(j)
                    move temp-ia to ia-elm(i)
                end-if
            end-perform
        end-perform.
    end program sort-i.
    ****
    identification division.
    program-id. print-i.

    data division.
    working-storage section.
    01 i pic 9(8) comp.
    01 temp-ia-elm pic 9(8) display.
    01 temp-i pic 9(8) display.
    linkage section.
    01 n-ia pic 9(8) comp.
    01 ia.
       03 ia-elm pic 9(8) comp occurs 0 to 1000 times depending on n-ia.

    procedure division using n-ia, ia.
    main-paragraph.
        perform varying i from 1 by 1 until i > n-ia
            move i to temp-i
            move ia-elm(i) to temp-ia-elm
            display temp-i " : " temp-ia-elm
        end-perform.
    end program print-i.
    ****
    identification division.
    program-id. sort-x.

    data division.
    working-storage section.
    01 i pic 9(8) comp.
    01 j pic 9(8) comp.
    01 startj pic 9(8) comp.
    01 temp-xa comp-2.
    linkage section.
    01 n-xa pic 9(8) comp.
    01 xa.
       03 xa-elm comp-2 occurs 0 to 1000 times depending on n-xa.

    procedure division using n-xa, xa.
    main-paragraph.
        perform varying i from 1 by 1 until i >= n-xa
            compute startj = i + 1
            perform varying j from startj by 1 until j > n-xa
                if xa-elm(j) < xa-elm(i) then
                    move xa-elm(j) to temp-xa
                    move xa-elm(i) to xa-elm(j)
                    move temp-xa to xa-elm(i)
                end-if
            end-perform
        end-perform.
    end program sort-x.
    ****
    identification division.
    program-id. print-x.

    data division.
    working-storage section.
    01 i pic 9(8) comp.
    01 temp-xa-elm pic 9(8)v9(2) display.
    01 temp-i pic 9(8) display.
    linkage section.
    01 n-xa pic 9(8) comp.
    01 xa.
       03 xa-elm comp-2 occurs 0 to 1000 times depending on n-xa.

    procedure division using n-xa, xa.
    main-paragraph.
        perform varying i from 1 by 1 until i > n-xa
            move i to temp-i
            move xa-elm(i) to temp-xa-elm
            display temp-i " : " temp-xa-elm
        end-perform.
    end program print-x.
    $ type m2.cob
    identification division.
    program-id. m2.
    *
    data division.
    working-storage section.
    01 ia.
       03 ia-elm pic 9(8) comp occurs 5 times.
    01 bia.
       03 bia-elm pic 9(8) comp occurs 7 times.
    01 xa.
       03 xa-elm comp-2 occurs 5 times.
    01 n pic 9(8) comp.
    01 i pic 9(8) comp.
    01 j pic 9(8) comp.
    01 startj pic 9(8) comp.
    01 temp-ia-elm pic 9(8) display.
    01 temp-bia-elm pic 9(8) display.
    01 temp-xa-elm pic 9(8)v9(2) display.
    01 temp-i pic 9(8) display.
    01 temp-ia pic 9(8) comp.
    01 temp-bia pic 9(8) comp.
    01 temp-xa comp-2.
    *
    procedure division.
    main-paragraph.
        move 3 to ia-elm(1)
        move 5 to ia-elm(2)
        move 7 to ia-elm(3)
        move 6 to ia-elm(4)
        move 4 to ia-elm(5)
        move 5 to n
        display "Before:"
        call "print-i" using n, ia
        call "sort-i" using n, ia
        display "After:"
        call "print-i" using n, ia
        move 3 to bia-elm(1)
        move 5 to bia-elm(2)
        move 7 to bia-elm(3)
        move 6 to bia-elm(4)
        move 4 to bia-elm(5)
        move 2 to bia-elm(6)
        move 8 to bia-elm(7)
        move 7 to n
        display "Before:"
        call "print-i" using n, bia
        call "sort-i" using n, bia
        display "After:"
        call "print-i" using n, bia
        move 3.3 to xa-elm(1)
        move 5.5 to xa-elm(2)
        move 7.7 to xa-elm(3)
        move 6.6 to xa-elm(4)
        move 4.4 to xa-elm(5)
        move 5 to n
        display "Before:"
        call "print-x" using n, xa
        call "sort-x" using n, xa
        display "After:"
        call "print-x" using n, xa
        stop run.
    $ cob M2
    $ cob LIB2
    $ link M2 + LIB2
    $ run M2
    Before:
    00000001 : 00000003
    00000002 : 00000005
    00000003 : 00000007
    00000004 : 00000006
    00000005 : 00000004
    After:
    00000001 : 00000003
    00000002 : 00000004
    00000003 : 00000005
    00000004 : 00000006
    00000005 : 00000007
    Before:
    00000001 : 00000003
    00000002 : 00000005
    00000003 : 00000007
    00000004 : 00000006
    00000005 : 00000004
    00000006 : 00000002
    00000007 : 00000008
    After:
    00000001 : 00000002
    00000002 : 00000003
    00000003 : 00000004
    00000004 : 00000005
    00000005 : 00000006
    00000006 : 00000007
    00000007 : 00000008
    Before:
    00000001 : 0000000330
    00000002 : 0000000550
    00000003 : 0000000770
    00000004 : 0000000660
    00000005 : 0000000440
    After:
    00000001 : 0000000330
    00000002 : 0000000440
    00000003 : 0000000550
    00000004 : 0000000660
    00000005 : 0000000770
    $ type sort.cob
    identification division.
    program-id. :NAME:.

    data division.
    working-storage section.
    01 i pic 9(8) comp.
    01 j pic 9(8) comp.
    01 startj pic 9(8) comp.
    01 temp-a :T:.
    linkage section.
    01 n-a pic 9(8) comp.
    01 a.
       03 a-elm :T: occurs 0 to 1000 times depending on n-a.

    procedure division using n-a, a.
    main-paragraph.
        perform varying i from 1 by 1 until i >= n-a
            compute startj = i + 1
            perform varying j from startj by 1 until j > n-a
                if a-elm(j) < a-elm(i) then
                    move a-elm(j) to temp-a
                    move a-elm(i) to a-elm(j)
                    move temp-a to a-elm(i)
                end-if
            end-perform
        end-perform.
    end program :NAME:.
    $ type print.cob
    identification division.
    program-id. :NAME:.

    data division.
    working-storage section.
    01 i pic 9(8) comp.
    01 temp-a-elm :PT:.
    01 temp-i pic 9(8) display.
    linkage section.
    01 n-a pic 9(8) comp.
    01 a.
       03 a-elm :T: occurs 0 to 1000 times depending on n-a.

    procedure division using n-a, a.
    main-paragraph.
        perform varying i from 1 by 1 until i > n-a
            move i to temp-i
            move a-elm(i) to temp-a-elm
            display temp-i " : " temp-a-elm
        end-perform.
    end program :NAME:.
    $ type lib3.cob
    copy "sort.cob" replacing ==:NAME:== by ==sort-i== ==:T:== by ==pic 9(8) comp==.
    ****
    copy "print.cob" replacing ==:NAME:== by ==print-i== ==:T:== by ==pic
    9(8) comp== ==:PT:== by ==pic 9(8) display==.
    ****
    copy "sort.cob" replacing ==:NAME:== by ==sort-x== ==:T:== by ==comp-2==.
    ****
    copy "print.cob" replacing ==:NAME:== by ==print-x== ==:T:== by
    ==comp-2== ==:PT:== by ==pic 9(8)v9(2) display==.
    $ type m3.cob
    identification division.
    program-id. m3.
    *
    data division.
    working-storage section.
    01 ia.
       03 ia-elm pic 9(8) comp occurs 5 times.
    01 bia.
       03 bia-elm pic 9(8) comp occurs 7 times.
    01 xa.
       03 xa-elm comp-2 occurs 5 times.
    01 n pic 9(8) comp.
    01 i pic 9(8) comp.
    01 j pic 9(8) comp.
    01 startj pic 9(8) comp.
    01 temp-ia-elm pic 9(8) display.
    01 temp-bia-elm pic 9(8) display.
    01 temp-xa-elm pic 9(8)v9(2) display.
    01 temp-i pic 9(8) display.
    01 temp-ia pic 9(8) comp.
    01 temp-bia pic 9(8) comp.
    01 temp-xa comp-2.
    *
    procedure division.
    main-paragraph.
        move 3 to ia-elm(1)
        move 5 to ia-elm(2)
        move 7 to ia-elm(3)
        move 6 to ia-elm(4)
        move 4 to ia-elm(5)
        move 5 to n
        display "Before:"
        call "print-i" using n, ia
        call "sort-i" using n, ia
        display "After:"
        call "print-i" using n, ia
        move 3 to bia-elm(1)
        move 5 to bia-elm(2)
        move 7 to bia-elm(3)
        move 6 to bia-elm(4)
        move 4 to bia-elm(5)
        move 2 to bia-elm(6)
        move 8 to bia-elm(7)
        move 7 to n
        display "Before:"
        call "print-i" using n, bia
        call "sort-i" using n, bia
        display "After:"
        call "print-i" using n, bia
        move 3.3 to xa-elm(1)
        move 5.5 to xa-elm(2)
        move 7.7 to xa-elm(3)
        move 6.6 to xa-elm(4)
        move 4.4 to xa-elm(5)
        move 5 to n
        display "Before:"
        call "print-x" using n, xa
        call "sort-x" using n, xa
        display "After:"
        call "print-x" using n, xa
        stop run.
    $ cob M3
    $ cob LIB3
    $ link M3 + LIB3
    $ run M3
    Before:
    00000001 : 00000003
    00000002 : 00000005
    00000003 : 00000007
    00000004 : 00000006
    00000005 : 00000004
    After:
    00000001 : 00000003
    00000002 : 00000004
    00000003 : 00000005
    00000004 : 00000006
    00000005 : 00000007
    Before:
    00000001 : 00000003
    00000002 : 00000005
    00000003 : 00000007
    00000004 : 00000006
    00000005 : 00000004
    00000006 : 00000002
    00000007 : 00000008
    After:
    00000001 : 00000002
    00000002 : 00000003
    00000003 : 00000004
    00000004 : 00000005
    00000005 : 00000006
    00000006 : 00000007
    00000007 : 00000008
    Before:
    00000001 : 0000000330
    00000002 : 0000000550
    00000003 : 0000000770
    00000004 : 0000000660
    00000005 : 0000000440
    After:
    00000001 : 0000000330
    00000002 : 0000000440
    00000003 : 0000000550
    00000004 : 0000000660
    00000005 : 0000000770

    Note that I am not good enough in Cobol to know if this is
    all standard Cobol, but it happens to work with VMS Cobol.

    I am somewhat skeptical about whether any Cobol developers use
    the generic style in the '3' example.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bill@bill.gunshannon@gmail.com to comp.os.vms on Mon Dec 1 20:15:59 2025
    From Newsgroup: comp.os.vms

    On 12/1/2025 8:06 PM, Arne Vajhøj wrote:
    On 12/1/2025 8:37 AM, Dan Cross wrote:
    I've long suspected (but I admit I have no evidence to support
    this) that one of the reasons there is so much COBOL code in the
    world is because, when making non-trivial changes, programmers
    first _copy_ large sections of the program and then modify the
    copy, to avoid introducing bugs into existing functionality.

    Copying and modifying code instead of creating reusable libraries
    has been done by bad programmers in all languages.

    But last century Cobol and Basic were the two easiest
    languages to learn, and Cobol was one of the languages with
    the most jobs. So it seems likely that a large number of bad
    programmers picked Cobol, bringing bad habits with them.

    Today I would expect that crowd to pick client-side JavaScript
    and server-side PHP.

    There is also something in the Cobol language.

    Large files with one data division, lots of paragraphs
    and lots of PERFORMs are easy to code, but they are also
    bad for reusable code.

    It is sort of the same as having large C or Pascal files
    with all variables global and all functions/procedures
    without arguments.

    It is possible to do it right, but when people have
    to choose between the easy way and the right way, then ...


    I take it you have never worked in a real COBOL shop.

    bill


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bill@bill.gunshannon@gmail.com to comp.os.vms on Mon Dec 1 20:23:39 2025
    From Newsgroup: comp.os.vms

    On 12/1/2025 6:50 PM, Arne Vajhøj wrote:
    On 12/1/2025 5:46 PM, bill wrote:
    On 12/1/2025 4:02 PM, Arne Vajhøj wrote:
    On 12/1/2025 8:37 AM, Dan Cross wrote:
    I got the impression Waldek was referring to updating programs
    written to old versions of COBOL to use facilities introduced in
    newer versions of COBOL, though perhaps I am mistaken.

    Regardless, this raises an interesting point: the latest version
    of COBOL is, I believe, COBOL 2023. But that language is rather
    different than the original 1960 COBOL. So even simply updating
    a COBOL program is akin to rewriting it in another language.

    The Cobol standard has been continuously updated over
    the decades. But very few are using the new stuff added
    in the last 25 years.

    Not really true. The only thing COBOL professionals have, for
    the most part, refused to use is the OOP stuff. Some of the
    other changes that are within the COBOL model were very welcome
    additions. Like EVALUATE. Got rid of a lot of multiple page
    IF-THEN-ELSE monstrosities.

    EVALUATE came with COBOL 85. That is not within the
    last 25 years.
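
    For reference, EVALUATE is essentially a multi-way case
    statement. A minimal illustrative sketch (made-up names, not
    from anyone's production code) of the kind of nested IF chain
    it replaces:

    identification division.
    program-id. eval-demo.
    data division.
    working-storage section.
    01 grade-code pic x.
    01 grade-points pic 9.9.
    procedure division.
    main-paragraph.
        move "B" to grade-code
    *   one EVALUATE instead of a chain of nested IF/ELSE
        evaluate grade-code
            when "A" move 4.0 to grade-points
            when "B" move 3.0 to grade-points
            when "C" move 2.0 to grade-points
            when other move 0.0 to grade-points
        end-evaluate
        display "points: " grade-points
        stop run.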

    New features within the last 25 years besides OOP include:
    * recursion support
    * unicode support
    * pointers and dynamic memory allocation
    * XML support
    * collection classes

    Have you seen COBOL code using those?

    I have seen and used pointers, but not in production code, as at 75
    I am not finding many places that want me to work. :-)
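
    A minimal sketch of the Cobol 2002 pointer facilities
    (ALLOCATE/FREE plus the BASED clause), assuming a compiler
    that implements them, e.g. GnuCOBOL - illustrative only,
    made-up names:

    identification division.
    program-id. ptr-demo.
    data division.
    working-storage section.
    01 buf-ptr usage pointer.
    * based item: no storage of its own until given an address
    01 buf pic x(16) based.
    procedure division.
    main-paragraph.
    *   take 16 bytes from the heap and address them as buf
        allocate 16 characters returning buf-ptr
        set address of buf to buf-ptr
        move "hello, heap" to buf
        display buf
    *   give the storage back
        free buf-ptr
        stop run.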

    XML isn't really anything to do with the language; it's a file
    format. Probably has no place in the language itself.

    UNICODE the same thing. It could be done fairly easily with a library
    but isn't really anything that COBOL had to have as a part of the
    language.

    Wouldn't classes fall under OOP? Like other long-time COBOL
    programmers, I never saw where that brought anything to help
    the tasks COBOL was intended for. But it is probably great
    for people using the wrong language for a particular job.

    Now recursion! There's something useful. Have to take a look
    at it.
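
    For reference, a minimal sketch of the 2002-style recursion
    support - assuming a compiler that accepts the RECURSIVE
    attribute and LOCAL-STORAGE (e.g. GnuCOBOL; untested with
    VMS Cobol, made-up names):

    identification division.
    program-id. rec-demo.
    data division.
    working-storage section.
    01 n pic 9(9) comp.
    01 result pic 9(9) comp.
    procedure division.
    main-paragraph.
        move 5 to n
        call "fact" using n, result
        display "5! = " result
        stop run.
    end program rec-demo.
    ****
    identification division.
    program-id. fact is recursive.
    data division.
    local-storage section.
    * local-storage: every invocation gets its own copy
    01 n-minus-1 pic 9(9) comp.
    linkage section.
    01 n pic 9(9) comp.
    01 result pic 9(9) comp.
    procedure division using n, result.
    main-paragraph.
        if n <= 1
            move 1 to result
        else
            compute n-minus-1 = n - 1
            call "fact" using n-minus-1, result
            compute result = result * n
        end-if
        goback.
    end program fact.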

    bill


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Mon Dec 1 20:31:27 2025
    From Newsgroup: comp.os.vms

    On 12/1/2025 8:15 PM, bill wrote:
    On 12/1/2025 8:06 PM, Arne Vajhøj wrote:
    But last century Cobol and Basic were the two easiest
    languages to learn, and Cobol was one of the languages with
    the most jobs. So it seems likely that a large number of bad
    programmers picked Cobol, bringing bad habits with them.

    Today I would expect that crowd to pick client-side JavaScript
    and server-side PHP.

    There is also something in the Cobol language.

    Large files with one data division, lots of paragraphs
    and lots of PERFORMs are easy to code, but they are also
    bad for reusable code.

    It is sort of the same as having large C or Pascal files
    with all variables global and all functions/procedures
    without arguments.

    It is possible to do it right, but when people have
    to choose between the easy way and the right way, then ...

    I take it you have never worked in a real COBOL shop.

    That is true.

    I was with the Fortran people, not the Cobol people.

    But that does not change that:
    * back in those days there were some people
      doing Cobol who should not have - this is widely
      known - I believe the not-so-nice name for them
      back then was "list programmers" (I was told about
      that by a Cobol programmer when I took the DEC course
      VMS for Programmers back in the mid 80's)
    * PERFORM of paragraphs is not a good way to
      write reusable code

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Mon Dec 1 20:44:17 2025
    From Newsgroup: comp.os.vms

    On 12/1/2025 8:23 PM, bill wrote:
    On 12/1/2025 6:50 PM, Arne Vajhøj wrote:
    New features within the last 25 years besides OOP include:
    * recursion support
    * unicode support
    * pointers and dynamic memory allocation
    * XML support
    * collection classes

    Have you seen COBOL code using those?

    I have seen and used pointers, but not in production code, as at 75
    I am not finding many places that want me to work. :-)

    XML isn't really anything to do with the language; it's a file
    format. Probably has no place in the language itself.

    They did:

    ISO/IEC TR 24716:2007, Information technology -- Programming languages,
    their environments and system software interfaces -- Native COBOL Syntax
    for XML Support

    I have no idea what it does, so I don't know if it makes any sense.

    UNICODE the same thing. It could be done fairly easily with a library
    but isn't really anything that COBOL had to have as a part of the
    language.

    Good unicode support requires support in both language and
    basic RTL.

    As an example (I am not claiming that it is good support!!) see C++:

    std::string
    std::wstring
    std::u16string
    std::u32string

    "ABC"
    L"ABC"
    u8"ABC"
    u"ABC"
    U"ABC"

    Wouldn't classes fall under OOP?

    Classes are part of the OOP support that was added in Cobol 2002.

    Collection classes were added in:

    ISO/IEC TR 24717:2009, Information technology -- Programming languages,
    their environments and system software interfaces -- Collection classes
    for programming language COBOL

    I have never seen them used and I do not know how they work. But if they
    are like collection classes in most other programming languages, then they
    are predefined container classes for lists, maps/dictionaries, etc.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Mon Dec 1 20:55:33 2025
    From Newsgroup: comp.os.vms

    On 12/1/2025 8:44 PM, Arne Vajhøj wrote:
    On 12/1/2025 8:23 PM, bill wrote:
    Wouldn't classes fall under OOP?

    Classes are part of the OOP support that was added in Cobol 2002.

    Collection classes were added in:

    ISO/IEC TR 24717:2009, Information technology -- Programming languages,
    their environments and system software interfaces -- Collection classes
    for programming language COBOL

    I have never seen them used and I do not know how they work. But if they
    are like collection classes in most other programming languages, then they
    are predefined container classes for lists, maps/dictionaries, etc.

    Refs:

    https://en.cppreference.com/w/cpp/container.html

    https://docs.oracle.com/javase/8/docs/technotes/guides/collections/index.html

    https://learn.microsoft.com/en-us/dotnet/standard/collections/

    https://docs.python.org/3/library/collections.html

    Arne




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bill@bill.gunshannon@gmail.com to comp.os.vms on Mon Dec 1 21:39:01 2025
    From Newsgroup: comp.os.vms

    On 12/1/2025 8:31 PM, Arne Vajhøj wrote:
    On 12/1/2025 8:15 PM, bill wrote:
    On 12/1/2025 8:06 PM, Arne Vajhøj wrote:
    But last century Cobol and Basic were the two easiest
    languages to learn, and Cobol was one of the languages with
    the most jobs. So it seems likely that a large number of bad
    programmers picked Cobol, bringing bad habits with them.

    Today I would expect that crowd to pick client-side JavaScript
    and server-side PHP.

    There is also something in the Cobol language.

    Large files with one data division, lots of paragraphs
    and lots of PERFORMs are easy to code, but they are also
    bad for reusable code.

    It is sort of the same as having large C or Pascal files
    with all variables global and all functions/procedures
    without arguments.

    It is possible to do it right, but when people have
    to choose between the easy way and the right way, then ...

    I take it you have never worked in a real COBOL shop.

    That is true.

    I was with the Fortran people, not the Cobol people.

    I did Fortran, too. Often in the same department (and with the
    same rules) as the COBOL.


    But that does not change that:
    * back in those days

    Exactly what do you consider to be "back in those days"?

    there were some people
      doing Cobol who should not have - this is widely
      known -

    Not widely known in the circles I worked in. If I were not a
    competent COBOL programmer I would have been eliminated.

    I once worked in a government facility. They have strange HR rules.
    We had a woman who was totally incapable of writing a coherent COBOL
    program. Because it was government she could not be fired. So she
    came in every day and read the newspaper because our boss refused to
    let her touch any code.

    The only other bad example I came in contact with was work done by
    contractors who had no idea how to do COBOL and DBMS. I came in
    many years after they had done their damage and fixed it all.
    But that was in more recent times. In COBOL's heyday the programmers
    were a lot better than most programmers I see today.

    I believe the not-so-nice name for them
      back then was "list programmers" (I was told about
      that by a Cobol programmer when I took the DEC course
      VMS for Programmers back in the mid 80's)
    * PERFORM of paragraphs is not a good way to
      write reusable code

    Matter of opinion. In most of the shops I worked we reused a lot
    of code. We had librarians who were responsible for maintaining
    both the executable and source libraries.

    bill


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Mon Dec 1 22:03:08 2025
    From Newsgroup: comp.os.vms

    On 12/1/2025 9:39 PM, bill wrote:
    On 12/1/2025 8:31 PM, Arne Vajhøj wrote:
    But that does not change that:
    * back in those days

    Exactly what do you consider to be "back in those days"?

    Let us say before 1995.

    Then I guess the segment I am thinking of started to switch
    to VB6 and then a few years later ASP and a few years later again
    PHP.

    there were some people
      doing Cobol who should not have - this is widely
      known -

    Not widely known in the circles I worked in. If I were not a
    competent COBOL programmer I would have been eliminated.

    Programming is a skill like any other skill.

    Normal/Gaussian distribution.

    Few very good.
    Some good.
    Lot okay.
    Some not so good.
    Few really bad.

    Some of the latter two categories do end up being hired. By mistake,
    because the company cannot get anyone else, or for some other reason.

    What language did those people pick back then?

    VB6, ASP and PHP were not invented yet.

    They did not have a chance with C++ or Ada. They would never get
    anything to compile. Out of the question.

    Fortran 77 is also an easy language to learn, but it was mostly
    taught in natural science and social science studies. And the people
    I am talking about could not pass the math exam for those.

    C can be a bit more tricky and was also a bit computer science
    and electrical engineering oriented at that time, so math exam
    again.

    They practically had to pick Cobol or Basic.

    More jobs in Cobol so most picked Cobol.

    Not a problem with Cobol or Basic or the people that were
    actually good in those languages.

    Just how things work.

    Today we have them mostly in PHP.

    Not using Laravel and writing OO PHP.

    1000-line-long PHP files with SQL execution and HTML output
    totally mixed up, wrong indentation, etc.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bill@bill.gunshannon@gmail.com to comp.os.vms on Mon Dec 1 22:05:05 2025
    From Newsgroup: comp.os.vms

    On 12/1/2025 8:44 PM, Arne Vajhøj wrote:
    On 12/1/2025 8:23 PM, bill wrote:
    On 12/1/2025 6:50 PM, Arne Vajhøj wrote:
    New features within the last 25 years besides OOP include:
    * recursion support
    * unicode support
    * pointers and dynamic memory allocation
    * XML support
    * collection classes

    Have you seen COBOL code using those?

    I have seen and used pointers, but not in production code, as at 75
    I am not finding many places that want me to work. :-)

    XML isn't really anything to do with the language; it's a file
    format. Probably has no place in the language itself.

    They did:

    ISO/IEC TR 24716:2007, Information technology -- Programming languages,
    their environments and system software interfaces -- Native COBOL Syntax
    for XML Support

    I have no idea what it does, so I don't know if it makes any sense.

    Ivory tower types have always been shoving crap into COBOL.
    Kinda like Ada. Originally an US Air Force idea intended
    to replace Jovial and other odd things used for things like
    flying jets. Then the committee got hold of it. When the
    first release came out the US Air Force refused to use it
    even after the DoD mandate.


    UNICODE the same thing. It could be done fairly easily with a library
    but isn't really anything that COBOL had to have as a part of the
    language.

    Good unicode support requires support in both language and
    basic RTL.

    Don't agree. COBOL was intended to keep track of money, inventory,
    personnel, etc. UNICODE, per se, brings nothing to the table for
    any of that. And, as designed, it did support alternate character
    sets.


    As an example (I am not claiming that it is good support!!) see C++:

    std::string
    std::wstring
    std::u16string
    std::u32string

    "ABC"
    L"ABC"
    u8"ABC"
    u"ABC"
    U"ABC"

    Wouldn't classes fall under OOP?

    Classes are part of the OOP support that was added in Cobol 2002.

    And the COBOL Community refused to drink the Kool-Aid.
    While there may actually be a place for OOP, the work
    COBOL was intended to do isn't it. Academia tried to
    force it down everyone's throats and were outraged
    when some refused. (And took their revenge which is
    being felt more and more every day now!!) I know a
    number of massive ISes in use today that have been in
    use for around a half century that were written in COBOL
    and continue to function in COBOL. Lack of OOP hasn't
    affected them at all.


    Collection classes were added in:

    ISO/IEC TR 24717:2009, Information technology -- Programming languages,
    their environments and system software interfaces -- Collection classes
    for programming language COBOL

    I have never seen them used and I do not know how they work. But if they
    are like collection classes in most other programming languages, then they
    are predefined container classes for lists, maps/dictionaries, etc.

    Which does what for COBOL?

    bill


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Mon Dec 1 22:18:38 2025
    From Newsgroup: comp.os.vms

    On 12/1/2025 10:05 PM, bill wrote:
    On 12/1/2025 8:44 PM, Arne Vajhøj wrote:
    On 12/1/2025 8:23 PM, bill wrote:
    UNICODE the same thing. It could be done fairly easily with a library
    but isn't really anything that COBOL had to have as a part of the
    language.

    Good unicode support requires support in both language and
    basic RTL.

    Don't agree.

    It does - see the C++ examples given.

    COBOL was intended to keep track of money, inventory,
    personnel, etc. UNICODE, per se, brings nothing to the table for
    any of that.

    Whether supporting unicode requires language and basic RTL support
    is not the same question as whether unicode support is desirable.

    But unicode support becomes somewhat desirable when you need to
    support all Latin languages and very desirable when you start
    supporting non-Latin languages.

    Wouldn't classes fall under OOP?

    Classes are part of the OOP support that was added in Cobol 2002.

    And the COBOL Community refused to drink the Kool-Aid.
    While there may actually be a place for OOP, the work
    COBOL was intended to do isn't it. Academia tried to
    force it down everyone's throats and were outraged
    when some refused. (And took their revenge which is
    being felt more and more every day now!!) I know a
    number of massive ISes in use today that have been in
    use for around a half century that were written in COBOL
    and continue to function in COBOL. Lack of OOP hasn't
    affected them at all.

    The Cobol code still does what it did when it was written.

    But don't expect leadership of the orgs to be happy with
    them.

    Cost, speed of development, integration with other systems, etc.

    Collection classes were added in:

    ISO/IEC TR 24717:2009, Information technology -- Programming languages,
    their environments and system software interfaces -- Collection classes
    for programming language COBOL

    I have never seen them used and I do not know how they work. But if they
    are like collection classes in most other programming languages, then they
    are predefined container classes for lists, maps/dictionaries, etc.

    Which does what for COBOL?

    Not sure what they would do in Cobol, if they
    were actually used.

    In other languages they have more or less made arrays
    obsolete.

    :-)

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Tue Dec 2 12:59:39 2025
    From Newsgroup: comp.os.vms

    In article <mp73b5Fjl5lU3@mid.individual.net>,
    bill <bill.gunshannon@gmail.com> wrote:
    On 12/1/2025 8:44 PM, Arne Vajhøj wrote:
    On 12/1/2025 8:23 PM, bill wrote:
    [snip]
    UNICODE the same thing. It could be done fairly easily with a library
    but isn't really anything that COBOL had to have as a part of the
    language.

    Good unicode support requires support in both language and
    basic RTL.

    Don't agree. COBOL was intended to keep track of money, inventory,
    personnel, etc. UNICODE, per se, brings nothing to the table for
    any of that. And, as designed, it did support alternate character
    sets.

    Well, taking personnel as an example, people have names, don't
    they? Not everyone uses the Latin alphabet, and even for those
    that (largely) do, some folks have diacritical marks in their
    name and so forth. It's nice to be able to represent those.
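
    For what it's worth, standard Cobol (2002 on) has a national
    character type aimed at exactly this. A minimal sketch, with
    made-up names, assuming a compiler that implements PIC N and
    national literals (VMS Cobol does not, as far as I know):

    identification division.
    program-id. name-demo.
    data division.
    working-storage section.
    * a national item holds national (e.g. UTF-16) characters
    01 customer-name pic n(30).
    procedure division.
    main-paragraph.
        move n"José García" to customer-name
        display customer-name
        stop run.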

    Classes are part of the OOP support that was added in Cobol 2002.

    And the COBOL Community refused to drink the Kool-Aid.
    While there may actually be a place for OOP, the work
    COBOL was intended to do isn't it. Academia tried to
    force it down everyone's throats and were outraged
    when some refused. (And took their revenge which is
    being felt more and more every day now!!) I know a
    number of massive ISes in use today that have been in
    use for around a half century that were written in COBOL
    and continue to function in COBOL. Lack of OOP hasn't
    affected them at all.

    Maybe, maybe not, but what you are describing is survivorship
    bias. It's entirely possible that _some_ COBOL programs might
    have benefited substantially from employing some OO techniques;
    because they didn't really try, we don't know.

    Consider a summarization report for a payroll run that is
    sourced from data accumulated over the run. This really isn't
    my wheelhouse, but I could well imagine representing the
    accumulated state in an object and doing the actual accumulation
    via method calls on that object; perhaps part of the process of
    accumulation is performing some transformation; that logic could
    be centralized in those methods.

    One doesn't need to do it that way, of course, but as an
    organizational style it's honestly not bad, and would fit very
    well into the types of tasks common in the COBOL world.

    There's a lot of COBOL out there. Maybe someone's tried this
    and decided that it wasn't the best way to go about things. But
    that's qualitatively different than rejecting the idea out of
    hand because it's an academic exercise in self-abuse.

    Collection classes were added in:

    ISO/IEC TR 24717:2009, Information technology -- Programming languages,
    their environments and system software interfaces -- Collection classes
    for programming language COBOL

    I have never seen them used and I do not know how they work. But if they
    are like collection classes in most other programming languages, then they
    are predefined container classes for lists, maps/dictionaries, etc.

    Which does what for COBOL?

    Makes it so that you never have to implement a linked list,
    binary search tree, or hash table ever again.

    I tend to think of COBOL as a DSL for expressing "business
    logic": it's optimized for expressing relatively simple rules
    applied over and over again across a large data set. If that's
    the case, then a COBOL programmer may never need those things.

    But I imagine even COBOL folks like the flexibility of
    data-driven designs, for which such things can be useful for
    building lookup-tables and so forth.
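
    The classic Cobol answer to a lookup table, long before
    collection classes, is a sorted table plus SEARCH ALL, which
    does a binary search on the declared key. A minimal sketch
    with made-up names, illustrative only:

    identification division.
    program-id. lookup-demo.
    data division.
    working-storage section.
    01 state-values.
       03 filler pic x(14) value "CACalifornia".
       03 filler pic x(14) value "NYNew York".
       03 filler pic x(14) value "PAPennsylvania".
    * the same bytes viewed as a table sorted on state-code
    01 state-table redefines state-values.
       03 state-entry occurs 3 times
          ascending key is state-code
          indexed by st-idx.
          05 state-code pic x(2).
          05 state-name pic x(12).
    procedure division.
    main-paragraph.
    *   search all = binary search on the key declared above
        search all state-entry
            at end display "not found"
            when state-code(st-idx) = "PA"
                display state-name(st-idx)
        end-search
        stop run.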

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Tue Dec 2 13:50:52 2025
    From Newsgroup: comp.os.vms

    In article <10gle2k$1q97g$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/1/2025 8:37 AM, Dan Cross wrote:
    I've long suspected (but I admit I have no evidence to support
    this) that one of the reasons there is so much COBOL code in the
    world is because, when making non-trivial changes, programmers
    first _copy_ large sections of the program and then modify the
    copy, to avoid introducing bugs into existing functionality.

    Copying and modifying code instead of creating reusable libraries
    has been done by bad programmers in all languages.

    I think it's a little deeper than that.

    But last century Cobol and Basic were the two easiest
    languages to learn, and Cobol was one of the languages with
    the most jobs. So it seems likely that a large number of bad
    programmers picked Cobol, bringing bad habits with them.

    Today I would expect that crowd to pick client-side JavaScript
    and server-side PHP.

    There is also something in the Cobol language.

    Large files with one data division, lots of paragraphs
    and lots of PERFORMs are easy to code, but they are also
    bad for reusable code.

    It is sort of the same as having large C or Pascal files
    with all variables global and all functions/procedures
    without arguments.

    It is possible to do it right, but when people have
    to choose between the easy way and the right way, then ...

    An issue with COBOL is that, given procedures A, B, ..., Z,
    written sequentially in source, `PERFORM A THRU Z` means that it
    is difficult to see when procedures B, C, ..., Y are called just
    through visual inspection since calls to them are implicit; you
    really need semantically aware tools to do that. So if you need
    to change paragraph D, then you run the risk of implicitly
    changing dependent behavior in your system unintentionally. You
    might end up violating some assumption you didn't even know
    existed; talk about spooky action at a distance.
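
    A minimal sketch of the trap (paragraph names made up,
    illustrative only): nothing in pay-calc below says it is on
    anyone's call path, yet the THRU range executes it:

    identification division.
    program-id. thru-demo.
    procedure division.
    main-paragraph.
        perform pay-init thru pay-post
        stop run.
    pay-init.
        display "init".
    pay-calc.
    * never named in any PERFORM, but executed because it sits
    * between pay-init and pay-post; inserting or reordering
    * paragraphs here silently changes behavior
        display "calc".
    pay-post.
        display "post".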

    Most COBOL programs were written before the era of automated,
    unit-level testing, so it's extremely unlikely you've got a big
    suite of tests you can run to attempt to catch such issues.

    I imagine that this results in a lot of (unnecessary)
    duplication.

    I have written about this many times.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Tue Dec 2 13:56:25 2025
    From Newsgroup: comp.os.vms

    In article <mp6kd3Fjl5nU2@mid.individual.net>,
    bill <bill.gunshannon@gmail.com> wrote:
    On 12/1/2025 4:23 PM, Dan Cross wrote:
    In article <10gk6e6$1bcst$3@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    Now this is opinion, and really a poor argument. While I detest the verbosity
    in most things, that is my choice, not the problem you claim.

    Back on topic, COBOL is very verbose, but I also hate way too concise
    languages where the language designers don't even allow words like
    "function" to be spelt out in full. You read code many more times than
    you write it and having cryptic syntax makes that a lot harder to achieve.
    Excessive verbosity can be a hindrance to readability, but
    finding a balance with concision is more art than science. I
    don't feel the need to spell out "function" when there's an
    acceptable abbreviation that means the same thing ("fn"/"fun"/
    etc). That said, a lot of early Unix code that omitted vowels
    for brevity was utterly abstruse.

    Something like Ada was designed for readability, and I wish all other
    languages followed that example.

    Unfortunately, what's considered "readable" is both subjective
    and depends on the audience. Personally, I don't find Ada more
    readable because it forces me to write `function` instead
    of `fn` or `procedure` instead of `proc`. If anything, I find
    the split between two types of subprograms less readable, no
    matter how it's presented syntactically. Similarly, I don't find
    the use of `begin` and `end` keywords more readable than `{` and
    `}`, or similar lexical glyphs. I understand that others feel
    differently.

    If anything, I find it less readable since it is less visually
    distinct (perhaps, if I my eyesight was even worse than it
    already is, I would feel differently).

    Just waiting for the moment when a newcomer designs a new language which
    has syntax resembling TECO... :-)

    Or APL.

    Nothing wrong with APL, if the task is within the language's domain.

    Kinda ruining the joke, but....

    The language itself is ok. For that matter, as a language TECO
    is ok. It's the syntactic and lexical structure of both that
    are an issue. Re: APL, what does the little wheel thing do
    again?

    But then, I am one of the last advocates for domain specific rather
    than generic languages.

    I don't think that's true. DSLs are more popular than ever.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Tue Dec 2 10:25:14 2025
    From Newsgroup: comp.os.vms

    On 12/2/2025 8:50 AM, Dan Cross wrote:
    In article <10gle2k$1q97g$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/1/2025 8:37 AM, Dan Cross wrote:
    I've long suspected (but I admit I have no evidence to support
    this) that one of the reasons there is so much COBOL code in the
    world is because, when making non-trivial changes, programmers
    first _copy_ large sections of the program and then modify the
    copy, to avoid introducing bugs into existing functionality.

    Copying and modifying code instead of creating reusable libraries
    has been done by bad programmers in all languages.

    I think it's a little deeper than that.

    There is also something in the Cobol language.

    Large files with one data division, lots of paragraphs
    and lots of PERFORMs are easy to code, but they are also
    bad for reusable code.

    It is sort of the same as having large C or Pascal files
    with all variables global and all functions/procedures
    without arguments.

    It is possible to do it right, but when people have
    to choose between the easy way and the right way, then ...

    An issue with COBOL is that, given procedures A, B, ..., Z,
    written sequentially in source, `PERFORM A THRU Z` means that it
    is difficult to see when procedures B, C, ..., Y are called just
    through visual inspection since calls to them are implicit; you
    really need semantically aware tools to do that. So if you need
    to change paragraph D, then you run the risk of implicitly
    changing dependent behavior in your system unintentionally. You
    might end up violating some assumption you didn't even know
    existed; talk about spooky action at a distance.

    That is a classical argument found on the internet.

    But I am not convinced that it is critical.

    It is all within one file.

    $ search foobar.cob thru,through

    should reveal if the feature is used.

    Unless the file is very long and the code is very ugly,
    I believe it should be relatively easy to track the
    PERFORM flow even in VT mode EDT or EVE.

    Most COBOL programs were written before the era of automated,
    unit-level testing, so it's extremely unlikely you've got a big
    suite of tests you can run to attempt to catch such issues.

    I imagine that this results in a lot of (unnecessary)
    duplication.

    That may actually have a huge impact.

    No unit tests is a common reason not to change any existing code.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Tue Dec 2 15:42:08 2025
    From Newsgroup: comp.os.vms

    In article <10gks0t$1kmd0$1@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    [snip]
    How on earth can someone not know how to divide a fraction by two ?

    I think this is discarding all nuance from a complex issue.

    Much of what the Atlantic article described, for instance, is
    due to the lingering fallout from the pandemic: an utterly
    unprecedented event in our lifetimes. Most kids entering
    college now had their education (and much of their social
    development) severely curtailed, due to circumstances affecting
    the entire globe that were completely out of those kids'
    control; or that of their parents, for that matter. To ignore
    all of that and basically declare, "Americans are ignorant" is,
    itself, ignorant.

    Oh, and the laugh was only a part of it. It was her inability to
    act in a way expected of a US president. I believe the phrase I used
    at the time was a lack of gravitas, plus her inability to conduct
    serious interviews without collapsing into word salad.

    Oh please. You regurgitated right-wing talking points at the
    time, and now you bemoan that the orange stain won?

    And you still did that, knowing full-well the alternative?

    This is exactly the sort of shallow, smugly facile "argument"
    that got the US into the mess we're in now.

    It appears some people are beginning to see through Reform, and we also
    have the first past the post system. I am hoping that's enough to stop
    him from gaining a majority, but our traditional parties (all of them)
    need to _seriously_ up their game.

    The same is true in this country. The Democratic party is
    failing, utterly and miserably, as an opposition party. But do
    you know what does not help? Having folks across the pond lob
    these little jabs at those of us who understand all too well the
    dire nature of the situation. It's neither funny, when you're
    in a situation where people are being yanked off the street and
    disappeared into concentration camps while a non-trivial
    minority of the population gleefully cheers it on, nor helpful.

    It's very easy to throw stones, but not terribly advisable when
    you yourself are in a glass house.

    At least stop adding these things as parentheticals onto posts
    that _also_ carry technical content.

    Did you read the rest of the posting Dan ?

    I did. And I responded. But I split my response to the
    technical part of your post into another response, so as not to
    conflate the two topics.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Tue Dec 2 15:46:25 2025
    From Newsgroup: comp.os.vms

    In article <10gn0cq$2d8ve$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/2/2025 8:50 AM, Dan Cross wrote:
    In article <10gle2k$1q97g$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/1/2025 8:37 AM, Dan Cross wrote:
    I've long suspected (but I admit I have no evidence to support
    this) that one of the reasons there is so much COBOL code in the
    world is because, when making non-trivial changes, programmers
    first _copy_ large sections of the program and then modify the
    copy, to avoid introducing bugs into existing functionality.

    Copying and modifying code instead of creating reusable libraries
    has been done by bad programmers in all languages.

    I think it's a little deeper than that.

    There is also something in the Cobol language.

    Large files with one data division, lots of paragraphs
    and lots of PERFORMs are easy to code, but they are also
    bad for reusable code.

    It is sort of the same as having large C or Pascal files
    with all variables global and all functions/procedures
    without arguments.

    It is possible to do it right, but when people have
    to choose between the easy way and the right way, then ...

    An issue with COBOL is that, given procedures A, B, ..., Z,
    written sequentially in source, `PERFORM A THRU Z` means that it
    is difficult to see when procedures B, C, ..., Y are called just
    through visual inspection since calls to them are implicit; you
    really need semantically aware tools to do that. So if you need
    to change paragraph D, then you run the risk of implicitly
    changing dependent behavior in your system unintentionally. You
    might end up violating some assumption you didn't even know
    existed; talk about spooky action at a distance.

    That is a classical argument found on the internet.

    Yes. I myself have been making it for years.

    But I am not convinced that it is critical.

    It is all within one file.

    $ search foobar.cob thru,through

    should reveal if the feature is used.

    Unless the file is very long and the code is very ugly,
    I believe it should be relatively easy to track the
    PERFORM flow even in VT mode EDT or EVE.

    I'm not at all convinced of that in a large code base; call
    graphs resulting in such `PERFORM`s can be too big to trace by
    hand. And many of these extant COBOL applications are quite
    large, indeed.

    Most COBOL programs were written before the era of automated,
    unit-level testing, so it's extremely unlikely you've got a big
    suite of tests you can run to attempt to catch such issues.

    I imagine that this results in a lot of (unnecessary)
    duplication.

    That may actually have a huge impact.

    No unit tests is a common reason not to change any existing code.

    I'm sure that it does.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Tue Dec 2 10:57:38 2025
    From Newsgroup: comp.os.vms

    On 12/2/2025 10:46 AM, Dan Cross wrote:
    In article <10gn0cq$2d8ve$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/2/2025 8:50 AM, Dan Cross wrote:
    An issue with COBOL is that, given procedures A, B, ..., Z,
    written sequentially in source, `PERFORM A THRU Z` means that it
    is difficult to see when procedures B, C, ..., Y are called just
    through visual inspection since calls to them are implicit; you
    really need semantically aware tools to do that. So if you need
    to change paragraph D, then you run the risk of implicitly
    changing dependent behavior in your system unintentionally. You
    might end up violating some assumption you didn't even know
    existed; talk about spooky action at a distance.

    That is a classical argument found on the internet.

    Yes. I myself have been making it for years.

    But I am not convinced that it is critical.

    It is all within one file.

    $ search foobar.cob thru,through

    should reveal if the feature is used.

    Unless the file is very long and the code is very ugly,
    I believe it should be relatively easy to track the
    PERFORM flow even in VT mode EDT or EVE.

    I'm not at all convinced of that in a large code base; call
    graphs resulting in such `PERFORM`s can be too big to trace by
    hand. And many of these extant COBOL applications are quite
    large, indeed.

    There are lots of applications with hundreds of thousands or
    millions of lines of code.

    But hopefully not as a single file.

    :-)

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Tue Dec 2 19:03:15 2025
    From Newsgroup: comp.os.vms

    In article <10gn29j$2d8ve$2@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/2/2025 10:46 AM, Dan Cross wrote:
    In article <10gn0cq$2d8ve$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/2/2025 8:50 AM, Dan Cross wrote:
    An issue with COBOL is that, given procedures A, B, ..., Z,
    written sequentially in source, `PERFORM A THRU Z` means that it
    is difficult to see when procedures B, C, ..., Y are called just
    through visual inspection since calls to them are implicit; you
    really need semantically aware tools to do that. So if you need
    to change paragraph D, then you run the risk of implicitly
    changing dependent behavior in your system unintentionally. You
    might end up violating some assumption you didn't even know
    existed; talk about spooky action at a distance.

    That is a classical argument found on the internet.

    Yes. I myself have been making it for years.

    But I am not convinced that it is critical.

    It is all within one file.

    $ search foobar.cob thru,through

    should reveal if the feature is used.

    Unless the file is very long and the code is very ugly,
    I believe it should be relatively easy to track the
    PERFORM flow even in VT mode EDT or EVE.

    I'm not at all convinced of that in a large code base; call
    graphs resulting in such `PERFORM`s can be too big to trace by
    hand. And many of these extant COBOL applications are quite
    large, indeed.

    There are lots of applications with hundreds of thousands or
    millions of lines of code.

    But hopefully not as a single file.

    I don't see how that's relevant. If a call comes from outside
    of a file that results in that PERFORM, you've still got the
    same problem. The point is, some very distant part of the
    system may be relying on that implicit behavior. You really
    have no way to tell.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Simon Clubley@clubley@remove_me.eisner.decus.org-Earth.UFP to comp.os.vms on Wed Dec 3 14:07:05 2025
    From Newsgroup: comp.os.vms

    On 2025-12-02, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    In article <10gks0t$1kmd0$1@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    [snip]
    How on earth can someone not know how to divide a fraction by two ?

    I think this is discarding all nuance from a complex issue.

    Much of what the Atlantic article described, for instance, is
    due to the lingering fallout from the pandemic: an utterly
    unprecedented event in our lifetimes. Most kids entering
    college now had their education (and much of their social
    development) severely curtailed, due to circumstances affecting
    the entire globe that were completely out of those kids'
    control; or that of their parents, for that matter. To ignore
    all of that and basically declare, "Americans are ignorant" is,
    itself, ignorant.


    I thought the structural problems had existed for a long time and
    that the pandemic had only made them more severe. The Atlantic
    article talks about the latest decline starting about 2013.

    BTW, I was interested to read about the issues and tradeoffs around
    the stopping of standardised testing during the application process
    in some higher education establishments a few years ago.

    Simon.
    --
    Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
    Walking destinations on a map are further away than they appear.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Dave Froble@davef@tsoft-inc.com to comp.os.vms on Mon Dec 8 23:57:03 2025
    From Newsgroup: comp.os.vms

    On 12/1/2025 8:49 AM, Simon Clubley wrote:
    On 2025-11-29, Dave Froble <davef@tsoft-inc.com> wrote:

    Sometimes things don't really change. You count to 10 the same way now as in
    1960. (Trivial example)


    Are you sure ? I thought maths teaching was heading in a new direction
    in multiple parts of your country as shown by this example (which is way
    too close to actually being realistic, especially with the "support" infrastructure from the people around the teacher):

    https://www.youtube.com/watch?v=Zh3Yz3PiXZw


    Now this is opinion, and really a poor argument. While I detest the verbosity
    in most things, that is my choice, not the problem you claim.


    Back on topic, COBOL is very verbose, but I also hate way too concise languages where the language designers don't even allow words like
    "function" to be spelt out in full. You read code many more times than
    you write it and having cryptic syntax makes that a lot harder to achieve.

    Strongly agree ...

    Something like Ada was designed for readability, and I wish all other languages followed that example.

    Just waiting for the moment when a newcomer designs a new language which
    has syntax resembling TECO... :-)

    Save the world, shoot the idiot before it spreads ...
    --
    David Froble Tel: 724-529-0450
    Dave Froble Enterprises, Inc. E-Mail: davef@tsoft-inc.com
    DFE Ultralights, Inc.
    170 Grimplin Road
    Vanderbilt, PA 15486
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Dave Froble@davef@tsoft-inc.com to comp.os.vms on Tue Dec 9 00:19:37 2025
    From Newsgroup: comp.os.vms

    On 12/3/2025 9:07 AM, Simon Clubley wrote:

    BTW, I was interested to read about the issues and tradeoffs around
    the stopping of standardised testing during the application process
    in some higher education establishments a few years ago.

    Nothing wrong with tests. They can be helpful. But let me tell you what is wrong with depending on tests. SOME PEOPLE JUST DON'T DO WELL WITH TESTS!

    Case in point. My son always had problems with taking tests. I don't understand it, but that was a problem for him. Does that make him less than those who do well with tests?

    Now he is an NRC-licensed reactor operator at a nuclear power station. Yes, there was testing, and it was difficult for him. But testing is not how the job
    is learned. People actually practiced the job under close supervision before they were trusted to do the job. Perhaps still a type of testing.

    Lately, when special operations are required, he is the one called upon, because
    he is trusted to perform the job correctly, over most of the other operators.

    I guess what I'm trying to say is that while tests can be helpful, they are not necessarily the only way of determining competence.
    --
    David Froble Tel: 724-529-0450
    Dave Froble Enterprises, Inc. E-Mail: davef@tsoft-inc.com
    DFE Ultralights, Inc.
    170 Grimplin Road
    Vanderbilt, PA 15486
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Wed Dec 10 18:38:39 2025
    From Newsgroup: comp.os.vms

    In article <10h8bgq$k7on$1@dont-email.me>,
    Dave Froble <davef@tsoft-inc.com> wrote:
    On 12/3/2025 9:07 AM, Simon Clubley wrote:

    BTW, I was interested to read about the issues and tradeoffs around
    the stopping of standardised testing during the application process
    in some higher education establishments a few years ago.

    Nothing wrong with tests. They can be helpful. But let me tell you what is wrong with depending on tests. SOME PEOPLE JUST DON'T DO WELL WITH TESTS!

    Hear, hear. Tests are _a_ way to measure progress of a group at
    scale, but they're terrible for measuring individual progress.

    Let's be honest: we use tests because we haven't figured out a
    better way to measure relative progress across a group. But
    that doesn't mean that tests are a _good_ way to go about this.

    Case in point. My son always had problems with taking tests. I don't understand it, but that was a problem for him. Does that make him less than those who do well with tests?

    Nope.

    Now he is an NRC-licensed reactor operator at a nuclear power station. Yes,
    there was testing, and it was difficult for him. But testing is not how the
    job is learned. People actually practiced the job under close supervision
    before they were trusted to do the job. Perhaps that is still a type of testing.

    Lately, when special operations are required, he is the one called upon,
    because he is trusted to perform the job correctly, more than most of the
    other operators.

    Good for your son; it sounds like he's doing very well for
    himself.

    I guess what I'm trying to say is that while tests can be helpful, they are not necessarily the only way of determining competence.

    100% this.

    I served in the Marines with a bunch of folks who were
    incredibly smart and talented. A lot of them didn't go to
    college or get a university education; some did, but it was a
    struggle. A lot overcame some seriously hard backgrounds. One
    guy was probably the most intelligent person I've ever met; he
    had no inclination to continue his education, as he much
    preferred working with his hands (I don't know whether he went
    to a trade school after the Marines, though).

    Being in the same room as some of those folks was an incredibly
    humbling and instructive experience about not judging people on
    superficial criteria...like artificial indicators of academic
    performance.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Sun Dec 14 01:37:42 2025
    From Newsgroup: comp.os.vms

    On Sun, 30 Nov 2025 22:18:07 -0000 (UTC), Waldek Hebisch wrote:

    Chris Townley <news@cct-net.co.uk> wrote:

    On 30/11/2025 21:09, Arne Vajhøj wrote:

    The selling point is the automatic persistence of global variables:

    Why would you want that?

    Think database. MUMPS globals really are a non-relational database. A non-persistent database would be of limited use.

    Quite easy to do in Python. Being able to implement this on top of more general underlying features is how Python is able to keep its core
    language so small.
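
    For instance, a minimal sketch of one way to do it, using only the
    standard-library shelve module (the file and key names below are
    invented for illustration, not part of any existing system):

        # Persistent dict-backed "globals": shelve stores pickled values
        # in a dbm file on disk, so they survive across program runs.
        import shelve

        # writeback=True caches nested objects so in-place mutation works;
        # everything is flushed to disk on sync()/close().
        with shelve.open("globals.db", writeback=True) as g:
            g.setdefault("PATIENT", {})             # think ^PATIENT
            g["PATIENT"]["42"] = {"name": "Smith"}  # think ^PATIENT(42)

        # A later run of the program sees the same data:
        with shelve.open("globals.db") as g:
            print(g["PATIENT"]["42"]["name"])       # prints: Smith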
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Wed Dec 17 21:19:46 2025
    From Newsgroup: comp.os.vms

    On 11/11/2025 10:50 AM, Arne Vajhøj wrote:
    On 11/3/2025 8:31 AM, Simon Clubley wrote:
    What are they moving to, and how are they satisfying the extremely high
    constraints both on software and hardware availability, failure
    detection,
    and recovery that z/OS and its underlying hardware provides ?

    z/OS has a unique set of capabilities when it comes to the absolutely
    critical "this _MUST_ continue working or the country/company dies" area.

    Note that even though z/OS and mainframes generally have a
    good track record regarding availability, it is not
    a magic solution - they can also have problems.

    Banks having mainframe problems are rare but far from
    unheard of.

    And speaking of.

    A lot of banking services were down Monday in Denmark.

    Because a bank mainframe was down for 5 hours.

    Both the company and the country survived. :-)

    As is often the case, the root cause was simple and
    stupid. A capacity management application took away
    the resources needed to process transactions.

    Arne



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Thu Dec 18 06:47:13 2025
    From Newsgroup: comp.os.vms

    On Wed, 17 Dec 2025 21:19:46 -0500, Arne Vajhøj wrote:

    Because a bank mainframe was down for 5 hours.

    How many nines of reliability are mainframes capable of?
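
    (A back-of-the-envelope aside, not from the post: a single 5-hour
    outage in a year, like the one above, already caps availability at a
    bit over three nines.)

        # Availability implied by one 5-hour outage in a year.
        hours_per_year = 365 * 24                # 8760
        downtime_hours = 5
        availability = (hours_per_year - downtime_hours) / hours_per_year
        print(f"{availability:.5%}")             # 99.94292% - about three nines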
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Niels S. Eliasen@nse@eliasen.co to comp.os.vms on Fri Dec 19 11:16:47 2025
    From Newsgroup: comp.os.vms

    Hi Arne
    I am impressed... :-) Where did you read/see this?

    On 2025-12-18, Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 11/11/2025 10:50 AM, Arne Vajhøj wrote:
    On 11/3/2025 8:31 AM, Simon Clubley wrote:
    What are they moving to, and how are they satisfying the extremely high
    constraints both on software and hardware availability, failure
    detection,
    and recovery that z/OS and its underlying hardware provides ?

    z/OS has a unique set of capabilities when it comes to the absolutely
    critical "this _MUST_ continue working or the country/company dies" area.

    Note that even though z/OS and mainframes generally have a
    good track record regarding availability, it is not
    a magic solution - they can also have problems.

    Banks having mainframe problems are rare but far from
    unheard of.

    And speaking of.

    A lot of banking services were down Monday in Denmark.

    Because a bank mainframe was down for 5 hours.

    Both the company and the country survived. :-)

    As is often the case, the root cause was simple and
    stupid. A capacity management application took away
    the resources needed to process transactions.

    Arne



    --
    kind regards/mvh

    Niels S. Eliasen

    Eliasen Consult
    Oregaardsvaengevej 1
    DK-4720 Præstø
    Tel/Cell: +45 21779590
    mailto:niels@eliasen.co
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Fri Dec 19 08:07:05 2025
    From Newsgroup: comp.os.vms

    On 12/19/2025 6:16 AM, Niels S. Eliasen wrote:
    On 2025-12-18, Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 11/11/2025 10:50 AM, Arne Vajhøj wrote:
    On 11/3/2025 8:31 AM, Simon Clubley wrote:
    What are they moving to, and how are they satisfying the extremely high constraints both on software and hardware availability, failure
    detection,
    and recovery that z/OS and its underlying hardware provides ?

    z/OS has a unique set of capabilities when it comes to the absolutely
    critical "this _MUST_ continue working or the country/company dies" area.
    Note that even though z/OS and mainframes generally have a
    good track record regarding availability, it is not
    a magic solution - they can also have problems.

    Banks having mainframe problems are rare but far from
    unheard of.

    And speaking of.

    A lot of banking services were down Monday in Denmark.

    Because a bank mainframe was down for 5 hours.

    Both the company and the country survived. :-)

    As is often the case, the root cause was simple and
    stupid. A capacity management application took away
    the resources needed to process transactions.

    I am impressed... :-) Where did you read/see this?

    I saw the story on LinkedIn.

    But the company has been very honest about it.

    https://www.jndata.dk/driftsforstyrrelser/

    <quote>
    JN Data har natten igennem arbejdet på at finde kerneårsagen til
    driftsudfaldet på vores Mainframe. De første undersøgelser tyder på, at
    en forkert kommando i JN Datas kapacitetsstyringsværktøj var
    kerneårsagen til driftsudfaldet. Den forkerte kommando betød, at der
    blev tildelt for lidt kapacitet til at afvikle kundernes transaktioner.
    Som konsekvens deraf blev services som kreditkort samt net- og
    mobilbank utilgængelige for JN Datas kunder Jyske Bank, Bankdata, BEC
    og Nykredit. Vi arbejder nu videre med den bagvedliggende årsag til
    dette for at sikre, at en lignende fejl ikke kan opstå fremadrettet.
    </quote>

    And for those who do not read Danish:

    <quote>
    JN Data has been working through the night to find the root cause of
    the outage on our Mainframe. Initial investigations indicate that an
    incorrect command in JN Data's capacity management tool was the root
    cause of the outage. The incorrect command meant that too little
    capacity was allocated to process customers' transactions. As a result, services such as credit cards and online and mobile banking became
    unavailable for JN Data's customers Jyske Bank, Bankdata, BEC and
    Nykredit. We are now working on the underlying cause of this to ensure
    that a similar error cannot occur in the future.
    </quote>

    I think it has been in CW as well.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Fri Dec 19 08:53:12 2025
    From Newsgroup: comp.os.vms

    On 12/19/2025 8:07 AM, Arne Vajhøj wrote:
    And for those who do not read Danish:

    <quote>
    JN Data has been working through the night to find the root cause of
    the outage on our Mainframe. Initial investigations indicate that an incorrect command in JN Data's capacity management tool was the root
    cause of the outage. The incorrect command meant that too little
    capacity was allocated to process customers' transactions. As a result, services such as credit cards and online and mobile banking became unavailable for JN Data's customers Jyske Bank, Bankdata, BEC and
    Nykredit. We are now working on the underlying cause of this to ensure
    that a similar error cannot occur in the future.
    </quote>

    Which is not very technical, but if I translate to VMS in
    a "creative" way:

    <humor>
    mod produser /cpu=00:00:01 /wsmax=10 /wsext=20 /pgflq=100

    nice - look at all that free CPU and memory

    hmm - why are all the phones ringing?
    </humor>

    :-)

    Arne


    --- Synchronet 3.21a-Linux NewsLink 1.2