• Re: Bootcamp

    From antispam@antispam@fricas.org (Waldek Hebisch) to comp.os.vms on Sun Jul 6 00:36:51 2025
    From Newsgroup: comp.os.vms

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Thu, 3 Jul 2025 10:56:26 -0400, Arne Vajhøj wrote:

    5) The idea of emulating one OS on another OS is questionable
    in itself. It is not that difficult to achieve 90-95%
    compatibility. But 100% compatibility is very hard, because
    the core OS design tends to spill over into
    userland semantics. It is always tricky to emulate *nix
    on VMS and it would be tricky to emulate VMS on *nix.

    It was always tricky to emulate *nix on proprietary OSes. But emulating proprietary OSes on Linux does actually work a lot better. Look at WINE, which has progressed to the point where it can be the basis of a
    successful shipping product (the Steam Deck) that lets users run Windows games without Windows. That works so well, it puts true Windows-based handheld competitors in the shade.

    You mention Wine, but do you know what you are talking about? At
    the start the Wine project had an idea similar to yours: write a
    loader for Windows binaries, redirect system library calls to
    equivalent Linux system/library calls, and call it done. The loader
    part went smoothly, but they relatively quickly (in around 2 years)
    discovered that the devil is in emulating Windows libraries. The
    initial idea of redirecting calls to "equivalent" Linux calls turned
    out to be a no-go. Instead, they decided to effectively create a
    Windows clone. That is, Wine provides several libraries which are
    supposed to behave identically to the corresponding Windows
    libraries and use the same interfaces. Only at the lowest level do
    they have calls to Linux libraries. In light of the Wine experience,
    the approach taken by VSI is quite natural.

    Why has the port taken so much time? We do not know. One could
    expect that only a small part of the kernel is architecture dependent.
    Given that this is the third port, the architecture-dependent parts
    should be well known to the developers and clearly separated from
    the machine-independent parts. There are probably some
    performance-critical libraries written in native assembly (not
    Macro32!). Of course compilers (or rather their backends) are
    architecture dependent. There is also the question of device drivers;
    while they can be architecture independent, the set of devices
    available on x86-64 seems to differ from Itanium or Alpha.

    Given a 40+ developer team (this seems to correspond to publicly
    available information about VSI) and considering 10 kloc/year
    developer productivity (I think this is a reasonable estimate for
    systems-type work), in 4 years VSI could create about 1.6 Mloc
    of new code. We do not know the size of the VMS kernel, but at first
    glance 1.6 Mloc is enough to cover the architecture-dependent
    parts of VMS. So one could expect a port in 4-5 years, or faster
    if the architecture-dependent parts are smaller. IIUC the initial
    VSI estimate was similar.
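
The back-of-envelope arithmetic above can be sketched in a few lines; the team size and productivity figures are the post's assumptions, not confirmed VSI numbers:

```python
# Capacity estimate: team size x per-developer productivity x years.
developers = 40                  # assumed team size, per the post
loc_per_dev_per_year = 10_000    # assumed productivity for systems-type work
years = 4

total_new_loc = developers * loc_per_dev_per_year * years
print(f"{total_new_loc / 1_000_000:.1f} Mloc")  # prints "1.6 Mloc"
```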

    What went wrong? Clearly VSI hit some difficulties. Public
    information indicates that work on compilers took more time
    than expected (and that could slow down other work, as it
    depends on working compilers). Note that compilers are
    necessary for the success of VMS, and in the compiler work VSI
    actually worked close to your suggestion: they reused an
    open source backend and just added VMS-specific extensions
    and frontends. But without knowing what took the time, we do
    not know if some alternative approach would have worked better.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Sun Jul 6 03:22:46 2025
    From Newsgroup: comp.os.vms

    On Sun, 6 Jul 2025 00:36:51 -0000 (UTC), Waldek Hebisch wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    It was always tricky to emulate *nix on proprietary OSes. But
    emulating proprietary OSes on Linux does actually work a lot
    better. Look at WINE, which has progressed to the point where it
    can be the basis of a successful shipping product (the Steam Deck)
    that lets users run Windows games without Windows. That works so
    well, it puts true Windows-based handheld competitors in the shade.

    You mention Wine, but do you know what you are talking about?

    Just look at the success of the Steam Deck, and you’ll see.

    What went wrong? Clearly VSI hit some difficulties. Public information indicates that work on compilers took more time than expected (and that
    could slow down other work as it depends on working compilers).

    Weren’t they using existing code-generation tools like LLVM? That should have saved them a lot of work.

    No, the sheer job of reimplementing the entire kernel stack (including
    custom driver support) on a new architecture was what slowed them down.
    And the effort should have been avoided.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.os.vms on Sun Jul 6 12:52:22 2025
    From Newsgroup: comp.os.vms

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sun, 6 Jul 2025 00:36:51 -0000 (UTC), Waldek Hebisch wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    It was always tricky to emulate *nix on proprietary OSes. But
    emulating proprietary OSes on Linux does actually work a lot
    better. Look at WINE, which has progressed to the point where it
    can be the basis of a successful shipping product (the Steam Deck)
    that lets users run Windows games without Windows. That works so
    well, it puts true Windows-based handheld competitors in the shade.

    You mention Wine, but do you know what you are talking about?

    Just look at the success of the Steam Deck, and you’ll see.

    Well, in a Usenet discussion it is easy to snip/ignore the
    inconvenient facts that I gave. In real life such an approach does
    not work.

    What went wrong? Clearly VSI hit some difficulties. Public information
    indicates that work on compilers took more time than expected (and that
    could slow down other work as it depends on working compilers).

    Weren’t they using existing code-generation tools like LLVM? That should have saved them a lot of work.

    Should, yes. Yet clearly the compilers were late. You should
    recalibrate your estimates of effort. In particular, reusing an
    independently developed piece of code frequently involves a lot
    of work.

    No, the sheer job of reimplementing the entire kernel stack (including custom driver support) on a new architecture was what slowed them down.
    And the effort should have been avoided.

    There are no indications of substantial reimplementation. Official
    info says that new or substantially reworked code is in C. But we
    also have information that the amount of Macro32 and Bliss did not
    substantially decrease. So (almost all) old code is still in use.
    It could be that small changes to old code took a lot of time.
    It could be that some new pieces were particularly tricky.
    However, you should understand that porting really means replicating
    existing behaviour on new hardware. Replicating behaviour gets
    more tricky the more parts you change, and especially if you want
    to target a high-level interface.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bill@bill.gunshannon@gmail.com to comp.os.vms on Sun Jul 6 11:02:01 2025
    From Newsgroup: comp.os.vms

    On 7/5/2025 8:36 PM, Waldek Hebisch wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Thu, 3 Jul 2025 10:56:26 -0400, Arne Vajhøj wrote:

    5) The idea of emulating one OS on another OS is questionable
    in itself. It is not that difficult to achieve 90-95%
    compatibility. But 100% compatibility is very hard, because
    the core OS design tends to spill over into
    userland semantics. It is always tricky to emulate *nix
    on VMS and it would be tricky to emulate VMS on *nix.

    It was always tricky to emulate *nix on proprietary OSes. But emulating
    proprietary OSes on Linux does actually work a lot better. Look at WINE,
    which has progressed to the point where it can be the basis of a
    successful shipping product (the Steam Deck) that lets users run Windows
    games without Windows. That works so well, it puts true Windows-based
    handheld competitors in the shade.

    You mention Wine, but do you know what you are talking about? At
    the start the Wine project had an idea similar to yours: write a
    loader for Windows binaries, redirect system library calls to
    equivalent Linux system/library calls, and call it done. The loader
    part went smoothly, but they relatively quickly (in around 2 years)
    discovered that the devil is in emulating Windows libraries. The
    initial idea of redirecting calls to "equivalent" Linux calls turned
    out to be a no-go. Instead, they decided to effectively create a
    Windows clone. That is, Wine provides several libraries which are
    supposed to behave identically to the corresponding Windows
    libraries and use the same interfaces. Only at the lowest level do
    they have calls to Linux libraries. In light of the Wine experience,
    the approach taken by VSI is quite natural.

    Why has the port taken so much time? We do not know. One could
    expect that only a small part of the kernel is architecture dependent.
    Given that this is the third port, the architecture-dependent parts
    should be well known to the developers and clearly separated from
    the machine-independent parts. There are probably some
    performance-critical libraries written in native assembly (not
    Macro32!). Of course compilers (or rather their backends) are
    architecture dependent. There is also the question of device drivers;
    while they can be architecture independent, the set of devices
    available on x86-64 seems to differ from Itanium or Alpha.

    Given a 40+ developer team (this seems to correspond to publicly
    available information about VSI) and considering 10 kloc/year
    developer productivity (I think this is a reasonable estimate for
    systems-type work), in 4 years VSI could create about 1.6 Mloc
    of new code. We do not know the size of the VMS kernel, but at first
    glance 1.6 Mloc is enough to cover the architecture-dependent
    parts of VMS. So one could expect a port in 4-5 years, or faster
    if the architecture-dependent parts are smaller. IIUC the initial
    VSI estimate was similar.

    What went wrong? Clearly VSI hit some difficulties. Public
    information indicates that work on compilers took more time
    than expected (and that could slow down other work, as it
    depends on working compilers). Note that compilers are
    necessary for the success of VMS, and in the compiler work VSI
    actually worked close to your suggestion: they reused an
    open source backend and just added VMS-specific extensions
    and frontends. But without knowing what took the time, we do
    not know if some alternative approach would have worked better.

    Please stop feeding the troll. He is a Linux weenie, couldn't
    care less about VMS and would love to see it turned into just
    another Linux distro.

    bill

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bill@bill.gunshannon@gmail.com to comp.os.vms on Sun Jul 6 11:04:45 2025
    From Newsgroup: comp.os.vms

    On 7/6/2025 8:52 AM, Waldek Hebisch wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sun, 6 Jul 2025 00:36:51 -0000 (UTC), Waldek Hebisch wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    It was always tricky to emulate *nix on proprietary OSes. But
    emulating proprietary OSes on Linux does actually work a lot
    better. Look at WINE, which has progressed to the point where it
    can be the basis of a successful shipping product (the Steam Deck)
    that lets users run Windows games without Windows. That works so
    well, it puts true Windows-based handheld competitors in the shade.

    You mention Wine, but do you know what you are talking about?

    Just look at the success of the Steam Deck, and you’ll see.

    Well, in a Usenet discussion it is easy to snip/ignore the
    inconvenient facts that I gave. In real life such an approach does
    not work.

    What went wrong? Clearly VSI hit some difficulties. Public information
    indicates that work on compilers took more time than expected (and that
    could slow down other work as it depends on working compilers).

    Weren’t they using existing code-generation tools like LLVM? That should
    have saved them a lot of work.

    Should, yes. Yet clearly the compilers were late. You should
    recalibrate your estimates of effort. In particular, reusing an
    independently developed piece of code frequently involves a lot
    of work.

    No, the sheer job of reimplementing the entire kernel stack (including
    custom driver support) on a new architecture was what slowed them down.
    And the effort should have been avoided.

    See what I mean!!! He wants VMS gone. I don't know why he hangs
    out here other than to annoy real VMS users.


    There are no indications of substantial reimplementation. Official
    info says that new or substantially reworked code is in C. But we
    also have information that the amount of Macro32 and Bliss did not
    substantially decrease. So (almost all) old code is still in use.
    It could be that small changes to old code took a lot of time.
    It could be that some new pieces were particularly tricky.
    However, you should understand that porting really means replicating
    existing behaviour on new hardware. Replicating behaviour gets
    more tricky the more parts you change, and especially if you want
    to target a high-level interface.


    Damn it, stop feeding the troll.

    bill

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Sun Jul 6 21:38:42 2025
    From Newsgroup: comp.os.vms

    On Sun, 6 Jul 2025 12:52:22 -0000 (UTC), Waldek Hebisch wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    No, the sheer job of reimplementing the entire kernel stack (including
    custom driver support) on a new architecture was what slowed them down.
    And the effort should have been avoided.

    There are no indications of substantial reimplementation.

    All previous implementations were on hardware that DEC/Compaq/HP
    controlled. Not any more. Now they have to work on already-existing
    hardware that conforms to standards they don't control.

    So yes, the job of creating drivers conforming to their own proprietary
    API for all that hardware would be quite huge.

    And it should have been avoided.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Stephen Hoffman@seaohveh@hoffmanlabs.invalid to comp.os.vms on Fri Jul 11 17:13:11 2025
    From Newsgroup: comp.os.vms

    On 2025-07-06 00:36:51 +0000, Waldek Hebisch said:

    You mention Wine, but do you know what you are talking about? At the
    start the Wine project had an idea similar to yours: write a loader
    for Windows binaries, redirect system library calls to equivalent
    Linux system/library calls, and call it done. The loader part went
    smoothly, but they relatively quickly (in around 2 years) discovered
    that the devil is in emulating Windows libraries. The initial idea
    of redirecting...

    Some folks are seemingly unfamiliar with OpenVMS and OpenVMS apps,
    apparently also unfamiliar with Linux, and have a fondness for
    unworkable suggestions. Not that I too don't have a fondness for
    unworkable suggestions.

    What you've posted has been highlighted before. As has porting VAX/VMS
    to the Mach kernel, which actually happened. (Hi, Chris!) It also
    doesn't appreciably move the operating system work forward. Ports
    ~never do.

    And there is a vendor that already provides custom solutions based on
    porting parts of the APIs to another platform, with Sector7. What
    Sector7 offers very much parallels Proton and Wine, too. But unlike VSI
    and Sector7, there are a whole lot more users of each of those
    candidate apps than the often-one-off apps found on OpenVMS. That
    disparity increases the effort involved for each app, and for the users
    of that app.

    And at the end of all that work, what's left? Outsourcing third-party
    OpenVMS app support to VSI, on a compatibility API? They can offer that
    now, and without creating Proton and Wine.

    Given a 40+ developer team (this seems to correspond to publicly
    available information about VSI) and considering 10 kloc/year
    developer productivity...
    ...What went wrong? Clearly VSI hit some difficulties...

    40 or 50 engineers is far too small for a project of the scale and
    scope of a feature-competitive operating system. For a competitive
    platform, I'd be looking to build (slowly) to 2000, and quite
    possibly more. But that takes revenues and reinvestments.

    As an example of scale and scope that ties back to Valve and their
    efforts with Wine and Proton and Steam Deck and other functions, Valve
    may well presently have as many job openings as VSI has engineers:
    https://www.glassdoor.com/Jobs/Valve-Corporation-Jobs-E24849.htm
    https://www.valvesoftware.com/en/
    --
    Pure Personal Opinion | HoffmanLabs LLC

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Fri Jul 11 22:51:46 2025
    From Newsgroup: comp.os.vms

    On Fri, 11 Jul 2025 17:58:56 -0400, Stephen Hoffman wrote:

    Some bozo once wrote: "...VSI spends years creating an inevitably-somewhat-incomplete third-party Linux porting kit for
    customer OpenVMS apps ...

    Still, not as many years as it took to port OpenVMS to x86-64.

    ... and the end goal of the intended customers then
    inexorably shifts toward the removal of that porting kit, and probably
    in the best case the whole effort inevitably degrades into apps ported
    to and running on VSI Linux.

    If it was a VSI-proprietary Linux, then that would defeat the point. The
    whole point about moving to Linux is that it offers you a roadmap to the future, free of vendor lock-in.

    And I'd be willing to bet money VSI will need a number of modifications
    to the Linux kernel, too.

    Not sure why. If Microsoft can make do with relatively modest changes to
    get a Linux kernel into WSL2 under Windows, a much simpler, older OS like
    VMS would hardly be a bigger job.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Fri Jul 11 22:59:36 2025
    From Newsgroup: comp.os.vms

    On Fri, 11 Jul 2025 17:13:11 -0400, Stephen Hoffman wrote:

    What you've posted has been highlighted before. As has porting VAX/VMS
    to the Mach kernel, which actually happened.

    Yes, but microkernels are their own amusing little dead-end, aren't they.

    It also doesn't appreciably move the operating system work forward.
    Ports ~never do.

    Funny. If it wasn't for ports, Unix would be nothing more than yet
    another footnote in that list of interesting museum-piece OSes from the
    1960s/1970s. And Linux would not now run on about two dozen different
    major processor architectures, and be essentially dominating the entire
    computing landscape.

    Ports are what make a piece of software portable.

    And there is a vendor that already provides custom solutions based on
    porting parts of the APIs to another platform, with Sector7. What
    Sector7 offers very much parallels Proton and Wine, too.

    But Sector7's offerings seem to be incomplete. For example, I could
    find no mention of DECnet support, which is something available on
    Linux.

    40 or 50 engineers is far too small for a project of the scale and scope
    of a feature-competitive operating system.

    The Linux kernel has something like 1000 active contributors at any one
    time. You can't compete with that. But why not leverage that power?

    For a competitive platform, I'd be looking to build (slowly) to
    2000, and quite possibly more. But that takes revenues and
    reinvestments.

    Those revenues clearly aren't there. They might have been if VSI had a
    shipping product five years earlier. So it seems like there is no way
    your suggested strategy would have worked.

    As an example of scale and scope that ties back to Valve and their
    efforts with Wine and Proton and Steam Deck and other functions, Valve
    may well presently have as many job openings as VSI has engineers ...

    And remember, Valve didn't do this on their own. They build on (and
    contribute back to) the work of the existing open-source community.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Fri Jul 11 20:16:58 2025
    From Newsgroup: comp.os.vms

    On 7/11/2025 5:58 PM, Stephen Hoffman wrote:
    On 2025-07-06 12:52:22 +0000, Waldek Hebisch said:
    There are no indications of substantial reimplementation. Official
    info says that new or substantially reworked code is in C. But we
    also have information that the amount of Macro32 and Bliss did not
    substantially decrease. So (almost all) old code is still in use.
    It could be that small changes to old code took a lot of time. It
    could be that some new pieces were particularly tricky. However, you
    should understand that porting really means replicating existing
    behaviour on new hardware. Replicating behaviour gets more tricky
    the more parts you change, and especially if you want to target a
    high-level interface.

    You're correct. Reworking existing working code is quite often an
    immense mistake.

    It usually fails, if not always.

    And bringing in source-to-source translation tooling or an LLM can be
    helpful, and can also introduce new issues and new bugs.

    About the only way a global rewrite can succeed - absent a
    stratospheric-scale budget for the rewrite, and maybe not even then -
    is an incremental rewrite, as the specific modules need more than
    trivial modifications.

    Large applications get rewritten all the time.

    The failure rate is pretty high, but there are also lots of successes.

    Two key factors for success are:
    - realistic approach: realistic scope, realistic time frame and
    realistic budget
    - good team - the latest and greatest development methodology cannot
      make a bad team succeed - people with skills and experience are
      needed for big projects

    The idea of a 1:1 port is usually bad. Yes - you can implement the
    exact same flow of your Cobol application in Java/C++/Go/C#,
    but that only solves a language problem not an architecture problem.
    You need to re-architect the solution: from ISAM to RDBMS,
    from vertical app scaling to horizontal app scaling, from 5x16 to
    7x24 operations etc..
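
As a hedged sketch of the ISAM-to-RDBMS point (the `orders` table and its data are invented for illustration, with sqlite3 standing in for the target RDBMS): a 1:1 port keeps the record-at-a-time loop from the ISAM design, while a re-architected version pushes the set-oriented work into the database.

```python
import sqlite3

# Invented example data: a tiny orders table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "ACME", 120.0), (2, "ACME", 80.0), (3, "Globex", 50.0)])

# 1:1 port of ISAM logic: read record-by-record by key, accumulate
# in application code, as the old keyed-file program did.
total = 0.0
for key in (1, 2):
    row = conn.execute("SELECT amount FROM orders WHERE id = ?",
                       (key,)).fetchone()
    total += row[0]

# Re-architected: one set-oriented query does the same work in the database.
(total_sql,) = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE customer = 'ACME'").fetchone()

assert total == total_sql == 200.0
```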

    And that is the problem with the incremental rewrite - it leans
    more toward the existing architecture than toward changing the
    architecture. The strangler pattern is rarely practical to implement.

    As an example of a success story, Morgan Stanley recently reported
    that they rewrote 9 million lines of Cobol using an LLM. But smart
    people - they did not let the LLM auto-convert the code (that
    would likely have resulted in a big mess) - instead they
    let the LLM document the code and produce requirements for the
    new code.

    Reworking a project of the scale of OpenVMS - easily a decade-long
    freeze - and for little benefit to VSI.

    True. It is difficult to see the business case for that.

    Arne


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bill@bill.gunshannon@gmail.com to comp.os.vms on Sat Jul 12 09:35:52 2025
    From Newsgroup: comp.os.vms

    On 7/11/2025 8:16 PM, Arne Vajhøj wrote:
    On 7/11/2025 5:58 PM, Stephen Hoffman wrote:
    On 2025-07-06 12:52:22 +0000, Waldek Hebisch said:
    There are no indications of substantial reimplementation. Official
    info says that new or substantially reworked code is in C. But we
    also have information that the amount of Macro32 and Bliss did not
    substantially decrease. So (almost all) old code is still in use.
    It could be that small changes to old code took a lot of time. It
    could be that some new pieces were particularly tricky. However, you
    should understand that porting really means replicating existing
    behaviour on new hardware. Replicating behaviour gets more tricky
    the more parts you change, and especially if you want to target a
    high-level interface.

    You're correct. Reworking existing working code is quite often an
    immense mistake.

    It usually fails, if not always.

    And bringing in source-to-source translation tooling or an LLM can be
    helpful, and can also introduce new issues and new bugs.

    About the only way a global rewrite can succeed - absent a
    stratospheric-scale budget for the rewrite, and maybe not even then -
    is an incremental rewrite, as the specific modules need more than
    trivial modifications.

    Large applications get rewritten all the time.

    The failure rate is pretty high, but there are also lots of successes.

    Two key factors for success are:
    - realistic approach: realistic scope, realistic time frame and
      realistic budget
    - good team - the latest and greatest development methodology cannot
      make a bad team succeed - people with skills and experience are
      needed for big projects

    The idea of a 1:1 port is usually bad. Yes - you can implement the
    exact same flow of your Cobol application in Java/C++/Go/C#,
    but that only solves a language problem not an architecture problem.

    The biggest problem with this is the idea of going from a domain-specific
    language to a general-purpose language. While you can write an IS in
    pretty much any language (imagine rewriting the entire government
    payroll currently in COBOL in BASIC!!) there were real advantages to
    having domain-specific languages. But then, no one today seems to even
    consider things like efficiency. Just throw more hardware at the
    problem. The DOD EMR used to be written in COBOL, maintained by
    General Dynamics out of Maryland. The only major problem was the
    inability to share info with the VA system, which was written in MUMPS.
    So they changed both of them to basically the same system. It now takes
    20-30 minutes just to get logged on to the DOD system (the VA is doing
    much better) and they still can't do something as simple as exchange
    prescriptions.

    You need to re-architect the solution: from ISAM to RDBMS,

    This is the only one I totally agree with, but the original problem
    had nothing to do with the language. It had to do with the fact that
    RDBMS wasn't around when COBOL was written. I have been doing COBOL
    and RDBMS since 1980, and it was old code when I got there.

    The only bad example of this I know of had nothing to do with the
    language but was totally on the shoulders of the one who hired
    government contractors to convert from file access to DBMS. A number
    of the programs I had to deal with involved the programmer reading
    from the DBMS into a file and then continuing to use the COBOL
    program to do the processing. Hardly the fault of COBOL.

    from vertical app scaling to horizontal app scaling,

    Not really sure what this means. :-)

    from 5x16 to
    7x24 operations etc..

    Certainly don't get this. Every place I ever saw COBOL was 24/7 and
    that is going back to at least 1972.

    bill



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Sat Jul 12 10:41:01 2025
    From Newsgroup: comp.os.vms

    On 7/12/2025 9:35 AM, bill wrote:
    On 7/11/2025 8:16 PM, Arne Vajhøj wrote:
    The idea of a 1:1 port is usually bad. Yes - you can implement the
    exact same flow of your Cobol application in Java/C++/Go/C#,
    but that only solves a language problem not an architecture problem.

    The biggest problem with this is the idea of going from a domain-specific
    language to a general-purpose language. While you can write an IS in
    pretty much any language (imagine rewriting the entire government
    payroll currently in COBOL in BASIC!!) there were real advantages to
    having domain-specific languages. But then, no one today seems to even
    consider things like efficiency. Just throw more hardware at the
    problem.

    That argument made sense 40 years ago, but I don't think there
    is much point today - modern languages have the features they
    need, like easy database access and a decimal data type, and the
    missing features, like terminal screens and reporting, are no
    longer needed.
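
A small sketch of the decimal-data-type point, using Python's standard decimal module as a stand-in for COBOL-style fixed-point arithmetic:

```python
from decimal import Decimal

# Decimal types keep money arithmetic exact, the way COBOL's fixed-point
# PICTURE fields do; binary floats accumulate rounding error.
assert Decimal("0.10") + Decimal("0.20") == Decimal("0.30")  # exact
assert 0.1 + 0.2 != 0.3  # binary float rounding error

price = Decimal("19.99")
assert price * 3 == Decimal("59.97")  # exact decimal multiplication
```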

    You need to re-architect the solution: from ISAM to RDBMS,

    This is the only one I totally agree with, but the original problem
    had nothing to do with the language. It had to do with the fact that
    RDBMS wasn't around when COBOL was written. I have been doing COBOL
    and RDBMS since 1980, and it was old code when I got there.

    True.

    But it is still a relevant example of where 1:1 will go wrong. If
    you have a Cobol system using ISAM files, then you do not want to
    convert it to a Java/C++/Go/C# system using ISAM files.

    from vertical app scaling to horizontal app scaling,

    Not really sure what this means. :-)

    You can call it cluster support.

    If you run out of CPU power, then instead of upgrading from a
    big expensive box to a very big, very expensive box you just
    add one more cluster node.

    from 5x16 to
    7x24 operations etc..

    Certainly don't get this. Every place I ever saw COBOL was 24/7 and
    that is going back to at least 1972.

    I would be surprised if you have never experienced a financial
    institution operating with a "transaction will be completed
    next day" model.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bill@bill.gunshannon@gmail.com to comp.os.vms on Sat Jul 12 11:02:27 2025
    From Newsgroup: comp.os.vms

    On 7/12/2025 10:41 AM, Arne Vajhøj wrote:
    On 7/12/2025 9:35 AM, bill wrote:
    On 7/11/2025 8:16 PM, Arne Vajhøj wrote:
    The idea of a 1:1 port is usually bad. Yes - you can implement the
    exact same flow of your Cobol application in Java/C++/Go/C#,
    but that only solves a language problem not an architecture problem.

    The biggest problem with this is the idea of going from a domain-specific
    language to a general-purpose language. While you can write an IS in
    pretty much any language (imagine rewriting the entire government
    payroll currently in COBOL in BASIC!!) there were real advantages to
    having domain-specific languages. But then, no one today seems to even
    consider things like efficiency. Just throw more hardware at the
    problem.

    That argument made sense 40 years ago, but I don't think there
    is much point today - modern languages have the features they
    need, like easy database access and a decimal data type, and the
    missing features, like terminal screens and reporting, are no
    longer needed.

    Jack of all trades, master of none.


    You need to re-architect the solution: from ISAM to RDBMS,

    This is the only one I totally agree with, but the original problem
    had nothing to do with the language. It had to do with the fact that
    RDBMS wasn't around when COBOL was written. I have been doing COBOL
    and RDBMS since 1980, and it was old code when I got there.

    True.

    But it is still a relevant example of where 1:1 will go wrong.

    No one thinks 1:1 is a good idea. Many of us think converting to
    a different language, any different language, is not a good idea
    and carries with it risks that need not be taken. Using the logic
    that conversion is always a good thing, why is anyone still on VMS?
    Why do people stay on VMS? Because in many cases it is the right
    tool for the job. The same can be said about "legacy" languages.

    If
    you have a Cobol system using ISAM files, then you do not want to convert
    it to a Java/C++/Go/C# system using ISAM files.

    If you have a COBOL program using ISAM today it should have been
    converted to DBMS years ago. That does not imply that it should be
    converted to Java/C++/Go/C#. Unless we are talking about trivial
    programs, like balancing your checkbook, there are many potential
    problems in moving a well-functioning "legacy" program to a new
    language. And to be totally honest, no apparent value.
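    The ISAM-versus-RDBMS point can be sketched concretely. Below is a
    hedged illustration (the account records and column names are
    invented; SQLite stands in for whatever RDBMS is actually used):
    a 1:1 port keeps the ISAM idiom of fetching one record by key and
    looping in application code, while a re-architected version pushes
    the set-oriented work into the database.

```python
import sqlite3

# Hypothetical account records, standing in for an ISAM file
# keyed on account number.
records = [("1001", "CHK", 250.00), ("1002", "SAV", 900.00),
           ("1003", "CHK", 125.50)]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account "
            "(acct_no TEXT PRIMARY KEY, type TEXT, balance REAL)")
con.executemany("INSERT INTO account VALUES (?, ?, ?)", records)

# 1:1 port of the ISAM idiom: read one record by key, accumulate in code.
total = 0.0
for acct_no in ("1001", "1003"):
    (balance,) = con.execute(
        "SELECT balance FROM account WHERE acct_no = ?",
        (acct_no,)).fetchone()
    total += balance

# Re-architected: let the database do the set-oriented work.
(total_sql,) = con.execute(
    "SELECT SUM(balance) FROM account WHERE type = 'CHK'").fetchone()

print(total, total_sql)  # 375.5 375.5
```

    The record-at-a-time loop works, but it forfeits the query planner,
    indexing and concurrency control that were the reason for moving to
    an RDBMS in the first place.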


    from vertical app scaling to horizontal app scaling,

    Not really sure what this means. :-)

    You can call it cluster support.

    If you run out of CPU power, then instead of upgrading from a
    big expensive box to a very big, very expensive box you just
    add one more cluster node.

    OK. But I don't see what that has to do with it being written in COBOL.
    Or are you saying that IBM Systems don't scale?


    from 5x16 to
    7x24 operations etc.

    Certainly don't get this. Every place I ever saw COBOL was 24/7 and
    that is going back to at least 1972.

    I would be surprised if you have never experienced a financial
    institution operating with a "transaction will be completed
    next day" model.

    I get that now. That has nothing to do with IT and everything to do
    with people and their being more "legacy" than the IS. I am finally
    starting to see change. My last automatic payment from DFAS wasn't
    really due until a Monday, but the funds showed up on a Saturday.
    Even things that once ran only nightly as "batch" are now processed
    almost immediately. But the people still only work 8 hours a day, 5
    days a week, and it is they who cause the apparent lag in most IT
    processing. Used to be systems went offline for 6-8 hours for backups.
    Today if they go offline at all it is for seconds to minutes. But none
    of this was ever related to the language an IS was written in, and
    rewriting it in Java/C++/Go/C# is not going to improve anything.

    bill

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Sat Jul 12 11:13:48 2025
    From Newsgroup: comp.os.vms

    On 7/12/2025 11:02 AM, bill wrote:
    On 7/12/2025 10:41 AM, Arne Vajhøj wrote:
    On 7/12/2025 9:35 AM, bill wrote:
    On 7/11/2025 8:16 PM, Arne Vajhøj wrote:
    If
    you have a Cobol system using ISAM files, then you do not want to convert
    it to a Java/C++/Go/C# system using ISAM files.

    If you have a COBOL program using ISAM today it should have been
    converted to DBMS years ago. That does not imply that it should be
    converted to Java/C++/Go/C#.

    No.

    But it implies that *if* you are rewriting it then it should also
    be converted from ISAM to RDBMS.

    Not 1:1 conversion.

    from vertical app scaling to horizontal app scaling,

    Not really sure what this means. :-)

    You can call it cluster support.

    If you run out of CPU power, then instead of upgrading from a
    big expensive box to a very big, very expensive box you just
    add one more cluster node.

    OK. But I don't see what that has to do with it being written in COBOL.
    Or are you saying that IBM Systems don't scale?

    Applications are not clusterable by magic - they need to be designed
    for it.

    So again, if you are converting a non-clusterable application, then it
    may be a good opportunity to make it clusterable instead of doing a
    1:1 conversion.

    It is possible to buy pretty powerful systems. But N small systems
    with power 1 are cheaper than 1 huge system with power N. That was
    the case 40 years ago for VAX. It is the case today.
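    The "designed for it" part is the crux. As one minimal sketch of what
    horizontal-scaling-aware design means (node names and the routing
    scheme here are invented for illustration): requests are routed to a
    node by a stable hash of a key, so each node only ever holds its own
    share of state, and capacity grows by adding nodes rather than by
    buying a bigger box.

```python
import hashlib

# Hypothetical cluster of small nodes.
NODES = ["node-a", "node-b", "node-c"]

def route(account_no: str, nodes=NODES) -> str:
    """Deterministically map an account to one cluster node."""
    digest = hashlib.sha256(account_no.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

# The same account always lands on the same node, which is what lets
# each node keep only its own partition of the data.
print(route("1001") == route("1001"))  # True
print(route("1001") in NODES)          # True
```

    A production cluster would use consistent hashing rather than this
    naive mod-N scheme, since mod-N remaps most keys whenever a node is
    added; the point is only that the partitioning has to be designed in,
    not bolted on.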

    from 5x16 to
    7x24 operations etc.

    Certainly don't get this. Every place I ever saw COBOL was 24/7 and
    that is going back to at least 1972.

    I would be surprised if you have never experienced a financial
    institution operating with a "transaction will be completed
    next day" model.

    I get that now. That has nothing to do with IT and everything to do
    with people and their being more "legacy" than the IS. I am finally
    starting to see change. My last automatic payment from DFAS wasn't
    really due until a Monday, but the funds showed up on a Saturday.
    Even things that once ran only nightly as "batch" are now processed
    almost immediately. But the people still only work 8 hours a day, 5
    days a week, and it is they who cause the apparent lag in most IT
    processing. Used to be systems went offline for 6-8 hours for backups.
    Today if they go offline at all it is for seconds to minutes. But none
    of this was ever related to the language an IS was written in, and
    rewriting it in Java/C++/Go/C# is not going to improve anything.

    Again. It impacts the design. If the system is designed to only
    do certain things at a certain time, then the logic in the system
    must be re-designed to do everything as quickly as possible.

    So again again if you rewrite an application, then you want
    to change that logic instead of doing the 1:1 conversion.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Sat Jul 12 13:42:40 2025
    From Newsgroup: comp.os.vms

    On 7/12/2025 1:26 PM, bill wrote:
    On 7/12/2025 11:13 AM, Arne Vajhøj wrote:
    So again again if you rewrite an application, then you want
    to change that logic instead of doing the 1:1 conversion.

    And this, of course, is where we disagree. You see rewrites as
    normal and the best way to go. I see them as usually a waste of
    time, undertaken for the wrong reasons. Because your peers
    at a conference laugh at your legacy system is no reason to rewrite
    it. (And, yes, I have seen senior management want to make major
    and often ridiculous changes based on something their peers said
    over lunch at a conference!!)

    There is a whole discipline dedicated to determining
    if, when and how to modernize IT systems.

    But mistakes are made.

    Attempts are made to modernize some systems even though they should not be.

    Some systems are kept even though they should have been modernized.

    The second is probably more common than the first.

    WuMo:

    https://wumo.com/img/wumo/2020/07/wumo5efeff933b2cb2.74594194.jpg

    Arne




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From bill@bill.gunshannon@gmail.com to comp.os.vms on Sat Jul 12 19:52:02 2025
    From Newsgroup: comp.os.vms

    On 7/12/2025 1:42 PM, Arne Vajhøj wrote:
    On 7/12/2025 1:26 PM, bill wrote:
    On 7/12/2025 11:13 AM, Arne Vajhøj wrote:
    So again again if you rewrite an application, then you want
    to change that logic instead of doing the 1:1 conversion.

    And this, of course, is where we disagree. You see rewrites as
    normal and the best way to go. I see them as usually a waste of
    time, undertaken for the wrong reasons. Because your peers
    at a conference laugh at your legacy system is no reason to rewrite
    it. (And, yes, I have seen senior management want to make major
    and often ridiculous changes based on something their peers said
    over lunch at a conference!!)

    There is a whole discipline dedicated to determining
    if, when and how to modernize IT systems.

    But mistakes are made.

    Attempts are made to modernize some systems even though they should not be.

    Some systems are kept even though they should have been modernized.

    It's funny to see someone say that here. The whole IT world has been
    saying that about VMS for a very long time. I would have thought here
    was the last bastion of "If it ain't broke, don't fix it."


    The second is probably more common than the first.

    "Being common" .NE. "right" .OR. "even necessarily a good idea".


    WuMo:

    https://wumo.com/img/wumo/2020/07/wumo5efeff933b2cb2.74594194.jpg


    bill


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Arne Vajhøj@arne@vajhoej.dk to comp.os.vms on Sat Jul 12 22:32:38 2025
    From Newsgroup: comp.os.vms

    On 7/12/2025 7:52 PM, bill wrote:
    On 7/12/2025 1:42 PM, Arne Vajhøj wrote:
    On 7/12/2025 1:26 PM, bill wrote:
    On 7/12/2025 11:13 AM, Arne Vajhøj wrote:
    So again again if you rewrite an application, then you want
    to change that logic instead of doing the 1:1 conversion.

    And this, of course, is where we disagree. You see rewrites as
    normal and the best way to go. I see them as usually a waste of
    time, undertaken for the wrong reasons. Because your peers
    at a conference laugh at your legacy system is no reason to rewrite
    it. (And, yes, I have seen senior management want to make major
    and often ridiculous changes based on something their peers said
    over lunch at a conference!!)

    There is a whole discipline dedicated to determining
    if, when and how to modernize IT systems.

    But mistakes are made.

    Attempts are made to modernize some systems even though they should not be.

    Some systems are kept even though they should have been modernized.

    It's funny to see someone say that here. The whole IT world has been
    saying that about VMS for a very long time. I would have thought here
    was the last bastion of "If it ain't broke, don't fix it."

    Based on previous discussions then there are several "If it ain't
    broke, don't fix it." people here.

    But I do not consider myself one of them.

    You evaluate benefits, cost and risk of upgrade projects
    and decide based on that analysis.

    And long term it is not so much a question about IF but more
    a question about WHEN and HOW. Is it now or in 3 years or in
    10 years? Just add functionality or rewrite some old parts
    or rewrite everything?

    The second is probably more common than the first.

    "Being common" .NE. "right" .OR. "even necessarily a good idea".

    I don't see how one mistake being more common than another
    mistake relates to right or good idea.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Mon Jul 14 13:05:30 2025
    From Newsgroup: comp.os.vms

    In article <mdfbo5Fu5l5U2@mid.individual.net>,
    bill <bill.gunshannon@gmail.com> wrote:
    On 7/12/2025 10:41 AM, Arne Vajhøj wrote:
    On 7/12/2025 9:35 AM, bill wrote:
    On 7/11/2025 8:16 PM, Arne Vajhøj wrote:
    The idea of a 1:1 port is usually bad. Yes - you can implement the
    exact same flow of your Cobol application in Java/C++/Go/C#,
    but that only solves a language problem not an architecture problem.

    The biggest problem with this is the idea of going from a domain specific
    language to a general purpose language. While you can write an IS in
    pretty much any language (imagine rewriting the entire government
    payroll currently in COBOL in BASIC!!) there were real advantages to
    having domain specific languages. But then, no one today seems to even
    consider things like efficiency. Just throw more hardware at the
    problem.

    That argument made sense 40 years ago, but I don't think there
    is much point today - the modern languages have the features
    they need, like easy database access and a decimal data type, and
    the missing features, like terminal screen handling and reporting, are no
    longer needed.

    Jack of all trades, master of none.

    That's a bad take. It is true that any general-purpose language
    may not be great for a particular task, but it does not follow
    that just because a language is general purpose it will a
    priori be unsuited to a particular domain.

    When I work on a compiler, I prefer a functional language (it's
    so easy to write parsers in, say, OCaml) but when I work on
    kernels, I prefer Rust. I could certainly write a compiler in
    the latter, though I prefer the former for that kind of task.
    Both are GP languages; but they excel at different things.

    COBOL is an interesting case, in particular: it is perhaps best
    to think of it as a DSL for expressing business data processing
    rules, and at that, it excels. But that doesn't mean that other
    languages cannot be gainfully employed to do the same thing. We
    have a few decades of evidence now that, for example, Java can
    be used very effectively here.

    So the "jack of all trades, master of none" quip is just not
    supported by evidence at this point, and indeed, there are
    decades of contradictory evidence showing the claim to be false.

    You need to re-architect the solution: from ISAM to RDBMS,

    This is the only one I totally agree with but the original problem
    had nothing to do with the language.-a It had to do with the fact that
    RDBMS wasn't around when COBOL was written.-a I have been doing COBOL
    and RDBMS since 1980 and it was old code when I got there.

    True.

    But it is still a relevant example of where 1:1 will go wrong.

    No one thinks 1:1 is a good idea. Many of us think converting to
    a different language, any different language, is not a good idea
    and carries with it risk that need not be taken. Using the logic
    that conversion is always a good think, why is anyone still on VMS?
    Why do people stay on VMS? Because in many cases it is the right
    tool for the job. The same can be said about "legacy" languages.

    Programming languages, and operating systems, are fundamentally
    different things. You know this.

    You may believe that it's never a good idea to rewrite a COBOL
    system in some other language, but this ignores that using COBOL
    (or continuing to do so) does carry with it certain risks.

    First of all, it is a very old language, and while it _has_
    been modernized over the years, as you well know, that is only
    relevant if the code bases written in it have been updated to
    reflect the modernization of the language, which is a very
    different thing.

    Second, there is the matter of COBOL programs often being deeply
    entwined with the surrounding system environment. The issue of
    ISAM vs RDBMS has been raised, but that's just one: how about
    VTAM versus other UI paradigms, CICS versus other transaction
    monitors, and all of other surrounding supporting technologies
    (JCL!). IBM, in its infinite wisdom, has made it very difficult
    for new programmers to "break" into the mainframe world.

    Which brings me to point three: if you want to maintain legacy
    COBOL code, you need to bring in programmers who a) are already
    skilled in COBOL development (and maintenance, which is a
    related but different skill) or b) somehow get them trained. OJT
    is one approach, sure, but you've got to find people who are
    willing to be trained, as well: if I were a new grad, or even
    someone out of high school, and I were doing my cost/benefit
    analysis of where I wanted to apply my focus to maximize my
    career potential for the next 10-15 years, it'd be hard to
    justify learning COBOL. Sure, I could probably get a job
    working in a COBOL shop, but my options for future growth would
    be limited to other COBOL shops.

    If you have a Cobol system using ISAM files, then you do not want to convert
    it to a Java/C++/Go/C# system using ISAM files.

    If you have a COBOL program using ISAM today it should have been
    converted to DBMS years ago. That does not imply that it should be
    converted to Java/C++/Go/C#. Unless we are talking about trivial
    programs, like balancing your checkbook, there are many potential
    problems in moving a well-functioning "legacy" program to a new
    language. And to be totally honest, no apparent value.

    It does not imply that, but asserting that it should not should
    be based on a solid argument, and sadly, your argument is
    ignoring legitimate risks associated with leaving a system in
    COBOL. They exist.

    That doesn't automatically imply a system should be rewritten,
    but it doesn't do anyone any good to pretend that those risks
    are just FUD.

    from vertical app scaling to horizontal app scaling,

    Not really sure what this means.-a :-)

    You can call it cluster support.

    If you run out of CPU power, then instead of upgrading from a
    big expensive box to a very big very expensive box then you just
    add a cluster node more.

    OK. But I don't see what that has to do with it being written in COBOL.

    It doesn't. I don't see why one can't horizontally scale a
    system written in COBOL. I don't know if that's the best way to
    go about things, but I don't know that it isn't, either.

    Or are you saying that IBM Systems don't scale?

    That's completely orthogonal. Whether a system scales
    horizontally or vertically is a function of how the
    system was designed. Clearly, applications
    written for mainframes have been capable of horizontal scaling
    for decades.

    A separate question is whether systems designed for IBM
    environments and written in COBOL are typically designed with
    horizontal scaling in mind; I suspect that _most_ are not.

    from 5x16 to
    7x24 operations etc.

    Certainly don't get this. Every place I ever saw COBOL was 24/7 and
    that is going back to at least 1972.

    I would be surprised if you have never experienced a financial
    institution operating with a "transaction will be completed
    next day" model.

    I get that now. That has nothing to do with IT and everything to do
    with people and their being more "legacy" than the IS. I am finally
    starting to see change. My last automatic payment from DFAS wasn't
    really due until a Monday, but the funds showed up on a Saturday.
    Even things that once ran only nightly as "batch" are now processed
    almost immediately. But the people still only work 8 hours a day, 5
    days a week, and it is they who cause the apparent lag in most IT
    processing. Used to be systems went offline for 6-8 hours for backups.
    Today if they go offline at all it is for seconds to minutes. But none
    of this was ever related to the language an IS was written in, and
    rewriting it in Java/C++/Go/C# is not going to improve anything.

    This I agree with, but would add that often these sorts of
    delays are also the byproduct of arcane and outdated regulatory
    or business reasons. For example, banks used to close at what
    felt like ridiculously early hours: 1pm to 3pm or something like
    that. The reason was that this gave the clerks time to balance
    their ledgers to reflect the day's transactions before the end
    of "normal" working hours. So while the bank closed for
    ordinary customer services, it remained open for its own
    business for some number of hours.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Mon Jul 14 14:27:22 2025
    From Newsgroup: comp.os.vms

    In article <104rup7$1mu3j$1@dont-email.me>,
    Stephen Hoffman <seaohveh@hoffmanlabs.invalid> wrote:
    [snip]
    What you've posted has been highlighted before. As has porting VAX/VMS
    to the Mach kernel, which actually happened. (Hi, Chris!) It also
    doesn't appreciably move the operating system work forward. Ports
    ~never do.

    I suppose this depends on what you mean by, "operating system
    work" in this context. If by that you mean doing new feature
    development to support some kind of customer requirement, then
    sure. But if you mean making the OS more portable, then no.

    40 or 50 engineers is far too small for a project of the scale and
    scope of a feature-competitive operating system. For a competitive
    platform, I'd be looking to build (slowly) to 2000, and quite possibly
    more. But that takes revenues and reinvestments.

    How do you figure? 2,000 seems like an order of magnitude too
    many to be actively working on the OS itself.

    The number of contributors to Linux, for example, is huge; my
    simple count is almost 40k. But the volume of involvement seems
    to follow a power law: the vast majority of those have never
    authored more than a handful of commits (or more than 1). The
    number who have authored more than 100 commits is much smaller;
    a little over 2k. More than 500? 475. More than 1,000, which
    is the number that I would consider to be actively working on
    the OS? 220, which seems about right for the Linux kernel's
    size and complexity. I suspect that if my analysis were a bit
    more robust (for instance, combining counts for the same author
    using different email addresses or something) the numbers would
    be even smaller.
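    A count like the one above can be reproduced from the output of
    `git shortlog -sne` run against a kernel checkout. As a small sketch
    (the author names and counts below are invented sample data, not
    real kernel figures):

```python
from collections import Counter

# Sample lines in `git shortlog -sne` format: "<count>\t<author>".
shortlog = """\
  2412\tAlice Example <alice@example.org>
   987\tBob Example <bob@example.org>
     3\tCarol Example <carol@example.org>
     1\tDave Example <dave@example.org>
"""

commits = Counter()
for line in shortlog.splitlines():
    count, author = line.strip().split("\t", 1)
    commits[author] = int(count)

# Bucket authors by commit volume, as in the power-law observation.
over_1000 = sum(1 for c in commits.values() if c > 1000)
over_100 = sum(1 for c in commits.values() if c > 100)
print(over_1000, over_100)  # 1 2
```

    Folding together the same author under different email addresses,
    as suggested above, is what git's `.mailmap` mechanism is for.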

    As an example of scale and scope that ties back to Valve and their
    efforts with Wine and Proton and Steam Deck and other functions, Valve
    may well presently have as many job openings as VSI has engineers:
    https://www.glassdoor.com/Jobs/Valve-Corporation-Jobs-E24849.htm
    https://www.valvesoftware.com/en/

    Looking, many of these are game/audio/hardware/animator/artist
    and business folks, not OS people. I suspect OS folks are in a
    very small minority there.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2