On Thu, 3 Jul 2025 10:56:26 -0400, Arne Vajhøj wrote:
5) The idea of emulating one OS on another OS is questionable
in itself. It is not that difficult to achieve 90-95%
compatibility. But 100% compatibility is very hard. Because
the core OS design tends to spill over into
userland semantics. It is always tricky to emulate *nix
on VMS and it would be tricky to emulate VMS on *nix.
It was always tricky to emulate *nix on proprietary OSes. But emulating proprietary OSes on Linux does actually work a lot better. Look at WINE, which has progressed to the point where it can be the basis of a
successful shipping product (the Steam Deck) that lets users run Windows games without Windows. That works so well, it puts true Windows-based handheld competitors in the shade.
On Sun, 6 Jul 2025 00:36:51 -0000 (UTC), Waldek Hebisch wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
It was always tricky to emulate *nix on proprietary OSes. But
emulating proprietary OSes on Linux does actually work a lot
better. Look at WINE, which has progressed to the point where it
can be the basis of a successful shipping product (the Steam Deck)
that lets users run Windows games without Windows. That works so
well, it puts true Windows-based handheld competitors in the shade.
You mention Wine, but do you know what you are talking about?
Just look at the success of the Steam Deck, and you’ll see.
What went wrong? Clearly VSI hit some difficulties. Public information
indicates that work on compilers took more time than expected (and that
could slow down other work as it depends on working compilers).
Weren’t they using existing code-generation tools like LLVM? That should have saved them a lot of work.
No, the sheer job of reimplementing the entire kernel stack (including custom driver support) on a new architecture was what slowed them down.
And the effort should have been avoided.
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Thu, 3 Jul 2025 10:56:26 -0400, Arne Vajhøj wrote:
5) The idea of emulating one OS on another OS is questionable
in itself. It is not that difficult to achieve 90-95%
compatibility. But 100% compatibility is very hard. Because
the core OS design tends to spill over into
userland semantics. It is always tricky to emulate *nix
on VMS and it would be tricky to emulate VMS on *nix.
It was always tricky to emulate *nix on proprietary OSes. But emulating
proprietary OSes on Linux does actually work a lot better. Look at WINE,
which has progressed to the point where it can be the basis of a
successful shipping product (the Steam Deck) that lets users run Windows
games without Windows. That works so well, it puts true Windows-based
handheld competitors in the shade.
You mention Wine, but do you know what you are talking about? At
the start the Wine project had an idea similar to yours: write a loader
for Windows binaries, redirect system library calls to equivalent
Linux system/library calls, and call it done. The loader part
went smoothly, but they relatively quickly (in around 2 years)
discovered that the devil is in emulating the Windows libraries. The
initial idea of redirecting calls to "equivalent" Linux calls turned out
to be a no-go. Instead, they decided to effectively create a
Windows clone. That is, Wine provides several libraries which
are supposed to behave identically to the corresponding Windows
libraries and use the same interfaces. Only at the lowest level
do they call into Linux libraries. In light of the Wine experience,
the approach taken by VSI is quite natural.
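
A minimal sketch of that approach (nothing from actual Wine source; CreateFileCompat and ReadFileCompat are invented stand-ins, not the real Win32 signatures): the exported interface is shaped like the Windows one, and only the function bodies translate to POSIX.

/* Sketch only: a "clone the library" shim in the Wine style.
 * The names are simplified stand-ins, not real Win32 signatures. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

typedef int HANDLE;                    /* stand-in for the Windows handle type */
#define INVALID_HANDLE_VALUE (-1)

/* Interface shaped like the Windows call; body is plain POSIX. */
static HANDLE CreateFileCompat(const char *path, int writable)
{
    int fd = open(path, writable ? (O_RDWR | O_CREAT) : O_RDONLY, 0644);
    return (fd < 0) ? INVALID_HANDLE_VALUE : fd;
}

static int ReadFileCompat(HANDLE h, void *buf, unsigned len, unsigned *nread)
{
    ssize_t n = read(h, buf, len);
    if (n < 0)
        return 0;                      /* FALSE, the way the Windows call reports failure */
    *nread = (unsigned)n;
    return 1;                          /* TRUE */
}

int main(void)
{
    char buf[128];
    unsigned got = 0;
    HANDLE h = CreateFileCompat("/etc/hostname", 0);

    if (h != INVALID_HANDLE_VALUE && ReadFileCompat(h, buf, sizeof buf - 1, &got)) {
        buf[got] = '\0';
        printf("read %u bytes: %s", got, buf);
    }
    if (h != INVALID_HANDLE_VALUE)
        close(h);
    return 0;
}

The mechanics are the easy part; the years of Wine work go into matching every behavioural corner (error codes, locking, path and handle semantics) of the libraries being cloned.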
Why has the port taken so much time? We do not know. One could
expect that only a small part of the kernel is architecture dependent.
Given that this is the third port, the architecture-dependent parts
should be well known to the developers and clearly separated from
the machine-independent parts. There are probably some
performance-critical libraries written in native assembly (not
Macro32!). Of course compilers (or rather their backends) are
architecture dependent. There is also the question of device drivers:
while they can be architecture independent, the set of devices
available on x86-64 seems to differ from Itanium or Alpha.
Given a 40+ developer team (this seems to correspond to publicly
available information about VSI) and considering 10 kloc/year
developer productivity (I think this is a reasonable estimate for
system-type work), in 4 years VSI could create about
40 x 10,000 x 4 = 1.6 Mloc of new code. We do not know the size of
the VMS kernel, but at first glance 1.6 Mloc is enough to cover the
architecture-dependent parts of VMS. So one could expect a port in
4-5 years, or faster if the architecture-dependent parts are smaller.
IIUC the initial VSI estimate was similar.
What went wrong? Clearly VSI hit some difficulties. Public
information indicates that work on compilers took more time
than expected (and that could slow down other work, as it
depends on working compilers). Note that compilers are
necessary for the success of VMS, and in the compiler work VSI
actually worked close to your suggestion: they reuse an
open-source backend and just add VMS-specific extensions
and frontends. But without knowing what took the time we do
not know if some alternative approach would have worked better.
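
To make that division of labour concrete, here is a generic sketch using the LLVM C API (not VSI code, and with no VMS specifics): a frontend's job is essentially to build IR like this, while optimisation, instruction selection and x86-64 code generation come from the reused open-source backend.

/* Generic sketch of frontend-plus-reused-backend, using the LLVM C API.
 * Build (roughly): cc demo.c $(llvm-config --cflags --ldflags --libs core) */
#include <llvm-c/Core.h>
#include <stdio.h>

int main(void)
{
    LLVMContextRef ctx = LLVMContextCreate();
    LLVMModuleRef  mod = LLVMModuleCreateWithNameInContext("frontend_demo", ctx);

    /* What a frontend emits for: int add(int a, int b) { return a + b; } */
    LLVMTypeRef  i32       = LLVMInt32TypeInContext(ctx);
    LLVMTypeRef  params[2] = { i32, i32 };
    LLVMTypeRef  fnty      = LLVMFunctionType(i32, params, 2, 0);
    LLVMValueRef fn        = LLVMAddFunction(mod, "add", fnty);

    LLVMBasicBlockRef entry = LLVMAppendBasicBlockInContext(ctx, fn, "entry");
    LLVMBuilderRef    b     = LLVMCreateBuilderInContext(ctx);
    LLVMPositionBuilderAtEnd(b, entry);
    LLVMValueRef sum = LLVMBuildAdd(b, LLVMGetParam(fn, 0), LLVMGetParam(fn, 1), "sum");
    LLVMBuildRet(b, sum);

    /* Everything past this point (optimisation passes, instruction
     * selection, register allocation, object emission) is the shared
     * backend's job, not the frontend's. */
    char *ir = LLVMPrintModuleToString(mod);
    puts(ir);

    LLVMDisposeMessage(ir);
    LLVMDisposeBuilder(b);
    LLVMDisposeModule(mod);
    LLVMContextDispose(ctx);
    return 0;
}

The IR plumbing is the easy part; per the above, the VMS-specific extensions and frontends are where the real compiler work sits.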
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sun, 6 Jul 2025 00:36:51 -0000 (UTC), Waldek Hebisch wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
It was always tricky to emulate *nix on proprietary OSes. But
emulating proprietary OSes on Linux does actually work a lot
better. Look at WINE, which has progressed to the point where it
can be the basis of a successful shipping product (the Steam Deck)
that lets users run Windows games without Windows. That works so
well, it puts true Windows-based handheld competitors in the shade.
You mention Wine, but do you know what you are talking about?
Just look at the success of the Steam Deck, and you’ll see.
Well, in a Usenet discussion it is easy to snip/ignore the inconvenient
facts that I gave. In real life such an approach does not work.
What went wrong? Clearly VSI hit some difficulties. Public information
indicates that work on compilers took more time than expected (and that
could slow down other work as it depends on working compilers).
Weren't they using existing code-generation tools like LLVM? That should
have saved them a lot of work.
Should, yes. Yet clearly the compilers were late. You should recalibrate
your estimates of effort. In particular, reusing an independently
developed piece of code frequently involves a lot of work.
No, the sheer job of reimplementing the entire kernel stack (including
custom driver support) on a new architecture was what slowed them down.
And the effort should have been avoided.
There are no indications of substantial reimplementation. Official
info says that new or substantially reworked code is in C. But
we also have information that the amount of Macro32 and Bliss did not
substantially decrease. So (almost all) old code is still in use.
It could be that small changes to old code took a lot of time.
It could be that some new pieces were particularly tricky.
However, you should understand that porting really means replicating
existing behaviour on new hardware. Replicating behaviour gets
more tricky if you change more parts, and especially if you want
to target a high-level interface.
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
No, the sheer job of reimplementing the entire kernel stack (including
custom driver support) on a new architecture was what slowed them down.
And the effort should have been avoided.
There are no indications of substantial reimplementation.
You mention Wine, but do you know what you are talking about? At the
start the Wine project had an idea similar to yours: write a loader for
Windows binaries, redirect system library calls to equivalent Linux
system/library calls and call it done. The loader part
went smoothly, but they relatively quickly (in around 2 years)
discovered that the devil is in emulating the Windows libraries. The
initial idea of redirecting...
Given a 40+ developer team (this seems to correspond to publicly
available information about VSI) and considering 10 kloc/year developer productivity...
...What went wrong? Clearly VSI hit some difficulties...
Some bozo once wrote: "...VSI spends years creating an inevitably-somewhat-incomplete third-party Linux porting kit for
customer OpenVMS apps ...
... and the end goal of the intended customers then
inexorably shifts toward the removal of that porting kit, and probably
in the best case the whole effort inevitably degrades into apps ported
to and running on VSI Linux.
And I'd be willing to bet money VSI will need a number of modifications
to the Linux kernel, too.
What you've posted has been highlighted before. As has porting VAX/VMS
to the Mach kernel, which actually happened.
It also doesn't appreciably move the operating system work forward.
Ports ~never do.
And there is a vendor that already provides custom solutions based on
porting parts of the APIs to another platform, with Sector7. What
Sector7 offers very much parallels Proton and Wine, too.
40 or 50 engineers is far too small for a project of the scale and scope
of a feature-competitive operating system.
For a competitive platform, I'd be looking to build (slowly) to
2000, and quite possibly more. But that takes revenues and
reinvestments.
As an example of scale and scope that ties back to Valve and their
efforts with Wine and Proton and Steam Deck and other functions, Valve
may well presently have as many job openings as VSI has engineers ...
On 2025-07-06 12:52:22 +0000, Waldek Hebisch said:
There are no indications of substantial reimplementation. Official
info says that new or substantially reworked code is in C. But we also
have information that the amount of Macro32 and Bliss did not
substantially decrease. So (almost all) old code is still in use.
It could be that small changes to old code took a lot of time. It
could be that some new pieces were particularly tricky. However, you
should understand that porting really means replicating existing
behaviour on new hardware. Replicating behaviour gets more tricky if
you change more parts, and especially if you want to target a
high-level interface.
You're correct. Reworking existing working code is quite often an
immense mistake.
It usually fails. If not always fails.
And bringing in source-to-source translation tooling or an LLM can be
helpful, and can also introduce new issues and new bugs.
About the only way a global rewrite can succeed (absent a
stratospheric-scale budget for the rewrite, and maybe not even then)
is an incremental rewrite, as the specific modules need more than
trivial modifications.
Reworking a project of the scale of OpenVMS means easily a decade-long
freeze, and for little benefit to VSI.
On 7/11/2025 5:58 PM, Stephen Hoffman wrote:
On 2025-07-06 12:52:22 +0000, Waldek Hebisch said:
There are no indications of substantial reimplementation. Official
info says that new or substantially reworked code is in C. But we
also have information that the amount of Macro32 and Bliss did not
substantially decrease. So (almost all) old code is still in use.
It could be that small changes to old code took a lot of time. It
could be that some new pieces were particularly tricky. However, you
should understand that porting really means replicating existing
behaviour on new hardware. Replicating behaviour gets more tricky if
you change more parts, and especially if you want to target a
high-level interface.
You're correct. Reworking existing working code is quite often an
immense mistake.
It usually fails. If not always fails.
And bringing in source-to-source translation tooling or an LLM can be
helpful, and can also introduce new issues and new bugs.
About the only way a global rewrite can succeed (absent a
stratospheric-scale budget for the rewrite, and maybe not even then)
is an incremental rewrite, as the specific modules need more than
trivial modifications.
Large applications get rewritten all the time.
The failure rate is pretty high, but there are also lots of successes.
Two key factors for success are:
- realistic approach: realistic scope, realistic time frame and
  realistic budget
- good team - latest and greatest development methodology can not
  make a bad team succeed - people with skills and experience are
  needed for big projects
The idea of a 1:1 port is usually bad. Yes - you can implement the
exact same flow of your Cobol application in Java/C++/Go/C#,
but that only solves a language problem, not an architecture problem.
You need to re-architect the solution: from ISAM to RDBMS,
from vertical app scaling to horizontal app scaling,
from 5x16 to 7x24 operations, etc.
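
As a rough sketch of what the ISAM-to-RDBMS point can look like in code (SQLite standing in for whichever RDBMS is chosen; the customers table and its columns are invented for the example): a keyed read against an indexed file becomes a parameterised query.

/* Sketch only: replacing a keyed ISAM READ with an RDBMS lookup.
 * SQLite is a stand-in; "customers", "cust_id", "name" and "balance"
 * are invented names for the example. */
#include <sqlite3.h>
#include <stdio.h>

static int lookup_customer(sqlite3 *db, const char *cust_id)
{
    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_prepare_v2(db,
        "SELECT name, balance FROM customers WHERE cust_id = ?1",
        -1, &stmt, NULL);
    if (rc != SQLITE_OK)
        return rc;

    sqlite3_bind_text(stmt, 1, cust_id, -1, SQLITE_STATIC);

    /* Roughly the record a keyed COBOL READ would have delivered. */
    if (sqlite3_step(stmt) == SQLITE_ROW) {
        const unsigned char *name = sqlite3_column_text(stmt, 0);
        printf("%s %.2f\n",
               name ? (const char *)name : "(null)",
               sqlite3_column_double(stmt, 1));
    }
    return sqlite3_finalize(stmt);
}

int main(void)
{
    sqlite3 *db = NULL;
    if (sqlite3_open("customers.db", &db) == SQLITE_OK)
        lookup_customer(db, "000042");
    sqlite3_close(db);
    return 0;
}

The point of the re-architecture is less the query syntax than what comes with it: concurrent access, transactions and recovery are handled by the database instead of by application code and file locks.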
On 7/11/2025 8:16 PM, Arne Vajhøj wrote:
The idea of a 1:1 port is usually bad. Yes - you can implement the
exact same flow of your Cobol application in Java/C++/Go/C#,
but that only solves a language problem not an architecture problem.
The biggest problem with this is the idea of going from a domain-specific
language to a general-purpose language. While you can write an IS in
pretty much any language (imagine rewriting the entire government
payroll currently in COBOL in BASIC!!) there were real advantages to
having domain-specific languages. But then, no one today seems to even
consider things like efficiency. Just throw more hardware at the
problem.
You need to re-architect the solution: from ISAM to RDBMS,
This is the only one I totally agree with but the original problem
had nothing to do with the language. It had to do with the fact that
RDBMS wasn't around when COBOL was written. I have been doing COBOL
and RDBMS since 1980 and it was old code when I got there.
from vertical app scaling to horizontal app scaling,
Not really sure what this means. :-)
from 5x16 to 7x24 operations, etc.
Certainly don't get this. Every place I ever saw COBOL was 24/7 and
that is going back to at least 1972.
On 7/12/2025 9:35 AM, bill wrote:
On 7/11/2025 8:16 PM, Arne Vajhøj wrote:
The idea of a 1:1 port is usually bad. Yes - you can implement the
exact same flow of your Cobol application in Java/C++/Go/C#,
but that only solves a language problem not an architecture problem.
The biggest problem with this is the idea of going from a domain-specific
language to a general-purpose language. While you can write an IS in
pretty much any language (imagine rewriting the entire government
payroll currently in COBOL in BASIC!!) there were real advantages to
having domain-specific languages. But then, no one today seems to even
consider things like efficiency. Just throw more hardware at the
problem.
That argument made sense 40 years ago, but I don't think there
is much point today - the modern languages have the features they
need, like easy database access and a decimal data type, and the
missing features, like terminal screen handling and reporting, are no
longer needed.
You need to re-architect the solution: from ISAM to RDBMS,
This is the only one I totally agree with but the original problem
had nothing to do with the language. It had to do with the fact that
RDBMS wasn't around when COBOL was written. I have been doing COBOL
and RDBMS since 1980 and it was old code when I got there.
True.
But it is still a relevant example of where 1:1 will go wrong.
If you have a Cobol system using ISAM files, then you do not want to convert
it to a Java/C++/Go/C# system using ISAM files.
from vertical app scaling to horizontal app scaling,
Not really sure what this means. :-)
You can call it cluster support.
If you run out of CPU power, then instead of upgrading from a
big expensive box to a very big very expensive box, you just
add one more cluster node.
from 5x16 to 7x24 operations, etc.
Certainly don't get this. Every place I ever saw COBOL was 24/7 and
that is going back to at least 1972.
I would be surprised if you have never experienced a financial
institution operating with a "transaction will be completed
next day" model.
On 7/12/2025 10:41 AM, Arne Vajhøj wrote:
On 7/12/2025 9:35 AM, bill wrote:
On 7/11/2025 8:16 PM, Arne Vajhøj wrote:
If you have a Cobol system using ISAM files, then you do not want to convert
it to a Java/C++/Go/C# system using ISAM files.
If you have a COBOL program using ISAM today it should have been
converted to DBMS years ago. That does not imply that it should be
converted to JAVA/C++/Go/C#.
from vertical app scaling to horizontal app scaling,
Not really sure what this means. :-)
You can call it cluster support.
If you run out of CPU power, then instead of upgrading from a
big expensive box to a very big very expensive box, you just
add one more cluster node.
OK.-a But I don't see what that has to do with it being written in COBOL.
Or are you saying that IBM Systems don't scale?
from 5x16 to 7x24 operations, etc.
Certainly don't get this. Every place I ever saw COBOL was 24/7 and
that is going back to at least 1972.
I would be surprised if you have never experienced a financial
institution operating with a "transaction will be completed
next day" model.
I get that now. That has nothing to do with IT and everything to do
with people and their being more "legacy" than the IS. I am finally
starting to see change. My last automatic payment from DFAS wasn't
really due until a Monday, but the funds showed up on a Saturday.
Even things that once ran only nightly as "batch" are now processed
almost immediately. But the people still only work 8 hours a day 5
days a week and it is them that cause the apparent lag in most IT
processing. Used to be systems went offline for 6-8 hours for backups.
Today if they go offline at all it is for seconds to minutes. But, none
of this was ever related to the language an IS was written in and
rewriting it in JAVA/C++/Go/C# is not going to improve anything.
On 7/12/2025 11:13 AM, Arne Vajhøj wrote:
So again, if you rewrite an application, then you want
to change that logic instead of doing the 1:1 conversion.
And this, of course, is where we disagree. You see rewrites as
normal and the best way to go. I see them as usually a waste of
time being called on for the wrong reasons. That your peers
at a conference laugh at your legacy system is no reason to rewrite
it. (And, yes, I have seen senior management want to make major
and often ridiculous changes based on something their peers said
over lunch at a conference!!)
On 7/12/2025 1:26 PM, bill wrote:
On 7/12/2025 11:13 AM, Arne Vajhøj wrote:
So again, if you rewrite an application, then you want
to change that logic instead of doing the 1:1 conversion.
And this, of course, is where we disagree. You see rewrites as
normal and the best way to go. I see them as usually a waste of
time being called on for the wrong reasons. That your peers
at a conference laugh at your legacy system is no reason to rewrite
it. (And, yes, I have seen senior management want to make major
and often ridiculous changes based on something their peers said
over lunch at a conference!!)
There is a whole discipline dedicated to determining
if, when and how to modernize IT systems.
But mistakes are made.
Some systems are put through modernization attempts even though they should not be.
Some systems are kept even though they should have been modernized.
The second is probably more common than the first.
WuMo:
https://wumo.com/img/wumo/2020/07/wumo5efeff933b2cb2.74594194.jpg
On 7/12/2025 1:42 PM, Arne Vajhøj wrote:
On 7/12/2025 1:26 PM, bill wrote:
On 7/12/2025 11:13 AM, Arne Vajhøj wrote:
So again, if you rewrite an application, then you want
to change that logic instead of doing the 1:1 conversion.
And this, of course, is where we disagree. You see rewrites as
normal and the best way to go. I see them as usually a waste of
time being called on for the wrong reasons. That your peers
at a conference laugh at your legacy system is no reason to rewrite
it. (And, yes, I have seen senior management want to make major
and often ridiculous changes based on something their peers said
over lunch at a conference!!)
There is a whole discipline dedicated to determining
if, when and how to modernize IT systems.
But mistakes are made.
Some systems are put through modernization attempts even though they should not be.
Some systems are kept even though they should have been modernized.
It's funny to see someone say that here. The whole IT world has been
saying that about VMS for a very long time. I would have thought here
was the last bastion of "If it ain't broke, don't fix it."
The second is probably more common than the first.
"Being common" .NE. "right" .OR. "even necessarily a good idea".
On 7/12/2025 10:41 AM, Arne Vajhøj wrote:
On 7/12/2025 9:35 AM, bill wrote:
On 7/11/2025 8:16 PM, Arne Vajhøj wrote:
The idea of a 1:1 port is usually bad. Yes - you can implement the
exact same flow of your Cobol application in Java/C++/Go/C#,
but that only solves a language problem not an architecture problem.
The biggest problem with this is the idea of going from a domain-specific
language to a general-purpose language. While you can write an IS in
pretty much any language (imagine rewriting the entire government
payroll currently in COBOL in BASIC!!) there were real advantages to
having domain-specific languages. But then, no one today seems to even
consider things like efficiency. Just throw more hardware at the
problem.
That argument made sense 40 years ago, but I don't think there
is much point today - the modern languages have the features they
need, like easy database access and a decimal data type, and the
missing features, like terminal screen handling and reporting, are no
longer needed.
Jack of all trades, master of none.
You need to re-architect the solution: from ISAM to RDBMS,
This is the only one I totally agree with but the original problem
had nothing to do with the language. It had to do with the fact that
RDBMS wasn't around when COBOL was written. I have been doing COBOL
and RDBMS since 1980 and it was old code when I got there.
True.
But it is still a relevant example of where 1:1 will go wrong.
No one thinks 1:1 is a good idea. Many of us think converting to
a different language, any different language, is not a good idea
and carries with it risks that need not be taken. Using the logic
that conversion is always a good thing, why is anyone still on VMS?
Why do people stay on VMS? Because in many cases it is the right
tool for the job. The same can be said about "legacy" languages.
If you have a Cobol system using ISAM files, then you do not want to convert
it to a Java/C++/Go/C# system using ISAM files.
If you have a COBOL program using ISAM today it should have been
converted to DBMS years ago. That does not imply that it should be
converted to JAVA/C++/Go/C#. Unless we are talking about trivial
programs, like balancing your checkbook, there are many potential
problems in moving a well functioning "legacy" program to a new
language. And to be totally honest, no apparent value.
from vertical app scaling to horizontal app scaling,
Not really sure what this means. :-)
You can call it cluster support.
If you run out of CPU power, then instead of upgrading from a
big expensive box to a very big very expensive box, you just
add one more cluster node.
OK. But I don't see what that has to do with it being written in COBOL.
Or are you saying that IBM Systems don't scale?
from 5x16 to 7x24 operations, etc.
Certainly don't get this. Every place I ever saw COBOL was 24/7 and
that is going back to at least 1972.
I would be surprised if you have never experienced a financial
institution operating with a "transaction will be completed
next day" model.
I get that now. That has nothing to do with IT and everything to do
with people and their being more "legacy" than the IS. I am finally
starting to see change. My last automatic payment from DFAS wasn't
really due until a Monday, but the funds showed up on a Saturday.
Even things that once ran only nightly as "batch" are now processed
almost immediately. But the people still only work 8 hours a day 5
days a week and it is them that cause the apparent lag in most IT
processing. Used to be systems went offline for 6-8 hours for backups.
Today if they go offline at all it is for seconds to minutes. But, none
of this was ever related to the language an IS was written in and
rewriting it in JAVA/C++/Go/C# is not going to improve anything.
[snip]
What you've posted has been highlighted before. As has porting VAX/VMS
to the Mach kernel, which actually happened. (Hi, Chris!) It also
doesn't appreciably move the operating system work forward. Ports
~never do.
40 or 50 engineers is far too small for a project of the scale and
scope of a feature-competitive operating system. For a competitive
platform, I'd be looking to build (slowly) to 2000, and quite possibly
more. But that takes revenues and reinvestments.
As an example of scale and scope that ties back to Valve and their
efforts with Wine and Proton and Steam Deck and other functions, Valve
may well presently have as many job openings as VSI has engineers:
https://www.glassdoor.com/Jobs/Valve-Corporation-Jobs-E24849.htm
https://www.valvesoftware.com/en/