Hello,
We have got VMS/XDE (https://products.vmssoftware.com/vms-xde-beta). You
can develop VMS applications on GNU/Linux.
(I learned the term recently, and I apologize for the rudeness.) WTF?
It seems to be a very good technical effort, so perhaps some investment.
I cannot understand the (business) goal. But it seems investment is possible - but not for bare metal :( -.
It seems fun to use. But I don't see for whom this effort is
done - apart from hobbyist enthusiasts -.
Again, I'll grumble. Is it a real way to join the new generations of developers, the Open Source world? With a non-open package that you get
free for n months, and which you have to buy after that?
I need more clarity about all that. Please.
As usual, instructive things from Arne: (https://forum.vmssoftware.com/viewtopic.php?f=45&t=9622&sid=a0364371ebeaeaa5038908bfeb92b4da)
Gérard Calliet
It seems to be a very good technical effort, so perhaps some investment.
I cannot understand the (business) goal. But it seems investment is possible - but not for bare metal :( -.
It seems fun to use. But I don't see for whom this effort is
done - apart from hobbyist enthusiasts -.
This sort of thing seems to have worked pretty well for IBM and
development for z/OS. It seems that even developers for such
non-mainstream environments still want modern creature comforts.
On 2025-10-29, Dennis Boone <drb@ihatespam.msu.edu> wrote:
It seems fun to use. But I don't see for whom this effort is
done - apart from hobbyist enthusiasts -.
Actually, this kind of approach seems a perfectly normal option if you
have any knowledge of embedded systems development. The main difference
is that you are developing applications to run on top of that embedded
system instead of pushing a system image via a JTAG port (for example).
I've long thought VMS systems should be considered as some kind of
a higher-level embedded system where applications are developed locally
and then packaged up and pushed onto the target VMS system. It looks
like VSI are moving in that same direction as well.
This sort of thing seems to have worked pretty well for IBM and
development for z/OS. It seems that even developers for such
non-mainstream environments still want modern creature comforts.
This is the exact example I was going to use until you beat me to it. :-)
When was the last time a 3270 class terminal for serious z/OS software development was acceptable to developers as the only development option ?
On 2025-10-29, Dennis Boone <drb@ihatespam.msu.edu> wrote:
It seems to be a very good technical effort, so perhaps some investment.
I cannot understand the (business) goal. But it seems investment is
possible - but not for bare metal :( -.
The (potential) business goal is obvious if you have a wide enough viewpoint and not just a VMS-specific viewpoint.
And VSI would not be putting the effort into this unless customers
had indicated interest for such an approach.
On 10/29/2025 2:48 PM, Simon Clubley wrote:
On 2025-10-29, Dennis Boone <drb@ihatespam.msu.edu> wrote:
It seems fun to use. But I don't see for whom this effort is
done - apart from hobbyist enthusiasts -.
Actually, this kind of approach seems a perfectly normal option if you
have any knowledge of embedded systems development. The main difference
is that you are developing applications to run on top of that embedded
system instead of pushing a system image via a JTAG port (for example).
I've long thought VMS systems should be considered as some kind of
a higher-level embedded system where applications are developed locally
and then packaged up and pushed onto the target VMS system. It looks
like VSI are moving in that same direction as well.
This sort of thing seems to have worked pretty well for IBM and
development for z/OS. It seems that even developers for such
non-mainstream environments still want modern creature comforts.
This is the exact example I was going to use until you beat me to it. :-)
When was the last time a 3270 class terminal for serious z/OS software
development was acceptable to developers as the only development option ?
Developing on a different OS than the target is totally standard
today.
Yes - the 1/3 of development that is native code has some issues
that need solutions.
But the 2/3 of development that is non-native code (Java, .NET,
Python, JavaScript, PHP etc.) just do it.
The most common setup today must be development on Windows
targeting Linux servers.
Arne
On 29/10/2025 14:48, gcalliet wrote:
We have got VMS/XDE (https://products.vmssoftware.com/vms-xde-beta).
You can develop VMS applications on GNU/Linux.
I am interested, if only to see how it works, so will give the beta a
try. Shame it doesn't support aarch64 - I did think of running it on a modern Pi!
On 10/29/2025 11:19 AM, Chris Townley wrote:
On 29/10/2025 14:48, gcalliet wrote:
We have got VMS/XDE (https://products.vmssoftware.com/vms-xde-beta).
You can develop VMS applications on GNU/Linux.
I am interested, if only to see how it works, so will give the beta a
try. Shame it doesn't support aarch64 - I did think of running it on a
modern Pi!
I think given the architecture that would require VMS ARM64, which
does not exist. Yet.
Arne
On 29/10/2025 23:03, Arne Vajhøj wrote:
On 10/29/2025 11:19 AM, Chris Townley wrote:
On 29/10/2025 14:48, gcalliet wrote:
We have got VMS/XDE (https://products.vmssoftware.com/vms-xde-beta).
You can develop VMS applications on GNU/Linux.
I am interested, if only to see how it works, so will give the beta a
try. Shame it doesn't support aarch64 - I did think of running it on
a modern Pi!
I think given the architecture that would require VMS ARM64, which
does not exist. Yet.
Yep I realised that, but misread the first bit of PR - mea culpa
On 10/29/2025 7:16 PM, Chris Townley wrote:
On 29/10/2025 23:03, Arne Vajhøj wrote:
On 10/29/2025 11:19 AM, Chris Townley wrote:
On 29/10/2025 14:48, gcalliet wrote:
We have got VMS/XDE (https://products.vmssoftware.com/vms-xde-beta).
You can develop VMS applications on GNU/Linux.
I am interested, if only to see how it works, so will give the beta a
try. Shame it doesn't support aarch64 - I did think of running it on
a modern Pi!
I think given the architecture that would require VMS ARM64, which
does not exist. Yet.
Yep I realised that, but misread the first bit of PR - mea culpa
That PR text is rather information-free.
But Aleksandr has explained a little about how it works.
On 29/10/2025 at 19:48, Simon Clubley wrote:
The (potential) business goal is obvious if you have a wide enough viewpoint
and not just a VMS-specific viewpoint.
It's the point, Simon. And somehow Chris says the same thing comparing development for VMS and for z/OS.
And again, if we agree with your opinion viewing VMS as some rich embedded
OS, then again VMS/XDE is worth it.
And again and again, my view is and has always been VMS-specific. VMS as
a specific general OS, indeed.
It seems now, because the strategy used by VSI or its investor has been
for ten years a strategy copied from strategies for legacy OSes (like
z/OS...), that the option of a VMS revival as an alternate OS solution is
almost dead.
And so VMS/XDE is a good way of making business for five or six years
before the real death of VMS. (Because in my opinion, there is no future
for an embedded VMS: not its real market, not competitive in the
embedded market.)
On 2025-10-30, gcalliet <gerard.calliet@pia-sofer.fr> wrote:
On 29/10/2025 at 19:48, Simon Clubley wrote:
The (potential) business goal is obvious if you have a wide enough viewpoint
and not just a VMS-specific viewpoint.
It's the point, Simon. And somehow Chris says the same thing comparing
development for VMS and for z/OS.
And again, if we agree with your opinion viewing VMS as some rich embedded
OS, then again VMS/XDE is worth it.
And again and again, my view is and has always been VMS-specific. VMS as
a specific general OS, indeed.
You keep thinking about the world as it was 20 to 30 years ago, not how
it is today. If VMS is to have any part in today's world, it needs to be
in terms of how the world is today, not a quarter of a century ago.
It seems now, because the strategy used by VSI or its investor has been
for ten years a strategy copied from strategies for legacy OSes (like
z/OS...), that the option of a VMS revival as an alternate OS solution is
almost dead.
z/OS is responsible for keeping a good portion of today's world running.
I would hardly call that a legacy OS.
And so VMS/XDE is a good way of making business for five or six years
before the real death of VMS. (Because in my opinion, there is no future
for an embedded VMS: not its real market, not competitive in the
embedded market.)
Embedded refers to the development method, not the target market.
Giving people the development tools they are asking for extends the
life of VMS instead of reducing it.
How many people still develop for z/OS directly on a 3270 class terminal instead of from a local PC ?
Simon.
Perhaps it's cool to develop on Linux something for VMS. But, because
the licensing is the same hostage-like-for-legacies, I'm not sure we'll
get any interest from new generations of developers.
As I understand it, it is not about saving license cost, but ...
On 2025-10-30, gcalliet <gerard.calliet@pia-sofer.fr> wrote:
It seems now, because the strategy used by VSI or its investor has been
for ten years a strategy copied from strategies for legacy OSes (like
z/OS...), that the option of a VMS revival as an alternate OS solution is
almost dead.
z/OS is responsible for keeping a good portion of today's world running.
I would hardly call that a legacy OS.
Developers are important for an OS!
z/OS is still used for a lot of very important systems.
But it is also an OS that companies are actively moving away from.
I heard at Malmö about "and sometime there will be a new VMS". As a Wine-like layer on Linux, and an interface to the Oracle cloud, I understand that
the best new VMS is just business as usual with no VMS.
Remember Windows Phone? Microsoft was actually paying developers to
put apps on its platform. But in its user experience it was trying
too much to ape Apple, which is why it lost out to Android.
In article <10e0omq$n2t$14@dont-email.me>, ldo@nz.invalid (Lawrence D'Oliveiro) wrote:
Remember Windows Phone? Microsoft was actually paying developers to
put apps on its platform. But in its user experience it was trying
too much to ape Apple, which is why it lost out to Android.
That wasn't the problem. The difficulty was that people didn't actually
want to use Windows Phone.
Microsoft wanted the user experience to be like desktop Windows, but
since doing that directly was clearly impractical, they changed Windows
(at Windows 8) to be their idea of a phone OS. And everyone hated the
Windows 8 user interface, and were thus put off Windows Phone.
Microsoft tried to get my employer to offer our toolkit libraries for
WinRT and Windows RT. We use a domain-specific language that compiles to
C, not C++. It didn't appear to be possible to compile C for WinRT (or
later, for Windows Store apps). The compiler options for that didn't work with C files. Microsoft insisted it was possible, but could never tell us how. We gave up on them, and stuck to producing ordinary Windows DLLs,
Linux .so libraries and macOS dylibs.
In article <10e0omq$n2t$14@dont-email.me>, ldo@nz.invalid (Lawrence D'Oliveiro) wrote:
Remember Windows Phone? Microsoft was actually paying developers to put
apps on its platform. But in its user experience it was trying too much
to ape Apple, which is why it lost out to Android.
That wasn't the problem. The difficulty was that people didn't actually
want to use Windows Phone.
Microsoft wanted the user experience to be like desktop Windows, but
since doing that directly was clearly impractical, they changed Windows
(at Windows 8) to be their idea of a phone OS.
Microsoft tried to get my employer to offer our toolkit libraries for
WinRT and Windows RT.
But there were actually people that liked [Windows Phone].
A phone UI works better on a phone than on a desktop.
On Thu, 30 Oct 2025 15:52:07 -0400, Arne Vajhøj wrote:
Developers are important for an OS!
Users attract developers, not so much the other way round.
Look at iPhone versus Android: Apple's platform was seen as way cooler,
and attracted more of the cool developers. So it got more apps. But
Android offered a wider range of choice and out-of-the-box functionality. That attracted the users. It took years for Android to close the app gap; nevertheless, that wasn't enough to keep iPhone dominant.
Remember Windows Phone? Microsoft was actually paying developers to put
apps on its platform. But in its user experience it was trying too much to ape Apple, which is why it lost out to Android.
On 10/30/2025 6:26 PM, Lawrence D'Oliveiro wrote:
On Thu, 30 Oct 2025 15:52:07 -0400, Arne Vajhøj wrote:
Developers are important for an OS!
Users attract developers, not so much the other way round.
No applications mean no users. Nobody is interested in a platform
with no applications.
Look at iPhone versus Android: Apple's platform was seen as way
cooler, and attracted more of the cool developers. So it got more
apps. But Android offered a wider range of choice and
out-of-the-box functionality. That attracted the users. It took
years for Android to close the app gap; nevertheless, that wasn't
enough to keep iPhone dominant.
It took some years before Android got more millions of apps than
iOS.
But having most millions of apps does not matter. What matters is
that the platform has the apps that are important.
And it did not take long before most of the important
apps supported both Android and iOS.
Remember Windows Phone? But in its user experience it was trying
too much to ape Apple, which is why it lost out to Android.
There were multiple reasons for WP's failure. But the most important
was probably lack of apps.
Microsoft was paying major developers to put apps on its platform.
It didn't help.
Lots of people did buy a WP device. Sales topped around 35
million/year. Still way behind Android and iOS, but not bad.
Companies decided to support iOS and Android.
On Sat, 1 Nov 2025 16:40 +0000 (GMT Standard Time), John Dallman wrote:
Microsoft tried to get my employer to offer our toolkit libraries for
WinRT and Windows RT.
The two were different things, you realize.
On Sat, 1 Nov 2025 17:44:02 -0400, Arne Vajhøj wrote:
But having most millions of apps does not matter. What matters is
that the platform has the apps that are important.
Which ones were important in the beginning? The big ones on iPhone
were simply not available on Android.
Companies decided to support iOS and Android.
Initially it was only iOS. They only added Android *after* it became
popular.
On 11/1/2025 6:13 PM, Lawrence D'Oliveiro wrote:
On Sat, 1 Nov 2025 17:44:02 -0400, Arne Vajhøj wrote:
But having most millions of apps does not matter. What matters is
that the platform has the apps that are important.
Which ones were important in the beginning? The big ones on iPhone
were simply not available on Android.
Companies decided to support iOS and Android.
Initially it was only iOS. They only added Android *after* it became
popular.
That is not reality.
Companies started supporting Android very quickly.
Numbers say: 10000 apps after 1 year, 100000 apps after 2 years.
A half year after the App Store launched, it was obvious that
a company wanting to be on smartphones needed to support both.
On 11/1/2025 4:14 PM, Lawrence D'Oliveiro wrote:
On Sat, 1 Nov 2025 16:40 +0000 (GMT Standard Time), John Dallman wrote:
Microsoft tried to get my employer to offer our toolkit libraries for
WinRT and Windows RT.
The two were different things, you realize.
But deploying on one requires coding for the other, so ...
On Sat, 1 Nov 2025 20:02:30 -0400, Arne Vajhøj wrote:
On 11/1/2025 4:14 PM, Lawrence D'Oliveiro wrote:
On Sat, 1 Nov 2025 16:40 +0000 (GMT Standard Time), John Dallman wrote:
Microsoft tried to get my employer to offer our toolkit libraries for
WinRT and Windows RT.
The two were different things, you realize.
But deploying on one require coding for the other, so ...
Did you know there is no mention of Windows RT in the Wikipedia article on WinRT
<https://en.wikipedia.org/wiki/Windows_Runtime>?
On 10/30/2025 9:12 AM, Simon Clubley wrote:
On 2025-10-30, gcalliet <gerard.calliet@pia-sofer.fr> wrote:
It seems now, because the strategy used by VSI or its investor has been
for ten years a strategy copied from strategies for legacy OSes (like
z/OS...), that the option of a VMS revival as an alternate OS solution is
almost dead.
z/OS is responsible for keeping a good portion of today's world running.
I would hardly call that a legacy OS.
z/OS is still used for a lot of very important systems.
But it is also an OS that companies are actively
moving away from.
On 2025-10-30, Arne Vajhøj <arne@vajhoej.dk> wrote:
On 10/30/2025 9:12 AM, Simon Clubley wrote:
z/OS is responsible for keeping a good portion of today's world running.
I would hardly call that a legacy OS.
z/OS is still used for a lot of very important systems.
But it is also an OS that companies are actively
moving away from.
Interesting. I can see how some people on the edges might be considering
such a move, but at the very core of the z/OS world are companies that
I thought such a move would be absolutely impossible to consider.
What are they moving to, and how are they satisfying the extremely high constraints both on software and hardware availability, failure detection, and recovery that z/OS and its underlying hardware provides ?
z/OS has a unique set of capabilities when it comes to the absolutely critical this _MUST_ continue working or the country/company dies area.
Likewise, to replace z/OS, any replacement hardware and software must also have the same unique capabilities that z/OS, and the hardware it runs on, has. What is the general ecosystem, at both software and hardware level,
that these people are moving to ?
Mainframes were unique in the last century regarding integrity, availability
and performance, but not today.
Standard distributed environment, load-sharing (horizontal scaling) applications, standard RDBMS with transaction and XA transaction
support, auto-scaling VM or container solutions, massively scalable
NoSQL databases.
It can be made to work.
It can also be made not to work, but ....
On 2025-11-03, Arne Vajhøj <arne@vajhoej.dk> wrote:
Mainframes were unique in the last century regarding integrity, availability
and performance, but not today.
Standard distributed environment, load-sharing (horizontal scaling)
applications, standard RDBMS with transaction and XA transaction
support, auto-scaling VM or container solutions, massively scalable
NoSQL databases.
It can be made to work.
It can also be made to _appear_ to work. And probably will, at least in
the short term.
It can also be made not to work, but ....
I've been thinking quite a bit recently about just how bad monocultures
and short term thinking can be from a society being able to continue functioning point of view. Just look at the massive damage done by
attacks on major companies here in the UK over the last year, all of
which should not have had single points of failure like that. :-(
Simon.
I think you would need to wrap:
--(call)--> [WinRT API in .winmd file] C++/CX wrapper component --(call)--> C Win32 DLL
For a long time, you couldn't use the full C/C++ run-time in a Windows
Store app.
In article <10e5ei3$1dacc$1@dont-email.me>, arne@vajhoej.dk (Arne Vajhøj) wrote:
I think you would need to wrap:
--(call)--> [WinRT API in .winmd file] C++/CX wrapper component --(call)--> C Win32 DLL
If they'd told us that, we'd have considered it. But they just insisted
we could compile directly, without telling us how.
For a long time, you couldn't use the full C/C++ run-time in a Windows
Store app. They eventually changed that, and at the same time allowed ordinary WIN32 apps into the store. So all interest in producing apps
that complied with the Windows Store limitations vanished.
C++/CX may have been the dialect that one of their consultants insisted
we had to support, but could not tell us why, except that customers would want it. Since the customer who wanted the product in question didn't
want it, we were sceptical. Eventually he admitted that he got bonuses
for getting ISVs to do this, so we stopped listening to him.
I was aware this was going on, but not to this level. So, in the name of {short term whatever}, yet another chunk of the critical infrastructure
that keeps this planet running is in the process of being added to the massive monoculture that is a single point of failure when a vulnerability
or flaw is discovered. :-(
People thought the public cloud service failures were bad. That's going
to be nothing compared to what happens if an enemy (state level or otherwise) decides to cripple our way of life and now has massive nice juicy targets
to take down, all of which are running the same technology infrastructure.
These people are thinking about how they can make profit for their companies in the short term. I'm thinking that perhaps society should be forcing them instead to design things so that they can keep society running even when
they are under attack.
A society that allows critical systems to move towards a single monoculture without any backup systems or other redundancy is a society that has lost
the plot.
When the STS computers were being designed, NASA went through a massive formal process to validate and verify them. Even after all that, they
_still_ added a 5th computer system designed by a different team in case something happened to the primary systems that they had missed.
If you are important enough to provide services that help keep society running, then you should be forced to do the same. The question isn't
about how much this extra infrastructure costs, but is instead about the
cost to society if you don't do it.
I've been thinking quite a bit recently about just how bad monocultures
and short term thinking can be from a society being able to continue functioning point of view. Just look at the massive damage done by
attacks on major companies here in the UK over the last year, all of
which should not have had single points of failure like that. :-(
Steady on, old chap, going on like that, about the cloud-computing
clown-car, will get you setting up a chapter, cluster node, of the VMS Generations group, tout de suite, stat! :-)
On 2025-11-04, Subcommandante XDelta <vlf@star.enet.dec.com> wrote:
Steady on, old chap, going on like that, about the cloud-computing
clown-car, will get you setting up a chapter, cluster node, of the VMS
Generations group, tout de suite, stat! :-)
Cloud computing has its place, in some situations at least, provided
it is only a _part_ of a larger ecosystem and _if_ there are disaster recovery procedures in place in case it becomes unavailable.
On 2025-10-30, Arne Vajhøj <arne@vajhoej.dk> wrote:
On 10/30/2025 9:12 AM, Simon Clubley wrote:
On 2025-10-30, gcalliet <gerard.calliet@pia-sofer.fr> wrote:
It seems now, because the strategy used by VSI or its investor has been
for ten years a strategy copied from strategies for legacy OSes (like
z/OS...), that the option of a VMS revival as an alternate OS solution is
almost dead.
z/OS is responsible for keeping a good portion of today's world running.
I would hardly call that a legacy OS.
z/OS is still used for a lot of very important systems.
But it is also an OS that companies are actively
moving away from.
Interesting. I can see how some people on the edges might be considering
such a move, but at the very core of the z/OS world are companies that
I thought such a move would be absolutely impossible to consider.
What are they moving to, and how are they satisfying the extremely high
constraints both on software and hardware availability, failure detection,
and recovery that z/OS and its underlying hardware provides ?
z/OS has a unique set of capabilities when it comes to the absolutely
critical this _MUST_ continue working or the country/company dies area.
In the VMS world, VMS disaster tolerant clusters were literally a generation
ahead of what everyone else had, as it took 20 years for rivals to be able
to match the fully shared-everything disaster tolerant functionality that
VMS has.
Likewise, to replace z/OS, any replacement hardware and software must also
have the same unique capabilities that z/OS, and the hardware it runs on,
has. What is the general ecosystem, at both software and hardware level,
that these people are moving to ?
In article <10eaaqr$2sqg0$1@dont-email.me>,
Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
On 2025-10-30, Arne Vajhøj <arne@vajhoej.dk> wrote:
On 10/30/2025 9:12 AM, Simon Clubley wrote:
On 2025-10-30, gcalliet <gerard.calliet@pia-sofer.fr> wrote:
It seems now, because the strategy used by VSI or its investor has been
for ten years a strategy copied from strategies for legacy OSes (like
z/OS...), that the option of a VMS revival as an alternate OS solution is
almost dead.
z/OS is responsible for keeping a good portion of today's world running.
I would hardly call that a legacy OS.
z/OS is still used for a lot of very important systems.
But it is also an OS that companies are actively
moving away from.
Interesting. I can see how some people on the edges might be considering
such a move, but at the very core of the z/OS world are companies that
I thought such a move would be absolutely impossible to consider.
What are they moving to, and how are they satisfying the extremely high
constraints both on software and hardware availability, failure detection,
and recovery that z/OS and its underlying hardware provides ?
z/OS has a unique set of capabilities when it comes to the absolutely
critical this _MUST_ continue working or the country/company dies area.
I'm curious: what, in your view, are those capabilities?
Likewise, to replace z/OS, any replacement hardware and software must also
have the same unique capabilities that z/OS, and the hardware it runs on,
has. What is the general ecosystem, at both software and hardware level,
that these people are moving to ?
I think a bigger issue is lock-in. We _know_ how to build
performant, reliable, distributed systems. What we don't seem
able to collectively do is migrate away from 50 years of history
with proprietary technology. Mainframes work, they're reliable,
and they're low-risk. It's dealing with the ISAM, CICS, VTAM,
DB2, COBOL extensions, etc, etc, etc, that are slowing migration
off of them because that's migrating to a fundamentally
different model, which is both hard and high-risk.
As for the cloud, the number of organizations moving back
on-prem for very good reasons shouldn't be discounted.
In article <10eaaqr$2sqg0$1@dont-email.me>,
Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
z/OS has a unique set of capabilities when it comes to the absolutely
critical this _MUST_ continue working or the country/company dies area.
I like the whole CICS transaction functionality and failure recovery model.
BTW, what is the general replacement for CICS transaction processing and
how does the replacement functionality compare to CICS ?
On 2025-11-07, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
As for the cloud, the number of organizations moving back
on-prem for very good reasons shouldn't be discounted.
Yes, and I hope the latest batch of critical system movers do not
repeat those same mistakes.
On 11/10/2025 9:12 AM, Simon Clubley wrote:
Question: are they low-risk because they were designed to do one thing
and to do it very well in extremely demanding environments ?
Are the replacements higher-risk because they are more of a generic
infrastructure and the mission critical workloads need to be force-fitted
into them ?
And here you finally hit the crux of the matter.
People wonder why I am still a strong supporter of COBOL.
The reason is simple. It was a language designed to do
a particular task and it does it well. Now we have this
desire to replace it with something generic. I feel this
is a bad idea.
Think of IBM as the same problem, only on a much grander scale.
Not just a language but a whole system with a target in mind.
And today you have people suggesting they replace that system
with something totally generic. Why would that be a good idea?
On 2025-11-07, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
In article <10eaaqr$2sqg0$1@dont-email.me>,
Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
On 2025-10-30, Arne Vajhøj <arne@vajhoej.dk> wrote:
On 10/30/2025 9:12 AM, Simon Clubley wrote:
On 2025-10-30, gcalliet <gerard.calliet@pia-sofer.fr> wrote:
It seems now, because the strategy used by VSI or its investor has been
for ten years a strategy copied from strategies for legacy OSes (like
z/OS...), that the option of a VMS revival as an alternate OS solution is
almost dead.
z/OS is responsible for keeping a good portion of today's world running.
I would hardly call that a legacy OS.
z/OS is still used for a lot of very important systems.
But it is also an OS that companies are actively
moving away from.
Interesting. I can see how some people on the edges might be considering
such a move, but at the very core of the z/OS world are companies that
I thought such a move would be absolutely impossible to consider.
What are they moving to, and how are they satisfying the extremely high
constraints both on software and hardware availability, failure detection,
and recovery that z/OS and its underlying hardware provides ?
z/OS has a unique set of capabilities when it comes to the absolutely
critical this _MUST_ continue working or the country/company dies area.
I'm curious: what, in your view, are those capabilities?
That's a good question. I am hard pressed to identify one single feature,
but I can identify a range of features that, when combined, help to produce a solid, robust system for mission-critical computing.
For example, I like the predictive failure analysis capabilities (and I wish VMS had something like that).
I like the multiple levels of hardware failure detection and automatic recovery without system downtime.
I like the way the hardware and z/OS and layered-products software are tightly integrated into a coherent whole.
I like the way the software was designed with a very tight single-minded focus on providing certain capabilities in highly demanding environments instead of in some undirected rambling evolution.
I like the way the hardware and software have evolved, in a designed way,
to address business needs, without becoming bloated (unlike modern software stacks). A lean system has many fewer failure points and fewer points of vulnerability than a bloated system.
I like the whole CICS transaction functionality and failure recovery model.
Likewise, to replace z/OS, any replacement hardware and software must also
have the same unique capabilities that z/OS, and the hardware it runs on,
has. What is the general ecosystem, at both software and hardware level,
that these people are moving to ?
I think a bigger issue is lock-in. We _know_ how to build
performant, reliable, distributed systems. What we don't seem
able to collectively do is migrate away from 50 years of history
with proprietary technology. Mainframes work, they're reliable,
and they're low-risk. It's dealing with the ISAM, CICS, VTAM,
DB2, COBOL extensions, etc, etc, etc, that are slowing migration
off of them because that's migrating to a fundamentally
different model, which is both hard and high-risk.
Question: are they low-risk because they were designed to do one thing
and to do it very well in extremely demanding environments ?
Are the replacements higher-risk because they are more of a generic infrastructure and the mission-critical workloads need to be force-fitted into them ?
What are they moving to, and how are they satisfying the extremely high constraints both on software and hardware availability, failure detection, and recovery that z/OS and its underlying hardware provides ?
z/OS has a unique set of capabilities when it comes to the absolutely critical this _MUST_ continue working or the country/company dies area.
Mainframes are low-risk because change is risky. That is, if you
wanted to port some modern software to z/OS, then there would be
risk.
Note that even though z/OS and mainframes generally have a good
track record regarding availability, they are not a magic
solution - they can also have problems.
Well, Cobol represents the practices of 1960s business data processing.
At that time it was state of the art. But the state of the art changed.
Cobol somewhat adapted, but was slow to do so.
On Tue, 11 Nov 2025 10:50:57 -0500, Arne Vajhøj wrote:
Note that even though z/OS and mainframes generally have a good
track record regarding availability, they are not a magic
solution - they can also have problems.
Mainframes were never designed for high availability. It was normal to
run them 24/7, simply to try to get as much as possible out of them
because they are/were so expensive to buy. But it was no big deal if
they had to be taken down for, say, an hour a week for "preventive maintenance" or to switch OSes or whatever.
On Tue, 11 Nov 2025 15:23:29 -0000 (UTC), Waldek Hebisch wrote:
Well, Cobol represents the practices of 1960s business data processing.
At that time it was state of the art. But the state of the art changed.
Cobol somewhat adapted, but was slow to do so.
The example I like to mention is the rise of the SQL DBMS. These
became very important for "business data processing" use in the 1980s.
But the best way to interface to one of these is by dynamically
generating SQL command strings.
And guess what: dynamic string handling is something that was specifically left out of COBOL, because
it was not seen as important for "business" use.
On 11/11/2025 3:59 PM, Lawrence D'Oliveiro wrote:
On Tue, 11 Nov 2025 15:23:29 -0000 (UTC), Waldek Hebisch wrote:
Well, Cobol represents the practices of 1960s business data processing.
At that time it was state of the art. But the state of the art changed.
Cobol somewhat adapted, but was slow to do so.
The example I like to mention is the rise of the SQL DBMS. These
became very important for rCLbusiness data processingrCY use in the 1980s.
Yes.
And the preferred languages were Cobol and PL/I.
But the best way to interface to one of these is by dynamically
generating SQL command strings.
If you are writing a hobby program the math looks like:
dynamic SQL strings : 2 minutes of work to write code
the right way : 30 minutes of work to write code
If you are writing a program for doing account operations in
a bank expect:
dynamic SQL strings : 2 minutes of work to write code + 60 minutes
review time for each of 5 senior engineers
the right way : 30 minutes of work to write code
And guess what: dynamic string
handling is something that was specifically left out of COBOL, because
it was not seen as important for "business" use.
Nonsense.
Cobol does dynamic string handling just fine.
Not as good as Java, Python, PHP and other newer languages.
But better than Fortran, C and many other common languages
back then.
(and I believe we have told you so before)
On 12/11/2025 00:57, Arne Vajhøj wrote:
On 11/11/2025 3:59 PM, Lawrence D'Oliveiro wrote:
And guess what: dynamic string
handling is something that was specifically left out of COBOL, because
it was not seen as important for "business" use.
Nonsense.
Cobol does dynamic string handling just fine.
Not as good as Java, Python, PHP and other newer languages.
But better than Fortran, C and many other common languages
back then.
(and I believe we have told you so before)
Basic does it fairly well
Cobol does dynamic string handling just fine.
Voila. A Cobol program using embedded SQL vulnerable to SQL injection.
HA is about whether the system can continue to serve users in case part
of a box or an entire box fails - 24x7 vs 16x5 is about architecture.
On Tue, 11 Nov 2025 18:56:53 -0500, Arne Vajhøj wrote:
HA is about whether the system can continue to serve users in case part
of a box or an entire box fails - 24x7 vs 16x5 is about architecture.
High availability is measured in "nines" -- e.g. five nines, six nines ...
even seven nines.
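(For scale, a rough Python sketch, illustrative only, of the yearly downtime budget each count of nines implies:)

    # Downtime budget per year implied by an availability of n nines.
    for n in range(3, 8):
        availability = 1 - 10 ** -n
        downtime_seconds = (1 - availability) * 365.25 * 24 * 3600
        print(f"{n} nines = {availability:.7f} -> {downtime_seconds / 60:.2f} min/year")

Five nines works out to about 5.26 minutes of downtime per year.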
How do big enterprises (like Google) achieve that? By not using
mainframes. They set up data centres full of off-the-shelf PC hardware --
one article I remember from over a decade ago said that Google, at that
time, had 460,000 servers.
All the hardware is obtained as cheaply as possible, except one component:
the power supply. They buy quality for that, for power-efficiency reasons.
As for the rest, it doesn't matter if a box falls over every minute, or a
hard drive crashes every few minutes; they have higher-level redundancy
and recovery procedures that can routinely recover from all those
failures, without the users ever noticing.
No mainframe can match that.
On Tue, 11 Nov 2025 19:57:54 -0500, Arne Vajhøj wrote:
Cobol does dynamic string handling just fine.
Try using it to construct an ad-hoc SQL query based on a set of fields
that a user might or might not fill in (i.e. omitting the ones left
blank), and you'll see what I mean.
To build dynamic SQL strings you need support for a few
basic features:
* loops
* conditional blocks
* string concatenation
Cobol does support that.
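For illustration, a minimal sketch of that pattern in Python (hypothetical customers table and build_query helper; the loop, the conditional and the concatenation are the same moves a Cobol program would make with PERFORM, IF and STRING):

    import sqlite3

    def build_query(filters):
        # Assemble an ad-hoc SELECT from whichever fields were filled in.
        # Only placeholders are concatenated into the SQL text; the values
        # travel separately as parameters, so the structure is dynamic but
        # the data cannot inject SQL.
        sql = "SELECT * FROM customers"
        clauses, params = [], []
        for column, value in filters.items():    # loop
            if value is not None:                # conditional block
                clauses.append(column + " = ?")  # string concatenation
                params.append(value)
        if clauses:
            sql += " WHERE " + " AND ".join(clauses)
        return sql, params

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (name TEXT, city TEXT, active INTEGER)")
    sql, params = build_query({"name": None, "city": "Boston", "active": 1})
    print(sql)  # SELECT * FROM customers WHERE city = ? AND active = ?
    print(conn.execute(sql, params).fetchall())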
On 11/11/2025 10:56 PM, Lawrence D'Oliveiro wrote:
No mainframe can match that.
Of course mainframes can match that.
IBM mainframes use OS clustering (like VMS) called
SysPlex.
On Wed, 12 Nov 2025 15:12:40 -0500, Arne Vajhøj wrote:
To build dynamic SQL strings you need support for a few
basic features:
* loops
* conditional blocks
* string concatenation
Cobol does support that.
But not arbitrary-length dynamic strings.
And not functional constructs that let you put the loops and
conditionals inside the string-construction expression.
On Wed, 12 Nov 2025 14:54:13 -0500, Arne Vajhøj wrote:
On 11/11/2025 10:56 PM, Lawrence D'Oliveiro wrote:
No mainframe can match that.
Of course mainframes can match that.
Nobody can afford to buy enough mainframes to match that.
IBM mainframes use OS clustering (like VMS) called
SysPlex.
Do either of those scale to 460,000 nodes?
No, they don't.
On 12/11/2025 21:02, Lawrence D'Oliveiro wrote:
On Wed, 12 Nov 2025 14:54:13 -0500, Arne Vajhøj wrote:
IBM mainframes use OS clustering (like VMS) called
SysPlex.
Do either of those scale to 460,000 nodes?
No, they don't.
Why won't it scale to 460,000 nodes?
Why would you need that many nodes,
well unless you are Google?
p.s. No one should assume the world stands still. A virtual Intel/X64
cluster has nothing in common with a PC from the 1990s. A current IBM
mainframe has little in common with an S/360 from the 1960s EXCEPT that the
modern mainframe will run user-mode 24-bit code from the start of time.
On Wed, 12 Nov 2025 21:43:40 +0000, David Wade wrote:
Why won't it scale to 460,000 nodes?
Because a cluster of on the order of tens of machines (like your SysPlex
and VMScluster) can depend on algorithms with polynomial complexity, that would no longer be practicable when you have hundreds of thousands of
nodes.
Why would you need that many nodes, well unless you are Google?
All the hyperscalers are running clusters of that sort of size.
And not just them. Supercomputers are now built out of millions of nodes, with the added twist of having a high-speed interconnect.
Let's see you build a SysPlex or VMScluster on that sort of scale ...
p.s. No one should assume the world stands still. A virtual Intel/X64
cluster has nothing in common with a PC from the 1990s. A current IBM
mainframe has little in common with an S/360 from the 1960s EXCEPT that the
modern mainframe will run user-mode 24-bit code from the start of time.
Did you know that when Debian boots on an IBM mainframe, it has to pretend
it's getting punched cards from a card reader?
"World doesn't stand still" and "little in common with the 1960s" my
bum ...
In article <10esrru$1qu6$1@dont-email.me>,
Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
On 2025-11-07, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
As for the cloud, the number of organizations moving back
on-prem for very good reasons shouldn't be discounted.
Yes, and I hope the latest batch of critical system movers do not
repeat those same mistakes.
I'm not sure what mistakes you're referring to, but let's hope
that system maintainers make fewer mistakes generally. :-D
On 13/11/2025 02:45, Lawrence D'Oliveiro wrote:
On Wed, 12 Nov 2025 21:43:40 +0000, David Wade wrote:
Why won't it scale to 460,000 nodes?
Because a cluster of on the order of tens of machines (like your SysPlex
and VMScluster) can depend on algorithms with polynomial complexity, that
would no longer be practicable when you have hundreds of thousands of
nodes.
Why would you need that many nodes, well unless you are google?
All the hyperscalers are running clusters of that sort of size.
Are they really tightly coupled clusters, or load-balanced front ends...
On 11/12/2025 4:02 PM, Lawrence D'Oliveiro wrote:
On Wed, 12 Nov 2025 14:54:13 -0500, Arne Vajhøj wrote:
On 11/11/2025 10:56 PM, Lawrence D'Oliveiro wrote:
No mainframe can match that.
Of course mainframes can match that.
Nobody can afford to buy enough mainframes to match that.
IBM mainframes use OS clustering (like VMS) called SysPlex.
Do either of those scale to 460,000 nodes?
No, they don't.
I believe the topic was whether mainframes can achieve the availability
- the required number of nines. They can.
On Wed, 12 Nov 2025 16:10:39 -0500, Arne Vajhøj wrote:
I believe the topic was whether mainframes can achieve the availability
- the required number of nines. They can.
No they can't. Mainframes were never designed for high availability.
How many nines does IBM offer?
Hint: look at this intro from IBM itself
<https://www.ibm.com/think/topics/high-availability>. Do they mention
their own mainframes? No. Do they mention cloud and Linux companies? Yes.
On 11/11/2025 10:56 PM, Lawrence D'Oliveiro wrote:
As for the rest, it doesn't matter if a box falls over every minute, or a
hard drive crashes every few minutes; they have higher-level redundancy
and recovery procedures that can routinely recover from all those
failures, without the users ever noticing.
No mainframe can match that.
Of course mainframes can match that.
The fundamental mechanism is the same for mainframes and
let us call it modern distributed environments.
You need N systems running to handle load. There is
a probability Pd of one system becoming unavailable.
You want Pr probability of handling the load.
You can calculate how many systems M you need to
achieve that.
N is smaller, Pd is smaller and the cost of a
system is much bigger for mainframes than for
x86-64 servers.
But the formula is the same. You can do the math.
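A back-of-the-envelope version of that math in Python (systems_needed and p_enough_up are hypothetical helpers; the numbers are purely illustrative):

    from math import comb

    def p_enough_up(m, n, pd):
        # Probability that at least n of m independent systems are up,
        # when each system is unavailable with probability pd.
        return sum(comb(m, k) * (1 - pd) ** k * pd ** (m - k)
                   for k in range(n, m + 1))

    def systems_needed(n, pd, pr):
        # Smallest m giving at least n systems up with probability >= pr.
        m = n
        while p_enough_up(m, n, pd) < pr:
            m += 1
        return m

    # Illustrative: need the capacity of N=10 systems, each down 1% of
    # the time, with five nines of probability that capacity is there.
    print(systems_needed(10, 0.01, 0.99999))  # -> 13 for these inputs

Plugging in a mainframe-like profile (smaller N, smaller Pd, costlier boxes) versus an x86-64 farm is then just a matter of changing the inputs.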
On 11/12/2025 4:01 PM, Lawrence D'Oliveiro wrote:
On Wed, 12 Nov 2025 15:12:40 -0500, Arne Vajhøj wrote:
To build dynamic SQL strings you need support for a few basic
features:
* loops
* conditional blocks
* string concatenation
Cobol does support that.
But not arbitrary-length dynamic strings.
And not functional constructs that let you put the loops and
conditionals inside the string-construction expression.
True.
But that does not impact whether you can do it in Cobol.
It just impacts how many lines of code you need to do it.
On 13/11/2025 02:45, Lawrence D'Oliveiro wrote:
Did you know that when Debian boots on an IBM mainframe, it has to
pretend it's getting punched cards from a card reader?
It does not "have to" pretend it's cards, it's just convenient to do so.
How is this different from a VMware cluster having to pretend it's
booting from a CD?
On Wed, 12 Nov 2025 16:06:56 -0500, Arne Vajhøj wrote:
On 11/12/2025 4:01 PM, Lawrence D'Oliveiro wrote:
On Wed, 12 Nov 2025 15:12:40 -0500, Arne Vajhøj wrote:
To build dynamic SQL strings you need support for a few basic
features:
* loops
* conditional blocks
* string concatenation
Cobol does support that.
But not arbitrary-length dynamic strings.
And not functional constructs that let you put the loops and
conditionals inside the string-construction expression.
True.
But that does not impact whether you can do it in Cobol.
It just impacts how many lines of code you need to do it.
More code means more work to write and maintain, and more chance for bugs
to get in.
Remember, this stuff is already a well-known source of security vulnerabilities. The last thing you need is more maintenance headaches.
But for routine database queries I want fixed query structure with
data filling slots. Which is provided by embedded SQL and several alternatives. I do not want arbitrary strings as queries: with fixed
query structure correctness is not hard, with dynamic strings one
needs to consider a lot of weird corner cases.
Of course, for ad hoc queries you need dynamic query structure,
but the ability to specify query structure should be limited to trusted
users.
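A minimal sketch of that distinction in Python, assuming a hypothetical accounts table:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER, owner TEXT, balance REAL)")

    owner = "O'Brien"  # awkward (or hostile) input

    # Fixed query structure with data filling slots: the SQL text never
    # changes, so there are no quoting corner cases to reason about.
    rows = conn.execute(
        "SELECT id, balance FROM accounts WHERE owner = ?", (owner,)
    ).fetchall()
    print(rows)

    # Dynamic string as query: the structure now depends on the data, and
    # this particular input already breaks the quoting if uncommented.
    # conn.execute(f"SELECT id, balance FROM accounts WHERE owner = '{owner}'")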
On Sat, 15 Nov 2025 00:24:04 -0000 (UTC), Waldek Hebisch wrote:
But for routine database queries I want fixed query structure with
data filling slots. Which is provided by embedded SQL and several
alternatives. I do not want arbitrary strings as queries: with fixed
query structure correctness is not hard, with dynamic strings one
needs to consider a lot of weird corner cases.
True enough. Fine for canned reports, standard batch processing runs
etc. Except COBOL never had any official standard, did it, for these
"EXEC SQL" templates.
Of course, for ad hoc queries you need dynamic query structure,
but the ability to specify query structure should be limited to trusted
users.
Not if the query is written correctly, which is not hard to do.
On 11/14/2025 9:41 PM, Lawrence D'Oliveiro wrote:
On Sat, 15 Nov 2025 00:24:04 -0000 (UTC), Waldek Hebisch wrote:
But for routine database queries I want fixed query structure with
data filling slots. Which is provided by embedded SQL and several
alternatives. I do not want arbitrary strings as queries: with
fixed query structure correctness is not hard, with dynamic
strings one needs to consider a lot of weird corner cases.
True enough. Fine for canned reports, standard batch processing
runs etc. Except COBOL never had any official standard, did it, for
these "EXEC SQL" templates.
ISO 9075 part 2
Of course, for ad hoc queries you need dynamic query structure,
but the ability to specify query structure should be limited to
trusted users.
Not if the query is written correctly, which is not hard to do.
C programs do not have memory leaks or out-of-bounds array accesses if
written correctly.
On Fri 14 Nov 2025 at 22:18, Arne Vajhøj <arne@vajhoej.dk> wrote:
C programs do not have memory leaks or out-of-bounds array accesses
if written correctly.
That is true for any language, isn't it?
On Fri, 14 Nov 2025 22:18:22 -0500, Arne Vajhøj wrote:
On 11/14/2025 9:41 PM, Lawrence D'Oliveiro wrote:
On Sat, 15 Nov 2025 00:24:04 -0000 (UTC), Waldek Hebisch wrote:
But for routine database queries I want fixed query structure with
data filling slots. Which is provided by embedded SQL and several
alternatives. I do not want arbitrary strings as queries: with
fixed query structure correctness is not hard, with dynamic
strings one needs to consider a lot of weird corner cases.
True enough. Fine for canned reports, standard batch processing
runs etc. Except COBOL never had any official standard, did it, for
these "EXEC SQL" templates.
ISO 9075 part 2
Something about "data type correspondences"? Not, as I was expecting,
"language constructs for COBOL"? (i.e. not sure what the relevance
is.)
Of course, for ad hoc queries you need dynamic query structure,
but the ability to specify query structure should be limited to
trusted users.
Not if the query is written correctly, which is not hard to do.
C programs do not have memory leaks or out of bounds array access if
written correctly.
As you may have noticed, it wasn't C I was recommending for this.
On 11/15/2025 1:00 AM, Lawrence D'Oliveiro wrote:
On Fri, 14 Nov 2025 22:18:22 -0500, Arne Vajhøj wrote:
On 11/14/2025 9:41 PM, Lawrence D'Oliveiro wrote:
Except COBOL never had any official standard, did it, for these
"EXEC SQL" templates.
ISO 9075 part 2
Something about "data type correspondences"? Not, as I was expecting,
"language constructs for COBOL"? (i.e. not sure what the relevance is.)
Embedded SQL is not a language construct, but a preprocessor construct.
The tricky part is the mapping between SQL data types and Cobol data
types.
And the handling of errors.
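For a feel of the mapping problem, here are the classic correspondences
used by mainframe pre-compilers, sketched as a Python table (quoted from
memory, so treat the details as approximate rather than authoritative):

    # Typical SQL-to-COBOL host variable correspondences (approximate).
    sql_to_cobol = {
        "INTEGER":      "PIC S9(9) USAGE COMP",         # 32-bit binary
        "SMALLINT":     "PIC S9(4) USAGE COMP",         # 16-bit binary
        "DECIMAL(7,2)": "PIC S9(5)V9(2) USAGE COMP-3",  # packed decimal
        "CHAR(20)":     "PIC X(20)",                    # fixed-length text
        "VARCHAR(40)":  "length-prefixed group item over PIC X(40)",
    }
    for sql_type, cobol_decl in sql_to_cobol.items():
        print(f"{sql_type:13} -> {cobol_decl}")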
The point is that all problems arise because something is not written correctly.
On Sat, 15 Nov 2025 09:22:33 -0500, Arne Vajhøj wrote:
On 11/15/2025 1:00 AM, Lawrence D'Oliveiro wrote:
On Fri, 14 Nov 2025 22:18:22 -0500, Arne Vajhøj wrote:
On 11/14/2025 9:41 PM, Lawrence D'Oliveiro wrote:
Except COBOL never had any official standard, did it, for these
"EXEC SQL" templates.
ISO 9075 part 2
Something about "data type correspondences"? Not, as I was expecting,
"language constructs for COBOL"? (i.e. not sure what the relevance is.)
Embedded SQL is not a language construct, but a preprocessor construct.
But COBOL doesn't have a standard preprocessor. Or a standard definition
for "Embedded SQL", whether in this ISO spec or any other.
The tricky part is the mapping between SQL data types and Cobol data
types.
Much easier in a dynamic language with a modern-style assortment of
standard types, like Python.
And the handling of errors.
I just let the default exception handling report malformed SQL errors, and treat them like program bugs. I.e. I have to fix my code to *not* generate malformed SQL.
The only time so far I've needed to explicitly catch an SQL error is with "IntegrityError"-type exceptions, which can occur if you try to insert a record with a duplicate value for a unique key. I only do so where this reflects a user error.
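A minimal sketch of that pattern with Python's sqlite3 (the table is
invented): only the duplicate-key case is caught; anything else, such as
malformed SQL, propagates and gets fixed as a program bug.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (email TEXT UNIQUE)")
    conn.execute("INSERT INTO users VALUES ('a@example.com')")
    try:
        # The second insert violates the UNIQUE constraint.
        conn.execute("INSERT INTO users VALUES ('a@example.com')")
    except sqlite3.IntegrityError:
        print("duplicate email - report it as a user error")
    # Malformed SQL would raise sqlite3.OperationalError instead, which
    # is deliberately not caught here: that is a bug to fix in the code.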
On 2025-11-12, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
In article <10esrru$1qu6$1@dont-email.me>,
Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
On 2025-11-07, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
As for the cloud, the number of organizations moving back
on-prem for very good reasons shouldn't be discounted.
Yes, and I hope the latest batch of critical system movers do not
repeat those same mistakes.
I'm not sure what mistakes you're referring to, but let's hope
that system maintainers make fewer mistakes generally. :-D
I was referring to the mistake of getting rid of your local systems,
and local systems knowledge, in favour of moving everything into the
public clouds and outsourcing your local systems knowledge and
development to third party vendors.
This works for some people, but not for others, and there appears to have
been quite a drive by senior management in general toward inappropriate
movement away from local control and knowledge so that it "becomes someone
else's problem".
The problem is that it isn't someone else's problem, it's still their
problem, as more than a few people have found out the hard way, promptly
followed by spending more money to move things back in house again.
On 11/15/2025 5:16 PM, Lawrence D'Oliveiro wrote:
On Sat, 15 Nov 2025 09:22:33 -0500, Arne Vajhøj wrote:
On 11/15/2025 1:00 AM, Lawrence D'Oliveiro wrote:
On Fri, 14 Nov 2025 22:18:22 -0500, Arne Vajhøj wrote:
On 11/14/2025 9:41 PM, Lawrence D'Oliveiro wrote:
Except COBOL never had any official standard, did it, for these
"EXEC SQL" templates.
ISO 9075 part 2
Something about "data type correspondences"? Not, as I was expecting,
"language constructs for COBOL"? (i.e. not sure what the relevance is.)
Embedded SQL is not a language construct, but a preprocessor
construct.
But COBOL doesn't have a standard preprocessor. Or a standard
definition for "Embedded SQL", whether in this ISO spec or any other.
The embedded SQL pre-processor typically comes from the database vendor.
The ISO SQL standard (part 2 covers the native languages, part 10 covers
Java and possibly other object oriented languages) and industry
practice make it work fine.
The tricky part is the mapping between SQL data types and Cobol data
types.
Much easier in a dynamic language with a modern-style assortment of
standard types, like Python.
The basic types have not changed since the time of Cobol.
But obviously a dynamically typed language does not have the problem of
having to declare query result variables of the correct type.
And the handling of errors.
I just let the default exception handling report malformed SQL errors,
and treat them like program bugs. I.e. I have to fix my code to *not*
generate malformed SQL.
The only time so far I've needed to explicitly catch an SQL error is
with "IntegrityError"-type exceptions, which can occur if you try to
insert a record with a duplicate value for a unique key. I only do so
where this reflects a user error.
Most languages used for embedded SQL do not use exceptions, so that is
not an option.
On 11/14/2025 12:47 AM, Lawrence D'Oliveiro wrote:
On Wed, 12 Nov 2025 16:10:39 -0500, Arne Vajhøj wrote:
I believe the topic was whether mainframes can achieve the
availability - the required number of nines. They can.
No they can't. Mainframes were never designed for high availability.
How many nines does IBM offer?
Hint: look at this intro from IBM itself
<https://www.ibm.com/think/topics/high-availability>. Do they mention
their own mainframes? No. Do they mention cloud and Linux companies?
Yes.
Better hint - their page about z resiliency:
https://www.ibm.com/products/z/resiliency
<quote>
For clients running z/OS v3.1 or higher with a configured high
availability IBM software stack on IBM z16 or IBM z17, users can expect
up to 99.999999% availability or 315.58 milliseconds of downtime per
year when using a GDPS 4.7 Continuous Availability (CA) configuration
and workloads.
</quote>
That is a lot of nines.
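As a sanity check on the arithmetic, eight nines does work out to the
quoted figure. A minimal Python sketch (the 365.25-day year is my
assumption):

    # Downtime per year implied by an availability figure.
    def downtime_ms_per_year(availability, days_per_year=365.25):
        seconds_per_year = days_per_year * 24 * 3600
        return seconds_per_year * (1.0 - availability) * 1000

    print(downtime_ms_per_year(0.99999999))  # ~315.58 ms, matching the quote
    print(downtime_ms_per_year(0.99999))     # five nines: ~315576 ms, ~5.3 min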
On Sat, 15 Nov 2025 18:12:19 -0500, Arne Vajhøj wrote:
On 11/15/2025 5:16 PM, Lawrence D'Oliveiro wrote:
On Sat, 15 Nov 2025 09:22:33 -0500, Arne Vajhøj wrote:
On 11/15/2025 1:00 AM, Lawrence D'Oliveiro wrote:
On Fri, 14 Nov 2025 22:18:22 -0500, Arne Vajhøj wrote:
On 11/14/2025 9:41 PM, Lawrence D'Oliveiro wrote:
Except COBOL never had any official standard, did it, for these
"EXEC SQL" templates.
ISO 9075 part 2
Something about "data type correspondences"? Not, as I was expecting,
"language constructs for COBOL"? (i.e. not sure what the relevance is.)
Embedded SQL is not a language construct, but a preprocessor
construct.
But COBOL doesn't have a standard preprocessor. Or a standard
definition for "Embedded SQL", whether in this ISO spec or any other.
The embedded SQL pre-processor typically comes from the database vendor.
But there is no specification in the language standard for how it should
work. So your code ends up being non-portable.
The ISO SQL standard (part 2 covers the native languages, part 10 covers
Java and possibly other object oriented languages) and industry
practice make it work fine.
There was nothing in there that I could see about the syntax of SQL embedding, though.
The tricky part is the mapping between SQL data types and Cobol data
types.
Much easier in a dynamic language with a modern-style assortment of
standard types, like Python.
The basic types have not changed since the time of Cobol.
That's the trouble. But Python includes handy things like dynamic
lists/tuples, dictionaries and sets, which are very useful for collecting
data from SQL databases, and for putting data into them.
And iterators, so you don't have to retrieve the entire query result set
into memory at once; you can pull in just as much as you can deal with at
a time.
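A minimal sketch of the iterator point, again with sqlite3 (contents
invented): the cursor is itself an iterator, so rows stream in on demand
instead of being materialized as one big list.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (n INTEGER)")
    conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])

    for (n,) in conn.execute("SELECT n FROM t ORDER BY n"):
        if n >= 3:   # stop early; the remaining rows are never fetched
            break
        print(n)     # rows arrive incrementally, not all at once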
And the handling of errors.
I just let the default exception handling report malformed SQL errors,
and treat them like program bugs. I.e. I have to fix my code to *not*
generate malformed SQL.
The only time so far I've needed to explicitly catch an SQL error is
with "IntegrityError"-type exceptions, which can occur if you try to
insert a record with a duplicate value for a unique key. I only do so
where this reflects a user error.
Most languages used for embedded SQL do not use exceptions, so that is
not an option.
Which ones? It's not just Python that has them, C++ and Java do, too. Why would you use a language that didn't have exceptions to work with SQL?
On Fri, 14 Nov 2025 16:49:49 -0500, Arne Vajhøj wrote:
Better hint - their page about z resiliency:
https://www.ibm.com/products/z/resiliency
<quote>
For clients running z/OS v3.1 or higher with a configured high
availability IBM software stack on IBM z16 or IBM z17, users can expect
up to 99.999999% availability or 315.58 milliseconds of downtime per
year when using a GDPS 4.7 Continuous Availability (CA) configuration
and workloads.
</quote>
That is a lot of nines.
Did you notice the footnote?
<https://www.ibm.com/products/z/resiliency#footnote>:
1 IBM z17 systems, with GDPS, IBM DS8000 series storage with
HyperSwap, and running a Red Hat OpenShift Container Platform
environment, are designed to deliver 99.999999% availability.
Now, what has Red Hat got to do with mainframe resiliency? In fact, the
mainframe doesn't really have anything to do with it, does it? It's all
down to Linux-based high-availability technologies, like OpenShift. All
the resiliency is effectively coming from that.
Because, as a French person, I'm proud VMSgenerations has been quoted,
and because I see a very interesting thread, I give another taste for the
On 5/11/2025 12:59 am, Simon Clubley wrote:
On 2025-11-03, Arne Vajhøj <arne@vajhoej.dk> wrote:
Mainframes were unique in the last century regarding integrity, availability and performance, but not today.
Standard distributed environment, load sharing (horizontal scaling)
applications, standard RDBMS with transaction and XA transaction
support, auto scaling VM or container solutions, massive scaling
capable NoSQL databases.
It can be made to work.
It can also be made to _appear_ to work. And probably will, at least in
the short term.
It can also be made not to work, but ....
:
:
I've been thinking quite a bit recently about just how bad monocultures
and short term thinking can be from a society being able to continue
functioning point of view. Just look at the massive damage done by
attacks on major companies here in the UK over the last year, all of
which should not have had single points of failure like that. :-(
Simon.
Steady on, old chap, going on like that, about the cloud-computing
clown-car, will get you setting up a chapter, cluster node of the VMS Generations group, tout de suite, stat! :-)
On 11/18/2025 2:25 AM, Lawrence D'Oliveiro wrote:
The language compiler does not see any embedded SQL - the embedded SQL pre-processor outputs plain Cobol (or C or whatever).
So your code ends up being non-portable
If the SQL used is database specific, then it only works with that
database.
Then you just need to wrap it.
Cobol: EXEC SQL ... END-EXEC
And iterators, so you don't have to retrieve the entire query
result set into memory at once, you can pull in just as much as you
can deal with at once.
That works fine in old languages as well.
The two biggest languages for embedded SQL must be Cobol and C.
Neither has exceptions.
Newer languages rarely use embedded SQL.
Embedded SQL got defined for Java - I assume IBM and Oracle pushed
hard for it - but nobody is using it.
On 11/18/2025 2:29 AM, Lawrence D'Oliveiro wrote:
On Fri, 14 Nov 2025 16:49:49 -0500, Arne Vajhøj wrote:
Better hint - their page about z resiliency:
https://www.ibm.com/products/z/resiliency
<quote>
For clients running z/OS v3.1 or higher with a configured high
availability IBM software stack on IBM z16 or IBM z17, users can
expect up to 99.999999% availability or 315.58 milliseconds of
downtime per year when using a GDPS 4.7 Continuous Availability
(CA) configuration and workloads.
</quote>
That is a lot of nines.
Did you notice the footnote?
<https://www.ibm.com/products/z/resiliency#footnote>:
1 IBM z17 systems, with GDPS, IBM DS8000 series storage with
HyperSwap, and running a Red Hat OpenShift Container Platform
environment, are designed to deliver 99.999999% availability.
Now, what has Red Hat got to do with mainframe resiliency? In
fact, the mainframe doesn't really have anything to do with it,
does it? It's all down to Linux-based high-availability
technologies, like OpenShift. All the resiliency is effectively
coming from that.
You need to read it all.
They can do that uptime for different software stacks:
MongoDB on k8s on Linux on z/VM on mainframe
DB2 on z/OS on mainframe
IMS on z/OS on mainframe
But even for the MongoDB k8s case the mainframe contributes to the
expected uptime due to the low number of active physical boxes.
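The few-boxes point can be made concrete with the usual availability
arithmetic; a minimal sketch, under the simplifying (and optimistic)
assumption that box failures are independent:

    # Availability of n redundant boxes, assuming independent failures.
    def redundant(a, n=2):
        return 1 - (1 - a) ** n

    print(redundant(0.999))   # two 3-nines boxes -> 0.999999 (six nines)
    print(redundant(0.9999))  # two 4-nines boxes -> 0.99999999 (eight nines)

The more reliable each box is, the fewer boxes a configuration needs to
reach a given target, which is where the hardware still matters.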
On Tue, 18 Nov 2025 13:52:44 -0500, Arne Vajhøj wrote:
On 11/18/2025 2:29 AM, Lawrence D'Oliveiro wrote:
<https://www.ibm.com/products/z/resiliency#footnote>:
1 IBM z17 systems, with GDPS, IBM DS8000 series storage with
HyperSwap, and running a Red Hat OpenShift Container Platform
environment, are designed to deliver 99.999999% availability.
Now, what has Red Hat got to do with mainframe resiliency? In
fact, the mainframe doesn't really have anything to do with it,
does it? It's all down to Linux-based high-availability
technologies, like OpenShift. All the resiliency is effectively
coming from that.
You need to read it all.
They can do that uptime for different software stacks:
MongoDB on k8s on Linux on z/VM on mainframe
DB2 on z/OS on mainframe
IMS on z/OS on mainframe
There is no actual mention on that page of being able to achieve such
a high level of nines without Linux. None.
On Tue, 18 Nov 2025 13:52:44 -0500, Arne Vajhøj wrote:
But even for the MongoDB k8s case the mainframe contributes to the
expected uptime due to the low number of active physical boxes.
No they don't. They don't make any contribution to the nines at all;
all that is coming from the Linux stack.
On Tue, 18 Nov 2025 09:19:35 -0500, Arne Vajhøj wrote:
On 11/18/2025 2:25 AM, Lawrence D'Oliveiro wrote:
The language compiler does not see any embedded SQL - the embedded SQL
pre-processor outputs plain Cobol (or C or whatever).
So your code ends up being non-portable
If the SQL used is database specific, then it only works with that
database.
It's quite common to have applications in a range of languages
all accessing the same database.
It's not so common to have different compilers for what is supposed to
be the same language require different syntax for embedding that SQL.
Then you just need to wrap it.
Cobol: EXEC SQL ... END-EXEC
But there is no standard in COBOL for how to do this wrapping.
And iterators, so you don't have to retrieve the entire query
result set into memory at once, you can pull in just as much as you
can deal with at once.
That works fine in old languages as well.
Those old languages don't have iterators.
On 11/20/2025 6:09 PM, Lawrence D'Oliveiro wrote:
There is no actual mention on that page of being able to achieve such
a high level of nines without Linux. None.
A)
<quote>
A MongoDB v4.4 workload was used.
B)
<quote>
... one of the required GDPS CA IBM middleware stack workloads ...
Who can match tech stacks and config?
Let me demo SQLPY "Embedded SQL for Python".
:-) :-) :-)
Mostly based on SQLJ.
$ type test.sqlpy
$ python sqlpy.py TEST.sqlpy TEST.py
$ python TEST.py
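(The listings from that demo did not survive in this digest. Purely as a
hypothetical illustration of the idea, and not Arne's actual SQLPY, a
pre-processor of this kind just rewrites marked blocks into plain DB-API
calls, roughly like so:)

    import re, sys

    # Toy rewrite: turn invented lines like
    #     #sql cur { SELECT name FROM emp WHERE dept = :dept }
    # into plain Python DB-API calls.
    SQL_LINE = re.compile(r"#sql\s+(\w+)\s*\{\s*(.+?)\s*\}")

    def translate(line):
        m = SQL_LINE.search(line)
        if not m:
            return line
        cur, stmt = m.groups()
        # :name parameters are passed as DB-API named parameters.
        return f'{cur}.execute("""{stmt}""", locals())\n'

    with open(sys.argv[1]) as src, open(sys.argv[2], "w") as dst:
        dst.writelines(translate(line) for line in src)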
bill <bill.gunshannon@gmail.com> wrote:
On 11/10/2025 9:12 AM, Simon Clubley wrote:
Question: are they low-risk because they were designed to do one thing
and to do it very well in extremely demanding environments ?
Are the replacements higher-risk because they are more of a generic
infrastructure and the mission critical workloads need to be force-fitted
into them ?
And here you finally hit the crux of the matter.
People wonder why I am still a strong supporter of COBOL.
The reason is simple. It was a language designed to do
a particular task and it does it well. Now we have this
desire to replace it with something generic. I feel this
is a bad idea.
Well, Cobol represents practices of 1960 business data
processing.
At that time it was state of the art.
But state of the art changed. Cobol somewhat adapted,
but it was slow to do so. So your claim of "does it well"
does not look true, unless by "it" you mean
"replicating Cobol data processing from the sixties".
To expand a bit more, Cobol has an essentially unfixable problem
with verbosity. Defining a function needs several lines of
overhead code. Function calls are more verbose than in other
languages. There are fixable problems, which however may
appear when dealing with real Cobol code. In particular
Cobol supports old control structures. In a new program you
can use new control structures, but converting uses of old
control structures to new ones needs effort, and it is likely
that a bit more effort would be enough to convert the whole
program to a different language.
On 11/11/2025 10:23 AM, Waldek Hebisch wrote:
bill <bill.gunshannon@gmail.com> wrote:
On 11/10/2025 9:12 AM, Simon Clubley wrote:
Question: are they low-risk because they were designed to do one thing
and to do it very well in extremely demanding environments ?
Are the replacements higher-risk because they are more of a generic
infrastructure and the mission critical workloads need to be force-fitted
into them ?
And here you finally hit the crux of the matter.
People wonder why I am still a strong supporter of COBOL.
The reason is simple. It was a language designed to do
a particular task and it does it well. Now we have this
desire to replace it with something generic. I feel this
is a bad idea.
Well, Cobol represents practices of 1960 business data
processing.
Sometimes things don't really change. You count to 10 the same way now as in
1960. (Trivial example)
At that time it was state of the art.
But state of the art changed. Cobol somewhat adapted,
but it was slow to do so. So your claim of "does it well"
does not look true, unless by "it" you mean
"replicating Cobol data processing from the sixties".
To expand a bit more, Cobol has an essentially unfixable problem
with verbosity.
Now this is opinion, and really a poor argument. While I detest the verbosity
in most things, that is my choice, not the problem you claim.
Defining a function needs several lines of
overhead code. Function calls are more verbose than in other
languages. There are fixable problems, which however may
appear when dealing with real Cobol code. In particular
Cobol supports old control structures. In a new program you
can use new control structures, but converting uses of old
control structures to new ones needs effort, and it is likely
that a bit more effort would be enough to convert the whole
program to a different language.
I apologize in advance, but that is idiotic. Any re-write of any non-trivial
application in another language will never be complete. There will be errors
and things will be lost. IT WILL HAPPEN !!! And when done, what will be
the gains in a sideways move?
Sometimes you may be able to do your data processing as in 1960. But
it is very unlikely to be a good way now.
On Sun, 30 Nov 2025 05:44:11 -0000 (UTC), Waldek Hebisch wrote:
Sometimes you may be able to do your data processing as in 1960. But
it is very unlikely to be good way now.
I was watching this mini-doco on the history of MUMPS <https://www.youtube.com/watch?v=7g1K-tLEATw>. That achieved quite a
bit of its success from being more productive than COBOL.
Note though that VMS support is being dropped.
2017.1 was last version of Intersystems Cache to support VMS.
6.2 was last version of GT.M to support VMS.
I would deem MUMPS obsolete as well today even though it is still
used in healthcare and a little bit in finance.
On 11/30/2025 2:21 PM, Arne Vajhøj wrote:
I would deem MUMPS obsolete as well today even though it is still
used in healthcare and a little bit in finance.
This is sort of OK:

test()
    for i=1:1:3 do
    . write "Hi from Mumps!",!
    quit

But if we abbreviate commands as Mumps allows then it becomes
practically unreadable for those not knowing Mumps:

test2()
    f i=1:1:3 d
    . w "Hi from Mumps!",!
    q
The selling point is the automatic persistence of global
variables:
On 11/30/2025 4:04 PM, Arne Vajhøj wrote:
The selling point is the automatic persistence of global
variables:
Arne
On 30/11/2025 21:09, Arne Vajhøj wrote:
On 11/30/2025 4:04 PM, Arne Vajhøj wrote:
< snip >
The selling point is the automatic persistence of global
variables:
< snip >
Arne
Why would you want that?
On 2025-11-29, Dave Froble <davef@tsoft-inc.com> wrote:
Sometimes things don't really change. You count to 10 the same way now as in
1960. (Trivial example)
Are you sure ? I thought maths teaching was heading in a new direction
in multiple parts of your country as shown by this example (which is way
too close to actually being realistic, especially with the "support"
infrastructure from the people around the teacher):
https://www.youtube.com/watch?v=Zh3Yz3PiXZw
In article <10gk6e6$1bcst$3@dont-email.me>,
Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
On 2025-11-29, Dave Froble <davef@tsoft-inc.com> wrote:
Sometimes things don't really change. You count to 10 the same way now as in
1960. (Trivial example)
Are you sure ? I thought maths teaching was heading in a new direction
in multiple parts of your country as shown by this example (which is way
too close to actually being realistic, especially with the "support"
infrastructure from the people around the teacher):
https://www.youtube.com/watch?v=Zh3Yz3PiXZw
You know, Simon, I recall you posting that you were against the
opposition candidate in our last election because you disliked
her laugh. In your own country, Nigel Farage and his party seem disconcertingly close to power, with their ill-advised "Empire
2.0" aspirations; might I remind you that most of the former
members of Empire 1.0 are still trying to recover?
It's very easy to throw stones, but not terribly advisable when
you yourself are in a glass house.
At least stop adding these things as parentheticals onto posts
that _also_ carry technical content.
In article <10gg48s$3srom$1@dont-email.me>,
Dave Froble <davef@tsoft-inc.com> wrote:
On 11/11/2025 10:23 AM, Waldek Hebisch wrote:
Defining a function needs several lines of
overhead code. Function calls are more verbose than in other
languages. There are fixable problems, which however may
appear when dealing with real Cobol code. In particular
Cobol supports old control structures. In a new program you
can use new control structures, but converting uses of old
control structures to new ones needs effort, and it is likely
that a bit more effort would be enough to convert the whole
program to a different language.
I apologize in advance, but that is idiotic. Any re-write of any non-trivial
application in another language will never be complete. There will be errors
and things will be lost. IT WILL HAPPEN !!! And when done, what will be
the gains in a sideways move?
I got the impression Waldek was referring to updating programs
written to old versions of COBOL to use facilities introduced in
newer versions of COBOL, though perhaps I am mistaken.
Regardless, this raises an interesting point: the latest version
of COBOL is, I believe, COBOL 2023. But that language is rather
different than the original 1960 COBOL. So even simply updating
a COBOL program is akin to rewriting it in another language.
Now this is opinion, and really a poor argument. While I detest the verbosity
in most things, that is my choice, not the problem you claim.
Back on topic, COBOL is very verbose, but I also hate way too concise
languages where the language designers don't even allow words like
"function" to be spelt out in full. You read code many more times than
you write it, and cryptic syntax makes that reading a lot harder.
Something like Ada was designed for readability, and I wish all other
languages followed that example.
Just waiting for the moment when a newcomer designs a new language which
has syntax resembling TECO... :-)
On 12/1/2025 8:37 AM, Dan Cross wrote:
In article <10gg48s$3srom$1@dont-email.me>,
Dave Froble <davef@tsoft-inc.com> wrote:
On 11/11/2025 10:23 AM, Waldek Hebisch wrote:
Defining a function needs several lines of
overhead code. Function calls are more verbose than in other
languages. There are fixable problems, which however may
appear when dealing with real Cobol code. In particular
Cobol supports old control structures. In a new program you
can use new control structures, but converting uses of old
control structures to new ones needs effort, and it is likely
that a bit more effort would be enough to convert the whole
program to a different language.
I apologize in advance, but that is idiotic. Any re-write of any non-trivial
application in another language will never be complete. There will be errors
and things will be lost. IT WILL HAPPEN !!! And when done, what will be
the gains in a sideways move?
I got the impression Waldek was referring to updating programs
written to old versions of COBOL to use facilities introduced in
newer versions of COBOL, though perhaps I am mistaken.
Regardless, this raises an interesting point: the latest version
of COBOL is, I believe, COBOL 2023. But that language is rather
different than the original 1960 COBOL. So even simply updating
a COBOL program is akin to rewriting it in another language.
The Cobol standard has been continuously updated over
the decades. But very few are using the new stuff added
in the last 25 years.
For good reasons.
Let us say that a company:
* has a big Cobol application
* wants to add a significant chunk of new functionality
* could implement that new functionality using
  features from recent versions of the Cobol standard
Options:
A) implement it in Cobol using features from recent
versions of Cobol standard and have the team learn
the new stuff
B) implement it in old style Cobol, because that is what
the team knows
C) implement it in some other language where the functionality is
common and call it from Cobol
D) implement it in some other language where the functionality is
common and put it in a separate service in middleware tier and
keep the old Cobol application untouched
E) say NO - can't do it
Few will choose #A. #B, #C and #D are simply more attractive.
In article <10gk6e6$1bcst$3@dont-email.me>,
Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
Now this is opinion, and really a poor argument. While I detest the verbosity
in most things, that is my choice, not the problem you claim.
Back on topic, COBOL is very verbose, but I also hate way too concise
languages where the language designers don't even allow words like
"function" to be spelt out in full. You read code many more times than
you write it and having cryptic syntax makes that a lot harder to achieve.
Excessive verbosity can be a hindrance to readability, but
finding a balance with concision is more art than science. I
don't feel the need to spell out "function" when there's an
acceptable abbreviation that means the same thing ("fn"/"fun"/
etc). That said, a lot of early Unix code that omitted vowels
for brevity was utterly abstruse.
Something like Ada was designed for readability, and I wish all other
languages followed that example.
Unfortunately, what's considered "readable" is both subjective
and depends on the audience. Personally, I don't find Ada more
readable because it forces me to write `function` instead
of `fn` or `procedure` instead of `proc`. If anything, I find
the split between two types of subprograms less readable, no
matter how it's presented syntactically. Similarly, I don't find
the use of `begin` and `end` keywords more readable than `{` and
`}`, or similar lexical glyphs. I understand that others feel
differently.
If anything, I find it less readable since it is less visually
distinct (perhaps, if my eyesight were even worse than it
already is, I would feel differently).
Just waiting for the moment when a newcomer designs a new language which
has syntax resembling TECO... :-)
Or APL.
On 12/1/2025 4:02 PM, Arne Vajhøj wrote:
On 12/1/2025 8:37 AM, Dan Cross wrote:
I got the impression Waldek was referring to updating programs
written to old versions of COBOL to use facilities introduced in
newer versions of COBOL, though perhaps I am mistaken.
Regardless, this raises an interesting point: the latest version
of COBOL is, I believe, COBOL 2023. But that language is rather
different than the original 1960 COBOL. So even simply updating
a COBOL program is akin to rewriting it in another language.
The Cobol standard has been continuously updated over
the decades. But very few are using the new stuff added
in the last 25 years.
Not really true. The only thing COBOL professionals have, for
the most part, refused to use is the OOP stuff. Some of the
other changes that are within the COBOL model were very welcome
additions. Like EVALUATE. Got rid of a lot of multiple page
IF-THEN-ELSE monstrosities.
I've long suspected (but I admit I have no evidence to support
this) that one of the reasons there is so much COBOL code in the
world is because, when making non-trivial changes, programmers
first _copy_ large sections of the program and then modify the
copy, to avoid introducing bugs into existing functionality.
There is also something in the Cobol language.
Large files with one data division, lots of paragraphs
and lots of PERFORMs are easy to code, but that is also
bad for reusable code.
It is sort of the same as having large C or Pascal files
with all variables global and all functions/procedures
without arguments.
It is possible to do it right, but when people have
to choose between the easy way and the right way, then ...
On 12/1/2025 8:37 AM, Dan Cross wrote:
I've long suspected (but I admit I have no evidence to support
this) that one of the reasons there is so much COBOL code in the
world is because, when making non-trivial changes, programmers
first _copy_ large sections of the program and then modify the
copy, to avoid introducing bugs into existing functionality.
Copying and modifying code instead of creating reusable libraries
has been used by bad programmers in all languages.
But last century Cobol and Basic were the two easiest
languages to learn and Cobol was one of the languages with
most jobs. So it seems likely that a large number of bad
programmers picked Cobol. Bringing bad habits with them.
Today I would expect that crowd to pick client side JavaScript
and server side PHP.
There is also something in the Cobol language.
Large files with one data division, lots of paragraphs
and lots of PERFORMs are easy to code, but that is also
bad for reusable code.
It is sort of the same as having large C or Pascal files
with all variables global and all functions/procedures
without arguments.
It is possible to do it right, but when people have
to choose between the easy way and the right way, then ...
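Arne's C/Pascal analogy in a few lines of Python (an invented example):
the first style works, but nothing in it is reusable; the second is the
same logic made callable from anywhere.

    # Everything global, "procedures" take no arguments:
    total = 0
    prices = [10, 20, 30]
    def add_up():
        global total
        total = sum(prices)

    # Parameterized: the same logic, reusable against any data.
    def add_up2(prices):
        return sum(prices)

    add_up(); print(total)     # 60, but only ever for the one global list
    print(add_up2([1, 2, 3]))  # 6, works for any caller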
On 12/1/2025 5:46 PM, bill wrote:
On 12/1/2025 4:02 PM, Arne Vajhøj wrote:
On 12/1/2025 8:37 AM, Dan Cross wrote:
I got the impression Waldek was referring to updating programs
written to old versions of COBOL to use facilities introduced in
newer versions of COBOL, though perhaps I am mistaken.
Regardless, this raises an interesting point: the latest version
of COBOL is, I believe, COBOL 2023. But that language is rather
different than the original 1960 COBOL. So even simply updating
a COBOL program is akin to rewriting it in another language.
The Cobol standard has been continuously updated over
the decades. But very few are using the new stuff added
in the last 25 years.
Not really true. The only thing COBOL professionals have, for
the most part, refused to use is the OOP stuff. Some of the
other changes that are within the COBOL model were very welcome
additions. Like EVALUATE. Got rid of a lot of multiple page
IF-THEN-ELSE monstrosities.
EVALUATE came with COBOL 85. That is not within the
last 25 years.
New features within last 25 years besides OOP include:
* recursion support
* unicode support
* pointers and dynamic memory allocation
* XML support
* collection classes
Have you seen COBOL code using those?
On 12/1/2025 8:06 PM, Arne Vajhøj wrote:
But last century Cobol and Basic were the two easiest
languages to learn and Cobol was one of the languages with
most jobs. So it seems likely that a large number of bad
programmers picked Cobol. Bringing bad habits with them.
Today I would expect that crowd to pick client side JavaScript
and server side PHP.
There is also something in the Cobol language.
Large files with one data division, lots of paragraphs
and lots of PERFORMs are easy to code, but that is also
bad for reusable code.
It is sort of the same as having large C or Pascal files
with all variables global and all functions/procedures
without arguments.
It is possible to do it right, but when people have
to choose between the easy way and the right way, then ...
I take it you have never worked in a real COBOL shop.
On 12/1/2025 6:50 PM, Arne Vajhøj wrote:
New features within last 25 years besides OOP include:
* recursion support
* unicode support
* pointers and dynamic memory allocation
* XML support
* collection classes
Have you seen COBOL code using those?
I have seen and used pointers, but not in production code, as at 75
I am not finding many places that want me to work. :-)
XML isn't really anything to do with the language; it's a file
format. Probably has no place in the language itself.
UNICODE the same thing. It could be done fairly easily with a library
but isn't really anything that COBOL had to have as a part of the
language.
Wouldn't classes fall under OOP?
On 12/1/2025 8:23 PM, bill wrote:
Wouldn't classes fall under OOP?
Classes are part of the OOP support that was added in Cobol 2002.
Collection classes was added in:
ISO/IEC TR 24717:2009, Information technology -- Programming languages, their environments and system software interfaces -- Collection classes
for programming language COBOL
I have never seen it used and I do not know how they work. But if it is
like collection classes in most other programming languages, then it
is predefined container classes for list, map/dictionary etc.
On 12/1/2025 8:15 PM, bill wrote:
On 12/1/2025 8:06 PM, Arne Vajhøj wrote:
But last century Cobol and Basic were the two easiest
languages to learn and Cobol was one of the languages with
most jobs. So it seems likely that a large number of bad
programmers picked Cobol. Bringing bad habits with them.
Today I would expect that crowd to pick client side JavaScript
and server side PHP.
There is also something in the Cobol language.
Large files with one data division, lots of paragraphs
and lots of PERFORMs are easy to code, but that is also
bad for reusable code.
It is sort of the same as having large C or Pascal files
with all variables global and all functions/procedures
without arguments.
It is possible to do it right, but when people have
to choose between the easy way and the right way, then ...
I take it you have never worked in a real COBOL shop.
That is true.
I was with the Fortran people not the Cobol people.
But that does not change that:
* back in those days there were some people doing Cobol that
  should not have - this is widely known - I believe the not so
  nice name for them back then was "list programmers" (I was told
  about that by a Cobol programmer when I took the DEC course
  VMS for Programmers back in the mid 80's)
* PERFORM of paragraphs is not a good way to write reusable code
On 12/1/2025 8:31 PM, Arne Vajhøj wrote:
But that does not change that:
* back in those days
Exactly what do you consider to be "back in those days"?
then there were some people
doing Cobol that should not have - this is widely
known -
Not widely known in the circles I worked in. If I were not a
competent COBOL programmer I would have been eliminated.
On 12/1/2025 8:23 PM, bill wrote:
On 12/1/2025 6:50 PM, Arne Vajhøj wrote:
New features within last 25 years besides OOP include:
* recursion support
* unicode support
* pointers and dynamic memory allocation
* XML support
* collection classes
Have you seen COBOL code using those?
I have seen and used pointers, but not in production code, as at 75
I am not finding many places that want me to work. :-)
XML isn't really anything to do with the language; it's a file
format. Probably has no place in the language itself.
They did:
ISO/IEC TR 24716:2007, Information technology -- Programming languages, their environment and system software interfaces -- Native COBOL Syntax
for XML Support
I have no idea what it does, so I don't know if it makes any sense.
UNICODE the same thing. It could be done fairly easily with a library
but isn't really anything that COBOL had to have as a part of the
language.
Good unicode support requires support in both language and
basic RTL.
As an example (I am not claiming that it is good support!!) see C++:
std::string
std::wstring
std::u16string
std::u32string
"ABC"
L"ABC"
u8"ABC"
u"ABC"
U"ABC"
Wouldn't classes fall under OOP?
Classes are part of the OOP support that was added in Cobol 2002.
Collection classes were added in:
ISO/IEC TR 24717:2009, Information technology -- Programming languages, their environments and system software interfaces -- Collection classes
for programming language COBOL
I have never seen it used and I do not know how they work. But if it is
like collection classes in most other programming languages, then it
is predefined container classes for list, map/dictionary etc.
On 12/1/2025 8:44 PM, Arne Vajhøj wrote:
On 12/1/2025 8:23 PM, bill wrote:
[snip]
UNICODE the same thing. It could be done fairly easily with a library
but isn't really anything that COBOL had to have as a part of the
language.
Good unicode support requires support in both language and
basic RTL.
Don't agree. COBOL was intended to keep track of money, inventory,
personnel, etc. UNICODE, per se, brings nothing to the table for
any of that. And, as designed, it did support alternate character
sets.
Classes are part of the OOP support that was added in Cobol 2002.
And the COBOL Community refused to drink the Kool-Aid.
While there may actually be a place for OOP, the work
COBOL was intended to do isn't it. Academia tried to
force it down everyone's throats and were outraged
when some refused. (And took their revenge which is
being felt more and more every day now!!) I know a
number of massive ISes in use today that have been in
use for around a half century that were written in COBOL
and continue to function in COBOL. Lack of OOP hasn't
affected them at all.
Collection classes were added in:
ISO/IEC TR 24717:2009, Information technology -- Programming languages,
their environments and system software interfaces -- Collection classes
for programming language COBOL
I have never seen it used and I do not know how they work. But if it is
like collection classes in most other programming languages, then it
is predefined container classes for list, map/dictionary etc.
Which does what for COBOL?
On 12/1/2025 4:23 PM, Dan Cross wrote:
In article <10gk6e6$1bcst$3@dont-email.me>,
Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
Now this is opinion, and really a poor argument. While I detest the verbosity
in most things, that is my choice, not the problem you claim.
Back on topic, COBOL is very verbose, but I also hate way too concise
languages where the language designers don't even allow words like
"function" to be spelt out in full. You read code many more times than
you write it, and cryptic syntax makes that reading a lot harder.
Excessive verbosity can be a hindrance to readability, but
finding a balance with concision is more art than science. I
don't feel the need to spell out "function" when there's an
acceptable abbreviation that means the same thing ("fn"/"fun"/
etc). That said, a lot of early Unix code that omitted vowels
for brevity was utterly abstruse.
Something like Ada was designed for readability, and I wish all other
languages followed that example.
Unfortunately, what's considered "readable" is both subjective
and depends on the audience. Personally, I don't find Ada more
readable because it forces me to write `function` instead
of `fn` or `procedure` instead of `proc`. If anything, I find
the split between two types of subprograms less readable, no
matter how it's presented syntactically. Similarly, I don't find
the use of `begin` and `end` keywords more readable than `{` and
`}`, or similar lexical glyphs. I understand that others feel
differently.
If anything, I find it less readable since it is less visually
distinct (perhaps, if my eyesight were even worse than it
already is, I would feel differently).
Just waiting for the moment when a newcomer designs a new language which >>> has syntax resembling TECO... :-)
Or APL.
Nothing wrong with APL, if the task is within the language's domain.
But then, I am one of the last advocates for domain-specific rather
than generic languages.
In article <10gle2k$1q97g$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 12/1/2025 8:37 AM, Dan Cross wrote:
I've long suspected (but I admit I have no evidence to support
this) that one of the reasons there is so much COBOL code in the
world is because, when making non-trivial changes, programmers
first _copy_ large sections of the program and then modify the
copy, to avoid introducing bugs into existing functionality.
Copying and modifying code instead of creating reusable libraries
has been used by bad programmers in all languages.
I think it's a little deeper than that.
There is also something in the Cobol language.
Large files with one data division, lots of paragraphs
and lots of PERFORMs are easy to code, but that is also
bad for reusable code.
It is sort of the same as having large C or Pascal files
with all variables global and all functions/procedures
without arguments.
It is possible to do it right, but when people have
to choose between the easy way and the right way, then ...
An issue with COBOL is that, given procedures A, B, ..., Z,
written sequentially in source, `PERFORM A THRU Z` means that it
is difficult to see when procedures B, C, ..., Y are called just
through visual inspection since calls to them are implicit; you
really need semantically aware tools to do that. So if you need
to change paragraph D, then you run the risk of implicitly
changing dependent behavior in your system unintentionally. You
might end up violating some assumption you didn't even know
existed; talk about spooky action at a distance.
Most COBOL programs were written before the era of automated,
unit-level testing, so it's extremely unlikely you've got a big
suite of tests you can run to attempt to catch such issues.
I imagine that this results in a lot of (unnecessary)
duplication.
[snip]
How on earth can someone not know how to divide a fraction by two ?
Oh, and the laugh was only a part of it. It was her inability to
act in a way expected of a US president. I believe the phrase I used
at the time was a lack of gravitas, plus her inability to conduct
serious interviews without collapsing into word salad.
It appears some people are beginning to see through Reform, and we also
have the first past the post system. I am hoping that's enough to stop
him from gaining a majority, but our traditional parties (all of them)
need to _seriously_ up their game.
It's very easy to throw stones, but not terribly advisable when
you yourself are in a glass house.
At least stop adding these things as parentheticals onto posts
that _also_ carry technical content.
Did you read the rest of the posting Dan ?
On 12/2/2025 8:50 AM, Dan Cross wrote:
In article <10gle2k$1q97g$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 12/1/2025 8:37 AM, Dan Cross wrote:
I've long suspected (but I admit I have no evidence to support
this) that one of the reasons there is so much COBOL code in the
world is because, when making non-trivial changes, programmers
first _copy_ large sections of the program and then modify the
copy, to avoid introducing bugs into existing functionality.
Copying and modifying code instead of creating reusable libraries
has been used by bad programmers in all languages.
I think it's a little deeper than that.
There is also something in the Cobol language.
Large files with one data division, lots of paragraphs
and lots of PERFORMs are easy to code, but that is also
bad for reusable code.
It is sort of the same as having large C or Pascal files
with all variables global and all functions/procedures
without arguments.
It is possible to do it right, but when people have
to choose between the easy way and the right way, then ...
An issue with COBOL is that, given procedures A, B, ..., Z,
written sequentially in source, `PERFORM A THRU Z` means that it
is difficult to see when procedures B, C, ..., Y are called just
through visual inspection since calls to them are implicit; you
really need semantically aware tools to do that. So if you need
to change paragraph D, then you run the risk of implicitly
changing dependent behavior in your system unintentionally. You
might end up violating some assumption you didn't even know
existed; talk about spooky action at a distance.
That is a classical argument found on the internet.
But I am not convinced that it is critical.
It is all within one file.
$ search foobar.cob thru,through
should reveal if the feature is used.
Unless the file is very long and the code is very ugly, then
I believe it should be relatively easy to track the
perform flow even in VT mode EDT or EVE.
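The same check, sketched portably in Python for anyone off VMS (a
minimal line-based approximation, not a real COBOL parser):

    import re, sys

    # Flag PERFORM ... THRU/THROUGH ranges, the implicit-call
    # hazard discussed above.
    PAT = re.compile(r"\bPERFORM\b.*\bTHR(?:U|OUGH)\b", re.IGNORECASE)

    for path in sys.argv[1:]:
        with open(path) as f:
            for num, line in enumerate(f, 1):
                if PAT.search(line):
                    print(f"{path}:{num}: {line.rstrip()}")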
Most COBOL programs were written before the era of automated,
unit-level testing, so it's extremely unlikely you've got a big
suite of tests you can run to attempt to catch such issues.
I imagine that this results in a lot of (unnecessary)
duplication.
That may actually have a huge impact.
No unit tests is a common reason not to change any existing code.
In article <10gn0cq$2d8ve$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 12/2/2025 8:50 AM, Dan Cross wrote:
An issue with COBOL is that, given procedures A, B, ..., Z,
written sequentially in source, `PERFORM A THRU Z` means that it
is difficult to see when procedures B, C, ..., Y are called just
through visual inspection since calls to them are implicit; you
really need semantically aware tools to do that. So if you need
to change paragraph D, then you run the risk of implicitly
changing dependent behavior in your system unintentionally. You
might end up violating some assumption you didn't even know
existed; talk about spooky action at a distance.
That is a classical argument found on the internet.
Yes. I myself have been making it for years.
But I am not convinced that it is critical.
It is all within one file.
$ search foobar.cob thru,through
should reveal if the feature is used.
Unless the file is very long and the code is very ugly, then
I believe it should be relatively easy to track the
perform flow even in VT mode EDT or EVE.
I'm not at all convinced of that in a large code base; call
graphs resulting in such `PERFORM`s can be too big to trace by
hand. And many of these extant COBOL applications are quite
large, indeed.
On 12/2/2025 10:46 AM, Dan Cross wrote:
In article <10gn0cq$2d8ve$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 12/2/2025 8:50 AM, Dan Cross wrote:
An issue with COBOL is that, given procedures A, B, ..., Z,
written sequentially in source, `PERFORM A THRU Z` means that it
is difficult to see when procedures B, C, ..., Y are called just
through visual inspection since calls to them are implicit; you
really need semantically aware tools to do that. So if you need
to change paragraph D, then you run the risk of implicitly
changing dependent behavior in your system unintentionally. You
might end up violating some assumption you didn't even know
existed; talk about spooky action at a distance.
That is a classical argument found on the internet.
Yes. I myself have been making it for years.
But I am not convinced that it is critical.
It is all within one file.
$ search foobar.cob thru,through
should reveal if the feature is used.
Unless the file is very long and the code is very ugly, then
I believe it should be relatively easy to track the
perform flow even in VT mode EDT or EVE.
I'm not at all convinced of that in a large code base; call
graphs resulting in such `PERFORM`s can be too big to trace by
hand. And many of these extant COBOL applications are quite
large, indeed.
There are lots of hundreds of thousands or millions of lines of
code applications.
But hopefully not as single file.
In article <10gks0t$1kmd0$1@dont-email.me>,
Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
>[snip]
>How on earth can someone not know how to divide a fraction by two ?
I think this is discarding all nuance from a complex issue.
Much of what the Atlantic article described, for instance, is
due to the lingering fallout from the pandemic: an utterly
unprecedented event in our lifetimes. Most kids entering
college now had their education (and much of their social
development) severely curtailed, due to circumstances affecting
the entire globe that were completely out of those kids'
control, or their parents', for that matter. To ignore
all of that and basically declare, "Americans are ignorant" is,
itself, ignorant.
On 2025-11-29, Dave Froble <davef@tsoft-inc.com> wrote:
>Sometimes things don't really change. You count to 10 the same way
>now as in 1960. (Trivial example)
Are you sure ? I thought maths teaching was heading in a new direction
in multiple parts of your country, as shown by this example (which is
way too close to actually being realistic, especially with the
"support" infrastructure from the people around the teacher):
https://www.youtube.com/watch?v=Zh3Yz3PiXZw
>Now this is opinion, and really a poor argument. While I detest the
>verbosity in most things, that is my choice, not the problem you claim.
Back on topic, COBOL is very verbose, but I also hate overly concise
languages whose designers don't even allow words like "function" to
be spelt out in full. You read code many more times than you write
it, and cryptic syntax makes reading a lot harder. Something like Ada
was designed for readability, and I wish all other languages followed
that example.
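To make the verbosity trade-off concrete, the same calculation can be
written two ways in COBOL itself (the data names are hypothetical):
      * Verbose, English-like statements: easy to read aloud.
       MULTIPLY HOURS-WORKED BY HOURLY-RATE GIVING GROSS-PAY.
       SUBTRACT TAX-WITHHELD FROM GROSS-PAY GIVING NET-PAY.
      * The terser COMPUTE form of the same arithmetic.
       COMPUTE NET-PAY = HOURS-WORKED * HOURLY-RATE - TAX-WITHHELD.
Both forms are still far more explicit than the n=h*r-t style that
very terse languages encourage.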
Just waiting for the moment when a newcomer designs a new language which
has syntax resembling TECO... :-)
BTW, I was interested to read about the issues and tradeoffs around
the stopping of standardised testing during the application process
in some higher education establishments a few years ago.
On 12/3/2025 9:07 AM, Simon Clubley wrote:
>BTW, I was interested to read about the issues and tradeoffs around
>the stopping of standardised testing during the application process
>in some higher education establishments a few years ago.
Nothing wrong with tests. They can be helpful. But let me tell you
what is wrong with depending on tests. SOME PEOPLE JUST DON'T DO WELL
WITH TESTS!
Case in point. My son always had problems with taking tests. I don't
understand it, but that was a problem for him. Does that make him
less than those who do well with tests?
Now he is an NRC licensed reactor operator at a nuclear power station.
Yes, there was testing, and it was difficult for him. But testing is
not how the job is learned. People actually practiced the job under
close supervision before they were trusted to do it. Perhaps that is
still a type of testing.
Lately, when special operations are required, he is the one called
upon, because he is trusted to perform the job correctly, over most
of the other operators.
I guess what I'm trying to say is that while tests can be helpful,
they are not necessarily the only way of determining competence.
Chris Townley <news@cct-net.co.uk> wrote:
>On 30/11/2025 21:09, Arne Vajhøj wrote:
>>[snip]
>Why would you want that?
The selling point is the automatic persistence of global variables:
think database. MUMPS globals really are a non-relational database.
A non-persistent database would be of limited use.
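A minimal MUMPS sketch of that point (the names are hypothetical):
the ^ prefix makes a variable a global, transparently stored on disk,
while the same name without ^ is process-local and disappears when
the process exits.
 SET ^ACCOUNT(12345,"BALANCE")=100.50  ; global: persisted automatically
 SET BAL=100.50                        ; local: gone at process exit
 WRITE ^ACCOUNT(12345,"BALANCE"),!     ; readable later, even from
                                       ; another process
No explicit OPEN, INSERT, or COMMIT is needed; the database write is
implicit in the SET.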
On 11/3/2025 8:31 AM, Simon Clubley wrote:
>What are they moving to, and how are they satisfying the extremely
>high constraints both on software and hardware availability, failure
>detection, and recovery that z/OS and its underlying hardware
>provides ?
>z/OS has a unique set of capabilities when it comes to the absolutely
>critical this _MUST_ continue working or the country/company dies
>area.
Note that even though z/OS and mainframes generally have a good
track record regarding availability, it is not a magic solution
- they can also have problems.
Banks having mainframe problems are rare but far from unheard of.
On 11/11/2025 10:50 AM, Arne Vajhøj wrote:
>On 11/3/2025 8:31 AM, Simon Clubley wrote:
>>[snip]
>Note that even though z/OS and mainframes generally have a good
>track record regarding availability, it is not a magic solution
>- they can also have problems.
>Banks having mainframe problems are rare but far from unheard of.
And speaking of.
A lot of banking services were down Monday in Denmark.
Because a bank mainframe was down for 5 hours.
Both the company and the country survived. :-)
As is often the case, the root cause was simple and stupid. A
capacity management application took away the resources needed
to process transactions.
Arne
On 2025-12-18, Arne Vajhøj <arne@vajhoej.dk> wrote:
>On 11/11/2025 10:50 AM, Arne Vajhøj wrote:
>>[snip]
>And speaking of.
>A lot of banking services were down Monday in Denmark.
>Because a bank mainframe was down for 5 hours.
>Both the company and the country survived. :-)
>As is often the case, the root cause was simple and stupid. A
>capacity management application took away the resources needed
>to process transactions.
I am impressed... :-) where did you read/see this ??
And for those who do not read Danish:
<quote>
JN Data has been working through the night to find the root cause of
the outage on our Mainframe. Initial investigations indicate that an
incorrect command in JN Data's capacity management tool was the root
cause of the outage. The incorrect command meant that too little
capacity was allocated to process customers' transactions. As a
result, services such as credit cards and online and mobile banking
became unavailable for JN Data's customers Jyske Bank, Bankdata, BEC
and Nykredit. We are now working on the underlying cause of this to
ensure that a similar error cannot occur in the future.
</quote>
mod produser /cpu=00:00:01 /wsmax=10 /wsext=20 /pgflq=100