Context
-------
1)
My opinion is that DCL:
* is fine for interactive use
* is fine for small scripts (5-25 lines with at most a few lexical
  functions, a few if statements and 'Pn' usage)
* is not up to expectations for writing large scripts aka
programming in DCL
One can argue that DCL should not be used for programming, but fact
is that it is used that way.
So what is missing for programming? I would say biggest
items are:
* loops
* switch/case
* arrays
* user defined lexicals
Arne Vajhøj formulated on Saturday:
So what is missing for programming? I would say biggest
items are:
* loops
* switch/case
* arrays
* user defined lexicals
I have been working on very long and complex DCL procedures in the past
(not anymore since I'm retired) and to be honest, have never been
bothered by the lack of specific loop, case, or array constructs. All
of that can be pretty well implemented with the DCL structures existing
today, and can be made perfectly readable if you make the effort to
write your DCL code and document it clearly.
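A minimal sketch of the usual emulations: a counted loop, an "array" built
from numbered symbols, and a case dispatch via a computed label (the symbol
names are made up):
$! Loop plus "array": numbered symbols item_1 .. item_5
$ i = 1
$ loop:
$     item_'i' = "value " + f$string(i)
$     i = i + 1
$     if i .le. 5 then goto loop
$ write sys$output item_3
$! Switch/case: computed GOTO on a symbol value
$ action = "stop"
$ goto case_'action'
$ case_start:
$     write sys$output "starting"
$     goto endcase
$ case_stop:
$     write sys$output "stopping"
$     goto endcase
$ endcase:
$ exit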
User written lexical functions, on the other hand, would be a real
bonus. There are, for one, still many parts of the operating system
for which you need to get information, and the only possible way to do
it is to parse some output (think of everything TCP/IP, for example),
while there is a callable interface available to get the info properly.
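As an illustration of that output-scraping workaround, a minimal DCL sketch;
the TCPIP command, the interface name IE0 and the field position are
assumptions, not a recipe for any particular utility:
$! Run the utility, capture one matching line, then pick a field out of it
$ line = ""
$ pipe tcpip show interface | search sys$pipe "IE0" > iface.tmp
$ open/read f iface.tmp
$ read/end_of_file=noline f line
$ noline:
$ close f
$ delete iface.tmp;*
$ line = f$edit(line, "TRIM,COMPRESS")
$ write sys$output "Second field: " + f$element(1, " ", line)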
On 12/6/2025 5:42 AM, Marc Van Dyck wrote:
User written lexical functions, on the other hand, would be a real
bonus. There are, for one, still many parts of the operating system
for which you need to get information, and the only possible way to do
it is to parse some output (think of everything TCP/IP, for example),
while there is a callable interface available to get the info properly.
Yes.
And even though it is possible to run an exe that sets a symbol
with lib$set_symbol, getting it as a lexical would make it
more readable.
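A sketch of that run-an-image pattern; GETIFINFO and the IF_PACKETS symbol
are hypothetical names:
$! The image calls LIB$SET_SYMBOL("IF_PACKETS", ...) before exiting,
$! which leaves a DCL symbol behind for the procedure to use
$ run getifinfo
$ write sys$output "Packets received: ''if_packets'"
$ if f$integer(if_packets) .gt. 1000000 then write sys$output "busy interface"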
On 12/6/2025 10:33 AM, Arne Vajhøj wrote:
On 12/6/2025 5:42 AM, Marc Van Dyck wrote:
User written lexical functions, on the other hand, would be a real
bonus. There are, for one, still many parts of the operating system
for which you need to get information, and the only possible way to do
it is to parse some output (think of everything TCP/IP, for example),
while there is a callable interface available to get the info properly.
Yes.
And even though it is possible to run an exe that sets a symbol
with lib$set_symbol, getting it as a lexical would make it
more readable.
Note that VSI could add it to existing DCL:
f$udl(shrimg, arg1, ...)
with a convention of entry point:
int udl$function(int narg, enum dcl_type *argtyp, void *argval,
                 enum dcl_type *rettyp, void *retval)
If they wanted to.
New lexical functions have been added over time.
Arne
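To make the convention concrete, a purely hypothetical call; neither f$udl
nor IFINFO_SHR exists, and the arguments are whatever the image's
udl$function would accept:
$! DCL would activate the shareable image, call its udl$function entry
$! point with the remaining arguments, and return the value as the result
$ pkts = f$udl("sys$share:ifinfo_shr", "IE0", 0)
$ write sys$output "Packets: ''pkts'"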
Let me just say that as the Unix shell advanced from ssh through ksh and
into modern bash (with some csh sidelines), at the same time that all
these great scripting facilities were being added to the shell-- people
stopped using the shell for scripting and moved to special scripting
languages like perl and python.
So, while I am a fan of the unix shell (and less of a fan of DCL but
still someone who appreciates DCL), I don't think most of the effort
that has gone into making a sophisticated command language has been of
that much use, since how people use the shell has changed.
That being the case, I would think it would be better to spend the
effort into getting better python integration for VMS than in souping
up DCL.
I have always wanted something like:
dcl.init()              # open pseudo terminal
res1 = dcl.do(command1)  # send command to pseudo terminal and return output
...
resn = dcl.do(commandn)  # send command to pseudo terminal and return output
dcl.close()              # close pseudo terminal
Point being that reusing the same DCL process is better than
starting a new one per command for certain things.
VSI cannot start a classic evolution process of adding new features
to DCL over time.
What to do?
-----------
What I see left is the "RATFOR approach" (in this century it should
probably be called the "transpiling approach", but I suspect more people
here know about RATFOR than all the transpiling to JavaScript being
done today). Pre-processing extended DCL to old DCL.
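To illustrate the pre-processing idea, a hypothetical extended-DCL loop
(the WHILE syntax is invented) and the plain DCL a pre-processor could
emit for it:
$! Extended-DCL input (invented syntax):
$!     i = 1
$!     while i .le. 10
$!         write sys$output i
$!         i = i + 1
$!     endwhile
$!
$! Plain DCL output:
$ i = 1
$ while_1:
$     if .not. (i .le. 10) then goto endwhile_1
$     write sys$output i
$     i = i + 1
$     goto while_1
$ endwhile_1: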
On 12/5/25 8:41 PM, Arne Vajhøj wrote:
What I see left is the "RATFOR approach" (in this century it should
probably be called the "transpiling approach", but I suspect more people
here know about RATFOR than all the transpiling to JavaScript being
done today). Pre-processing extended DCL to old DCL.
I don't see how transpilation could get you 64-bit integers, hashes,
regular expressions integrated into the language, or other things that
would be expected from a modern scripting language.
Even if user-written lexicals were possible, you couldn't really use them
to create or manage very interesting data structures given that DCL symbol
values are limited to 1024 characters.
I don't think VSI is really big enough to invent and maintain an
entirely new language. They should probably leave DCL as-is and start
porting .NET and thus PowerShell. As far as I know, all the relevant
bits are open source and MIT license, and PowerShell is intended to work
as both a CLI and a scripting language. It would be a big project, but probably smaller than creating a new DCL implementation.
On 12/6/2025 4:28 PM, Craig A. Berry wrote:
On 12/5/25 8:41 PM, Arne Vajhøj wrote:
What I see left is the "RATFOR approach" (in this century it should
probably be called the "transpiling approach", but I suspect more people
here know about RATFOR than all the transpiling to JavaScript being
done today). Pre-processing extended DCL to old DCL.
I don't see how transpilation could get you 64-bit integers, hashes,
regular expressions integrated into the language, or other things that
would be expected from a modern scripting language.
64-bit integers via external operations would have horrible performance.
I believe f$re_match and f$re_replace could work OK.
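Something like this, purely hypothetically; neither lexical exists today
and the regex flavour would be up to VSI:
$! Hypothetical lexicals - not present in any current DCL
$ line = "a   b    c"
$ if f$re_match(line, "^[0-9]+$") then write sys$output "all digits"
$ write sys$output f$re_replace(line, " +", " ")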
Even if user-written lexicals were possible, you couldn't really use them
to create or manage very interesting data structures given that DCL symbol
values are limited to 1024 characters.
I believe it is 8192 today.
And unless the code is really quirky, I would assume going
to 32K would be easy.
I don't think VSI is really big enough to invent and maintain an
entirely new language. They should probably leave DCL as-is and start
porting .NET and thus PowerShell. As far as I know, all the relevant
bits are open source and MIT license, and PowerShell is intended to work
as both a CLI and a scripting language. It would be a big project, but
probably smaller than creating a new DCL implementation.
.NET on VMS would be great.
Not just for PS but for a lot of stuff: C# language,
ASP.NET MVC + ASP.NET Web API etc..
It would also bring DBL to x86-64 as they support .NET
as platform.
All relevant parts of both PS and .NET should be MIT.
(Windows-specific stuff is not relevant)
PS is *the* shell for Windows admins, but has it caught on
with Linux admins?
I would have thought those were mostly bash and Python.
BTW, I have never liked PS - it just doesn't appear logical
to me, but that is just my personal opinion.
And PS cmdlets are like a combo of the existing DCL capability
to add verbs and the non-existing DCL capability for user-defined
lexicals.
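The existing way to add a verb, for comparison, is a command definition
file processed with SET COMMAND; a minimal sketch, where MYTOOL.CLD and
MYTOOL.EXE are made-up names:
$! MYTOOL.CLD might contain:
$!     define verb mytool
$!         image "sys$disk:[]mytool.exe"
$!         parameter p1, prompt="File"
$!
$! Add the verb to the process command table and use it:
$ set command mytool.cld
$ mytool somefile.txt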
Let me just say that as the Unix shell advanced from ssh through ksh and
into modern bash (with some csh sidelines), at the same time that all
these great scripting facilities were being added to the shell-- people
stopped using the shell for scripting and moved to special scripting
languages like perl and python.
So, while I am a fan of the unix shell (and less of a fan of DCL but
still someone who appreciates DCL), I don't think most of the effort
that has gone into making a sophisticated command language has been of
that much use, since how people use the shell has changed.
That being the case, I would think it would be better to spend the
effort into getting better python integration for VMS than in souping
up DCL.
(And I really don't like python at all... the indentation being syntax
reminds me far too much of JCL... but it's what everyone uses today.)
--scott
I think you will find it started with sh (aka the Bourne shell) rather
than ssh!
On 12/6/25 5:46 PM, Arne Vajhøj wrote:
On 12/6/2025 4:28 PM, Craig A. Berry wrote:
Even if user-written lexicals were possible, you couldn't really use them
to create or manage very interesting data structures given that DCL symbol
values are limited to 1024 characters.
I believe it is 8192 today.
Current online help for LIB$SET_SYMBOL says 1024. Current RTL manual
says 4096. Dunno what's right. But with people using scripting
languages to process massive vectors to train LLM models, it all seems
pretty puny.
It is.
And, more importantly, DCL "confirms" it:
$ typ maxsym.com
$ write sys$output f$getsyi("version") + " " + f$getsyi("arch_name")
$ s = ""
$ i = 0
$ loop:
$    if i .ge. 8100 then goto endloop
$    s = s + "X"
$    i = i + 1
$    goto loop
$ endloop:
$ write sys$output f$length(s)
$ exit
$ @maxsym
V9.2-3    x86_64
8100
$ @maxsym
V8.4-2L2 Alpha
8100
Well - the above can actually only do 8163 not 8192. Over 8163
it gives:
%DCL-W-BUFOVF, command buffer overflow - shorten expression or command line
but I buy the 8192 for the pure symbol.
On 12/6/2025 3:23 PM, Arne Vajhøj wrote:
I have always wanted something like:
dcl.init() # open pseudo terminal
res1 = dcl.do(command1) # send command to pseudo terminal and return
output
...
resn = dcl.do(commandn) # send command to pseudo terminal and return
output
dcl.close() # close pseudo terminal
Point being that reusing the same DCL process is better than
starting a new one per command for certain things.
Example of DCL requiring the same process:
the skipping N lines in COPY trick
$ open/read f z.txt    ! opened as a process-permanent file
$ read f dummy         ! skip line 1
$ read f dummy         ! skip line 2
$ copy f zz.txt        ! COPY picks up at the current position, i.e. line 3
$ close f
Context
-------
1)
My opinion is that DCL:
* is fine for interactive use
* is fine for small scripts (5-25 lines with at most a few lexical
functions, a few if statements and 'Pn' usage)
* is not up to expectations for writing large scripts aka
programming in DCL
One can argue that DCL should not be used for programming, but fact
is that it is used that way.
So what is missing for programming? I would say biggest
items are:
* loops
* switch/case
* arrays
* user defined lexicals
This means VSI cannot break backwards compatibility for DCL - whatever
ran in 1985 has to run the exact same way today.
8) VMS SEARCH functionality is weak compared to grep.
$ @maxsym
V9.2-3    x86_64
8100
$ @maxsym
V8.4-2L2 Alpha
8100
On 06/12/2025 19:45, Scott Dorsey wrote:
Let me just say that as the Unix shell advanced from ssh through ksh and
into modern bash (with some csh sidelines), at the same time that all
these great scripting facilities were being added to the shell-- people
stopped using the shell for scripting and moved to special scripting
languages like perl and python.
So, while I am a fan of the unix shell (and less of a fan of DCL but
still someone who appreciates DCL), I don't think most of the effort
that has gone into making a sophisticated command language has been of
that much use, since how people use the shell has changed.
That being the case, I would think it would be better to spend the effort
into getting better python integration for VMS than in souping up DCL.
(And I really don't like python at all... the indentation being syntax
reminds me far too much of JCL... but it's what everyone uses today.)
I think you will find it started with sh (aka the Bourne shell) rather
than ssh!
6) Nothing matching "man -k" in the DCL HELP facility.
On 12/7/2025 10:13 AM, Arne Vajhøj wrote:
$ @maxsym
V9.2-3    x86_64
8100
$ @maxsym
V8.4-2L2 Alpha
8100
$ write sys$output f$getsyi("version") + " " + f$getsyi("arch_name")
$ on warning then goto endloop
$ s = ""
$ i = 0
$ loop:
$    if i .ge. 8100 then goto endloop
$    s = s + "X"
$    i = i + 1
$    goto loop
$ endloop:
$ status = $status
$ write sys$output i
$ warning:
$ on warning then goto warning
$ on control_y then exit
$ s = s - "X"
$ write sys$output f$length(s)
$ exit
THING1$ @maxxim
V7.2      Alpha
%DCL-W-BUFOVF, command buffer overflow - shorten expression or command line
1014
%DCL-W-BUFOVF, command buffer overflow - shorten expression or command line
[... the same %DCL-W-BUFOVF message repeated many more times ...]
995
On 2025-12-05, Arne Vajhøj <arne@vajhoej.dk> wrote:
My opinion is that DCL:
* is fine for interactive use
[Oh dear, Arne, oh dear... :-)]
No, it most certainly is not fine for interactive use.
1) You can't edit a line longer than the terminal width. Either fix the
terminal driver or add the functionality to DCL itself.
2) No easy incremental search of command history (Bash Ctrl-R style).
3) No saving of command history into a history file with just the commands entered into multiple simultaneous DCL sessions added to the end of the history file.
4) No automatic restoration of command history during startup of a new
DCL session.
5) Piping is seriously clunky.
6) Nothing matching "man -k" in the DCL HELP facility.
7) VMS diff is lousy compared to GNU diff and its unified diff mode.
GNU diff is available from third parties for VMS. It should be a part of VMS.
8) VMS SEARCH functionality is weak compared to grep.
9) No tab completion of filenames.
On 2025-12-05, Arne Vajhøj <arne@vajhoej.dk> wrote:
So what is missing for programming? I would say biggest
items are:
* loops
* switch/case
* arrays
* user defined lexicals
New functionality is implemented as an OO model in C, which sits alongside
the existing procedural code in Macro-32.
Lists/dictionaries/tuples/etc should be a core facility.
General OO functionality with structured imports of the objects.
Generation of the VMS header files for all the languages will also
include generation of DCL OO headers and modules that can be directly imported by a DCL script.
As such, there is no need for user defined lexicals (which, with the
current DCL design, would have to run in user mode instead of supervisor
mode anyway for security reasons). You just import the OO module containing the system service that you want to call.
This means VSI cannot break backwards compatibility for DCL - whatever
ran in 1985 has to run the exact same way today.
No problem. You keep the existing interfaces and add OO functionality
on top of it for use by new scripts, or any existing scripts you might
want to spend the time modifying. There's no reason why all the new OO
stuff can't simply be written in C that runs alongside the existing
Macro-32 code. Likewise for all the new control structures stuff.
On 12/8/2025 9:12 AM, Simon Clubley wrote:
1) You can't edit a line longer than the terminal width. Either fix the
terminal driver or add the functionality to DCL itself.
I never use commands that long.
What are you doing when you need such long names? Filenames with
full path and deep directory structures??
9) No tab completion of filenames.
I can live without. But it is a common shell feature.
It would obviously require a DCL change - this is something
a pre-processor cannot handle.
On 12/8/2025 9:12 AM, Simon Clubley wrote:
On 2025-12-05, Arne Vajhoj <arne@vajhoej.dk> wrote:
So what is missing for programming? I would say biggest
items are:
* loops
* switch/case
* arrays
* user defined lexicals
New functionality is implemented as an OO model in C, which sits alongside
the existing procedural code in Macro-32.
Lists/dictionaries/tuples/etc should be a core facility.
General OO functionality with structured imports of the objects.
Generation of the VMS header files for all the languages will also
include generation of DCL OO headers and modules that can be directly
imported by a DCL script.
As such, there is no need for user defined lexicals (which, with the
current DCL design, would have to run in user mode instead of supervisor
mode anyway for security reasons). You just import the OO module
containing the system service that you want to call.
????
Having a script language call native code requires some builtin
capability in the script language.
Examples: Python ctypes, VBScript COM etc..
This means VSI cannot break backwards compatibility for DCL - whatever
ran in 1985 has to run the exact same way today.
No problem. You keep the existing interfaces and add OO functionality
on top of it for use by new scripts, or any existing scripts you might
want to spend the time modifying. There's no reason why all the new OO
stuff can't simply be written in C that runs alongside the existing
Macro-32 code. Likewise for all the new control structures stuff.
????
You need one parser not two parsers.
You need some interoperability between new stuff and old stuff.
On 2025-12-08, Arne Vajhøj <arne@vajhoej.dk> wrote:
On 12/8/2025 9:12 AM, Simon Clubley wrote:
Lists/dictionaries/tuples/etc should be a core facility.
General OO functionality with structured imports of the objects.
Generation of the VMS header files for all the languages will also
include generation of DCL OO headers and modules that can be directly
imported by a DCL script.
As such, there is no need for user defined lexicals (which, with the
current DCL design, would have to run in user mode instead of supervisor
mode anyway for security reasons). You just import the OO module containing
the system service that you want to call.
????
Having a script language call native code requires some builtin
capability in the script language.
Examples: Python ctypes, VBScript COM etc..
You have completely and totally missed what I am saying above. Read it again.
There is absolutely no need for the hack of user defined lexicals because missing system calls are added to a newDCL as a plugin module when a new version of newDCL is built. All the existing procedural code gets to use
the existing lexicals in a 100% backwards compatible manner.
All the new code, where backwards compatibility is not a concern, gets to
use the new extended OO Python-style interface to the system services.
The native code you call is a part of newDCL, in the exact same way that
the native code which is run when you call a lexical is a part of DCL.
This means VSI cannot break backwards compatibility for DCL - whatever
ran in 1985 has to run the exact same way today.
No problem. You keep the existing interfaces and add OO functionality
on top of it for use by new scripts, or any existing scripts you might
want to spend the time modifying. There's no reason why all the new OO
stuff can't simply be written in C that runs alongside the existing
Macro-32 code. Likewise for all the new control structures stuff.
????
You need one parser not two parsers.
You need some interoperability between new stuff and old stuff.
Read it again.
Any variables referenced within an object you create get all the new
fancy data structures and extended modern limits.
Any variables referenced that are not within an object get the
traditional crappy DCL limits and functionality, making them 100%
backwards compatible with existing DCL code.
Oh, and you only have one parser, not two. It's just that all the new
stuff is handed off to code written in a HLL such as C instead of Macro-32.
PS: BTW Arne, have you written any shells/parsers/compiler frontends/etc ?
You are looking at creating an additional hack on top of an existing set of hacks.
I am using my knowledge to propose a more general and cleaner solution.
On 12/9/2025 8:55 AM, Simon Clubley wrote:
Read it again.
Any variables referenced within an object you create get all the new
fancy data structures and extended modern limits.
Any variables referenced that are not within an object get the
traditional crappy DCL limits and functionality, making them 100%
backwards compatible with existing DCL code.
Oh, and you only have one parser, not two. It's just that all the new
stuff is handed off to code written in a HLL such as C instead of Macro-32.
You add such support to DCL and get a starlet.com with some
syntax like:
$ declare class system_services
$ ...
$ native static function sys$forcex(byref pid: long, bydesc prcnam: string, byval code: long)
$ ...
$ enddeclare
(I tried to make it DCL'ish - it could be Python'ish or Java'ish or whatever'ish instead)
The DCL script does:
$ @sys$library:starlet
$ ...
$ system_services.sys$forcex(id, , 44)
$ ...
This needs a way for DCL to call native code, using VMS calling
convention.
Existing lexicals do not need this because they are implemented
inside DCL.
And parsing is not that simple either.
DCL has an existing parser that gets activated.
And it will process the @ but not the class declaration
or the class.method call.
It will not "hand off", it will give an error.
And there is also an issue the other way around.
$ ...
$ librtl.lib$put_output("''p1' ''p2' ''p3'")
$ ...
Old DCL cannot do librtl.lib$put_output, so that needs
to go to the new DCL, but the new DCL should not
reimplement "''p1' ''p2' ''p3'", so that needs to go back to the
old DCL.
Impossible to do without significant changes to old DCL.
And an extremely complex solution.
PS: BTW Arne, have you written any shells/parsers/compiler frontends/etc ?
It has been more than 25 years.
You are looking at creating an additional hack on top of an existing
set of hacks.
True.
But also a low cost way to provide some benefits.
I am using my knowledge to propose a more general and cleaner
solution.
What you propose would be the ugliest, most hackish parser and
interpreter known to mankind.
On 12/8/2025 9:12 AM, Simon Clubley wrote:
6) Nothing matching "man -k" in the DCL HELP facility.
Problems with a lot of what you said, but this one sticks out the most.
DCL is the equivalent of a Unix Shell. "man -k" has absolutely
nothing to do with a Unix Shell. It's a standalone utility that
searches the man files for keywords. One could always write an
equivalent for VMS HELP if anyone really cared. Heck, you could
even call it "man" and give it a "k" option. :-)
On 2025-12-10, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
In article <mporhuFjl5nU5@mid.individual.net>,
bill <bill.gunshannon@gmail.com> wrote:
On 12/8/2025 9:12 AM, Simon Clubley wrote:
6) Nothing matching "man -k" in the DCL HELP facility.
Problems with a lot of what you said, but this one sticks out the most.
DCL is the equivalent of a Unix Shell. "man -k" has absolutely
nothing to do with a Unix Shell. It's a standalone utility that
searches the man files for keywords. One could always write an
equivalent for VMS HELP if anyone really cared. Heck, you could
even call it "man" and give it a "k" option. :-)
Agreed. Simon, much on your list was complaints about the
overall interactive VMS environment, not so much about DCL.
E.g., comparing `SEARCH` and `grep` is fine, but drawing
conclusions about DCL as a result does not follow.
That may be true in the environments you are used to, but in the VMS
world, there's a tendency to regard the standard vendor-supplied
DCL commands and DCL itself all as one standard combined package.
Context
-------
1)
My opinion is that DCL:
* is fine for interactive use
* is fine for small scripts (5-25 lines with at most a few lexical
  functions, a few if statements and 'Pn' usage)
* is not up to expectations for writing large scripts aka
  programming in DCL
One can argue that DCL should not be used for programming, but fact
is that it is used that way.
So what is missing for programming? I would say biggest
items are:
* loops
* switch/case
* arrays
* user defined lexicals
What to do?
-----------
A totally new shell with a new syntax is not a solution:
* the oldtimers want DCL
* the newcomers want standard (bash, Python etc.)
* expensive
A reimplementation of DCL in C (or another language, but C is
probably the current preference) is not a solution:
* risk only achieving 99.9% compatibility instead of 100% compatibility
* expensive
I like DCL. I prefer it over any other shell I've ever used. But it is
also over 40 years old with only limited enhancements over the years.
PS is *the* shell for Windows admins, but has it caught on with Linux
admins?
BTW, I have never liked PS - it just doesn't appear logical to me, but
that is just my personal opinion.
User written lexical functions, on the other hand, would be a real
bonus. There are, for one, still many parts of the operating system for
which you need to get information, and the only possible way to do it is
to parse some output (think of everything TCP/IP, for example), while
there is a callable interface available to get the info properly.
On Sat, 06 Dec 2025 11:42:47 +0100, Marc Van Dyck wrote:
User written lexical functions, on the other hand, would be a real
bonus. There are, for one, still many parts of the operating system for
which you need to get information, and the only possible way to do it is
to parse some output (think of everything TCP/IP, for example), while
there is a callable interface available to get the info properly.
POSIXish shells seem to feel less need for such built-in extensions.
Is this a reflection on the ease of doing command substitutions in them,
vis-à-vis DCL?
Some newer commands in the Linux world have the option to produce output
in JSON format, which also helps.
First, backwards compatibility is a powerful force blocking possible
improvement. I was using Sun OS and Solaris in the nineties. Later I
which had exactly the same problems as I remembered from my earlier
use. I suspect that it was bug-for-bug compatible with '/bin/sh'
which they shipped in 1984. And I suspect that if you get current
Solaris from Oracle, you will get the same crappy '/bin/sh'.
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Sat, 06 Dec 2025 11:42:47 +0100, Marc Van Dyck wrote:
User written lexical functions, on the other hand, would be a real
bonus. There are, for one, still many parts of the operating system for
which you need to get information, and the only possible way to do it is
to parse some output (think of everything TCP/IP, for example), while
there is a callable interface available to get the info properly.
POSIXish shells seem to feel less need for such built-in extensions.
Is this a reflection on the ease of doing command substitutions in them,
vis-à-vis DCL?
Some newer commands in the Linux world have the option to produce output
in JSON format, which also helps.
First, backwards compatibility is a powerful force blocking
possible improvement. I was using Sun OS and Solaris in
nineties. Later I looked at Solaris from 2007. In 2007
they shipped crappy '/bin/sh' which had exactly the same
problems as I remembered from my earlier use. I suspect
that it was bug-for-bug compatible with '/bin/sh' which
they shipped in 1984. And I suspect that if you get
current Solaris from Oracle, you will get the same crappy
'/bin/sh'.
The same forces that prevent improvements to '/bin/sh' work
for DCL, so the chance of change here is very small.
Second, early Unix had relatively cheap process creation,
but some implementations (PDP-11 and 16-bit Xenix) had a
tight limit on process size. So, there was a preference
for using external commands for extensions. Unix provides
commands (and the /proc filesystem in Linux) so that information
is available in textual form, meaning less need for a
library API.
Third, the Unix shell does not play as distinguished a role as
DCL does in VMS; users can have different shells. There are
scripting languages, and for programming the shell is just
one of the available scripting languages. Some scripting
languages (like Perl) give access to system calls, and most
have an FFI, so that one can call code in shared libraries.
On 12/5/25 7:41 PM, Arne Vajhøj wrote:
A reimplementation of DCL in C (or another language, but C is
probably the current preference) is not a solution:
* risk only achieving 99.9% compatibility instead of 100% compatibility
* expensive
Why not make a 2nd DCL, one that can live on the system in parallel with
the current DCL? Call it DCL64 since it would use 64-bit variables
instead of 32-bit. Which DCL a given process used would be specified in
sysuaf or creprc (and, possibly, spawn). I'd also like to see the
following:
1. Variables can be integer, string, or floating-point.
2. Longer string lengths.
3. Enhance SYS$FAO to support floating-point.
4. New constructs (e.g. loops, arrays, etc.)
5. Lexicals match system functions.
   (i.e., easy for the vendor to update lexicals to match new functions
    or enhanced functionality and being able to specify multiple items
    to return in a single lexical call)
6. An option to make pipe syntax the default without having to specify
the pipe command.
7. An option to make Set process/parse=extend and Set
process/token=extend the default.
8. Commands and switches match on 8 chars instead of 4.
   (e.g., I'd like to see both open and openssl exist as separate commands).
9. No automatic upcasing of the command line. (Internal parsing is
case-insensitive.) The idea is to reduce the need for lib$initialize in
utilities written in C.
10. Otherwise, still DCL.
11. Perhaps even make the character set UTF-8 instead of ASCII. (Of
course, then I'd be asking for a DECterm that used UTF-8).
Benefits:
No need to be 100% backwards compatible. Make it as backwards
compatible as possible, but the edge cases that make it difficult to
rewrite DCL in a HLL wouldn't apply since the original DCL is still
there to handle them.
On Sat, 06 Dec 2025 11:42:47 +0100, Marc Van Dyck wrote:
User written lexical functions, on the other hand, would be a real
bonus. There are, for one, still many parts of the operating system for
which you need to get information, and the only possible way to do it is
to parse some output (think of everything TCP/IP, for example), while
there is a callable interface available to get the info properly.
POSIXish shells seem to feel less need for such built-in extensions.
Is this a reflection on the ease of doing command substitutions in them,
vis-à-vis DCL?
On Sat, 6 Dec 2025 18:46:52 -0500, Arne Vajhøj wrote:
PS is *the* shell for Windows admins, but has it caught on with Linux
admins?
No. And judging from some of the struggles Windows users are having with
it, it may not be that well-supported on Windows now, either.
BTW, I have never liked PS - it just doesn't appear logical to me, but
that is just my personal opinion.
I have this feeling that more and more Windows diehards (inside and
outside Microsoft) are simply installing WSL2 and using the Linux command line instead.
I will grant you that it is annoying to have to assume the subset
of behavior defined by 7th Edition research Unix for "portable"
shell scripts, and increasingly even that is tenuous on Linux systems.
Ubuntu's replacement of classic sh by the not-entirely-compatible
dash was a particular low point there.
On 12/13/2025 8:41 PM, Lawrence D'Oliveiro wrote:
I have this feeling that more and more Windows diehards (inside and
outside Microsoft) are simply installing WSL2 and using the Linux
command line instead.
Not likely.
As it would be practically useless for Windows admin tasks.
On Mon, 15 Dec 2025 11:59:51 -0500, Arne Vajhøj wrote:
On 12/13/2025 8:41 PM, Lawrence D'Oliveiro wrote:
I have this feeling that more and more Windows diehards (inside and
outside Microsoft) are simply installing WSL2 and using the Linux
command line instead.
Not likely.
As it would be practically useless for Windows admin tasks.
Most of those use point-and-click anyway. Or this new "Agentic AI"
thing.
On 12/15/2025 6:01 PM, Lawrence D'Oliveiro wrote:
On Mon, 15 Dec 2025 11:59:51 -0500, Arne Vajhøj wrote:
On 12/13/2025 8:41 PM, Lawrence D'Oliveiro wrote:
I have this feeling that more and more Windows diehards (inside and
outside Microsoft) are simply installing WSL2 and using the Linux
command line instead.
Not likely.
As it would be practically useless for Windows admin tasks.
Most of those use point-and-click anyway. Or this new "Agentic AI"
thing.
No.
The normal end users like GUI, but the serious Windows admins script a
lot.
And today that is mostly PS.
[...] But with people using scripting
languages to process massive vectors to train LLM models, it all seems
pretty puny.
On Mon, 15 Dec 2025 19:28:42 -0500, Arne Vajhoj wrote:
On 12/15/2025 6:01 PM, Lawrence D'Oliveiro wrote:
Most of those use point-and-click anyway. Or this new "Agentic AI"
thing.
No.
The normal end users like GUI, but the serious Windows admins script a
lot.
And today that is mostly PS.
Somehow I doubt that. Remember, Microsoft spent years, decades,
conditioning its users to be allergic to the command line.
Then they did an about-face and introduced PowerShell. But something tells me that has not been as big a success as some might have hoped.
In article <10h3upk$3f20l$1@dont-email.me>,
Craig A. Berry <craigberry@nospam.mac.com> wrote:
[...] But with people using scripting
languages to process massive vectors to train LLM models, it all seems
pretty puny.
This is a good point, but, I sometimes wonder if, perhaps, we
need to recalibrate what we mean when we say, "scripting
language." I imagine that you are referring to Python here, as
that seems to be the thing that the kids are all hip on these
days when it comes to model training and such-like, but I think
it's fair to say that that language has grown far beyond
traditional "scripting" use.
Python is interpreted, yes, but people who are using it to do
numerical analysis are often using the jit-compiled variant, and
more often the actual heavy computational lifting is being done
in a library that's exposed to Python via an FFI; so the actual
training code is in Fortran or C or some more traditional
compiled language.
There is, evidently, a need (or at least desire) for a really
good interpreted language to script various system management
tasks; I gather folks feel that DCL is a bit long in the tooth
and insufficient for that. As I mentioned before, I feel like
that is qualitatively different than using DCL as an interactive
CLI; perhaps the solution here is just to build out a really
nice set of officially supported modules for, say, Python (or a
similar suitable language) and call it a day.
But there is an important category of script that starts off as one
long command, gets split into two or more commands, wrapped in a
loop, parameterized for different inputs, put in a subroutine, and
has features added like e-mailing the output to a distribution list.
Now you've got a program, and there is no obvious point in that
process where it's free or easy to switch languages.
I work in a big-ish Windows environment. GUI is used when necessary,
and everything else is scripted. With PS. I'm a network engineer, so
I use WSL2 and Linux VMs, but I don't really manage anything
Windows.
It is well-received amongst the Windows admins I'm surrounded by.
Nobody is saying they wish they could just clicky-click several
hundred times to do something that could go in a for loop. Most of
our Windows VMs don't even have a GUI installed (datacenter
edition).
As to Arne's earlier question of PS adoption in Linux... Hell NO. PS
runs counter to how *nix people think. It is bloated, excruciatingly
verbose, and its "everything's an object" model breaks pipelining in
the *nix paradigm. This doesn't even address the religious
objections. No thank you.
On 12/16/25 8:05 AM, Dan Cross wrote:
There is, evidently, a need (or at least desire) for a really
good interpreted language to script various system management
tasks; I gather folks feel that DCL is a bit long in the tooth
and insufficient for that. As I mentioned before, I feel like
that is qualitatively different than using DCL as an interactive
CLI; perhaps the solution here is just to build out a really
nice set of officially supported modules for, say, Python (or a
similar suitable language) and call it a day.
Python, Perl, and Lua all exist. Probably all could use additional work
for VMS-specific administrative tasks. Not sure the state of Ruby, but
there is JRuby. And at least a couple of other JVM-based scripting
options.
In article <10h3upk$3f20l$1@dont-email.me>,
Craig A. Berry <craigberry@nospam.mac.com> wrote:
[...] But with people using scripting
languages to process massive vectors to train LLM models, it all seems
pretty puny.
This is a good point, but, I sometimes wonder if, perhaps, we
need to recalibrate what we mean when we say, "scripting
language." I imagine that you are referring to Python here, as
that seems to be the thing that the kids are all hip on these
days when it comes to model training and such-like, but I think
it's fair to say that that language has grown far beyond
traditional "scripting" use.
Python is interpreted, yes, but people who are using it to do
numerical analysis are often using the jit-compiled variant,
and
more often the actual heavy computational lifting is being done
in a library that's exposed to Python via an FFI; so the actual
training code is in Fortran or C or some more traditional
compiled language.
But I really do like Groovy.
(and I have also written a lot about it and how to integrate
it with stuff on VMS)
On 12/16/2025 4:20 PM, Craig A. Berry wrote:
Not sure the state of Ruby,
I don't remember ever seeing MRI being ported to VMS.
On Tue, 16 Dec 2025 17:38:47 -0000 (UTC), Sam Thomas wrote:
I work in a big-ish Windows environment. GUI is used when necessary,
and everything else is scripted. With PS. I'm a network engineer, so
I use WSL2 and Linux VMs, but I don't really manage anything
Windows.
It is well-received amongst the Windows admins I'm surrounded by.
Nobody is saying they wish they could just clicky-click several
hundred times to do something that could go in a for loop. Most of
our Windows VMs don't even have a GUI installed (datacenter
edition).
As to Arne's earlier question of PS adoption in Linux... Hell NO. PS
runs counter to how *nix people think. It is bloated, excruciatingly
verbose, and its "everything's an object" model breaks pipelining in
the *nix paradigm. This doesn't even address the religious
objections. No thank you.
Let me see if I understand this: you have Windows-using colleagues who
are fond of PowerShell, while you would avoid it like the plague for *nix-based workflows.
These same colleagues are using the "Datacenter edition" of Windows
Server. Worth pointing out that on-prem editions of Windows Server are
no longer getting quite the same love from Microsoft as the cloud
version. They are putting more effort into the cloud versions going
forward.
And the cloud is, of course, dominated by Linux. So the future of
Windows Server here, too, seems a bit limited.
On 12/16/25 8:05 AM, Dan Cross wrote:
In article <10h3upk$3f20l$1@dont-email.me>,
Craig A. Berry <craigberry@nospam.mac.com> wrote:
[...] But with people using scripting
languages to process massive vectors to train LLM models, it all seems
pretty puny.
This is a good point, but, I sometimes wonder if, perhaps, we
need to recalibrate what we mean when we say, "scripting
language." I imagine that you are referring to Python here, as
that seems to be the thing that the kids are all hip on these
days when it comes to model training and such-like, but I think
it's fair to say that that language has grown far beyond
traditional "scripting" use.
Python is interpreted, yes, but people who are using it to do
numerical analysis are often using the jit-compiled variant, and
more often the actual heavy computational lifting is being done
in a library that's exposed to Python via an FFI; so the actual
training code is in Fortran or C or some more traditional
compiled language.
Yes, of course. The scripting language will use a library (e.g., NumPy
or PyTorch) that exposes interfaces to various kinds of hardware
acceleration. But eventually there is processed data returned that gets
manipulated using ordinary constructs of the calling language. My
comment arose in the context of DCL and Arne's suggestion of a new
wrapper language around DCL supplying new features; if any of those new
features tries to put more than 8192 bytes into a DCL symbol, that ain't
gonna work. And the same would be true for a new lexical function in
current DCL.
There is, evidently, a need (or at least desire) for a really
good interpreted language to script various system management
tasks; I gather folks feel that DCL is a bit long in the tooth
and insufficient for that. As I mentioned before, I feel like
that is qualitatively different than using DCL as an interactive
CLI; perhaps the solution here is just to build out a really
nice set of officially supported modules for, say, Python (or a
similar suitable language) and call it a day.
Python, Perl, and Lua all exist. Probably all could use additional work
for VMS-specific administrative tasks. Not sure the state of Ruby, but
there is JRuby. And at least a couple of other JVM-based scripting
options. So it's not like people are struggling between DCL and nothing.
In a way I agree with you that a good scripting language should not have
to be a good CLI and vice versa. But there is an important category of
script that starts off as one long command, gets split into two or more
commands, wrapped in a loop, parameterized for different inputs, put in
a subroutine, and has features added like e-mailing the output to a
distribution list. Now you've got a program, and there is no obvious
point in that process where it's free or easy to switch languages.
So while it's certainly not the only paradigm for writing scripts, it's
darn convenient to have a CLI that can also function as a decent
programming language. DCL would have fit the bill 30 years ago but now
not so much. Thus the discussion about enhancing or replacing DCL.
On 12/16/2025 9:05 AM, Dan Cross wrote:
In article <10h3upk$3f20l$1@dont-email.me>,
Craig A. Berry <craigberry@nospam.mac.com> wrote:
[...] But with people using scripting
languages to process massive vectors to train LLM models, it all seems
pretty puny.
This is a good point, but, I sometimes wonder if, perhaps, we
need to recalibrate what we mean when we say, "scripting
language." I imagine that you are referring to Python here, as
that seems to be the thing that the kids are all hip on these
days when it comes to model training and such-like, but I think
it's fair to say that that language has grown far beyond
traditional "scripting" use.
ML, data processing and web have certainly passed admin
scripting in usage.
Python is interpreted, yes, but people who are using it to do
numerical analysis are often using the jit-compiled variant,
Most still use CPython.
None of the JIT implementations PyPy, GraalPy, Codon etc. has
really gotten traction.
(CPython 3.13+ actually comes with JIT, but it does not provide
the same speedup as PyPy and GraalPy can for those CPU intensive
cases that should never be done in Python)
Reason: fear of compatibility issues combined with the fact that
JIT usually does not matter.
Because:
and
more often the actual heavy computational lifting is being done
in a library that's exposed to Python via an FFI; so the actual
training code is in Fortran or C or some more traditional
compiled language.
If CPython interpretation uses 0.1-1.0% of total CPU usage
and native library execution uses 99.0-99.9% of total CPU usage,
then ...
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Let me see if I understand this: you have Windows-using colleagues who
are fond of PowerShell, while you would avoid it like the plague for
*nix-based workflows.
Exactly. I assume if one wants to do scripting for Windows in a way
that has reasonable access to system API in a MS-supported way, there
is really only one choice.
In article <10ht0rk$34qt0$2@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 12/16/2025 9:05 AM, Dan Cross wrote:
Python is interpreted, yes, but people who are using it to do
numerical analysis are often using the jit-compiled variant,
Most still use CPython.
In your world of enterprise IT? Sure.
For numerical analysis? Directly in Python? No.
None of the JIT implementations PyPy, GraalPy, Codon etc. has
really gotten traction.
Reason: fear of compatibility issues combined with the fact that
JIT usually does not matter.
Because:
and
more often the actual heavy computational lifting is being done
in a library that's exposed to Python via an FFI; so the actual
training code is in Fortran or C or some more traditional
compiled language.
If CPython interpretation uses 0.1-1.0% of total CPU usage
and native library execution uses 99.0-99.9% of total CPU usage,
then ...
See above. I'm talking about software that's doing numerical
analysis directly in Python, _not_ via FFI.
VBScript is:
* 30 years old
* deprecated and scheduled for removal in a future Windows version
* only works with stuff that exposes a COM API
On Wed, 17 Dec 2025 20:34:23 -0500, Arne Vajhøj wrote:
VBScript is:
* 30 years old
* deprecated and scheduled for removal in a future Windows version
* only works with stuff that exposes a COM API
Are you talking about VBA?
On 12/16/2025 4:20 PM, Craig A. Berry wrote:
there is JRuby.
Yes.
It was ported to VMS in the past.
And a recent version can build code on PC and run code on VMS.
On 12/16/25 7:17 PM, Arne Vajhøj wrote:
On 12/16/2025 4:20 PM, Craig A. Berry wrote:
Not sure the state of Ruby,
I don't remember ever seeing MRI being ported to VMS.
There was a port for Alpha 20 years ago:
https://xiaotuanzi.github.io/vmsruby/en/index.html
and an incomplete attempt to revive it 7 years ago:
https://github.com/bg/vmsruby/commits/master/
On 12/17/2025 11:57 AM, Dan Cross wrote:
In article <10ht0rk$34qt0$2@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 12/16/2025 9:05 AM, Dan Cross wrote:
Python is interpreted, yes, but people who are using it to do
numerical analysis are often using the jit-compiled variant,
Most still use CPython.
In your world of enterprise IT? Sure.
For numerical analysis? Directly in Python? No.
None of the JIT implementations PyPy, GraalPy, Codon etc. has
really gotten traction.
Reason: fear of compatibility issues combined with the fact that
JIT usually does not matter.
Because:
and
more often the actual heavy computational lifting is being done
in a library that's exposed to Python via an FFI; so the actual
training code is in Fortran or C or some more traditional
compiled language.
If CPython interpretation uses 0.1-1.0% of total CPU usage
and native library execution uses 99.0-99.9% of total CPU usage,
then ...
See above. I'm talking about software that's doing numerical
analysis directly in Python, _not_ via FFI.
But practically nobody does that.
By using the high level packages (pandas, polars,
tensorflow, pytorch, numpy, scipy etc.) they can
do what they need to do using much higher level
constructs. No need to fiddle with LAPACK, BLAS,
matrix multiplication and inversion algorithms.
And on top of having to deal with much less and
much higher level code, they get way better
performance. The standard libraries are
much faster than custom Python code even
if it is JIT compiled.
Nobody wants to write 5-10 times more lines
of code to run 5-10 times slower.