• Re: extending MySQL on VMS

    From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Tue Aug 19 17:04:38 2025
    From Newsgroup: comp.os.vms

    In article <68a493ec$0$710$14726298@news.sunsite.dk>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/19/2025 10:09 AM, Dan Cross wrote:
    In article <1081sk3$3njqo$7@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    On 2025-08-18, Dan Cross <cross@spitfire.i.gajendra.net> wrote:

    I happen to disagree with Simon's notion of what makes for
    robust programming, but to go to such an extreme as to suggest
    that writing code as if logical operators don't short-circuit
    is the same as not knowing the semantics of division is
    specious.

    That last one is an interesting example. I may not care about
    short circuiting, but I am _very_ _very_ aware of the combined
    unsigned integers and signed integers issues in C expressions. :-(

    It also affects how I look at the same issues in other languages.

    I've mentioned this before, but I think languages should give you
    unsigned integers by default, and you should have to ask for
    a signed integer if you really want one.

    Whether integers are signed or unsigned by default is not
    terribly interesting to me, but I do believe, strongly, that
    implicit type conversions as in C are a Bad Idea(TM), and I
    think that history has shown that view to be more or less
    correct; the only language that seems to get this approximately
    right is Haskell, using typeclasses, but that's not implicit
    coercion; it takes well-defined, strongly-typed functions that
    do explicit conversions internally, from the prelude.

    But that's Haskell. For most programming, if one wants to do
    arithmetic on operands of differing type, then one should be
    required to explicitly convert everything to a single, uniform
    type and live with whatever the semantics of that type are.

    This needn't be as tedious or verbose as it sounds; with a
    little bit of type inference, it can be quite succinct while
    still being safe and correct.
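    (To make the C hazard concrete, here is a contrived little sketch of
    the usual arithmetic conversions biting; the signed operand is
    silently converted to unsigned before the comparison:

        #include <stdio.h>

        int
        main(void)
        {
                int balance = -1;
                unsigned int limit = 1;

                /* -1 is converted to a huge unsigned value, so the test fails. */
                if (balance < limit)
                        printf("balance is below the limit\n");
                else
                        printf("surprise: -1 is not less than 1u here\n");
                return 0;
        }

    No cast, no diagnostic required, and the comparison quietly does the
    wrong thing.)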

    Kotlin is rather picky about mixing signed and unsigned.

    var v: UInt = 16u

    v = v / 2

    gives an error.

    v = v / 2u
    v = v / 2.toUInt()

    works.

    I consider that rather picky.

    It's kind of annoying that it can't infer to use unsigned for
    the '2' in the first example. Rust, for example, does do that
    inference which makes most arithmetic very natural.

    Rust's type inference is extended Hindley-Milner, but it's not
    as expressive as, say, SML.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Tue Aug 19 17:26:59 2025
    From Newsgroup: comp.os.vms

    In article <10823ei$3pb8v$3@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/19/2025 9:01 AM, Simon Clubley wrote:
    On 2025-08-18, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    I happen to disagree with Simon's notion of what makes for
    robust programming, but to go to such an extreme as to suggest
    that writing code as if logical operators don't short-circuit
    is the same as not knowing the semantics of division is
    specious.


    That last one is an interesting example. I may not care about
    short circuiting, but I am _very_ _very_ aware of the combined
    unsigned integers and signed integers issues in C expressions. :-(

    It also affects how I look at the same issues in other languages.

    I've mentioned this before, but I think languages should give you
    unsigned integers by default, and you should have to ask for
    a signed integer if you really want one.

    "by default" sort of imply signedness being an attribute of
    same type.

    Why not just make it two different types with different names?

    I gathered Simon was referring to the type assigned to manifest
    constants like 0, 1, or 2 (or 1024, 4096, or whatever). What
    type is such a literal? Signed or unsigned? How wide is it?
    In C (for example) an unsuffixed decimal literal is `int` unless it's
    too big to fit into an `int`, in which case it gets a wider type (for
    hex literals, possibly an unsigned one). But how big is `int`?
    THAT depends on the target platform, and so it can be difficult
    to reason about the semantics of a program just from reading the
    code.
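
    (A small, contrived illustration of how much is left to the target
    here; nothing below goes beyond what <limits.h> already advertises:

        #include <limits.h>
        #include <stdio.h>

        int
        main(void)
        {
                /* The standard only promises INT_MAX >= 32767, so the
                 * width of `int` -- and therefore the type picked for a
                 * literal like 70000 -- depends on the platform. */
                printf("int: %zu bits, INT_MAX = %d\n",
                    sizeof(int) * CHAR_BIT, INT_MAX);
                printf("unsigned int: %zu bits, UINT_MAX = %u\n",
                    sizeof(unsigned int) * CHAR_BIT, UINT_MAX);
                return 0;
        }

    The same source, recompiled elsewhere, can mean something different.)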

    And despite the old admonition to make everything a symbolic
    constant, things like `2 * pi * r` are perfectly readable, and
    I'd argue that `TWO * pi * r` are less so.

    Whether we follow tradition and call them integer and cardinal
    or more modern style and call them int and uint is less important.

    I would argue that, at this point, there's little need for a
    generic "int" type anymore, and that types representing integers
    as understood by the machine should explicitly include both
    signedness and width. An exception may be something like,
    `size_t`, which is platform-dependent, but when transferred
    externally should be given an explicit size. A lot of the
    guesswork and folklore that goes into understanding the
    semantics of those things just disappears when you're explicit.
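
    Something like this, say, with the C99 <stdint.h> names (the struct
    and helper below are just made up for illustration):

        #include <stddef.h>
        #include <stdint.h>

        /* Explicit signedness and width: no guesswork about the
         * machine representation. */
        struct record_header {
                uint32_t magic;
                int64_t  timestamp;
                uint16_t flags;
        };

        /* A platform-dependent size_t is fine internally, but give it
         * an explicit width before it goes out on the wire. */
        static uint64_t
        external_length(size_t len)
        {
                return (uint64_t)len;
        }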

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris Townley@news@cct-net.co.uk to comp.os.vms on Tue Aug 19 21:08:59 2025
    From Newsgroup: comp.os.vms

    On 19/08/2025 17:07, Dan Cross wrote:
    In article <10822mn$3pb8v$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/18/2025 11:00 PM, Lawrence D'Oliveiro wrote:
    On Mon, 18 Aug 2025 21:49:29 -0400, Arne Vajhøj wrote:

    On 8/18/2025 7:59 PM, Lawrence D'Oliveiro wrote:

    On Mon, 18 Aug 2025 19:48:16 -0400, Arne Vajhøj wrote:

    On 8/18/2025 7:45 PM, Lawrence D'Oliveiro wrote:

    Parentheses are best used lightly -- where needed, maybe a little
    bit more than that, and that's it.

    Otherwise parenthesis clutter introduces its own obstacles to
    readability.

    That is not the mantra among people who try to prevent future
    errors.

    There are people who just repeat what they are told, aren't there,
    instead of learning from actual experience.

    It is the recommendation from people with actual experience.

    One book that recommends it is "The Practice of Programming".
    Brian Kernighan and Rob Pike.

    One wonders how much experience they really have, across how many
    different languages.

    (Wow.)

    Brian Kernighan and Rob Pike? A lot! :-)

    It may help to read what they actually wrote in TPoP; on page 6:

    |_Parenthesize to resolve ambiguity_. Parentheses specify
    |grouping and can be used to make the intent clear even when
    |they are not required. The inner parentheses in the previous
    |example are not necessary, but they don't hurt, either.
    |Seasoned programmers might omit them, because the relational
    |operators (< <= == != >= >) have higher precedence than the
    |logical operators (&& and ||).
    |
    |When mixing unrelated operators, though, it's a good idea to
    |parenthesize. C and its friends present pernicious precedence
    |problems, and it's easy to make a mistake.

    For reference, the "previous example" they mention here is:

    if ((block_id >= actblks) || (block_id < unblocks))

    Most C programmers would write this as,

    if (block_id >= actblks || block_id < unblocks)

    And Kernighan and Pike would be fine with that. It must be
    noted that, throughout the rest of TPoP, they rarely
    parenthesize as aggressively as they do in that one example.
    For example, on page 98, in the discussion of building a CSV
    file parser interface, they present a function called
    `advquoted` that contains this line of code:

        if (p[j] == '"' && p[++j] != '"') {
                ...
        }

    (Note this doesn't just omit parenthesis, but also makes use of
    the pre-increment operator _and_ boolean short-circuiting.)

    Pike is famous for brevity; his 1989 document, "Notes on
    Programming in C" is a model here: http://www.literateprogramming.com/pikestyle.pdf

    Even now, it's still an interesting read. I like my own code
    principles, as well, but of course, I'm biased:
    https://pub.gajendra.net/2016/03/code_principles

    - Dan C.


    Interestingly I have just come across an old bit of DEC Basic code:

    REPORT.ONLY = W.S4 = "R" ! Global flag

    I know what it does, but I would have rapped the knuckles of any programmer
    who did that on my shift!
    --
    Chris
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Wed Aug 20 01:11:05 2025
    From Newsgroup: comp.os.vms

    On Tue, 19 Aug 2025 10:54:30 -0400, Arne Vajhøj wrote:

    Two points related to the fact that they have a special operator instead
    of just using plain assignment.

    It's yet another mechanism for local scoping within an expression.
    Python doesn't allow statement blocks within expressions, but it has
    for-expressions where the for-variables are strictly local to the
    expression.

    To this has been added the walrus. But the walrus is (mostly)
    unnecessary; you can use a for-expression that iterates over just one
    value, to get the same effect.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.os.vms on Wed Aug 20 03:18:47 2025
    From Newsgroup: comp.os.vms

    On Tue, 19 Aug 2025 10:45:44 -0400, Arne Vajhøj wrote:

    On 8/18/2025 11:00 PM, Lawrence D'Oliveiro wrote:

    One wonders how much experience they really have, across how many
    different languages.

    Brian Kernighan and Rob Pike? A lot! :-)

    Maybe not Lisp, though. Else they would not be so sanguine about
    parentheses.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Simon Clubley@clubley@remove_me.eisner.decus.org-Earth.UFP to comp.os.vms on Wed Aug 20 12:08:21 2025
    From Newsgroup: comp.os.vms

    On 2025-08-19, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    In article <68a493ec$0$710$14726298@news.sunsite.dk>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:

    Kotlin is rather picky about mixing signed and unsigned.

    var v: UInt = 16u

    v = v / 2

    gives an error.

    v = v / 2u
    v = v / 2.toUInt()

    works.

    I consider that rather picky.

    I've not used Kotlin, but I consider that to be the really good type
    of picky. :-)


    It's kind of annoying that it can't infer to use unsigned for
    the '2' in the first example. Rust, for example, does do that
    inference which makes most arithmetic very natural.


    I actually consider that to be a good thing. The programmer is forced
    to think about what they have written and to change it to make those
    intentions explicit in the code. I like this.

    Simon.
    --
    Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
    Walking destinations on a map are further away than they appear.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Simon Clubley@clubley@remove_me.eisner.decus.org-Earth.UFP to comp.os.vms on Wed Aug 20 12:27:01 2025
    From Newsgroup: comp.os.vms

    On 2025-08-19, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    In article <1081rg2$3njqo$3@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:

    I write simple to understand code, not clever code, even when the
    problem it is solving is complex or has a lot of functionality
    built into the problem.

    I've found it makes code more robust and easier for others to read,
    especially when they may not have the knowledge you have when you
    wrote the original code.

    I'm curious how this expresses itself with respect to e.g. the
    short-circuiting thing, though. For instance, this might be
    common in C:

        struct something *p;
        ...
        if (p != NULL && p->ptr != NULL && something(*p->ptr)) {
                // Do something here.
        }

    This, of course, relies on short-circuiting to avoid
    dereferencing either `p` or `*p->ptr` if either is NULL. What
    is the alternative?

        if (p != NULL) {
                if (p->ptr != NULL) {
                        if (something(*p->ptr)) {
                                // Do something....
                        }
                }
        }

    If I dare say so, this is strictly worse because the code is now
    much more heavily indented.

    Indented properly (not as in your next example!) I find that very
    readable and is mostly how I would write it although I do use code
    like your return example when appropriate. This is my variant:

        if (p != NULL)
            {
            if (p->ptr != NULL)
                {
                if (something(*p->ptr))
                    {
                    // Do something....
                    }
                }
            }

    In case that doesn't survive a NNTP client, it is in Whitesmiths format:

    https://en.wikipedia.org/wiki/Indentation_style#Whitesmiths

    I like to spread out code vertically as I find it is easier to read.
    We are no longer in the era of VT50/52/100 terminals. :-)


    Ken Thompson used to avoid things like this by writing such code
    as:

        if (p != NULL)
        if (p->ptr != NULL)
        if (something(p->ptr)) {
                // Do something....
        }


    YUCK * 1000!!! That's horrible!!! :-)

    Which has a certain elegance to it, but automated code
    formatters inevitably don't understand it (and at this point,
    one really ought to be using an automated formatter whenever
    possible).

    An alternative might be to extract the conditional and put it
    into an auxiliary function, and use something similar to
    Dijkstra's guarded commands:

        void
        maybe_do_something(struct something *p)
        {
                if (p == NULL)
                        return;
                if (p->ptr == NULL)
                        return;
                if (!something(*p->ptr))
                        return;
                // Now do something.
        }

    I would argue that this is better than the previous example, and
    possibly on par with or better than the original: if nothing
    else, it gives a name to the operation. This is of course just
    a contrived example, so the name here is meaningless, but one
    hopes that in a real program a name with some semantic meaning
    would be chosen.


    There is one difference for me here however. _All_ single conditional
    statements as in the above example are placed in braces to help avoid
    the possibility of a later editing error.
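
    The classic failure mode (a sketch, with the printf calls standing in
    for whatever the real logging and cleanup would be) is something like:

        #include <stdio.h>

        int
        main(void)
        {
            int err = 0;

            if (err != 0)
                printf("logging the error\n");
                printf("cleaning up\n");  /* runs unconditionally, despite the indentation */
            return 0;
        }

    With braces there from the start, a later addition like that second
    printf cannot silently change the control flow.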

    Simon.
    --
    Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
    Walking destinations on a map are further away than they appear.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Simon Clubley@clubley@remove_me.eisner.decus.org-Earth.UFP to comp.os.vms on Wed Aug 20 12:34:19 2025
    From Newsgroup: comp.os.vms

    On 2025-08-19, Dan Cross <cross@spitfire.i.gajendra.net> wrote:

    Even now, it's still an interesting read. I like my own code
    principles, as well, but of course, I'm biased:
    https://pub.gajendra.net/2016/03/code_principles


    I've just read through that document and agree with everything there.
    I was especially amused by the "write for readability" and "it will be
    read many times" comments, as I use that wording myself.

    I am surprised you are picking me up on some things however, given
    the mindset expressed in that document. Perhaps your idea of readability
    is different from mine. :-)

    Simon.
    --
    Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
    Walking destinations on a map are further away than they appear.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Wed Aug 20 15:03:46 2025
    From Newsgroup: comp.os.vms

    In article <1082lks$3nmtt$2@dont-email.me>,
    Chris Townley <news@cct-net.co.uk> wrote:
    On 19/08/2025 17:07, Dan Cross wrote:
    In article <10822mn$3pb8v$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/18/2025 11:00 PM, Lawrence D'Oliveiro wrote:
    On Mon, 18 Aug 2025 21:49:29 -0400, Arne Vajhøj wrote:

    On 8/18/2025 7:59 PM, Lawrence D'Oliveiro wrote:

    On Mon, 18 Aug 2025 19:48:16 -0400, Arne Vajhøj wrote:

    On 8/18/2025 7:45 PM, Lawrence D'Oliveiro wrote:

    Parentheses are best used lightly -- where needed, maybe a little
    bit more than that, and that's it.

    Otherwise parenthesis clutter introduces its own obstacles to
    readability.

    That is not the mantra among people who try to prevent future
    errors.

    There are people who just repeat what they are told, aren't there,
    instead of learning from actual experience.

    It is the recommendation from people with actual experience.

    One book that recommends it is "The Practice of Programming".
    Brian Kernighan and Rob Pike.

    One wonders how much experience they really have, across how many
    different languages.

    (Wow.)

    Brian Kernighan and Rob Pike? A lot! :-)

    It may help to read what they actually wrote in TPoP; on page 6:

    |_Parenthesize to resolve ambiguity_. Parentheses specify
    |grouping and can be used to make the intent clear even when
    |they are not required. The inner parentheses in the previous
    |example are not necessary, but they don't hurt, either.
    |Seasoned programmers might omit them, because the relational
    |operators (< <= == != >= >) have higher precedence than the
    |logical operators (&& and ||).
    |
    |When mixing unrelated operators, though, it's a good idea to
    |parenthesize. C and its friends present pernicious precedence
    |problems, and it's easy to make a mistake.

    For reference, the "previous example" they mention here is:

    if ((block_id >= actblks) || (block_id < unblocks))

    Most C programmers would write this as,

    if (block_id >= actblks || block_id < unblocks)

    And Kernighan and Pike would be fine with that. It must be
    noted that, throughout the rest of TPoP, they rarely
    parenthesize as aggressively as they do in that one example.
    For example, on page 98, in the discussion of building a CSV
    file parser interface, they present a function called
    `advquoted` that contains this line of code:

        if (p[j] == '"' && p[++j] != '"') {
                ...
        }

    (Note this doesn't just omit parenthesis, but also makes use of
    the pre-increment operator _and_ boolean short-circuiting.)

    Pike is famous for brevity; his 1989 document, "Notes on
    Programming in C" is a model here:
    http://www.literateprogramming.com/pikestyle.pdf

    Even now, it's still an interesting read. I like my own code
    principles, as well, but of course, I'm biased:
    https://pub.gajendra.net/2016/03/code_principles

    - Dan C.


    Interestingly I have just come across an old bit of DEC Basic code:

    REPORT.ONLY = W.S4 = "R" ! Global flag

    I know what it does, but I would have rapped the knuckles of any programmer
    who did that on my shift!

    Not knowing DEC BASIC, am I correct in guessing that this
    assigns the boolean result of comparing `W.S4` with the string
    "R" to `REPORT.ONLY`?

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris Townley@news@cct-net.co.uk to comp.os.vms on Wed Aug 20 16:11:46 2025
    From Newsgroup: comp.os.vms

    On 20/08/2025 16:03, Dan Cross wrote:
    In article <1082lks$3nmtt$2@dont-email.me>,
    Chris Townley <news@cct-net.co.uk> wrote:
    On 19/08/2025 17:07, Dan Cross wrote:
    In article <10822mn$3pb8v$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/18/2025 11:00 PM, Lawrence D'Oliveiro wrote:
    On Mon, 18 Aug 2025 21:49:29 -0400, Arne Vajhøj wrote:

    On 8/18/2025 7:59 PM, Lawrence D'Oliveiro wrote:

    On Mon, 18 Aug 2025 19:48:16 -0400, Arne Vajhøj wrote:

    On 8/18/2025 7:45 PM, Lawrence D'Oliveiro wrote:

    Parentheses are best used lightly -- where needed, maybe a little
    bit more than that, and that's it.

    Otherwise parenthesis clutter introduces its own obstacles to
    readability.

    That is not the mantra among people who try to prevent future
    errors.

    There are people who just repeat what they are told, aren't there,
    instead of learning from actual experience.

    It is the recommendation from people with actual experience.

    One book that recommends it is "The Practice of Programming".
    Brian Kernighan and Rob Pike.

    One wonders how much experience they really have, across how many
    different languages.

    (Wow.)

    Brian Kernighan and Rob Pike? A lot! :-)

    It may help to read what they actually wrote in TPoP; on page 6:

    |_Parenthesize to resolve ambiguity_. Parentheses specify
    |grouping and can be used to make the intent clear even when
    |they are not required. The inner parentheses in the previous
    |example are not necessary, but they don't hurt, either.
    |Seasoned programmers might omit them, because the relational
    |operators (< <= == != >= >) have higher precedence than the
    |logical operators (&& and ||).
    |
    |When mixing unrelated operators, though, it's a good idea to
    |parenthesize. C and its friends present pernicious precedence
    |problems, and it's easy to make a mistake.

    For reference, the "previous example" they mention here is:

    if ((block_id >= actblks) || (block_id < unblocks))

    Most C programmers would write this as,

    if (block_id >= actblks || block_id < unblocks)

    And Kernighan and Pike would be fine with that. It must be
    noted that, throughout the rest of TPoP, they rarely
    parenthesize as aggressively as they do in that one example.
    For example, on page 98, in the discussion of building a CSV
    file parser interface, they present a function called
    `advquoted` that contains this line of code:

        if (p[j] == '"' && p[++j] != '"') {
                ...
        }

    (Note this doesn't just omit parenthesis, but also makes use of
    the pre-increment operator _and_ boolean short-circuiting.)

    Pike is famous for brevity; his 1989 document, "Notes on
    Programming in C" is a model here:
    http://www.literateprogramming.com/pikestyle.pdf

    Even now, it's still an interesting read. I like my own code
    principles, as well, but of course, I'm biased:
    https://pub.gajendra.net/2016/03/code_principles

    - Dan C.


    Interestingly I have just come across an old bit of DEC Basic code:

    REPORT.ONLY = W.S4 = "R" ! Global flag

    I know what it does, but I would have rapped the knuckles of any programmer
    who did that on my shift!

    Not knowing DEC BASIC, am I correct in guessing that this
    assigns the boolean result of comparing `W.S4` with the string
    "R" to `REPORT.ONLY`?

    - Dan C.


    Correct, but I would have surrounded the comparison with brackets, or
    used an IF statement.

    Not quite as bad as a colleague who found a source code file for a
    function that ended with an UNLESS Z.
    Z was a global not mentioned in the source file! Try searching a
    massive codebase for Z!
    --
    Chris
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Wed Aug 20 15:18:20 2025
    From Newsgroup: comp.os.vms

    In article <1084fca$afbj$3@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    On 2025-08-19, Dan Cross <cross@spitfire.i.gajendra.net> wrote:

    Even now, it's still an interesting read. I like my own code
    principles, as well, but of course, I'm biased:
    https://pub.gajendra.net/2016/03/code_principles


    I've just read through that document and agree with everything there.
    I was especially amused by the "write for readability" and "it will be
    read many times" comments, as I use that wording myself.

    I am surprised you are picking me up on some things however, given
    the mindset expressed in that document. Perhaps your idea of readability
    is different from mine. :-)

    Oh, I hope any criticism I offer doesn't come across as
    personal!

    One of the harder things about programming is how much matters
    of style, readability, and so on, are open to subjective
    interpretation, and how these vary from language to language.

    My general rule is, when working in any given language, find out
    if there is some dominant style preferred by the bulk of
    programmers working in that language, and adhere as closely to
    that as I can; a decent metric is looking at the standard
    library code for the language.

    If an automated formatter exists for the language, then use it
    aggressively, and with as close to a "stock" format as possible.

    This is just part of learning the idioms of that language.

    I think there is something of a generational divide when it
    comes to this kind of thing. I've found that folks who've been
    around longer tend to feel very strongly about using their
    preferred coding styles, whereas newer programmers tend to care
    much less, expecting that there is some tool to automate the
    process.

    When the Go language was being developed, one of the best things
    they did as a service to the community was declare that only
    programs as emitted by the standard `gofmt` tool were valid. It
    instantly eliminated entire categories of silly arguments. It
    is truly liberating.

    I suspect this was informed by the authors' experiences at
    Google. When I started working there, I absolutely _loathed_
    the style "guides", which are not so much guides as sets of hard
    rules, that were in use for each language. C++ in particular
    was just ugly. However, a few months in, I simply stopped
    noticing and thus I stopped caring, and moreover I couldn't deny
    the network effects of being able to change into any directory
    of G's massive source monorepo and read, essentially, any source
    file with no cognitive overhead based on style differences.

    When you are working in code bases measured in the BLOC, that's
    essential.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Wed Aug 20 15:51:14 2025
    From Newsgroup: comp.os.vms

    In article <1084drl$afbj$1@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    On 2025-08-19, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    In article <68a493ec$0$710$14726298@news.sunsite.dk>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:

    Kotlin is rather picky about mixing signed and unsigned.

    var v: UInt = 16u

    v = v / 2

    gives an error.

    v = v / 2u
    v = v / 2.toUInt()

    works.

    I consider that rather picky.

    I've not used Kotlin, but I consider that to be the really good type
    of picky. :-)


    It's kind of annoying that it can't infer to use unsigned for
    the '2' in the first example. Rust, for example, does do that
    inference which makes most arithmetic very natural.

    I actually consider that to be a good thing. The programmer is forced
    to think about what they have written and to change it to make those
    intentions explicit in the code. I like this.

    I think the point is, that in cases like this, the compiler
    enforces the explicit typing anyway: if the program compiles, it
    is well-typed. If it does not, then it is not. In that
    context, this level of explicitness adds little, if any,
    additional value.

    That the literal "2" is a different type than "2u" is
    interesting, however, and goes back to what you were saying
    earlier about default signedness. As a mathematical object,
    "2" is just a positive integer, but programming languages are
    not _really_ a mathematical notation, so the need to be explicit
    here makes sense from that perspective, I guess.

    In Rust, I might write this sequence as:

    let mut v = 16u32;
    v = v / 2;

    And the type inference mechanism would deduce that 2 should be
    treated as a `u32`. But I could just as easily write,

    let mut v = 16u32;
    v = v / 2u32;

    Which explicitly calls out that 2 as a `u32`.

    Is this really better, though? This is where I'd argue that
    matters of idiom come into play: this is not idiomatic usage in
    the language, and so it is probably not better, and maybe worse.

    Which begs the question: who do we write our programs for? Note
    that I don't ask "what", but rather, "who": sure, we write code
    to (hopefully) make the machine do something useful, but really
    we use these notations we call "programming languages" for
    ourselves and for each other. So what are the characteristics
    of that audience? In particular, what level of proficiency
    should I assume when I write code in a particular language? Do
    I write for programmers who are not familiar with the language
    and its idioms, or do I assume a greater level of
    familiarity? Personally, I err on the side of the latter, but
    then I'm often writing code in a domain where casual changes
    rarely work. Still, like all things, I think there's a balance
    here, and I am very tired of the mentality that programming
    _should_ be hard just because it should.

    - Dan C.

    Just as an aside about Rust, note that variables are immutable
    by default; in the above example, I can avoid `mut` and write:

    let v = 16u32;
    let v = v / 2;

    Which shadows the original binding, but gives the same effect as
    the version that uses `mut`. _This_ is subjective, and not a
    matter of idiom, however.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.os.vms on Wed Aug 20 15:57:50 2025
    From Newsgroup: comp.os.vms

    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/18/2025 8:48 PM, Dan Cross wrote:
    In article <68a3b980$0$713$14726298@news.sunsite.dk>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    But point is that one need to know something about the
    languages.

    Just picking an operator that "looks like" and hope it
    has similar semantics is no good.

    This seems like a very extreme example. There is a scale of
    knowledge when it comes to programming languages, from the basic
    ways in which one does various things like write loops or
    perform basic arithmetic, to the minutia of specific library or
    IO routines, with semantics of specific operators and how they
    combine probably somewhere in the middle.

    I happen to disagree with Simon's notion of what makes for
    robust programming, but to go to such an extreme as to suggest
    that writing code as if logical operators don't short-circuit
    is the same as not knowing the semantics of division is
    specious.

    There are 4 operations:
    - short circuiting and
    - non short circuiting and
    - integer division
    - floating point division

    There are a lot of different integer divisions:
    - "correct one", that is producing a fraction
    - rounding quotient up
    - rounding quotient down
    - rounding quotient to 0
    - rounding quotient at half with ties going to even
    - with positive remainder
    - with remainder of smallest absolute value and in case of ties positive

    There are various behaviours on overflow and in case of zero divisor.

    Both source and target languages have a way of doing those: operator,
    function or a more complex expression.

    I agree that the risk of someone not understanding "division"
    ways is much less than the risk of someone not understanding
    "and" ways.

    Well, division is much more complex than boolean operations, so
    harder to understand. OTOH details of division are harder
    to ignore than details of boolean operations.
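
    To pick just one of those differences in C terms: since C99, `/`
    truncates toward zero, so a floored quotient (what some other
    languages give you) has to be computed explicitly. A minimal sketch:

        #include <stdio.h>

        /* Floored division, assuming b != 0; written out only to
         * contrast with C's built-in truncation toward zero. */
        static int
        floor_div(int a, int b)
        {
                int q = a / b;          /* truncates toward zero (C99) */
                if ((a % b != 0) && ((a < 0) != (b < 0)))
                        q--;            /* adjust when the signs differ */
                return q;
        }

        int
        main(void)
        {
                printf("-7 / 2          = %d\n", -7 / 2);           /* -3 */
                printf("floor_div(-7,2) = %d\n", floor_div(-7, 2)); /* -4 */
                return 0;
        }

    The two already disagree whenever the signs of the operands differ
    and the division is not exact.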
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Wed Aug 20 16:18:53 2025
    From Newsgroup: comp.os.vms

    In article <1084eul$afbj$2@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    On 2025-08-19, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    In article <1081rg2$3njqo$3@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:

    I write simple to understand code, not clever code, even when the
    problem it is solving is complex or has a lot of functionality
    built into the problem.

    I've found it makes code more robust and easier for others to read,
    especially when they may not have the knowledge you have when you
    wrote the original code.

    I'm curious how this expresses itself with respect to e.g. the
    short-circuiting thing, though. For instance, this might be
    common in C:

        struct something *p;
        ...
        if (p != NULL && p->ptr != NULL && something(*p->ptr)) {
                // Do something here.
        }

    This, of course, relies on short-circuiting to avoid
    dereferencing either `p` or `*p->ptr` if either is NULL. What
    is the alternative?

        if (p != NULL) {
                if (p->ptr != NULL) {
                        if (something(*p->ptr)) {
                                // Do something....
                        }
                }
        }

    If I dare say so, this is strictly worse because the code is now
    much more heavily indented.

    Indented properly (not as in your next example!) I find that very
    readable and is mostly how I would write it although I do use code
    like your return example when appropriate. This is my variant:

        if (p != NULL)
            {
            if (p->ptr != NULL)
                {
                if (something(*p->ptr))
                    {
                    // Do something....
                    }
                }
            }

    In case that doesn't survive a NNTP client, it is in Whitesmiths format:

    https://en.wikipedia.org/wiki/Indentation_style#Whitesmiths

    Well...at least it's not GNU style. :-D

    Jokes aside, I've found this easy to turn into a mess, if the
    nested 'if's also have 'else's. In the context where we're
    talking about a replacement for short-circuiting booleans,
    that's not generally a consideration, though.

    I like to spread out code vertically as I find it is easier to read.
    We are no longer in the era of VT50/52/100 terminals. :-)

    Eh, vertical density has some cognitive advantages. Like all
    things, it can be taken to an extreme, however. Sometimes it's
    useful to burn vertical space for readability; sometimes that is
    done excessively, defeating the purpose.

    Ken Thompson used to avoid things like this by writing such code
    as:

        if (p != NULL)
        if (p->ptr != NULL)
        if (something(p->ptr)) {
                // Do something....
        }


    YUCK * 1000!!! That's horrible!!! :-)

    Heh. Well, tell it to Ken; in a lot of ways, he's responsible
    for much of the language (via Dennis Ritchie), so he can do
    more or less what he wants. :-)

    Which has a certain elegance to it, but automated code
    formatters inevitably don't understand it (and at this point,
    one really ought to be using an automated formatter whenever
    possible).

    An alternative might be to extract the conditional and put it
    into an auxiliary function, and use something similar to
    Dijkstra's guarded commands:

        void
        maybe_do_something(struct something *p)
        {
                if (p == NULL)
                        return;
                if (p->ptr == NULL)
                        return;
                if (!something(*p->ptr))
                        return;
                // Now do something.
        }

    I would argue that this is better than the previous example, and
    possibly on par with or better than the original: if nothing
    else, it gives a name to the operation. This is of course just
    a contrived example, so the name here is meaningless, but one
    hopes that in a real program a name with some semantic meaning
    would be chosen.

    There is one difference for me here however. _All_ single conditional
    statements as in the above example are placed in braces to help avoid
    the possibility of a later editing error.

    Oh, that's fine; I have no objection to that. On USENET, I'd
    omit that just for brevity, however. Here, I _do_ care about
    vertical space!

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris Townley@news@cct-net.co.uk to comp.os.vms on Wed Aug 20 18:05:32 2025
    From Newsgroup: comp.os.vms

    On 20/08/2025 13:27, Simon Clubley wrote:
    <big snip>> Indented properly (not as in your next example!) I find that
    very readable and is mostly how I would write it although I do use
    code like your return example when appropriate. This is my variant:

        if (p != NULL)
            {
            if (p->ptr != NULL)
                {
                if (something(*p->ptr))
                    {
                    // Do something....
                    }
                }
            }

    In case that doesn't survive a NNTP client, it is in Whitesmiths format:

    <snip>
    That is why I don't like Whitesmiths

    To me the curly braces should logically align with the preceding statement.

    When I first looked at the example, I immediately thought there is a
    missing closing brace, which of course there isn't.

    I also dislike putting the opening brace at the end of the preceding
    line, although I have had to in some cases. Probably a Microsoft invention
    --
    Chris
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Wed Aug 20 17:43:21 2025
    From Newsgroup: comp.os.vms

    In article <1084ojj$95kl$1@dont-email.me>,
    Chris Townley <news@cct-net.co.uk> wrote:
    On 20/08/2025 16:03, Dan Cross wrote:
    In article <1082lks$3nmtt$2@dont-email.me>,
    Chris Townley <news@cct-net.co.uk> wrote:
    [snip]
    Interestingly I have just come across an old bit of DEC Basic code:

    REPORT.ONLY = W.S4 = "R" ! Global flag

    I know what it does, but I would have rapped the knuckles of any programmer
    who did that on my shift!

    Not knowing DEC BASIC, am I correct in guessing that this
    assigns the boolean result of comparing `W.S4` with the string
    "R" to `REPORT.ONLY`?

    Correct, but I would have surrounded the comparison with brackets, or
    used an IF statement.

    Indeed!

    Not quite as bad as a colleague who found a source code file for a
    function that ended with an UNLESS Z.
    Z was a global not mentioned in the source file! Try searching a
    massive codebase for Z!

    Eek.

    In the Plan 9 kernel (https://9p.io/sys/doc/9.html), the symbol
    `m` refers to a pointer to the local "Mach" (basically, a
    per-CPU data structure).

    `up` is similarly a symbol that refers to the current process
    context on the current CPU.

    In the ARM version of that kernel, these are "extern register"
    variables understood by the compiler, that are preserved on exit
    from, and restored on entry to, the kernel. To understand
    exactly which register corresponds to which variable requires
    examining compiler internals and the way that the compiler
    (implicitly) allocates them to such variables. "Good luck!"

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Simon Clubley@clubley@remove_me.eisner.decus.org-Earth.UFP to comp.os.vms on Wed Aug 20 18:07:10 2025
    From Newsgroup: comp.os.vms

    On 2025-08-20, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    In article <1084fca$afbj$3@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    On 2025-08-19, Dan Cross <cross@spitfire.i.gajendra.net> wrote:

    Even now, it's still an interesting read. I like my own code
    principles, as well, but of course, I'm biased:
    https://pub.gajendra.net/2016/03/code_principles


    I've just read through that document and agree with everything there.
    I was especially amused by the "write for readability" and "it will be
    read many times" comments, as I use that wording myself.

    I am surprised you are picking me up on some things however, given
    the mindset expressed in that document. Perhaps your idea of readability
    is different from mine. :-)

    Oh, I hope any criticism I offer doesn't come across as
    personal!


    No, it absolutely does _not_ in any way.

    For me, it's exactly the same as a colleague noticing something
    in another colleague's code or design proposal and commenting on it.
    You would have to be extremely fragile to take _that_ personally. :-)

    Simon.
    --
    Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
    Walking destinations on a map are further away than they appear.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Wed Aug 20 18:08:10 2025
    From Newsgroup: comp.os.vms

    In article <1084v8t$95kl$2@dont-email.me>,
    Chris Townley <news@cct-net.co.uk> wrote:
    On 20/08/2025 13:27, Simon Clubley wrote:
    <big snip>
    Indented properly (not as in your next example!) I find that very
    readable and is mostly how I would write it although I do use code
    like your return example when appropriate. This is my variant:

        if (p != NULL)
            {
            if (p->ptr != NULL)
                {
                if (something(*p->ptr))
                    {
                    // Do something....
                    }
                }
            }

    In case that doesn't survive a NNTP client, it is in Whitesmiths format:

    <snip>
    That is why I don't like Whitesmiths

    To me the curly braces should logically align with the preceding statement.

    When I first looked at the example, I immediately thought there is a
    missing closing brace, which of course there isn't.

    Just give yourself over to a tool to manage formatting. It
    won't feel "perfect", certainly not at first, but in a very
    short time it will become indispensable.

    https://clang.llvm.org/docs/ClangFormat.html

    I also dislike putting the opening brace at the end of the preceding
    line, although I have had to in some cases. Probably a Microsoft invention

    As in,

        if (foo != 2) {
                /* whatever here... */
        }

    ?

    That demonstrably predates Microsoft. It's often called K&R
    style, or sometimes, the "One True Brace Style", because it was
    used in Kernighan and Ritchie's books on C; Ritchie, of course,
    being the primary architect and earliest implementer of the C
    language, based on a language called B (and "new B" or nb) by
    Ken Thompson, which itself was based on Martin Richards's BCPL.
    This was also the style used in the Unix kernel, which was
    probably the first really serious program written in C.

    As they put it in the first edition of K&R (1978):

    |Although C is quite permissive about statement positioning,
    |proper indentation and use of white space are critical in
    |making programs easy for people to read. The position of the
    |braces is less important; we have chosen one of several
    |popular styles. Pick a style that suits you, then use it
    |consistently.

    The intent was good here, but really, they should have just said
    "use this style" and been done with it. It'd have saved a lot
    of blocks on disks storing USENET messages arguing over where to
    put the braces. :-D

    Btw, some folks may find Henry Spencer's, "10 Commandments for C
    Programmers" amusing. Number 8 is relevant here: https://www.lysator.liu.se/c/ten-commandments.html

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Wed Aug 20 18:09:16 2025
    From Newsgroup: comp.os.vms

    In article <10852se$fica$1@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    On 2025-08-20, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    In article <1084fca$afbj$3@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    On 2025-08-19, Dan Cross <cross@spitfire.i.gajendra.net> wrote:

    Even now, it's still an interesting read. I like my own code
    principles, as well, but of course, I'm biased:
    https://pub.gajendra.net/2016/03/code_principles


    I've just read through that document and agree with everything there.
    I was especially amused by the "write for readability" and "it will be
    read many times" comments, as I use that wording myself.

    I am surprised you are picking me up on some things however, given
    the mindset expressed in that document. Perhaps your idea of readability
    is different from mine. :-)

    Oh, I hope any criticism I offer doesn't come across as
    personal!

    No, it absolutely does _not_ in any way.

    For me, it's exactly the same as a colleague noticing something
    in another colleague's code or design proposal and commenting on it.
    You would have to be extremely fragile to take _that_ personally. :-)

    Okay, great; that's 100% the intention. Thanks for confirming!

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Dave Froble@davef@tsoft-inc.com to comp.os.vms on Thu Aug 21 10:58:22 2025
    From Newsgroup: comp.os.vms

    On 8/20/2025 11:11 AM, Chris Townley wrote:
    On 20/08/2025 16:03, Dan Cross wrote:
    In article <1082lks$3nmtt$2@dont-email.me>,
    Chris Townley <news@cct-net.co.uk> wrote:
    On 19/08/2025 17:07, Dan Cross wrote:
    In article <10822mn$3pb8v$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/18/2025 11:00 PM, Lawrence D'Oliveiro wrote:
    On Mon, 18 Aug 2025 21:49:29 -0400, Arne Vajhøj wrote:

    On 8/18/2025 7:59 PM, Lawrence D'Oliveiro wrote:

    On Mon, 18 Aug 2025 19:48:16 -0400, Arne Vajhøj wrote:

    On 8/18/2025 7:45 PM, Lawrence D'Oliveiro wrote:

    Parentheses are best used lightly -- where needed, maybe a little
    bit more than that, and that's it.

    Otherwise parenthesis clutter introduces its own obstacles to
    readability.

    That is not the mantra among people who try to prevent future
    errors.

    There are people who just repeat what they are told, aren't there,
    instead of learning from actual experience.

    It is the recommendation from people with actual experience.

    One book that recommends it is "The Practice of Programming".
    Brian Kernighan and Rob Pike.

    One wonders how much experience they really have, across how many
    different languages.

    (Wow.)

    Brian Kernighan and Rob Pike? A lot! :-)

    It may help to read what they actually wrote in TPoP; on page 6:

    |_Parenthesize to resolve ambiguity_. Parentheses specify
    |grouping and can be used to make the intent clear even when
    |they are not required. The inner parentheses in the previous
    |example are not necessary, but they don't hurt, either.
    |Seasoned programmers might omit them, because the relational
    |operators (< <= == != >= >) have higher precedence than the
    |logical operators (&& and ||).
    |
    |When mixing unrelated operators, though, it's a good idea to
    |parenthesize. C and its friends present pernicious precedence
    |problems, and it's easy to make a mistake.

    For reference, the "previous example" they mention here is:

    if ((block_id >= actblks) || (block_id < unblocks))

    Most C programmers would write this as,

    if (block_id >= actblks || block_id < unblocks)

    And Kernighan and Pike would be fine with that. It must be
    noted that, throughout the rest of TPoP, they rarely
    parenthesize as aggressively as they do in that one example.
    For example, on page 98, in the discussion of building a CSV
    file parser interface, they present a function called
    `advquoted` that contains this line of code:

        if (p[j] == '"' && p[++j] != '"') {
                ...
        }

    (Note this doesn't just omit parenthesis, but also makes use of
    the pre-increment operator _and_ boolean short-circuiting.)

    Pike is famous for brevity; his 1989 document, "Notes on
    Programming in C" is a model here:
    http://www.literateprogramming.com/pikestyle.pdf

    Even now, it's still an interesting read. I like my own code
    principles, as well, but of course, I'm biased:
    https://pub.gajendra.net/2016/03/code_principles

    - Dan C.


    Interestingly I have just come across an old bit of DEC Basic code:

    REPORT.ONLY = W.S4 = "R" ! Global flag

    I know what it does, but I would have rapped the knuckles of any programmer
    who did that on my shift!

    Not knowing DEC BASIC, am I correct in guessing that this
    assigns the boolean result of comparing `W.S4` with the string
    "R" to `REPORT.ONLY`?

    - Dan C.


    Correct, but I would have surrounded the comparison with brackets, or used an IF
    statement.

    Not quite as bad as a colleague who found a source code file for a
    function that ended with an UNLESS Z.
    Z was a global not mentioned in the source file! Try searching a
    massive codebase for Z!


    That statement could make a person think for a while. Not a bad thing,
    being able to understand something like that. However, I've always
    been a big fan of clarity, so go ahead and whack some knuckles.
    --
    David Froble Tel: 724-529-0450
    Dave Froble Enterprises, Inc. E-Mail: davef@tsoft-inc.com
    DFE Ultralights, Inc.
    170 Grimplin Road
    Vanderbilt, PA 15486
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Sun Aug 24 23:24:05 2025
    From Newsgroup: comp.os.vms

    In article <108dlgm$2fi6h$3@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/20/2025 11:51 AM, Dan Cross wrote:
    In article <1084drl$afbj$1@dont-email.me>,
    Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
    On 2025-08-19, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    In article <68a493ec$0$710$14726298@news.sunsite.dk>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:

    Kotlin is rather picky about mixing signed and unsigned.

    var v: UInt = 16u

    v = v / 2

    gives an error.

    v = v / 2u
    v = v / 2.toUInt()

    works.

    I consider that rather picky.

    I've not used Kotlin, but I consider that to be the really good type
    of picky. :-)

    It's kind of annoying that it can't infer to use unsigned for
    the '2' in the first example. Rust, for example, does do that
    inference which makes most arithmetic very natural.

    I actually consider that to be a good thing. The programmer is forced
    to think about what they have written and to change it to make those
    intentions explicit in the code. I like this.

    I think the point is, that in cases like this, the compiler
    enforces the explicit typing anyway: if the program compiles, it
    is well-typed. If it does not, then it is not. In that
    context, this level of explicitness adds little, if any,
    additional value.

    That the literal "2" is a different type than "2u" is
    interesting, however, and goes back to what you were saying
    earlier about default signedness. As a mathematical object,
    "2" is just a positive integer, but programming languages are
    not _really_ a mathematical notation, so the need to be explicit
    here makes sense from that perspective, I guess.

    In Rust, I might write this sequence as:

    let mut v = 16u32;
    v = v / 2;

    And the type inference mechanism would deduce that 2 should be
    treated as a `u32`. But I could just as easily write,

    let mut v = 16u32;
    v = v / 2u32;

    Which explicitly calls out that 2 as a `u32`.

    Is this really better, though? This is where I'd argue that
    matters of idiom come into play: this is not idiomatic usage in
    the language, and so it is probably not better, and maybe worse.

    The practical difference for specific code is likely zero.

    But there is a difference in language principles and the
    confidence the developer can have in it.

    A rule that there is never any implicit conversion or
    literal inference no matter the context is simple to
    understand and gives confidence.

    That conflates two separate things, and actually makes two
    rules, not one.

    Implicit type conversion is subtle, with often-confusing rules,
    and programmers frequently mess it up; I agree. Even
    programmers who feel very confident in the rules are not immune
    here.

    However, type inference is an entirely different matter. The
    specific context is Rust, and there is no matter of "confidence"
    at play here; either the inference rules work, and the program
    type checks, or the inference engine in the compiler can't
    deduce the appropriate types (or makes a mistake) and the
    program does not type check and fails to compile.

    And that's the critical part: the program is either properly
    typed or it is not; unlike weakly typed languages, there is no
    middle ground. And if it is not properly typed, it will not
    compile; period, end of story.

    So if the program compiles, then that's all the "confidence" the
    programmer needs; there's no need to decorate every literal with
    an explicit type annotation, which is just noisy and clutters up
    the program with extraneous detail.

    Exceptions even in cases where it does not matter adds
    the complexity of understanding when the exceptions apply
    and why they do not matter. Complexity that developers
    would rather avoid.

    There are no exceptions. The program is either well-typed, in
    which case (barring any other errors) the program compiles. Or
    it is not, and the program fails to compile: again, there is no
    middle ground.

    Most experienced Rust programmers who saw a bunch of explicit
    type annotations on literals, when the type system could
    otherwise easily infer them, would rightly find that
    non-idiomatic, and thus confusing, and wonder what was so
    special about the program that literals had to be so annotated.

    _That_ adds complexity where there need not be any, and thus
    increases cognitive load, which is never a desirable property
    for a maintainable program.

    This is why matters of idiom in a language are important.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Sun Aug 24 23:27:11 2025
    From Newsgroup: comp.os.vms

    In article <108dlq4$2fi6h$4@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/19/2025 1:26 PM, Dan Cross wrote:
    In article <10823ei$3pb8v$3@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/19/2025 9:01 AM, Simon Clubley wrote:
    On 2025-08-18, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
    I happen to disagree with Simon's notion of what makes for
    robust programming, but to go to such an extreme as to suggest
    that writing code as if logical operators don't short-circuit
    is the same as not knowing the semantics of division is
    specious.

    That last one is an interesting example. I may not care about
    short circuiting, but I am _very_ _very_ aware of the combined
    unsigned integers and signed integers issues in C expressions. :-(

    It also affects how I look at the same issues in other languages.

    I've mentioned this before, but I think languages should give you
    unsigned integers by default, and you should have to ask for
    a signed integer if you really want one.

    "by default" sort of imply signedness being an attribute of
    same type.

    Why not just make it two different types with different names?

    Whether we follow tradition and call them integer and cardinal
    or more modern style and call them int and uint is less important.

    I would argue that, at this point, there's little need for a
    generic "int" type anymore, and that types representing integers
    as understood by the machine should explicitly include both
    signedness and width. An exception may be something like,
    `size_t`, which is platform-dependent, but when transferred
    externally should be given an explicit size. A lot of the
    guesswork and folklore that goes into understanding the
    semantics of those things just disappears when you're explicit.

    The integer types should have well defined width.

    And they could also be called int32 and uint32.

    That seems to be in fashion in low level languages
    competing with C.

    Many higher level languages just define that int is 32 bit,
    but don't show it in the name.

    If by "many higher level languages" you mean languages in the
    JVM and CLR ecosystem, then sure, I guess so. But it's not
    universal, and I don't see how it's an improvement.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Sun Aug 24 23:32:32 2025
    From Newsgroup: comp.os.vms

    In article <108dm7k$2fi6h$5@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/19/2025 1:26 PM, Dan Cross wrote:
    And despite the old admonition to make everything a symbolic
    constant, things like `2 * pi * r` are perfectly readable, and
    I'd argue that `TWO * pi * r` are less so.

    I would say that TWO and 2 are the same regarding readability.

    The problem with TWO is not readability, but lack of purpose.

    Hence why it's poor from a readability perspective: it adds
    nothing, just a symbolic, alphanumeric label for a number, but
    that number is perfectly understandable, and in context, taken
    from a universal mathematical truth.

    There are two good reasons to introduce symbolic names for constants:
    1) The name can be more self-documenting than a numeric value
    2) If the constant is used in multiple places, then having a symbolic
    name makes it easier to change the value

    But neither applies.

    TWO does not provide more information than 2.

    Just so.

    And it would be very unwise to change the value of TWO
    to something different from 2.

    Indeed. Also, in a language like Rust, where all constants
    must have a type, a name like "TWO" is too generic; supposing I
    wanted to use such a constant in multiple contexts involving
    different types, this wouldn't work with the type inference
    rules, and working around that would be unidiomatic.
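    As a throwaway C illustration of those two criteria (the constant
    names are invented): a value that documents intent and might
    plausibly be tuned earns a name, while a name for 2 adds nothing
    to 2 * pi * r.

    #include <stdio.h>

    #define MAX_RETRIES 5   /* self-documenting, and plausibly tunable */
    #define TWO         2   /* says nothing that the literal 2 doesn't */

    int main(void)
    {
        double pi = 3.141592653589793;
        double r  = 1.5;

        printf("retry budget:  %d\n", MAX_RETRIES);
        printf("circumference: %f\n", 2 * pi * r);    /* reads fine as-is */
        printf("circumference: %f\n", TWO * pi * r);  /* no clearer */
        return 0;
    }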

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Sun Aug 24 19:52:52 2025
    From Newsgroup: comp.os.vms

    On 8/24/2025 7:27 PM, Dan Cross wrote:
    In article <108dlq4$2fi6h$4@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/19/2025 1:26 PM, Dan Cross wrote:
    In article <10823ei$3pb8v$3@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    Whether we follow tradition and call them integer and cardinal
    or more modern style and call them int and uint is less important.

    I would argue that, at this point, there's little need for a
    generic "int" type anymore, and that types representing integers
    as understood by the machine should explicitly include both
    signedness and width. An exception may be something like,
    `size_t`, which is platform-dependent, but when transferred
    externally should be given an explicit size. A lot of the
    guesswork and folklore that goes into understanding the
    semantics of those things just disappears when you're explicit.

    The integer types should have well defined width.

    And they could also be called int32 and uint32.

    That seems to be in fashion in low level languages
    competing with C.

    Many higher level languages just define that int is 32 bit,
    but don't show it in the name.

    If by "many higher level languages" you mean languages in the
    JVM and CLR ecosystem, then sure, I guess so. But it's not
    universal, and I don't see how it's an improvement.

    Those are two huge groups of languages with a pretty big
    market share in business applications.

    Delphi provides both flavors: shortint/smallint/integer
    and int8/int16/int32, byte/word/cardinal and
    uint8/uint16/uint32. I believe the first are the most
    widely used.

    (64 bit is just int64 and uint64, because somehow they
    fucked up longint and made it 32 bit on 32 bit and 64 bit
    Windows but 64 bit on 64 bit *nix)

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Mon Aug 25 01:24:23 2025
    From Newsgroup: comp.os.vms

    On Sun, 24 Aug 2025 19:52:52 -0400, Arne Vajhøj wrote:

    (... somehow they fucked up longint and made it 32 bit on 32 bit and
    64 bit Windows but 64 bit on 64 bit *nix)

    It was *Microsoft* that "fucked up longint", and nobody else. Just to be clear.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Sun Aug 24 21:27:08 2025
    From Newsgroup: comp.os.vms

    On 8/24/2025 9:24 PM, Lawrence D'Oliveiro wrote:
    On Sun, 24 Aug 2025 19:52:52 -0400, Arne Vajhøj wrote:
    (... somehow they fucked up longint and made it 32 bit on 32 bit and
    64 bit Windows but 64 bit on 64 bit *nix)

    It was *Microsoft* that "fucked up longint", and nobody else. Just to be clear.

    I don't think you can blame MS for Delphi.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.os.vms on Mon Aug 25 02:05:37 2025
    From Newsgroup: comp.os.vms

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sun, 24 Aug 2025 19:52:52 -0400, Arne Vajhøj wrote:

    (... somehow they fucked up longint and made it 32 bit on 32 bit and
    64 bit Windows but 64 bit on 64 bit *nix)

    It was *Microsoft* that "fucked up longint", and nobody else. Just to be clear.

    You mean long. But note that DEC made a half step first: Alpha is
    a 64-bit machine and DEC made long 32-bit on Alpha. DEC had reasons
    for doing this, and Microsoft had reasons to make long 32-bit on
    64-bit Windows. One can argue whether those reasons were good enough,
    but it was not a random decision.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.vms on Mon Aug 25 02:22:29 2025
    From Newsgroup: comp.os.vms

    On Mon, 25 Aug 2025 02:05:37 -0000 (UTC), Waldek Hebisch wrote:

    But note that DEC made a half step first: Alpha is a 64-bit machine and
    DEC made long 32-bit on Alpha.

    On DEC Unix, as on every other *nix, "int" was 32 bits and "long" was 64
    bits. This applied on Alpha, too.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Mon Aug 25 11:07:10 2025
    From Newsgroup: comp.os.vms

    On 8/24/2025 10:22 PM, Lawrence D'Oliveiro wrote:
    On Mon, 25 Aug 2025 02:05:37 -0000 (UTC), Waldek Hebisch wrote:
    But note that DEC made a half step first: Alpha is a 64-bit machine and
    DEC made long 32-bit on Alpha.

    On DEC Unix, as on every other *nix, "int" was 32 bits and "long" was 64
    bits. This applied on Alpha, too.

    Given the group, Alpha with no OS specification is likely to
    mean VMS.

    VMS and Tru64 (DEC OSF/1 -> Digital Unix -> Compaq Tru64) were
    treated very differently on Alpha.

    On VMS Alpha the goal was backwards compatibility with VMS VAX,
    so long stayed 32 bit.

    On Tru64 there was no strong requirement for compatibility with
    Ultrix VAX and Ultrix MIPS, so long was made 64 bit.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Fri Aug 29 13:17:32 2025
    From Newsgroup: comp.os.vms

    In article <108g8kk$33isk$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/24/2025 7:27 PM, Dan Cross wrote:
    In article <108dlq4$2fi6h$4@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/19/2025 1:26 PM, Dan Cross wrote:
    In article <10823ei$3pb8v$3@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    Whether we follow tradition and call them integer and cardinal
    or more modern style and call them int and uint is less important.

    I would argue that, at this point, there's little need for a
    generic "int" type anymore, and that types representing integers
    as understood by the machine should explicitly include both
    signedness and width. An exception may be something like,
    `size_t`, which is platform-dependent, but when transferred
    externally should be given an explicit size. A lot of the
    guesswork and folklore that goes into understanding the
    semantics of those things just disappears when you're explicit.

    The integer types should have well defined width.

    And they could also be called int32 and uint32.

    That seems to be in fashion in low level languages
    competing with C.

    Many higher level languages just define that int is 32 bit,
    but don't show it in the name.

    If by "many higher level languages" you mean languages in the
    JVM and CLR ecosystem, then sure, I guess so. But it's not
    universal, and I don't see how it's an improvement.

    Those are two huge groups of languages with a pretty big
    market share in business applications.

    Market share is not the same as influence, and while the JVM/CLR
    languages _do_ have a lot of users, that does not imply that all
    are good languages. In fact, only a handful of languages in
    each family have any significant adoption, and I don't think PL
    designers are mining them for much inspiration these days.

    Again, not universal, nor really an improvement over just using
    explicitly sized types.

    Delphi provide both flavors. shortint/smallint/integer
    and int8/int16/int32, byte/word/cardinal and
    uint8/uint16/uint32. I believe the first are the most
    widely used.

    The older names feel like they're very much looking backwards in
    time.

    (64 bit is just int64 and uint64, because somehow they
    fucked up longint and made it 32 bit on 32 bit and 64 bit
    Windows but 64 bit on 64 bit *nix)

    I'd blame C for that. I've heard some folks suggest that the
    real mistake was not making `long` 64 bits on the first VAX C
    compiler, which admittedly may have already been too late (the
    Interdata compilers for the 7/32 and 8/32 Unix ports targeted a
    32-bit machine very early on).

    John Mashey et al got a lot of this mess fixed up with
    `<inttypes.h>` in the 1990s as they were pushing the 64-bit
    adoption. But had types been annotated with widths very early
    on, most of these problems wouldn't have existed in the first
    place.
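    For what it's worth, a small C sketch of what that cleanup buys
    you today: <stdint.h> supplies the width-annotated types and
    <inttypes.h> the matching printf format macros, so nothing hinges
    on what `int` or `long` happen to be on a given platform.

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t  narrow = -42;                /* exactly 32 bits, signed */
        uint64_t wide   = UINT64_C(1) << 40;  /* exactly 64 bits, unsigned */

        /* PRId32/PRIu64 expand to the correct length modifiers no matter
           what the platform's int and long sizes are. */
        printf("narrow = %" PRId32 ", wide = %" PRIu64 "\n", narrow, wide);
        return 0;
    }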

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Fri Aug 29 15:52:04 2025
    From Newsgroup: comp.os.vms

    On 8/29/2025 9:17 AM, Dan Cross wrote:
    In article <108g8kk$33isk$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/24/2025 7:27 PM, Dan Cross wrote:
    In article <108dlq4$2fi6h$4@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/19/2025 1:26 PM, Dan Cross wrote:
    In article <10823ei$3pb8v$3@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    Whether we follow tradition and call them integer and cardinal
    or more modern style and call them int and uint is less important.

    I would argue that, at this point, there's little need for a
    generic "int" type anymore, and that types representing integers
    as understood by the machine should explicitly include both
    signedness and width. An exception may be something like,
    `size_t`, which is platform-dependent, but when transferred
    externally should be given an explicit size. A lot of the
    guesswork and folklore that goes into understanding the
    semantics of those things just disappears when you're explicit.

    The integer types should have well defined width.

    And they could also be called int32 and uint32.

    That seems to be in fashion in low level languages
    competing with C.

    Many higher level languages just define that int is 32 bit,
    but don't show it in the name.

    If by "many higher level languages" you mean languages in the
    JVM and CLR ecosystem, then sure, I guess so. But it's not
    universal, and I don't see how it's an improvement.

    Those are two huge groups of languages with a pretty big
    market share in business applications.

    Market share is not the same as influence, and while the JVM/CLR
    languages _do_ have a lot of users, that does not imply that all
    are good languages. In fact, only a handful of languages in
    each family have any significant adoption, and I don't think PL
    designers are mining them for much inspiration these days.

    Again, not universal, nor really an improvement over just using
    explicitly sized types.

    It is a huge domain that is totally dominated by two approaches:
    * no need for declaring variables
    * declaring variables required, but type names not including width
      (despite width being well defined for the type)

    And market share does matter, as it sets developers' expectations.

    Delphi provide both flavors. shortint/smallint/integer
    and int8/int16/int32, byte/word/cardinal and
    uint8/uint16/uint32. I believe the first are the most
    widely used.

    The older names feel like they're very much looking backwards in
    time.

    Developers tend to like what they know.

    (64 bit is just int64 and uint64, because somehow they
    fucked up longint and made it 32 bit on 32 bit and 64 bit
    Windows but 64 bit on 64 bit *nix)

    I'd blame C for that.

    Delphi is not C.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Fri Aug 29 21:38:02 2025
    From Newsgroup: comp.os.vms

    In article <108t0d4$249vm$11@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/29/2025 9:17 AM, Dan Cross wrote:
    In article <108g8kk$33isk$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/24/2025 7:27 PM, Dan Cross wrote:
    In article <108dlq4$2fi6h$4@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/19/2025 1:26 PM, Dan Cross wrote:
    In article <10823ei$3pb8v$3@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    Whether we follow tradition and call them integer and cardinal
    or more modern style and call them int and uint is less important.
    I would argue that, at this point, there's little need for a
    generic "int" type anymore, and that types representing integers
    as understood by the machine should explicitly include both
    signedness and width. An exception may be something like,
    `size_t`, which is platform-dependent, but when transferred
    externally should be given an explicit size. A lot of the
    guesswork and folklore that goes into understanding the
    semantics of those things just disappears when you're explicit.

    The integer types should have well defined width.

    And they could also be called int32 and uint32.

    That seems to be in fashion in low level languages
    competing with C.

    Many higher level languages just define that int is 32 bit,
    but don't show it in the name.

    If by "many higher level languages" you mean languages in the
    JVM and CLR ecosystem, then sure, I guess so. But it's not
    universal, and I don't see how it's an improvement.

    Those are two huge groups of languages with a pretty big
    market share in business applications.

    Market share is not the same as influence, and while the JVM/CLR
    languages _do_ have a lot of users, that does not imply that all
    are good languages. In fact, only a handful of languages in
    each family have any significant adoption, and I don't think PL
    designers are mining them for much inspiration these days.

    Again, not universal, nor really an improvement over just using
    explicitly sized types.

    It is a huge domain that is totally dominated by two approaches:
    [snip]

    So? You referred to "many higher level languages". That is
    qualitatively different than "a small number of languages with a
    huge share of the market."

    Delphi provide both flavors. shortint/smallint/integer
    and int8/int16/int32, byte/word/cardinal and
    uint8/uint16/uint32. I believe the first are the most
    widely used.

    The older names feel like they're very much looking backwards in
    time.

    Developers tend to like what they know.

    (64 bit is just int64 and uint64, because somehow they
    fucked up longint and made it 32 bit on 32 bit and 64 bit
    Windows but 64 bit on 64 bit *nix)

    I'd blame C for that.

    Delphi is not C.

    Obviously.

    But it would be foolish to assume that they weren't influenced
    by matters of compatibility with C (or more specifically C++)
    here, particularly given the history of Delphi as a language.

    Even the name gives it away ("longint").

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Fri Aug 29 19:03:30 2025
    From Newsgroup: comp.os.vms

    On 8/29/2025 5:38 PM, Dan Cross wrote:
    In article <108t0d4$249vm$11@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/29/2025 9:17 AM, Dan Cross wrote:
    In article <108g8kk$33isk$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    Delphi provide both flavors. shortint/smallint/integer
    and int8/int16/int32, byte/word/cardinal and
    uint8/uint16/uint32. I believe the first are the most
    widely used.

    The older names feel like they're very much looking backwards in
    time.

    Developers tend to like what they know.

    (64 bit is just int64 and uint64, because somehow they
    fucked up longint and made it 32 bit on 32 bit and 64 bit
    Windows but 64 bit on 64 bit *nix)

    I'd blame C for that.

    Delphi is not C.

    Obviously.

    But it would be foolish to assume that they weren't influenced
    by matters of compatibility with C (or more specifically C++)
    here, particularly given the history of Delphi as a language.

    Even the name gives it away ("longint").

    That was also Lawrence's guess.

    But the hypothesis that they wanted to follow
    C/C++ is obviously not true.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Fri Aug 29 19:06:34 2025
    From Newsgroup: comp.os.vms

    On 8/29/2025 5:38 PM, Dan Cross wrote:
    In article <108t0d4$249vm$11@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/29/2025 9:17 AM, Dan Cross wrote:
    In article <108g8kk$33isk$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/24/2025 7:27 PM, Dan Cross wrote:
    In article <108dlq4$2fi6h$4@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/19/2025 1:26 PM, Dan Cross wrote:
    In article <10823ei$3pb8v$3@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    Whether we follow tradition and call them integer and cardinal
    or more modern style and call them int and uint is less important.
    I would argue that, at this point, there's little need for a
    generic "int" type anymore, and that types representing integers
    as understood by the machine should explicitly include both
    signedness and width. An exception may be something like,
    `size_t`, which is platform-dependent, but when transferred
    externally should be given an explicit size. A lot of the
    guesswork and folklore that goes into understanding the
    semantics of those things just disappears when you're explicit.

    The integer types should have well defined width.

    And they could also be called int32 and uint32.

    That seems to be in fashion in low level languages
    competing with C.

    Many higher level languages just define that int is 32 bit,
    but don't show it in the name.

    If by "many higher level languages" you mean languages in the
    JVM and CLR ecosystem, then sure, I guess so. But it's not
    universal, and I don't see how it's an improvement.

    Those are two huge groups of languages with a pretty big
    market share in business applications.

    Market share is not the same as influence, and while the JVM/CLR
    languages _do_ have a lot of users, that does not imply that all
    are good languages. In fact, only a handful of languages in
    each family have any significant adoption, and I don't think PL
    designers are mining them for much inspiration these days.

    Again, not universal, nor really an improvement over just using
    explicitly sized types.

    It is a huge domain that is totally dominated by two approaches:
    [snip]

    So? You referred to "many higher level languages". That is
    qualitatively different than "a small number of languages with a
    huge share of the market."

    Yes - that is two different statements.

    But they are both true.

    And the second qualifies the first, in the sense that the
    many are actually languages that matter, not pure exotics.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Sat Aug 30 01:21:10 2025
    From Newsgroup: comp.os.vms

    In article <108tbk2$29q30$2@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/29/2025 5:38 PM, Dan Cross wrote:
    In article <108t0d4$249vm$11@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/29/2025 9:17 AM, Dan Cross wrote:
    In article <108g8kk$33isk$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    Delphi provide both flavors. shortint/smallint/integer
    and int8/int16/int32, byte/word/cardinal and
    uint8/uint16/uint32. I believe the first are the most
    widely used.

    The older names feel like they're very much looking backwards in
    time.

    Developers tend to like what they know.

    (64 bit is just int64 and uint64, because somehow they
    fucked up longint and made it 32 bit on 32 bit and 64 bit
    Windows but 64 bit on 64 bit *nix)

    I'd blame C for that.

    Delphi is not C.

    Obviously.

    But it would be foolish to assume that they weren't influenced
    by matters of compatibility with C (or more specifically C++)
    here, particularly given the history of Delphi as a language.

    Even the name gives it away ("longint").

    That was also Lawrence's guess.

    I plonked that guy ages ago, so I don't see his responses. I
    expect he knows even less about Delphi than

    But the hypothesis that they wanted to follow
    C/C++ is obviously not true.

    You'll need to qualify that statement more before its veracity
    is even within reach of being ascertained.

    Obviously they were not following C and C++ in the sense that
    the syntax (and much of the semantics) are based on Pascal, not
    C. Clearly they wanted things like fundamental integral types
    to line up with existing C code for calls across an FFI
    boundary. One merely need look up the history of the language
    to see that.

    And of course these things evolved over time. Wirth's own
    languages after Pascal exhibited semantics more closely
    resembling those of C than Pascal. For instance, arrays in
    Oberon do not retain their size as a fundamental aspect of their
    type, one of the big complaints from Kernighan's famous critique
    of Pascal: http://doc.cat-v.org/bell_labs/why_pascal/why_pascal_is_not_my_favorite_language.pdf

    This is arguably a bug in both Oberon and C, but no one had
    really discovered how to put slices a la Fortran into a systems
    language at the level of C or Oberon yet; possibly because they
    were tainted by APL and ALGOL 68.
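    A minimal C example of that complaint, as it applies to C: the
    declared size in an array parameter is silently discarded, so the
    callee cannot recover it.

    #include <stdio.h>

    /* "int a[4]" in a parameter list is adjusted to "int *a"; the 4
       is not part of the parameter's type. */
    static void callee(int a[4])
    {
        printf("inside:  sizeof a = %zu (just a pointer)\n", sizeof a);
    }

    int main(void)
    {
        int a[4] = {1, 2, 3, 4};
        printf("outside: sizeof a = %zu (the whole array)\n", sizeof a);
        callee(a);
        return 0;
    }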

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Sat Aug 30 01:30:12 2025
    From Newsgroup: comp.os.vms

    In article <108tbpq$29q30$3@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/29/2025 5:38 PM, Dan Cross wrote:
    In article <108t0d4$249vm$11@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/29/2025 9:17 AM, Dan Cross wrote:
    In article <108g8kk$33isk$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/24/2025 7:27 PM, Dan Cross wrote:
    In article <108dlq4$2fi6h$4@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/19/2025 1:26 PM, Dan Cross wrote:
    In article <10823ei$3pb8v$3@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    Whether we follow tradition and call them integer and cardinal
    or more modern style and call them int and uint is less important.
    I would argue that, at this point, there's little need for a
    generic "int" type anymore, and that types representing integers
    as understood by the machine should explicitly include both
    signedness and width. An exception may be something like,
    `size_t`, which is platform-dependent, but when transferred
    externally should be given an explicit size. A lot of the
    guesswork and folklore that goes into understanding the
    semantics of those things just disappears when you're explicit.
    The integer types should have well defined width.

    And they could also be called int32 and uint32.

    That seems to be in fashion in low level languages
    competing with C.

    Many higher level languages just define that int is 32 bit,
    but don't show it in the name.

    If by "many higher level languages" you mean languages in the
    JVM and CLR ecosystem, then sure, I guess so. But it's not
    universal, and I don't see how it's an improvement.

    Those are two huge groups of languages with a pretty big
    market share in business applications.

    Market share is not the same as influence, and while the JVM/CLR
    languages _do_ have a lot of users, that does not imply that all
    are good languages. In fact, only a handful of languages in
    each family have any significant adoption, and I don't think PL
    designers are mining them for much inspiration these days.

    Again, not universal, nor really an improvement over just using
    explicitly sized types.

    It is a huge domain that is totally dominated by two approaches:
    [snip]

    So? You referred to "many higher level languages". That is
    qualitatively different than "a small number of languages with a
    huge share of the market."

    Yes - that is two different statements.

    Correct...

    But they are both true.

    ...but irrelevant. I was referring to your specific statement;
    not making any point about the other.

    If you'd like to make a point about the popularity of languages,
    by all means do so. But moving the goal posts by conflating
    dissimilar things isn't really useful.

    And the second qualifies the first in the sense that the
    many are actually some that matter not pure exotic.

    Nope. Only a few CLR/JVM languages actually matter as far as
    programming language design goes. I'd say Java, Clojure, Scala,
    Kotlin, C#, and F#/F* are about it. I would have previously
    argued that M# was important in this sense as well, but Midori
    was canceled and it was basically a dialect of C# anyway.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Fri Sep 5 19:30:32 2025
    From Newsgroup: comp.os.vms

    On 8/29/2025 9:21 PM, Dan Cross wrote:
    In article <108tbk2$29q30$2@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/29/2025 5:38 PM, Dan Cross wrote:
    In article <108t0d4$249vm$11@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/29/2025 9:17 AM, Dan Cross wrote:
    In article <108g8kk$33isk$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    Delphi provide both flavors. shortint/smallint/integer
    and int8/int16/int32, byte/word/cardinal and
    uint8/uint16/uint32. I believe the first are the most
    widely used.

    The older names feel like they're very much looking backwards in
    time.

    Developers tend to like what they know.

    (64 bit is just int64 and uint64, because somehow they
    fucked up longint and made it 32 bit on 32 bit and 64 bit
    Windows but 64 bit on 64 bit *nix)

    I'd blame C for that.

    Delphi is not C.

    Obviously.

    But it would be foolish to assume that they weren't influenced
    by matters of compatibility with C (or more specifically C++)
    here, particularly given the history of Delphi as a language.

    Even the name gives it away ("longint").

    That was also Lawrence's guess.

    I plonked that guy ages ago, so I don't see his responses. I
    expect he knows even less about Delphi than

    But the hypothesis that they wanted to follow
    C/C++ is obviously not true.

    You'll need to qualify that statement more before its veracity
    is even within reach of being ascertained.

    Obviously they were not following C and C++ in the sense that
    the syntax (and much of the semantics) are based on Pascal, not
    C. Clearly they wanted things like fundamental integral types
    to line up with existing C code for calls across an FFI
    boundary. One merely need look up the history of the language
    to see that.

    The quotes I included were not kept just to make the post longer,
    but because the comment relates to the content in them.

    This is about naming of integer types.

    Arne


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Fri Sep 5 19:41:51 2025
    From Newsgroup: comp.os.vms

    On 8/29/2025 9:21 PM, Dan Cross wrote:
    And of course these things evolved over time. Wirth's own
    languages after Pascal exhibited semantics more closely
    resembling those of C than Pascal. For instance, arrays in
    Oberon do not retain their size as a fundamental aspect of their
    type, one of the big complaints from Kernighan's famous critique
    of Pascal: http://doc.cat-v.org/bell_labs/why_pascal/why_pascal_is_not_my_favorite_language.pdf

    This is arguably a bug in both Oberon and C,

    The languages in the Pascal family have evolved.

    And Kernighan's critique, anno 1981, has certainly been
    addressed.

    I don't see the solution as being particularly C-like, though.

    Original 1970's Pascal:

    * array passed as an address
    * compiler uses dimensions known at compile time to enforce
    boundary checks

    ISO Pascal, VMS Pascal, Delphi/FPC, Modula-2, Oberon:

    Two types of array parameters:
    - same as original 1970's Pascal
    - open arrays to accept arguments of different dimensions

    Open arrays:
    * array passed with meta information / by descriptor (VMS terminology) /
      as object (OOP terminology)
    * compiler uses dimensions passed at runtime to enforce boundary checks

    C:

    * arrays passed as an address
    * no boundary checks

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Fri Sep 5 19:47:26 2025
    From Newsgroup: comp.os.vms

    On 9/5/2025 7:41 PM, Arne Vajhøj wrote:
    On 8/29/2025 9:21 PM, Dan Cross wrote:
    And of course these things evolved over time. Wirth's own
    languages after Pascal exhibited semantics more closely
    resembling those of C than Pascal. For instance, arrays in
    Oberon do not retain their size as a fundamental aspect of their
    type, one of the big complaints from Kernighan's famous critique
    of Pascal:
    http://doc.cat-v.org/bell_labs/why_pascal/
    why_pascal_is_not_my_favorite_language.pdf

    This is arguably a bug in both Oberon and C,

    The languages in the Pascal family has evolved.

    And Kernighan's critique anno 1981 has certainly been
    addressed.

    I don't see the solution being particular C like though.

    Original 1970's Pascal:

    * array passed as an address
    * compiler uses dimensions known at compile time to enforce
      boundary checks

    ISO Pascal, VMS Pascal, Delphi/FPC, Modula-2, Oberon:

    Two types of array parameters:
    - same as original 1970's Pascal
    - open arrays to accept arguments of different dimensions

    Open arrays:
    * array passed with meta information / by descriptor (VMS terminology) /
      as object (OOP terminology)
    * compiler use dimensions passed at runtime to enforce boundary checks

    C:

    * arrays pass as address
    * no boundary checks

    VMS Pascal:

    $ type main.pas
    program main(input,output);

    type
    weird_array = array [-2..-1] of array [2..3] of integer;

    [external]
    procedure oldstyle(a : weird_array); external;

    [external]
    procedure newstyle(a : array [low..upp:integer] of array
    [low2..upp2:integer] of integer); external;

    var
    a : weird_array;
    i, j : integer;

    begin
    for i := -2 to -1 do
    for j := 2 to 3 do
    a[i,j] := i * j;
    oldstyle(a);
    newstyle(a);
    end.
    $ type demo.pas
    module demo(input, output);

    type
    weird_array = array [-2..-1] of array [2..3] of integer;

    [global]
    procedure oldstyle(a : weird_array);

    var
    i, j : integer;

    begin
    for i := -2 to -1 do begin
    for j := 2 to 3 do begin
    write(a[i,j]);
    end;
    writeln;
    end;
    end;

    [global]
    procedure newstyle(a : array [low..upp:integer] of array
    [low2..upp2:integer] of integer);

    var
    i, j : integer;

    begin
    for i := lower(a, 1) to upper(a, 1) do begin
    for j := lower(a, 2) to upper(a, 2) do begin
    write(a[i,j]);
    end;
    writeln;
    end;
    end;

    end.
    $ pas main
    $ pas demo
    $ link main + demo
    $ run main
    -4 -6
    -2 -3
    -4 -6
    -2 -3
    $ type demo.c
    #include <stdio.h>

    void oldstyle(int *a)
    {
    int *data = a;
    for(int i = 0; i < 2; i++)
    {
    for(int j = 0; j < 2; j++)
    {
    printf("%10d", *data);
    data++;
    }
    printf("\n");
    }
    }

    #include <descrip.h>

    struct dsc$bounds
    {
    long dsc$l_l;
    long dsc$l_u;
    };

    void newstyle(struct dsc$descriptor_nca *sa)
    {
    printf("length = %d\n", sa->dsc$w_length);
    printf("dtype = %d%s\n", sa->dsc$b_dtype, sa->dsc$b_dtype == DSC$K_DTYPE_L ? " (DSC$K_DTYPE_L)" : "");
    printf("class = %d%s\n", sa->dsc$b_class, sa->dsc$b_class == DSC$K_CLASS_NCA ? " (DSC$K_CLASS_NCA)" : "");
    printf("pointer = %d\n", sa->dsc$a_pointer);
    printf("dimct = %d\n", sa->dsc$b_dimct);
    printf("arsize = %d\n", sa->dsc$l_arsize);
    char *p = (char *)&sa[1];
    int *a0 = (int *)p;
    printf("address zero element = %d\n", a0);
    p = p + sizeof(int *);
    int *step = (int *)p;
    step++;
    for(int i = 0; i < sa->dsc$b_dimct; i++)
    {
    printf("dim %d : step = %d\n", i + 1, step[i]);
    }
    p = p + sa->dsc$b_dimct * sizeof(int);
    struct dsc$bounds *b = (struct dsc$bounds *)p;
    for(int i = 0; i < sa->dsc$b_dimct; i++)
    {
    printf("dim %d : low=%d high=%d\n", i, b[i].dsc$l_l, b[i].dsc$l_u);
    }
    int *data = (int *)sa->dsc$a_pointer;
    for(int i = 0; i < 2; i++)
    {
    for(int j = 0; j < 2; j++)
    {
    printf("%10d", *data);
    data++;
    }
    printf("\n");
    }
    }

    $ cc demo
    $ link main + demo
    $ run main
    -4 -6
    -2 -3
    length = 4
    dtype = 8 (DSC$K_DTYPE_L)
    class = 10 (DSC$K_CLASS_NCA)
    pointer = 2060040496
    dimct = 2
    arsize = 16
    address zero element = 2060040468
    dim 1 : step = 4
    dim 2 : step = -2
    dim 0 : low=-2 high=-1
    dim 1 : low=2 high=3
    -4 -6
    -2 -3

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.os.vms on Sat Sep 6 01:50:06 2025
    From Newsgroup: comp.os.vms

    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 9/5/2025 7:41 PM, Arne Vajhøj wrote:
    On 8/29/2025 9:21 PM, Dan Cross wrote:
    And of course these things evolved over time. Wirth's own
    languages after Pascal exhibited semantics more closely
    resembling those of C than Pascal. For instance, arrays in
    Oberon do not retain their size as a fundamental aspect of their
    type, one of the big complaints from Kernighan's famous critique
    of Pascal:
    http://doc.cat-v.org/bell_labs/why_pascal/
    why_pascal_is_not_my_favorite_language.pdf

    This is arguably a bug in both Oberon and C,

    The languages in the Pascal family has evolved.

    And Kernighan's critique anno 1981 has certainly been
    addressed.

    I don't see the solution being particular C like though.

    Original 1970's Pascal:

    * array passed as an address
    * compiler uses dimensions known at compile time to enforce
      boundary checks

    ISO Pascal, VMS Pascal, Delphi/FPC, Modula-2, Oberon:

    Two types of array parameters:
    - same as original 1970's Pascal
    - open arrays to accept arguments of different dimensions

    Open arrays:
    * array passed with meta information / by descriptor (VMS terminology) /
      as object (OOP terminology)
    * compiler use dimensions passed at runtime to enforce boundary checks

    C:

    * arrays pass as address
    * no boundary checks

    You are imprecise. Classic Pascal has conformant array parameters,
    which pass bounds. Extended Pascal (and VMS Pascal) has schema
    types, including array schemas; this is much more powerful than
    conformant arrays, and, like conformant arrays, it can be checked
    partially at compile time, while checks that indices stay in
    range must sometimes be done at runtime.

    IIUC Ada has constructs equivalent to Extended Pascal schema types,
    but IMHO with messier syntax.

    C99 has VMTs (variably modified types): an array with bounds is
    passed to a function, and the function knows which parameters carry
    the bounds. This is less general than Extended Pascal schema types,
    but it is more general than classic Pascal conformant arrays and
    in principle could cover a lot of uses. There is a weakness: the
    standard allows incomplete declarations (with unspecified bounds)
    in prototypes, and even when the prototype is complete it forbids
    signaling errors for a mismatch. But given what is in the C99
    standard, a compiler could have a nonconforming mode where it
    signals errors for mismatches. And regardless of conformance, a
    compiler could warn about mismatches. Unfortunately, I do
    not know of any C compiler that actually warns about mismatches
    between VMTs in prototypes and in function definitions.
    Similarly, I am not aware of a C compiler that uses knowledge
    of VMT bounds to generate bounds-checking code.
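    For readers who have not seen the C99 form being described, a
    small sketch of a variably modified parameter; the bound
    parameters come first so that the array declarator can refer to
    them, and, per the above, typical compilers do not use them to
    generate bounds checks.

    #include <stdio.h>

    /* n1 and n2 must precede a so the declarator can use them; the
       bounds travel as ordinary arguments. */
    static void print_matrix(int n1, int n2, int a[n1][n2])
    {
        for (int i = 0; i < n1; i++) {
            for (int j = 0; j < n2; j++)
                printf("%10d", a[i][j]);
            printf("\n");
        }
    }

    int main(void)
    {
        int a[2][2] = { { -4, -6 }, { -2, -3 } };  /* same values as the Pascal demo */
        print_matrix(2, 2, a);
        return 0;
    }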

    VMS Pascal:

    $ type main.pas
    program main(input,output);

    type
    weird_array = array [-2..-1] of array [2..3] of integer;

    [external]
    procedure oldstyle(a : weird_array); external;

    The 'newstyle' procedure actually uses conformant arrays from
    classic Wirth Pascal.

    [external]
    procedure newstyle(a : array [low..upp:integer] of array [low2..upp2:integer] of integer); external;

    var
    a : weird_array;
    i, j : integer;

    begin
    for i := -2 to -1 do
    for j := 2 to 3 do
    a[i,j] := i * j;
    oldstyle(a);
    newstyle(a);
    end.
    $ type demo.pas
    module demo(input, output);

    type
    weird_array = array [-2..-1] of array [2..3] of integer;

    [global]
    procedure oldstyle(a : weird_array);

    var
    i, j : integer;

    begin
    for i := -2 to -1 do begin
    for j := 2 to 3 do begin
    write(a[i,j]);
    end;
    writeln;
    end;
    end;

    [global]
    procedure newstyle(a : array [low..upp:integer] of array [low2..upp2:integer] of integer);

    var
    i, j : integer;

    begin
    for i := lower(a, 1) to upper(a, 1) do begin
    for j := lower(a, 2) to upper(a, 2) do begin
    write(a[i,j]);
    end;
    writeln;
    end;
    end;

    end.
    $ pas main
    $ pas demo
    $ link main + demo
    $ run main
    -4 -6
    -2 -3
    -4 -6
    -2 -3
    $ type demo.c
    #include <stdio.h>

    void oldstyle(int *a)
    {
    int *data = a;
    for(int i = 0; i < 2; i++)
    {
    for(int j = 0; j < 2; j++)
    {
    printf("%10d", *data);
    data++;
    }
    printf("\n");
    }
    }

    #include <descrip.h>

    struct dsc$bounds
    {
    long dsc$l_l;
    long dsc$l_u;
    };

    void newstyle(struct dsc$descriptor_nca *sa)
    {
    printf("length = %d\n", sa->dsc$w_length);
    printf("dtype = %d%s\n", sa->dsc$b_dtype, sa->dsc$b_dtype == DSC$K_DTYPE_L ? " (DSC$K_DTYPE_L)" : "");
    printf("class = %d%s\n", sa->dsc$b_class, sa->dsc$b_class == DSC$K_CLASS_NCA ? " (DSC$K_CLASS_NCA)" : "");
    printf("pointer = %d\n", sa->dsc$a_pointer);
    printf("dimct = %d\n", sa->dsc$b_dimct);
    printf("arsize = %d\n", sa->dsc$l_arsize);
    char *p = (char *)&sa[1];
    int *a0 = (int *)p;
    printf("address zero element = %d\n", a0);
    p = p + sizeof(int *);
    int *step = (int *)p;
    step++;
    for(int i = 0; i < sa->dsc$b_dimct; i++)
    {
    printf("dim %d : step = %d\n", i + 1, step[i]);
    }
    p = p + sa->dsc$b_dimct * sizeof(int);
    struct dsc$bounds *b = (struct dsc$bounds *)p;
    for(int i = 0; i < sa->dsc$b_dimct; i++)
    {
    printf("dim %d : low=%d high=%d\n", i, b[i].dsc$l_l, b[i].dsc$l_u);
    }
    int *data = (int *)sa->dsc$a_pointer;
    for(int i = 0; i < 2; i++)
    {
    for(int j = 0; j < 2; j++)
    {
    printf("%10d", *data);
    data++;
    }
    printf("\n");
    }
    }

    $ cc demo
    $ link main + demo
    $ run main
    -4 -6
    -2 -3
    length = 4
    dtype = 8 (DSC$K_DTYPE_L)
    class = 10 (DSC$K_CLASS_NCA)
    pointer = 2060040496
    dimct = 2
    arsize = 16
    address zero element = 2060040468
    dim 1 : step = 4
    dim 2 : step = -2
    dim 0 : low=-2 high=-1
    dim 1 : low=2 high=3
    -4 -6
    -2 -3

    Arne

    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Fri Sep 5 22:21:13 2025
    From Newsgroup: comp.os.vms

    On 9/5/2025 9:50 PM, Waldek Hebisch wrote:
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 9/5/2025 7:41 PM, Arne Vajhøj wrote:
    On 8/29/2025 9:21 PM, Dan Cross wrote:
    And of course these things evolved over time. Wirth's own
    languages after Pascal exhibited semantics more closely
    resembling those of C than Pascal. For instance, arrays in
    Oberon do not retain their size as a fundamental aspect of their
    type, one of the big complaints from Kernighan's famous critique
    of Pascal:
    http://doc.cat-v.org/bell_labs/why_pascal/
    why_pascal_is_not_my_favorite_language.pdf

    This is arguably a bug in both Oberon and C,

    The languages in the Pascal family has evolved.

    And Kernighan's critique anno 1981 has certainly been
    addressed.

    I don't see the solution being particular C like though.

    Original 1970's Pascal:

    * array passed as an address
    * compiler uses dimensions known at compile time to enforce
      boundary checks

    ISO Pascal, VMS Pascal, Delphi/FPC, Modula-2, Oberon:

    Two types of array parameters:
    - same as original 1970's Pascal
    - open arrays to accept arguments of different dimensions

    Open arrays:
    * array passed with meta information / by descriptor (VMS terminology) /
      as object (OOP terminology)
    * compiler use dimensions passed at runtime to enforce boundary checks

    C:

    * arrays pass as address
    * no boundary checks

    You are inprecise. Classic Pascal has conformant array parameters,
    which pass bounds. Extended Pascal (and VMS Pascal) has schema
    types, including array schema, this is much more powerful than
    conformat arrays, and the same as conformat array could be
    checked, partially at compile time, check for indices staying in
    range sometimes must be done at runtime.

    The story I got was that:
    * Wirth Pascal did not have it (conformant array)
    * ISO Pascal 1983 and 1990 added it for level 1
    but not for level 0

    But all before my time, so ...

    Arne


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.os.vms on Sat Sep 6 19:09:50 2025
    From Newsgroup: comp.os.vms

    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 9/5/2025 9:50 PM, Waldek Hebisch wrote:
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 9/5/2025 7:41 PM, Arne Vajhøj wrote:
    On 8/29/2025 9:21 PM, Dan Cross wrote:
    And of course these things evolved over time. Wirth's own
    languages after Pascal exhibited semantics more closely
    resembling those of C than Pascal. For instance, arrays in
    Oberon do not retain their size as a fundamental aspect of their
    type, one of the big complaints from Kernighan's famous critique
    of Pascal:
    http://doc.cat-v.org/bell_labs/why_pascal/
    why_pascal_is_not_my_favorite_language.pdf

    This is arguably a bug in both Oberon and C,

    The languages in the Pascal family has evolved.

    And Kernighan's critique anno 1981 has certainly been
    addressed.

    I don't see the solution being particular C like though.

    Original 1970's Pascal:

    * array passed as an address
    * compiler uses dimensions known at compile time to enforce
      boundary checks

    ISO Pascal, VMS Pascal, Delphi/FPC, Modula-2, Oberon:

    Two types of array parameters:
    - same as original 1970's Pascal
    - open arrays to accept arguments of different dimensions

    Open arrays:
    * array passed with meta information / by descriptor (VMS terminology) /
      as object (OOP terminology)
    * compiler use dimensions passed at runtime to enforce boundary checks
    C:

    * arrays pass as address
    * no boundary checks

    You are inprecise. Classic Pascal has conformant array parameters,
    which pass bounds. Extended Pascal (and VMS Pascal) has schema
    types, including array schema, this is much more powerful than
    conformat arrays, and the same as conformat array could be
    checked, partially at compile time, check for indices staying in
    range sometimes must be done at runtime.

    The story I got was that:
    * Wirth Pascal did not have it (conformant array)

    AFAIK Wirth Pascal had it. Several ports of Wirth Pascal to different
    machines, done by others, lost conformant arrays.

    * ISO Pascal 1983 and 1990 added it for level 1
    but not for level 0

    ISO Pascal 1983 simply sanctioned existing practice, that is,
    the existence of ports without conformant arrays. IIUC the differences
    between level 1 ISO 1983 Pascal and Wirth Pascal were tiny.

    But all before my time, so ...

    Before my time too, but some people spent effort to dig out
    various historical details.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Sat Sep 6 15:44:03 2025
    From Newsgroup: comp.os.vms

    On 9/6/2025 3:09 PM, Waldek Hebisch wrote:
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 9/5/2025 9:50 PM, Waldek Hebisch wrote:
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 9/5/2025 7:41 PM, Arne Vajhøj wrote:
    Original 1970's Pascal:

    * array passed as an address
    * compiler uses dimensions known at compile time to enforce
      boundary checks

    ISO Pascal, VMS Pascal, Delphi/FPC, Modula-2, Oberon:

    Two types of array parameters:
    - same as original 1970's Pascal
    - open arrays to accept arguments of different dimensions

    Open arrays:
    * array passed with meta information / by descriptor (VMS terminology) /
      as object (OOP terminology)
    * compiler use dimensions passed at runtime to enforce boundary checks

    You are inprecise. Classic Pascal has conformant array parameters,
    which pass bounds. Extended Pascal (and VMS Pascal) has schema
    types, including array schema, this is much more powerful than
    conformat arrays, and the same as conformat array could be
    checked, partially at compile time, check for indices staying in
    range sometimes must be done at runtime.

    The story I got was that:
    * Wirth Pascal did not have it (conformant array)

    AFAIK Wirth Pascal had it. Several ports of Wirth Pascal to different machines, done by others, lost conformant arrays.

    * ISO Pascal 1983 and 1990 added it for level 1
    but not for level 0

    ISO Pascal 1983 simply sanctioned existing practice, that is
    existence of ports without conformant arrays. IIUC differences
    between level 1 ISO 1983 Pascal and Wirth Pascal were tiny.

    But all before my time, so ...

    Before my time too, but some people spent effort to dig out
    various historical details.

    Reagan probably knows.

    Anyway I can easily add the schema thing from extended ISO
    Pascal.

    program main3(input,output);

    type
    weird_array = array [-2..-1] of array [2..3] of integer;
    twodim_integer_array(n1,n2,n3,n4:integer) = array [n1..n2] of array [n3..n4] of integer;

    procedure oldstyle(a : weird_array);

    var
    i, j : integer;

    begin
    for i := -2 to -1 do begin
    for j := 2 to 3 do begin
    write(a[i,j]);
    end;
    writeln;
    end;
    end;

    procedure newstyle(a : array [low..upp:integer] of array
    [low2..upp2:integer] of integer);

    var
    i, j : integer;

    begin
    for i := lower(a, 1) to upper(a, 1) do begin
    for j := lower(a, 2) to upper(a, 2) do begin
    write(a[i,j]);
    end;
    writeln;
    end;
    end;

    procedure extstyle(a : twodim_integer_array);

    var
    i, j : integer;

    begin
    for i := lower(a, 1) to upper(a, 1) do begin
    for j := lower(a, 2) to upper(a, 2) do begin
    write(a[i,j]);
    end;
    writeln;
    end;
    end;

    var
    a : weird_array;
    ax : twodim_integer_array(-2,-1,2,3);
    i, j : integer;

    begin
    for i := -2 to -1 do
    for j := 2 to 3 do
    a[i,j] := i * j;
    oldstyle(a);
    newstyle(a);
    for i := -2 to -1 do
    for j := 2 to 3 do
    ax[i,j] := i * j;
    extstyle(ax);
    end.

    Arne

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From =?UTF-8?Q?Arne_Vajh=C3=B8j?=@arne@vajhoej.dk to comp.os.vms on Sat Sep 6 19:20:58 2025
    From Newsgroup: comp.os.vms

    On 9/6/2025 3:44 PM, Arne Vajhøj wrote:
    Anyway I can easily add the schema thing from extended ISO
    Pascal.

    program main3(input,output);

    type
      weird_array = array [-2..-1] of array [2..3] of integer;
      twodim_integer_array(n1,n2,n3,n4:integer) = array [n1..n2] of array [n3..n4] of integer;

    procedure extstyle(a : twodim_integer_array);

    var
      i, j : integer;

    begin
      for i := lower(a, 1) to upper(a, 1) do begin
         for j := lower(a, 2) to upper(a, 2) do begin
            write(a[i,j]);
         end;
         writeln;
      end;
    end;

    var
      a : weird_array;
      ax : twodim_integer_array(-2,-1,2,3);
      i, j : integer;

    begin
      for i := -2 to -1 do
         for j := 2 to 3 do
            ax[i,j] := i * j;
      extstyle(ax);
    end.

    The schema version is a bit special in nature.

    The closest similar thing I could come up with is this C++:

    #include <iostream>
    #include <array>

    using namespace std;

    template<size_t N1,size_t N2>
    void f(array<array<int,N2>,N1> a)
    {
        for(int i = 0; i < N1; i++)
        {
            for(int j = 0; j < N2; j++)
            {
                cout << " " << a[i][j];
            }
            cout << endl;
        }
    }

    int main()
    {
        array<array<int,2>,2> a = { -4, -6 , -2, -3 };
        f(a);
        return 0;
    }

    which seems functionally close, but I suspect they are
    implemented very differently.

    Arne



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Mon Sep 8 10:26:37 2025
    From Newsgroup: comp.os.vms

    In article <109frqo$2n3qo$2@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/29/2025 9:21 PM, Dan Cross wrote:
    In article <108tbk2$29q30$2@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/29/2025 5:38 PM, Dan Cross wrote:
    In article <108t0d4$249vm$11@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 8/29/2025 9:17 AM, Dan Cross wrote:
    In article <108g8kk$33isk$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    Delphi provide both flavors. shortint/smallint/integer
    and int8/int16/int32, byte/word/cardinal and
    uint8/uint16/uint32. I believe the first are the most
    widely used.

    The older names feel like they're very much looking backwards in
    time.

    Developers tend to like what they know.

    (64 bit is just int64 and uint64, because somehow they
    fucked up longint and made it 32 bit on 32 bit and 64 bit
    Windows but 64 bit on 64 bit *nix)

    I'd blame C for that.

    Delphi is not C.

    Obviously.

    But it would be foolish to assume that they weren't influenced
    by matters of compatibility with C (or more specifically C++)
    here, particularly given the history of Delphi as a language.

    Even the name gives it away ("longint").

    That was also Lawrence's guess.

    I plonked that guy ages ago, so I don't see his responses. I
    expect he knows even less about Delphi than

    But the hypothesis that they wanted to follow
    C/C++ is obviously not true.

    You'll need to qualify that statement more before its veracity
    is even within reach of being ascertained.

    Obviously they were not following C and C++ in the sense that
    the syntax (and much of the semantics) are based on Pascal, not
    C. Clearly they wanted things like fundamental integral types
    to line up with existing C code for calls across an FFI
    boundary. One merely need look up the history of the language
    to see that.

    The quotes I included were not kept just to make the post longer,
    but because the comment relates to the content in them.

    Your suggestion was that concerns about compatibility with C
    types did not influence the design of Delphi. That is
    manifestly wrong.

    This is about naming of integer types.

    And their sizes.

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Mon Sep 8 10:29:40 2025
    From Newsgroup: comp.os.vms

    In article <109fsfv$2n3qo$3@dont-email.me>,
    Arne Vajh|+j <arne@vajhoej.dk> wrote:
    On 8/29/2025 9:21 PM, Dan Cross wrote:
    And of course these things evolved over time. Wirth's own
    languages after Pascal exhibited semantics more closely
    resembling those of C than Pascal. For instance, arrays in
    Oberon do not retain their size as a fundamental aspect of their
    type, one of the big complaints from Kernighan's famous critique
    of Pascal:
    http://doc.cat-v.org/bell_labs/why_pascal/why_pascal_is_not_my_favorite_language.pdf

    This is arguably a bug in both Oberon and C,

    The languages in the Pascal family has evolved.

    And Kernighan's critique anno 1981 has certainly been
    addressed.

    I don't see the solution being particular C like though.

    You got this backwards.

    The bug in Oberon and in C was _not_ including the size of an
    array as part of its type. Pascal got this right, as it's
    actually a useful property (not just for bounds checking, but
    for _eliding_ bounds checking by enforcing things statically at
    compile time).

    What was missing was something analogous to slices to make those
    arrays ergonomically usable when passed as a parameter to a
    function/procedure.
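    A rough C sketch of the missing piece being described here, with
    the obvious caveat that this is hand-rolled rather than a language
    feature: a "slice" carries the pointer and the element count
    together, so one routine handles arrays of any size while the
    bounds remain available to check against.

    #include <stdio.h>
    #include <stddef.h>

    /* A hand-rolled slice: base pointer plus element count. */
    struct int_slice {
        int    *ptr;
        size_t  len;
    };

    static int sum(struct int_slice s)
    {
        int total = 0;
        for (size_t i = 0; i < s.len; i++)   /* the callee knows the bounds */
            total += s.ptr[i];
        return total;
    }

    int main(void)
    {
        int small[2] = { 1, 2 };
        int big[5]   = { 1, 2, 3, 4, 5 };

        printf("%d\n", sum((struct int_slice){ small, 2 }));
        printf("%d\n", sum((struct int_slice){ big, 5 }));
        return 0;
    }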

    Original 1970's Pascal:
    [snip; irrelevant]

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.os.vms on Mon Sep 8 10:34:22 2025
    From Newsgroup: comp.os.vms

    In article <109i0ts$3mmqu$1@paganini.bofh.team>,
    Waldek Hebisch <antispam@fricas.org> wrote:
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    [snip]
    But all before my time, so ...

    Before my time too, but some people spent effort to dig out
    various historical details.

    Correct. I did that deep-dive here a while back. E.g., https://rbnsn.com/pipermail/info-vax_rbnsn.com/2025-January/150092.html

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2