Sysop: | Amessyroom |
---|---|
Location: | Fayetteville, NC |
Users: | 27 |
Nodes: | 6 (0 / 6) |
Uptime: | 41:18:28 |
Calls: | 631 |
Calls today: | 2 |
Files: | 1,187 |
D/L today: | 24 files (29,813K bytes) |
Messages: | 174,725 |
On 8/19/2025 10:09 AM, Dan Cross wrote:
In article <1081sk3$3njqo$7@dont-email.me>,
Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
On 2025-08-18, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
I happen to disagree with Simon's notion of what makes for
robust programming, but to go to such an extreme as to suggest
that writing code as if logical operators don't short-circuit
is the same as not knowing the semantics of division is
specious.
That last one is an interesting example. I may not care about
short circuiting, but I am _very_ _very_ aware of the combined
unsigned integers and signed integers issues in C expressions. :-(
It also affects how I look at the same issues in other languages.
I've mentioned this before, but I think languages should give you
unsigned integers by default, and you should have to ask for
a signed integer if you really want one.
Whether integers are signed or unsigned by default is not
terribly interesting to me, but I do believe, strongly, that
implicit type conversions as in C are a Bad Idea(TM), and I
think that history has shown that view to be more or less
correct; the only language that seems to get this approximately
right is Haskell, using typeclasses, but that's not implicit
coercion; it takes well-defined, strongly-typed functions that
do explicit conversions internally, from the prelude.
But that's Haskell. For most programming, if one wants to do
arithmetic on operands of differing type, then one should be
required to explicitly convert everything to a single, uniform
type and live with whatever the semantics of that type are.
This needn't be as tedious or verbose as it sounds; with a
little bit of type inference, it can be quite succinct while
still being safe and correct.
Kotlin is rather picky about mixing signed and unsigned.
var v: UInt = 16u
v = v / 2
gives an error.
v = v / 2u
v = v / 2.toUInt()
works.
I consider that rather picky.
On 8/19/2025 9:01 AM, Simon Clubley wrote:
On 2025-08-18, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
I happen to disagree with Simon's notion of what makes for
robust programming, but to go to such an extreme as to suggest
that writing code as if logical operators don't short-circuit
is the same as not knowing the semantics of division is
specious.
That last one is an interesting example. I may not care about
short circuiting, but I am _very_ _very_ aware of the combined
unsigned integers and signed integers issues in C expressions. :-(
It also affects how I look at the same issues in other languages.
I've mentioned this before, but I think languages should give you
unsigned integers by default, and you should have to ask for
a signed integer if you really want one.
"by default" sort of implies that signedness is an attribute of the
same type.
Why not just make it two different types with different names?
Whether we follow tradition and call them integer and cardinal
or more modern style and call them int and uint is less important.
In article <10822mn$3pb8v$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 8/18/2025 11:00 PM, Lawrence D'Oliveiro wrote:
On Mon, 18 Aug 2025 21:49:29 -0400, Arne Vajhøj wrote:
On 8/18/2025 7:59 PM, Lawrence D'Oliveiro wrote:
On Mon, 18 Aug 2025 19:48:16 -0400, Arne Vajhøj wrote:
On 8/18/2025 7:45 PM, Lawrence D'Oliveiro wrote:
Parentheses are best used lightly -- where needed, maybe a little
bit more than that, and that's it.
Otherwise parenthesis clutter introduces its own obstacles to
readability.
That is not the mantra among people who try to prevent future
errors.
There are people who just repeat what they are told, aren't there,
instead of learning from actual experience.
It is the recommendation from people with actual experience.
One book that recommends it is "The Practice of Programming".
Brian Kernighan and Rob Pike.
One wonders how much experience they really have, across how many
different languages.
(Wow.)
Brian Kernighan and Rob Pike? A lot! :-)
It may help to read what they actually wrote in TPoP; on page 6:
|_Parenthesize to resolve ambiguity_. Parentheses specify
|grouping and can be used to make the intent clear even when
|they are not required. The inner parentheses in the previous
|example are not necessary, but they don't hurt, either.
|Seasoned programmers might omit them, because the relational
|operators (< <= == != >= >) have higher precedence than the
|logical operators (&& and ||).
|
|When mixing unrelated operators, though, it's a good idea to
|parenthesize. C and its friends present pernicious precedence
|problems, and it's easy to make a mistake.
For reference, the "previous example" they mention here is:
if ((block_id >= actblks) || (block_id < unblocks))
Most C programmers would write this as,
if (block_id >= actblks || block_id < unblocks)
And Kernighan and Pike would be fine with that. It must be
noted that, throughout the rest of TPoP, they rarely
parenthesize as aggressively as they do in that one example.
For example, on page 98, in the discussion of building a CSV
file parser interface, they present a function called,
`advquoted` that contains this line of code:
if (p[j] == '"' && p[++j] != '"') {
...
}
(Note this doesn't just omit parentheses, but also makes use of
the pre-increment operator _and_ boolean short-circuiting.)
Pike is famous for brevity; his 1989 document, "Notes on
Programming in C" is a model here: http://www.literateprogramming.com/pikestyle.pdf
Even now, it's still an interesting read. I like my own code
principles, as well, but of course, I'm biased: https://pub.gajendra.net/2016/03/code_principles
- Dan C.
Two points related to the fact that they have a special operator instead
of just using plain assignment.
On 8/18/2025 11:00 PM, Lawrence D'Oliveiro wrote:
One wonders how much experience they really have, across how many
different languages.
Brian Kernighan and Rob Pike? A lot! :-)
In article <68a493ec$0$710$14726298@news.sunsite.dk>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
Kotlin is rather picky about mixing signed and unsigned.
var v: UInt = 16u
v = v / 2
gives an error.
v = v / 2u
v = v / 2.toUInt()
works.
I consider that rather picky.
It's kind of annoying that it can't infer to use unsigned for
the '2' in the first example. Rust, for example, does do that
inference which makes most arithmetic very natural.
In article <1081rg2$3njqo$3@dont-email.me>,
Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
I write simple to understand code, not clever code, even when the
problem it is solving is complex or has a lot of functionality
built into the problem.
I've found it makes code more robust and easier for others to read,
especially when they may not have the knowledge you have when you
wrote the original code.
I'm curious how this expresses itself with respect to e.g. the short-circuiting thing, though. For instance, this might be
common in C:
struct something *p;
...
if (p != NULL && p->ptr != NULL && something(*p->ptr)) {
// Do something here.
}
This, of course, relies on short-circuiting to avoid
dereferencing either `p` or `p->ptr` if either is NULL. What
is the alternative?
if (p != NULL) {
if (p->ptr != NULL) {
if (something(*p->ptr)) {
// Do something....
}
}
}
If I dare say so, this is strictly worse because the code is now
much more heavily indented.
Ken Thompson used to avoid things like this by writing such code
as:
if (p != NULL)
if (p->ptr != NULL)
if (something(*p->ptr)) {
// Do something....
}
Which has a certain elegance to it, but automated code
formatters inevitably don't understand it (and at this point,
one really ought to be using an automated formatter whenever
possible).
An alternative might be to extract the conditional and put it
into an auxiliary function, and use something similar to
Dijkstra's guarded commands:
void
maybe_do_something(struct something *p)
{
if (p == NULL)
return;
if (p->ptr == NULL)
return;
if (!something(*p->ptr))
return;
// Now do something.
}
I would argue that this is better than the previous example, and
possibly on par with or better than the original: if nothing
else, it gives a name to the operation. This is of course just
a contrived example, so the name here is meaningless, but one
hopes that in a real program a name with some semantic meaning
would be chosen.
Even now, it's still an interesting read. I like my own code
principles, as well, but of course, I'm biased: https://pub.gajendra.net/2016/03/code_principles
On 19/08/2025 17:07, Dan Cross wrote:
In article <10822mn$3pb8v$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 8/18/2025 11:00 PM, Lawrence D'Oliveiro wrote:
On Mon, 18 Aug 2025 21:49:29 -0400, Arne Vajhøj wrote:
On 8/18/2025 7:59 PM, Lawrence D'Oliveiro wrote:
On Mon, 18 Aug 2025 19:48:16 -0400, Arne Vajhøj wrote:
On 8/18/2025 7:45 PM, Lawrence D'Oliveiro wrote:
Parentheses are best used lightly -- where needed, maybe a little
bit more than that, and that's it.
Otherwise parenthesis clutter introduces its own obstacles to
readability.
That is not the mantra among people who try to prevent future
errors.
There are people who just repeat what they are told, aren't there,
instead of learning from actual experience.
It is the recommendation from people with actual experience.
One book that recommends it is "The Practice of Programming".
Brian Kernighan and Rob Pike.
One wonders how much experience they really have, across how many
different languages.
(Wow.)
Brian Kernighan and Rob Pike? A lot! :-)
It may help to read what they actually wrote in TPoP; on page 6:
|_Parenthesize to resolve ambiguity_. Parentheses specify
|grouping and can be used to make the intent clear even when
|they are not required. The inner parentheses in the previous
|example are not necessary, but they don't hurt, either.
|Seasoned programmers might omit them, because the relational
|operators (< <= == != >= >) have higher precedence than the
|logical operators (&& and ||).
|
|When mixing unrelated operators, though, it's a good idea to
|parenthesize. C and its friends present pernicious precedence
|problems, and it's easy to make a mistake.
For reference, the "previous example" they mention here is:
if ((block_id >= actblks) || (block_id < unblocks))
Most C programmers would write this as,
if (block_id >= actblks || block_id < unblocks)
And Kernighan and Pike would be fine with that. It must be
noted that, throughout the rest of TPoP, they rarely
parenthesize as aggressively as they do in that one example.
For example, on page 98, in the discussion of building a CSV
file parser interface, they present a function called,
`advquoted` that contains this line of code:
if (p[j] == '"' && p[++j] != '"') {
...
}
(Note this doesn't just omit parentheses, but also makes use of
the pre-increment operator _and_ boolean short-circuiting.)
Pike is famous for brevity; his 1989 document, "Notes on
Programming in C" is a model here:
http://www.literateprogramming.com/pikestyle.pdf
Even now, it's still an interesting read. I like my own code
principles, as well, but of course, I'm biased:
https://pub.gajendra.net/2016/03/code_principles
- Dan C.
Interestingly I have just come across an old bit of DEC Basic code:
REPORT.ONLY = W.S4 = "R" ! Global flag
I know what it does, but I would have rapped the knuckles of any programmer
who did that on my shift!
In article <1082lks$3nmtt$2@dont-email.me>,
Chris Townley <news@cct-net.co.uk> wrote:
On 19/08/2025 17:07, Dan Cross wrote:
In article <10822mn$3pb8v$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 8/18/2025 11:00 PM, Lawrence D'Oliveiro wrote:
On Mon, 18 Aug 2025 21:49:29 -0400, Arne Vajhøj wrote:
On 8/18/2025 7:59 PM, Lawrence D'Oliveiro wrote:
On Mon, 18 Aug 2025 19:48:16 -0400, Arne Vajhøj wrote:
On 8/18/2025 7:45 PM, Lawrence D'Oliveiro wrote:
Parentheses are best used lightly -- where needed, maybe a little
bit more than that, and that's it.
Otherwise parenthesis clutter introduces its own obstacles to
readability.
That is not the mantra among people who try to prevent future
errors.
There are people who just repeat what they are told, aren't there,
instead of learning from actual experience.
It is the recommendation from people with actual experience.
One book that recommends it is "The Practice of Programming".
Brian Kernighan and Rob Pike.
One wonders how much experience they really have, across how many
different languages.
(Wow.)
Brian Kernighan and Rob Pike? A lot! :-)
It may help to read what they actually wrote in TPoP; on page 6:
|_Parenthesize to resolve ambiguity_. Parentheses specify
|grouping and can be used to make the intent clear even when
|they are not required. The inner parentheses in the previous
|example are not necessary, but they don't hurt, either.
|Seasoned programmers might omit them, because the relational
|operators (< <= == != >= >) have higher precedence than the
|logical operators (&& and ||).
|
|When mixing unrelated operators, though, it's a good idea to
|parenthesize. C and its friends present pernicious precedence
|problems, and it's easy to make a mistake.
For reference, the "previous example" they mention here is:
if ((block_id >= actblks) || (block_id < unblocks))
Most C programmers would write this as,
if (block_id >= actblks || block_id < unblocks)
And Kernighan and Pike would be fine with that. It must be
noted that, throughout the rest of TPoP, they rarely
parenthesize as aggressively as they do in that one example.
For example, on page 98, in the discussion of building a CSV
file parser interface, they present a function called,
`advquoted` that contains this line of code:
if (p[j] == '"' && p[++j] != '"') {
...
}
(Note this doesn't just omit parentheses, but also makes use of
the pre-increment operator _and_ boolean short-circuiting.)
Pike is famous for brevity; his 1989 document, "Notes on
Programming in C" is a model here:
http://www.literateprogramming.com/pikestyle.pdf
Even now, it's still an interesting read. I like my own code
principles, as well, but of course, I'm biased:
https://pub.gajendra.net/2016/03/code_principles
- Dan C.
Interestingly I have just come across an old bit of DEC Basic code:
REPORT.ONLY = W.S4 = "R" ! Global flag
I know what it does, but I would have rapped the knuckles of any programmer
who did that on my shift!
Not knowing DEC BASIC, am I correct in guessing that this
assigns the boolean result of comparing `W.S4` with the string
"R" to `REPORT.ONLY`?
- Dan C.
On 2025-08-19, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
Even now, it's still an interesting read. I like my own code
principles, as well, but of course, I'm biased:
https://pub.gajendra.net/2016/03/code_principles
I've just read through that document and agree with everything there.
I was especially amused by the write for readability and it will be
read many times comments as I use that wording myself.
I am surprised you are picking me up on some things however, given
the mindset expressed in that document. Perhaps your idea of readability
is different from mine. :-)
On 2025-08-19, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
In article <68a493ec$0$710$14726298@news.sunsite.dk>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
Kotlin is rather picky about mixing signed and unsigned.
var v: UInt = 16u
v = v / 2
gives an error.
v = v / 2u
v = v / 2.toUInt()
works.
I consider that rather picky.
I've not used Kotlin, but I consider that to be the really good type
of picky. :-)
It's kind of annoying that it can't infer to use unsigned for
the '2' in the first example. Rust, for example, does do that
inference which makes most arithmetic very natural.
I actually consider that to be a good thing. The programmer is forced
to think about what they have written and to change it to make those
intentions explicit in the code. I like this.
On 8/18/2025 8:48 PM, Dan Cross wrote:
In article <68a3b980$0$713$14726298@news.sunsite.dk>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
But point is that one need to know something about the
languages.
Just picking an operator that "looks like" and hope it
has similar semantics is no good.
This seems like a very extreme example. There is a scale of
knowledge when it comes to programming languages, from the basic
ways in which one does various things like write loops or
perform basic arithmetic, to the minutia of specific library or
IO routines, with semantics of specific operators and how they
combine probably somewhere in the middle.
I happen to disagree with Simon's notion of what makes for
robust programming, but to go to such an extreme as to suggest
that writing code as if logical operators don't short-circuit
is the same as not knowing the semantics of division is
specious.
There are 4 operations:
- short circuiting and
- non short circuiting and
- integer division
- floating point division
Both the source and target languages have a way of doing those: an
operator, a function, or a more complex expression.
I agree that the risk of someone not understanding how "division"
works is much less than the risk of someone not understanding how
"and" works.
On 2025-08-19, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
In article <1081rg2$3njqo$3@dont-email.me>,
Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
I write simple to understand code, not clever code, even when the
problem it is solving is complex or has a lot of functionality
built into the problem.
I've found it makes code more robust and easier for others to read,
especially when they may not have the knowledge you have when you
wrote the original code.
I'm curious how this expresses itself with respect to e.g. the
short-circuiting thing, though. For instance, this might be
common in C:
struct something *p;
...
if (p != NULL && p->ptr != NULL && something(*p->ptr)) {
// Do something here.
}
This, of course, relies on short-circuiting to avoid
dereferencing either `p` or `p->ptr` if either is NULL. What
is the alternative?
if (p != NULL) {
if (p->ptr != NULL) {
if (something(*p->ptr)) {
// Do something....
}
}
}
If I dare say so, this is strictly worse because the code is now
much more heavily indented.
Indented properly (not as in your next example!) I find that very
readable and is mostly how I would write it although I do use code
like your return example when appropriate. This is my variant:
if (p != NULL)
{
if (p->ptr != NULL)
{
if (something(*p->ptr))
{
// Do something....
}
}
}
In case that doesn't survive a NNTP client, it is in Whitesmiths format:
https://en.wikipedia.org/wiki/Indentation_style#Whitesmiths
I like to spread out code vertically as I find it is easier to read.
We are no longer in the era of VT50/52/100 terminals. :-)
Ken Thompson used to avoid things like this by writing such code
as:
if (p != NULL)
if (p->ptr != NULL)
if (something(*p->ptr)) {
// Do something....
}
YUCK * 1000!!! That's horrible!!! :-)
Which has a certain elegance to it, but automated code
formatters inevitably don't understand it (and at this point,
one really ought to be using an automated formatter whenever
possible).
An alternative might be to extract the conditional and put it
into an auxiliary function, and use something similar to
Dijkstra's guarded commands:
void
maybe_do_something(struct something *p)
{
if (p == NULL)
return;
if (p->ptr == NULL)
return;
if (!something(*p->ptr))
return;
// Now do something.
}
I would argue that this is better than the previous example, and
possibly on par with or better than the original: if nothing
else, it gives a name to the operation. This is of course just
a contrived example, so the name here is meaningless, but one
hopes that in a real program a name with some semantic meaning
would be chosen.
There is one difference for me here however. _All_ single conditional
statements as in the above example are placed in braces to help avoid
the possibility of a later editing error.
On 20/08/2025 16:03, Dan Cross wrote:
In article <1082lks$3nmtt$2@dont-email.me>,
Chris Townley <news@cct-net.co.uk> wrote:
[snip]
Interestingly I have just come across an old bit of DEC Basic code:
REPORT.ONLY = W.S4 = "R" ! Global flag
I know what it does, but I would have rapped the knuckles of any programmer
who did that on my shift!
Not knowing DEC BASIC, am I correct in guessing that this
assigns the boolean result of comparing `W.S4` with the string
"R" to `REPORT.ONLY`?
Correct, but I would have surrounded the comparison with brackets, or
used an IF statement.
Not quite as bad as a colleague who found a source code file for a
function that ended with an UNLESS Z.
Z was a global not mentioned in the source file! Try searching a massive
codebase for Z!
In article <1084fca$afbj$3@dont-email.me>,
Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
On 2025-08-19, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
Even now, it's still an interesting read. I like my own code
principles, as well, but of course, I'm biased:
https://pub.gajendra.net/2016/03/code_principles
I've just read through that document and agree with everything there.
I was especially amused by the write for readability and it will be
read many times comments as I use that wording myself.
I am surprised you are picking me up on some things however, given
the mindset expressed in that document. Perhaps your idea of readability
is different from mine. :-)
Oh, I hope any criticism I offer doesn't come across as
personal!
On 20/08/2025 13:27, Simon Clubley wrote:
<big snip>
Indented properly (not as in your next example!) I find that very
readable and is mostly how I would write it although I do use code
like your return example when appropriate. This is my variant:
if (p != NULL)
{
if (p->ptr != NULL)
{
if (something(*p->ptr))
{
// Do something....
}
}
}
In case that doesn't survive a NNTP client, it is in Whitesmiths format:
<snip>
That is why I don't like Whitesmiths
To me the curly braces should logically align with the preceding statement.
When I first looked at the example, I immediately thought there was a
missing closing brace, which of course there isn't.
I also dislike putting the opening brace at the end of the preceding
line, although I have had to in some cases. Probably a Microsoft invention.
On 2025-08-20, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
In article <1084fca$afbj$3@dont-email.me>,
Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
On 2025-08-19, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
Even now, it's still an interesting read. I like my own code
principles, as well, but of course, I'm biased:
https://pub.gajendra.net/2016/03/code_principles
I've just read through that document and agree with everything there.
I was especially amused by the write for readability and it will be
read many times comments as I use that wording myself.
I am surprised you are picking me up on some things however, given
the mindset expressed in that document. Perhaps your idea of readability
is different from mine. :-)
Oh, I hope any criticism I offer doesn't come across as
personal!
No, it absolutely does _not_ in any way.
For me, it's exactly the same as a colleague noticing something
in another colleague's code or design proposal and commenting on it.
You would have to be extremely fragile to take _that_ personally. :-)
On 20/08/2025 16:03, Dan Cross wrote:
In article <1082lks$3nmtt$2@dont-email.me>,
Chris Townley <news@cct-net.co.uk> wrote:
On 19/08/2025 17:07, Dan Cross wrote:
In article <10822mn$3pb8v$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 8/18/2025 11:00 PM, Lawrence D'Oliveiro wrote:
On Mon, 18 Aug 2025 21:49:29 -0400, Arne Vajhøj wrote:
On 8/18/2025 7:59 PM, Lawrence D'Oliveiro wrote:
On Mon, 18 Aug 2025 19:48:16 -0400, Arne Vajhøj wrote:
On 8/18/2025 7:45 PM, Lawrence D'Oliveiro wrote:
Parentheses are best used lightly -- where needed, maybe a little
bit more than that, and that's it.
Otherwise parenthesis clutter introduces its own obstacles to
readability.
That is not the mantra among people who try to prevent future
errors.
There are people who just repeat what they are told, aren't there,
instead of learning from actual experience.
It is the recommendation from people with actual experience.
One book that recommends it is "The Practice of Programming".
Brian Kernighan and Rob Pike.
One wonders how much experience they really have, across how many
different languages.
(Wow.)
Brian Kernighan and Rob Pike? A lot! :-)
It may help to read what they actually wrote in TPoP; on page 6:
|_Parenthesize to resolve ambiguity_. Parentheses specify
|grouping and can be used to make the intent clear even when
|they are not required. The inner parentheses in the previous
|example are not necessary, but they don't hurt, either.
|Seasoned programmers might omit them, because the relational
|operators (< <= == != >= >) have higher precedence than the
|logical operators (&& and ||).
|
|When mixing unrelated operators, though, it's a good idea to
|parenthesize. C and its friends present pernicious precedence
|problems, and it's easy to make a mistake.
For reference, the "previous example" they mention here is:
if ((block_id >= actblks) || (block_id < unblocks))
Most C programmers would write this as,
if (block_id >= actblks || block_id < unblocks)
And Kernighan and Pike would be fine with that. It must be
noted that, throughout the rest of TPoP, they rarely
parenthesize as aggressively as they do in that one example.
For example, on page 98, in the discussion of building a CSV
file parser interface, they present a function called,
`advquoted` that contains this line of code:
if (p[j] == '"' && p[++j] != '"') {
...
}
(Note this doesn't just omit parentheses, but also makes use of
the pre-increment operator _and_ boolean short-circuiting.)
Pike is famous for brevity; his 1989 document, "Notes on
Programming in C" is a model here:
http://www.literateprogramming.com/pikestyle.pdf
Even now, it's still an interesting read. I like my own code
principles, as well, but of course, I'm biased:
https://pub.gajendra.net/2016/03/code_principles
- Dan C.
Interestingly I have just come across an old bit of DEC Basic code:
REPORT.ONLY = W.S4 = "R" ! Global flag
I know what it does, but I would have rapped the knuckles of any programmer
who did that on my shift!
Not knowing DEC BASIC, am I correct in guessing that this
assigns the boolean result of comparing `W.S4` with the string
"R" to `REPORT.ONLY`?
- Dan C.
Correct, but I would have surrounded the comparison with brackets, or used an IF
statement.
Not quite as bad as a colleague who found a source code file for a
function that ended with an UNLESS Z.
Z was a global not mentioned in the source file! Try searching a massive codebase for Z!
On 8/20/2025 11:51 AM, Dan Cross wrote:
In article <1084drl$afbj$1@dont-email.me>,
Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
On 2025-08-19, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
In article <68a493ec$0$710$14726298@news.sunsite.dk>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
Kotlin is rather picky about mixing signed and unsigned.
var v: UInt = 16u
v = v / 2
gives an error.
v = v / 2u
v = v / 2.toUInt()
works.
I consider that rather picky.
I've not used Kotlin, but I consider that to be the really good type
of picky. :-)
It's kind of annoying that it can't infer to use unsigned for
the '2' in the first example. Rust, for example, does do that
inference which makes most arithmetic very natural.
I actually consider that to be a good thing. The programmer is forced
to think about what they have written and to change it to make those
intentions explicit in the code. I like this.
I think the point is, that in cases like this, the compiler
enforces the explicit typing anyway: if the program compiles, it
is well-typed. If it does not, then it is not. In that
context, this level of explicitness adds little, if any,
additional value.
That the literal "2" is a different type than "2u" is
interesting, however, and goes back to what you were saying
earlier about default signedness. As a mathematical object,
"2" is just a positive integer, but programming languages are
not _really_ a mathematical notation, so the need to be explicit
here makes sense from that perspective, I guess.
In Rust, I might write this sequence as:
let mut v = 16u32;
v = v / 2;
And the type inference mechanism would deduce that 2 should be
treated as a `u32`. But I could just as easily write,
let mut v = 16u32;
v = v / 2u32;
Which explicitly calls out that 2 as a `u32`.
Is this really better, though? This is where I'd argue that
matters of idiom come into play: this is not idiomatic usage in
the language, and so it is probably not better, and maybe worse.
The practical difference for specific code is likely zero.
But there is a difference in language principles and the
confidence the developer can have in it.
A rule that there is never any implicit conversion or
literal inference no matter the context is simple to
understand and gives confidence.
Exceptions even in cases where it does not matter adds
the complexity of understanding when the exceptions apply
and why they do not matter. Complexity that developers
would rather avoid.
On 8/19/2025 1:26 PM, Dan Cross wrote:
In article <10823ei$3pb8v$3@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 8/19/2025 9:01 AM, Simon Clubley wrote:
On 2025-08-18, Dan Cross <cross@spitfire.i.gajendra.net> wrote:
I happen to disagree with Simon's notion of what makes for
robust programming, but to go to such an extreme as to suggest
that writing code as if logical operators don't short-circuit
is the same as not knowing the semantics of division is
specious.
That last one is an interesting example. I may not care about
short circuiting, but I am _very_ _very_ aware of the combined
unsigned integers and signed integers issues in C expressions. :-(
It also affects how I look at the same issues in other languages.
I've mentioned this before, but I think languages should give you
unsigned integers by default, and you should have to ask for
a signed integer if you really want one.
"by default" sort of implies that signedness is an attribute of the
same type.
Why not just make it two different types with different names?
Whether we follow tradition and call them integer and cardinal
or more modern style and call them int and uint is less important.
I would argue that, at this point, there's little need for a
generic "int" type anymore, and that types representing integers
as understood by the machine should explicitly include both
signedness and width. An exception may be something like,
`size_t`, which is platform-dependent, but when transferred
externally should be given an explicit size. A lot of the
guesswork and folklore that goes into understanding the
semantics of those things just disappears when you're explicit.
The integer types should have well defined width.
And they could also be called int32 and uint32.
That seems to be in fashion in low level languages
competing with C.
Many higher level languages just define that int is 32 bit,
but don't show it in the name.
On 8/19/2025 1:26 PM, Dan Cross wrote:
And despite the old admonition to make everything a symbolic
constant, things like `2 * pi * r` are perfectly readable, and
I'd argue that `TWO * pi * r` are less so.
I would say that TWO and 2 are the same regarding readability.
The problem with TWO is not readability, but lack of purpose.
There are two good reasons to introduce symbolic names for constants:
1) The name can be more self documenting than a numeric value
2) If the constant is used multiple places, then having a symbolic
name makes it easier to change the value
But neither applies.
TWO does not provide more information than 2.
And it would be very unwise to change the value of TWO
to something different from 2.
In article <108dlq4$2fi6h$4@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 8/19/2025 1:26 PM, Dan Cross wrote:
In article <10823ei$3pb8v$3@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
Whether we follow tradition and call them integer and cardinal
or more modern style and call them int and uint is less important.
I would argue that, at this point, there's little need for a
generic "int" type anymore, and that types representing integers
as understood by the machine should explicitly include both
signedness and width. An exception may be something like,
`size_t`, which is platform-dependent, but when transferred
externally should be given an explicit size. A lot of the
guesswork and folklore that goes into understanding the
semantics of those things just disappears when you're explicit.
The integer types should have well defined width.
And they could also be called int32 and uint32.
That seems to be in fashion in low level languages
competing with C.
Many higher level languages just define that int is 32 bit,
but don't show it in the name.
If by "many higher level languages" you mean languages in the
JVM and CLR ecosystem, then sure, I guess so. But it's not
universal, and I don't see how it's an improvement.
(... somehow they fucked up longint and made it 32 bit on 32 bit and
64 bit Windows but 64 bit on 64 bit *nix)
On Sun, 24 Aug 2025 19:52:52 -0400, Arne Vajhøj wrote:
(... somehow they fucked up longint and made it 32 bit on 32 bit and
64 bit Windows but 64 bit on 64 bit *nix)
It was *Microsoft* that “fucked up longint”, and nobody else. Just to be clear.
On Sun, 24 Aug 2025 19:52:52 -0400, Arne Vajhøj wrote:
(... somehow they fucked up longint and made it 32 bit on 32 bit and
64 bit Windows but 64 bit on 64 bit *nix)
It was *Microsoft* that “fucked up longint”, and nobody else. Just to be clear.
But note that DEC made a half step first: Alpha is a 64-bit machine,
and DEC made long 32-bit on Alpha.
On Mon, 25 Aug 2025 02:05:37 -0000 (UTC), Waldek Hebisch wrote:
But note that DEC made a half step first: Alpha is a 64-bit machine,
and DEC made long 32-bit on Alpha.
On DEC Unix, as on every other *nix, “int” was 32 bits and “long” was 64
bits. This applied on Alpha, too.
On 8/24/2025 7:27 PM, Dan Cross wrote:
In article <108dlq4$2fi6h$4@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 8/19/2025 1:26 PM, Dan Cross wrote:
In article <10823ei$3pb8v$3@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
Whether we follow tradition and call them integer and cardinal
or more modern style and call them int and uint is less important.
I would argue that, at this point, there's little need for a
generic "int" type anymore, and that types representing integers
as understood by the machine should explicitly include both
signedness and width. An exception may be something like,
`size_t`, which is platform-dependent, but when transferred
externally should be given an explicit size. A lot of the
guesswork and folklore that goes into understanding the
semantics of those things just disappears when you're explicit.
The integer types should have well defined width.
And they could also be called int32 and uint32.
That seems to be in fashion in low level languages
competing with C.
Many higher level languages just define that int is 32 bit,
but don't show it in the name.
If by "many higher level languages" you mean languages in the
JVM and CLR ecosystem, then sure, I guess so. But it's not
universal, and I don't see how it's an improvement.
Those are two huge groups of languages with a pretty big
market share in business applications.
Delphi provides both flavors. shortint/smallint/integer
and int8/int16/int32, byte/word/cardinal and
uint8/uint16/uint32. I believe the first are the most
widely used.
(64 bit is just int64 and uint64, because somehow they
fucked up longint and made it 32 bit on 32 bit and 64 bit
Windows but 64 bit on 64 bit *nix)
In article <108g8kk$33isk$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 8/24/2025 7:27 PM, Dan Cross wrote:
In article <108dlq4$2fi6h$4@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 8/19/2025 1:26 PM, Dan Cross wrote:
In article <10823ei$3pb8v$3@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
Whether we follow tradition and call them integer and cardinal
or more modern style and call them int and uint is less important.
I would argue that, at this point, there's little need for a
generic "int" type anymore, and that types representing integers
as understood by the machine should explicitly include both
signedness and width. An exception may be something like,
`size_t`, which is platform-dependent, but when transferred
externally should be given an explicit size. A lot of the
guesswork and folklore that goes into understanding the
semantics of those things just disappears when you're explicit.
The integer types should have well defined width.
And they could also be called int32 and uint32.
That seems to be in fashion in low level languages
competing with C.
Many higher level languages just define that int is 32 bit,
but don't show it in the name.
If by "many higher level languages" you mean languages in the
JVM and CLR ecosystem, then sure, I guess so. But it's not
universal, and I don't see how it's an improvement.
Those are two huge groups of languages with a pretty big
market share in business applications.
Market share is not the same as influence, and while the JVM/CLR
languages _do_ have a lot of users, that does not imply that all
are good languages. In fact, only a handful of languages in
each family have any significant adoption, and I don't think PL
designers are mining them for much inspiration these days.
Again, not universal, nor really an improvement over just using
explicitly sized types.
Delphi provides both flavors. shortint/smallint/integer
and int8/int16/int32, byte/word/cardinal and
uint8/uint16/uint32. I believe the first are the most
widely used.
The older names feel like they're very much looking backwards in
time.
(64 bit is just int64 and uint64, because somehow they
fucked up longint and made it 32 bit on 32 bit and 64 bit
Windows but 64 bit on 64 bit *nix)
I'd blame C for that.
On 8/29/2025 9:17 AM, Dan Cross wrote:
In article <108g8kk$33isk$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 8/24/2025 7:27 PM, Dan Cross wrote:
In article <108dlq4$2fi6h$4@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 8/19/2025 1:26 PM, Dan Cross wrote:
In article <10823ei$3pb8v$3@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
Whether we follow tradition and call them integer and cardinal
or more modern style and call them int and uint is less important.
I would argue that, at this point, there's little need for a
generic "int" type anymore, and that types representing integers
as understood by the machine should explicitly include both
signedness and width. An exception may be something like,
`size_t`, which is platform-dependent, but when transferred
externally should be given an explicit size. A lot of the
guesswork and folklore that goes into understanding the
semantics of those things just disappears when you're explicit.
The integer types should have well defined width.
And they could also be called int32 and uint32.
That seems to be in fashion in low level languages
competing with C.
Many higher level languages just define that int is 32 bit,
but don't show it in the name.
If by "many higher level languages" you mean languages in the
JVM and CLR ecosystem, then sure, I guess so. But it's not
universal, and I don't see how it's an improvement.
Those are two huge groups of languages with a pretty big
market share in business applications.
Market share is not the same as influence, and while the JVM/CLR
languages _do_ have a lot of users, that does not imply that all
are good languages. In fact, only a handful of languages in
each family have any significant adoption, and I don't think PL
designers are mining them for much inspiration these days.
Again, not universal, nor really an improvement over just using
explicitly sized types.
It is a huge domain that is totally dominated by two approaches:
[snip]
Delphi provides both flavors. shortint/smallint/integer
and int8/int16/int32, byte/word/cardinal and
uint8/uint16/uint32. I believe the first are the most
widely used.
The older names feel like they're very much looking backwards in
time.
Developers tend to like what they know.
(64 bit is just int64 and uint64, because somehow they
fucked up longint and made it 32 bit on 32 bit and 64 bit
Windows but 64 bit on 64 bit *nix)
I'd blame C for that.
Delphi is not C.
In article <108t0d4$249vm$11@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 8/29/2025 9:17 AM, Dan Cross wrote:
In article <108g8kk$33isk$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
Delphi provides both flavors. shortint/smallint/integer
and int8/int16/int32, byte/word/cardinal and
uint8/uint16/uint32. I believe the first are the most
widely used.
The older names feel like they're very much looking backwards in
time.
Developers tend to like what they know.
(64 bit is just int64 and uint64, because somehow they
fucked up longint and made it 32 bit on 32 bit and 64 bit
Windows but 64 bit on 64 bit *nix)
I'd blame C for that.
Delphi is not C.
Obviously.
But it would be foolish to assume that they weren't influenced
by matters of compatibility with C (or more specifically C++)
here, particularly given the history of Delphi as a language.
Even the name gives it away ("longint").
In article <108t0d4$249vm$11@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 8/29/2025 9:17 AM, Dan Cross wrote:
In article <108g8kk$33isk$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 8/24/2025 7:27 PM, Dan Cross wrote:
In article <108dlq4$2fi6h$4@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 8/19/2025 1:26 PM, Dan Cross wrote:
In article <10823ei$3pb8v$3@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
Whether we follow tradition and call them integer and cardinal
or more modern style and call them int and uint is less important.
I would argue that, at this point, there's little need for a
generic "int" type anymore, and that types representing integers
as understood by the machine should explicitly include both
signedness and width. An exception may be something like,
`size_t`, which is platform-dependent, but when transferred
externally should be given an explicit size. A lot of the
guesswork and folklore that goes into understanding the
semantics of those things just disappears when you're explicit.
The integer types should have well defined width.
And they could also be called int32 and uint32.
That seems to be in fashion in low level languages
competing with C.
Many higher level languages just define that int is 32 bit,
but don't show it in the name.
If by "many higher level languages" you mean languages in the
JVM and CLR ecosystem, then sure, I guess so. But it's not
universal, and I don't see how it's an improvement.
Those are two huge groups of languages with a pretty big
market share in business applications.
Market share is not the same as influence, and while the JVM/CLR
languages _do_ have a lot of users, that does not imply that all
are good languages. In fact, only a handful of languages in
each family have any significant adoption, and I don't think PL
designers are mining them for much inspiration these days.
Again, not universal, nor really an improvement over just using
explicitly sized types.
It is a huge domain that is totally dominated by two approaches:
[snip]
So? You referred to "many higher level languages". That is
qualitatively different than "a small number of languages with a
huge share of the market."
On 8/29/2025 5:38 PM, Dan Cross wrote:
In article <108t0d4$249vm$11@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 8/29/2025 9:17 AM, Dan Cross wrote:
In article <108g8kk$33isk$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
Delphi provides both flavors. shortint/smallint/integer
and int8/int16/int32, byte/word/cardinal and
uint8/uint16/uint32. I believe the first are the most
widely used.
The older names feel like they're very much looking backwards in
time.
Developers tend to like what they know.
(64 bit is just int64 and uint64, because somehow they
fucked up longint and made it 32 bit on 32 bit and 64 bit
Windows but 64 bit on 64 bit *nix)
I'd blame C for that.
Delphi is not C.
Obviously.
But it would be foolish to assume that they weren't influenced
by matters of compatibility with C (or more specifically C++)
here, particularly given the history of Delphi as a language.
Even the name gives it away ("longint").
That was also Lawrence's guess.
But the hypothesis that they wanted to follow
C/C++ is obviously not true.
On 8/29/2025 5:38 PM, Dan Cross wrote:
In article <108t0d4$249vm$11@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 8/29/2025 9:17 AM, Dan Cross wrote:
In article <108g8kk$33isk$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 8/24/2025 7:27 PM, Dan Cross wrote:
In article <108dlq4$2fi6h$4@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 8/19/2025 1:26 PM, Dan Cross wrote:
In article <10823ei$3pb8v$3@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
Whether we follow tradition and call them integer and cardinal
or more modern style and call them int and uint is less important.
I would argue that, at this point, there's little need for a
generic "int" type anymore, and that types representing integers
as understood by the machine should explicitly include both
signedness and width. An exception may be something like,
`size_t`, which is platform-dependent, but when transferred
externally should be given an explicit size. A lot of the
guesswork and folklore that goes into understanding the
semantics of those things just disappears when you're explicit.
The integer types should have well defined width.
And they could also be called int32 and uint32.
That seems to be in fashion in low level languages
competing with C.
Many higher level languages just define that int is 32 bit,
but don't show it in the name.
If by "many higher level languages" you mean languages in the
JVM and CLR ecosystem, then sure, I guess so. But it's not
universal, and I don't see how it's an improvement.
Those are two huge groups of languages with a pretty big
market share in business applications.
Market share is not the same as influence, and while the JVM/CLR
languages _do_ have a lot of users, that does not imply that all
are good languages. In fact, only a handful of languages in
each family have any significant adoption, and I don't think PL
designers are mining them for much inspiration these days.
Again, not universal, nor really an improvement over just using
explicitly sized types.
It is a huge domain that is totally dominated by two approaches:
[snip]
So? You referred to "many higher level languages". That is
qualitatively different than "a small number of languages with a
huge share of the market."
Yes - those are two different statements.
But they are both true.
And the second qualifies the first, in the sense that the
many are actually languages that matter, not purely exotic ones.
In article <108tbk2$29q30$2@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 8/29/2025 5:38 PM, Dan Cross wrote:
In article <108t0d4$249vm$11@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 8/29/2025 9:17 AM, Dan Cross wrote:
In article <108g8kk$33isk$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
Delphi provides both flavors. shortint/smallint/integer
and int8/int16/int32, byte/word/cardinal and
uint8/uint16/uint32. I believe the first are the most
widely used.
The older names feel like they're very much looking backwards in
time.
Developers tend to like what they know.
(64 bit is just int64 and uint64, because somehow they
fucked up longint and made it 32 bit on 32 bit and 64 bit
Windows but 64 bit on 64 bit *nix)
I'd blame C for that.
Delphi is not C.
Obviously.
But it would be foolish to assume that they weren't influenced
by matters of compatibility with C (or more specifically C++)
here, particularly given the history of Delphi as a language.
Even the name gives it away ("longint").
That was also Lawrence's guess.
I plonked that guy ages ago, so I don't see his responses. I
expect he knows even less about Delphi than
But the hypothesis that they wanted to follow
C/C++ is obviously not true.
You'll need to qualify that statement more before its veracity
is even within reach of being ascertained.
Obviously they were not following C and C++ in the sense that
the syntax (and much of the semantics) are based on Pascal, not
C. Clearly they wanted things like fundamental integral types
to line up with existing C code for calls across an FFI
boundary. One merely need look up the history of the language
to see that.
And of course these things evolved over time. Wirth's own
languages after Pascal exhibited semantics more closely
resembling those of C than Pascal. For instance, arrays in
Oberon do not retain their size as a fundamental aspect of their
type, one of the big complaints from Kernighan's famous critique
of Pascal: http://doc.cat-v.org/bell_labs/why_pascal/why_pascal_is_not_my_favorite_language.pdf
This is arguably a bug in both Oberon and C,
On 8/29/2025 9:21 PM, Dan Cross wrote:
And of course these things evolved over time. Wirth's own
languages after Pascal exhibited semantics more closely
resembling those of C than Pascal. For instance, arrays in
Oberon do not retain their size as a fundamental aspect of their
type, one of the big complaints from Kernighan's famous critique
of Pascal:
http://doc.cat-v.org/bell_labs/why_pascal/why_pascal_is_not_my_favorite_language.pdf
This is arguably a bug in both Oberon and C,
The languages in the Pascal family have evolved.
And Kernighan's critique anno 1981 has certainly been
addressed.
I don't see the solution being particularly C-like though.
Original 1970's Pascal:
* array passed as an address
* compiler uses dimensions known at compile time to enforce
  boundary checks
ISO Pascal, VMS Pascal, Delphi/FPC, Modula-2, Oberon:
Two types of array parameters:
- same as original 1970's Pascal
- open arrays to accept arguments of different dimensions
Open arrays:
* array passed with meta information / by descriptor (VMS terminology) /
  as object (OOP terminology)
* compiler uses dimensions passed at runtime to enforce boundary checks
C:
* arrays passed as an address
* no boundary checks
On 9/5/2025 7:41 PM, Arne Vajhøj wrote:
On 8/29/2025 9:21 PM, Dan Cross wrote:
And of course these things evolved over time. Wirth's own
languages after Pascal exhibited semantics more closely
resembling those of C than Pascal. For instance, arrays in
Oberon do not retain their size as a fundamental aspect of their
type, one of the big complaints from Kernighan's famous critique
of Pascal:
http://doc.cat-v.org/bell_labs/why_pascal/why_pascal_is_not_my_favorite_language.pdf
This is arguably a bug in both Oberon and C,
The languages in the Pascal family have evolved.
And Kernighan's critique anno 1981 has certainly been
addressed.
I don't see the solution being particularly C-like though.
Original 1970's Pascal:
* array passed as an address
* compiler uses dimensions known at compile time to enforce
  boundary checks
ISO Pascal, VMS Pascal, Delphi/FPC, Modula-2, Oberon:
Two types of array parameters:
- same as original 1970's Pascal
- open arrays to accept arguments of different dimensions
Open arrays:
* array passed with meta information / by descriptor (VMS terminology) /
  as object (OOP terminology)
* compiler uses dimensions passed at runtime to enforce boundary checks
C:
* arrays passed as an address
* no boundary checks
VMS Pascal:
$ type main.pas
program main(input,output);
type
weird_array = array [-2..-1] of array [2..3] of integer;
[external]
procedure oldstyle(a : weird_array); external;
[external]
procedure newstyle(a : array [low..upp:integer] of array [low2..upp2:integer] of integer); external;
var
a : weird_array;
i, j : integer;
begin
for i := -2 to -1 do
for j := 2 to 3 do
a[i,j] := i * j;
oldstyle(a);
newstyle(a);
end.
$ type demo.pas
module demo(input, output);
type
weird_array = array [-2..-1] of array [2..3] of integer;
[global]
procedure oldstyle(a : weird_array);
var
i, j : integer;
begin
for i := -2 to -1 do begin
for j := 2 to 3 do begin
write(a[i,j]);
end;
writeln;
end;
end;
[global]
procedure newstyle(a : array [low..upp:integer] of array [low2..upp2:integer] of integer);
var
i, j : integer;
begin
for i := lower(a, 1) to upper(a, 1) do begin
for j := lower(a, 2) to upper(a, 2) do begin
write(a[i,j]);
end;
writeln;
end;
end;
end.
$ pas main
$ pas demo
$ link main + demo
$ run main
-4 -6
-2 -3
-4 -6
-2 -3
$ type demo.c
#include <stdio.h>
void oldstyle(int *a)
{
int *data = a;
for(int i = 0; i < 2; i++)
{
for(int j = 0; j < 2; j++)
{
printf("%10d", *data);
data++;
}
printf("\n");
}
}
#include <descrip.h>
struct dsc$bounds
{
long dsc$l_l;
long dsc$l_u;
};
void newstyle(struct dsc$descriptor_nca *sa)
{
printf("length = %d\n", sa->dsc$w_length);
printf("dtype = %d%s\n", sa->dsc$b_dtype, sa->dsc$b_dtype == DSC$K_DTYPE_L ? " (DSC$K_DTYPE_L)" : "");
printf("class = %d%s\n", sa->dsc$b_class, sa->dsc$b_class == DSC$K_CLASS_NCA ? " (DSC$K_CLASS_NCA)" : "");
printf("pointer = %d\n", sa->dsc$a_pointer);
printf("dimct = %d\n", sa->dsc$b_dimct);
printf("arsize = %d\n", sa->dsc$l_arsize);
char *p = (char *)&sa[1];
int *a0 = (int *)p;
printf("address zero element = %d\n", a0);
p = p + sizeof(int *);
int *step = (int *)p;
step++;
for(int i = 0; i < sa->dsc$b_dimct; i++)
{
printf("dim %d : step = %d\n", i + 1, step[i]);
}
p = p + sa->dsc$b_dimct * sizeof(int);
struct dsc$bounds *b = (struct dsc$bounds *)p;
for(int i = 0; i < sa->dsc$b_dimct; i++)
{
printf("dim %d : low=%d high=%d\n", i, b[i].dsc$l_l, b[i].dsc$l_u);
}
int *data = (int *)sa->dsc$a_pointer;
for(int i = 0; i < 2; i++)
{
for(int j = 0; j < 2; j++)
{
printf("%10d", *data);
data++;
}
printf("\n");
}
}
$ cc demo
$ link main + demo
$ run main
-4 -6
-2 -3
length = 4
dtype = 8 (DSC$K_DTYPE_L)
class = 10 (DSC$K_CLASS_NCA)
pointer = 2060040496
dimct = 2
arsize = 16
address zero element = 2060040468
dim 1 : step = 4
dim 2 : step = -2
dim 0 : low=-2 high=-1
dim 1 : low=2 high=3
-4 -6
-2 -3
Arne
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 9/5/2025 7:41 PM, Arne Vajhøj wrote:
On 8/29/2025 9:21 PM, Dan Cross wrote:
And of course these things evolved over time. Wirth's own
languages after Pascal exhibited semantics more closely
resembling those of C than Pascal. For instance, arrays in
Oberon do not retain their size as a fundamental aspect of their
type, one of the big complaints from Kernighan's famous critique
of Pascal:
http://doc.cat-v.org/bell_labs/why_pascal/why_pascal_is_not_my_favorite_language.pdf
This is arguably a bug in both Oberon and C,
The languages in the Pascal family have evolved.
And Kernighan's critique anno 1981 has certainly been
addressed.
I don't see the solution being particularly C-like though.
Original 1970's Pascal:
* array passed as an address
* compiler uses dimensions known at compile time to enforce
  boundary checks
ISO Pascal, VMS Pascal, Delphi/FPC, Modula-2, Oberon:
Two types of array parameters:
- same as original 1970's Pascal
- open arrays to accept arguments of different dimensions
Open arrays:
* array passed with meta information / by descriptor (VMS terminology) /
  as object (OOP terminology)
* compiler uses dimensions passed at runtime to enforce boundary checks
C:
* arrays passed as an address
* no boundary checks
You are imprecise. Classic Pascal has conformant array parameters,
which pass bounds. Extended Pascal (and VMS Pascal) has schema
types, including array schemas; these are much more powerful than
conformant arrays. As with conformant arrays, bounds can be
checked partially at compile time, but checks that indices stay in
range sometimes must be done at runtime.
On 9/5/2025 9:50 PM, Waldek Hebisch wrote:
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 9/5/2025 7:41 PM, Arne Vajhøj wrote:
On 8/29/2025 9:21 PM, Dan Cross wrote:
And of course these things evolved over time. Wirth's own
languages after Pascal exhibited semantics more closely
resembling those of C than Pascal. For instance, arrays in
Oberon do not retain their size as a fundamental aspect of their
type, one of the big complaints from Kernighan's famous critique
of Pascal:
http://doc.cat-v.org/bell_labs/why_pascal/why_pascal_is_not_my_favorite_language.pdf
This is arguably a bug in both Oberon and C,
The languages in the Pascal family have evolved.
And Kernighan's critique anno 1981 has certainly been
addressed.
I don't see the solution being particularly C-like though.
Original 1970's Pascal:
* array passed as an address
* compiler uses dimensions known at compile time to enforce
  boundary checks
ISO Pascal, VMS Pascal, Delphi/FPC, Modula-2, Oberon:
Two types of array parameters:
- same as original 1970's Pascal
- open arrays to accept arguments of different dimensions
Open arrays:
* array passed with meta information / by descriptor (VMS terminology) /
  as object (OOP terminology)
* compiler uses dimensions passed at runtime to enforce boundary checks
C:
* arrays passed as an address
* no boundary checks
You are imprecise. Classic Pascal has conformant array parameters,
which pass bounds. Extended Pascal (and VMS Pascal) has schema
types, including array schemas; these are much more powerful than
conformant arrays. As with conformant arrays, bounds can be
checked partially at compile time, but checks that indices stay in
range sometimes must be done at runtime.
The story I got was that:
* Wirth Pascal did not have it (conformant array)
* ISO Pascal 1983 and 1990 added it for level 1
but not for level 0
But all before my time, so ...
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 9/5/2025 9:50 PM, Waldek Hebisch wrote:
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 9/5/2025 7:41 PM, Arne Vajhøj wrote:
Original 1970's Pascal:
* array passed as an address
* compiler uses dimensions known at compile time to enforce
  boundary checks
ISO Pascal, VMS Pascal, Delphi/FPC, Modula-2, Oberon:
Two types of array parameters:
- same as original 1970's Pascal
- open arrays to accept arguments of different dimensions
Open arrays:
* array passed with meta information / by descriptor (VMS terminology) /
  as object (OOP terminology)
* compiler uses dimensions passed at runtime to enforce boundary checks
You are imprecise. Classic Pascal has conformant array parameters,
which pass bounds. Extended Pascal (and VMS Pascal) has schema
types, including array schemas; these are much more powerful than
conformant arrays. As with conformant arrays, bounds can be
checked partially at compile time, but checks that indices stay in
range sometimes must be done at runtime.
The story I got was that:
* Wirth Pascal did not have it (conformant array)
AFAIK Wirth Pascal had it. Several ports of Wirth Pascal to different machines done by others lost conformant arrays.
* ISO Pascal 1983 and 1990 added it for level 1
but not for level 0
ISO Pascal 1983 simply sanctioned existing practice, that is,
the existence of ports without conformant arrays. IIUC the differences
between level 1 ISO 1983 Pascal and Wirth Pascal were tiny.
But all before my time, so ...
Before my time too, but some people spent effort to dig out
various historical details.
Anyway I can easily add the schema thing from extended ISO
Pascal.
program main3(input,output);
type
   weird_array = array [-2..-1] of array [2..3] of integer;
   twodim_integer_array(n1,n2,n3,n4:integer) = array [n1..n2] of array [n3..n4] of integer;
procedure extstyle(a : twodim_integer_array);
var
   i, j : integer;
begin
   for i := lower(a, 1) to upper(a, 1) do begin
      for j := lower(a, 2) to upper(a, 2) do begin
         write(a[i,j]);
      end;
      writeln;
   end;
end;
var
   a : weird_array;
   ax : twodim_integer_array(-2,-1,2,3);
   i, j : integer;
begin
   for i := -2 to -1 do
      for j := 2 to 3 do
         ax[i,j] := i * j;
   extstyle(ax);
end.
On 8/29/2025 9:21 PM, Dan Cross wrote:
In article <108tbk2$29q30$2@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 8/29/2025 5:38 PM, Dan Cross wrote:
In article <108t0d4$249vm$11@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 8/29/2025 9:17 AM, Dan Cross wrote:
In article <108g8kk$33isk$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
Delphi provides both flavors. shortint/smallint/integer
and int8/int16/int32, byte/word/cardinal and
uint8/uint16/uint32. I believe the first are the most
widely used.
The older names feel like they're very much looking backwards in
time.
Developers tend to like what they know.
(64 bit is just int64 and uint64, because somehow they
fucked up longint and made it 32 bit on 32 bit and 64 bit
Windows but 64 bit on 64 bit *nix)
I'd blame C for that.
Delphi is not C.
Obviously.
But it would be foolish to assume that they weren't influenced
by matters of compatibility with C (or more specifically C++)
here, particularly given the history of Delphi as a language.
Even the name gives it away ("longint").
That was also Lawrence's guess.
I plonked that guy ages ago, so I don't see his responses. I
expect he knows even less about Delphi than
But the hypothesis that they wanted to follow
C/C++ is obviously not true.
You'll need to qualify that statement more before its veracity
is even within reach of being ascertained.
Obviously they were not following C and C++ in the sense that
the syntax (and much of the semantics) are based on Pascal, not
C. Clearly they wanted things like fundamental integral types
to line up with existing C code for calls across an FFI
boundary. One merely need look up the history of the language
to see that.
The quotes I included were not kept just to make the post longer,
but because the comment relates to the content in them.
This is about the naming of integer types.
On 8/29/2025 9:21 PM, Dan Cross wrote:
And of course these things evolved over time. Wirth's own
languages after Pascal exhibited semantics more closely
resembling those of C than Pascal. For instance, arrays in
Oberon do not retain their size as a fundamental aspect of their
type, one of the big complaints from Kernighan's famous critique
of Pascal:
http://doc.cat-v.org/bell_labs/why_pascal/why_pascal_is_not_my_favorite_language.pdf
This is arguably a bug in both Oberon and C,
The languages in the Pascal family have evolved.
And Kernighan's critique anno 1981 has certainly been
addressed.
I don't see the solution being particularly C-like though.
Original 1970's Pascal:
[snip; irrelevant]
Arne Vajhøj <arne@vajhoej.dk> wrote:
[snip]
But all before my time, so ...
Before my time too, but some people spent effort to dig out
various historical details.