Here's kind of an old chestnut...
The question is: How can you reliably printf() a time_t value?
What conversion spec should you use?
The answer seems to be: try stuff until the compiler doesn't warn.
Note that there is now %zu for size_t, but there really should be a "Just do
the right thing - you're the compiler, you know what type the arg is, do the
right thing" spec.
It turns out that in my use case, %lu works, but how can you know?
Note, BTW, that another way to do it is to use %d and cast the time_t value to (int), but that seems kludgey.
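The usual workaround is to convert the value to a type whose conversion spec you do know. A minimal sketch, assuming only that time_t is an arithmetic type as the standard requires:

#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void) {
    time_t t = time(NULL);

    /* Fine whenever time_t is an integer type that fits in intmax_t. */
    printf("as integer: %jd\n", (intmax_t)t);

    /* Fine even if time_t is a floating type, with some loss of precision. */
    printf("as double:  %f\n", (double)t);
}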
On Sun 1/4/2026 11:19 PM, Kenny McCormack wrote:
The question is: How can you reliably printf() a time_t value?
What conversion spec should you use?
You can't. As far as the language is concerned, `time_t` is intended
to be an opaque type. It has to be a real type, so it is either an
integer or a plain floating-point type. But other than that nothing
is known about it. There's really no point in printing it.
If you still want to, you can do it in some implementation-specific
way. Which still immediately means that you can't do it "reliably",
if I understand what you mean correctly.
On Sun 1/4/2026 11:19 PM, Kenny McCormack wrote:
The question is: How can you reliably printf() a time_t value?
What conversion spec should you use?
You can't. As far as the language is concerned, `time_t` is intended to
be an opaque type. It has to be a real type, so it is either an integer
of a plain floating-point type. But other than that nothing is known
about it. There's really no point in printing it.
If you still want to, you can do it in some implementation-specific way. Which still immediately means that you can't do it "reliably", if I understand what you mean correctly.
It turns out that in my use case, %lu works, but how can you know?
Note, BTW, that another way to do it is to use %d and cast the time_t value to (int), but that seems kludgey.
Does being a "real type" imply that "time_t" is always an alias for a standard integer type or standard floating point type?
I can't think of a situation in which casting a time_t value to 'long
long' can go wrong.
In article <20260105105138.00005f0a@yahoo.com>,
Michael S <already5chosen@yahoo.com> wrote:
...
I can't think about situation in which casting time_t value to 'long
long' can go wrong.
These are all good suggestions, but in the end they are all just kludgey workarounds. My two points in posting are:
1) There really should be a generic way to print any numeric object and
have the compiler "Do The Right Thing".
2) Although time_t was the specific occasion for composing and posting
this, it, of course, applies to all the other "artificial" numeric
types. As mentioned in the OP, they seem to have come up with a
solution for size_t; it would be nice if that were generalized.
P.S. I like the rhythm of "long long can't go wrong"...
On Mon, 5 Jan 2026 12:45:27 -0000 (UTC)
gazelle@shell.xmission.com (Kenny McCormack) wrote:
P.S. I like the rhythm of "long long can't go wrong"...
By way of free association, your comment caused me to recall a short SF
story by Fritz Leiber that was written several years before I was born.
https://archive.org/details/Galaxy_v20n01_1961-10/page/n157/mode/2up
On 05/01/2026 12:45, Kenny McCormack wrote:
In article <20260105105138.00005f0a@yahoo.com>,
Michael S <already5chosen@yahoo.com> wrote:
...
I can't think about situation in which casting time_t value to 'long
long' can go wrong.
These are all good suggestions, but in the end, are all just kludgey
workarounds. My two points in posting are:
1) There really should be a generic way to print any numeric object and have the compiler "Do The Right Thing".
My original C compiler had a way to do it:
#include <stdio.h>
#include <time.h>
int main(void) {
    printf("%v\n", clock());
}
The special format "%v" (obviously only possible within a string
literal) is translated by the compiler into a suitable default for the
type of the corresponding expression.
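Standard C has no "%v", but C11's _Generic can approximate the idea. A rough sketch (FMT_OF and PRINT_VAL are made-up names, and the type list only covers the common cases, so a time_t that maps to some other type would be a compile-time error):

#include <stdio.h>
#include <time.h>

/* Pick a printf format string based on the argument's type. */
#define FMT_OF(x) _Generic((x),             \
    int:                "%d\n",             \
    unsigned int:       "%u\n",             \
    long:               "%ld\n",            \
    unsigned long:      "%lu\n",            \
    long long:          "%lld\n",           \
    unsigned long long: "%llu\n",           \
    float:              "%f\n",             \
    double:             "%f\n",             \
    long double:        "%Lf\n")

#define PRINT_VAL(x) printf(FMT_OF(x), (x))

int main(void) {
    PRINT_VAL(time(NULL));  /* works for whichever arithmetic type time_t is,
                               provided that type appears in the list above */
}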
In article <20260105152100.00002469@yahoo.com>,
Michael S <already5chosen@yahoo.com> wrote:
On Mon, 5 Jan 2026 12:45:27 -0000 (UTC)
gazelle@shell.xmission.com (Kenny McCormack) wrote:
P.S. I like the rhythm of "long long can't go wrong"...
By way of free association you comment caused me to recollect short
SF story of Fritz Leiber that was written several years before I was
born.
My free association story is that it makes me think of (and sing in my
head) the song "Heard It In A Love Song" (Marshall Tucker Band).
https://archive.org/details/Galaxy_v20n01_1961-10/page/n157/mode/2up
lynx didn't find anything useful (lots of noise, but no content) at
that URL. Care to say a bit about what it is about?
The question is: How can you reliably printf() a time_t value?
...By way of free association you comment caused me to recollect short
SF story of Fritz Leiber that was written several years before I was
born.
lynx didn't find anything useful (lots of noise, but no content) at
that URL. Care to say a bit about what it is about?
In short, beatniks in orbit.
For a longer description, ask somebody who happens to be a native English
speaker. Or, for 20 minutes, allow yourself to use a slightly less
eccentric way of browsing the web.
In article <10jfol6$2u6r8$1@news.xmission.com>,
Kenny McCormack <gazelle@shell.xmission.com> wrote:
The question is: How can you reliably printf() a time_t value?
Do you want to print it in a human-understandable format? Or a
non-binary format? Or just in a way that's re-readable?
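For human-readable output, the portable route is to go through localtime()/strftime() rather than printing the raw count; a minimal sketch:

#include <stdio.h>
#include <time.h>

int main(void) {
    char buf[64];
    time_t t = time(NULL);
    struct tm *tm = localtime(&t);   /* broken-down local time */

    if (tm && strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", tm))
        printf("%s\n", buf);
}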
On Mon, 5 Jan 2026 00:17:07 -0800
Andrey Tarasevich <noone@noone.net> wrote:
On Sun 1/4/2026 11:19 PM, Kenny McCormack wrote:
The question is: How can you reliably printf() a time_t value?
What conversion spec should you use?
You can't. As far as the language is concerned, `time_t` is intended
to be an opaque type. It has to be a real type, so it is either an
integer of a plain floating-point type. But other than that nothing
is known about it. There's really no point in printing it.
If you still want to, you can do it in some implementation-specific
way. Which still immediately means that you can't do it "reliably",
if I understand what you mean correctly.
I can't think about situation in which casting time_t value to 'long
long' can go wrong.
As Andrey pointed out, time_t can resolve to a floating-point type,
so 'long long' would go wrong if the implementation typedefs it to
float, or
double, or
long double.
On Mon, 05 Jan 2026 10:51:38 +0200, Michael S wrote:
On Mon, 5 Jan 2026 00:17:07 -0800
Andrey Tarasevich <noone@noone.net> wrote:
On Sun 1/4/2026 11:19 PM, Kenny McCormack wrote:
The question is: How can you reliably printf() a time_t value?
What conversion spec should you use?
You can't. As far as the language is concerned, `time_t` is
intended to be an opaque type. It has to be a real type, so it is
either an integer of a plain floating-point type. But other than
that nothing is known about it. There's really no point in
printing it.
If you still want to, you can do it in some implementation-specific
way. Which still immediately means that you can't do it "reliably",
if I understand what you mean correctly.
I can't think about situation in which casting time_t value to 'long
long' can go wrong.
As Andrey pointed out, time_t can resolve to a floatingpoint type,
so, "long long" would go wrong if the implementation typedefs it to
float, or
double, or
long double.
As I understand it, time_t is intended to be suitable for holding a
number of seconds ...
... (it is used for that purpose in struct timespec). ...
I can't think about situation in which casting time_t value to 'long
long' can go wrong.
On 2026-01-05 11:34, David Brown wrote:
...
As I understand it, time_t is intended to be suitable for holding a
number of seconds ...
The standard says nothing about that.
... (it is used for that purpose in struct timespec). ...
The standard says nothing to connect time_t to struct timespec.
On 05/01/2026 19:11, James Kuyper wrote:
On 2026-01-05 11:34, David Brown wrote:
...
As I understand it, time_t is intended to be suitable for holding a
number of seconds ...
The standard says nothing about that.
... (it is used for that purpose in struct timespec). ...
The standard says nothing to connect time_t to struct timespec.
7.27.1p4:
The range and precision of times representable in clock_t and time_t are implementation-defined. The timespec structure shall contain at least
the following members, in any order. The semantics of the members and
their normal ranges are expressed in the comments.
time_t tv_sec; // whole seconds -- >= 0
long tv_nsec; // nanoseconds -- [0, 999999999]
On 2026-01-05 13:28, David Brown wrote:
On 05/01/2026 19:11, James Kuyper wrote:
On 2026-01-05 11:34, David Brown wrote:
...
As I understand it, time_t is intended to be suitable for holding a
number of seconds ...
The standard says nothing about that.
... (it is used for that purpose in struct timespec). ...
The standard says nothing to connect time_t to struct timespec.
7.27.1p4:
The range and precision of times representable in clock_t and time_t are
implementation-defined. The timespec structure shall contain at least
the following members, in any order. The semantics of the members and
their normal ranges are expressed in the comments.
time_t tv_sec; // whole seconds -- >= 0
long tv_nsec; // nanoseconds -- [0, 999999999]
I'm not sure how I missed that in my search.
In the latest draft of the
standard I could find, n3685.pdf, that's in 7.21.1p6. I found struct
timespec mentioned in 7.21.1p5 with no detailed specification, and
didn't bother reading the next paragraph, which provides that
specification. If I had thought about it, I would have realized that the
same was true of struct tm, which I know from long experience has a
detailed specification.
On Mon, 5 Jan 2026 16:23:09 -0000 (UTC)[...]
Lew Pitcher <lew.pitcher@digitalfreehold.ca> wrote:
As Andrey pointed out, time_t can resolve to a floatingpoint type,
so, "long long" would go wrong if the implementation typedefs it to
float, or
double, or
long double.
Reading what you wrote literally makes no sense, so I assume that
'it' above refers to time_t rather than to 'long long'.
The rest of the post assumes 64-bit 'long long'.
'typedef float time_t' where float==IEEE binary32 is a bad idea, because
you lose your 1-second resolution after 7 months. I.e. with the Unix epoch you
lost it a year or two before Ritchie finished his first C compiler.
'typedef double time_t' means that you lose 1-second resolution ~3
orders of magnitude before you get a chance to overflow 'long long'.
Only in the case of 'typedef long double time_t' is there a chance that
overflow happens before resolution is lost. But barely so for the 80-bit
long double format prevalent on i386/AMD64 computers.
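Those thresholds are easy to sanity-check. A quick sketch, assuming IEEE-754 float/double and a 64-bit 'long long':

#include <stdio.h>

int main(void) {
    /* Above 2^(mantissa bits), consecutive integers are no longer all
       representable, i.e. the type can no longer resolve single seconds. */
    double float_limit  = 16777216.0;            /* 2^24 for binary32 */
    double double_limit = 9007199254740992.0;    /* 2^53 for binary64 */
    double llong_max    = 9223372036854775807.0; /* 2^63 - 1 */

    printf("float loses 1-second resolution after %.0f days\n",
           float_limit / 86400.0);               /* ~194 days */
    printf("double loses 1-second resolution after %.0f years\n",
           double_limit / 86400.0 / 365.25);     /* ~285 million years */
    printf("long long overflows after %.0f years\n",
           llong_max / 86400.0 / 365.25);        /* ~292 billion years */
}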
(I think Windows does as well, but the documentation is less clear,
or perhaps I'm missing something.)
As Andrey pointed out, time_t can resolve to a floatingpoint type,
In this case (64-bit Windows), it would be "%lld".
On 2026-01-05 05:32, David Brown wrote:
...
Does being a "real type" imply that "time_t" is always an alias for
a standard integer type or standard floating point type?
No, real types include integer types, which in turn include the
signed and unsigned integer types (6.2.5p22). The signed integer
types include the extended signed integer types (6.2.5p6), and the
unsigned integer types include the extended unsigned integer types
(6.2.5p8).
On Mon, 5 Jan 2026 07:37:07 -0500, James Kuyper wrote:
On 2026-01-05 05:32, David Brown wrote:
...
Does being a "real type" imply that "time_t" is always an alias for
a standard integer type or standard floating point type?
No, real types include integer types, which in turn includes the
signed and unsigned integer types (6.2.5p22). The signed integer
types include the extended signed integer types (6.2.5p6), and the
unsigned integer types include the extended unsigned integer types
(6.2.5p8).
They could have said "numeric types" to avoid confusion with the mathematical usage ...
On Mon, 5 Jan 2026 13:22:19 +0000, bart wrote:
In this case (64-bit Windows), it would be "%lld".
Section 7.8 of the C spec defines macros you can use so you don't have
to hard-code assumptions about the lengths of integers in
printf-format strings.
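For instance (a minimal sketch; the PRId64/PRIu32 macros come from <inttypes.h>, which also provides the fixed-width types via <stdint.h>):

#include <inttypes.h>
#include <stdio.h>

int main(void) {
    int64_t  big   = 9000000000;
    uint32_t small = 42;

    /* The PRI macros expand to the right length modifier for this
       implementation, so the format needs no hard-coded assumptions. */
    printf("big = %" PRId64 ", small = %" PRIu32 "\n", big, small);
}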
James Kuyper <jameskuyper@alumni.caltech.edu> writes:
On 2026-01-05 11:34, David Brown wrote:
...
As I understand it, time_t is intended to be suitable for holding a
number of seconds ...
The standard says nothing about that.
... (it is used for that purpose in struct timespec). ...
The standard says nothing to connect time_t to struct timespec.
Yes, but also no.
struct timespec contains at least the following members, in any order:
time_t tv_sec; // whole seconds -- >= 0
long tv_nsec; // nanoseconds -- [0, 999999999]
But a footnote says:
The tv_sec member is a linear count of seconds and may not have
the normal semantics of a time_t.
This makes for a simpler implementation *if* the time_t value
returned by time(), and operated on by difftime(), mktime(), ctime(), gmtime(), and localtime(), happens to hold a linear count of seconds
(as it does on most systems).
It's odd that time_t is used for two potentially very different
purposes, one as a member of struct timespec and another as used
in all other contexts.
And I would guess that a lot of code that
uses struct timespec *assumes* that its time_t member has the same
semantics as the value returned by time(NULL).
For example, as I write this the time is 2026-01-05 22:32:57.881 UTC.
The corresponding value returned by time() is 1767652377 (seconds
since the 1970 epoch, no milliseconds). An implementation could
represent the current time (the value returned by time(NULL)) as a
64-bit integer with the value 20260105223257881. But timespec_get()
would still have to set the tv_sec member to 1767652377.
It might have been cleaner either to require that time_t represents a
count of seconds, or to use a type other than time_t for the tv_sec
member of struct timespec.
I know there are systems that use something other than seconds
since 1970 in the underlying time representation, but are there any
C implementations that don't use seconds since 1970? (POSIX and
Windows both specify that.)
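For illustration, a minimal C11 sketch that prints both members, widening tv_sec by hand since its exact type and width are unspecified:

#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec ts;
    if (timespec_get(&ts, TIME_UTC) == TIME_UTC) {
        /* tv_sec is a time_t of unspecified width, so convert it explicitly;
           tv_nsec is a plain long, printed zero-padded to nine digits. */
        printf("%lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
    }
}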
On Mon, 5 Jan 2026 16:23:09 -0000 (UTC), Lew Pitcher wrote:
As Andrey pointed out, time_t can resolve to a floatingpoint type,
POSIX says time_t is an integer type <https://manpages.debian.org/time_t(3type)>.
On Tue, 6 Jan 2026 00:27:04 -0000 (UTC)...
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Section 7.8 of the C spec defines macros you can use so you don't have
to hard-code assumptions about the lengths of integers in
printf-format strings.
Did you ever try to use them? They look ugly.
On 2026-01-05 19:22, Lawrence DrCOOliveiro wrote:
On Mon, 5 Jan 2026 16:23:09 -0000 (UTC), Lew Pitcher wrote:
As Andrey pointed out, time_t can resolve to a floatingpoint type,
POSIX says time_t is an integer type
<https://manpages.debian.org/time_t(3type)>.
Which means that implementations where time_t has a floating point type cannot comply with POSIX; many implementations fail to do so.
On Tue, 06 Jan 2026 10:22:28 -0500, James Kuyper wrote:
On 2026-01-05 19:22, Lawrence DrCOOliveiro wrote:
On Mon, 5 Jan 2026 16:23:09 -0000 (UTC), Lew Pitcher wrote:
As Andrey pointed out, time_t can resolve to a floatingpoint type,
POSIX says time_t is an integer type
Actually, POSIX does not say that.
<https://manpages.debian.org/time_t(3type)>.
The Debian manpages do not necessarily represent current POSIX standards.
The current online POSIX standards pages say that, among others, time_t "shall be defined as arithmetic types of an appropriate length" (https://pubs.opengroup.org/onlinepubs/9799919799/basedefs/sys_types.h.html)
Lew Pitcher wrote:
On Tue, 06 Jan 2026 10:22:28 -0500, James Kuyper wrote:
On 2026-01-05 19:22, Lawrence DrCOOliveiro wrote:
On Mon, 5 Jan 2026 16:23:09 -0000 (UTC), Lew Pitcher wrote:
As Andrey pointed out, time_t can resolve to a floatingpoint type,
POSIX says time_t is an integer type
Actually, POSIX does not say that.
<https://manpages.debian.org/time_t(3type)>.
The Debian manpages do not necessarily represent current POSIX standards.
The current online POSIX standards pages say that, among others, time_t
"shall be defined as arithmetic types of an appropriate length"
(https://pubs.opengroup.org/onlinepubs/9799919799/basedefs/sys_types.h.html)
Looks like you looked at an old version. Currently there is:
|
| [CX] time_t shall be an integer type with a width (see <stdint.h>) of at least 64 bits.
| [...]
| Austin Group Defect 1462 is applied, changing time_t to have a width of at least 64 bits.
It refers to this issue:
<https://www.austingroupbugs.net/view.php?id=1462>
On Tue, 06 Jan 2026 17:00:43 +0100, Michael Bäuerle wrote:...
Lew Pitcher wrote:
The current online POSIX standards pages say that, among others, time_t
"shall be defined as arithmetic types of an appropriate length"
(https://pubs.opengroup.org/onlinepubs/9799919799/basedefs/sys_types.h.html)
Looks like you looked at an old version. Currently there is:
|
| [CX] time_t shall be an integer type with a width (see <stdint.h>) of at least 64 bits.
| [...]
| Austin Group Defect 1462 is applied, changing time_t to have a width of at least 64 bits.
Do you have a URL reference for this? I got my POSIX references direct from the Open Group's
"current standards" links. The time(2), <time.h> and <sys/types.h> webpages are all
headed:
The Open Group Base Specifications Issue 8
IEEE Std 1003.1-2024
Perhaps they have not yet applied the defect remediation to the online reference.
On 2026-01-06 04:29, Michael S wrote:
On Tue, 6 Jan 2026 00:27:04 -0000 (UTC)...
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Section 7.8 of the C spec defines macros you can use so you don't
have to hard-code assumptions about the lengths of integers in
printf-format strings.
Did you ever try to use them? They look ugly.
Which is more important, correctness or beauty?
If you know that an expression has one of the standard-named types or
typedefs for which there is a corresponding printf() specifier, you
should use that specifier. Otherwise, if you know that an expression
has one of the types declared in <stdint.h>, you should use the
corresponding macro #defined in <inttypes.h> to print it.
If you have a value that is not known to be of one of those types, but is known
to be convertible to one of those types without change of value, you
should convert it to one of those types.
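Concretely, those three cases might look like this (a sketch; the variable names are only for illustration, and time_t stands in for a type with no specifier or macro of its own):

#include <inttypes.h>
#include <stddef.h>
#include <stdio.h>
#include <time.h>

int main(void) {
    size_t   nbytes = sizeof(double);
    uint32_t id     = 7;
    time_t   now    = time(NULL);

    printf("%zu\n", nbytes);        /* standard specifier exists: use it   */
    printf("%" PRIu32 "\n", id);    /* <stdint.h> type: use its PRI macro  */
    printf("%jd\n", (intmax_t)now); /* otherwise: convert to a known type  */
}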
On Tue, 6 Jan 2026 10:31:41 -0500
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
If you know that an expression has one of the standard-named types or
typedefs for with there is a corresponding printf() specifier, you
should use that specifier. Otherwise, if you know that an expression
has one of the types declared in <stdint.h>, you should use the
corresponding macro #defined in <inttypes.h> to print it.
I should? Really?
Sorry, James, but you have no authority to make such statements.
On Tue, 6 Jan 2026 10:31:41 -0500
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 2026-01-06 04:29, Michael S wrote:
On Tue, 6 Jan 2026 00:27:04 -0000 (UTC)...
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Section 7.8 of the C spec defines macros you can use so you don't
have to hard-code assumptions about the lengths of integers in
printf-format strings.
Did you ever try to use them? They look ugly.
Which is more important, correctness or beauty?
It depends.
When I know for sure that incorrectness has no consequences, like
in case of using %u to print 'unsigned long' on target with 32-bit
longs, or like using %llu to print 'unsigned long' on target with
64-bit longs, then beauty wins. Easily.
On Tue, 6 Jan 2026 10:31:41 -0500
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 2026-01-06 04:29, Michael S wrote:
On Tue, 6 Jan 2026 00:27:04 -0000 (UTC)...
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Section 7.8 of the C spec defines macros you can use so you don't
have to hard-code assumptions about the lengths of integers in
printf-format strings.
Did you ever try to use them? They look ugly.
Which is more important, correctness or beauty?
It depends.
When I know for sure that incorrectness has no consequences, like
in case of using %u to print 'unsigned long' on target with 32-bit
longs, or like using %llu to print 'unsigned long' on target with
64-bit longs, then beauty wins. Easily.
If you know that an expression has one of the standard-named types or
typedefs for with there is a corresponding printf() specifier, you
should use that specifier. Otherwise, if you know that an expression
has one of the types declared in <stdint.h>, you should use the
corresponding macro #defined in <inttypes.h> to print it.
I should? Really?
Sorry, James, but you have no authority to make such statements.
I agree that the macros in <stdint.h> are ugly, and I rarely
use them. If I want to print an integer value whose type I don't
know, I'll probably cast to a predefined type that I know to be
wide enough and use the specifier for that type. Though now that
I think about it, I'm more likely to do that in throwaway code;
for production code, I'd be more likely to use the <stdint.h> macros.
On 2026-01-06 13:05, Michael S wrote:
in case of using %u to print 'unsigned long' on target with 32-bit
longs, or like using %llu to print 'unsigned long' on target with
64-bit longs, then beauty wins. Easily.
You've got it backwards. "%u" is the correct specifier to use for
unsigned long on all platforms, whether unsigned long is 32, 36, or even
48 bits.
On 07/01/2026 00:44, James Kuyper wrote:
On 2026-01-06 13:05, Michael S wrote:
in case of using %u to print 'unsigned long' on target with 32-bit
longs, or like using %llu to print 'unsigned long' on target with
64-bit longs, then beauty wins. Easily.
You've got it backwards. "%u" is the correct specifier to use for
unsigned long on all platforms, whether unsigned long is 32, 36, or
even 48 bits.
So not "%lu"?
Michael S <already5chosen@yahoo.com> writes:
On Tue, 6 Jan 2026 10:31:41 -0500
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 2026-01-06 04:29, Michael S wrote:
On Tue, 6 Jan 2026 00:27:04 -0000 (UTC)...
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Section 7.8 of the C spec defines macros you can use so you
don't have to hard-code assumptions about the lengths of
integers in printf-format strings.
Did you ever try to use them? They look ugly.
Which is more important, correctness or beauty?
It depends.
When I know for sure that incorrectness has no consequences, like
in case of using %u to print 'unsigned long' on target with 32-bit
longs, or like using %llu to print 'unsigned long' on target with
64-bit longs, then beauty wins. Easily.
Seriously?
An example:
unsigned long n = 42;
printf("%u\n", n); // incorrect
printf("%lu\n", n); // correct
Are you really saying that the second version is so much uglier
than the first that you'd rather write incorrect code?
If unsigned int and unsigned long happen to be the same size, both
are likely to print "42". But what if your code is later compiled
on a system with 32-bit unsigned int and 64-bit unsigned long?
Even if I were certain the code would never be ported (and such
certainty is often unjustified), I'd much rather use the correct
code than waste time figuring out which incorrect code will happen
to "work" on the current system, with the current version of the
compiler and runtime library. Oh, and gcc and clang both warn
about an incorrect format string.
I agree that the macros in <stdint.h> are ugly, and I rarely
use them. If I want to print an integer value whose type I don't
know, I'll probably cast to a predefined type that I know to be
wide enough and use the specifier for that type. Though now that
I think about it, I'm more likely to do that in throwaway code;
for production code, I'd be more likely to use the <stdint.h> macros.
[...]
On Tue, 06 Jan 2026 16:29:04 -0800
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
On Tue, 6 Jan 2026 10:31:41 -0500
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 2026-01-06 04:29, Michael S wrote:
On Tue, 6 Jan 2026 00:27:04 -0000 (UTC)...
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Section 7.8 of the C spec defines macros you can use so you
don't have to hard-code assumptions about the lengths of
integers in printf-format strings.
Did you ever try to use them? They look ugly.
Which is more important, correctness or beauty?
It depends.
When I know for sure that incorrectness has no consequences, like
in case of using %u to print 'unsigned long' on target with 32-bit
longs, or like using %llu to print 'unsigned long' on target with
64-bit longs, then beauty wins. Easily.
Seriously?
An example:
unsigned long n = 42;
printf("%u\n", n); // incorrect
printf("%lu\n", n); // correct
Are you really saying that the second version is so much uglier
than the first that you'd rather write incorrect code?
No, I don't think that it is much uglier. At worst, I think that the
correct version is a tiny bit uglier. Not enough for beauty to win over
"correctness", even when correctness is non-consequential.
I hoped that you had followed the sub-thread from the beginning and had not
lost the context yet.
Which is (everywhere except LIN64):
uint32_t n = 42;
printf("n = %u\n", n);           // incorrect
printf("n = %" PRIu32 "\n", n);  // correct
or on LIN64:
uint64_t n = 42;
printf("n = %llu\n", n);         // incorrect
printf("n = %" PRIu64 "\n", n);  // correct
Here in my book beauty wins by a landslide.
Although really it is not that beauty wins; it's that ugliness loses.
I am happy that in practice your position is not too different from my
position. It's just your irresistible urge to defend the "right"
thing in NG discussions that creates an appearance of disagreement.
On 2026-01-05 03:17, Andrey Tarasevich wrote:
On Sun 1/4/2026 11:19 PM, Kenny McCormack wrote:
The question is: How can you reliably printf() a time_t value?
What conversion spec should you use?
You can't. As far as the language is concerned, `time_t` is intended
to be an opaque type. It has to be a real type, ...
In C99, it was only required to be an arithmetic type. I pointed out
that this would permit it to be, for example, double _Imaginary. [...]
Michael S <already5chosen@yahoo.com> writes:
On Tue, 6 Jan 2026 10:31:41 -0500
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
If you know that an expression has one of the standard-named types or
typedefs for with there is a corresponding printf() specifier, you
should use that specifier. Otherwise, if you know that an expression
has one of the types declared in <stdint.h>, you should use the
corresponding macro #defined in <inttypes.h> to print it.
I should? Really?
Sorry, James, but you have no authority to make such statements.
James is paraphrasing the C standard.
On Wed, 7 Jan 2026 01:14:21 +0000
bart <bc@freeuk.com> wrote:
On 07/01/2026 00:44, James Kuyper wrote:
On 2026-01-06 13:05, Michael S wrote:
in case of using %u to print 'unsigned long' on target with 32-bit
longs, or like using %llu to print 'unsigned long' on target with
64-bit longs, then beauty wins. Easily.
You've got it backwards. "%u" is the correct specifier to use for
unsigned long on all platforms, whether unsigned long is 32, 36, or
even 48 bits.
So not "%lu"?
gcc and clang maintainers certainly think so.
On 07/01/2026 11:41, Michael S wrote:
On Wed, 7 Jan 2026 01:14:21 +0000
bart <bc@freeuk.com> wrote:
On 07/01/2026 00:44, James Kuyper wrote:
On 2026-01-06 13:05, Michael S wrote:
in case of using %u to print 'unsigned long' on target with
32-bit longs, or like using %llu to print 'unsigned long' on
target with 64-bit longs, then beauty wins. Easily.
You've got it backwards. "%u" is the correct specifier to use for
unsigned long on all platforms, whether unsigned long is 32, 36,
or even 48 bits.
So not "%lu"?
gcc and clang maintainers certainly think so.
They think it is correct or not correct? If I compile this:
#include <stdio.h>
int main() {
    unsigned long a = 0;
    printf("%u", a);
}
then gcc complains (given suitable options):
warning: format '%u' expects argument of type 'unsigned int', but
argument 2 has type 'long unsigned int' [-Wformat=]
That suggests it is not correct.
James Kuyper <jameskuyper@alumni.caltech.edu> writes:...
On 2026-01-05 03:17, Andrey Tarasevich wrote:
You can't. As far as the language is concerned, `time_t` is intended
to be an opaque type. It has to be a real type, ...
In C99, it was only required to be an arithmetic type. I pointed out
that this would permit it to be, for example, double _Imaginary. [...]
It's hard to imagine how time_t being an imaginary type could
provide the semantics described in the C standard for time_t.
scott@slp53.sl.home (Scott Lurndal) writes:
Michael S <already5chosen@yahoo.com> writes:
On Tue, 6 Jan 2026 10:31:41 -0500
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
If you know that an expression has one of the standard-named types or
typedefs for with there is a corresponding printf() specifier, you
should use that specifier. Otherwise, if you know that an expression
has one of the types declared in <stdint.h>, you should use the
corresponding macro #defined in <inttypes.h> to print it.
I should? Really?
Sorry, James, but you have no authority to make such statements.
James is paraphrasing the C standard.
Really? What passage in the C standard is being paraphrased?
On a different point, I used time_t as an example. It would have
been better to use ptrdiff_t instead, since <inttypes.h> has a macro
for that type, and doesn't have one for time_t.
Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
Michael S <already5chosen@yahoo.com> writes:
On Tue, 6 Jan 2026 10:31:41 -0500
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 2026-01-06 04:29, Michael S wrote:
On Tue, 6 Jan 2026 00:27:04 -0000 (UTC)...
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Section 7.8 of the C spec defines macros you can use so you don't
have to hard-code assumptions about the lengths of integers in
printf-format strings.
Did you ever try to use them? They look ugly.
Which is more important, correctness or beauty?
It depends.
When I know for sure that incorrectness has no consequences, like
in case of using %u to print 'unsigned long' on target with 32-bit
longs, or like using %llu to print 'unsigned long' on target with
64-bit longs, then beauty wins. Easily.
Seriously?
An example:
unsigned long n = 42;
printf("%u\n", n); // incorrect
printf("%lu\n", n); // correct
Are you really saying that the second version is so much uglier
than the first that you'd rather write incorrect code?
I suspect he may have been referring to code that needs
to build for both 32-bit and 64-bit targets. One might
typedef 'uint64' to be unsigned long long on both targets
and just use %llu for the format string. BTDT.
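That convention might look something like this (a sketch; "uint64" is the project-local name mentioned above, not a standard type):

#include <stdio.h>

/* Project-local alias: unsigned long long is at least 64 bits on every
   target, so a single specifier (%llu) works everywhere. */
typedef unsigned long long uint64;

int main(void) {
    uint64 count = 1234567890123ULL;
    printf("count = %llu\n", count);
}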
On Wed, 7 Jan 2026 12:45:30 -0500, James Russell Kuyper Jr. wrote:
On a different point, I used time_t as an example. It would have
been better to use ptrdiff_t instead, since <inttypes.h> has a macro
for that type, and doesn't have one for time_t.
This is why you have configure scripts, so they can figure out the
right types to use for building on your platform.
scott@slp53.sl.home (Scott Lurndal) writes:
Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
Michael S <already5chosen@yahoo.com> writes:
On Tue, 6 Jan 2026 10:31:41 -0500
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 2026-01-06 04:29, Michael S wrote:
On Tue, 6 Jan 2026 00:27:04 -0000 (UTC)...
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Section 7.8 of the C spec defines macros you can use so you
don't have to hard-code assumptions about the lengths of
integers in printf-format strings.
Did you ever try to use them? They look ugly.
Which is more important, correctness or beauty?
It depends.
When I know for sure that incorrectness has no consequences, like
in case of using %u to print 'unsigned long' on target with 32-bit
longs, or like using %llu to print 'unsigned long' on target with
64-bit longs, then beauty wins. Easily.
Seriously?
An example:
unsigned long n = 42;
printf("%u\n", n); // incorrect
printf("%lu\n", n); // correct
Are you really saying that the second version is so much uglier
than the first that you'd rather write incorrect code?
I suspect he may have been referring to code that needs
to build for both 32-bit and 64-bit targets. One might
typedef 'uint64' to be unsigned long long on both targets
and just use %llu for the format string. BTDT.
In the quoted paragraph above, Michael wrote about using %u to print
unsigned long, not about using %u to print some type hidden behind
a typedef. If he didn't mean that, he can say so.
But even if he meant to talk about printing, say, uint64_t values,
my point stands.
I wouldn't define my own "uint64" type. I'd just use "uint64_t",
defined in <stdint.h>. And I'd use one of several *correct* ways
to print uint64_t values.
Michael, if you'd care to clarify, given:
unsigned long n = 42;
printf("%u\n", n); // incorrect
printf("%lu\n", n); // correct
(and assuming that unsigned int and unsigned long are the same width
on the current implementation), do you really prefer the version
marked as "incorrect"?
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Wed, 7 Jan 2026 12:45:30 -0500, James Russell Kuyper Jr. wrote:
On a different point, I used time_t as an example. It would have
been better to use ptrdiff_t instead, since <inttypes.h> has a
macro for that type, and doesn't have one for time_t.
This is why you have configure scripts, so they can figure out the
right types to use for building on your platform.
I don't follow. ptrdiff_t is defined in <stddef.h>, and is the
correct type for the result of subtracting two pointers. What
relevant information would a configure script give you?
On Wed, 07 Jan 2026 13:28:45 -0800
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
scott@slp53.sl.home (Scott Lurndal) writes:
Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
Michael S <already5chosen@yahoo.com> writes:
On Tue, 6 Jan 2026 10:31:41 -0500
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 2026-01-06 04:29, Michael S wrote:
On Tue, 6 Jan 2026 00:27:04 -0000 (UTC)...
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Section 7.8 of the C spec defines macros you can use so you
don't have to hard-code assumptions about the lengths of
integers in printf-format strings.
Did you ever try to use them? They look ugly.
Which is more important, correctness or beauty?
It depends.
When I know for sure that incorrectness has no consequences, like
in case of using %u to print 'unsigned long' on target with 32-bit
longs, or like using %llu to print 'unsigned long' on target with
64-bit longs, then beauty wins. Easily.
Seriously?
An example:
unsigned long n = 42;
printf("%u\n", n); // incorrect
printf("%lu\n", n); // correct
Are you really saying that the second version is so much uglier
than the first that you'd rather write incorrect code?
I suspect he may have been referring to code that needs
to build for both 32-bit and 64-bit targets. One might
typedef 'uint64' to be unsigned long long on both targets
and just use %llu for the format string. BTDT.
In the quoted paragraph above, Michael wrote about using %u to print
unsigned long, not about using %u to print some type hidden behind
a typedef. If he didn't mean that, he can say so.
But even if he meant to talk about printing, say, uint64_t values,
my point stands.
I wouldn't define my own "uint64" type. I'd just use "uint64_t",
defined in <stdint.h>. And I'd use one of several *correct* ways
to print uint64_t values.
Michael, if you'd care to clarify, given:
unsigned long n = 42;
printf("%u\n", n); // incorrect
printf("%lu\n", n); // correct
(and assuming that unsigned int and unsigned long are the same width
on the current implementation), do you really prefer the version
marked as "incorrect"?
I hoped that I had already clarified that point more than once.
Obviously, I hoped wrong.
In the case I am talking about, n is declared as uint32_t.
uint32_t is an alias of 'unsigned long' on 32-bit embedded targets, on
32-bit Linux, on 32-bit Windows and on 64-bit Windows. It is an
alias of 'unsigned int' on 64-bit Linux.
Sometimes I move code between targets myself; sometimes, rarely,
other people do it. I don't want to have different versions of the code
and I don't want to use the ugly standard specifiers. Between two pretty
and working variants I prefer the shorter one. Partly because it is
guaranteed to work correctly on all my targets, including LIN64, but
more importantly (in practice, 64-bit Linux is a very rare target in my
daily routine) just because it is shorter. And I don't care that it is
formally "incorrect" on my more common targets. Or maybe not
"formally", but both gcc and clang think so.
Michael S <already5chosen@yahoo.com> writes:
On Wed, 07 Jan 2026 13:28:45 -0800
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
scott@slp53.sl.home (Scott Lurndal) writes:
Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
Michael S <already5chosen@yahoo.com> writes:
On Tue, 6 Jan 2026 10:31:41 -0500
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 2026-01-06 04:29, Michael S wrote:
On Tue, 6 Jan 2026 00:27:04 -0000 (UTC)...
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Section 7.8 of the C spec defines macros you can use so you
don't have to hard-code assumptions about the lengths of
integers in printf-format strings.
Did you ever try to use them? They look ugly.
Which is more important, correctness or beauty?
It depends.
When I know for sure that incorrectness has no consequences,
like in case of using %u to print 'unsigned long' on target
with 32-bit longs, or like using %llu to print 'unsigned long'
on target with 64-bit longs, then beauty wins. Easily.
Seriously?
An example:
unsigned long n = 42;
printf("%u\n", n); // incorrect
printf("%lu\n", n); // correct
Are you really saying that the second version is so much uglier
than the first that you'd rather write incorrect code?
I suspect he may have been referring to code that needs
to build for both 32-bit and 64-bit targets. One might
typedef 'uint64' to be unsigned long long on both targets
and just use %llu for the format string. BTDT.
In the quoted paragraph above, Michael wrote about using %u to
print unsigned long, not about using %u to print some type hidden
behind a typedef. If he didn't mean that, he can say so.
But even if he meant to talk about printing, say, uint64_t values,
my point stands.
I wouldn't define my own "uint64" type. I'd just use "uint64_t",
defined in <stdint.h>. And I'd use one of several *correct* ways
to print uint64_t values.
Michael, if you'd care to clarify, given:
unsigned long n = 42;
printf("%u\n", n); // incorrect
printf("%lu\n", n); // correct
(and assuming that unsigned int and unsigned long are the same
width on the current implementation), do you really prefer the
version marked as "incorrect"?
I hoped that I already clarified that point more than one time.
Obviously, I hoped wrong.
And you still haven't. I asked a specific question above. What is
your answer? Would you use a "%u" format to print a value that's
defined with type unsigned long? I inferred from what you wrote
that your answer would be yes. If your answer is no, I'll gladly
accept that. (And if so, what you wrote previously was unclear,
but I'm not going to worry about that if you clarify what you meant)
You've previously indicated that you find "%lu" uglier than "%u",
and that that's relevant to which one you would use. Do you still
think so?
I would appreciate direct yes or no answers to both of those
questions.
In the case I am talking about n declared as uint32_t.
uint32_t is an alias of 'unsigned long' on 32-bit embedded targets,
on 32-bit Linux, on 32-bit Windows and on 64-bit Windows. It is
alias of 'unsigned int' on 64-bit Linux.
Sometimes I move code between targets by myself, sometimes, rarely,
other people do it. I don't want to have different versions of the
code and I don't want to use ugly standard specifiers. Between two
pretty and working variants I prefer the shorter one. Partly
because it is guaranteed to work correctly on all my targets,
including LIN64, but more importantly (in practice, 64-bit Linux is
a very rare target in my daily routine) just because it is shorter.
And I don't care that it is formally "incorrect" on my more common
targets. Or may be not "formally", but both gcc and clang think so.
So you'd write code that happens to work on some implementations
rather than code that's correct on all implementations.
You know that unsigned long is at least 32 bits wide, and therefore
that converting a uint32_t value to unsigned long will not lose
information, and therefore that
uint32_t x = 42;
printf("%lu\n", (unsigned long)x);
will work correctly. You can do this without using the ugly
<inttypes.h> macros. Why wouldn't you?
Sure, you can write code that happens to work on the only
implementation you care about, but in my opinion, aside from being
dangerous, it's just too much work. I don't care whether uint32_t is
defined as unsigned int or unsigned long on a particular
implementation, and I don't have to care.
On Wed, 07 Jan 2026 16:00:19 -0800[...]
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
On Wed, 07 Jan 2026 13:28:45 -0800
Michael, if you'd care to clarify, given:
unsigned long n = 42;
printf("%u\n", n); // incorrect
printf("%lu\n", n); // correct
(and assuming that unsigned int and unsigned long are the same
width on the current implementation), do you really prefer the
version marked as "incorrect"?
I hoped that I already clarified that point more than one time.
Obviously, I hoped wrong.
And you still haven't. I asked a specific question above. What is
your answer? Would you use a "%u" format to print a value that's
defined with type unsigned long? I inferred from what you wrote
that your answer would be yes. If your answer is no, I'll gladly
accept that. (And if so, what you wrote previously was unclear,
but I'm not going to worry about that if you clarify what you meant)
When n is declared as 'unsigned long' directly rather than via the uint32_t
alias, then the answer is 'no'.
You've previously indicated that you find "%lu" uglier than "%u",
and that that's relevant to which one you would use. Do you still
think so?
I would appreciate direct yes or no answers to both of those
questions.
It depends on how n is declared.
When it is declared as 'unsigned long', then "%lu" is not uglier.
When it is defined as uint32_t, it is uglier, despite the fact that on the
absolute majority of the targets that I care about the latter is an
alias of the former.
Let me see if I understand you correctly.
uint32_t n = 42;
printf("%u\n", n);
printf("%lu\n", n);
In this context, you find "%lu" uglier than "%u"?
On 2026-01-07 08:06, Tim Rentsch wrote:
scott@slp53.sl.home (Scott Lurndal) writes:
Michael S <already5chosen@yahoo.com> writes:
On Tue, 6 Jan 2026 10:31:41 -0500
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
If you know that an expression has one of the standard-named types or
typedefs for which there is a corresponding printf() specifier, you
should use that specifier. Otherwise, if you know that an expression
has one of the types declared in <stdint.h>, you should use the
corresponding macro #defined in <inttypes.h> to print it.
I should? Really?
Sorry, James, but you have no authority to make such statements.
James is paraphrasing the C standard.
Really? What passage in the C standard is being paraphrased?
This is advice, not paraphrased text from the C standard. [...]
"James Russell Kuyper Jr." <jameskuyper@alumni.caltech.edu> writes:
On 2026-01-07 08:06, Tim Rentsch wrote:
scott@slp53.sl.home (Scott Lurndal) writes:
Michael S <already5chosen@yahoo.com> writes:
On Tue, 6 Jan 2026 10:31:41 -0500
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
If you know that an expression has one of the standard-named types or
typedefs for which there is a corresponding printf() specifier, you
should use that specifier. Otherwise, if you know that an expression
has one of the types declared in <stdint.h>, you should use the
corresponding macro #defined in <inttypes.h> to print it.
I should? Really?
Sorry, James, but you have no authority to make such statements.
James is paraphrasing the C standard.
Really? What passage in the C standard is being paraphrased?
This is advice, not paraphrased text from the C standard. [...]
I was responding to Scotty Lurndal's statement that the C
standard was being paraphrased (by someone, it didn't matter to
me who). I don't care about whether his statement is true; my
interest is only in what part of the C standard he thinks is
being paraphrased. He is in a position to answer that question,
and more to the point he is the only person who is.
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
"James Russell Kuyper Jr." <jameskuyper@alumni.caltech.edu> writes:
On 2026-01-07 08:06, Tim Rentsch wrote:
scott@slp53.sl.home (Scott Lurndal) writes:
Michael S <already5chosen@yahoo.com> writes:
On Tue, 6 Jan 2026 10:31:41 -0500
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
If you know that an expression has one of the standard-named
types or typedefs for with there is a corresponding printf()
specifier, you should use that specifier. Otherwise, if you
know that an expression has one of the types declared in
<stdint.h>, you should use the corresponding macro #defined in
<inttypes.h> to print it.
I should? Really?
Sorry, James, but you have no authority to make such statements.
James is paraphrasing the C standard.
Really? What passage in the C standard is being paraphrased?
This is advice, not paraphrased text from the C standard. [...]
I was responding to Scotty Lurndal's statement that the C
standard was being paraphrased (by someone, it didn't matter to
me who). I don't care about whether his statement is true; my
interest is only in what part of the C standard he thinks is
being paraphrased. He is in a position to answer that question,
and more to the point he is the only person who is.
It's pretty clear that the standard describes the printf
function and the methods used to match the format characters
to the data types of the arguments. The fact that James
framed that as advice doesn't change interpretation of
the text of the standard, whether or not you consider
that to be a paraphrase.
"The main rules for paraphrasing are to fully understand the
original text, restate its core idea in your own words and
sentence structure, use synonyms, and always cite the original
source to avoid plagiarism, even if the wording is different."
And it is spelled "Scott".
On Wed, 07 Jan 2026 16:00:19 -0800...
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
So you'd write code that happens to work on some implementations
rather than code that's correct on all implementations.
No, it is correct on all implementations. The idea that in C, as opposed to
C++, two unsigned integer types of the same size are somehow
different is, IMHO, an abomination. And that is one not especially
common case in which I don't care about the opinion of the Standard.
I also don't care. Since for more than a decade* I haven't had a target
with 'int' shorter than 32 bits, I just use %u. It takes me zero
thinking.
BTW, I am always aware of the exact sizes of the basic types of the target
that I work on. I don't feel comfortable without such knowledge. That's
how my mind works. It has problems with overly abstract abstractions.
On 2026-01-07 19:38, Michael S wrote:
On Wed, 07 Jan 2026 16:00:19 -0800...
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
So you'd write code that happens to work on some implementations
rather than code that's correct on all implementations.
No, it is correct on all implementation. Idea that in C, as opposed
to C++, two unsigned integer types of the same size are somehow
different is, IMHO, an abomination. And that is one not especially
common case in which I don't care about opinion of the Standard.
We're not talking about two unsigned integer types with the same size.
We're talking about unsigned long, which can be any size >= 32 bits,
and uint32_t, which can only be exactly 32 bits. Your code is NOT
portable to a platform where unsigned long is greater than 32 bits.
...
I also don't care. Since for more than decade* I didn't have target
with 'int' shorter than 32 bits, I just use %u. It takes me zero
thinking.
As a general rule, I find that people who claim a decision requires
no thought generally are referring to a decision that should have
been made differently if sufficient thought had been put into it.
This is a prime example.
BTW, I am always aware of exact sizes of the basic types of the
target that I work on. I don't feel comfotable without such
knowledge. That how my mind works. It has problems with too
abstract abstractions.
I'd have no problem with your approach if you hadn't falsely claimed
that "It is correct on all platforms".
There's nothing wrong with
code that is intentionally platform specific. Platform-specific code
that the author incorrectly believes to be "correct on all platforms"
is a problem.
On Thu, 8 Jan 2026 19:31:13 -0500
"James Russell Kuyper Jr." <jameskuyper@alumni.caltech.edu> wrote:
On 2026-01-07 19:38, Michael S wrote:
On Wed, 07 Jan 2026 16:00:19 -0800...
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
So you'd write code that happens to work on some implementations
rather than code that's correct on all implementations.
No, it is correct on all implementation. Idea that in C, as opposed
to C++, two unsigned integer types of the same size are somehow
different is, IMHO, an abomination. And that is one not especially
common case in which I don't care about opinion of the Standard.
We're not talking about two unsigned integer types with same size.
We're talking about unsigned long, which can be any size >= 32 bits,
and uint32_t, which can only be exactly 32 bits. Your code is NOT
portable to a platform where unsigned long is greater than 32 bits.
I don't know how you came to discussions of what is possible.
My statement was concrete. It was about platforms like Windows (of all flavors) and 2-3 specific 32-bit embedded targets that I currently
care about.
On all these platforms uint32_t is alias of 'unsigned long' which is
32-bit wide. 'unsigned int' is also 32-bit wide.
I claim that *on these platforms* uint32_t and 'unsigned int' are *not* different types. I don't care what the Standard says about it.
I do care about what gcc says about it because I am annoyed by warnings
that I consider pointless.
Printing uint32_t values on these platforms with %u specifier, apart
from advantage of being shorter, has advantage of being undoubtedly
correct on LIN64. Unlike printing with %lu.
On 09/01/2026 13:18, Michael S wrote:
On Thu, 8 Jan 2026 19:31:13 -0500
"James Russell Kuyper Jr." <jameskuyper@alumni.caltech.edu> wrote:
On 2026-01-07 19:38, Michael S wrote:
On Wed, 07 Jan 2026 16:00:19 -0800...
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
So you'd write code that happens to work on some implementations rather than code that's correct on all implementations.
No, it is correct on all implementation. Idea that in C, as opposed
to C++, two unsigned integer types of the same size are somehow different is, IMHO, an abomination. And that is one not especially common case in which I don't care about opinion of the Standard.
We're not talking about two unsigned integer types with same size.
We're talking about unsigned long, which can be any size >= 32 bits,
and uint32_t, which can only be exactly 32 bits. Your code is NOT portable to a platform where unsigned long is greater than 32 bits.
I don't know how you came to discussions of what is possible.
My statement was concrete. It was about platforms like Windows (of all flavors) and 2-3 specific 32-bit embedded targets that I currently
care about.
On all these platforms uint32_t is alias of 'unsigned long' which is
32-bit wide. 'unsigned int' is also 32-bit wide.
I claim that *on these platforms* uint32_t and 'unsigned int' are *not* different types. I don't care to what the Standard says about it.
Of course they are different types. In C, "unsigned int" and "unsigned long" are different types. The standard says so - and it is the
standard that defines the language.
I do care about what gcc says about it because I am annoyed by warnings that I consider pointless.
The warnings are not pointless, despite what you might think. And of course gcc is not going to modify its warnings to pander to someone who
has their own personal ideas about what C should be. We /all/ have
ideas about how C could be better for our own needs. But outside the
realm of personal languages where the single user also designed the
language and wrote the compiler, you have to work with the language as
it is defined.
For a clear example of the differences between unsigned int and unsigned long, look at the generated code here:
<https://godbolt.org/z/hdjz6Y7vY>
That is for embedded 32-bit ARM, where "uint32_t" is "unsigned long",
and is the same size as "unsigned int". Then try swapping the compiler
to the 32-bit ARM gcc Linux version - here "uint32_t" is "unsigned int",
and again the same size as "unsigned long". Look at the differences in
the code.
It doesn't matter if /you/ think that all 32-bit integer types should be
the same - in C, they are not. And therefore in C compilers, they are
not the same.
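I can't tell exactly what the linked godbolt example contains, but a minimal sketch of the kind of function where the difference can show up in generated code, assuming the compiler applies the strict-aliasing rules, would be:

#include <stdint.h>

/* If uint32_t is unsigned long, an unsigned int lvalue may not access it,
   so the compiler can assume *p and *q do not overlap and keep *p in a
   register across the store.  If uint32_t is unsigned int, the two
   pointers may alias and *p must be reloaded after the store. */
uint32_t sum_twice(uint32_t *p, unsigned int *q)
{
    uint32_t s = *p;
    *q = 0;
    s += *p;
    return s;
}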
Printing uint32_t values on these platforms with %u specifier, apart
from advantage of being shorter, has advantage of being undoubtedly
correct on LIN64. Unlike printing with %lu.
But printing uint32_t with "%u" on 32-bit EABI ARM is not correct - it
is UB. It will /probably/ work, but maybe some day you will come across
a situation where it will not.
I have a lot of trouble understanding why you would go out of your way
to knowingly write incorrect code - prioritising tiny, irrelevant
savings in source code space over correct, guaranteed, portable code
that can be automatically checked by tools.
scott@slp53.sl.home (Scott Lurndal) writes:
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
I was responding to Scotty Lurndal's statement that the C
standard was being paraphrased (by someone, it didn't matter to
me who). I don't care about whether his statement is true; my
interest is only in what part of the C standard he thinks is
being paraphrased. He is in a position to answer that question,
and more to the point he is the only person who is.
It's pretty clear that the standard describes the printf
function and the methods used to match the format characters
to the data types of the arguments. The fact that James
framed that as advice doesn't change interpretation of
the text of the standard, whether or not you consider
that to be a paraphrase.
"The main rules for paraphrasing are to fully understand the
original text, restate its core idea in your own words and
sentence structure, use synonyms, and always cite the original
source to avoid plagiarism, even if the wording is different."
I see where the C standard says the macros in inttypes.h are
suitable for use with printf (and scanf). That isn't at all
the same as saying people should use them.
Just because
something can be done doesn't mean it should be done.
On Fri, 2026-01-09 at 13:49 +0100, David Brown wrote:[...]
[SNIP]
I have a lot of trouble understanding why you would go out of your way
to knowingly write incorrect code - prioritising tiny, irrelevant
savings in source code space over correct, guaranteed, portable code
that can be automatically checked by tools.
Snippet from ClassGuidelines.txt
...
wrd(or notation)
This function converts the argument object (a type, class,..) to text
My reply might not be directly on the topic of the current post. Just jumped in to reply.
It looks to me that the format character MUST match the type passed to printf, otherwise UB.
On Thu, 8 Jan 2026 19:31:13 -0500...
"James Russell Kuyper Jr." <jameskuyper@alumni.caltech.edu> wrote:
I'd have no problem with your approach if you hadn't falsely claimed
that "It is correct on all platforms".
Which I didn't.
No, it is correct on all implementation.
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:...
"James Russell Kuyper Jr." <jameskuyper@alumni.caltech.edu> writes:
On 2026-01-07 08:06, Tim Rentsch wrote:
scott@slp53.sl.home (Scott Lurndal) writes:
Michael S <already5chosen@yahoo.com> writes:
On Tue, 6 Jan 2026 10:31:41 -0500
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
If you know that an expression has one of the standard-named types or
typedefs for which there is a corresponding printf() specifier, you
should use that specifier. Otherwise, if you know that an expression
has one of the types declared in <stdint.h>, you should use the
corresponding macro #defined in <inttypes.h> to print it.
...James is paraphrasing the C standard.
It's pretty clear that the standard describes the printf
function and the methods used to match the format characters
to the data types of the arguments. The fact that James
framed that as advice doesn't change interpretation of
the text of the standard, whether or not you consider
that to be a paraphrase.
On 2026-01-09 07:18, Michael S wrote:
On Thu, 8 Jan 2026 19:31:13 -0500...
"James Russell Kuyper Jr." <jameskuyper@alumni.caltech.edu> wrote:
I'd have no problem with your approach if you hadn't falsely
claimed that "It is correct on all platforms".
Which I didn't.
On 2026-01-07 19:38, Michael S wrote:
...
No, it is correct on all implementation.
On Sat, 10 Jan 2026 22:02:03 -0500
"James Russell Kuyper Jr." <jameskuyper@alumni.caltech.edu> wrote:
On 2026-01-09 07:18, Michael S wrote:
On Thu, 8 Jan 2026 19:31:13 -0500...
"James Russell Kuyper Jr." <jameskuyper@alumni.caltech.edu> wrote:
I'd have no problem with your approach if you hadn't falsely
claimed that "It is correct on all platforms".
Which I didn't.
On 2026-01-07 19:38, Michael S wrote:
...
No, it is correct on all implementation.
The quote is taken out of context.
The context was that on platforms that have properties (a) and (b) (see below) printing variables declared as uint32_t via %u is probably UB according to the Standard (I don't know for sure, however it is
probable),
but it can't cause trouble with a production C compiler, or
with any C compiler that is made with the intention of being used rather than crafted to prove theoretical points.
Properties are:
a) uint32_t aliased to 'unsigned long'
b) 'unsigned int' is at least 32-bit wide.
I never claimed that it is a good idea on targets with 'unsigned int'
that is narrower.
Michael S <already5chosen@yahoo.com> writes:
On Sat, 10 Jan 2026 22:02:03 -0500
"James Russell Kuyper Jr." <jameskuyper@alumni.caltech.edu> wrote:
On 2026-01-09 07:18, Michael S wrote:
On Thu, 8 Jan 2026 19:31:13 -0500...
"James Russell Kuyper Jr." <jameskuyper@alumni.caltech.edu>
wrote:
I'd have no problem with your approach if you hadn't falsely
claimed that "It is correct on all platforms".
Which I didn't.
On 2026-01-07 19:38, Michael S wrote:
...
No, it is correct on all implementation.
The quote is taken out of context.
The context was that on platforms that have properties (a) and (b)
(see below) printing variables declared as uint32_t via %u is
probably UB according to the Standard (I don't know for sure,
however it is probable),
I'm sure. uint32_t is an alias for some predefined integer type.
This:
uint32_t n = 42;
printf("%u\n", n);
has undefined behavior *unless* uint32_t happens to be an alias for
unsigned int in the current implementation -- not just any 32-bit
unsigned integer type, only unsigned int.
If uint32_t is an alias for unsigned long (which implies that
unsigned long is exactly 32 bits), then the call's behavior is
undefined. (It might happen to "work".)
If uint32_t and unsigned long have different sizes, it still might
happen to "work", depending on calling conventions. Passing a
32-bit argument and telling printf to expect a 64-bit value clearly
has undefined behavior, but perhaps both happen to be passed in 64-bit registers, for example.
but it can't cause troubles with production C compiler.
Or with any C compiler that is made in intention of being used
rather than crafted to prove theoretical points.
Properties are:
a) uint32_t aliased to 'unsigned long'
Not guaranteed by the language (and not true on the implementations
I use most often).
b) 'unsigned int' is at least 32-bit wide.
Not guaranteed by the language (though it happens to be guaranteed by
POSIX).
I never claimed that it is good idea on targets with 'unsigned int'
that is narrower.
I claim that it's not a good idea on any target.
I find it *much* easier to write portable code than to spend time
figuring out what non-portable code happens to work on the
platforms I happen to care about today.
uint32_t n = 42;
printf("%lu\n", (unsigned long)n);
unsigned long is guaranteed by the language to be at least 32 bits.
The conversion is guaranteed not to lose information. The format
matches the type of the argument. And the code will work correctly
on any conforming hosted implementation. (It might involve an
unnecessary 32 to 64 bit conversion, but given the overhead of
printf, that's unlikely to be a problem -- and if it is, I can use
the appropriate macro from <inttypes.h>.)
And to my eyes, using "%u" with a uint32_t argument is *ugly*.
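The <inttypes.h> alternative mentioned above would look roughly like this (a sketch, assuming a hosted C99 implementation):

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint32_t n = 42;
    /* PRIu32 expands to the conversion letters that match uint32_t exactly */
    printf("n = %" PRIu32 "\n", n);
    return 0;
}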
On Sun, 11 Jan 2026 04:59:47 -0800
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
On Sat, 10 Jan 2026 22:02:03 -0500
"James Russell Kuyper Jr." <jameskuyper@alumni.caltech.edu> wrote:
On 2026-01-09 07:18, Michael S wrote:
On Thu, 8 Jan 2026 19:31:13 -0500...
"James Russell Kuyper Jr." <jameskuyper@alumni.caltech.edu>
wrote:
I'd have no problem with your approach if you hadn't falsely
claimed that "It is correct on all platforms".
Which I didn't.
On 2026-01-07 19:38, Michael S wrote:
...
> No, it is correct on all implementation.
The quote is taken out of context.
The context was that on platforms that have properties (a) and (b)
(see below) printing variables declared as uint32_t via %u is
probably UB according to the Standard (I don't know for sure,
however it is probable),
I'm sure. uint32_t is an alias for some predefined integer type.
This:
uint32_t n = 42;
printf("%u\n", n);
has undefined behavior *unless* uint32_t happens to an alias for
unsigned int in the current implementation -- not just any 32-bit
unsigned integer type, only unsigned int.
If uint32_t is an alias for unsigned long (which implies that
unsigned long is exactly 32 bits), then the call's behavior is
undefined. (It might happen to "work".)
What exactly, assuming that conditions (a) and (b) are fulfilled, should an implementation do to prevent it from working?
I mean, short of completely crazy things that would get the maintainer
immediately fired?
If uint32_t and unsigned long have different sizes, it still might
happen happen to "work", depending on calling conventions. Passing a
32-bit argument and telling printf to expect a 64-bit value clearly
has undefined behavior, but perhaps both happen to be passed in 64-bit
registers, for example.
And that is the sort of intimate knowledge of the ABI that I don't want to exploit, as already mentioned in my other post in this sub-thread.
On 11/01/2026 14:32, Michael S wrote:
On Sun, 11 Jan 2026 04:59:47 -0800
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
On Sat, 10 Jan 2026 22:02:03 -0500
"James Russell Kuyper Jr." <jameskuyper@alumni.caltech.edu>
wrote:
On 2026-01-09 07:18, Michael S wrote:
On Thu, 8 Jan 2026 19:31:13 -0500...
"James Russell Kuyper Jr." <jameskuyper@alumni.caltech.edu>
wrote:
I'd have no problem with your approach if you hadn't falsely
claimed that "It is correct on all platforms".
Which I didn't.
On 2026-01-07 19:38, Michael S wrote:
...
> No, it is correct on all implementation.
The quote is taken out of context.
The context was that on platforms that have properties (a) and (b)
(see below) printing variables declared as uint32_t via %u is
probably UB according to the Standard (I don't know for sure,
however it is probable),
I'm sure. uint32_t is an alias for some predefined integer type.
This:
uint32_t n = 42;
printf("%u\n", n);
has undefined behavior *unless* uint32_t happens to an alias for
unsigned int in the current implementation -- not just any 32-bit
unsigned integer type, only unsigned int.
If uint32_t is an alias for unsigned long (which implies that
unsigned long is exactly 32 bits), then the call's behavior is
undefined. (It might happen to "work".)
What exactly, assuming that conditions (a) and (b) fulfilled, should implementation do to prevent it from working?
I mean short of completely crazy things that will make maintainer immediately fired?
If an architecture has 32-bit "unsigned long", then "unsigned int" is necessarily also 32-bit (since "unsigned int" is always at least
32-bit,
and "unsigned long" cannot be smaller than "unsigned int").
The very fact that you listed "unsigned int is at least 32-bit wide"
as an assumption shows you are not well versed with the basics of C
standards in this area.
On Sun, 11 Jan 2026 16:34:28 +0100
David Brown <david.brown@hesbynett.no> wrote:
On 11/01/2026 14:32, Michael S wrote:
On Sun, 11 Jan 2026 04:59:47 -0800
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
On Sat, 10 Jan 2026 22:02:03 -0500
"James Russell Kuyper Jr." <jameskuyper@alumni.caltech.edu>
wrote:
On 2026-01-09 07:18, Michael S wrote:
On Thu, 8 Jan 2026 19:31:13 -0500...
"James Russell Kuyper Jr." <jameskuyper@alumni.caltech.edu>
wrote:
I'd have no problem with your approach if you hadn't falsely
claimed that "It is correct on all platforms".
Which I didn't.
On 2026-01-07 19:38, Michael S wrote:
...
> No, it is correct on all implementation.
The quote is taken out of context.
The context was that on platforms that have properties (a) and (b)
(see below) printing variables declared as uint32_t via %u is
probably UB according to the Standard (I don't know for sure,
however it is probable),
I'm sure. uint32_t is an alias for some predefined integer type.
This:
uint32_t n = 42;
printf("%u\n", n);
has undefined behavior *unless* uint32_t happens to an alias for
unsigned int in the current implementation -- not just any 32-bit
unsigned integer type, only unsigned int.
If uint32_t is an alias for unsigned long (which implies that
unsigned long is exactly 32 bits), then the call's behavior is
undefined. (It might happen to "work".)
What exactly, assuming that conditions (a) and (b) fulfilled, should
implementation do to prevent it from working?
I mean short of completely crazy things that will make maintainer
immediately fired?
If an architecture has 32-bit "unsigned long", then "unsigned int" is
necessarily also 32-bit (since "unsigned int" is always at least
32-bit,
I am pretty sure that it is wrong.
The C Standard does not require 'unsigned int' to be wider than 16 bits.
and "unsigned long" cannot be smaller than "unsigned int").
The very fact that you listed "unsigned int is at least 32-bit wide"
as an assumption shows you are not well versed with the basics of C
standards in this area.
I am not well versed in the Standard. But in this particular case you
are the one who doesn't know it.
Michael S <already5chosen@yahoo.com> writes:
On Sat, 10 Jan 2026 22:02:03 -0500
"James Russell Kuyper Jr." <jameskuyper@alumni.caltech.edu> wrote:
On 2026-01-09 07:18, Michael S wrote:
On Thu, 8 Jan 2026 19:31:13 -0500
"James Russell Kuyper Jr." <jameskuyper@alumni.caltech.edu>
wrote:
...
I'd have no problem with your approach if you hadn't falsely
claimed that "It is correct on all platforms".
Which I didn't.
On 2026-01-07 19:38, Michael S wrote:
...
No, it is correct on all implementation.
The quote is taken out of context.
The context was that on platforms that have properties (a) and (b)
(see below) printing variables declared as uint32_t via %u is
probably UB according to the Standard (I don't know for sure,
however it is probable), but it can't cause troubles with
production C compiler. Or with any C compiler that is made in
intention of being used rather than crafted to prove theoretical
points. Properties are:
a) uint32_t aliased to 'unsigned long'
b) 'unsigned int' is at least 32-bit wide.
It seems unlikely that any implementation would make such a
choice. Can you name one that does?
On Sun, 11 Jan 2026 04:59:47 -0800
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
On Sat, 10 Jan 2026 22:02:03 -0500
"James Russell Kuyper Jr." <jameskuyper@alumni.caltech.edu> wrote:
On 2026-01-09 07:18, Michael S wrote:
On Thu, 8 Jan 2026 19:31:13 -0500...
"James Russell Kuyper Jr." <jameskuyper@alumni.caltech.edu>
wrote:
I'd have no problem with your approach if you hadn't falsely
claimed that "It is correct on all platforms".
Which I didn't.
On 2026-01-07 19:38, Michael S wrote:
...
No, it is correct on all implementation.
The quote is taken out of context.
The context was that on platforms that have properties (a) and (b)
(see below) printing variables declared as uint32_t via %u is
probably UB according to the Standard (I don't know for sure,
however it is probable),
I'm sure. uint32_t is an alias for some predefined integer type.
This:
uint32_t n = 42;
printf("%u\n", n);
has undefined behavior *unless* uint32_t happens to an alias for
unsigned int in the current implementation -- not just any 32-bit
unsigned integer type, only unsigned int.
If uint32_t is an alias for unsigned long (which implies that
unsigned long is exactly 32 bits), then the call's behavior is
undefined. (It might happen to "work".)
What exactly, assuming that conditions (a) and (b) fulfilled, should implementation do to prevent it from working?
I mean short of completely crazy things that will make maintainer
immediately fired?
If uint32_t and unsigned long have different sizes, it still might
happen happen to "work", depending on calling conventions. Passing a
32-bit argument and telling printf to expect a 64-bit value clearly
has undefined behavior, but perhaps both happen to be passed in 64-bit
registers, for example.
And that is sort of intimate knowledge of the ABI that I don't want to exploit, as already mentioned in my other post in this sub-thread.
but it can't cause troubles with production C compiler.
Or with any C compiler that is made in intention of being used
rather than crafted to prove theoretical points.
Properties are:
a) uint32_t aliased to 'unsigned long'
Not guaranteed by the language (and not true on the implementations
I use most often).
Did I ever say that it is guaranteed by the language or that it is
universal in any other way?
Normally you have much better reading comprehension than you are demonstrating in this discussion. I'd guess that it's because I somehow
caused you to become angry.
I never claimed that it is good idea on targets with 'unsigned int'
that is narrower.
I claim that it's not a good idea on any target.
I find it *much* easier to write portable code than to spend time
figuring out what non-portable code will happens to work on the
platforms I happen to care about today.
uint32_t n = 42;
printf("%lu\n", (unsigned long)n);
unsigned long is guaranteed by the language to be at least 32 bits.
The conversion is guaranteed not to lose information. The format
matches the type of the argument. And the code will work correctly
on any conforming hosted implementation. (It might involve an
unnecessary 32 to 64 bit conversion, but given the overhead of
printf, that's unlikely to be a problem -- and if it is, I can use
the appropriate macro from <inttypes.h>.)
And to my eyes, using "%u" with a uint32_t argument is *ugly*.
To be fair, it is not ideal.
The solution that I would prefer would be universal adoption of
Microsoft's size specifiers I32 and I64. They are not going to win a
beauty competition, but in practice they are a lot more convenient to
use than the standard macros and are equally good at carrying the programmer's intentions.
Microsoft has strong influence in the committee, but was not able to push
it into C11, where they successfully forced the hands of other members on
a few much bigger and more controversial issues.
I don't know what that means. Maybe there is a solid technical reason
behind the non-standardization of these size specifiers, or maybe there is
no reason and Microsoft simply never tried.
Michael S <already5chosen@yahoo.com> writes:[...]
[...]I mean short of completely crazy things that will make maintainer
immediately fired?
Most likely nothing.
On Sun, 11 Jan 2026 11:51:43 -0800[...]
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
Properties are:
a) uint32_t aliased to 'unsigned long'
b) 'unsigned int' is at least 32-bit wide.
It seems unlikely that any implementation would make such a
choice. Can you name one that does?
Four out of four targets for which I write C programs for a living in this decade:
- Altera Nios2 (nios2-elf-gcc)
- Arm Cortex-M bare metal (arm-none-eabi-gcc)
- Win32-i386, various compilers
- Win64-Amd64,various compilers
Well, if I were to be pedantic, then in this decade I also wrote several programs for Arm32 Linux, where I don't know whether uint32_t is an alias
of 'unsigned int' or 'unsigned long', a few programs for AMD64 Linux,
where I know that uint32_t is an alias of 'unsigned long', and maybe one program for ARM64 Linux that is the same as AMD64 Linux.
But all those outliers together constitute a tiny fraction of the code
that I wrote recently.
Michael S <already5chosen@yahoo.com> writes:
The solution that I would prefer would be universal adaption of
Microsoft's size specifiers I32 and I64. They are not going to win
beauty competition, but in practice they are a lot more convenient to
use than standard macros and are equally good at carrying programmer's
intentions.
The relative beauty of a feature that isn't available hardly seems
relevant.
The ideal solution is to write correct, and preferably portable,
code in the first place. There are often good reasons to write
non-portable code, but I suggest that finding the correct format
string to be ugly is not one of them.
(Microsoft's documentation says that "I32" prefix applies to an
argument of type __int32 or unsigned __int32. I don't know whether
__int32 is compatible with int, with long, or neither, and I don't
much care. I don't know what Microsoft guarantees about printf
with incompatible types that happen to have the same size.)
Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
Michael S <already5chosen@yahoo.com> writes:
On Sat, 10 Jan 2026 22:02:03 -0500
"James Russell Kuyper Jr." <jameskuyper@alumni.caltech.edu> wrote:
On 2026-01-09 07:18, Michael S wrote:
On Thu, 8 Jan 2026 19:31:13 -0500
"James Russell Kuyper Jr." <jameskuyper@alumni.caltech.edu> wrote:
...
I'd have no problem with your approach if you hadn't falsely
claimed that "It is correct on all platforms".
Which I didn't.
On 2026-01-07 19:38, Michael S wrote:
...
No, it is correct on all implementation.
The quote is taken out of context.
The context was that on platforms that have properties (a) and (b) (see
below) printing variables declared as uint32_t via %u is probably UB
according to the Standard (I don't know for sure, however it is
probable),
I'm sure. uint32_t is an alias for some predefined integer type.
Very likely, but I don't think the C standard requires it. TTBOMU
the C standard allows the possibility of an implementation where
uint32_t is a type distinct from any other nameable type, and yet
the implementation could still be conforming.
Michael S <already5chosen@yahoo.com> writes:
On Sun, 11 Jan 2026 11:51:43 -0800[...]
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
Properties are:
a) uint32_t aliased to 'unsigned long'
b) 'unsigned int' is at least 32-bit wide.
It seems unlikely that any implementation would make such a
choice. Can you name one that does?
Four out of four target for which I write C programs for living in this
decade:
- Altera Nios2 (nios2-elf-gcc)
- Arm Cortex-M bare metal (arm-none-eabi-gcc)
- Win32-i386, various compilers
- Win64-Amd64,various compilers
I find that surprising. I just tried a test program that prints
the name of the type uint32_t is an alias for (using _Generic),
and it's an alias for unsigned int on every implementation I tried.
(Your properties are limited to systems with 32-bit int and long.)
For an implementation where int and long are both 32 bits, it
wouldn't have surprised me for uint32_t to be an alias either for
unsigned int or for unsigned long, and I wouldn't care either way
beyond idle curiosity, but all the implementations I've tried choose
to use unsigned int.
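(Not the actual test program, but a minimal C11 sketch of that kind of _Generic check:)

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* The default branch catches an implementation where uint32_t is an
       extended integer type distinct from both standard candidates. */
    const char *name = _Generic((uint32_t)0,
                                unsigned int:  "unsigned int",
                                unsigned long: "unsigned long",
                                default:       "something else");
    printf("uint32_t is %s\n", name);
    return 0;
}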
Well, if I would be pedantic, then in this decade I also wrote several
programs for Arm32 Linux, where I don't know whether uint32_t is alias
of 'unsigned int' or 'unsigned long', few programs for AMD64 Linux,
where I know that uint32_t is an alias of 'unsigned long' and may be one
program for ARM64 Linux that is the same as AMD64 Linux.
But all those outliers together constitute a tiny fraction of the code
that I wrote recently.
One advantage of my approach is that I don't have to know or care
what the underlying type of uint32_t is.
Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
Michael S <already5chosen@yahoo.com> writes:[...]
I mean short of completely crazy things that will make maintainer
immediately fired?
Most likely nothing.[...]
Sorry about the duplicate post (server problems).
On 11/01/2026 23:56, Keith Thompson wrote:
Michael S <already5chosen@yahoo.com> writes:
The solution that I would prefer would be universal adaption of
Microsoft's size specifiers I32 and I64. They are not going to win
beauty competition, but in practice they are a lot more convenient
to use than standard macros and are equally good at carrying
programmer's intentions.
The relative beauty of a feature that isn't available hardly seems relevant.
The ideal solution is to write correct, and preferably portable,
code in the first place. There are often good reasons to write non-portable code, but I suggest that fiding the correct format
string to be ugly is not one of them.
(Microsoft's documentation says that "I32" prefix applies to an
argument of type __int32 or unsigned __int32. I don't know whether
__int32 is compatible with int, with long, or neither, and I don't
much care. I don't know what Microsoft guarantees about printf
with incompatible types that happen to have the same size.)
C23 includes length specifiers with explicit bit counts, so "%w32u"
is for an unsigned integer argument of 32 bits:
"""
wN Specifies that a following b, B, d, i, o, u, x, or X conversion
specifier applies to an integer argument with a specific width where
N is a positive decimal integer with no leading zeros (the argument
will have been promoted according to the integer promotions, but its
value shall be converted to the unpromoted type); or that a following
n conversion specifier applies to a pointer to an integer type
argument with a width of N bits. All minimum-width integer types
(7.22.1.2) and exact-width integer types (7.22.1.1) defined in the
header <stdint.h> shall be supported. Other supported values of N are implementation-defined.
"""
That looks to me that it would be a correct specifier for uint32_t,
and should also be fully defined behaviour for unsigned int and
unsigned long if these are 32 bits wide.
On Mon, 12 Jan 2026 08:21:43 +0100
David Brown <david.brown@hesbynett.no> wrote:
On 11/01/2026 23:56, Keith Thompson wrote:
Michael S <already5chosen@yahoo.com> writes:
The solution that I would prefer would be universal adaption of
Microsoft's size specifiers I32 and I64. They are not going to win
beauty competition, but in practice they are a lot more convenient
to use than standard macros and are equally good at carrying
programmer's intentions.
The relative beauty of a feature that isn't available hardly seems
relevant.
The ideal solution is to write correct, and preferably portable,
code in the first place. There are often good reasons to write
non-portable code, but I suggest that fiding the correct format
string to be ugly is not one of them.
(Microsoft's documentation says that "I32" prefix applies to an
argument of type __int32 or unsigned __int32. I don't know whether
__int32 is compatible with int, with long, or neither, and I don't
much care. I don't know what Microsoft guarantees about printf
with incompatible types that happen to have the same size.)
C23 includes length specifiers with explicit bit counts, so "%w32u"
is for an unsigned integer argument of 32 bits:
"""
wN Specifies that a following b, B, d, i, o, u, x, or X conversion
specifier applies to an integer argument with a specific width where
N is a positive decimal integer with no leading zeros (the argument
will have been promoted according to the integer promotions, but its
value shall be converted to the unpromoted type); or that a following
n conversion specifier applies to a pointer to an integer type
argument with a width of N bits. All minimum-width integer types
(7.22.1.2) and exact-width integer types (7.22.1.1) defined in the
header <stdint.h> shall be supported. Other supported values of N are
implementation-defined.
"""
That looks to me that it would be a correct specifier for uint32_t,
and should also be fully defined behaviour for unsigned int and
unsigned long if these are 32 bits wide.
It sounds very good.
Except that none of my four targets of major interest supports C23 at
the moment, especially at the level of the standard library.
For one of them (Nios2), in the absence of something VERY unexpected,
there will never be support (gcc dropped support for Nios2 2 or 3 years ago).
For the other three, it will take time. I can't even guess how long,
except that I know that supported versions of arm-none-eabi-gcc lag two
years behind "hosted" x86-64 and ARM64 versions, so I can guess that it
would take significant time to catch up.
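One way to cope until then, sketched with a hypothetical project-level macro (HAVE_C23_WN_FORMATS is not a real feature-test macro, and __STDC_VERSION__ alone says nothing about what the target's C library accepts):

#include <inttypes.h>
#include <stdio.h>

/* HAVE_C23_WN_FORMATS: hypothetical, defined per target only when the
   C library is known to implement the C23 wN length modifier. */
#ifdef HAVE_C23_WN_FORMATS
#define FMT_U32 "%w32u"
#else
#define FMT_U32 "%" PRIu32
#endif

int main(void)
{
    uint32_t n = 42;
    printf("n = " FMT_U32 "\n", n);
    return 0;
}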
David Brown <david.brown@hesbynett.no> writes:
[...]
C23 includes length specifiers with explicit bit counts, so "%w32u" is
for an unsigned integer argument of 32 bits:
"""
wN Specifies that a following b, B, d, i, o, u, x, or X conversion
specifier applies to an integer argument with a specific width
where N is a positive decimal integer with no leading zeros
(the argument will have been promoted according to the integer
promotions, but its value shall be converted to the unpromoted
type); or that a following n conversion specifier applies to a
pointer to an integer type argument with a width of N bits. All
minimum-width integer types (7.22.1.2) and exact-width integer
types (7.22.1.1) defined in the header <stdint.h> shall be
supported. Other supported values of N are implementation-defined.
"""
That looks to me that it would be a correct specifier for uint32_t,
Yes, so for example this:
uint32_t n = 42;
printf("n = %w32u\n", n);
is correct, if I'm reading it correctly. It's also correct for uint_least32_t, which is expected to be the same type as uint32_t
if the latter exists. There's also support for the [u]int_fastN_t
types, using for example "%wf32u" in place of "%w32u".
and should also be fully defined behaviour for unsigned int and
unsigned long if these are 32 bits wide.
No, I don't think C23 says that. If int and long happen to be the same width, they are still incompatible, and there is no printf format
specifier that has defined behavior for both.
That first sentence is a bit ambiguous
wN Specifies that a following b, B, d, i, o, u, x, or X conversion
specifier applies to an integer argument with a specific width ...
but I don't think it means that it must accept *any* integer type
of the specified width.
Later in the same paragraph, it says that all [u]intN_t and
[u]int_leastN_t types shall be supported -- all such *types*, not
all such *widths*. And it doesn't say that the predefined types
shall be supported.
Paragraph 9 says:
fprintf shall behave as if it uses va_arg with a type argument
naming the type resulting from applying the default argument
promotions to the type corresponding to the conversion specification
and then converting the result of the va_arg expansion to the type
corresponding to the conversion specification.
And in the description for the va_arg macro (whose second argument
is a type name):
If *type* is not compatible with the type of the actual
next argument (as promoted according to the default argument
promotions), the behavior is undefined, except for the following
cases: ...
Corresponding signed and unsigned types are supported if the value
is representable in both, but there's no provision for mixing int
and long even if they have the same width.
If printf is implemented using <stdarg.h>, what type name can it
pass to the va_arg() macro given a "%w32u" specification? It can
only pass uint32_t or uint_least32_t (or it can pass unsigned int
or unsigned long *if* that type is compatible with the uint32_t).
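To make that concrete, a toy va_arg consumer in the style of such an implementation (a sketch, not how any real libc is written):

#include <stdarg.h>
#include <stdio.h>

/* Handles its arguments the way a printf implementation might handle "%u":
   va_arg names unsigned int, so passing an argument of an incompatible
   type (e.g. unsigned long, even at the same width) is undefined behavior. */
static void print_unsigned(int count, ...)
{
    va_list ap;
    va_start(ap, count);
    for (int i = 0; i < count; i++) {
        unsigned int v = va_arg(ap, unsigned int);
        printf("%u\n", v);
    }
    va_end(ap);
}

int main(void)
{
    print_unsigned(2, 1u, 2u);
    return 0;
}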
(And C23 adds a requirement that [u]int_leastN_t is the same type as [u]intN_t if the latter exists; perhaps this is why.)
Prior to C23, there is no conversion specification that's valid for
both int and long, even if they're the same width. Changing that
in C23 would have been a significant change, but there's no mention
of it, even in a footnote.
The "%w..." format specifiers are simpler (and IMHO less ugly)
than the macros in <inttypes.h>, but they don't add any fundamental
new capability.
Given the format specifier "%w32u", the corresponding argument must
be of type uint32_t, or it can be of type int32_t and representable
in both, or it can be of a type compatible with [u]int32_t. I expect
that in most or all implementations, the undefined behavior of
passing an incompatible type with the same width and representation
will appear as if it "worked", but a compile-time warning is likely
if the format is a string literal.
And of course if you want to print a uint32_t value, you can always
cast it to unsigned long and use "%lu".
gcc has supported the format, along with much of C23, since gcc 13,
and ARM's gcc-based toolchain version 13.2 is from October 2023. (The current version is 15.2 from December 2025.) But I don't know about
library support - that is a very different matter. (Compiler support
for printf really just means checking the format specifiers match the parameters.)
David Brown <david.brown@hesbynett.no> writes:
[...]
Context: %wN and %wfN printf length modifier, new in C23.
gcc has supported the format, along with much of C23, since gcc 13,
and ARM's gcc-based toolchain version 13.2 is from October 2023. (The
current version is 15.2 from December 2025.) But I don't know about
library support - that is a very different matter. (Compiler support
for printf really just means checking the format specifiers match the
parameters.)
Of course printf is implemented in the library, not in the compiler.
gcc has had format checking for %wN and %wfN since release 13, but
that's useless in the absence of library support.
Support in glibc was added 2023-06-19 and released in version 2.39.
Other C library implementations may or may not support it.
On 13/01/2026 01:06, Keith Thompson wrote:
David Brown <david.brown@hesbynett.no> writes:
[...]
Context: %wN and %wfN printf length modifier, new in C23.
gcc has supported the format, along with much of C23, since gcc 13,
and ARM's gcc-based toolchain version 13.2 is from October 2023. (The
current version is 15.2 from December 2025.) But I don't know about
library support - that is a very different matter. (Compiler support
for printf really just means checking the format specifiers match the
parameters.)
Of course printf is implemented in the library, not in the compiler.
Primarily, yes. But like all standard library functions, compilers
can have special handling in some ways. This is more obvious for
functions like memcpy, where the compiler can often generate
significantly better code (specially for small known sizes). As far
as I know, the only optimisation gcc does on printf is turn something
like printf("Hello\n") into puts("Hello"). Hypothetically, there is
nothing to stop a compiler being a great deal more sophisticated than
that, and doing the format-string interpretation directly in some way.
gcc has had format checking for %wN and %wfN since release 13, but
that's useless in the absence of library support.
Yes.
Support in glibc was added 2023-06-19 and released in version 2.39.
Other C library implementations may or may not support it.
glibc is not particularly relevant for non-Linux embedded
systems. newlib (and newlib-nano) are a common choice for such
systems, but I have no idea if it currently has support for those
formats.
David Brown <david.brown@hesbynett.no> writes:
On 13/01/2026 01:06, Keith Thompson wrote:
David Brown <david.brown@hesbynett.no> writes:
[...]
Context: %wN and %wfN printf length modifier, new in C23.
gcc has supported the format, along with much of C23, since gcc 13,
and ARM's gcc-based toolchain version 13.2 is from October 2023. (The
current version is 15.2 from December 2025.) But I don't know about
library support - that is a very different matter. (Compiler support
for printf really just means checking the format specifiers match the
parameters.)
Of course printf is implemented in the library, not in the compiler.
Primarily, yes. But like all standard library functions, compilers
can have special handling in some ways. This is more obvious for
functions like memcpy, where the compiler can often generate
significantly better code (specially for small known sizes). As far
as I know, the only optimisation gcc does on printf is turn something
like printf("Hello\n") into puts("Hello"). Hypothetically, there is
nothing to stop a compiler being a great deal more sophisticated than
that, and doing the format-string interpretation directly in some way.
Sure, but in practice the library support for printf is the only thing
that matters. If your library doesn't support %wN, having the compiler recognize it doesn't help. I'm not sure why you even mentioned gcc in
this context.
gcc has had format checking for %wN and %wfN since release 13, but
that's useless in the absence of library support.
Yes.
Support in glibc was added 2023-06-19 and released in version 2.39.
Other C library implementations may or may not support it.
glibc is not particularly relevant for non-Linux embedded
systems. newlib (and newlib-nano) are a common choice for such
systems, but I have no idea if it currently has support for those
formats.
Of course, that's why I mentioned other C library implementations.
glibc happens to be the one about which I had some relevant
information.
The versions of musl and dietlibc that I have on my Ubuntu system
don't support %wN. The version of newlib I have on Cygwin also
doesn't support it. PellesC on Windows does.
On 13/01/2026 09:53, Keith Thompson wrote:
David Brown <david.brown@hesbynett.no> writes:
On 13/01/2026 01:06, Keith Thompson wrote:
David Brown <david.brown@hesbynett.no> writes:
[...]
Context: %wN and %wfN printf length modifier, new in C23.
gcc has supported the format, along with much of C23, since gcc
13, and ARM's gcc-based toolchain version 13.2 is from October
2023. (The current version is 15.2 from December 2025.) But I
don't know about library support - that is a very different
matter. (Compiler support for printf really just means checking
the format specifiers match the parameters.)
Of course printf is implemented in the library, not in the
compiler.
Primarily, yes. But like all standard library functions, compilers
can have special handling in some ways. This is more obvious for
functions like memcpy, where the compiler can often generate
significantly better code (specially for small known sizes). As
far as I know, the only optimisation gcc does on printf is turn
something like printf("Hello\n") into puts("Hello").
Hypothetically, there is nothing to stop a compiler being a great
deal more sophisticated than that, and doing the format-string
interpretation directly in some way.
Sure, but in practice the library support for printf is the only
thing that matters. If your library doesn't support %wN, having
the compiler recognize it doesn't help. I'm not sure why you even mentioned gcc in this context.
I had several reasons (I would want compiler checking of the format
before using it, I know the state of support in gcc, and I had
mentioned ARM's gcc toolchains as they are the standard choice of
toolchains for embedded ARM systems).
On Tue, 13 Jan 2026 11:09:55 +0100
David Brown <david.brown@hesbynett.no> wrote:
On 13/01/2026 09:53, Keith Thompson wrote:
David Brown <david.brown@hesbynett.no> writes:
On 13/01/2026 01:06, Keith Thompson wrote:
David Brown <david.brown@hesbynett.no> writes:
[...]
Context: %wN and %wfN printf length modifier, new in C23.
gcc has supported the format, along with much of C23, since gcc
13, and ARM's gcc-based toolchain version 13.2 is from October
2023. (The current version is 15.2 from December 2025.) But I
don't know about library support - that is a very different
matter. (Compiler support for printf really just means checking
the format specifiers match the parameters.)
Of course printf is implemented in the library, not in the
compiler.
Primarily, yes. But like all standard library functions, compilers
can have special handling in some ways. This is more obvious for
functions like memcpy, where the compiler can often generate
significantly better code (specially for small known sizes). As
far as I know, the only optimisation gcc does on printf is turn
something like printf("Hello\n") into puts("Hello").
Hypothetically, there is nothing to stop a compiler being a great
deal more sophisticated than that, and doing the format-string
interpretation directly in some way.
Sure, but in practice the library support for printf is the only
thing that matters. If your library doesn't support %wN, having
the compiler recognize it doesn't help. I'm not sure why you even
mentioned gcc in this context.
I had several reasons (I would want compiler checking of the format
before using it, I know the state of support in gcc, and I had
mentioned ARM's gcc toolchains as they are the standard choice of
toolchains for embedded ARM systems).
[O.T.]
AFAIK, that's no longer the case. For the last 3-4 years, Cortex-M MCU
vendors, in particular STMicro and TI, have been pushing clang hard as
the default compiler in their free IDEs.
Today, to use gcc in a new embedded ARM project, a developer has to
make a conscious effort to reject the vendor's default. Most likely,
the gcc compiler does not come as part of the default installation
package; it has to be downloaded and installed separately.
I would guess that the overwhelming majority of devs don't bother.
On 13/01/2026 12:45, Michael S wrote:
On Tue, 13 Jan 2026 11:09:55 +0100
David Brown <david.brown@hesbynett.no> wrote:
On 13/01/2026 09:53, Keith Thompson wrote:
David Brown <david.brown@hesbynett.no> writes:
On 13/01/2026 01:06, Keith Thompson wrote:
David Brown <david.brown@hesbynett.no> writes:
[...]
Context: %wN and %wfN printf length modifier, new in C23.
gcc has supported the format, along with much of C23, since gcc
13, and ARM's gcc-based toolchain version 13.2 is from October
2023. (The current version is 15.2 from December 2025.) But I
don't know about library support - that is a very different
matter. (Compiler support for printf really just means checking
the format specifiers match the parameters.)
Of course printf is implemented in the library, not in the
compiler.
Primarily, yes. But like all standard library functions,
compilers can have special handling in some ways. This is more
obvious for functions like memcpy, where the compiler can often
generate significantly better code (specially for small known
sizes). As far as I know, the only optimisation gcc does on
printf is turn something like printf("Hello\n") into
puts("Hello"). Hypothetically, there is nothing to stop a
compiler being a great deal more sophisticated than that, and
doing the format-string interpretation directly in some way.
Sure, but in practice the library support for printf is the only
thing that matters. If your library doesn't support %wN, having
the compiler recognize it doesn't help. I'm not sure why you even
mentioned gcc in this context.
I had several reasons (I would want compiler checking of the format
before using it, I know the state of support in gcc, and I had
mentioned ARM's gcc toolchains as they are the standard choice of
toolchains for embedded ARM systems).
[O.T.]
AFAIK, that's no longer the case. For the last 3-4 years, Cortex-M MCU
vendors, in particular STMicro and TI, have been pushing clang hard as
the default compiler in their free IDEs.
As far as I know, ST uses ARM's gcc build as their normal toolchain.
I haven't looked at TI's development tools for a good while.
But it is certainly the case that clang-based toolchains are gaining
in popularity for non-Linux embedded ARM. I've been looking at them
myself, and see advantages and disadvantages. However, I believe gcc
still dominates by a large margin, and ARM makes regular releases
of complete toolchain packages (compiler, libraries, debugger, etc.).
Today, to use gcc in a new embedded ARM project, a developer has to
make a conscious effort to reject the vendor's default. Most likely,
the gcc compiler does not come as part of the default installation
package; it has to be downloaded and installed separately. I would
guess that the overwhelming majority of devs don't bother.
My experience is that the majority of microcontroller vendors provide
gcc toolchains as part of their default installations. They used to
have their own gcc toolchain builds, or use third parties like Code
Sourcery, but these days they usually ship an off-the-shelf ARM
package. But several vendors are in a transition period where they
are gradually moving from established Eclipse-based tools towards VS
Code tools, and perhaps clang is either an option or the default with
their newer IDEs. Those vendors provide, support and maintain both
IDEs at the moment, but sometimes detailed support for a particular
microcontroller only exists for one of them.
I agree that most developers will use whatever compiler comes as
default with their vendor-supplied IDEs. Personally, I think that's
fine for getting started, or for quick throw-away projects. For
anything serious I prefer to have the build controlled from outside
the IDE (by a makefile), using the toolchain I choose (typically the
latest ARM gcc toolchain when the project starts, then remaining
consistent throughout the lifetime of the project). The IDEs can
still be useful as editors, debuggers, and so on.