On 2025-03-11, David Brown <david.brown@hesbynett.no> wrote:
package as-is. For anything other than a quick demo, my preferred setup
is using makefiles for the build along with an ARM gcc toolchain. That
way I can always build my software, from any system, and archive the
toolchain. (One day, I will also try using clang with these packages,
but I haven't done so yet.)
Same here. I just switched to ARM gcc + picolibc for all my ARM projects - this required some changes in the way my makefiles generate linker scripts and startup code, and now I am quite happy with that setup.
I finally told him it was fine if he wanted to use Eclipse as his
editor, gdb front-end, SVN gui, filesystem browser, office-cleaner and nose-wiper. But it was a non-negotiable requirement that it be
possible to check the source tree and toolchain out of SVN, type
"make", hit enter, and end up with a working binary.
On 2025-03-21, David Brown <david.brown@hesbynett.no> wrote:
The way I use recursive makes is /really/ recursive - the main make
(typically split into a few include makefiles for convenience, but only
one real make) handles everything, and it does some of that by calling
/itself/ recursively. It is quite common for me to build multiple
program images from one set of source - perhaps for different variants
of a board, with different features enabled, and so on. So I might use
"make prog=board_a" to build the image for board a, and "make
prog=board_b" for board b. Each build will be done in its own directory
- builds/build_a or builds/build_b. Often I will want to build for both
boards - then I will do "make prog="board_a board_b"" (with a default
setting for the most common images).
OK, that is not the classic recursive make pattern (ie. run make in each subdirectory).
I do that (ie. building for multiple boards) using build
scripts that are external to make.
cu
Michael
On 21.03.2025 at 16:45, David Brown wrote:
On 21/03/2025 15:04, Waldek Hebisch wrote:
David Brown <david.brown@hesbynett.no> wrote:
[...] In a project of over 500 files in 70 directories, it's a lot more work than using wildcards and not keeping old unneeded files mixed in with source files.
This argument blindly assumes files matching the wildcard patterns must self-evidently be "old", and "still" in there. That assumption can be _wildly_ wrong.
People will sometimes make backup copies of source
files in situ, e.g. for experimentation. Source files can also get accidentally lost or moved.
Adding a source to the build on the sole justification that it exists,
in a given place, IMHO undermines any remotely sane level of
configuration management. Skipping a file simply because it's been lost
is even worse.
Hunting for what source file the undefined reference in the final link
was supposed to have come from, but didn't, is rather less fun than a
clear message from Make that it cannot build foo.o because its source is nowhere to be found.
The opposite job of hunting down duplicate
definitions introduced by spare source files might be easier --- but
then again, it might not be. Do you _always_ know, off the top of your head, whether the definition of function "get_bar" was supposed to be in dir1/dir2/baz.cpp or dir3/dir4/dir5/baz.cpp?
Compared to the effort needed to create a file, adding an entry to a file list is negligible.
That's true.
But compared to having a wildcard search include all .c and .cpp files in the source directories, maintaining file lists is still more than nothing!
Which IMHO actually is the best argument _not_ to do it every time you run the build. And that includes not having make do it for you, every time. All that wildcard discovery adds work to every build while introducing unnecessary risk to the build's reproducibility.
Setting up file lists using wildcards is a type of job best done just
once, so after you've verified and fine-tuned the result, you save it
and only repeat the procedure on massive additions or structural changes.
Keeping that list updated will also be less of a chore than enforcing a "thou shalt not put files in that folder lest they be added to the build without your consent" policy.
If I have a project, the files in the project are in the project
directory. Where else would they be? And what other files would I have
in the project directory than project files?
On 21/03/2025 15:04, Waldek Hebisch wrote:
David Brown <david.brown@hesbynett.no> wrote:
On 18/03/2025 19:28, Michael Schwingen wrote:
On 2025-03-18, David Brown <david.brown@hesbynett.no> wrote:
A good makefile picks up the new files automatically and handles all the dependencies, so often all you need is a new "make -j".
I don't do that anymore - wildcards in makefiles can lead to all kinds of strange behaviour due to files that are left/placed somewhere but are not really needed.
I'm sure you can guess the correct way to handle that - don't leave
files in the wrong places :-)
I prefer to list the files I want compiled - it is not that
much work.
In a project of over 500 files in 70 directories, it's a lot more work
than using wildcards and not keeping old unneeded files mixed in with
source files.
In a project with about 550 normal source files, 80 headers, 200 test files, and about 1200 generated files spread over 12 directories, I use explicit file lists. Lists of files increase the volume of the Makefiles, but in my experience the extra work to maintain the file lists is very small. Compared to the effort needed to create a file, adding an entry to a file list is negligible.
That's true.
But compared to having a wildcard search include all .c and .cpp files in the source directories, maintaining file lists is still more than nothing!
However, the real benefit from using automatic file searches like this
is two-fold. One is that you can't get it wrong - you can't forget to
add the new file to the list, or remove deleted or renamed files from
the list.
The other - bigger - effect is that there is never any doubt
about the files in the project. A file is in the project and build if
and only if it is in one of the source directories.
That consistency is
very important to me - and to anyone else trying to look at the project.
So any technical help in enforcing that is a good thing in my book.
Explicit lists are useful if groups of files should get somewhat
different treatment (I have less need for this now, but it was
important in the past).
I do sometimes have explicit lists for /directories/ - but not for
files. I often have one branch in the source directory for my own code,
and one branch for things like vendor SDKs and third-party code. I can
then use stricter static warnings for my own code, without triggering
lots of warnings in external code.
IMO being explicit helps with readability and makes the code more amenable to audit.
A simple rule of "all files are in the project" is more amenable to audit.
On 2025-03-22, David Brown <david.brown@hesbynett.no> wrote:
If I have a project, the files in the project are in the project
directory. Where else would they be? And what other files would I have
in the project directory than project files?
If the project can be compiled for different targets, you may have files
that are used only for one target - stuff like i2c_stm32f0.c and i2c_stm32f1.c.
Both are project files, but only one is supposed to end up in the compilation. You may work around this by putting files in separate directories, but at some point you end up with lots of directories with only 1 file.
This gets to the point of build configuration - make needs to know which files belong to a build configuration. Putting "#ifdef TARGET_STM32F0" around the whole C file is not a good way to do this in a larger project
(not only because newer compilers complain that "ISO C forbids an empty translation unit").
Some optional features influence both make and the compile process - at work, we decided to put that knowledge outside make, and generate sets of matching include files for make/c/c++ during the configure stage.
As you said, there are pros and cons - use what works for your project.
David Brown <david.brown@hesbynett.no> wrote:
A simple rule of "all files are in the project" is more amenable to audit.
Maybe your wildcard use is very simple, but a year ago wildcards were an important part of obfuscating the presence of malicious code in lzma.
But the more important part is keeping the info together, inside the Makefile.
I have an embedded project that is compiled in Atmel Studio 7.0. The target is an ARM MCU, so the toolchain is arm-gnu-toolchain. The installed toolchain version is 6.3.1.508. The newlib version is 2.5.0.
In this build system the type time_t is defined as long, so 32 bits.
I'm using time_t mainly to show the time on a display for the user (as a broken-down time) and to tag some events with a timestamp (which the user will also see as a broken-down time).
The time can be received from the Internet, or from the user if the device is not connected. In both cases, a time_t is ultimately used.
As you know, my system will suffer from the Y2038 issue. I don't know if any of my devices will still be active in 2038; anyway, I'd like to fix this potential issue now.
One possibility is to use a modern toolchain[1] that most probably uses a newer version of newlib that manages a 64-bit time_t. However, I think I would have to address several warnings and other problems after upgrading the toolchain.
Another possibility is to write my own my_mktime(), my_localtime() and so on that accept and return my_time_t variables, defined as 64 bits. However, I'm not confident in writing such functions. Do you have some implementations? I don't need fully functional time functions; for example, the timezone can be fixed at build time - I don't need to set it at runtime.
Any suggestions?
[1] https://developer.arm.com/-/media/Files/downloads/gnu/14.2.rel1/binrel/arm-gnu-toolchain-14.2.rel1-mingw-w64-i686-arm-none-eabi.zip
On 11/03/2025 17:32, David Brown wrote:
On 11/03/2025 16:22, pozz wrote:
I have an embedded project that is compiled in Atmel Studio 7.0. The
target is an ARM MCU, so the toolchain is arm-gnu-toolchain. The
installed toolchain version is 6.3.1.508. newlib version is 2.5.0.
I /seriously/ dislike Microchip's way of handling toolchains. They
work with old, outdated versions, rename and rebrand them and their
documentation to make it look like they wrote them themselves, then
add license checks and software locks so that optimisation is disabled
unless you pay them vast amounts of money for the software other
people wrote and gave away freely. To my knowledge, they do not break
the letter of the license for GCC and other tools and libraries, but
they most certainly break the spirit of the licenses in every way
imaginable.
Maybe you are thinking about the Microchip IDE named MPLAB X or something similar. I read something about disabled optimizations in the free version of that toolchain.
However, I'm using the *Atmel Studio* IDE, which is an old IDE distributed by Atmel before the Microchip purchase. The documentation speaks about some Atmel customization of the ARM gcc toolchain, but it clearly specifies that the toolchain is an ARM gcc.
Prior to being bought by Microchip, Atmel was bad - but not as bad.
Why do you think Atmel was bad? I think they had good products.
So if for some reason I have no choice but to use a device from Atmel
/ Microchip, I do so using tools from elsewhere.
As a general rule, the gcc-based toolchains from ARM are the industry
standard, and are used as the base by most ARM microcontroller
suppliers. Some include additional library options, others provide
the package as-is. For anything other than a quick demo, my preferred
setup is using makefiles for the build along with an ARM gcc
toolchain. That way I can always build my software, from any system,
and archive the toolchain. (One day, I will also try using clang with
these packages, but I haven't done so yet.)
Yes, you're right, but now it's too late to change the toolchain.
Any reasonably modern ARM gcc toolchain will have 64-bit time_t. I
never like changing toolchains on an existing project, but you might
make an exception here.
I will check.
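For what it's worth, one quick way to check is a compile-time assertion dropped into any source file - a minimal sketch, assuming a C11-capable compiler:

#include <time.h>

/* Fails the build if the toolchain still ships a 32-bit time_t. */
_Static_assert(sizeof(time_t) > 4, "time_t is 32-bit: Y2038 applies");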
However, writing functions to support time conversions is not
difficult. The trick is not to start at 01.01.1970, but start at a
convenient date as early as you will need to handle - 01.01.2025 would
seem a logical point. Use <https://www.unixtimestamp.com/> to get the
time_t constant for the start of your epoch.
To turn the current time_t value into a human-readable time and date,
first take the current time_t and subtract the epoch start. Divide by
365 * 24 * 60 * 60 to get the additional years. Divide the leftovers
by 24 * 60 * 60 to get the additional days. Use a table of days in
the months to figure out the month. Leap year handling is left as an
exercise for the reader (hint - 2100, 2200 and 2300 are not leap
years, while 2400 is). Use the website I linked to check your results.
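A minimal sketch along those lines, using 01.01.2025 00:00:00 UTC as the epoch start (the names my_tm and my_localtime are just placeholders, and timezone/DST handling is left out):

#include <stdint.h>
#include <stdbool.h>

struct my_tm { int year, mon, mday, hour, min, sec, wday; };  /* mon 1..12, wday 0 = Sunday */

static bool is_leap(int y)
{
    return (y % 4 == 0 && y % 100 != 0) || (y % 400 == 0);
}

static int days_in_month(int mon, int year)
{
    static const int d[12] = { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 };
    return (mon == 2 && is_leap(year)) ? 29 : d[mon - 1];
}

/* secs = current counter value minus the time_t constant for 01.01.2025 00:00:00 UTC. */
void my_localtime(uint64_t secs, struct my_tm *t)
{
    uint64_t days = secs / 86400u;
    uint32_t rem  = (uint32_t)(secs % 86400u);

    t->hour = rem / 3600;  rem %= 3600;
    t->min  = rem / 60;
    t->sec  = rem % 60;
    t->wday = (int)((days + 3) % 7);          /* 01.01.2025 was a Wednesday */

    t->year = 2025;
    while (days >= (uint64_t)(is_leap(t->year) ? 366 : 365)) {
        days -= is_leap(t->year) ? 366 : 365;
        t->year++;
    }
    t->mon = 1;
    while (days >= (uint64_t)days_in_month(t->mon, t->year)) {
        days -= days_in_month(t->mon, t->year);
        t->mon++;
    }
    t->mday = (int)days + 1;
}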
If I had to rewrite my own functions, I could define time64_t as
uint64_t, keeping the Unix epoch as my epoch.
Regarding the implementation, I don't know if it is so simple. mktime() fixes up the members of the struct tm passed as an argument (and this is useful to calculate the day of the week). Moreover, I don't only need the conversion from time64_t to struct tm, but vice versa too.
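For the other direction, a sketch of the same approach (reusing is_leap() and days_in_month() from the sketch above; the day of the week also falls out of the day count, which covers the mktime() side effect mentioned here):

#include <stdint.h>

/* Broken-down UTC time (full year, mon 1..12, mday 1..31) to seconds since
   01.01.2025 00:00:00 UTC. A sketch only - no input normalisation or validation. */
uint64_t my_mktime(const struct my_tm *t)
{
    uint64_t days = 0;

    for (int y = 2025; y < t->year; y++)
        days += is_leap(y) ? 366 : 365;
    for (int m = 1; m < t->mon; m++)
        days += days_in_month(m, t->year);
    days += t->mday - 1;

    /* Day of week, if wanted: (days + 3) % 7, with 0 = Sunday. */
    return ((days * 24 + t->hour) * 60 + t->min) * 60 + t->sec;
}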
Or you can get the sources for a modern version of newlib, and pull
the routines from there.
It's very complex code. The time functions are written for whatever timezone is set at runtime (the TZ env variable), so their complexity is higher.
On 11/03/2025 17:32, David Brown wrote:
[...]
For anything other than a quick demo, my preferred setup is using
makefiles for the build along with an ARM gcc toolchain. That way I
can always build my software, from any system, and archive the toolchain.
Regarding this point, it's what I want to do in new projects. What I don't know is...
Why do many silicon vendors provide a *custom* ARM gcc toolchain? Are those customizations important for building firmware for their MCUs? If not, why do they invest money in making changes to a toolchain? It isn't a simple job.
Another point is visual debugging. I don't mean a text editor with syntax highlighting, code completion, project management and so on. There are many tools around for this.
I used to have a button in the IDE to launch a debugging session. Generating a good debugging session configuration is simplified in an IDE if you use a mainstream debug probe (for example, J-Link).
How do you debug your projects without a full-featured and ready-to-use IDE from the silicon vendor?
On 15/03/2025 17:30, Michael Schwingen wrote:
On 2025-03-11, David Brown <david.brown@hesbynett.no> wrote:
package as-is. For anything other than a quick demo, my preferred setup is using makefiles for the build along with an ARM gcc toolchain. That way I can always build my software, from any system, and archive the
toolchain. (One day, I will also try using clang with these packages,
but I haven't done so yet.)
Same here. I just switched to ARM gcc + picolibc for all my ARM
projects -
this required some changes in the way my makefiles generate linker
scripts
and startup code, and now I am quite happy with that setup.
One day or another I will try to move from my current build system (which depends on the silicon vendor's IDE, libraries, middleware, drivers, and so on) to a generic makefile and a generic toolchain.
Honestly, I tried in the past and ran into some issues. First of all, I use a Windows machine for development, and writing makefiles that work on Windows is not simple. Maybe next time I will try with WSL, writing makefiles that work directly in Unix.
Another problem that I see is the complexity of today's projects: TCP/IP stack, crypto libraries, drivers, RTOS, and so on. Silicon vendors usually give you several example projects that just work with one click, using their IDE, libraries, debuggers, and so on. Moving from this complex build system to custom makefiles and a toolchain isn't so simple.
Suppose you do the job of "transforming" the example project into a makefile. You start working with your preferred IDE/text editor/toolchain, and you are happy.
After some months the requirements change and you need to add a driver for a new peripheral or a complex library. You know there are ready-to-use example projects in the original IDE from the silicon vendor that use exactly what you need (mbedtls, DMA, ADC...), but you can't use them because you changed your build system.
Another problem is debugging: launching a debug session means downloading the binary through a USB debugger/probe and the SWD port, adding some breakpoints, looking at the current values of some variables and so on. All this works very well, without big issues, when using the original IDE. Are you able to configure *your* custom development system to launch debug sessions?
Finally, another question. Silicon vendors usually provide custom toolchains that often are a customized version of the arm-gcc toolchain (yes, here I'm talking about Cortex-M MCUs only, otherwise it would be much more complex).
What happens if I move to the generic arm-gcc?
This is exactly what I do. I don't use RTC with registers (seconds, minutes...) anymore, only a 32.768kHz oscillator (present in many MCUs)
that increments a counter.
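A minimal sketch of that arrangement, following the earlier suggestion of keeping both a monotonic counter and a wall-clock counter (the interrupt handler name is hypothetical - it depends on the vendor's startup code):

#include <stdint.h>

static volatile uint64_t uptime_seconds;   /* monotonic, for timeouts and intervals */
static volatile uint64_t wall_clock;       /* seconds since the chosen epoch */

/* Called once per second from the timer/RTC interrupt derived from the
   32.768 kHz oscillator. The handler name is only a placeholder. */
void rtc_one_second_isr(void)
{
    uptime_seconds++;
    wall_clock++;
}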
Install msys2 (and the mingw-64 version of gcc, if you want a native
compiler too). Make sure the relevant "bin" directory is on your path.
Then gnu make will work perfectly, along with all the little *nix
utilities such as touch, cp, mv, sed, etc., that makefiles sometimes use.
The only time I have seen problems with makefiles on Windows is when
using ancient partial make implementations, such as from Borland, along
with more advanced modern makefiles, or when someone mistakenly uses
MS's not-make "nmake" program instead of "make".
A good makefile picks up the new files automatically and handles all the dependencies, so often all you need is a new "make -j".
On 2025-03-18, David Brown <david.brown@hesbynett.no> wrote:
Install msys2 (and the mingw-64 version of gcc, if you want a native
compiler too). Make sure the relevant "bin" directory is on your path.
Then gnu make will work perfectly, along with all the little *nix
utilities such as touch, cp, mv, sed, etc., that makefiles sometimes use.
The only time I have seen problems with makefiles on Windows is when
using ancient partial make implementations, such as from Borland, along
with more advanced modern makefiles, or when someone mistakenly uses
MS's not-make "nmake" program instead of "make".
I have seen problems when using tools that are built during the compile process and used to generate further C code.
I would suggest using WSL instead of msys2. I have not used it for cross-compiling, but it works fine (except for file access performance) for my documentation process, which needs commandline pdf modification tools
plus latex.
A good makefile picks up the new files automatically and handles all the
dependencies, so often all you need is a new "make -j".
I don't do that anymore - wildcards in makefiles can lead to all kinds of strange behaviour due to files that are left/placed somewhere but are not really needed.
I prefer to list the files I want compiled - it is not that
much work.
On 18/03/2025 11:34, David Brown wrote:
On 18/03/2025 09:21, pozz wrote:
On 15/03/2025 17:30, Michael Schwingen wrote:
On 2025-03-11, David Brown <david.brown@hesbynett.no> wrote:
package as-is. For anything other than a quick demo, my preferred setup is using makefiles for the build along with an ARM gcc toolchain. That way I can always build my software, from any system, and archive the toolchain. (One day, I will also try using clang with these packages, but I haven't done so yet.)
Same here. I just switched to ARM gcc + picolibc for all my ARM
projects -
this required some changes in the way my makefiles generate linker
scripts
and startup code, and now I am quite happy with that setup.
One day or another I will try to move from my current build system (which depends on the silicon vendor's IDE, libraries, middleware, drivers, and so on) to a generic makefile and a generic toolchain.
Honestly, I tried in the past and ran into some issues. First of all, I use a Windows machine for development, and writing makefiles that work on Windows is not simple. Maybe next time I will try with WSL, writing makefiles that work directly in Unix.
Install msys2 (and the mingw-64 version of gcc, if you want a native
compiler too). Make sure the relevant "bin" directory is on your
path. Then gnu make will work perfectly, along with all the little
*nix utilities such as touch, cp, mv, sed, etc., that makefiles
sometimes use.
Do you run <msys>\usr\bin\make.exe directly from a cmd.exe shell? Or do you open an msys-specific shell?
The only time I have seen problems with makefiles on Windows is when
using ancient partial make implementations, such as from Borland,
along with more advanced modern makefiles, or when someone mistakenly
uses MS's not-make "nmake" program instead of "make".
Of course your builds will be slower on Windows than on Linux, since
Windows is slow to start programs, slow to access files, and poor at
doing it all in parallel, but there is nothing hindering makefiles in
Windows. My builds regularly work identically under Linux and
Windows, with the same makefiles.
I tried to use make for Windows some time ago, but it was a mess. Maybe the msys2 system is much more straightforward.
Another problem that I see is the complexity of today's projects: TCP/IP stack, crypto libraries, drivers, RTOS, and so on. Silicon vendors usually give you several example projects that just work with one click, using their IDE, libraries, debuggers, and so on. Moving from this complex build system to custom makefiles and a toolchain isn't so simple.
That's why you still have a job. Putting together embedded systems is not like making a Lego kit. Running a pre-made demo can be easy - merging the right bits of different demos, samples and libraries into complete systems is hard work. It is not easy whether you do project and build management with an IDE or with manual makefiles. Some aspects may be easier with one tool, other aspects will be harder.
You're right.
Suppose you do the job of "transforming" the example project into a makefile. You start working with your preferred IDE/text editor/toolchain, and you are happy.
After some months the requirements change and you need to add a driver for a new peripheral or a complex library. You know there are ready-to-use example projects in the original IDE from the silicon vendor that use exactly what you need (mbedtls, DMA, ADC...), but you can't use them because you changed your build system.
Find the files you need from the SDK or libraries, copy them into your
own project directories (keep them organised sensibly).
A good makefile picks up the new files automatically and handles all
the dependencies, so often all you need is a new "make -j". But you
might have to set up include directories, or even particular flags or
settings for different files.
Another problem is debugging: launching a debug session means downloading the binary through a USB debugger/probe and the SWD port, adding some breakpoints, looking at the current values of some variables and so on. All this works very well, without big issues, when using the original IDE. Are you able to configure *your* custom development system to launch debug sessions?
Build your elf file with debugging information, open the elf file in
the debugger.
What do you mean with "open the elf file in the debugger"?
You probably have a bit of setup to specify things like the exact
microcontroller target, but mostly it works fine.
Finally, another question. Silicon vendors usually provide custom toolchains that often are a customized version of the arm-gcc toolchain (yes, here I'm talking about Cortex-M MCUs only, otherwise it would be much more complex).
What happens if I move to the generic arm-gcc?
This has already been covered. Most vendors now use standard
toolchain builds from ARM.
What happens if the vendor has their own customized tool and you
switch to a generic ARM tool depends on the customization and the tool
versions. Usually it means you get a new toolchain with better
warnings, better optimisation, and newer language standard support.
But it might also mean vendor-supplied code with bugs no longer works
as it did. (You don't have any bugs in your own code, I presume :-) )
:-)
msys2 is totally different. The binaries are all native Windows
binaries, and they all work within the same Windows environment as
everything else. There are no problems using Windows-style paths
(though of course it is best to use relative paths and forward slashes
in your makefiles, #include directives, etc., for cross-platform compatibility). You can use the msys2 programs directly from the normal Windows command window, or Powershell, or in batch files, or directly
from other Windows programs.
On 2025-03-18, David Brown <david.brown@hesbynett.no> wrote:
msys2 is totally different. The binaries are all native Windows
binaries, and they all work within the same Windows environment as
everything else. There are no problems using Windows-style paths
(though of course it is best to use relative paths and forward slashes
in your makefiles, #include directives, etc., for cross-platform
compatibility). You can use the msys2 programs directly from the normal
Windows command window, or Powershell, or in batch files, or directly
from other Windows programs.
Are the make recipes run using a normal Unix shell (bash? ash? bourne?) with exported environment variables as expected when running 'make' on Unix?
The gnu make functions [e.g. $(shell <whatever>)] all work as expected?
Or are there certain gnu make features you have to avoid for makefiles
to work under msys2?
On 12/03/2025 10:33, David Brown wrote:
For all of this, the big question is /why/ you are doing it. What are
you doing with your times? Where are you getting them? Are you
actually doing this in a sensible way because they suit your
application, or are you just using these types and structures because
they are part of the standard C library - which is not good enough for
your needs here?
When the user wants to set the current date and time, I fill a struct tm with the user's values. Next I call mktime() to calculate the time_t, which is then incremented every second.
When I need to show the current date and time to the user, I call localtime() to convert the time_t into a struct tm. And I get the day of the week too.
Consider that mktime() and localtime() take the timezone into account, which is important for me. In Italy we have daylight saving time with rules that are not so simple. The standard time functions work well with timezones.
Maybe you are going about it all the wrong way. If you need to be
able to display and set the current time and date, and to be able to
conveniently measure time differences for alarms, repetitive tasks,
etc., then you probably don't need any correlation between your
monotonic seconds counter and your time/date tracker. All you need to
do is add one second to each, every second. I don't know the details
of your application (obviously), but often no conversion is needed
either way.
I'm talking about *wall* clock only. Internally I have a time_t variable
that is incremented every second. But I need to show it to the user and
I can't show the seconds from the epoch.
Or you can get the sources for a modern version of newlib, and pull
the routines from there.
It's very complex code. The time functions are written for whatever timezone is set at runtime (the TZ env variable), so their complexity is higher.
So find a simpler standard C library implementation. Try the avrlibc,
for example.
But I have no doubt at all that you can make all this yourself easily
enough.
I think timezone rules are not so simple to implement.
On 12/03/2025 17:39, David Brown wrote:
On 12/03/2025 16:48, pozz wrote:
On 12/03/2025 10:33, David Brown wrote:
For all of this, the big question is /why/ you are doing it. What are you doing with your times? Where are you getting them? Are you actually doing this in a sensible way because they suit your application, or are you just using these types and structures because they are part of the standard C library - which is not good enough for your needs here?
When the user wants to set the current date and time, I fill a struct tm with the user's values. Next I call mktime() to calculate the time_t, which is then incremented every second.
When I need to show the current date and time to the user, I call localtime() to convert the time_t into a struct tm. And I get the day of the week too.
Consider that mktime() and localtime() take the timezone into account, which is important for me. In Italy we have daylight saving time with rules that are not so simple. The standard time functions work well with timezones.
Maybe you are going about it all the wrong way. If you need to be
able to display and set the current time and date, and to be able to
conveniently measure time differences for alarms, repetitive tasks,
etc., then you probably don't need any correlation between your
monotonic seconds counter and your time/date tracker. All you need
to do is add one second to each, every second. I don't know the
details of your application (obviously), but often no conversion is
needed either way.
I'm talking about *wall* clock only. Internally I have a time_t
variable that is incremented every second. But I need to show it to
the user and I can't show the seconds from the epoch.
The sane way to do this - the way it has been done for decades on
small embedded systems - is to track both a human-legible date/time
structure (ignore standard struct tm - make your own) /and/ to track a
monotonic seconds counter (or milliseconds counter, or minutes counter
- whatever you need). Increment both of them every second. Both
operations are very simple - far easier than any conversions.
If I got your point, adding one second to struct mytm isn't reduced to a ++ on one of its members. I should write something similar to this:
if (mytm.tm_sec < 59) {
    mytm.tm_sec += 1;
} else {
    mytm.tm_sec = 0;
    if (mytm.tm_min < 59) {
        mytm.tm_min += 1;
    } else {
        mytm.tm_min = 0;
        if (mytm.tm_hour < 23) {
            mytm.tm_hour += 1;
        } else {
            mytm.tm_hour = 0;
            if (mytm.tm_mday < days_in_month(mytm.tm_mon, mytm.tm_year)) {
                mytm.tm_mday += 1;
            } else {
                mytm.tm_mday = 1;
                if (mytm.tm_mon < 11) {   /* months 0-11, as in struct tm */
                    mytm.tm_mon += 1;
                } else {
                    mytm.tm_mon = 0;
                    mytm.tm_year += 1;
                }
            }
        }
    }
}
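For completeness, a sketch of the days_in_month() helper that snippet assumes, taking tm_mon as 0-11 (as in struct tm) and tm_year as the full year - adjust if you store year-1900:

static int days_in_month(int tm_mon, int tm_year)
{
    static const unsigned char d[12] = { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 };
    int leap = (tm_year % 4 == 0 && tm_year % 100 != 0) || (tm_year % 400 == 0);
    return (tm_mon == 1 && leap) ? 29 : d[tm_mon];   /* February gets 29 in leap years */
}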
However, taking into account dst is much more complex. The rule is the last Sunday of March and the last Sunday of October (if I'm not wrong).
All of this can be coded manually from scratch, but there are standard functions precisely to avoid reinventing the wheel.
Tomorrow I could install my device in another country in the world, and it would be easy to change the timezone with a standard function.
Adding or subtracting an hour on occasion is also simple.
Yes, but the problem is *when*. You need to know the rules and you need
to implement them. localtime() just works.
If your system is connected to the internet, then occasionally pick up
the current wall-clock time (and unix epoch, if you like) from a
server, along with the time of the next daylight savings change.
What do you mean with "next daylight savings change"? I'm using NTP (specifically SNTP from a public server) and I'm able to retrive the
current UTC time in seconds from Unix epoch.
I just take this value and overwrite my internal counter.
In another application, I retrieve the current time from an incoming SMS. In this case I have a local broken-down time.
If it is not connected, then the user is going to have to make
adjustments to the time and date occasionally anyway, as there is
always drift
Drift? Using a 32.768 kHz quartz to generate a 1 Hz clock that increments the internal counter avoids any drift.
- they can
do the daylight saving change at the same time as they change their
analogue clocks, their cooker clock, and everything else that is not
connected.
I think you can take into account dst even if the device is not connected.
I bet Windows is able to show the correct time (with dst changes) even
if the PC is not connected.
Or you can get the sources for a modern version of newlib, and
pull the routines from there.
It's very complex code. The time functions are written for whatever timezone is set at runtime (the TZ env variable), so their complexity is higher.
So find a simpler standard C library implementation. Try the
avrlibc, for example.
But I have no doubt at all that you can make all this yourself
easily enough.
I think timezone rules are not so simple to implement.
You don't need them. That makes them simple.
On 18.03.2025 at 21:58, Grant Edwards wrote:
On 2025-03-18, David Brown <david.brown@hesbynett.no> wrote:[...]
msys2 is totally different. The binaries are all native Windows
binaries, and they all work within the same Windows environment as
Are the make recipes run using a normal Unix shell (bash? ash? bourne?) with exported environment variables as expected when running 'make' on Unix?
Pretty much, yes. There are some gotchas in the handling of path names, and particularly in passing them to less-than-accommodating native Windows compilers etc. And the quoting of command line arguments can become even dicier than it already is natively on Windows.
There be dragons, but MSYS2 will keep the vast majority of them out of
your sight.
The gnu make functions [e.g. $(shell <whatever>)] all work as expected?
Yes, as long as you stay reasonable about the selection of things you
try to run that way, and keep in mind you may have to massage command
line arguments if <whatever> is a native Windows tool.
For reference, MSYS2 is also the foundation of Git Bash for MS Windows,
which you might be familiar with already...
The underlying technology of MSYS2 is a fork of the Cygwin project,
which is an environment that aims to provide the best emulation of a
Unix environment they can, inside MS Windows. The key difference of the MSYS2 fork lies in a set of tweaks to resolve some of the corner cases
more towards the Windows interpretation of things.
So, if your Makefiles are too Unix centric for even MSYS2 to handle,
Cygwin can probably still manage. And it will do it for the small price
of many of your relevant files needing to have LF-only line endings.
Here's a rough hierarchy of Unix-like-ness among Make implementations on
a PC, assuming your actual compiler tool chain is a native Windows one:
0) your IDE's internal build system --- not even close
1) original DOS or Windows "make" tools
2) fully native ports of GNU make (predating MSYS)
3) GNU Make in MSYS2
4) GNU Make in Cygwin
5) WSL2 --- the full monty
I'll also second an earlier suggestion: for newcomers with little or no present skill in Makefile writing, CMake or Meson can be a much smoother entry into this world. Also, if you're going this route, I suggest considering skipping Make and using Ninja instead.
There are certainly a few things that Cygwin can handle that msys2
cannot. For example, cygwin provides the "fork" system call that is
very slow and expensive on Windows, but fundamental to old *nix
software.
On 2025-03-19, David Brown <david.brown@hesbynett.no> wrote:
There are certainly a few things that Cygwin can handle that msys2
cannot. For example, cygwin provides the "fork" system call that is
very slow and expensive on Windows, but fundamental to old *nix
software.
I believe Windows inherited that from VAX/VMS via Dave Cutler.
Back when the Earth was young I used to do embedded development on
VMS. I was, however, a "Unix guy" so my usual work environment on VMS
was "DEC/Shell" which was a v7 Bourne shell and surprisingly complete
set of v7 command line utilities that ran on VMS. [Without DEC/Shell,
I'm pretty sure I wouldn't have survived that project.] At one point I
wrote some fairly complex shell/awk/grep scripts to analyze and cross-reference requirements documents written in LaTeX. The scripts
would have taken a few minutes to run under v7 on an LSI-11, but they
took hours on a VAX 780 under VMS DEC/Shell (and used up ridiculous
amounts of CPU time). I was baffled. I eventually tracked it down to
the overhead of "fork". A fork on Unix is a trivial operation, and
when running a shell program it happens a _lot_.
On VMS, a fork() call in a C program had _huge_ overhead compared to
Unix [but dog bless the guys in Massachusetts, it worked]. I'm not
sure if it was the process creation itself, or the "duplication" of
the parent that took so long. Maybe both. In the end it didn't matter:
it was so much easier to do stuff under DEC/Shell than it was under
DCL that we just ran the analysis scripts overnight.
On 19/03/2025 15:27, Grant Edwards wrote:
On 2025-03-19, David Brown <david.brown@hesbynett.no> wrote:
There are certainly a few things that Cygwin can handle that msys2
cannot. For example, cygwin provides the "fork" system call that is
very slow and expensive on Windows, but fundamental to old *nix
software.
I believe Windows inherited that from VAX/VMS via Dave Cutler.
I am always a bit wary of people saying features were copied from VMS
into Windows NT, simply because the same person was a major part of the development. Windows NT was the descendent of DOS-based Windows, which in turn was the descendent of DOS. These previous systems had nothing remotely like "fork", but Windows already had multi-threading. When you
have decent thread support, the use of "fork" is much lower - equally,
in the *nix world at the time, the use-case for threading was much lower because they had good "fork" support. Thus Windows NT did not get
"fork" because it was not worth the effort - making existing thread
support better was a lot more important.
However, true "fork" is very rarely useful, and is now rarely used in
modern *nix programming.
So these days, bash does not use "fork" for starting all the
subprocesses - it uses vfork() / execve(), making it more efficient
and also conveniently more amenable to running on Windows.
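The pattern being described looks roughly like this - a minimal POSIX sketch of spawning a command without duplicating the parent's address space, not the actual bash source:

#include <unistd.h>
#include <sys/wait.h>

/* vfork() borrows the parent's address space until execve() replaces the image,
   so nothing gets copied. Only exec or _exit is safe in the child before that. */
int spawn_and_wait(const char *path, char *const argv[], char *const envp[])
{
    int status;
    pid_t pid = vfork();

    if (pid == 0) {            /* child */
        execve(path, argv, envp);
        _exit(127);            /* reached only if exec failed */
    }
    if (pid < 0)
        return -1;             /* vfork failed */
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return status;
}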
On 2025-03-19, David Brown <david.brown@hesbynett.no> wrote:
On 19/03/2025 15:27, Grant Edwards wrote:
On 2025-03-19, David Brown <david.brown@hesbynett.no> wrote:
There are certainly a few things that Cygwin can handle that msys2
cannot. For example, cygwin provides the "fork" system call that is
very slow and expensive on Windows, but fundamental to old *nix
software.
I believe Windows inherited that from VAX/VMS via Dave Cutler.
I am always a bit wary of people saying features were copied from VMS
into Windows NT, simply because the same person was a major part of the
development. Windows NT was the descendent of DOS-based Windows,
The accounts I've read about NT say otherwise. They all claim that NT
was a brand-new kernel written (supposedly from scratch) by Dave
Cutler's team. They implemented some backwards compatible Windows
APIs, but the OS kernel itself was based far more on VMS than Windows.
Quoting from https://en.wikipedia.org/wiki/Windows_NT:
Although NT was not an exact clone of Cutler's previous operating
systems, DEC engineers almost immediately noticed the internal
similarities. Parts of VAX/VMS Internals and Data Structures,
published by Digital Press, accurately describe Windows NT
internals using VMS terms. Furthermore, parts of the NT codebase's
directory structure and filenames matched that of the MICA
codebase.[10] Instead of a lawsuit, Microsoft agreed to pay DEC
$65–100 million, help market VMS, train Digital personnel on
Windows NT, and continue Windows NT support for the DEC Alpha.
That last sentence seems pretty damning to me.
in turn was the descendent of DOS. These previous systems had nothing
remotely like "fork", but Windows already had multi-threading. When you
have decent thread support, the use of "fork" is much lower - equally,
in the *nix world at the time, the use-case for threading was much lower
because they had good "fork" support. Thus Windows NT did not get
"fork" because it was not worth the effort - making existing thread
support better was a lot more important.
But it did end up making support for the legacy fork() call used by
many legacy Unix programs very expensive. I'm not claiming that fork()
was a good idea in the first place, that it should have been
implemented better in VMS or Windows, or that it should still be used.
I'm just claiming that
1. Historically, fork() was way, way, WAY slower on Windows and VMS
than on Unix. [Maybe that has improved on Windows.]
2. 40 years ago, fork() was still _the_way_ to start a process in
most all common Unix applications.
However, true "fork" is very rarely useful, and is now rarely used in
modern *nix programming.
I didn't mean to imply that it was. However, back in the 1980s when I
was running DEC/Shell with v7 Unix programs, fork() was still how the
Bourne shell in DEC/Shell started execution of every command.
Those utilities were all from v7 Unix. That's before vfork()
existed. vfork() wasn't introduced until 3BSD and then SysVr4.
https://en.wikipedia.org/wiki/Fork_(system_call)
So these days, bash does not use "fork" for starting all the
subprocesses - it uses vfork() / execve(), making it more efficient
and also conveniently more amenable to running on Windows.
That's good news. You'd think it wouldn't be so slow. :)
On 19/03/2025 15:27, Grant Edwards wrote:
On 2025-03-19, David Brown <david.brown@hesbynett.no> wrote:
There are certainly a few things that Cygwin can handle that msys2
cannot. For example, cygwin provides the "fork" system call that is
very slow and expensive on Windows, but fundamental to old *nix
software.
I believe Windows inherited that from VAX/VMS via Dave Cutler.
I am always a bit wary of people saying features were copied from VMS
into Windows NT, simply because the same person was a major part of the development. Windows NT was the descendent of DOS-based Windows, which
in turn was the descendent of DOS. These previous systems had nothing remotely like "fork", but Windows already had multi-threading. When you
have decent thread support, the use of "fork" is much lower - equally,
in the *nix world at the time, the use-case for threading was much lower because they had good "fork" support. Thus Windows NT did not get
"fork" because it was not worth the effort - making existing thread
support better was a lot more important.
David Brown <david.brown@hesbynett.no> wrote:
On 19/03/2025 15:27, Grant Edwards wrote:
On 2025-03-19, David Brown <david.brown@hesbynett.no> wrote:
There are certainly a few things that Cygwin can handle that msys2
cannot. For example, cygwin provides the "fork" system call that is
very slow and expensive on Windows, but fundamental to old *nix
software.
I believe Windows inherited that from VAX/VMS via Dave Cutler.
I am always a bit wary of people saying features were copied from VMS
into Windows NT, simply because the same person was a major part of the
development. Windows NT was the descendent of DOS-based Windows, which
in turn was the descendent of DOS. These previous systems had nothing
remotely like "fork", but Windows already had multi-threading. When you
have decent thread support, the use of "fork" is much lower - equally,
in the *nix world at the time, the use-case for threading was much lower
because they had good "fork" support. Thus Windows NT did not get
"fork" because it was not worth the effort - making existing thread
support better was a lot more important.
Actually, Microsoft folks say that the Windows NT kernel supports fork. It was used to implement the POSIX subsystem. IIUC they claim that the trouble is in the upper layers: much of the Windows API is _not_ kernel, and implementing a well-behaved fork means that all layers below the user program, starting from the kernel, would have to implement fork.
So this complicated layered structure seems to be the main technical reason for not having fork at the API level. And this structure is like VMS and Mica. Part of this layering could be motivated by the early Windows split between DOS and Windows proper, but as Grant explained, the VMS influence was stronger.
IIUC early NT development was part of the joint IBM-Microsoft effort to create OS/2, so clearly the DOS and Windows influence was limited. Only later did Microsoft decide to merge classic Windows and NT and effectively abandon system interfaces other than the Windows API.
On 12/03/2025 19:18, David Brown wrote:
On 12/03/2025 18:13, pozz wrote:
On 12/03/2025 17:39, David Brown wrote:
On 12/03/2025 16:48, pozz wrote:
On 12/03/2025 10:33, David Brown wrote:
For all of this, the big question is /why/ you are doing it. What are you doing with your times? Where are you getting them? Are you actually doing this in a sensible way because they suit your application, or are you just using these types and structures because they are part of the standard C library - which is not good enough for your needs here?
When the user wants to set the current date and time, I fill a struct tm with the user's values. Next I call mktime() to calculate the time_t, which is then incremented every second.
When I need to show the current date and time to the user, I call localtime() to convert the time_t into a struct tm. And I get the day of the week too.
Consider that mktime() and localtime() take the timezone into account, which is important for me. In Italy we have daylight saving time with rules that are not so simple. The standard time functions work well with timezones.
Maybe you are going about it all the wrong way. If you need to be able to display and set the current time and date, and to be able
to conveniently measure time differences for alarms, repetitive
tasks, etc., then you probably don't need any correlation between
your monotonic seconds counter and your time/date tracker. All
you need to do is add one second to each, every second. I don't
know the details of your application (obviously), but often no
conversion is needed either way.
I'm talking about *wall* clock only. Internally I have a time_t
variable that is incremented every second. But I need to show it to
the user and I can't show the seconds from the epoch.
The sane way to do this - the way it has been done for decades on
small embedded systems - is to track both a human-legible date/time
structure (ignore standard struct tm - make your own) /and/ to track
a monotonic seconds counter (or milliseconds counter, or minutes
counter - whatever you need). Increment both of them every second.
Both operations are very simple - far easier than any conversions.
If I got your point, adding one second to struct mytm isn't reduced to a ++ on one of its members. I should write something similar to this:
if (mytm.tm_sec < 59) {
    mytm.tm_sec += 1;
} else {
    mytm.tm_sec = 0;
    if (mytm.tm_min < 59) {
        mytm.tm_min += 1;
    } else {
        mytm.tm_min = 0;
        if (mytm.tm_hour < 23) {
            mytm.tm_hour += 1;
        } else {
            mytm.tm_hour = 0;
            if (mytm.tm_mday < days_in_month(mytm.tm_mon, mytm.tm_year)) {
                mytm.tm_mday += 1;
            } else {
                mytm.tm_mday = 1;
                if (mytm.tm_mon < 11) {   /* months 0-11, as in struct tm */
                    mytm.tm_mon += 1;
                } else {
                    mytm.tm_mon = 0;
                    mytm.tm_year += 1;
                }
            }
        }
    }
}
Yes, that's about it.
However, taking into account dst is much more complex. The rule is the last Sunday of March and the last Sunday of October (if I'm not wrong).
No, it is not complex. Figure out the rule for your country (I'm sure Wikipedia will tell you if you are not sure) and then apply it. It's just a comparison to catch the right time and date, and then you add or subtract an extra hour.
All of this can be coded manually from scratch, but there are standard functions precisely to avoid reinventing the wheel.
You've just written the code! You have maybe 10-15 more lines to add
to handle daylight saving.
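For the EU rule (clocks go forward at 01:00 UTC on the last Sunday of March and back at 01:00 UTC on the last Sunday of October), a self-contained sketch of roughly those lines - the function names are made up, and the rule is hard-coded on the assumption that the EU scheme is what's wanted:

/* Day of week for a Gregorian date (Sakamoto's method), 0 = Sunday. */
static int weekday(int y, int m, int d)
{
    static const int t[12] = { 0, 3, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4 };
    if (m < 3)
        y -= 1;
    return (y + y / 4 - y / 100 + y / 400 + t[m - 1] + d) % 7;
}

/* Date (1..31) of the last Sunday of a month; only used for March and October here. */
static int last_sunday(int year, int month)
{
    int last_day = 31;                     /* March and October both have 31 days */
    return last_day - weekday(year, month, last_day);
}

/* Returns 1 if the given UTC date/hour is in EU summer time, 0 otherwise. */
static int eu_dst_active(int year, int mon, int mday, int utc_hour)
{
    int start = last_sunday(year, 3);
    int end   = last_sunday(year, 10);

    if (mon < 3 || mon > 10) return 0;
    if (mon > 3 && mon < 10) return 1;
    if (mon == 3)
        return (mday > start) || (mday == start && utc_hour >= 1);
    return (mday < end) || (mday == end && utc_hour < 1);
}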
Tomorrow I could install my device in another country in the world, and it would be easy to change the timezone with a standard function.
How many countries are you targeting? Europe all uses the same system.
<https://en.wikipedia.org/wiki/Daylight_saving_time_by_country>
Adding or subtracting an hour on occasion is also simple.
Yes, but the problem is *when*. You need to know the rules and you
need to implement them. localtime() just works.
You are getting ridiculous. This is not rocket science.
OK, but I don't understand why you prefer to write your own code (yes, you're an expert programmer, but you can introduce some bugs, and you have to write some tests), while there are standard functions that do the job for you.
I could rewrite memcpy, strcat, strcmp - they aren't rocket science - but why? IMHO it makes no sense.
In my case the standard functions aren't good (because of the Y2038 issue), so rewriting them can be a valid solution. But if I had a 64-bit time_t, I would live with the standard functions very well.
Besides, any fixed system is at risk from changes - and countries have
in the past and will in the future change their systems for daylight
saving. (Many have at least vague plans of scrapping it.) So if a
simple fixed system is not good enough for you, use the other method I
suggested - handle it by regular checks from a server that you will
need anyway for keeping an accurate time, or let the user fix it for
unconnected systems.
My users like the automatic dst changes on my connected and unconnected devices. The risk of future changes in the dst rules doesn't seem to me a good reason to remove that feature.
If your system is connected to the internet, then occasionally pick
up the current wall-clock time (and unix epoch, if you like) from a
server, along with the time of the next daylight savings change.
What do you mean with "next daylight savings change"? I'm using NTP
(specifically SNTP from a public server) and I'm able to retrive the
current UTC time in seconds from Unix epoch.
I just take this value and overwrite my internal counter.
In another application, I retrieve the current time from an incoming SMS. In this case I have a local broken-down time.
If it is not connected, then the user is going to have to make
adjustments to the time and date occasionally anyway, as there is
always drift
Drift? Using a 32.768 kHz quartz to generate a 1 Hz clock that increments the internal counter avoids any drift.
There is no such thing as a 32.768 kHz crystal - there are only
approximate crystals. If you don't update often enough from an
accurate time source, you will have drift. (How much drift you have,
and what effect it has, is another matter.)
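(For a sense of scale, assuming a typical ±20 ppm tolerance for a 32.768 kHz watch crystal: 20e-6 × 86 400 s is roughly 1.7 seconds per day, on the order of a minute per month - small, but not zero, and it gets worse with temperature and ageing.)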
Of course, the quartz has an accuracy that changes with age, temperature and so on. However, the real accuracy doesn't let the time drift so much that the user needs to reset it.
- they can
do the daylight saving change at the same time as they change their
analogue clocks, their cooker clock, and everything else that is not
connected.
I think you can take into account dst even if the device is not
connected.
You certainly can. But then you have to have a fixed algorithm known
in advance.
I bet Windows is able to show the correct time (with dst changes)
even if the PC is not connected.
I bet it can't, in cases where the date system for the daylight
savings time has changed or been removed. Other than that, it will
just use a table of date systems such as on the Wikipedia page. Or
perhaps MS simply redefined what they think other people should use.
Older Windows needed manual changes for the date and time, even when
it was connected - their support for NTP was late.
Maybe Windows is not able to, but I read that Linux is. It saves the time as UTC in the hardware RTC and shows it to the user as local time, of course applying the dst and timezone rules from a database of rules.
So, as long as the timezone/dst info for my timezone is correct, I think Linux can manage dst changes automatically without user activity. My approach is identical to what Linux does.
How do you debug your projects without a full-featured and ready-to-use IDE from the silicon vendor?
These days I happily use it on Windows with recursive make (done
/carefully/, as all recursive makes should be), automatic dependency generation, multiple makefiles, automatic file discovery, parallel
builds, host-specific code (for things like the toolchain installation directory), and all sorts of other bits and pieces.
I'll also second an earlier suggestion: for newcomers with little or no present skill in Makefile writing, CMake or Meson can be a much smoother entry into this world. Also, if you're going this route, I suggest considering skipping Make and using Ninja instead.
later Windows versions.) A decade or so ago I happened to be
approximately in sync on the hardware for my Linux desktop and my
Windows desktop (I use both systems at work), and tested a make +
cross-gcc build of a project with a couple of hundred C and C++ files.
The Linux build was close to twice the speed.
On 13/03/2025 16:51, David Brown wrote:
On 13/03/2025 09:57, pozz wrote:
On 12/03/2025 19:18, David Brown wrote:
On 12/03/2025 18:13, pozz wrote:
OK, but I don't understand why you prefer to write your own code (yes, you're an expert programmer, but you can introduce some bugs, and you have to write some tests), while there are standard functions that do the job for you.
I prefer to use a newer version of the toolchain that does not have
such problems :-)
Sure, but the project is old. I will check if using a newer toolchain is
a feasible solution for this project.
I am quite happy to re-use known good standard functions. There is no
need to reinvent the wheel if you already have one conveniently
available. But you don't have standard functions conveniently
available here - the ones from your toolchain are not up to the task,
and you are not happy with the other sources you have found for the
standard functions.
So once you have eliminated the possibility of using pre-written
standard functions, you then need to re-evaluate what you actually
need. And that is much less than the standard functions provide. So
write your own versions to do what you need to do - no more, no less.
I agree with you. I thought you were suggesting the use of custom-made functions in any case, because my approach of using a time_t counter (seconds from the epoch) and localtime()/mktime() isn't good.
2. Use an implementation from other library sources online. You've
ruled those out as too complicated.
In the past I sometimes lurked in the newlib code and it seems too complicated for me. I will search for other simple implementations of localtime()/mktime().
3. Write your own functions. Yes, that involves a certain amount of
work, testing and risk. That's your job.
Am I missing anything?
I don't think so.
On 2025-03-18, David Brown <david.brown@hesbynett.no> wrote:
These days I happily use it on Windows with recursive make (done
/carefully/, as all recursive makes should be), automatic dependency
generation, multiple makefiles, automatic file discovery, parallel
builds, host-specific code (for things like the toolchain installation
directory), and all sorts of other bits and pieces.
I converted to the "recursive make considered harmful" group long ago.
Having one makefile for the whole build makes it possible to have dependencies crossing directories, and gives better performance in parallel builds - with recursive make, the overhead for entering/exiting directories and waiting for sub-makes to finish piles up. If a compile takes 30 minutes on a fast 16-cpu machine, that does make a difference.
using ninja instead of make works even better in such a scenario.
cu
Michael
On 18/03/2025 19:28, Michael Schwingen wrote:
On 2025-03-18, David Brown <david.brown@hesbynett.no> wrote:
A good makefile picks up the new files automatically and handles all the dependencies, so often all you need is a new "make -j".
I don't do that anymore - wildcards in makefiles can lead to all kinds of
strange behaviour due to files that are left/placed somewhere but are not
really needed.
I'm sure you can guess the correct way to handle that - don't leave
files in the wrong places :-)
I prefer to list the files I want compiled - it is not that
much work.
In a project of over 500 files in 70 directories, it's a lot more work
than using wildcards and not keeping old unneeded files mixed in with
source files.
I had the same experience about 20 years ago - the company was using a cygwin-based cross-gcc + make (I think some old Borland make) on Windows. I converted the makefiles to use GNU make on Linux, and the compile time was half that of the Windows setup. That speed advantage was enough to (very) slowly convert colleagues to using Linux.
David Brown <david.brown@hesbynett.no> wrote:
On 18/03/2025 19:28, Michael Schwingen wrote:
On 2025-03-18, David Brown <david.brown@hesbynett.no> wrote:
A good makefile picks up the new files automatically and handles all the dependencies, so often all you need is a new "make -j".
I don't do that anymore - wildcards in makefiles can lead to all kinds of strange behaviour due to files that are left/placed somewhere but are not really needed.
I'm sure you can guess the correct way to handle that - don't leave
files in the wrong places :-)
I prefer to list the files I want compiled - it is not that
much work.
In a project of over 500 files in 70 directories, it's a lot more work
than using wildcards and not keeping old unneeded files mixed in with
source files.
In a project with about 550 normal source files, 80 headers, 200 test files, and about 1200 generated files spread over 12 directories, I use explicit file lists. Lists of files increase the volume of the Makefiles, but in my experience the extra work to maintain the file lists is very small. Compared to the effort needed to create a file, adding an entry to a file list is negligible.
Explicit lists are useful if groups of files should get somewhat
different treatment (I have less need for this now, but it was
important in the past).
IMO being explicit helps with readability and makes the code more amenable to audit.
The way I use recursive makes is /really/ recursive - the main make (typically split into a few include makefiles for convenience, but only
one real make) handles everything, and it does some of that by calling /itself/ recursively. It is quite common for me to build multiple
program images from one set of source - perhaps for different variants
of a board, with different features enabled, and so on. So I might use
"make prog=board_a" to build the image for board a, and "make
prog=board_b" for board b. Each build will be done in its own directory
- builds/build_a or builds/build_b. Often I will want to build for both boards - then I will do "make prog="board_a board_b"" (with a default
setting for the most common images).