• Re: 32 bits time_t and Y2038 issue

    From Michael Schwingen@21:1/5 to David Brown on Sat Mar 15 16:30:21 2025
    On 2025-03-11, David Brown <david.brown@hesbynett.no> wrote:
    package as-is. For anything other than a quick demo, my preferred setup
    is using makefiles for the build along with an ARM gcc toolchain. That
    way I can always build my software, from any system, and archive the toolchain. (One day, I will also try using clang with these packages,
    but I haven't done so yet.)

    Same here. I just switched to ARM gcc + picolibc for all my ARM projects - this required some changes in the way my makefiles generate linker scripts
    and startup code, and now I am quite happy with that setup.


I have one project where I needed custom time functions: a nixie clock that
has both an RTC (with seconds/minutes/... registers) and NTP to get the current time. NTP time is seconds since 1.1.1900, in UTC.

    The sane approach to handling timezones and DST is the unix way: keep everything in UTC internally and convert to localtime when displaying the
    time. To set the RTC, that requires a version of mktime that does *not* do timezone conversion - I simply pulled mktime from the newlib sources and removed the timezone stuff - done. You could write that stuff yourself, but getting all the corner cases right will take some time. The existing code
    is there and works fine.
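
If you do want to roll the core calculation yourself, the UTC-only part of
mktime (essentially a timegm()) can be sketched like this - untested sketch,
not the newlib code, and the names and the NTP constant are just mine:

#include <stdint.h>

/* Days since 1970-01-01 using the well-known "days from civil" calculation.
   year is the full year (e.g. 2025), m is 1..12, d is 1..31. */
static int64_t days_from_civil(int y, int m, int d)
{
    y -= m <= 2;
    int64_t era = (y >= 0 ? y : y - 399) / 400;
    int yoe = (int)(y - era * 400);                            /* [0, 399] */
    int doy = (153 * (m + (m > 2 ? -3 : 9)) + 2) / 5 + d - 1;  /* [0, 365] */
    int64_t doe = (int64_t)yoe * 365 + yoe / 4 - yoe / 100 + doy;
    return era * 146097 + doe - 719468;
}

/* UTC-only mktime - no timezone, no DST - which is what you want for the RTC */
int64_t utc_mktime(int year, int mon, int mday, int hour, int min, int sec)
{
    return days_from_civil(year, mon, mday) * 86400LL
         + hour * 3600LL + min * 60LL + sec;
}

/* NTP counts seconds from 1900-01-01; subtract this to get Unix time */
#define NTP_UNIX_OFFSET 2208988800UL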

    cu
    Michael
    --
    Some people have no respect of age unless it is bottled.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Edwards@21:1/5 to Michael Schwingen on Sat Mar 15 17:02:04 2025
    On 2025-03-15, Michael Schwingen <news-1513678000@discworld.dascon.de> wrote:
    On 2025-03-11, David Brown <david.brown@hesbynett.no> wrote:
    package as-is. For anything other than a quick demo, my preferred setup
    is using makefiles for the build along with an ARM gcc toolchain. That
    way I can always build my software, from any system, and archive the
    toolchain. (One day, I will also try using clang with these packages,
    but I haven't done so yet.)

    Same here. I just switched to ARM gcc + picolibc for all my ARM projects - this required some changes in the way my makefiles generate linker scripts and startup code, and now I am quite happy with that setup.

    Yep. IMO, that's definitely the "One True Answer" for embedded
    development.

    I worked with a guy who wanted to use Eclipse for embedded
    development. After _months_ of f&*king around, he was finally able to
    build a binary that worked.

    But trying to build that Eclipse "project" on another computer (same
OS, same version of Eclipse, same toolchain) was a complete failure.

    I finally told him it was fine if he wanted to use Eclipse as his
    editor, gdb front-end, SVN gui, filesystem browser, office-cleaner and nose-wiper. But it was a non-negotiable requirement that it be
    possible to check the source tree and toolchain out of SVN, type
    "make", hit enter, and end up with a working binary.

    --
    Grant

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael Schwingen@21:1/5 to Grant Edwards on Sat Mar 15 23:26:49 2025
    On 2025-03-15, Grant Edwards <invalid@invalid.invalid> wrote:
    I finally told him it was fine if he wanted to use Eclipse as his
    editor, gdb front-end, SVN gui, filesystem browser, office-cleaner and nose-wiper. But it was a non-negotiable requirement that it be
    possible to check the source tree and toolchain out of SVN, type
    "make", hit enter, and end up with a working binary.

    Yes, we do that at work - build using makefiles, and some colleagues use eclipse as their editor/debugger. I prefer emacs / ddd.

Getting reproducible build results using eclipse (or some vendor-patched eclipse) is a PITA.

    cu
    Michael
    --
    Some people have no respect of age unless it is bottled.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Michael Schwingen on Sat Mar 22 11:19:56 2025
    On 21/03/2025 21:53, Michael Schwingen wrote:
    On 2025-03-21, David Brown <david.brown@hesbynett.no> wrote:

    The way I use recursive makes is /really/ recursive - the main make
    (typically split into a few include makefiles for convenience, but only
one real make) handles everything, and it does some of that by calling
    /itself/ recursively. It is quite common for me to build multiple
    program images from one set of source - perhaps for different variants
    of a board, with different features enabled, and so on. So I might use
    "make prog=board_a" to build the image for board a, and "make
    prog=board_b" for board b. Each build will be done in its own directory
    - builds/build_a or builds/build_b. Often I will want to build for both
    boards - then I will do "make prog="board_a board_b"" (with a default
    setting for the most common images).

    OK, that is not the classic recursive make pattern (ie. run make in each subdirectory).

    Agreed - it is not the pattern that the famous paper warned against.
    But it /is/ recursive make. And in general, I think recursive make can potentially be useful in various ways, but you have to be very careful
    about how you use it in order to do so safely (and efficiently - but of
    course safety and correctness is the priority).

    I do that (ie. building for multiple boards) using build
    scripts that are external to make.

    cu
    Michael

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to All on Sat Mar 22 14:29:46 2025
    On 22/03/2025 00:00, Hans-Bernhard Bröker wrote:
    Am 21.03.2025 um 16:45 schrieb David Brown:
    On 21/03/2025 15:04, Waldek Hebisch wrote:
    David Brown <david.brown@hesbynett.no> wrote:

In a project of over 500 files in 70 directories, it's a lot more work than using wildcards and not keeping old unneeded files mixed in with
    source files.
    [...]

    This argument blindly assumes files matching the wildcard patterns must self-evidently be "old", and "still" in there.  That assumption can be _wildly_ wrong.

    Most assumptions can be wildly wrong at times :-)

    People will sometimes make backup copies of source
    files in situ, e.g. for experimentation.  Source files can also get accidentally lost or moved.


    Yes, people do all sorts of things - I was merely describing what I
    think is the best way to organise files, in most circumstances.

Sometimes you do want to make a backup copy of a source file before
    doing some odd changes, and you don't want to do it via your source code
    system (git, subversion, whatever). This should be rare enough, and
    temporary enough, that you can usually put the copy in a completely
    separate directory without risking confusion. Or you simply make your
    copy of "file.c" as "file.c.old", "file.c.1", or similar - anything
    except "Copy of file.c".

And if you are losing or moving your files accidentally, you have bigger problems than can be solved by manual file lists!


    Adding a source to the build on the sole justification that it exists,
    in a given place, IMHO undermines any remotely sane level of
    configuration management.  Skipping a file simply because it's been lost
    is even worse.

    If I have a project, the files in the project are in the project
    directory. Where else would they be? And what other files would I have
    in the project directory than project files? It makes no sense to me to
    have a random bunch of old, broken, lost or misplaced source files
    inside the source file directories of a project. If I remove a file
    from a project, I remove it from the project - and if I want to see the
    old file some time in the future, I can see it in the source code
    revision system.

    As regards "lost" files, I don't know where to being to follow your
    argument. How do you "lose" files? And having lost a file, how could
    your build do anything other than skip it? Of course your build is
    likely to fail, at least during linking - regardless of whether you have manually-maintained file lists or automatic lists from wildcards.


    Hunting for what source file the undefined reference in the final link
    was supposed to have come from, but didn't, is rather less fun than a
    clear message from Make that it cannot build foo.o because its source is nowhere to be found.

    How often have you "lost" a file accidentally? At the risk of making an assumption, I bet it is a much rarer event than including a new file in
    a project and forgetting to add it to your manual makefile lists, and
    having to figure out what went wrong in /that/ build.

    Or for even more fun, you refactor to move a function from an existing C
    file that has grown too big, and put it into a new file and expand the function. But you've accidentally left the function in the old file, so
    your build works without complaint and yet no matter how many "printf"
    debug lines you add to the function, they never turn up in your testing
    - because your build is still using the old file and old function, and
    does not catch the duplication at link time.

    The /real/ problem with a haphazard source file organisation is when
    someone else has to look at the project (perhaps that person being your
    future self in ten years time). The new maintainer's task is to find
    all uses of the function "foo" and re-code them to use "bar" instead.
    They are faced with the fun of trying to figure out which of the 20
    files that grep said used "foo" are actually relevant - which are part
    of the real build, which are part of experimental or alternative builds
done sometimes, and which are junk left over because a previous developer
    did no housekeeping.


    Of course there are pros and cons to every way of organising files. And sometimes you need a variation of a standard rule - you need /some/
    flexibility to adapt to the needs of any given project. But start with
    the rational setup of having files in the project source directory if
    and only if they are part of the project source. Then you can have the convenience and reliability of wildcard rules in your build setup. And
    /then/ you can add in exceptions if you have good reason for it -
    specify exceptions to the patterns, don't throw out the whole pattern.


      The opposite job of hunting down duplicate
    definitions introduced by spare source files might be easier --- but
    then again, it might not be.  Do you _always_ know, off the top of your head, whether the definition of function "get_bar" was supposed to be in dir1/dir2/baz.cpp or dir3/dir4/dir5/baz.cpp?


    Generally, yes, I do - at least for my own code or code that I am
    working with heavily. Amongst other things, I would avoid having two
    "baz.cpp" files. (Sometimes files from different upstream sources might
    share a name by coincidence, so the build has to tackle that safely, but
    humans get these mixed up more easily than computers.) And I have the directories and subdirectories named sensibly with files grouped
    appropriately, to make it easier to navigate.

    But one thing I can be sure of with my system - "get_bar" is only
    defined in one place in one file. With a less regular approach with
    manual file lists and old (or moved, or "lost") files hanging around,
    maybe "get_bar" is defined in /both/ versions of "baz.cpp" and it is far
    from clear which one is actually used in the project.

Compared to the effort needed to create a file, adding an entry to a file list
is negligible.

    That's true.

But compared to having a wildcard search to include all .c and .cpp
    files in the source directories, maintaining file lists is still more
    than nothing!

    Which IMHO actually is the best argument _not_ to do it every time you
    run the build.  And that includes not having make to do it for you,
    every time.  All that wildcard discovery adds work to every build while introducing unnecessary risk to the build's reproducibility.

    There's no risk to build reproducibility. Why would there be? If you
    have the same files, you have the same wild-card generated lists. If
    you have different files, you have different generated lists - but you
    don't expect the same build binary from two different sets of source files!


    Setting up file lists using wildcards is a type of job best done just
    once, so after you've verified and fine-tuned the result, you save it
    and only repeat the procedure on massive additions or structural changes.


    It's the kind of job best done automatically, in a fraction of a
    millisecond, without risk of errors or getting out of date.

    Keeping that list updated will also be less of a chore than enforcing a
    "thou shalt not put files in that folder lest they will be added to the
    build without your consent" policy.

    I also insist on a policy of not letting my cat walk across my keyboard
    while the editor is in focus. I don't consider that any more of a
    "chore" than a policy of putting the right files in the right place!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael Schwingen@21:1/5 to David Brown on Sat Mar 22 14:46:23 2025
    On 2025-03-22, David Brown <david.brown@hesbynett.no> wrote:

    If I have a project, the files in the project are in the project
    directory. Where else would they be? And what other files would I have
    in the project directory than project files?

    If the project can be compiled for different targets, you may have files
    that are used only for one target - stuff like i2c_stm32f0.c and
    i2c_stm32f1.c.

    Both are project files, but only one is supposed to end up in the
    compilation. You may work around this by putting files in separate directories, but at some point you end up with lots of directories with only
    1 file.

    This gets to the point of build configuration - make needs to know which
    files belong to a build configuration. Putting "#ifdef TARGET_STM32F0"
    around the whole C file is not a good way to do this in a larger project
    (not only because newer compilers complain that "ISO C forbids an empty translation unit").
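
i.e. the pattern in question is roughly this (TARGET_STM32F0 is just an
illustrative macro name here, not from any real SDK):

/* i2c_stm32f0.c - the whole file guarded by the target macro */
#ifdef TARGET_STM32F0
void i2c_init(void)
{
    /* F0-specific register setup ... */
}
#endif
/* with TARGET_STM32F0 undefined, this is an empty translation unit */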

Some optional features influence both make and the compile process - at
    work, we decided to put that knowledge outside make, and generate sets of matching include files for make/c/c++ during the configure stage.

    As you said, there are pros and cons - use what works for your project.

    cu
    Michael
    --
    Some people have no respect of age unless it is bottled.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Waldek Hebisch@21:1/5 to David Brown on Sat Mar 22 15:57:50 2025
    David Brown <david.brown@hesbynett.no> wrote:
    On 21/03/2025 15:04, Waldek Hebisch wrote:
    David Brown <david.brown@hesbynett.no> wrote:
    On 18/03/2025 19:28, Michael Schwingen wrote:
    On 2025-03-18, David Brown <david.brown@hesbynett.no> wrote:

A good makefile picks up the new files automatically and handles all the dependencies, so often all you need is a new "make -j".

I don't do that anymore - wildcards in makefiles can lead to all kinds of strange behaviour due to files that are left/placed somewhere but are not really needed.

    I'm sure you can guess the correct way to handle that - don't leave
    files in the wrong places :-)

    I prefer to list the files I want compiled - it is not that
    much work.


    In a project of over 500 files in 70 directories, it's a lot more work
    than using wildcards and not keeping old unneeded files mixed in with
    source files.

In a project with about 550 normal source files, 80 headers, 200 test
files, and about 1200 generated files spread over 12 directories, I use
explicit file lists. File lists increase the volume of the Makefiles,
but in my experience the extra work to maintain them is very small.
Compared to the effort needed to create a file, adding an entry to a file
list is negligible.

    That's true.

But compared to having a wildcard search to include all .c and .cpp files
    in the source directories, maintaining file lists is still more than
    nothing!

    However, the real benefit from using automatic file searches like this
    is two-fold. One is that you can't get it wrong - you can't forget to
    add the new file to the list, or remove deleted or renamed files from
    the list.

Depends on your workflow. I frequently do development outside
of the source tree, so one can forget to copy a file into the source
tree. An explicit list means that you get a clear build error when a file
is missing, which needs fixing anyway. The error is possibly less clear
when you add a file without updating the file list, but since
adding a file is normally followed by a make, it is easy to find
the reason.

    The other - bigger - effect is that there is never any doubt
    about the files in the project. A file is in the project and build if
    and only if it is in one of the source directories.

A file is in the project if and only if it is in the source
repository. Concerning the "build", a project normally allows
optional/variant files, and a file is built if and only if it
is needed in the chosen configuration. Clearly, a file not needed
in any configuration has no place in the source repository.

During development in my work tree (different from the source
repository!) there may be some auxiliary files (that is rather
infrequent and not a big deal anyway, I just mention how
I work).

    That consistency is
    very important to me - and to anyone else trying to look at the project.
    So any technical help in enforcing that is a good thing in my book.

    Well, for me (as mentioned above) "files in the project" and "build files"
    are different things.

    Explicit lists are useful if groups of files should get somewhat
    different treatment (I have less need for this now, but it was
    important in the past).


    I do sometimes have explicit lists for /directories/ - but not for
    files. I often have one branch in the source directory for my own code,
    and one branch for things like vendor SDKs and third-party code. I can
    then use stricter static warnings for my own code, without triggering
    lots of warnings in external code.

IMO being explicit helps with readability and makes the code more
amenable to audit.


    A simple rule of "all files are in the project" is more amenable to audit.

Maybe your wildcard use is very simple, but a year ago wildcards
were an important part in obfuscating the presence of malicious code
in lzma.

But the more important part is keeping the info together, inside the Makefile.

    --
    Waldek Hebisch

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Michael Schwingen on Sat Mar 22 17:57:30 2025
    On 22/03/2025 15:46, Michael Schwingen wrote:
    On 2025-03-22, David Brown <david.brown@hesbynett.no> wrote:

    If I have a project, the files in the project are in the project
    directory. Where else would they be? And what other files would I have
    in the project directory than project files?

    If the project can be compiled for different targets, you may have files
    that are used only for one target - stuff like i2c_stm32f0.c and i2c_stm32f1.c.

    Both are project files, but only one is supposed to end up in the compilation. You may work around this by putting files in separate directories, but at some point you end up with lots of directories with only 1 file.

    That is a possibility, yes - and I have had such cases and dealt with
    them by including or excluding specific directories. You are very
    unlikely to have /lots/ of directories with only one file, because your
    one project does not usually need lots of builds. And you also often
    have multiple device-specific files, not just one - especially for significantly different devices.

    It is also not uncommon to see headers done like this :

    // device.h

    #if DEVICE == DEVICE_STM32F0
    #include "device_stm32f0.h"
    #elif DEVICE == DEVICE_STM32F1
    #include "device_stm32f1.h"
    #endif

    That's less common for C or C++ files than for headers.


    This gets to the point of build configuration - make needs to know which files belong to a build configuration. Putting "#ifdef TARGET_STM32F0" around the whole C file is not a good way to do this in a larger project
    (not only because newer compilers complain that "ISO C forbids an empty translation unit").

    Add :

    enum { this_file_may_be_intentionally_blank };

    :-)


Some optional features influence both make and the compile process - at work, we decided to put that knowledge outside make, and generate sets of matching include files for make/c/c++ during the configure stage.

    As you said, there are pros and cons - use what works for your project.


Indeed. But I find wildcard identification of files for the build to have
    many more pros, and fewer cons, than explicit lists - at least as the
    normal pattern.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Waldek Hebisch on Sat Mar 22 18:02:56 2025
    On 22/03/2025 16:57, Waldek Hebisch wrote:
    David Brown <david.brown@hesbynett.no> wrote:

    A simple rule of "all files are in the project" is more amenable to audit.

    Maybe your wildcard use is very simple,

    My wildcards are often recursive, and cover different kinds of files, so
    they are not entirely simple. (That also makes it easier to re-use
    virtually the same makefiles in different projects.)

but a year ago wildcards
were an important part in obfuscating the presence of malicious code
in lzma.

    I admit I have been thinking primarily about work projects where commit
    access is only from a few people. If there are malicious actors
    involved, then probably any way to organise files and projects can be
    abused.


But the more important part is keeping the info together, inside the Makefile.


    Agreed. That is a lot more important than whether the list of files in
    the build is generated from a wildcard pattern specified in the
    makefile, or from a manual list of files in the makefile.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to pozz on Tue Mar 11 17:32:03 2025
    On 11/03/2025 16:22, pozz wrote:
    I have an embedded project that is compiled in Atmel Studio 7.0. The
target is an ARM MCU, so the toolchain is arm-gnu-toolchain. The
    installed toolchain version is 6.3.1.508. newlib version is 2.5.0.


    I /seriously/ dislike Microchip's way of handling toolchains. They work
    with old, outdated versions, rename and rebrand them and their
    documentation to make it look like they wrote them themselves, then add
    license checks and software locks so that optimisation is disabled
    unless you pay them vast amounts of money for the software other people
    wrote and gave away freely. To my knowledge, they do not break the
    letter of the license for GCC and other tools and libraries, but they
    most certainly break the spirit of the licenses in every way imaginable.

    Prior to being bought by Microchip, Atmel was bad - but not as bad.

    So if for some reason I have no choice but to use a device from Atmel / Microchip, I do so using tools from elsewhere.

    As a general rule, the gcc-based toolchains from ARM are the industry
    standard, and are used as the base by most ARM microcontroller
    suppliers. Some include additional library options, others provide the
    package as-is. For anything other than a quick demo, my preferred setup
    is using makefiles for the build along with an ARM gcc toolchain. That
    way I can always build my software, from any system, and archive the
    toolchain. (One day, I will also try using clang with these packages,
    but I haven't done so yet.)

    Any reasonably modern ARM gcc toolchain will have 64-bit time_t. I
    never like changing toolchains on an existing project, but you might
    make an exception here.

    However, writing functions to support time conversions is not difficult.
    The trick is not to start at 01.01.1970, but start at a convenient
    date as early as you will need to handle - 01.01.2025 would seem a
    logical point. Use <https://www.unixtimestamp.com/> to get the time_t
    constant for the start of your epoch.

    To turn the current time_t value into a human-readable time and date,
    first take the current time_t and subtract the epoch start. Divide by
    365 * 24 * 60 * 60 to get the additional years. Divide the leftovers by
    24 * 60 * 60 to get the additional days. Use a table of days in the
    months to figure out the month. Leap year handling is left as an
    exercise for the reader (hint - 2100, 2200 and 2300 are not leap years,
    while 2400 is). Use the website I linked to check your results.
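
As a rough, untested sketch of that recipe (all names here are illustrative,
and 1735689600 is the Unix timestamp for 2025-01-01 00:00:00 UTC - check it
against the site above before relying on it):

#include <stdint.h>

#define EPOCH_2025 1735689600UL     /* 2025-01-01 00:00:00 UTC */

static int is_leap(unsigned y)
{
    return (y % 4 == 0 && y % 100 != 0) || (y % 400 == 0);
}

/* t is seconds since the Unix epoch, assumed to be >= EPOCH_2025 */
void seconds_to_date(uint32_t t, unsigned *year, unsigned *month, unsigned *day,
                     unsigned *hour, unsigned *min, unsigned *sec)
{
    static const uint8_t mdays[12] = {31,28,31,30,31,30,31,31,30,31,30,31};
    uint32_t s = t - EPOCH_2025;          /* seconds since our own epoch */

    *sec  = s % 60;  s /= 60;
    *min  = s % 60;  s /= 60;
    *hour = s % 24;  s /= 24;             /* s is now whole days since 2025-01-01 */

    unsigned y = 2025;
    while (s >= (is_leap(y) ? 366u : 365u)) {      /* peel off whole years */
        s -= is_leap(y) ? 366u : 365u;
        y++;
    }
    unsigned m = 0, dim;
    while (s >= (dim = mdays[m] + (m == 1 && is_leap(y) ? 1u : 0u))) {
        s -= dim;                                  /* peel off whole months */
        m++;
    }
    *year = y;  *month = m + 1;  *day = s + 1;
}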

    Or you can get the sources for a modern version of newlib, and pull the routines from there.


    David


    In this build system the type time_t is defined as long, so 32 bits.

I'm using time_t mainly to show the time on a display for the user (as a
broken-down time) and to timestamp some events (which the user
will see as a broken-down time).

The time can be received over the Internet, or set by the user if the device is
not connected. In both cases, a time_t is used in the end.

As you know, my system will have the Y2038 issue. I don't know whether some
of my devices will still be active in 2038; anyway, I'd like to fix this
potential issue now.

One possibility is to use a modern toolchain[1] that most probably uses
a newer version of newlib with a 64-bit time_t. However, I think I
would have to address several warnings and other problems after upgrading the toolchain.

Another possibility is to write my own my_mktime(), my_localtime() and
so on that accept and return my_time_t variables, defined as 64 bits. However I'm not confident writing such functions. Do you have some implementations? I don't need fully functional time functions; for
example, the timezone can be fixed at build time, I don't need to set it
at runtime.

    Any suggestions?


    [1] https://developer.arm.com/-/media/Files/downloads/gnu/14.2.rel1/binrel/arm-gnu-toolchain-14.2.rel1-mingw-w64-i686-arm-none-eabi.zip

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to pozz on Wed Mar 12 10:33:25 2025
    On 11/03/2025 23:21, pozz wrote:
On 11/03/2025 17:32, David Brown wrote:
    On 11/03/2025 16:22, pozz wrote:
    I have an embedded project that is compiled in Atmel Studio 7.0. The
target is an ARM MCU, so the toolchain is arm-gnu-toolchain. The
    installed toolchain version is 6.3.1.508. newlib version is 2.5.0.


    I /seriously/ dislike Microchip's way of handling toolchains.  They
    work with old, outdated versions, rename and rebrand them and their
    documentation to make it look like they wrote them themselves, then
    add license checks and software locks so that optimisation is disabled
    unless you pay them vast amounts of money for the software other
    people wrote and gave away freely.  To my knowledge, they do not break
    the letter of the license for GCC and other tools and libraries, but
    they most certainly break the spirit of the licenses in every way
    imaginable.

Maybe you are thinking about the Microchip IDE named MPLAB X or something similar. I read something about disabled optimizations in the free
version of the toolchain.


    I believe it applies to all of Microchip's toolchains - and that now
    includes those for the Atmel devices it acquired.

However I'm using the *Atmel Studio* IDE, which is an old IDE distributed by Atmel before the Microchip purchase. The documentation speaks about
some Atmel customization of the ARM gcc toolchain, but it clearly specifies
that the toolchain is an ARM gcc.

    OK.



    Prior to being bought by Microchip, Atmel was bad - but not as bad.

    Why do you think Atmel was bad? I think they had good products.

It is not the products that I am talking about. I've always liked the
    AVR architecture (though it could have been massively better with a few
    small changes). Though I haven't used their ARM devices myself, I have
    heard nice things about them. I am talking about the toolchains.

    They had a very mixed attitude to open source software. For a long
    time, they dismissed GCC completely, and gave no help or information for
    other parts of the ecosystem (debuggers, programmers, etc.). Eventually
    they realised that there was a substantial customer base who did not
    want to pay huge prices for IAR toolchains, or preferred open-source
    toolchains for other reasons, and they made various half-hearted efforts
    to support GCC for the AVR. Basically, they did enough to be able to
    have a working setup that they could provide it for free, but not enough
    to make it efficient. I think they spent more money on rebranding GCC
    for the AVR and ARM than they did on technically improving them. People looking for AVR GCC toolchains were left with no idea what version of
    GCC they can get from Atmel, or how those builds compare with mainline
    GCC versions, what devices they support, or how the various required
    extensions are handled. Their ARM toolchains were a bit more standard,
    and a bit less obfuscated in their branding and versioning.

    So not as bad as Microchip, but still far from good.



    So if for some reason I have no choice but to use a device from Atmel
    / Microchip, I do so using tools from elsewhere.

    As a general rule, the gcc-based toolchains from ARM are the industry
    standard, and are used as the base by most ARM microcontroller
    suppliers.  Some include additional library options, others provide
    the package as-is.  For anything other than a quick demo, my preferred
    setup is using makefiles for the build along with an ARM gcc
    toolchain.  That way I can always build my software, from any system,
    and archive the toolchain.  (One day, I will also try using clang with
    these packages, but I haven't done so yet.)

    Yes, you're right, but now it's too late to change the toolchain.


    Any reasonably modern ARM gcc toolchain will have 64-bit time_t.  I
    never like changing toolchains on an existing project, but you might
    make an exception here.

    I will check.


    However, writing functions to support time conversions is not
    difficult.   The trick is not to start at 01.01.1970, but start at a
    convenient date as early as you will need to handle - 01.01.2025 would
    seem a logical point.  Use <https://www.unixtimestamp.com/> to get the
    time_t constant for the start of your epoch.

    To turn the current time_t value into a human-readable time and date,
    first take the current time_t and subtract the epoch start.  Divide by
    365 * 24 * 60 * 60 to get the additional years.  Divide the leftovers
    by 24 * 60 * 60 to get the additional days.  Use a table of days in
    the months to figure out the month.  Leap year handling is left as an
    exercise for the reader (hint - 2100, 2200 and 2300 are not leap
    years, while 2400 is).  Use the website I linked to check your results.

    If I had to rewrite my own functions, I could define time64_t as
    uint64_t, keeping the Unix epoch as my epoch.

Regarding implementation, I don't know if it is so simple. mktime() fixes the members of the struct tm passed as an argument (and this is useful to
calculate the day of the week). Moreover I don't only need the
conversion from time64_t to struct tm, but vice versa too.


    Day of week calculations are peanuts - divide the seconds count by the
    number of seconds in a day, add a constant value for whatever day
    01.01.1970 was, and reduce modulo 7.
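
Something like this untested sketch (01.01.1970 was a Thursday, hence the 4
with 0 = Sunday; the function name is just illustrative):

#include <stdint.h>

/* 0 = Sunday ... 6 = Saturday; assumes times at or after the 1970 epoch */
static unsigned day_of_week(int64_t unix_seconds)
{
    return (unsigned)((unix_seconds / 86400 + 4) % 7);
}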

    Most of the effort for converting a struct tm into a time_t is checking
    that the values make sense.

    For all of this, the big question is /why/ you are doing it. What are
    you doing with your times? Where are you getting them? Are you
    actually doing this in a sensible way because they suit your
    application, or are you just using these types and structures because
    they are part of the standard C library - which is not good enough for
    your needs here?

    Maybe you are going about it all the wrong way. If you need to be able
    to display and set the current time and date, and to be able to
    conveniently measure time differences for alarms, repetitive tasks,
    etc., then you probably don't need any correlation between your
    monotonic seconds counter and your time/date tracker. All you need to
    do is add one second to each, every second. I don't know the details of
    your application (obviously), but often no conversion is needed either way.
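
For illustration, the "add one second to each" idea can be as small as this
sketch - the names are invented, and the 1 Hz hook depends on your MCU:

#include <stdint.h>

static volatile uint32_t uptime_seconds;         /* monotonic, for timeouts/alarms */
static volatile uint8_t tod_h, tod_m, tod_s;     /* wall-clock time of day */

void one_second_tick(void)        /* call from a 1 Hz RTC/timer interrupt */
{
    uptime_seconds++;             /* never adjusted, never converted */
    if (++tod_s < 60) return;
    tod_s = 0;
    if (++tod_m < 60) return;
    tod_m = 0;
    if (++tod_h < 24) return;
    tod_h = 0;
    /* advance the calendar date here (day/month/year, leap years) */
}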


    Or you can get the sources for a modern version of newlib, and pull
    the routines from there.

It's very complex code. The time functions are written for whatever
timezone is set at runtime (the TZ env variable), so their complexity is
higher.


    So find a simpler standard C library implementation. Try the avrlibc,
    for example.

    But I have no doubt at all that you can make all this yourself easily
    enough.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to pozz on Wed Mar 12 11:14:27 2025
    On 12/03/2025 08:44, pozz wrote:
On 11/03/2025 17:32, David Brown wrote:
    [...]
    For anything other than a quick demo, my preferred setup is using
    makefiles for the build along with an ARM gcc toolchain.  That way I
    can always build my software, from any system, and archive the toolchain.
    [...]

Regarding this point, it's what I want to do in new projects. What I
don't know is...

Why do many silicon vendors provide a *custom* ARM gcc toolchain? Are those customizations important for building firmware for their MCUs? If not, why
do they invest money to make changes in a toolchain? It isn't a simple job.


    Some changes are reasonable. A prime example is support for newer
    devices - or workarounds for bugs in existing devices. The ideal way to
    handle this sort of thing, as done by a number of suppliers, is to
    figure out a fix and push it upstream.

    Full upstream projects - primarily GCC and binutils - will generally add
    these in to their current development line. They will generally only
    add it in to previous lines if the changes are small, fairly clean
    patches, important for code correctness, and don't require changes in additional areas (such as command-line options or documentation). Even
    then, they only go a few releases back.

    The main next-step upstream project for ARM GCC tools is ARM's GCC
    toolchain - they can be a bit more flexible, and maintain a list of
    patches that go on top of GCC and binutils releases so that such fixes
    can be back-ported to older sets. Their releases are always a bit
    behind upstream mainline, because they need time to do their testing and packaging.

Finally, some microcontroller manufacturers will have their own packaged
    builds based on ARM's builds. (Other intermediaries used to be popular
    before, such as Code Sourcery, but I think ARM's builds are now
    ubiquitous.) They can react faster to adding fixes and patches to
    existing code - they don't need to think about conflicts or testing with
    other people's ARM cores, for example. But again, their releases will
    be even further behind mainline because they must be tested against all
    their SDK's and other code.

    These sorts of things are all good for the user, good for the community
    as a whole, and good for the manufacturer - they can make quick fixes if
    they have to, but in the long term they keep aligned with mainline.


    But some vendors - such as Microchip - put a great deal of effort into re-branding. They are looking for marketing, control, and forced tie-in
    - they want it to be as hard as possible to move to other vendors. And
    they can have PHB's that think the development tool department of the
    company should make a profit on its own, rather than be there to support
    sales from the device production - thus they try to force developers to
    pay for licenses. IMHO this is all counter-productive - I am sure I am
    not the only developer who would not consider using microcontrollers
    from Microchip unless there was no other option. (I am happy to use
    other parts from Microchip - they make good devices.)


Another point is visual debugging. I don't mean a text editor with syntax highlighting, code completion, project management and so on. There are
many tools around for this.
I used to have a button in the IDE to launch a debugging session. Generating a good debugging session configuration is simplified in an
IDE if you use a mainstream debug probe (for example, J-Link).

How do you debug your projects without a full-featured and ready-to-use
IDE from the silicon vendor?

    That varies from project to project, part to part. But I am quite happy
    to use an IDE from a vendor while using an external makefile and a
    standard GNU ARM toolchain for the build.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to pozz on Tue Mar 18 11:34:17 2025
    On 18/03/2025 09:21, pozz wrote:
On 15/03/2025 17:30, Michael Schwingen wrote:
    On 2025-03-11, David Brown <david.brown@hesbynett.no> wrote:
package as-is.  For anything other than a quick demo, my preferred setup is using makefiles for the build along with an ARM gcc toolchain.  That way I can always build my software, from any system, and archive the
    toolchain.  (One day, I will also try using clang with these packages,
    but I haven't done so yet.)

    Same here.  I just switched to ARM gcc + picolibc for all my ARM
    projects -
    this required some changes in the way my makefiles generate linker
    scripts
    and startup code, and now I am quite happy with that setup.

One day or another I will try to move from my current build system (which depends on the silicon vendor IDE, libraries, middleware, drivers, and so
on) to a generic makefile and generic toolchain.

Honestly, I tried in the past and ran into some issues. First of all, I use a
Windows machine for development, and writing makefiles that work on
Windows is not simple. Maybe next time I will try with WSL, writing
makefiles that work directly in Unix.

    Install msys2 (and the mingw-64 version of gcc, if you want a native
    compiler too). Make sure the relevant "bin" directory is on your path.
    Then gnu make will work perfectly, along with all the little *nix
    utilities such as touch, cp, mv, sed, etc., that makefiles sometimes use.

    The only time I have seen problems with makefiles on Windows is when
    using ancient partial make implementations, such as from Borland, along
    with more advanced modern makefiles, or when someone mistakenly uses
    MS's not-make "nmake" program instead of "make".

    Of course your builds will be slower on Windows than on Linux, since
    Windows is slow to start programs, slow to access files, and poor at
    doing it all in parallel, but there is nothing hindering makefiles in
    Windows. My builds regularly work identically under Linux and Windows,
    with the same makefiles.


Another problem that I see is the complexity of real projects: TCP/IP stack, crypto libraries, drivers, RTOS, and so on. Silicon vendors
usually give you several example projects that just work with one
click, using their IDE, libraries, debuggers, and so on. Moving from
this complex build system to custom makefiles and a toolchain isn't so
simple.


    That's why you still have a job. Putting together embedded systems is
    not like making a Lego kit. Running a pre-made demo can be easy -
    merging the right bits of different demos, samples and libraries into
    complete systems is hard work. It is not easy whether you use an IDE
    for project and build management, or by manual makefiles. Some aspects
    may be easier with one tool, other aspects will be harder.

Suppose you do the job of "transforming" the example project into a
makefile. You start working with your preferred IDE/text
editor/toolchain, and you are happy.
After some months the requirements change and you need to add a driver
for a new peripheral or a complex library. You know there are
ready-to-use example projects in the original IDE from the silicon vendor
that use exactly what you need (mbedtls, DMA, ADC...), but you can't use
them because you changed your build system.


    Find the files you need from the SDK or libraries, copy them into your
    own project directories (keep them organised sensibly).

    A good makefile picks up the new files automatically and handles all the dependencies, so often all you need is a new "make -j". But you might
    have to set up include directories, or even particular flags or settings
    for different files.

Another problem is debugging: launching a debug session means
downloading the binary through a USB debug probe and the SWD port, adding some breakpoints, seeing the current values of some variables and so on. All this works very well without big issues when using the original IDE. Are you able
to configure *your* custom development system to launch debug sessions?


    Build your elf file with debugging information, open the elf file in the debugger.

    You probably have a bit of setup to specify things like the exact microcontroller target, but mostly it works fine.

Finally, another question. Silicon vendors usually provide custom toolchains that often are a customized version of the arm-gcc toolchain
    (yes, here I'm talking about Cortex-M MCUs only, otherwise it would be
    much more complex).
    What happens if I move to the generic arm-gcc?


    This has already been covered. Most vendors now use standard toolchain
    builds from ARM.

    What happens if the vendor has their own customized tool and you switch
    to a generic ARM tool depends on the customization and the tool
    versions. Usually it means you get a new toolchain with better
    warnings, better optimisation, and newer language standard support. But
    it might also mean vendor-supplied code with bugs no longer works as it
    did. (You don't have any bugs in your own code, I presume :-) )

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael Schwingen@21:1/5 to pozz on Tue Mar 18 18:44:16 2025
    On 2025-03-18, pozz <pozzugno@gmail.com> wrote:
One day or another I will try to move from my current build system (which depends on the silicon vendor IDE, libraries, middleware, drivers, and so
on) to a generic makefile and generic toolchain.

    If you are not invested in existing makefiles, have a look at cmake instead
    of make.

Another problem that I see is the complexity of real projects: TCP/IP stack, crypto libraries, drivers, RTOS, and so on. Silicon vendors
usually give you several example projects that just work with one
click, using their IDE, libraries, debuggers, and so on. Moving from
this complex build system to custom makefiles and a toolchain isn't so simple.

    It is not that big a task, and you learn what kind of compiler flags,
    include paths etc. you really need - that helps a lot when you want to integrate those libraries into your own project in the next step.

    My cmake files have functions like
    target_add_HAL_LL_STM32F0(target)
    or
    target_add_freertos(target)
    that take care of adding the right source files, include parameters etc. -
it takes a bit to set this up, but it makes it easy to maintain multiple
    projects.

Another problem is debugging: launching a debug session means
downloading the binary through a USB debug probe and the SWD port, adding some breakpoints, seeing the current values of some variables and so on. All this works very well without big issues when using the original IDE. Are you able
to configure *your* custom development system to launch debug sessions?

    Sure. I have two targets in my makefiles - one starts openocd (needed once
    per debug session, keeps running), and one fires up the debugger (ddd) with
    the correct ELF file. That works a lot faster than firing up the debug
    session in any vendor eclipse I have seen.

Finally, another question. Silicon vendors usually provide custom toolchains that often are a customized version of the arm-gcc toolchain
    (yes, here I'm talking about Cortex-M MCUs only, otherwise it would be
    much more complex).
    What happens if I move to the generic arm-gcc?

Depends on what patches the vendor included, so I would suggest doing the switch early in the development cycle. I have not yet used vendor-patched
    ARM gcc versions - the upstream gcc versions worked just fine for STM32, SamD11, LPC8, LPC17xx, MM32, GD32, Luminary (now TI), TI TM4, TI MSPM0.

    This is exactly what I do. I don't use RTC with registers (seconds, minutes...) anymore, only a 32.768kHz oscillator (present in many MCUs)
    that increments a counter.

    The RTC I used is a MicroCrystal RV3029 - low drift, low power, canned part, works great for this one-off project, and the ESP32 (I wanted NTP for time updates) would use too much power to run on battery, so I can live with the register interface using seconds/minutes/...

    cu
    Michael
    --
    Some people have no respect of age unless it is bottled.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael Schwingen@21:1/5 to David Brown on Tue Mar 18 18:28:25 2025
    On 2025-03-18, David Brown <david.brown@hesbynett.no> wrote:
    Install msys2 (and the mingw-64 version of gcc, if you want a native
    compiler too). Make sure the relevant "bin" directory is on your path.
    Then gnu make will work perfectly, along with all the little *nix
    utilities such as touch, cp, mv, sed, etc., that makefiles sometimes use.

    The only time I have seen problems with makefiles on Windows is when
    using ancient partial make implementations, such as from Borland, along
    with more advanced modern makefiles, or when someone mistakenly uses
    MS's not-make "nmake" program instead of "make".

I have seen problems when using tools that are built during the compile
process and used to generate further C code.

    I would suggest using WSL instead of msys2. I have not used it for cross-compiling, but it works fine (except for file access performance) for
    my documentation process, which needs commandline pdf modification tools
    plus latex.

    A good makefile picks up the new files automatically and handles all the dependencies, so often all you need is a new "make -j".

    I don't do that anymore - wildcards in makefiles can lead to all kinds of strange behaviour due to files that are left/placed somewhere but are not really needed. I prefer to list the files I want compiled - it is not that
    much work.

    cu
    Michael
    --
    Some people have no respect of age unless it is bottled.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Michael Schwingen on Tue Mar 18 20:43:45 2025
    On 18/03/2025 19:28, Michael Schwingen wrote:
    On 2025-03-18, David Brown <david.brown@hesbynett.no> wrote:
    Install msys2 (and the mingw-64 version of gcc, if you want a native
    compiler too). Make sure the relevant "bin" directory is on your path.
    Then gnu make will work perfectly, along with all the little *nix
    utilities such as touch, cp, mv, sed, etc., that makefiles sometimes use.

    The only time I have seen problems with makefiles on Windows is when
    using ancient partial make implementations, such as from Borland, along
    with more advanced modern makefiles, or when someone mistakenly uses
    MS's not-make "nmake" program instead of "make".

I have seen problems when using tools that are built during the compile process and used to generate further C code.

    I have several projects where C code is generated automatically (such as
    with a Python script that turns an image file into a const uint8_t array).

    In the days of FAT filesystems - 16-bit Windows and DOS, primarily - it
    was possible to get issues with that kind of thing along with the weaker
    "make" implementations included with Borland tools and the like. The
    big issue was the timestamp resolution for files was too coarse.


    I would suggest using WSL instead of msys2. I have not used it for cross-compiling, but it works fine (except for file access performance) for my documentation process, which needs commandline pdf modification tools
    plus latex.


    I would not suggest WSL unless you are trying to re-create a full Linux environment - since that is what WSL is. It is a virtual machine with
    Linux. If you have a complex setup with *nix features like soft links,
    or filenames with Windows-hostile characters, or want to use
    /dev/urandom to generate random data automatically in your build
    process, then use WSL. (Or just switch to Linux.)

    msys2 is totally different. The binaries are all native Windows
    binaries, and they all work within the same Windows environment as
    everything else. There are no problems using Windows-style paths
    (though of course it is best to use relative paths and forward slashes
    in your makefiles, #include directives, etc., for cross-platform compatibility). You can use the msys2 programs directly from the normal Windows command window, or Powershell, or in batch files, or directly
    from other Windows programs.

    I have also used pdf modification tools and pdfLaTeX (Miktex) on
    Windows, controlled by make, from msys. (It's been a while, so it was
    probably msys make rather than msys2 make.)

    I do most of that kind of thing from Linux - amongst other things, it is
    faster on the same hardware for all of this stuff. But I like my builds
    to work on Windows as well as Linux.

    A good makefile picks up the new files automatically and handles all the
    dependencies, so often all you need is a new "make -j".

    I don't do that anymore - wildcards in makefiles can lead to all kinds of strange behaviour due to files that are left/placed somewhere but are not really needed.

    I'm sure you can guess the correct way to handle that - don't leave
    files in the wrong places :-)

    I prefer to list the files I want compiled - it is not that
    much work.


    In a project of over 500 files in 70 directories, it's a lot more work
    than using wildcards and not keeping old unneeded files mixed in with
    source files.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to pozz on Tue Mar 18 20:29:38 2025
    On 18/03/2025 17:31, pozz wrote:
On 18/03/2025 11:34, David Brown wrote:
    On 18/03/2025 09:21, pozz wrote:
On 15/03/2025 17:30, Michael Schwingen wrote:
    On 2025-03-11, David Brown <david.brown@hesbynett.no> wrote:
package as-is.  For anything other than a quick demo, my preferred setup
is using makefiles for the build along with an ARM gcc toolchain.  That
way I can always build my software, from any system, and archive the
toolchain.  (One day, I will also try using clang with these packages,
but I haven't done so yet.)

    Same here.  I just switched to ARM gcc + picolibc for all my ARM
    projects -
    this required some changes in the way my makefiles generate linker
    scripts
    and startup code, and now I am quite happy with that setup.

One day or another I will try to move from my current build system
(which depends on the silicon vendor IDE, libraries, middleware, drivers,
and so on) to a generic makefile and generic toolchain.

Honestly, I tried in the past and ran into some issues. First of all, I use a
Windows machine for development, and writing makefiles that work on
Windows is not simple. Maybe next time I will try with WSL, writing
makefiles that work directly in Unix.

    Install msys2 (and the mingw-64 version of gcc, if you want a native
    compiler too).  Make sure the relevant "bin" directory is on your
    path. Then gnu make will work perfectly, along with all the little
    *nix utilities such as touch, cp, mv, sed, etc., that makefiles
    sometimes use.

    Do you run <msys>\usr\bin\make.exe directly from a cmd.exe shell? Or do
    you open a msys specific shell?


    Either works fine.

    So does running "make" from whatever IDE, editor or other tool you want
    to use.


    The only time I have seen problems with makefiles on Windows is when
    using ancient partial make implementations, such as from Borland,
    along with more advanced modern makefiles, or when someone mistakenly
    uses MS's not-make "nmake" program instead of "make".

    Of course your builds will be slower on Windows than on Linux, since
    Windows is slow to start programs, slow to access files, and poor at
    doing it all in parallel, but there is nothing hindering makefiles in
    Windows.  My builds regularly work identically under Linux and
    Windows, with the same makefiles.

I tried to use make for Windows some time ago, but it was a mess. Maybe
the msys2 setup is much more straightforward.


    I have been using "make" on DOS, 16-bit Windows, OS/2, Windows of many flavours, and Linux of all sorts for several decades. I really can't understand why some people feel it is difficult. (Older makes on DOS
    and Windows were more limited in their features, but worked well enough.)

    These days I happily use it on Windows with recursive make (done
    /carefully/, as all recursive makes should be), automatic dependency generation, multiple makefiles, automatic file discovery, parallel
    builds, host-specific code (for things like the toolchain installation directory), and all sorts of other bits and pieces.

Another problem that I see is the complexity of real projects:
TCP/IP stack, crypto libraries, drivers, RTOS, and so on. Silicon
vendors usually give you several example projects that just work
with one click, using their IDE, libraries, debuggers, and so on.
Moving from this complex build system to custom makefiles and
a toolchain isn't so simple.

    That's why you still have a job.  Putting together embedded systems is
    not like making a Lego kit.  Running a pre-made demo can be easy -
    merging the right bits of different demos, samples and libraries into
    complete systems is hard work.  It is not easy whether you use an IDE
    for project and build management, or by manual makefiles.  Some
    aspects may be easier with one tool, other aspects will be harder.

    You're right.


Suppose you do the job of "transforming" the example project into a
makefile. You start working with your preferred IDE/text
editor/toolchain, and you are happy.
After some months the requirements change and you need to add a
driver for a new peripheral or a complex library. You know there are
ready-to-use example projects in the original IDE from the silicon vendor
that use exactly what you need (mbedtls, DMA, ADC...), but you can't
use them because you changed your build system.

    Find the files you need from the SDK or libraries, copy them into your
    own project directories (keep them organised sensibly).

    A good makefile picks up the new files automatically and handles all
    the dependencies, so often all you need is a new "make -j".  But you
    might have to set up include directories, or even particular flags or
settings for different files.
Another problem is debugging: launching a debug session means
downloading the binary through a USB debug probe and the SWD port, adding
some breakpoints, seeing the current values of some variables and so on.
All this works very well without big issues when using the original IDE.
Are you able to configure *your* custom development system to launch
debug sessions?

    Build your elf file with debugging information, open the elf file in
    the debugger.

    What do you mean by "open the elf file in the debugger"?


    A generated elf file contains all the debug information, including
    symbol maps and pointers to source code, as well as the binary.
    Debuggers work with elf files - whether the debugger is included in an
    IDE or is stand-alone.


    You probably have a bit of setup to specify things like the exact
    microcontroller target, but mostly it works fine.

    One final question. Silicon vendors usually provide custom
    toolchains that are often a customized version of the arm-gcc toolchain
    (yes, here I'm talking about Cortex-M MCUs only, otherwise it would
    be much more complex).
    What happens if I move to the generic arm-gcc?


    This has already been covered.  Most vendors now use standard
    toolchain builds from ARM.

    What happens if the vendor has their own customized tool and you
    switch to a generic ARM tool depends on the customization and the tool
    versions.  Usually it means you get a new toolchain with better
    warnings, better optimisation, and newer language standard support.
    But it might also mean vendor-supplied code with bugs no longer works
    as it did.  (You don't have any bugs in your own code, I presume :-) )

    :-)


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Edwards@21:1/5 to David Brown on Tue Mar 18 20:58:33 2025
    On 2025-03-18, David Brown <david.brown@hesbynett.no> wrote:

    msys2 is totally different. The binaries are all native Windows
    binaries, and they all work within the same Windows environment as
    everything else. There are no problems using Windows-style paths
    (though of course it is best to use relative paths and forward slashes
    in your makefiles, #include directives, etc., for cross-platform compatibility). You can use the msys2 programs directly from the normal Windows command window, or Powershell, or in batch files, or directly
    from other Windows programs.

    Are the make recipes run using a normal Unix shell (bash? ash?
    bourne?) with exported environment variables as expected when running
    'make' on Unix?

    The gnu make functions [e.g. $(shell <whatever>)] all work as expected?

    Or are there certain gnu make features you have to avoid for makefiles
    to work under msys2?

    --
    Grant

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Grant Edwards on Wed Mar 19 08:24:32 2025
    On 18/03/2025 21:58, Grant Edwards wrote:
    On 2025-03-18, David Brown <david.brown@hesbynett.no> wrote:

    msys2 is totally different. The binaries are all native Windows
    binaries, and they all work within the same Windows environment as
    everything else. There are no problems using Windows-style paths
    (though of course it is best to use relative paths and forward slashes
    in your makefiles, #include directives, etc., for cross-platform
    compatibility). You can use the msys2 programs directly from the normal
    Windows command window, or Powershell, or in batch files, or directly
    from other Windows programs.

    Are the make recipes run using a normal Unix shell (bash? ash?
    bourne?) with exported environment variables as expected when running
    'make' on Unix?


    As I said in another post - no, "make" can run in any way that is
    convenient. Windows command prompt, Powershell (which I seldom use),
    msys2 bash, started from a "tools" menu in an IDE - whatever.

    The gnu make functions [e.g. $(shell <whatever>)] all work as expected?


    Yes.

    To be clear, that is not a feature I have used in a particularly
    advanced way - I haven't used the "shell" function in a way that relies
    on anything specific to a particular shell type. But even on Windows,
    there's no problem with passing environment variables on to a program
    started in a subshell - such as an additional sub-make.

    Or are there certain gnu make features you have to avoid for makefiles
    to work under msys2?


    There is nothing that I have avoided.

    It is possible that there /are/ features that I would have to avoid, if
    I used them - I can't claim to have made use of /all/ the features of
    gnu make. But I use a lot more advanced features than most make users,
    without trouble.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to pozz on Wed Mar 12 17:39:59 2025
    On 12/03/2025 16:48, pozz wrote:
    Il 12/03/2025 10:33, David Brown ha scritto:

    For all of this, the big question is /why/ you are doing it.  What are
    you doing with your times?  Where are you getting them?  Are you
    actually doing this in a sensible way because they suit your
    application, or are you just using these types and structures because
    they are part of the standard C library - which is not good enough for
    your needs here?

    When the user wants to set the current date and time, I fill a struct tm
    with user values. Next I call mktime() to calculate the time_t value, which is then incremented every second.

    When I need to show the current date and time to the user, I call
    localtime() to convert the time_t into a struct tm. And I get the day of the week too.

    Consider that mktime() and localtime() take the timezone into account, which
    is important for me. In Italy we have daylight saving time with not-so-simple rules. Standard time functions work well with timezones.
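 
    A minimal sketch of that flow with the standard functions (the values
    here are placeholders, and it assumes the toolchain's mktime()/localtime()
    and timezone configuration are actually usable):
 
    #include <stdio.h>
    #include <time.h>
 
    int main(void)
    {
        /* Fill a struct tm from user input, then derive time_t. */
        struct tm set = {0};
        set.tm_year  = 2025 - 1900;   /* years since 1900 */
        set.tm_mon   = 2;             /* 0-based: March   */
        set.tm_mday  = 11;
        set.tm_hour  = 10;
        set.tm_min   = 33;
        set.tm_isdst = -1;            /* let mktime() decide DST from the timezone */
 
        time_t now = mktime(&set);    /* interpreted in the local timezone */
 
        /* For display, convert back; localtime() also fills in tm_wday. */
        const struct tm *shown = localtime(&now);
        printf("%04d-%02d-%02d %02d:%02d (weekday %d)\n",
               shown->tm_year + 1900, shown->tm_mon + 1, shown->tm_mday,
               shown->tm_hour, shown->tm_min, shown->tm_wday);
        return 0;
    }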


    Maybe you are going about it all the wrong way.  If you need to be
    able to display and set the current time and date, and to be able to
    conveniently measure time differences for alarms, repetitive tasks,
    etc., then you probably don't need any correlation between your
    monotonic seconds counter and your time/date tracker.  All you need to
    do is add one second to each, every second.  I don't know the details
    of your application (obviously), but often no conversion is needed
    either way.

    I'm talking about *wall* clock only. Internally I have a time_t variable
    that is incremented every second. But I need to show it to the user and
    I can't show the seconds from the epoch.


    The sane way to do this - the way it has been done for decades on small embedded systems - is to track both a human-legible date/time structure
    (ignore standard struct tm - make your own) /and/ to track a monotonic
    seconds counter (or milliseconds counter, or minutes counter - whatever
    you need). Increment both of them every second. Both operations are
    very simple - far easier than any conversions. Adding or subtracting an
    hour on occasion is also simple.

    If your system is connected to the internet, then occasionally pick up
    the current wall-clock time (and unix epoch, if you like) from a server,
    along with the time of the next daylight savings change. If it is not connected, then the user is going to have to make adjustments to the
    time and date occasionally anyway, as there is always drift - they can
    do the daylight saving change at the same time as they change their
    analogue clocks, their cooker clock, and everything else that is not
    connected.


    Or you can get the sources for a modern version of newlib, and pull
    the routines from there.

    It's very complex code. The time functions are written for whatever
    timezone is set at runtime (the TZ env variable), so their complexity is
    higher.


    So find a simpler standard C library implementation.  Try the avrlibc,
    for example.

    But I have no doubt at all that you can make all this yourself easily
    enough.

    I think timezone rules are not so simple to implement.


    You don't need them. That makes them simple.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to pozz on Wed Mar 12 19:18:48 2025
    On 12/03/2025 18:13, pozz wrote:
    Il 12/03/2025 17:39, David Brown ha scritto:
    On 12/03/2025 16:48, pozz wrote:
    Il 12/03/2025 10:33, David Brown ha scritto:

    For all of this, the big question is /why/ you are doing it.  What
    are you doing with your times?  Where are you getting them?  Are you actually doing this in a sensible way because they suit your
    application, or are you just using these types and structures
    because they are part of the standard C library - which is not good
    enough for your needs here?

    When the user wants to set the current date and time, I fill a struct
    tm with user values. Next I call mktime() to calculate the time_t
    value, which is then incremented every second.

    When I need to show the current date and time to the user, I call
    localtime() to convert time_t in struct tm. And I have day of the
    week too.

    Consider that mktime() and localtime() take into account timezone,
    that is important for me. In Italy we have daylight savings time with
    not so simple rules. Standard time functions work well with timezones.


    Maybe you are going about it all the wrong way.  If you need to be
    able to display and set the current time and date, and to be able to
    conveniently measure time differences for alarms, repetitive tasks,
    etc., then you probably don't need any correlation between your
    monotonic seconds counter and your time/date tracker.  All you need
    to do is add one second to each, every second.  I don't know the
    details of your application (obviously), but often no conversion is
    needed either way.

    I'm talking about *wall* clock only. Internally I have a time_t
    variable that is incremented every second. But I need to show it to
    the user and I can't show the seconds from the epoch.


    The sane way to do this - the way it has been done for decades on
    small embedded systems - is to track both a human-legible date/time
    structure (ignore standard struct tm - make your own) /and/ to track a
    monotonic seconds counter (or milliseconds counter, or minutes counter
    - whatever you need).  Increment both of them every second.  Both
    operations are very simple - far easier than any conversions.

    If I got your point, adding one second to struct mytm isn't reduced to a
    ++ on one of its members. I should write something similar to this:

    if (mytm.tm_sec < 59) {
      mytm.tm_sec += 1;
    } else {
      mytm.tm_sec = 0;
      if (mytm.tm_min < 59) {
        mytm.tm_min += 1;
      } else {
        mytm.tm_min = 0;
        if (mytm.tm_hour < 23) {
          mytm.tm_hour += 1;
        } else {
          mytm.tm_hour = 0;
          if (mytm.tm_mday < days_in_month(mytm.tm_mon, mytm.tm_year)) {
            mytm.tm_mday += 1;
          } else {
            mytm.tm_mday = 1;
            if (mytm.tm_mon < 11) {   /* tm_mon runs 0..11 */
              mytm.tm_mon += 1;
            } else {
              mytm.tm_mon = 0;
              mytm.tm_year += 1;
            }
          }
        }
      }
    }


    Yes, that's about it.

    However, taking DST into account is much more complex. The rule is the
    last Sunday of March and the last Sunday of October (if I'm not wrong).

    No, it is not complex. Figure out the rule for your country (I'm sure
    Wikipedia will tell you if you are not sure) and then apply it. It's
    just a comparison to catch the right time and date, and then you add or
    subtract an extra hour.
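 
    To make that concrete, here is a rough sketch for the EU rule (summer
    time from 01:00 UTC on the last Sunday of March until 01:00 UTC on the
    last Sunday of October). The helper names are mine, months are 1-based
    here (unlike tm_mon), and the handling of the exact switch hour is an
    assumption you would adapt to your own clock representation:
 
    #include <stdio.h>
 
    /* Day of week, 0 = Sunday (Sakamoto's method); y is the full year,
       m is 1..12, d is 1..31. */
    static int day_of_week(int y, int m, int d)
    {
        static const int t[] = {0, 3, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4};
        if (m < 3)
            y -= 1;
        return (y + y / 4 - y / 100 + y / 400 + t[m - 1] + d) % 7;
    }
 
    static int days_in_month(int m, int y)      /* m is 1..12 */
    {
        static const int d[] = {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};
        if (m == 2 && ((y % 4 == 0 && y % 100 != 0) || y % 400 == 0))
            return 29;
        return d[m - 1];
    }
 
    static int last_sunday(int y, int m)        /* day-of-month of the last Sunday */
    {
        int last = days_in_month(m, y);
        return last - day_of_week(y, m, last);
    }
 
    /* Nonzero if the given UTC date/hour falls inside EU summer time. */
    static int eu_dst_active(int y, int m, int d, int hour_utc)
    {
        if (m < 3 || m > 10)
            return 0;
        if (m > 3 && m < 10)
            return 1;
        if (m == 3) {
            int start = last_sunday(y, 3);
            return d > start || (d == start && hour_utc >= 1);
        }
        int end = last_sunday(y, 10);           /* m == 10 */
        return d < end || (d == end && hour_utc < 1);
    }
 
    int main(void)
    {
        /* In 2025 the changes fall on 30 March and 26 October. */
        printf("%d %d %d\n",
               eu_dst_active(2025, 3, 30, 1),   /* 1: summer time just started */
               eu_dst_active(2025, 7, 15, 12),  /* 1: mid-summer               */
               eu_dst_active(2025, 10, 26, 1)); /* 0: summer time just ended   */
        return 0;
    }
 
    The display code then adds one hour whenever eu_dst_active() reports
    summer time; if the rules ever change, this is the one small function
    to update.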


    All can be coded manually from scratch, but there are standard
    functions just to avoid reinventing the wheel.

    You've just written the code! You have maybe 10-15 more lines to add to
    handle daylight saving.


    Tomorrow I could install my device in another country in the world and
    it would be easy to change the timezone with a standard function.

    How many countries are you targeting? Europe all uses the same system.

    <https://en.wikipedia.org/wiki/Daylight_saving_time_by_country>


    Adding or subtracting an hour on occasion is also simple.

    Yes, but the problem is *when*. You need to know the rules and you need
    to implement them. localtime() just works.


    You are getting ridiculous. This is not rocket science.

    Besides, any fixed system is at risk from changes - and countries have
    in the past and will in the future change their systems for daylight
    saving.  (Many have at least vague plans of scrapping it.)  So if a
    simple fixed system is not good enough for you, use the other method I suggested - handle it by regular checks from a server that you will need
    anyway for keeping an accurate time, or let the user fix it for
    unconnected systems.


    If your system is connected to the internet, then occasionally pick up
    the current wall-clock time (and unix epoch, if you like) from a
    server, along with the time of the next daylight savings change.

    What do you mean by "next daylight savings change"? I'm using NTP
    (specifically SNTP from a public server) and I'm able to retrieve the
    current UTC time in seconds since the Unix epoch.

    I just take this value and overwrite my internal counter.

    In another application, I retrieve the current time from incoming SMS. In
    this case I have a local broken-down time.


    If it is not connected, then the user is going to have to make
    adjustments to the time and date occasionally anyway, as there is
    always drift

    Drift? Using a 32.768 kHz quartz to generate a 1 Hz clock that
    increments the internal counter avoids any drift.

    There is no such thing as a 32.768 kHz crystal - there are only
    approximate crystals. If you don't update often enough from an accurate
    time source, you will have drift. (How much drift you have, and what
    effect it has, is another matter.)


    - they can
    do the daylight saving change at the same time as they change their
    analogue clocks, their cooker clock, and everything else that is not
    connected.

    I think you can take into account dst even if the device is not connected.


    You certainly can. But then you have to have a fixed algorithm known in advance.

    I bet Windows is able to show the correct time (with dst changes) even
    if the PC is not connected.


    I bet it can't, in cases where the date system for the daylight savings
    time has changed or been removed. Other than that, it will just use a
    table of date systems such as on the Wikipedia page. Or perhaps MS
    simply redefined what they think other people should use.

    Older Windows needed manual changes for the date and time, even when it
    was connected - their support for NTP was late.


    Or you can get the sources for a modern version of newlib, and
    pull the routines from there.

    It's very complex code. The time functions are written for whatever
    timezone is set at runtime (the TZ env variable), so their complexity
    is higher.


    So find a simpler standard C library implementation.  Try the
    avrlibc, for example.

    But I have no doubt at all that you can make all this yourself
    easily enough.

    I think timezone rules are not so simple to implement.


    You don't need them.  That makes them simple.


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to All on Wed Mar 19 11:24:49 2025
    On 18/03/2025 23:31, Hans-Bernhard Bröker wrote:
    Am 18.03.2025 um 21:58 schrieb Grant Edwards:
    On 2025-03-18, David Brown <david.brown@hesbynett.no> wrote:

    msys2 is totally different.  The binaries are all native Windows
    binaries, and they all work within the same Windows environment as
    [...]

    Are the make recipes run using a normal Unix shell (bash? ash?
    bourne?) with exported environment variables as expected when running
    'make' on Unix?

    Pretty much, yes.  There are some gotchas in the handling of path names,
    and particularly in passing them to less-than-accommodating native
    Windows compilers etc.  And the quoting of command line arguments can
    become even dicier than it already is in native Windows.

    There be dragons, but MSYS2 will keep the vast majority of them out of
    your sight.


    Exactly - there are a lot fewer dragons, and they are smaller, than with
    other solutions. If you try to use path names with very long names,
    spaces, names like "*", or with embedded quotation marks, or the dozen characters that Windows doesn't like, then you are asking for trouble.
    But those cause trouble on Windows no matter what.

    You can certainly make your life easier by avoiding ridiculous path
    names. Put your compilers in directories under "c:/compilers/", or
    whatever, so that you can easily find them and refer to them. And put
    your projects in "c:/projects/". It is unfortunate that Windows uses
    downright insane paths by default for installed programs and for user documents, but you don't have to follow those.

    You can still use msys2, and make, even with ridiculous path names, but
    you need to be more careful to get your filename quoting right.

    The gnu make functions [e.g. $(shell <whatever>)] all work as expected?

    Yes, as long as you stay reasonable about the selection of things you
    try to run that way, and keep in mind you may have to massage command
    line arguments if <whatever> is a native Windows tool.

    For reference, MSYS2 is also the foundation of Git Bash for MS Windows,
    which you might be familiar with already...


    msys2 / mingw-64 is the basis for most modern gcc usage on Windows
    (mingw-64 is the gcc "target" and has the standard library, while msys2 provides a substantial fraction of POSIX / Linux compatibility along
    with vast numbers of common utility programs and libraries). When you
    install the GNU ARM Embedded toolchain, it is built by and for mingw-64.
    When you install NXP's IDE, or Atmel Studio, or pretty much any other
    vendor development tool, all the *nix world tools on it will be compiled
    for old mingw or modern mingw-64, and many will be taken directly from
    old msys or modern msys2. There are a few IDE's that now use cmake and
    ninja, but for most of them, when you select "build" from the menus, the
    IDE generates makefiles and then runs a mingw / msys build of gnu make.
    Those who think you have to use an IDE on Windows and that make does not
    work are already using make!

    The underlying technology of MSYS2 is a fork of the Cygwin project,
    which is an environment that aims to provide the best emulation of a
    Unix environment they can, inside MS Windows.  The key difference of the MSYS2 fork lies in a set of tweaks to resolve some of the corner cases
    more towards the Windows interpretation of things.


    I believe they made more changes than that. Cygwin used to suffer from
    three major problems - its focus on POSIX compatibility made it highly inefficient, it used unix-like behaviour that was alien to Windows (like /cygdrive/c/... paths), and it suffered from a level of DLL hell beyond anything seen on other Windows programs. This last point made things
    very difficult if you only wanted a few cygwin-based programs. The
    original msys and mingw projects were a lot simpler, but stagnated and
    failed to support 64-bit targets and even C99. msys2 and mingw-64 were
    made to get the best of both worlds, taking parts from each of these
    projects and adding their own.

    So, if your Makefiles are too Unix centric for even MSYS2 to handle,
    Cygwin can probably still manage.  And it will do it for the small price
    of many of your relevant files needing to have LF-only line endings.


    There are certainly a few things that Cygwin can handle that msys2
    cannot. For example, cygwin provides the "fork" system call that is
    very slow and expensive on Windows, but fundamental to old *nix
    software. msys2 (and msys before it) do not support "fork". This is
    not an issue for the solid majority of modern *nix software, because
    threading and "vfork / execve" replaced most use-cases for "fork" in the
    last twenty years.

    But you can happily use lots of unix-like things from msys2. If you
    want 16 bytes of random data in your program, you can write:

    head -c 16 /dev/random | hexdump -v -e '/1 "0x%02x, "' > rand.inc

    and use it as :

    const uint8_t random_data[] = {
    #include "rand.inc"
    };

    The "head..." line will work fine from the normal Windows command line,
    or an msys2 bash shell, or a makefile, or whatever.


    Here's a rough hierarchy of Unix-like-ness among Make implementations on
    a PC, assuming your actual compiler tool chain is a native Windows one:

    0) your IDE's internal build system --- not even close

    In most cases - at least for IDE's I have had from microcontroller
    vendors - the IDE's internal build system /is/ make. It is a normal
    msys2 make (albeit often not the latest version). The IDE's "internal
    build" generates makefiles automatically (a bit slowly and inefficiently
    for big projects), then runs "make".

    Of course, you don't get to work with these makefiles directly, so you
    can't use any of the more interesting features of make - any changes you
    make to the generated makefiles will be overwritten on the next build.

    1) original DOS or Windows "make" tools

    That varied from supplier to supplier, since DOS and Windows don't have
    any kind of native development tools. Borland and MS both provided
    "make" utilities with basic features but lots of limitations compared to
    the *nix world. Other tool vendors may have been different.

    2) fully native ports of GNU make (predating MSYS)
    3) GNU Make in MSYS2
    4) GNU Make in Cygwin
    5) WSL2 --- the full monty

    I'll also second an earlier suggestion: for newcomers with little or no present skills in Makefile writing, CMake or Meson can be a much
    smoother entry into this world.  Also, if you're going this route, I
    suggest to consider skipping Make and using Ninja instead.


    Ninja is the assembly language of build tools - it is meant to be fast
    to run, but people are not expected to write ninja files manually. You generate them with cmake or other tools.

    cmake is certainly a popular modern build system, but I personally have
    never got into it. It strikes me as massively over-complex and very
    fragile - it always seems to need very specific versions of cmake, which
    in turn require very specific versions of a hundred different
    dependencies. Maybe in a decade or so it will have stabilised enough
    that the same cmake setup can be used reliably on multiple different
    systems, but it has a /long/ way to go before then. Perhaps I am being
    unfair to cmake here due to lack of experience, but I have yet to see a
    point to it.

    Meanwhile, I can (and do) build my projects on four or five different
    Linux systems and a couple of Windows machines, all of wildly different generations, using the same makefile and a copy of either the Linux or
    the Windows directories containing the appropriate GNU ARM Embedded
    toolchain. All I need to modify is a host-specific pointer to the
    toolchain directory.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Edwards@21:1/5 to David Brown on Wed Mar 19 14:27:25 2025
    On 2025-03-19, David Brown <david.brown@hesbynett.no> wrote:

    There are certainly a few things that Cygwin can handle that msys2
    cannot. For example, cygwin provides the "fork" system call that is
    very slow and expensive on Windows, but fundamental to old *nix
    software.

    I believe Windows inherited that from VAX/VMS via Dave Cutler.

    Back when the Earth was young I used to do embedded development on
    VMS. I was, however, a "Unix guy" so my usual work environment on VMS
    was "DEC/Shell" which was a v7 Bourne shell and surprisingly complete
    set of v7 command line utilities that ran on VMS. [Without DEC/Shell,
    I'm pretty sure I wouldn't have survived that project.] At one point I
    wrote some fairly complex shell/awk/grep scripts to analyze and
    cross-reference requirements documents written in LaTeX. The scripts
    would have taken a few minutes to run under v7 on an LSI-11, but they
    took hours on a VAX 780 under VMS DEC/Shell (and used up ridiculous
    amounts of CPU time). I was baffled. I eventually tracked it down to
    the overhead of "fork". A fork on Unix is a trivial operation, and
    when running a shell program it happens a _lot_.

    On VMS, a fork() call in a C program had _huge_ overhead compared to
    Unix [but dog bless the guys in Massachusetts, it worked]. I'm not
    sure if it was the process creation itself, or the "duplication" of
    the parent that took so long. Maybe both. In the end it didn't matter:
    it was so much easier to do stuff under DEC/Shell than it was under
    DCL that we just ran the analysis scripts overnight.

    --
    Grant

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Grant Edwards on Wed Mar 19 17:33:57 2025
    On 19/03/2025 15:27, Grant Edwards wrote:
    On 2025-03-19, David Brown <david.brown@hesbynett.no> wrote:

    There are certainly a few things that Cygwin can handle that msys2
    cannot. For example, cygwin provides the "fork" system call that is
    very slow and expensive on Windows, but fundamental to old *nix
    software.

    I believe Windows inherited that from VAX/VMS via Dave Cutler.

    I am always a bit wary of people saying features were copied from VMS
    into Windows NT, simply because the same person was a major part of the
    development. Windows NT was the descendant of DOS-based Windows, which
    in turn was the descendant of DOS. These previous systems had nothing
    remotely like "fork", but Windows already had multi-threading. When you
    have decent thread support, the use of "fork" is much lower - equally,
    in the *nix world at the time, the use-case for threading was much lower because they had good "fork" support. Thus Windows NT did not get
    "fork" because it was not worth the effort - making existing thread
    support better was a lot more important.


    Back when the Earth was young I used to do embedded development on
    VMS. I was, however, a "Unix guy" so my usual work environment on VMS
    was "DEC/Shell" which was a v7 Bourne shell and surprisingly complete
    set of v7 command line utilities that ran on VMS. [Without DEC/Shell,
    I'm pretty sure I wouldn't have survived that project.] At one point I
    wrote some fairly complex shell/awk/grep scripts to analyze and cross-reference requirements documents written in LaTeX. The scripts
    would have taken a few minutes to run under v7 on an LSI-11, but they
    took hours on a VAX 780 under VMS DEC/Shell (and used up ridiculous
    amounts of CPU time). I was baffled. I eventually tracked it down to
    the overhead of "fork". A fork on Unix is a trivial operation, and
    when running a shell program it happens a _lot_.

    Yes, fork is relatively trivial (in terms of execution time and
    resources) on /most/ *nix systems. (On some, like ucLinux without an
    MMU, it is very expensive.) Basically, it is handled by making all
    read-write pages of the process read-only, duplicating the process
    structure, and then handling copying of what were writeable pages if and
    when the parent or child actually write to them. This has become more
    costly as applications get more advanced and have more memory pages than
    they used to, but is still relatively cheap.

    However, true "fork" is very rarely useful, and is now rarely used in
    modern *nix programming. Most uses of "fork" are either followed
    immediately by an exec call to load and run a new executable (so
    vfork/execve is much cheaper), or they are duplicates of a server daemon
    and you can usually do the job more efficiently with multi-threading or asynchronous handling. It is typically only for servers that want to
    spawn duplicates that have isolation for security reasons (such as an
    ssh server) where it is worth using "fork".

    So these days, bash does not use "fork" for starting all the
    subprocesses - it uses vfork() / execve(), making it more efficient and
    also conveniently more amenable to running on Windows.
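 
    For readers less familiar with the pattern being discussed, this is the
    classic "duplicate, then immediately replace" spawn in POSIX C (a
    self-contained sketch; "/bin/echo" is just a placeholder program, and
    the vfork()/posix_spawn() variants mentioned above differ mainly in how
    the short-lived child is created):
 
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>
 
    int main(void)
    {
        pid_t pid = fork();                 /* duplicate the current process */
        if (pid < 0) {
            perror("fork");
            return EXIT_FAILURE;
        }
        if (pid == 0) {
            /* Child: immediately replace the duplicate with a new program,
               so the copied address space is barely used at all. */
            char *const argv[] = { "echo", "hello from the child", NULL };
            execv("/bin/echo", argv);
            _exit(127);                     /* only reached if execv() fails */
        }
        /* Parent: wait for the child and pass on its exit status. */
        int status;
        waitpid(pid, &status, 0);
        return WIFEXITED(status) ? WEXITSTATUS(status) : EXIT_FAILURE;
    }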



    On VMS, a fork() call in a C program had _huge_ overhead compared to
    Unix [but dog bless the guys in Massachusetts, it worked]. I'm not
    sure if it was the process creation itself, or the "duplication" of
    the parent that took so long. Maybe both. In the end it didn't matter:
    it was so much easier to do stuff under DEC/Shell than it was under
    DCL that we just ran the analysis scripts overnight.


    On Windows with Cygwin, "fork" needs to copy all the writeable pages
    (and perhaps also non-writeable pages), as well as explicitly duplicate
    things like file handles. There is also a measurable overhead for
    processes even if they don't fork - for many types of resource
    allocation (such as file and network handles), the Cygwin layer has to
    track the resources just in case you fork later on. msys and msys2
    don't support fork(), so their POSIX emulation layer is much thinner -
    in many cases it just translates POSIX-style calls into the Windows
    equivalent.


    (For a more entertaining use of the term "fork", I recommend the Netflix
    series "The Good Place".)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Edwards@21:1/5 to David Brown on Wed Mar 19 19:08:53 2025
    On 2025-03-19, David Brown <david.brown@hesbynett.no> wrote:
    On 19/03/2025 15:27, Grant Edwards wrote:
    On 2025-03-19, David Brown <david.brown@hesbynett.no> wrote:

    There are certainly a few things that Cygwin can handle that msys2
    cannot. For example, cygwin provides the "fork" system call that is
    very slow and expensive on Windows, but fundamental to old *nix
    software.

    I believe Windows inherited that from VAX/VMS via Dave Cutler.

    I am always a bit wary of people saying features were copied from VMS
    into Windows NT, simply because the same person was a major part of the development. Windows NT was the descendant of DOS-based Windows,

    The accounts I've read about NT say otherwise. They all claim that NT
    was a brand-new kernel written (supposedly from scratch) by Dave
    Cutler's team. They implemented some backwards compatible Windows
    APIs, but the OS kernel itself was based far more on VMS than Windows.

    Quoting from https://en.wikipedia.org/wiki/Windows_NT:

    Although NT was not an exact clone of Cutler's previous operating
    systems, DEC engineers almost immediately noticed the internal
    similarities. Parts of VAX/VMS Internals and Data Structures,
    published by Digital Press, accurately describe Windows NT
    internals using VMS terms. Furthermore, parts of the NT codebase's
    directory structure and filenames matched that of the MICA
    codebase.[10] Instead of a lawsuit, Microsoft agreed to pay DEC
    $65–100 million, help market VMS, train Digital personnel on
    Windows NT, and continue Windows NT support for the DEC Alpha.

    That last sentence seems pretty damning to me.

    in turn was the descendent of DOS. These previous systems had nothing remotely like "fork", but Windows already had multi-threading. When you
    have decent thread support, the use of "fork" is much lower - equally,
    in the *nix world at the time, the use-case for threading was much lower because they had good "fork" support. Thus Windows NT did not get
    "fork" because it was not worth the effort - making existing thread
    support better was a lot more important.

    But it did end up making support for the legacy fork() call used by
    many legacy Unix programs very expensive. I'm not claiming that fork()
    was a good idea in the first place, that it should have been
    implemented better in VMS or Windows, or that it should still be used.

    I'm just claiming that

    1. Historically, fork() was way, way, WAY slower on Windows and VMS
    than on Unix. [Maybe that has improved on Windows.]

    2. 40 years ago, fork() was still _the_way_ to start a process in
    most all common Unix applications.

    However, true "fork" is very rarely useful, and is now rarely used in
    modern *nix programming.

    I didn't mean to imply that it was. However, back in the 1980s when I
    was running DEC/Shell with v7 Unix programs, fork() was still how the
    Bourne shell in DEC/Shell started execution of every command.

    Those utilities were all from v7 Unix. That's before vfork()
    existed. vfork() wasn't introduced until 3BSD and then SysVr4.

    https://en.wikipedia.org/wiki/Fork_(system_call)

    So these days, bash does not use "fork" for starting all the
    subprocesses - it uses vfork() / execve(), making it more efficient
    and also conveniently more amenable to running on Windows.

    That's good news. You'd think it wouldn't be so slow. :)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Grant Edwards on Wed Mar 19 21:14:09 2025
    On 19/03/2025 20:08, Grant Edwards wrote:
    On 2025-03-19, David Brown <david.brown@hesbynett.no> wrote:
    On 19/03/2025 15:27, Grant Edwards wrote:
    On 2025-03-19, David Brown <david.brown@hesbynett.no> wrote:

    There are certainly a few things that Cygwin can handle that msys2
    cannot. For example, cygwin provides the "fork" system call that is
    very slow and expensive on Windows, but fundamental to old *nix
    software.

    I believe Windows inherited that from VAX/VMS via Dave Cutler.

    I am always a bit wary of people saying features were copied from VMS
    into Windows NT, simply because the same person was a major part of the
    development. Windows NT was the descendent of DOS-based Windows,

    The accounts I've read about NT say otherwise. They all claim that NT
    was a brand-new kernel written (supposedly from scratch) by Dave
    Cutler's team. They implemented some backwards compatible Windows
    APIs, but the OS kernel itself was based far more on VMS than Windows.


    The kernel itself was new - and perhaps was more "inspired" by VMS than
    some lawyers liked. But the way it was used - the API for programs, and
    the way programs were built up, and what users saw - was all based on
    existing Windows practice. In particular, it was important that the API
    for NT supported the multithreading from Win32s - thus it was not at all important that it could support "fork".

    Quoting from https://en.wikipedia.org/wiki/Windows_NT:

    Although NT was not an exact clone of Cutler's previous operating
    systems, DEC engineers almost immediately noticed the internal
    similarities. Parts of VAX/VMS Internals and Data Structures,
    published by Digital Press, accurately describe Windows NT
    internals using VMS terms. Furthermore, parts of the NT codebase's
    directory structure and filenames matched that of the MICA
    codebase.[10] Instead of a lawsuit, Microsoft agreed to pay DEC
    $65–100 million, help market VMS, train Digital personnel on
    Windows NT, and continue Windows NT support for the DEC Alpha.

    That last sentence seems pretty damning to me.


    I'm sure there were plenty of similarities in the way things worked
    internally. And perhaps Cutler had some reason to dislike "fork", or
    perhaps simply felt that VMS hadn't needed it, and so NT would not need
    it. But NT /had/ to have multi-threading, and when you have
    multi-threading, "fork" is not nearly as useful or important.

    in turn was the descendent of DOS. These previous systems had nothing
    remotely like "fork", but Windows already had multi-threading. When you
    have decent thread support, the use of "fork" is much lower - equally,
    in the *nix world at the time, the use-case for threading was much lower
    because they had good "fork" support. Thus Windows NT did not get
    "fork" because it was not worth the effort - making existing thread
    support better was a lot more important.

    But it did end up making support for the legacy fork() call used by
    many legacy Unix programs very expensive. I'm not claiming that fork()
    was a good idea in the first place, that it should have been
    implemented better in VMS or Windows, or that it should still be used.

    I'm just claiming that

    1. Historically, fork() was way, way, WAY slower on Windows and VMS
    than on Unix. [Maybe that has improved on Windows.]

    Agreed.

    Windows NT originally tried to be POSIX compliant (or at least, to have
    a POSIX "personality" - along with a Win32 "personality", and an OS/2 "personality"). That would mean that some level of "fork" would be
    needed. But the POSIX support aims were reduced over time. I don't
    know how much of Cygwin's "fork" support is implemented in Cygwin or how
    much is in the NT kernel.

    However, it's worth remembering that MS was not nearly as nice a company
    at that time as it is now, and not nearly as much of a team player. The
    only thing better for MS than having Windows NT be unable to run ports
    of *nix software was to be able to run such software very badly. For
    example, if Oracle could run on Windows but was much slower than MS SQL
    server due to a poor "fork", that would be a bigger marketing win than
    simply not being able to run Oracle. But perhaps that is being a bit
    too paranoid and sceptical.


    2. 40 years ago, fork() was still _the_way_ to start a process in
    most all common Unix applications.


    Agreed.

    I remember the early days of getting gcc compiled for Windows (for the
    68k target, in my case) - most of it was fine, but one program
    ("collect2" used by C++ to figure out template usage, if I remember
    correctly) used "fork" and that made things massively more complicated.

    However, true "fork" is very rarely useful, and is now rarely used in
    modern *nix programming.

    I didn't mean to imply that it was. However, back in the 1980s when I
    was running DEC/Shell with v7 Unix programs, fork() was still how the
    Bourne shell in DEC/Shell started execution of every command.

    Those utilities were all from v7 Unix. That's before vfork()
    existed. vfork() wasn't introduced until 3BSD and then SysVr4.


    Yes, vfork() was a later addition.

    I also remember endless battles about different threading systems for
    Linux before it all settled down.

    https://en.wikipedia.org/wiki/Fork_(system_call)

    So these days, bash does not use "fork" for starting all the
    subprocesses - it uses vfork() / execve(), making it more efficient
    and also conveniently more amenable to running on Windows.

    That's good news. You'd think it wouldn't be so slow. :)


    Even without "fork" being involved, Windows is /much/ slower at starting
    new processes than Linux. It is also slower for file access, and has
    poorer multi-cpu support. (These have, I believe, improved somewhat in
    later Windows versions.) A decade or so ago I happened to be
    approximately in sync on the hardware for my Linux desktop and my
    Windows desktop (I use both systems at work), and tested a make +
    cross-gcc build of a project with a couple of hundred C and C++ files.
    The Linux build was close to twice the speed.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Waldek Hebisch@21:1/5 to David Brown on Wed Mar 19 22:09:56 2025
    David Brown <david.brown@hesbynett.no> wrote:
    On 19/03/2025 15:27, Grant Edwards wrote:
    On 2025-03-19, David Brown <david.brown@hesbynett.no> wrote:

    There are certainly a few things that Cygwin can handle that msys2
    cannot. For example, cygwin provides the "fork" system call that is
    very slow and expensive on Windows, but fundamental to old *nix
    software.

    I believe Windows inherited that from VAX/VMS via Dave Cutler.

    I am always a bit wary of people saying features were copied from VMS
    into Windows NT, simply because the same person was a major part of the development. Windows NT was the descendent of DOS-based Windows, which
    in turn was the descendent of DOS. These previous systems had nothing remotely like "fork", but Windows already had multi-threading. When you
    have decent thread support, the use of "fork" is much lower - equally,
    in the *nix world at the time, the use-case for threading was much lower because they had good "fork" support. Thus Windows NT did not get
    "fork" because it was not worth the effort - making existing thread
    support better was a lot more important.

    Actually, Microsoft folks say that the Windows NT kernel supports fork.
    It was used to implement the POSIX subsystem. IIUC they claim that the
    trouble is in the upper layers: much of the Windows API is _not_ kernel,
    and implementing a well-behaved fork means that all layers below the
    user program, starting from the kernel, would have to implement
    fork.

    So this complicated layered structure seems to be the main technical
    reason for not having fork at the API level. And this structure
    is like VMS and Mica. Part of this layering could be motivated
    by the early Windows split between DOS and Windows proper, but
    as Grant explained, the VMS influence was stronger.

    IIUC early NT development was part of the joint IBM-Microsoft
    effort to create OS/2, so clearly the DOS and Windows influence
    was limited. Only later did Microsoft decide to merge
    classic Windows and NT and effectively abandon system
    interfaces other than the Windows API.

    --
    Waldek Hebisch

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Waldek Hebisch on Thu Mar 20 09:26:23 2025
    On 19/03/2025 23:09, Waldek Hebisch wrote:
    David Brown <david.brown@hesbynett.no> wrote:
    On 19/03/2025 15:27, Grant Edwards wrote:
    On 2025-03-19, David Brown <david.brown@hesbynett.no> wrote:

    There are certainly a few things that Cygwin can handle that msys2
    cannot. For example, cygwin provides the "fork" system call that is
    very slow and expensive on Windows, but fundamental to old *nix
    software.

    I believe Windows inherited that from VAX/VMS via Dave Cutler.

    I am always a bit wary of people saying features were copied from VMS
    into Windows NT, simply because the same person was a major part of the
    development. Windows NT was the descendent of DOS-based Windows, which
    in turn was the descendent of DOS. These previous systems had nothing
    remotely like "fork", but Windows already had multi-threading. When you
    have decent thread support, the use of "fork" is much lower - equally,
    in the *nix world at the time, the use-case for threading was much lower
    because they had good "fork" support. Thus Windows NT did not get
    "fork" because it was not worth the effort - making existing thread
    support better was a lot more important.

    Actually, Microsoft folks say that Windows NT kernel supports fork.
    It was used to implement Posix subsystem. IIUC they claim that
    trouble is in upper layers: much of Windows API is _not_ kernel
    and implementing well behaving fork means that all layers below
    user program, starting from kernel would have to implement
    fork.

    So this complicated layered structure seem to be main technical
    reason of not having fork at API level. And this structure
    is like VMS and Mica. Part of this layering could be motivated
    by early Windows split between DOS and Windows proper, but
    as Grant explained, VMS influence was stronger.

    IIUC early NT development was part of the joint IBM-Microsoft
    effort to create OS/2, so clearly DOS and Windows influence
    were limited. Only later Microsoft decided to merge
    classic Windows and NT and effectively abandon system
    interfaces other than the Windows API.


    DOS and Windows were a relevant part of OS/2 development too. Both IBM
    and MS were fully aware that if OS/2 and/or NT were to succeed,
    compatibility with existing software was essential. But more than that, compatibility with existing software /developers/ was essential.

    But you are absolutely right that the NT kernel was originally intended
    to support different API's or "personalities" (I think that was the term
    used) - at least WinAPI, OS/2 and POSIX. It was also the intention that
    the OS/2 kernel would be similarly flexible, so that users could pick
    their base system and run all sorts of different software on top. IBM
    and MS worked together for interoperability. Having at least minimal
    support for "fork" would have been necessary (along with things like case-sensitive filename support).

    However, it did not take long for MS to realise that they could stab IBM
    in the back and take everything for themselves - through a mixture of technical, economic, legal and illegal shenanigans, they killed off OS/2
    as an OS and as an API, and dropped everything but the WinAPI interface.
    (As you point out, much of that was at a higher level than the kernel itself.)


    There's a lot of interesting detail here from you and Grant, which I appreciate. However, we've strayed a long way from the OP's original
    question and topic, and it's not really about embedded systems any more.
    I hope Pozz got what he needed before we drifted!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to pozz on Thu Mar 13 16:51:23 2025
    On 13/03/2025 09:57, pozz wrote:
    Il 12/03/2025 19:18, David Brown ha scritto:
    On 12/03/2025 18:13, pozz wrote:
    Il 12/03/2025 17:39, David Brown ha scritto:
    On 12/03/2025 16:48, pozz wrote:
    Il 12/03/2025 10:33, David Brown ha scritto:

    For all of this, the big question is /why/ you are doing it.  What are you doing with your times?  Where are you getting them?  Are you actually doing this in a sensible way because they suit your
    application, or are you just using these types and structures
    because they are part of the standard C library - which is not
    good enough for your needs here?

    When the user wants to set the current date and time, I fill a
    struct tm with user values. Next I call mktime() to calculate the
    time_t value, which is then incremented every second.

    When I need to show the current date and time to the user, I call
    localtime() to convert time_t in struct tm. And I have day of the
    week too.

    Consider that mktime() and localtime() take into account timezone,
    that is important for me. In Italy we have daylight savings time
    with not so simple rules. Standard time functions work well with
    timezones.


    Maybe you are going about it all the wrong way.  If you need to be able to display and set the current time and date, and to be able
    to conveniently measure time differences for alarms, repetitive
    tasks, etc., then you probably don't need any correlation between
    your monotonic seconds counter and your time/date tracker.  All
    you need to do is add one second to each, every second.  I don't
    know the details of your application (obviously), but often no
    conversion is needed either way.

    I'm talking about *wall* clock only. Internally I have a time_t
    variable that is incremented every second. But I need to show it to
    the user and I can't show the seconds from the epoch.


    The sane way to do this - the way it has been done for decades on
    small embedded systems - is to track both a human-legible date/time
    structure (ignore standard struct tm - make your own) /and/ to track
    a monotonic seconds counter (or milliseconds counter, or minutes
    counter - whatever you need).  Increment both of them every second.
    Both operations are very simple - far easier than any conversions.

    If I got your point, adding one second to struct mytm isn't reduced
    to a ++ on one of its member. I should write something similar to this:

    if (mytm.tm_sec < 59) {
       mytm.tm_sec += 1;
    } else {
       mytm.tm_sec = 0;
       if (mytm.tm_min < 59) {
         mytm.tm_min += 1;
       } else {
         mytm.tm_min = 0;
         if (mytm.tm_hour < 23) {
           mytm.tm_hour += 1;
         } else {
           mytm.tm_hour = 0;
           if (mytm.tm_mday < days_in_month(mytm.tm_mon, mytm.tm_year)) {
             mytm.tm_mday += 1;
           } else {
             mytm.tm_mday = 1;
             if (mytm.tm_mon < 11) {   /* tm_mon runs 0..11 */
               mytm.tm_mon += 1;
             } else {
               mytm.tm_mon = 0;
               mytm.tm_year += 1;
             }
           }
         }
       }
    }


    Yes, that's about it.

    However taking into account dst is much more complex. The rule is the
    last Sunday of March and the last Sunday of October (if I'm not wrong).

    No, it is not complex.  Figure out the rule for your country (I'm sure
    Wikipedia will tell you if you are not sure) and then apply it.  It's
    just a comparison to catch the right time and date, and then you add
    or subtract an extra hour.


    All can be coded manually from scratch, but there are standard
    functions just to avoid reinventing the wheel.

    You've just written the code!  You have maybe 10-15 more lines to add
    to handle daylight saving.


    Tomorrow I could install my device in another country in the world
    and it could be easy to change the timezone with standard function.

    How many countries are you targeting?  Europe all uses the same system.

    <https://en.wikipedia.org/wiki/Daylight_saving_time_by_country>


    Adding or subtracting an hour on occasion is also simple.

    Yes, but the problem is *when*. You need to know the rules and you
    need to implement them. localtime() just works.


    You are getting ridiculous.  This is not rocket science.

    Ok, but I don't understand why you prefer to write your own code (yes,
    you're an expert programmer, but you can introduce some bugs, and you
    have to write some tests), while there are standard functions that do
    the job for you.


    I prefer to use a newer version of the toolchain that does not have such problems :-)

    I am quite happy to re-use known good standard functions. There is no
    need to reinvent the wheel if you already have one conveniently
    available. But you don't have standard functions conveniently available
    here - the ones from your toolchain are not up to the task, and you are
    not happy with the other sources you have found for the standard functions.

    So once you have eliminated the possibility of using pre-written
    standard functions, you then need to re-evaluate what you actually need.
    And that is much less than the standard functions provide. So write
    your own versions to do what you need to do - no more, no less.


    I could rewrite memcpy, strcat, strcmp - they aren't rocket science - but
    why? IMHO it makes no sense.

    I have re-written such functionality a number of times - because
    sometimes I can do a better job for the task in hand than the standard functions. For example, strncpy() is downright silly - it is
    inefficient (it copies more than it needs to), and potentially unsafe as
    it doesn't necessarily copy the terminator. memcpy() can be inefficient
    in cases where the programmer knows more about the alignment or size
    than the compiler can prove. And so on.
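 
    As an illustration of the kind of replacement meant here, this is a
    bounded copy that always terminates the destination and stops as soon
    as the source ends, instead of padding the rest of the buffer the way
    strncpy() does (the name and the BSD-strlcpy-style return value are my
    own choices, not anything from a particular library):
 
    #include <stddef.h>
 
    size_t my_strlcpy(char *dst, const char *src, size_t size)
    {
        size_t i = 0;
 
        if (size > 0) {
            for (; i < size - 1 && src[i] != '\0'; i++)
                dst[i] = src[i];
            dst[i] = '\0';                  /* always NUL-terminated */
        }
        while (src[i] != '\0')              /* finish measuring src */
            i++;
        return i;                           /* >= size means the copy was truncated */
    }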


    In my case the standard functions aren't good (because of the Y2038 issue) and rewriting them can be a valid solution. But if I had a 64-bit time_t, I would live with the standard functions very well.


    And if pigs could fly, you could probably teach them to program too.
    You can't use the standard functions, so you have to look elsewhere.
    Writing them yourself is a simple and convenient solution.


    Besides, any fixed system is at risk from changes - and countries have
    in the past and will in the future change their systems for daylight
    saving.  (Many have at least vague plans of scrapping it.)  So if a
    simple fixed system is not good enough for you, use the other method I
    suggested - handle it by regular checks from a server that you will
    need anyway for keeping an accurate time, or let the user fix it for
    unconnected systems.

    My users like the automatic DST changes on my connected and unconnected
    devices. The risk of future changes in the DST rules doesn't seem to
    me a good reason to remove that feature.


    Okay, so you have to put it in.

    As I see it, the options are :

    1. Use the standard functions from your toolchain. You've ruled out
    using those with your current toolchain, and ruled out changing the
    toolchain, so this won't do.

    2. Use an implementation from other library sources online. You've
    ruled those out as too complicated.

    3. Write your own functions. Yes, that involves a certain amount of
    work, testing and risk. That's your job.


    Am I missing anything?



    If your system is connected to the internet, then occasionally pick
    up the current wall-clock time (and unix epoch, if you like) from a
    server, along with the time of the next daylight savings change.

    What do you mean with "next daylight savings change"? I'm using NTP
    (specifically SNTP from a public server) and I'm able to retrieve the
    current UTC time in seconds from Unix epoch.

    I just take this value and overwrite my internal counter.

    In another application, I retrieve the current time from incoming SMS.
    In this case I have a local broken down time.


    If it is not connected, then the user is going to have to make
    adjustments to the time and date occasionally anyway, as there is
    always drift

    Drift? Using a 32.768 kHz quartz to generate a 1 Hz clock that
    increments the internal counter avoids any drift.

    There is no such thing as a 32.768 kHz crystal - there are only
    approximate crystals.  If you don't update often enough from an
    accurate time source, you will have drift.  (How much drift you have,
    and what effect it has, is another matter.)

    Of course, the quartz has an accuracy that changes with aging,
    temperature and so on. However, the real accuracy doesn't let the time
    drift so much that the user needs to reset it.


    A standard cheap nominal 32.768 kHz crystal is +/- 20 ppm. That's 1.7 seconds
    per day - assuming everything in the hardware is good. Often that's
    good enough, but sometimes it is not. Only you can answer that one.
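 
    Spelled out, the arithmetic is simply tolerance times seconds per day
    (20 ppm is the figure above; substitute your crystal's datasheet value):
 
    #include <stdio.h>
 
    int main(void)
    {
        const double ppm = 20.0;                         /* crystal tolerance */
        const double per_day = ppm * 1e-6 * 86400.0;     /* about 1.73 s/day  */
        printf("worst-case drift: %.2f s/day, %.0f s per 30 days\n",
               per_day, per_day * 30.0);                 /* about 52 s/month  */
        return 0;
    }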


    - they can
    do the daylight saving change at the same time as they change their
    analogue clocks, their cooker clock, and everything else that is not
    connected.

    I think you can take into account dst even if the device is not
    connected.


    You certainly can.  But then you have to have a fixed algorithm known
    in advance.

    I bet Windows is able to show the correct time (with dst changes)
    even if the PC is not connected.

    I bet it can't, in cases where the date system for the daylight
    savings time has changed or been removed.  Other than that, it will
    just use a table of date systems such as on the Wikipedia page.  Or
    perhaps MS simply redefined what they think other people should use.

    Older Windows needed manual changes for the date and time, even when
    it was connected - their support for NTP was late.

    Maybe Windows is not able to, but I read that Linux is. It saves the time
    as UTC in the hardware RTC and shows it to the user as local time, of
    course applying DST and timezone rules from a database of rules.

    Yes, Linux has had NTP, timezones and daylight savings since its early
    days (as have other *nix OS's).


    So, as long as the timezone/dst info for my timezone is correct, I think Linux could manage dst changes automatically without user activity.

    My approach is identical to what Linux does.


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Waldek Hebisch@21:1/5 to pozz on Fri Mar 14 01:48:07 2025
    pozz <pozzugno@gmail.com> wrote:

    How do you debug your projects without a full-featured and ready-to-use
    IDE from the silicon vendor?

    With STM devices I use the Linux 'stlink' program and gdb. That is
    command-line debugging. I can set breakpoints, single-step, and
    view and modify device registers; those are the main things
    that I need. I also use a debugging UART. For debugging I
    normally load code into RAM, which means that I can have an
    unlimited number of breakpoints without writing to flash
    (I am not sure if that is really important, but at least it
    makes me feel better).

    I have also used stlink with some non-STM devices (IIRC LPC),
    but that required modifications to the 'stlink' code, and IIUC
    use of non-STM devices is blocked in new firmware for
    the debugging dongle.

    'gdb' can be used with many other debugging dongles.

    Visual tools may be nicer and automatically do some extra
    things. But I got used to gdb.

    --
    Waldek Hebisch

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael Schwingen@21:1/5 to David Brown on Fri Mar 21 09:20:04 2025
    On 2025-03-18, David Brown <david.brown@hesbynett.no> wrote:

    These days I happily use it on Windows with recursive make (done
    /carefully/, as all recursive makes should be), automatic dependency generation, multiple makefiles, automatic file discovery, parallel
    builds, host-specific code (for things like the toolchain installation directory), and all sorts of other bits and pieces.

    I converted to the "recursive make considered harmful" group long ago.
    Having one makefile for the whole build makes it possible to have
    dependencies crossing directories, and gives better performance in parallel builds - with recursive make, the overhead for entering/exiting directories
    and waiting for sub-makes to finish piles up. If a compile takes 30 minutes
    on a fast 16-cpu machine, that does make a difference.

    using ninja instead of make works even better in such a scenario.

    cu
    Michael
    --
    Some people have no respect of age unless it is bottled.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael Schwingen@21:1/5 to HBBroeker@gmail.com on Fri Mar 21 09:23:17 2025
    On 2025-03-18, Hans-Bernhard Bröker <HBBroeker@gmail.com> wrote:
    I'll also second an earlier suggestion: for newcomers with little or no present skills in Makefile writing, CMake or Meson can be a much
    smoother entry into this world. Also, if you're going this route, I
    suggest considering skipping Make and using Ninja instead.

    Ninja works great, but I don't think you should write the ninja files
    yourself. CMake can be used to generate them - that's what I
    currently use for my ARM projects, and it works well.

    cu
    Michael
    --
    Some people have no respect of age unless it is bottled.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael Schwingen@21:1/5 to David Brown on Fri Mar 21 09:48:27 2025
    On 2025-03-19, David Brown <david.brown@hesbynett.no> wrote:
    later Windows versions.) A decade or so ago I happened to be
    approximately in sync on the hardware for my Linux desktop and my
    Windows desktop (I use both systems at work), and tested a make +
    cross-gcc build of a project with a couple of hundred C and C++ files.
    The Linux build was close to twice the speed.

    I had the same experience about 20 years ago - the company was using a
    Cygwin-based cross-gcc + make (I think some old Borland make) on Windows.
    I converted the makefiles to use GNU make on Linux, and compile time was
    half that of the Windows setup. That speed advantage was enough to (very)
    slowly convert colleagues to Linux.

    cu
    Michael
    --
    Some people have no respect of age unless it is bottled.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to pozz on Fri Mar 14 14:20:56 2025
    On 14/03/2025 13:27, pozz wrote:
    On 13/03/2025 16:51, David Brown wrote:
    On 13/03/2025 09:57, pozz wrote:
    On 12/03/2025 19:18, David Brown wrote:
    On 12/03/2025 18:13, pozz wrote:

    Ok, but I don't understand why you prefer to write your own code
    (yes, you're an expert programmer, but you can introduce some bugs,
    and you have to write some tests), while there are standard functions
    that do the job for you.


    I prefer to use a newer version of the toolchain that does not have
    such problems :-)

    Sure, but the project is old. I will check if using a newer toolchain is
    a feasible solution for this project.


    I fully appreciate - and agree with - not wanting to change toolchains
    on an existing established project. It might be the best solution here,
    but it is certainly not one to be picked lightly.


    I am quite happy to re-use known good standard functions.  There is no
    need to reinvent the wheel if you already have one conveniently
    available.  But you don't have standard functions conveniently
    available here - the ones from your toolchain are not up to the task,
    and you are not happy with the other sources you have found for the
    standard functions.

    So once you have eliminated the possibility of using pre-written
    standard functions, you then need to re-evaluate what you actually
    need.   And that is much less than the standard functions provide.  So
    write your own versions to do what you need to do - no more, no less.

    I agree with you. I thought you were suggesting using custom-made
    functions in any case, because my approach of using a time_t counter
    (seconds from the epoch) and localtime()/mktime() isn't good.


    No. I am merely saying that if you can't use the standard functions and
    have to get other ones from somewhere (or write them yourself), making
    them match standard function interfaces is of no benefit. There are
    many alternative formats that could be better for your use.


    2. Use an implementation from other library sources online.  You've
    ruled those out as too complicated.

    In the past I sometimes looked into the newlib code and it seemed too
    complicated for me. I will search for other, simpler implementations of
    localtime()/mktime().


    There are other C standard libraries around - maybe others are better
    than newlib for this purpose. (I don't know if newlib nano is mixed in
    with newlib here.) Newlib sources are, at least in parts, a monstrosity
    of conditional compilation to support vast numbers of targets,
    compilers, OS's, and options.


    3. Write your own functions.  Yes, that involves a certain amount of
    work, testing and risk.  That's your job.

    Am I missing anything?

    I don't think.


    I really hope you missed a word in that sentence :-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Michael Schwingen on Fri Mar 21 13:54:40 2025
    On 21/03/2025 10:20, Michael Schwingen wrote:
    On 2025-03-18, David Brown <david.brown@hesbynett.no> wrote:

    These days I happily use it on Windows with recursive make (done
    /carefully/, as all recursive makes should be), automatic dependency
    generation, multiple makefiles, automatic file discovery, parallel
    builds, host-specific code (for things like the toolchain installation
    directory), and all sorts of other bits and pieces.

    I converted to the "recursive make considered harmful" group long ago.
    Having one makefile for the whole build makes it possible to have dependencies crossing directories, and gives better performance in parallel builds - with recursive make, the overhead for entering/exiting directories and waiting for sub-makes to finish piles up. If a compile takes 30 minutes on a fast 16-cpu machine, that does make a difference.

    using ninja instead of make works even better in such a scenario.

    cu
    Michael

    I fully agree with the points in "recursive make considered harmful",
    which I also read long ago. But that does not mean that recursive make
    can't be used well - it just means you have to use it appropriately, and carefully.

    In particular, using one "outer" make to run make on makefiles in
    different directories is asking for trouble - you can easily get
    dependencies wrong or miss cross-directory dependencies. And it is
    often difficult to figure out what is happening if something fails in
    one of the builds. And with older makes (from the days when that paper
    was written), there was no inter-make job server meaning you either had
    to give each submake too few parallel jobs (and thus wait for some to
    finish), or too many (and slow the system down).

    The way I use recursive makes is /really/ recursive - the main make
    (typically split into a few include makefiles for convenience, but only
    one real make) handles everything, and it does some of that by calling
    /itself/ recursively. It is quite common for me to build multiple
    program images from one set of source - perhaps for different variants
    of a board, with different features enabled, and so on. So I might use
    "make prog=board_a" to build the image for board a, and "make
    prog=board_b" for board b. Each build will be done in its own directory
    - builds/build_a or builds/build_b. Often I will want to build for both
    boards - then I will do "make prog="board_a board_b"" (with a default
    setting for the most common images).

    These different boards can require different settings for compiler
    flags, directories, and various other options. Rather than having to
    track multiple sets of variables in the makefiles for when multiple
    board images are being handled within the one make, I have a far simpler solution - if there is more than one image being build, I simply spin
    off recursive makes for each build - thus after "make prog="board_a
    board_b"", I start "make prog=board_a" and "make prog=board_b". Each
    make instance lives on its own, so I only need one set of flags at a
    time, compiling into one build directory - but they share a job server,
    and all the dependencies are correct.

    It is not the only way to handle such things, but it is definitely a
    convenient and efficient method.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Waldek Hebisch@21:1/5 to David Brown on Fri Mar 21 14:04:59 2025
    David Brown <david.brown@hesbynett.no> wrote:
    On 18/03/2025 19:28, Michael Schwingen wrote:
    On 2025-03-18, David Brown <david.brown@hesbynett.no> wrote:

    A good makefile picks up the new files automatically and handles all the
    dependencies, so often all you need is a new "make -j".

    I don't do that anymore - wildcards in makefiles can lead to all kinds of
    strange behaviour due to files that are left/placed somewhere but are not
    really needed.

    I'm sure you can guess the correct way to handle that - don't leave
    files in the wrong places :-)

    I prefer to list the files I want compiled - it is not that
    much work.


    In a project of over 500 files in 70 directories, it's a lot more work
    than using wildcards and not keeping old unneeded files mixed in with
    source files.

    In a project with about 550 normal source files, 80 headers, 200 test
    files, and about 1200 generated files spread over 12 directories, I use
    explicit file lists. Lists of files increase the volume of the Makefiles,
    but in my experience the extra work to maintain a file list is very small.
    Compared to the effort needed to create a file, adding an entry to the
    file list is negligible.

    Explicit lists are useful if groups of files should get somewhat
    different treatment (I have less need for this now, but it was
    important in the past).

    IMO being explicit helps with readability and makes the code more
    amenable to audit.

    --
    Waldek Hebisch

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Edwards@21:1/5 to Michael Schwingen on Fri Mar 21 13:27:29 2025
    On 2025-03-21, Michael Schwingen <news-1513678000@discworld.dascon.de> wrote:

    I have the same experience, about 20 years ago - the company was
    using a cygwin-based cross-gcc + make (I think some old borland
    make) on windows. I converted the makefiles to use GNU make on
    linux, and compile time was half that of the windows setup. That
    speed advantage was enough to (very) slowly convert colleagues to
    use Linux.

    I support a product (ARM w/ RTOS) for which we put together an SDK
    that allowed customers to write custom firmware. The SDK was
    available for Windows+Cygwin and Linux. We had a half-dozen customers
    actually use the SDK to write custom firmware. They all chose to go
    the Windows+Cygwin route. A few of them ended up maintaining their
    firmware for a fairly long period of time. Eventually, keeping Cygwin
    working on the customers' machines, and the SDK working on Cygwin
    became too much hassle. We pointed them to instructions on installing
    Ubuntu on a VM inside Windows.

    They were all amazed at

    1. How much less work installing Linux was than installing and
    troubleshooting Cygwin.

    2. How much faster a build ran under a Linux VM on Windows than it
    did under Cygwin on Windows.

    3. How convenient it was to be able to just archive the VM image so
    that the next time they needed to modify the firmware all they had
    to do was plop the VM image on whatever host machine they had
    handy.

    Previously, they always seemed to lose track of their Windows/Cygwin development machine and would have to reinstall Cygwin and the SDK
    every time they wanted to change something (changes were usually
    several years apart).

    So we stopped supporting the Cygwin version of the SDK. There are a
    couple of customers that are still maintaining their custom firmware
    after 20 years. I believe they've figured out how to share a directory
    between Windows and the Linux VM, so they do all of their editing
    under Windows, and then just do a "make" in the Linux VM, then use
    tools under Windows to install/deploy the firmware. I told them they
    could even run the "make" via ssh from whatever Windows IDE/editor
    thingy they were using so that it could parse the make output and do
    nice IDE type stuff with it, but I don't know if they ever did that.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Waldek Hebisch@21:1/5 to Michael Schwingen on Fri Mar 21 14:35:13 2025
    Michael Schwingen <news-1513678000@discworld.dascon.de> wrote:
    On 2025-03-18, David Brown <david.brown@hesbynett.no> wrote:

    These days I happily use it on Windows with recursive make (done
    /carefully/, as all recursive makes should be), automatic dependency
    generation, multiple makefiles, automatic file discovery, parallel
    builds, host-specific code (for things like the toolchain installation
    directory), and all sorts of other bits and pieces.

    I converted to the "recursive make considered harmful" group long ago.
    Having one makefile for the whole build makes it possible to have dependencies crossing directories, and gives better performance in parallel builds - with recursive make, the overhead for entering/exiting directories and waiting for sub-makes to finish piles up. If a compile takes 30 minutes on a fast 16-cpu machine, that does make a difference.

    I do not see a substantial difference in build time between a
    single-Makefile approach and recursive make with a job server. That is
    on Linux and with optimizing compilers. Slower filesystem
    handling or ultra-fast compilers could make a difference.

    Also, I try to be explicit in my Makefiles. Normal 'make'
    rules check (search) for various insane possibilities; being
    explicit limits the need for searching.

    Recursive make makes a lot of sense if the build must be split into
    stages and when the directory structure reflects the dependencies.

    --
    Waldek Hebisch

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Waldek Hebisch on Fri Mar 21 16:45:11 2025
    On 21/03/2025 15:04, Waldek Hebisch wrote:
    David Brown <david.brown@hesbynett.no> wrote:
    On 18/03/2025 19:28, Michael Schwingen wrote:
    On 2025-03-18, David Brown <david.brown@hesbynett.no> wrote:

    A good makefile picks up the new files automatically and handles all the
    dependencies, so often all you need is a new "make -j".

    I don't do that anymore - wildcards in makefiles can lead to all kinds of
    strange behaviour due to files that are left/placed somewhere but are not
    really needed.

    I'm sure you can guess the correct way to handle that - don't leave
    files in the wrong places :-)

    I prefer to list the files I want compiled - it is not that
    much work.


    In a project of over 500 files in 70 directories, it's a lot more work
    than using wildcards and not keeping old unneeded files mixed in with
    source files.

    In a project with about 550 normal source files, 80 headers, 200 test
    files, and about 1200 generated files spread over 12 directories, I use
    explicit file lists. Lists of files increase the volume of the Makefiles,
    but in my experience the extra work to maintain a file list is very small.
    Compared to the effort needed to create a file, adding an entry to the
    file list is negligible.

    That's true.

    But compared to having a wildcard search that includes all .c and .cpp files
    in the source directories, maintaining file lists is still more than
    nothing!

    However, the real benefit from using automatic file searches like this
    is two-fold. One is that you can't get it wrong - you can't forget to
    add the new file to the list, or remove deleted or renamed files from
    the list. The other - bigger - effect is that there is never any doubt
    about the files in the project. A file is in the project and build if
    and only if it is in one of the source directories. That consistency is
    very important to me - and to anyone else trying to look at the project.
    So any technical help in enforcing that is a good thing in my book.


    Explicit lists are useful if groups of files should get somewhat
    different treatment (I have less need for this now, but it was
    important in the past).


    I do sometimes have explicit lists for /directories/ - but not for
    files. I often have one branch in the source directory for my own code,
    and one branch for things like vendor SDKs and third-party code. I can
    then use stricter static warnings for my own code, without triggering
    lots of warnings in external code.

    IMO being explicit helps with readability and makes the code more
    amenable to audit.


    A simple rule of "all files are in the project" is more amenable to audit.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael Schwingen@21:1/5 to David Brown on Fri Mar 21 20:53:48 2025
    On 2025-03-21, David Brown <david.brown@hesbynett.no> wrote:

    The way I use recursive makes is /really/ recursive - the main make (typically split into a few include makefiles for convenience, but only
    one real make) handles everything, and it does some of that be calling /itself/ recursively. It is quite common for me to build multiple
    program images from one set of source - perhaps for different variants
    of a board, with different features enabled, and so on. So I might use
    "make prog=board_a" to build the image for board a, and "make
    prog=board_b" for board b. Each build will be done in its own directory
    - builds/build_a or builds/build_b. Often I will want to build for both boards - then I will do "make prog="board_a board_b"" (with a default
    setting for the most common images).

    OK, that is not the classic recursive-make pattern (i.e. run make in
    each subdirectory). I do that (i.e. building for multiple boards) using
    build scripts that are external to make.

    cu
    Michael
    --
    Some people have no respect of age unless it is bottled.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)