• Re: Improving build system

    From Nicolas Paul Colin de Glocester@21:1/5 to All on Tue May 13 22:48:37 2025
    On Tue, 13 May 2025, pozz wrote:
    "[. . .]

How do you choose the correct toolchain? The embedded target needs the arm-gcc toolchain, for example in

    C:\Program Files (x86)\Atmel\Studio\7.0\toolchain\arm\arm-gnu-toolchain\bin

while the simulator target simply needs gcc.

How do you choose the toolchain in a Makefile? I think one trick is using the prefix.
Usually arm-gcc is arm-none-eabi-gcc.exe, with the "arm-none-eabi-" prefix. Is there another approach?

    [. . .]
[. . .] Should I change the PATH and use the arm-none-eabi- prefix?"


Good evening!

    Did you consider the technique of assigning a concrete compiler to an
abstract compiler variable in a Makefile, as many projects do? For
    example from
    checkmate-0.20/libgnugetopt-1.2/Makefile
    . . .

    # Makefile.in generated by automake 1.15 from Makefile.am.
    # libgnugetopt-1.2/Makefile. Generated from Makefile.in by configure.

    # Copyright (C) 1994-2014 Free Software Foundation, Inc.

    # This Makefile.in is free software; the Free Software Foundation
    # gives unlimited permission to copy and/or distribute it,
    # with or without modifications, as long as this notice is preserved.

    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY, to the extent permitted by law; without
    # even the implied warranty of MERCHANTABILITY or FITNESS FOR A
    # PARTICULAR PURPOSE.

    [. . .]

    [. . .]
    COMPILE = $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) \
    $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS)
    [. . .]
    CC = gcc
    [. . .]
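A minimal sketch of how the same technique could map onto the question above (the TARGET variable and compiler names are assumptions, not part of the excerpt):

# Pick the concrete compiler once, based on the chosen target
ifeq ($(TARGET),embedded)
CC := arm-none-eabi-gcc
else
CC := gcc
endif

# The rest of the Makefile only ever refers to the abstract $(CC)
COMPILE = $(CC) $(CPPFLAGS) $(CFLAGS)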

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to pozz on Wed May 14 11:03:56 2025
    On 13/05/2025 17:57, pozz wrote:
As some of you remember, some weeks ago we had a discussion on the build system of an embedded project.  I declared that I usually use the graphical
IDE released by the silicon manufacturer (in my case, Atmel Studio and
MCUXpresso IDE) and some of you suggested improving this build system
using a command-line tool, such as the universal make - better if used
in Linux or a Linux-based system, such as msys or WSL.

I also declared that I had many issues fine-tuning a cross-platform Makefile
that works in Windows and Linux-like shells at the same time, and some of
you suggested using only WSL or msys, not the Windows CMD shell.

    Recently I found some time to try again and I wrote a decent Makefile as
    a starting point.  Now the command make, launched in a msys/mingw32
shell, is able to build my project, at least for a specific build
configuration, which I call "simulator".

    My projects usually have multiple build configurations. A few for
    different models of the same device, such as LITE, NORMAL and FULL.
    Moreover, I have at least two different targets: embedded and simulator.
    The embedded target is the normal product, usually running on a Cortex-M
    or AVR8 MCU. The simulator target runs directly on Windows. I use it
    very often, because I found it's much faster and simpler to build native binaries and debug such processes. Of course, building a simulator needs
a different compiler, such as msys2/mingw32 or WSL/gcc.
    I also have a DEBUG build configuration (target=embedded) useful for
    some debugging directly on the target (no watchdog, uart logging enabled
    and so on).

    So I could have 7 different build configurations: LITE|NORMAL|FULL for EMBEDDED|SIMULATOR plus DEBUG.

I think it isn't difficult to change my Makefile to process commands of the
form:

       make CONFIG=LITE TARGET=embedded
       make CONFIG=FULL TARGET=simulator
       make CONFIG=DEBUG

    There are many compiler options that are common to all builds (-Wall, -std=c99 and so on).  Some options are target specific (for example -DQUARTZ_FREQ_MHZ=16 -Isrc/ports/avr8 for embedded or
    -Isrc/ports/mingw32 for simulator).

    I could generate the correct options by using ifeq() in Makefile.
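For example, a rough sketch only, reusing the options mentioned above (the exact TARGET handling is illustrative):

ifeq ($(TARGET),embedded)
CFLAGS += -DQUARTZ_FREQ_MHZ=16 -Isrc/ports/avr8
else
CFLAGS += -Isrc/ports/mingw32
endif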

How do you choose the correct toolchain? The embedded target needs the arm-gcc
toolchain, for example in

      C:\Program Files (x86)\Atmel\Studio\7.0\toolchain\arm\arm-gnu-toolchain\bin

while the simulator target simply needs gcc.

How do you choose the toolchain in a Makefile?  I think one trick is using the prefix.  Usually arm-gcc is arm-none-eabi-gcc.exe, with the "arm-none-eabi-" prefix.  Is there another approach?

    I don't know if I could install arm-gcc in msys2 (I'm quite sure I can install it in WSL), but for legacy projects I need to use the Atmel
Studio toolchain.  How do I call the Atmel Studio arm toolchain installed in

      C:\Program Files (x86)\Atmel\Studio\7.0\toolchain\arm\arm-gnu-toolchain\bin

from the msys shell?  Should I change the PATH and use the arm-none-eabi- prefix?


    You are asking a lot of questions here. They are good questions, but it
    would be a very long post if I tried to answer them all fully. So
    instead, I will try to give a few hints and suggestions that you can
    take further. I'll put numbers on them in case you want to reference
    them in replies.

    1.

    Windows path naming is insane. Fortunately, you can almost always
    override it. Whenever you install any serious program in Windows,
    especially if you ever want to refer to it from the command line, batch
    files, makefiles, etc., avoid names with spaces or "awkward" characters.
    I recommend making top-level directories like "progs" or "compilers"
    and putting the tools in there as appropriate. This also makes it
    vastly easier to copy tools to other machines. And since you should
    never upgrade your toolchains - merely add new versions to your
    collection, in separate directories - it is easier if they are better organised.


    2.

    You don't need to use bash or other *nix shells for makefile or other
    tools if you don't want to. When I do builds on Windows, I run "make"
    from a normal command line (or from an editor / IDE). It is helpful to
    have msys2's usr/bin on your path so that make can use *nix command-line utilities like cp, mv, sed, etc. But if you want to make a minimal
    build system, you don't need a full msys2 installation - you only need
    the utilities you want to use, and they can be copied directly (unlike
    with Cygwin or WSL).

    Of course you /can/ use fuller shells if you want. But don't make your makefiles depend on that, as it will be harder to use them from IDEs,
    editors, or any other automation.

    And of course you will want an msys2/mingw64 (/not/ old mingw32) for
    native gcc compilation. Don't bother with WSL unless you actually need
    a fuller Linux system - and if you /do/ need that, dump the wasteful
    crap that is Windows and use Linux for your development. Build speeds
    will double on the same hardware. (In my testing, done a good while
    back, I did some comparisons of a larger build on different setups on
    the same PC, using native Windows build as the baseline. Native Linux
    builds were twice the speed. Running VirtualBox on Windows host, with a
    Linux virtual machine, or running VirtualBox on Linux with a Windows
    virtual machine, both beat native Windows solidly.)


    3.

    Makefiles can be split up. Use "include" - and remember that you can do
    so using macros. In my makefile setups, I have a file "host.mk" that is
    used to identify the build host, then pull in a file that is specific to
    the host:

# This is for identifying the host computer to get the paths right

    ifeq ($(OS),Windows_NT)
    # We are on a Windows machine
    host_os := windows
    host := $(COMPUTERNAME)
    else
    # Linux machine
    host_os := linux
    host := $(shell hostname)
    endif

    ifeq "$(call file-exists,makes/host_$(host).mk)" "1"
    include makes/host_$(host).mk
    else
    $(error No host makefile host_$(host).mk found)
    endif
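(The file-exists function is not defined in this excerpt; a minimal definition along these lines, using $(wildcard), would fit the call above - an assumption, not quoted from the original:)

# Assumed helper: expands to "1" if the named file exists, "0" otherwise
file-exists = $(if $(wildcard $(1)),1,0)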

    Then I have files like "host_xxx.mk" for a computer named "xxx",
    containing things like :

    toolchain_path := /opt/gcc-arm-none-eabi-10-2020-q4-major/bin/

    or

    toolchain_path := c:/micros/gcc-arm-none-eabi-10_2020-q4-major/bin/


    All paths to compilers and other build-related programs are specified in
    these files. The only things that are taken from the OS path are
    standard and common programs that do not affect the resulting binary files.


    Then I have a "commands.mk" file with things like :

    ATDEP := @

    toolchain_prefix := arm-none-eabi-

    CCDEP := $(ATDEP)$(toolchain_path)$(toolchain_prefix)gcc
    CC := $(AT)$(CCACHE) $(toolchain_path)$(toolchain_prefix)gcc
    LD := $(AT)$(toolchain_path)$(toolchain_prefix)gcc
    OBJCOPY := $(AT)$(CCACHE) $(toolchain_path)$(toolchain_prefix)objcopy
OBJDUMP := $(AT)$(CCACHE) $(toolchain_path)$(toolchain_prefix)objdump
    SIZE := $(AT)$(CCACHE) $(toolchain_path)$(toolchain_prefix)size
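(AT and CCACHE are not defined in this excerpt; presumably they are set in the host-specific files, along these illustrative lines:)

# Illustrative assumptions, not part of the quoted setup:
AT := @          # silences command echoing; set to empty for verbose builds
CCACHE :=        # or "ccache" on hosts that have it installed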


    Put CONFIG dependent stuff in "config_full.mk" and similar files. Put
    TARGET specific stuff in "target_simulator.mk". And so on. It makes it
    much easier to keep track of things, and you only need a few high-level
    "ifeq".


    Keep your various makefiles in a separate directory. Your project
    makefile is then clear and simple - much of it will be comments about
    usage (parameters like CONFIG).


    4.

    Generate dependency files, using the same compiler and the same include
    flags and -D flags as you have for the normal compilation, but with
    flags like -MM -MP -MT and -MF to make .d dependency files. Include
    them all in the makefile, using "-include" so that your makefile does
    not stop before they are generated.
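A minimal sketch of such a rule (directory variables as in point 5 below, CCDEP as in point 3; the exact shape is an assumption, not a quote):

# Generate builds/.../dep/foo.d describing what builds/.../obj/foo.o depends on
$(dep_dir)/%.d : %.c
	@mkdir -p $(dir $@)
	$(CCDEP) $(CPPFLAGS) $(CFLAGS) -MM -MP -MT $(obj_dir)/$*.o -MF $@ $<

-include $(depfiles)     # depfiles: the list of .d files, derived from the source list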


    5.

    Keep your build directories neat, separate from all source directories,
    and mirroring the tree structure of the source files. So if you have a
    file "src/gui/main_window.c", and you are building with CONFIG=FULL TARGET=embedded, the object file generated should go in something akin
    to "builds/FULL/embedded/obj/src/gui/main_window.o". I like to have
    separate parts for obj (.o files), dep (.d files), and bin (linked
    binaries, map files, etc.). You could also mix .d and .o files in the
    same directory if you prefer.

    This means you can happily do incremental builds for all your
    configurations and targets, and don't risk mixing object files from
    different setups.
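A sketch of how that layout might be expressed (variable names are illustrative):

build_dir := builds/$(CONFIG)/$(TARGET)
obj_dir   := $(build_dir)/obj
dep_dir   := $(build_dir)/dep
bin_dir   := $(build_dir)/bin

# Mirror the source tree under obj_dir, creating directories on demand
$(obj_dir)/%.o : %.c
	@mkdir -p $(dir $@)
	$(CC) $(CPPFLAGS) $(CFLAGS) -c $< -o $@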


    6.

    Learn to use submakes. When you use plain "make" (or, more
    realistically, "make -j") to build multiple configurations, have each configuration spawned off in a separate submake. Then you don't need to
    track multiple copies of your "TARGET" macro in the same build - each
    submake has just one target, and one config.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Reuther@21:1/5 to All on Wed May 14 18:06:02 2025
On 13.05.2025 at 17:57, pozz wrote:
How do you choose the toolchain in a Makefile?  I think one trick is using the prefix.  Usually arm-gcc is arm-none-eabi-gcc.exe, with the "arm-none-eabi-" prefix.  Is there another approach?

    One idea that I found useful to get out of Makefile madness is to
    generate the build scripts.

    When you have your build scripts in another language that supports
    proper conditionals and subroutines, you have much more freedom in
    decisions you make. Also, things like out-of-tree builds are much easier
    to control; just spit out a list of "build $objdir/$file.o from $srcdir/$file.c" instead of making a pattern rule and hoping for the best.

    This is the idea behind Make replacements such as ninja, which has
basically no decision-making logic built in (unlike Make, which has some
    that is awkward).

    If you overdo the concept of generating Makefiles, you probably end up
    with CMake. But normally, such a generator can be a simple, one-file script.

    But if the structure and feature-set of all your compilers is the same,
    just the names and options are different, you could also do something
    like: put all your build rules into 'rules.mk', make a
    'atmel-arm-gnu-debug.mk' that sets all the variables and then does
    'include rules.mk', and then build with 'make -f <config>.mk'.

    I don't know if I could install arm-gcc in msys2 (I'm quite sure I can install it in WSL), but for legacy projects I need to use the Atmel
Studio toolchain.  How do I call the Atmel Studio arm toolchain installed in

      C:\Program Files (x86)\Atmel\Studio\7.0\toolchain\arm\arm-gnu-toolchain\bin

from the msys shell?  Should I change the PATH and use the arm-none-eabi- prefix?

    That would be personal preference.

I have a slight preference for setting PATH and using a prefix, if I'm reasonably sure that the tools I'm going to use do not exist anywhere
    else on my path by accident.


    Stefan

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From George Neuner@21:1/5 to david.brown@hesbynett.no on Wed May 14 15:21:18 2025
    On Wed, 14 May 2025 11:03:56 +0200, David Brown
    <david.brown@hesbynett.no> wrote:


    1.

    Windows path naming is insane. Fortunately, you can almost always
    override it. Whenever you install any serious program in Windows,
especially if you ever want to refer to it from the command line, batch files, makefiles, etc., avoid names with spaces or "awkward" characters.
    I recommend making top-level directories like "progs" or "compilers"
    and putting the tools in there as appropriate. This also makes it
    vastly easier to copy tools to other machines. And since you should
    never upgrade your toolchains - merely add new versions to your
collection, in separate directories - it is easier if they are better organised.


    Just note that NTFS *does* have a limit on the number of entries in
    the root directory of the drive. Offhand, I don't recall what is the
    limit [for some reason 127 is stuck in my head] but note that the
    typical Windows installation has only about ~20 folders in C:\.

    Subdirectories, however, effectively are unlimited.

    A handful of extra folders in the drive root certainly will not cause
    any problem, but just don't try to install lots of software to
    separate folders directly under the drive root.



    2.

    You don't need to use bash or other *nix shells for makefile or other
    tools if you don't want to. When I do builds on Windows, I run "make"
    from a normal command line (or from an editor / IDE). It is helpful to
have msys2's usr/bin on your path so that make can use *nix command-line utilities like cp, mv, sed, etc.  But if you want to make a minimal
    build system, you don't need a full msys2 installation - you only need
    the utilities you want to use, and they can be copied directly (unlike
    with Cygwin or WSL).

    A number of common Unix utilities are available as native Windows
    executables, so a POSIX environment like msys2 or mingw is not even
    needed [unless you want it for some other purpose, 8-) ].

    https://sourceforge.net/projects/unxutils/



Of course you /can/ use fuller shells if you want.  But don't make your makefiles depend on that, as it will be harder to use them from IDEs, editors, or any other automation.

    Also note that, on Windows, Powershell is able to launch programs much
    faster than CMD. [I don't know why, just that it does.]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Nicolas Paul Colin de Glocester@21:1/5 to All on Thu May 15 01:00:52 2025
Good evening!

    On Wed, 14 May 2025, pozz wrote:
    "[. . .] When it comes to stupid and big IDEs [. . .]
    [. . .]
    It is already a miracle if that software runs without problems with the default installation path. I don't want to imagine what happens if I changed it."


If these IDEs are not trustworthy, then you would need to buy good alternatives.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to George Neuner on Thu May 15 09:48:01 2025
    On 14/05/2025 21:21, George Neuner wrote:
    On Wed, 14 May 2025 11:03:56 +0200, David Brown
    <david.brown@hesbynett.no> wrote:


    1.

    Windows path naming is insane. Fortunately, you can almost always
    override it. Whenever you install any serious program in Windows,
    especially if you ever want to refer to it from the command line, batch
    files, makefiles, etc., avoid names with spaces or "awkward" characters.
    I recommend making top-level directories like "progs" or "compilers"
    and putting the tools in there as appropriate. This also makes it
    vastly easier to copy tools to other machines. And since you should
    never upgrade your toolchains - merely add new versions to your
    collection, in separate directories - it is easier if they are better
    organised.


    Just note that NTFS *does* have a limit on the number of entries in
    the root directory of the drive. Offhand, I don't recall what is the
    limit [for some reason 127 is stuck in my head] but note that the
    typical Windows installation has only about ~20 folders in C:\.

    Subdirectories, however, effectively are unlimited.

    A handful of extra folders in the drive root certainly will not cause
    any problem, but just don't try to install lots of software to
    separate folders directly under the drive root.


    I am not suggesting that he put all the tools directly in the root
    folder! But it is a good idea to put programs you want to find in a
    sane hierarchy. Exactly how any one person wants to organise this will
    vary - you might want to have IDE's separate from compiler toolchains,
    and you might want "c:\compilers" to have subdirectories for "arm",
    "avr", "msp430", or whatever - that's all up to the individual to find
    the best solution for them.



    2.

    You don't need to use bash or other *nix shells for makefile or other
tools if you don't want to.  When I do builds on Windows, I run "make" from a normal command line (or from an editor / IDE).  It is helpful to
    have msys2's usr/bin on your path so that make can use *nix command-line
    utilities like cp, mv, sed, etc. But if you want to make a minimal
    build system, you don't need a full msys2 installation - you only need
    the utilities you want to use, and they can be copied directly (unlike
    with Cygwin or WSL).

    A number of common Unix utilities are available as native Windows executables, so a POSIX environment like msys2 or mingw is not even
    needed [unless you want it for some other purpose, 8-) ].

    https://sourceforge.net/projects/unxutils/


    Indeed, there have been many sources of that kind of program over the
    years. Most are made as mingw or mingw64 compilations, just like you
    get in msys or msys2. These days, however, I would recommend msys2 as
    the easiest and best solution - it has the most flexibility, and you
    rarely need to be concerned about using more disk space than absolutely necessary.



    Of course you /can/ use fuller shells if you want. But don't make your
    makefiles depend on that, as it will be harder to use them from IDEs,
    editors, or any other automation.

    Also note that, on Windows, Powershell is able to launch programs much
    faster than CMD. [I don't know why, just that it does.]


    Powershell can definitely do some things better than the old command
    shell. My intention is that a makefile should not be dependent on the
    shell or environment to run correctly.

    (My guess about the speed difference is that the old command shell is
    probably slower at IO for displaying the output, especially if you have
    "noisy" builds.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to pozz on Thu May 15 11:03:48 2025
    On 14/05/2025 23:51, pozz wrote:
On 14/05/2025 11:03, David Brown wrote:
    On 13/05/2025 17:57, pozz wrote:
    [...]

    You are asking a lot of questions here.  They are good questions, but
    it would be a very long post if I tried to answer them all fully.  So
    instead, I will try to give a few hints and suggestions that you can
    take further.  I'll put numbers on them in case you want to reference
    them in replies.

    Ok, thank you very much for your time.


    1.

    Windows path naming is insane.  Fortunately, you can almost always
    override it.  Whenever you install any serious program in Windows,
    especially if you ever want to refer to it from the command line,
    batch files, makefiles, etc., avoid names with spaces or "awkward"
    characters.   I recommend making top-level directories like "progs" or
    "compilers" and putting the tools in there as appropriate.  This also
    makes it vastly easier to copy tools to other machines.  And since you
    should never upgrade your toolchains - merely add new versions to your
    collection, in separate directories - it is easier if they are better
    organised.

    I know, but not all software installers work well if you change their
    default installation path.  When it comes to stupid and big IDEs (such
    as Atmel/Microchip Studio), I prefer to avoid changing the default installation path (C:\Program Files (x86)\Atmel...) to avoid other
    obscure issues.

    Almost all of them work fine in different directories (and indeed
    different drives). First assume they work - only fall back on the
    crappy defaults if there is no other option.

    Of course, the best answer is to avoid any tool made by Microchip - they
    are the worst of the bunch. (Atmel was always a bit behind in second
    place for the title of worst toolchain supplier, but Microchip has
    gradually integrated them.) I have many fine things to say about
    Microchip and Atmel as hardware suppliers, but I make a point of
    avoiding their microcontrollers because of their tools.

    My strong preference - regardless of the manufacturer - is to use the ARM-supplied gcc toolchains (for ARM microcontrollers, obviously) rather
    than the usually older tools supplied by manufacturers.

    It is already a miracle if that software runs without problems with the default installation path.  I don't want to imagine what happens if I changed it.

Anyway, until now I haven't found issues with spaces.  Even in the msys2 shell
    I can use "/c/Program\ Files\ (x86)/...".

    It usually works - but that does not stop it being a PITA and an insane
    choice of pathnames.

    Still, you have to find what works best for you - I am giving
    recommendations and suggestions, not a unique solution or single
    "correct" answer.


The other IDE I use is MCUXpresso.  It is Eclipse-based so I installed
it in c:\ without any hesitation.


    Yes, that has always worked for me.

    (Of course I normally have it on Linux, rather than Windows, and most manufacturer-supplied software uses a sensible default path - in /opt or
    in /usr/local.)



    2.

    You don't need to use bash or other *nix shells for makefile or other
    tools if you don't want to.  When I do builds on Windows, I run "make"
    from a normal command line (or from an editor / IDE).  It is helpful
    to have msys2's usr/bin on your path so that make can use *nix
    command-line utilities like cp, mv, sed, etc.  But if you want to make
    a minimal build system, you don't need a full msys2 installation - you
    only need the utilities you want to use, and they can be copied
    directly (unlike with Cygwin or WSL).

    Of course you /can/ use fuller shells if you want.  But don't make
    your makefiles depend on that, as it will be harder to use them from
    IDEs, editors, or any other automation.

    In the beginning (some years ago) I started installing GNU Make for
    Windows, putting it in c:\tools\make.  Then I created a simple Makefile
    and tried to process it on a standard Windows command line.  It was a mess!  I remember there were many issues regarding: slash/backslash on
    file paths, lack of Unix commands (rm, mv, ...) and so on.  Native
    Windows tools need backslash in the paths, but some unix tools need
    slash.  It was a mess to transform the paths between the two forms.


    Most tools on Windows are happy with forward slash for path separators
    as well. Certainly everything that is originally a *nix tool will be
    fine with that.

    Of course if you have a makefile that uses commands like "rm" and you
    don't have them on your path, and don't specify the path in the
    makefile, then it won't work. This is why the norm in advanced
    makefiles is to use macros for these things :

    # Put this in the host-specific file, with blank for no path needed
    bin_path :=

    # Use this instead of "rm".
    RM := $(bin_path) rm
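For instance, a clean rule can then be written against the macro (build_dir here is just a placeholder):

clean :
	$(RM) -rf $(build_dir)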


    After this attempt, I gave up.  I thought it was much better to use the
    IDE and build system suggested by the MCU manufacturer.


    For most IDEs, the build system is "make". But the IDE generates the
    makefiles - slowly for big projects, and usually overly simplistic with
    far too limited options.

    But IDE's are certainly much easier for getting started. On new
    projects, or new devices, I will often use the IDE to get going and then
    move it over to an independent makefile. (And I'll often continue to
    use the IDE after that as a solid editor and debugger - IDE's are
    generally happy with external makefiles.)

    Now I'm trying a Unix shell in Windows (msys, WSL or even the bash
    installed with git) and it seems many issues I had are disappearing.


    And of course you will want an msys2/mingw64 (/not/ old mingw32) for
    native gcc compilation.

The goal of the simulator is to detect problems in the software by running it
directly on Windows, without flashing, debug probes and so on.  I increased my productivity a lot when I started this approach.

    Obviously, the software running on Windows (the simulator) should be
very similar to the software running on the embedded target.  Cortex-M
MCUs are 32-bit so I thought it should be better to use a 32-bit
    compiler even for the simulator.


    mingw-w64 can happily generate 32-bit Windows executables. IIRC you
    just use the "-m32" flag. It is significantly better than old mingw in
    a number of ways - in particular it has vastly better standard C library support.

Moreover, I think many issues arise in a 64-bit compilation, for
example statically allocated buffers that would be too small on a 64-bit platform.  Or some issues with serializers.


    Don't bother with WSL unless you actually need a fuller Linux system -
    and if you /do/ need that, dump the wasteful crap that is Windows and
    use Linux for your development.  Build speeds will double on the same
    hardware.  (In my testing, done a good while back, I did some
    comparisons of a larger build on different setups on the same PC,
    using native Windows build as the baseline.  Native Linux builds were
    twice the speed.  Running VirtualBox on Windows host, with a Linux
    virtual machine, or running VirtualBox on Linux with a Windows virtual
    machine, both beat native Windows solidly.)

    I completely agree with you.  At the moment msys2 seems ok.


    3.

    Makefiles can be split up.  Use "include" - and remember that you can
    do so using macros.  In my makefile setups, I have a file "host.mk"
    that is used to identify the build host, then pull in a file that is
    specific to the host:

# This is for identifying the host computer to get the paths right

    ifeq ($(OS),Windows_NT)
       # We are on a Windows machine
       host_os := windows
       host := $(COMPUTERNAME)
    else
       # Linux machine
       host_os := linux
       host := $(shell hostname)
    endif

    ifeq "$(call file-exists,makes/host_$(host).mk)" "1"
       include makes/host_$(host).mk
    else
       $(error No host makefile host_$(host).mk found)
    endif

    Then I have files like "host_xxx.mk" for a computer named "xxx",
    containing things like :

    toolchain_path := /opt/gcc-arm-none-eabi-10-2020-q4-major/bin/

    or

    toolchain_path := c:/micros/gcc-arm-none-eabi-10_2020-q4-major/bin/


    All paths to compilers and other build-related programs are specified
    in these files.  The only things that are taken from the OS path are
    standard and common programs that do not affect the resulting binary
    files.

    It is an interesting and uncommon (at least for me) approach.

    What happens if multiple developers work on the same repository?  Are
    they forced to create a host_xxx.mk for all their development machines? Should the host_xxx.mk files be added to the repository?

    Yes, that is /exactly/ what you do. It also applies to a single
    developer using multiple different machines. For any long-term project,
    you want to be sure you can check out the repository, do a clean build,
    and get an identical binary from more than one machine. Having
    individual "host_XXX.mk" files means that the build adapts automatically
    to the machine. What you don't want is each developer making changes to
    a single makefile so that it works on their machine - and then either
    not checking in the changes, or checking them in and messing things up
    for someone else.


    I guess the only goal of host_xxx.mk is to avoid changing PATH before
    make.  Why don't you like setting the PATH according to the project
    you're working on?


    No, that is not the only goal - there can be many differences between
    machines. For example, I usually have ccache on my Linux systems but it
    is rare to have it on (native) Windows systems - thus that can be
    enabled or disabled in a host_xxx.mk file. Some machines might also
    support building the documentation, or running a simulator, or signing binaries.

    Setting the path would be an extra complication of no benefit, but a significant source of risk or error. How do you make sure your IDE is
    using the right PATH settings before it runs "make"? How do you deal
    with multiple projects - do you keep swapping PATHs? (I usually have a half-dozen projects "open" at a time, in different workspaces on my
    Linux machine.) Do you now have a makefile and a separate path-setting
    batch file or shell script that you need to run before doing a project
    build? How do you handle things when you install some new Windows
    program that messes with your path?

    It is /vastly/ simpler and safer to put the paths to the binaries in a
    couple of macros in your makefile(s). It also gives clear and
    unequivocal documentation of the tools you need - if your makefile has
    this line :

    toolchain_path := c:/micros/gcc-arm-none-eabi-10_2020-q4-major/bin/

    then there is never any doubt as to exactly which toolchain is used for
    the project.


    Then I have a "commands.mk" file with things like :

    ATDEP := @

    toolchain_prefix := arm-none-eabi-

    CCDEP := $(ATDEP)$(toolchain_path)$(toolchain_prefix)gcc
    CC := $(AT)$(CCACHE) $(toolchain_path)$(toolchain_prefix)gcc
    LD := $(AT)$(toolchain_path)$(toolchain_prefix)gcc
    OBJCOPY := $(AT)$(CCACHE) $(toolchain_path)$(toolchain_prefix)objcopy
OBJDUMP := $(AT)$(CCACHE) $(toolchain_path)$(toolchain_prefix)objdump
    SIZE := $(AT)$(CCACHE) $(toolchain_path)$(toolchain_prefix)size


    Put CONFIG dependent stuff in "config_full.mk" and similar files.  Put
    TARGET specific stuff in "target_simulator.mk".  And so on.  It makes
    it much easier to keep track of things, and you only need a few
    high-level "ifeq".


    Keep your various makefiles in a separate directory.  Your project
    makefile is then clear and simple - much of it will be comments about
    usage (parameters like CONFIG).

    Yes, splitting makefiles is a good suggestion.


    4.

    Generate dependency files, using the same compiler and the same
    include flags and -D flags as you have for the normal compilation, but
    with flags like -MM -MP -MT and -MF to make .d dependency files.
    Include them all in the makefile, using "-include" so that your
    makefile does not stop before they are generated.

    I have to admit that ChatGPT helped me to create the Makefile.  The
    CFLAGS include -MMD and -MP and at the end I have

      -include $(DEP_FILES)

    Of course, DEP_FILES are:

      DEP_FILES := $(OBJ_FILES:.o=.d)


    That's a good start. There are quite a few articles and blog posts
    about automatic generation of makefile dependencies that can be worth
    reading.

Honestly, I don't know if it is good, but I tried to change an include
file and the related C files are compiled again as expected (so I think the dependencies are correctly managed).

There's one thing that doesn't work.  If I change the Makefile itself, for example changing CFLAGS by adding a new compiler option, I need to manually invoke a clean.


    depfiles_src := $(cfiles:.c=.d) $(cppfiles:.cpp=.d)
    depfiles := $(addprefix $(dep_dir),$(patsubst ../%,%,$(depfiles_src)))

    -include $(depfiles)

    alldepends := makefile $(wildcard makes/*.mk)
    all : $(alldepends) $(depfiles)
    depends : $(alldepends)

    # "depends" target just makes dep files
    depends : $(depfiles)
    @echo Updated dependencies


Vary according to your needs. But basically, if something has
$(alldepends) in its dependency list, it will be rebuilt if one of your makefiles changes.
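For the specific problem above - a changed CFLAGS in a makefile not triggering recompilation - one option (a sketch, not quoted from the thread) is to add $(alldepends) to the object rule itself:

# Any makefile change forces every object to be rebuilt
$(obj_dir)/%.o : %.c $(alldepends)
	$(CC) $(CPPFLAGS) $(CFLAGS) -c $< -o $@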


    5.

    Keep your build directories neat, separate from all source
    directories, and mirroring the tree structure of the source files.  So
    if you have a file "src/gui/main_window.c", and you are building with
    CONFIG=FULL TARGET=embedded, the object file generated should go in
    something akin to "builds/FULL/embedded/obj/src/gui/main_window.o".  I
    like to have separate parts for obj (.o files), dep (.d files), and
    bin (linked binaries, map files, etc.).  You could also mix .d and .o
    files in the same directory if you prefer.

    This means you can happily do incremental builds for all your
    configurations and targets, and don't risk mixing object files from
    different setups.

    Yes, perfectly agreed.


    6.

    Learn to use submakes.  When you use plain "make" (or, more
    realistically, "make -j") to build multiple configurations, have each
    configuration spawned off in a separate submake.  Then you don't need
    to track multiple copies of your "TARGET" macro in the same build -
    each submake has just one target, and one config.

    I don't think I got the point.  Now I invoke the build of a single build configuration.  Are you talking about running make to build multiple configurations at the same time?

    Yes.

    Obviously it depends on the stage you are in development and the kind of project - much of the time, you will want to build just one
    configuration. But sometimes you will also want to make multiple builds
    to check that a small change has not caused trouble elsewhere, or for
    different kinds of testing? Why run multiple "make" commands when you
    can do a full project build from one "make" ?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Nicolas Paul Colin de Glocester on Thu May 15 11:17:52 2025
    On 15/05/2025 01:00, Nicolas Paul Colin de Glocester wrote:
Good evening!

    On Wed, 14 May 2025, pozz wrote:
    "[. . .] When it comes to stupid and big IDEs [. . .]
    [. . .]
    It is already a miracle if that software runs without problems with the default
    installation path. I don't want to imagine what happens if I changed it."


If these IDEs are not trustworthy, then you would need to buy good alternatives.

    Don't be silly. There /are/ no alternatives that are more trustworthy -
    they just have different failure or risk points. There can be benefits
    in buying a commercial IDE, and/or a commercial toolchain, but lower
    risk of bugs, quirks or installation issues is most certainly not one of
    them.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to pozz on Fri May 16 11:12:23 2025
    On 15/05/2025 23:25, pozz wrote:
On 15/05/2025 11:03, David Brown wrote:
    On 14/05/2025 23:51, pozz wrote:
On 14/05/2025 11:03, David Brown wrote:
    On 13/05/2025 17:57, pozz wrote:
    [...]


I worked on PIC8 and AVR8 and IMHO AVR8 is much better than PIC8.
    Regarding Cortex-M, SAM devices are fine for me.

    The 8-bit PIC's are extraordinarily robust microcontrollers - I've seen
    devices rated for 85 °C happily running at 180 °C, and tolerating short-circuits, over-current, and many types of abuse. But the
    processor core is very limited, and the development tools have always
    been horrendous. The AVR is a much nicer core - it is one of the best
    8-bit cores around. But you are still stuck working in a highly device-specific form of coding instead of normal C or C++. And you are
    still stuck with Microchip's attitude to development tools. (You can
    probably tell that I find this very frustrating - I would like to be
    able to use more of Microchip / Atmel's devices.)



    2.

    You don't need to use bash or other *nix shells for makefile or
    other tools if you don't want to.  When I do builds on Windows, I
    run "make" from a normal command line (or from an editor / IDE).  It
    is helpful to have msys2's usr/bin on your path so that make can use
    *nix command-line utilities like cp, mv, sed, etc.  But if you want
    to make a minimal build system, you don't need a full msys2
    installation - you only need the utilities you want to use, and they
    can be copied directly (unlike with Cygwin or WSL).

    Of course you /can/ use fuller shells if you want.  But don't make
    your makefiles depend on that, as it will be harder to use them from
    IDEs, editors, or any other automation.

    In the beginning (some years ago) I started installing GNU Make for
    Windows, putting it in c:\tools\make.  Then I created a simple
    Makefile and tried to process it on a standard Windows command line.
    It was a mess!  I remember there were many issues regarding:
    slash/backslash on file paths, lack of Unix commands (rm, mv, ...)
    and so on.  Native Windows tools need backslash in the paths, but
    some unix tools need slash.  It was a mess to transform the paths
    between the two forms.


    Most tools on Windows are happy with forward slash for path separators
    as well.

    mkdir, just to name one?  And you need mkdir in a Makefile.


    Don't use the crappy Windows-native one - use msys2's mkdir. As I said:

    bin_path :=
    RM := $(bin_path) rm
    MKDIR := $(bin_path) mkdir

    and so on.

    Now your makefile can use "mkdir" happily - with forward slashes, with
    "-p" to make a whole chain of directories, and so on.

    Once you have left the limitations of the Windows default command shell builtins behind, it is all much easier. For utilities like "cp" and
    "rm" it is a little more obvious since the names are different from the
    DOS leftovers "copy" and "del" - unfortunately "mkdir" is the same name
    in both cases.


    Certainly everything that is originally a *nix tool will be fine with
    that.

    Of course if you have a makefile that uses commands like "rm" and you
    don't have them on your path, and don't specify the path in the
    makefile, then it won't work.  This is why the norm in advanced
    makefiles is to use macros for these things :

    # Put this in the host-specific file, with blank for no path needed
    bin_path :=

    # Use this instead of "rm".
    RM := $(bin_path) rm

Initially I insisted on using native Windows commands: DEL, MKDIR, COPY and
    so on.  Finally I gave up.


    Excellent decision.



    After this attempt, I gave up.  I thought it was much better to use
    the IDE and build system suggested by the MCU manufacturer.


    For most IDEs, the build system is "make".  But the IDE generates the
    makefiles - slowly for big projects, and usually overly simplistic
    with far too limited options.

    But IDE's are certainly much easier for getting started.  On new
    projects, or new devices, I will often use the IDE to get going and
    then move it over to an independent makefile.  (And I'll often
    continue to use the IDE after that as a solid editor and debugger -
    IDE's are generally happy with external makefiles.)

    I'm going to create a new post regarding editors and debugger... stay
    tuned :-D

    You are keeping this group alive almost single-handedly :-) Many of us
    read and answer posts, but few start new threads.



    Now I'm trying a Unix shell in Windows (msys, WSL or even the bash
    installed with git) and it seems many issues I had are disappearing.


    And of course you will want an msys2/mingw64 (/not/ old mingw32) for
    native gcc compilation.

The goal of the simulator is to detect problems in the software by running it
directly on Windows, without flashing, debug probes and so on.
    I increased my productivity a lot when I started this approach.

    Obviously, the software running on Windows (the simulator) should be
very similar to the software running on the embedded target.  Cortex-M
MCUs are 32-bit so I thought it should be better to use a 32-bit
    compiler even for the simulator.


    mingw-w64 can happily generate 32-bit Windows executables.  IIRC you
    just use the "-m32" flag.  It is significantly better than old mingw
    in a number of ways - in particular it has vastly better standard C
    library support.

    Why doesn't it work for me?  I open a Msys2/mingw64 shell and...

$ gcc -m32 -o main.exe main.c
C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/11.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: skipping incompatible C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/11.2.0/../../../../x86_64-w64-mingw32/lib/libmingw32.a when searching for -lmingw32
C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/11.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: skipping incompatible C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/11.2.0/../../../../x86_64-w64-mingw32/lib\libmingw32.a when searching for -lmingw32
...
... and much more


    It looks like you don't have the 32-bit static libraries included in
    your msys2/mingw64 installation - these things are often optional. (It
    might be referred to as "multi-lib support".) I haven't used gcc on
    Windows for a long time - most of my work is on Linux. But I'm sure
    that you'll find the answer easily now you know it is the 32-bit static libraries (libmingw32.a) that you are missing.


    I guess the only goal of host_xxx.mk is to avoid changing PATH before
    make.  Why don't you like setting the PATH according to the project
    you're working on?


    No, that is not the only goal - there can be many differences between
    machines.  For example, I usually have ccache on my Linux systems but
    it is rare to have it on (native) Windows systems - thus that can be
    enabled or disabled in a host_xxx.mk file.  Some machines might also
    support building the documentation, or running a simulator, or signing
    binaries.

    Setting the path would be an extra complication of no benefit, but a
    significant source of risk or error.  How do you make sure your IDE is
    using the right PATH settings before it runs "make"?  How do you deal
    with multiple projects - do you keep swapping PATHs?  (I usually have
    a half-dozen projects "open" at a time, in different workspaces on my
    Linux machine.)  Do you now have a makefile and a separate
    path-setting batch file or shell script that you need to run before
    doing a project build?  How do you handle things when you install some
    new Windows program that messes with your path?

    It is /vastly/ simpler and safer to put the paths to the binaries in a
    couple of macros in your makefile(s).  It also gives clear and
    unequivocal documentation of the tools you need -  if your makefile
    has this line :

    toolchain_path := c:/micros/gcc-arm-none-eabi-10_2020-q4-major/bin/

    then there is never any doubt as to exactly which toolchain is used
    for the project.

I see your points.  The only drawback seems to be putting a bunch of
    host_xxx.mk files in the repository.  If the developer team and their development machines are well defined and static, everything goes well.


    Typically the host_xxx.mk files will be pretty much the same for each
    Windows system and each Linux system. You might find it simpler to just
    have a single file that checks for the OS and sets the paths
    specifically, without bothering about host details.

However, what happens when a new developer pulls your repository and wants
to build?  First, he must create his host_xxx.mk and start
polluting the original repository.  Instead, by using the PATH, he could build without touching any files in the repo.

    How often does a new developer join the team - or how often do you add a
    new host? If it is once every few years, it doesn't matter. If it
    happens regularly, then this will be a pain and you might want to have a different scheme (such as common setups on all Linux systems and all
    Windows systems). But using the PATH is much worse IME.


    Maybe this isn't our situation, but a public open-source repository
can't use your approach.  It's impossible to include tens or hundreds of host_xxx.mk files in the public repository.

    Sure.

    That's a completely different kind of project, however. In open source projects you'll want to make the system compilable with a wide range of
    tools, versions and options, and you expect a lot of varied changes to
    the code. That's entirely different from a serious commercial embedded
    system where you want to be able to make a release of the project and
    check it in, then ten years later check it out on a different machine
    and OS, do a rebuild, and get bit-perfect identical binaries. I am not suggesting a one-size-fits-all solution.


Moreover, what happens if two developers like astronomy and both set the
hostname of their development machines to JUPITER?  Maybe one uses Linux,
the other Windows.

    Use your imagination :-)

    In your make, it seems you include the correct host_xxx.mk file
    automatically from the hostname.


    6.

    Learn to use submakes.  When you use plain "make" (or, more
    realistically, "make -j") to build multiple configurations, have
    each configuration spawned off in a separate submake.  Then you
    don't need to track multiple copies of your "TARGET" macro in the
    same build - each submake has just one target, and one config.

    I don't think I got the point.  Now I invoke the build of a single
    build configuration.  Are you talking about running make to build
    multiple configurations at the same time?

    Yes.

    Obviously it depends on the stage you are in development and the kind
    of project - much of the time, you will want to build just one
    configuration.  But sometimes you will also want to make multiple
    builds to check that a small change has not caused trouble elsewhere,
    or for different kinds of testing?  Why run multiple "make" commands
    when you can do a full project build from one "make" ?

    Are you thinking something similar to:

    all_configs:
        $(MAKE) -j 4 CONFIG=FULL
        $(MAKE) -j 4 CONFIG=STANDARD
        $(MAKE) -j 4 CONFIG=LITE


    Don't use "-j" on the submakes - just use "$(MAKE)" and it will inherit
the job count from the first instance, which acts as the jobserver.


With my current Makefile, "make all_configs" returns an error because
    CONFIG is not specified.

    You could put something like :

    CONFIG ?= FULL

    to give a default configuration.

    I actually have something like :

    ifneq "$(submake)" "1"
    # This is the original main make, used only to start the sub-makes
    # "progs" is a list of the programs, or configurations, to build

    # Get any non-prog goals
    goals := $(filter-out $(all_progs),$(MAKECMDGOALS))

    define submake_template
    # $(1) = prg
    .PHONY : $(1)
    $(1) :
    @echo Spawning submake for $(1)
    +$(MAKE) --no-builtin-rules $(goals) prog=$(1) submake=1
    endef
    $(foreach prg,$(prog),$(eval $(call submake_template,$(prg))))
    else
    # We are in the sub-make for a configuration
    include makes/main.mk
    endif

    Thus the only thing that is done from the original instance of "make" is
    to start as many submakes as appropriate, each with a specific CONFIG
    and with the submake variable set.
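(The prog and all_progs variables are not shown here; presumably they are set up beforehand, along these illustrative lines:)

# Assumed setup: the configurations that can be built
all_progs := lite normal full debug
prog ?= $(all_progs)        # spawn a submake for each, unless one is named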

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Nicolas Paul Colin de Glocester@21:1/5 to All on Fri May 16 12:21:31 2025
    On Thu, 15 May 2025, David Brown wrote:
    "There /are/ no alternatives that are more trustworthy - they
    just have different failure or risk points. There can be benefits in buying a commercial IDE, and/or a commercial toolchain, but lower risk of bugs, quirks or
    installation issues is most certainly not one of them."

    C and C++ compilers and codes produced thereby are not trustworthy. Use commercial Ada compilers to avoid bugs.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Nicolas Paul Colin de Glocester on Fri May 16 14:42:16 2025
    On 16/05/2025 12:21, Nicolas Paul Colin de Glocester wrote:
    On Thu, 15 May 2025, David Brown wrote:
    "There /are/ no alternatives that are more trustworthy - they
    just have different failure or risk points. There can be benefits in buying a
    commercial IDE, and/or a commercial toolchain, but lower risk of bugs, quirks or
    installation issues is most certainly not one of them."

    C and C++ compilers and codes produced thereby are not trustworthy. Use commercial Ada compilers to avoid bugs.

    No one was questioning the trustworthiness of compilers or the code they generate. The issue was about how well IDE's cope with unusual
    installations outside the defaults expected by the supplier. The free
    IDE's provided with manufacturers are vastly more commonly used than
    commercial IDE's (especially those that have their own custom IDE's),
    and you can expect them to have been tested and used in a much wider
    range of circumstances.

    As for the trustworthiness of compilers, that's another matter. I have
    never seen any reason to suppose that commercial compilers are more
    trustworthy (in terms of accepting valid code or generating correct
    code) than the good open source compilers (gcc and clang). I have never
    seen any reason to suppose that Ada compilers are more trustworthy than
    C or C++ compilers. On the contrary, I find that more popular tools are
    less likely to have serious bugs.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to pozz on Fri May 16 15:30:55 2025
    On 16/05/2025 12:46, pozz wrote:
On 16/05/2025 11:12, David Brown wrote:
    On 15/05/2025 23:25, pozz wrote:
On 15/05/2025 11:03, David Brown wrote:
    On 14/05/2025 23:51, pozz wrote:
On 14/05/2025 11:03, David Brown wrote:
    On 13/05/2025 17:57, pozz wrote:
    [...]


I worked on PIC8 and AVR8 and IMHO AVR8 is much better than PIC8.
    Regarding Cortex-M, SAM devices are fine for me.

    The 8-bit PIC's are extraordinarily robust microcontrollers - I've
    seen devices rated for 85 °C happily running at 180 °C, and tolerating
    short-circuits, over-current, and many types of abuse.  But the
    processor core is very limited, and the development tools have always
    been horrendous.  The AVR is a much nicer core - it is one of the best
    8-bit cores around.  But you are still stuck working in a highly
    device-specific form of coding instead of normal C or C++.

Why do you write "highly device-specific form of coding"? Considering
they are 8-bit (and C requires at least 16-bit integers), it seems to me an acceptable C language when you compile with avr-gcc.

You can use int variables without any problems (they will be 16 bits).
You can use function calls passing parameters. You can return complex
    data from functions.

    Of course flash memory is in a different address space, so you need
    specific API to access data from flash.

Do you know of other 8-bit cores that are better supported by a C compiler?


    Certainly C programming with avr-gcc is closer to normal C than C
    programming with PIC's and other 8-bit devices.

    But I don't want to work with "the most normal C considering the
    limitations of the processor" - I want to work with normal C and C++.

    I don't want to have to think about using "uint8_t" instead of "int"
    because of processor efficiency. I don't want to be limited in my
    pointer usage because the processor can't handle pointers well. I don't
    want to have a non-linear memory, where pointers to flash are different
    to pointers to ram and bigger devices have a mess of address spaces and
    linker complications if you have large blocks of read-only data. I
    don't want my C++ restricted because of severely limited calling
    conventions, pointer usage, and limited registers.

    ARM core microcontrollers these days are significantly smaller, cheaper
    and lower power than AVRs in most categories. There's a few situations
    in which AVRs might still be the best choice in a new product, but I
    consider them legacy devices, with development only for minor updates to existing products.

    (I'll be happy to switch to RISC-V to replace or complement ARM.)


    And you are still stuck with Microchip's attitude to development
    tools.  (You can probably tell that I find this very frustrating - I
    would like to be able to use more of Microchip / Atmel's devices.)

Maybe we have already talked about this in the past. I don't know if avr-gcc
was developed by Atmel or by the Arduino community.

Neither. It was independent, based on voluntary work, with Atmel providing half-hearted support on occasion.

Anyway, for AVR8 you have
the possibility of using gcc tools for compiling and debugging. There are
many open source tools. I think you could completely avoid the
Microchip/Atmel IDE for AVR8 without any problems. The Arduino IDE is a good example.


    The Arduino IDE and libraries are great for quick tests, getting
    familiar with hardware, hobby projects, and proofs-of-concept, but
    terrible for serious work.

    But yes, you can do real work with AVRs without Microchip or Atmel's IDE's.


    2.

    You don't need to use bash or other *nix shells for makefile or
other tools if you don't want to.  When I do builds on Windows, I run "make" from a normal command line (or from an editor / IDE).
    It is helpful to have msys2's usr/bin on your path so that make
can use *nix command-line utilities like cp, mv, sed, etc.  But if you want to make a minimal build system, you don't need a full
    msys2 installation - you only need the utilities you want to use,
    and they can be copied directly (unlike with Cygwin or WSL).

Of course you /can/ use fuller shells if you want.  But don't make your makefiles depend on that, as it will be harder to use them
    from IDEs, editors, or any other automation.

    In the beginning (some years ago) I started installing GNU Make for
    Windows, putting it in c:\tools\make.  Then I created a simple
    Makefile and tried to process it on a standard Windows command
    line. It was a mess!  I remember there were many issues regarding:
    slash/backslash on file paths, lack of Unix commands (rm, mv, ...)
    and so on.  Native Windows tools need backslash in the paths, but
    some unix tools need slash.  It was a mess to transform the paths
    between the two forms.


    Most tools on Windows are happy with forward slash for path
    separators as well.

    mkdir, just to name one?  And you need mkdir in a Makefile.

    Don't use the crappy Windows-native one - use msys2's mkdir.  As I said:

    bin_path :=
    RM := $(bin_path) rm
    MKDIR := $(bin_path) mkdir

    and so on.

    Now your makefile can use "mkdir" happily - with forward slashes, with
    "-p" to make a whole chain of directories, and so on.

    Yes, sure, now I know.  I was responding to your "Most tools on Windows
    are happy with forward slash". I thought your "tools on Windows" were
    native Windows commands.


    Ah, okay. Many programs that come with Windows /are/ happy with forward slashes for paths - because the relevant Windows API's are happy with
    forward slashes. But the old stuff, especially the commands built into
    the old command shell, can't handle them. There will also be trouble
    for commands that use forward slashes for flags and other parameters. I
    meant that there is no problem with utilities compiled on Windows that
    run natively (as distinct from under WSL, or restricted to a bash shell,
    or something like that).

I think your suggestion is: explicitly call msys tools (rm, mkdir, gcc)
in a normal Windows CMD shell, without insisting on using the msys shell
directly. Maybe this will help with integration with third-party IDEs/editors (such as VSCode, C::B, and so on).


    Yes, exactly.


    I'm going to create a new post regarding editors and debugger... stay
    tuned :-D

    You are keeping this group alive almost single-handedly :-)  Many of
    us read and answer posts, but few start new threads.

I'm the student, you are the teachers, so it is normal that I ask the
questions :-D

    [OT] I like newsgroups for chatting with others on specific topics.
    Nowadays unfortunately newsgroups are dying in favor of other social platforms: Facebook, reddit, blogs.... Do you know of some other active platforms about embedded?


    I too like Usenet as the non-social social network :-)

    I suppose some day I will join reddit. The comp.lang.c and
    comp.lang.c++ newsgroups are quite active, and might be of interest to
    you. comp.arch has some interesting conversations too sometimes. (And
    there is always sci.electronics.design, if you want a somewhat
    anti-social newsgroup that occasionally talks about electronics.)


    It looks like you don't have the 32-bit static libraries included in
    your msys2/mingw64 installation - these things are often optional.
    (It might be referred to as "multi-lib support".)  I haven't used gcc
    on Windows for a long time - most of my work is on Linux.  But I'm
    sure that you'll find the answer easily now you know it is the 32-bit
    static libraries (libmingw32.a) that you are missing.

In many places they suggest using msys2/mingw32 for generating 32-bit Windows binaries. For example here[1].

    [1] https://superuser.com/questions/1473717/compile-in-msys2-mingw64-with-m32-option


    Try looking in other places :-)

    To be honest, I have not looked at this - I don't need to use gcc on
    Windows myself. And neither my Windows nor my msys2 / mingw64
    installation have been updated in many years - the tools I need don't
    change much. But I have no doubt that mingw64 /can/ generate 32-bit
    Windows binaries, that your problem is the missing static libraries, and
    that it is a significantly superior toolchain to the older mingw -
    primarily because that uses the slow, outdated and limited external MS
    DLL's for standard C library functions.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to pozz on Fri May 16 17:20:25 2025
    On 16/05/2025 15:45, pozz wrote:
On 14/05/2025 11:03, David Brown wrote:
    On 13/05/2025 17:57, pozz wrote:
    [...]
    3.

    Makefiles can be split up.  Use "include" - and remember that you can
    do so using macros.  In my makefile setups, I have a file "host.mk"
    that is used to identify the build host, then pull in a file that is
    specific to the host:

# This is for identifying the host computer to get the paths right

    ifeq ($(OS),Windows_NT)
       # We are on a Windows machine
       host_os := windows
       host := $(COMPUTERNAME)
    else
       # Linux machine
       host_os := linux
       host := $(shell hostname)
    endif

    ifeq "$(call file-exists,makes/host_$(host).mk)" "1"
       include makes/host_$(host).mk
    else
       $(error No host makefile host_$(host).mk found)
    endif

    Then I have files like "host_xxx.mk" for a computer named "xxx",
    containing things like :

    toolchain_path := /opt/gcc-arm-none-eabi-10-2020-q4-major/bin/

    or

    toolchain_path := c:/micros/gcc-arm-none-eabi-10_2020-q4-major/bin/


    All paths to compilers and other build-related programs are specified
    in these files.  The only things that are taken from the OS path are
    standard and common programs that do not affect the resulting binary
    files.

    Regarding this point, I tried to set

       toolchain_path := c:\\msys64\\mingw64\\bin


    What happens when you use the correct path?

    toolchain_path := c:/msys64/mingw64/bin/

    (Note the trailing slash - or you can add it when you use the macro.)
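With the trailing slash in place, the macro concatenates cleanly, in the same style as the commands.mk fragment earlier in the thread:

CC := $(toolchain_path)gcc        # expands to c:/msys64/mingw64/bin/gcc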



    Strive to get rid of all the Windows idiosyncrasies here.

    make, gcc and all the other relevant tools here are from a *nix
    background. Use them with that in mind and it will all be smoother.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)