• how does cross compilation work?

    From Thiago Adams@thiago.adams@gmail.com to comp.lang.c on Fri Aug 29 15:46:44 2025
    From Newsgroup: comp.lang.c

    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs
    the code. What happens in this case?

    The solution I can think of is emulation when evaluating constant
    expressions. But I don't know if any compiler does it this way.



  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Fri Aug 29 12:54:27 2025
    From Newsgroup: comp.lang.c

    Thiago Adams <thiago.adams@gmail.com> writes:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs
    the code. What happens in this case?

    The solution I can think of is emulation when evaluating constant
    expressions. But I don't know if any compiler does it this way.

    For example, 65535u + 1u will evaluate to 0u if the target system
    has 16-bit int, 65536u otherwise. (I picked an example that
    doesn't depend on UINT_MAX or any other macros defined in the
    standard library.)

    Any compiler, cross- or not, is required to evaluate constant
    expressions correctly for the target system. Whether they do so
    by some sort of emulation is an implementation detail.
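
    As a rough sketch of that requirement in a complete program (the
    value printed is assumed to depend only on the target's unsigned
    int width, never on the machine doing the compiling):

        #include <stdio.h>

        int main(void)
        {
            /* The compiler must fold this constant expression with the
               target's unsigned int width: it wraps to 0u when unsigned
               int is 16 bits, and is 65536u when it is wider. */
            unsigned int x = 65535u + 1u;

            printf("%u\n", x);   /* must match what the target computes */
            return 0;
        }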

    Even a non-cross compiler might not be implemented in exactly
    the same language and configuration as the code it's compiling,
    so evaluating constant expressions locally might not work correctly.
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
  • From Thiago Adams@thiago.adams@gmail.com to comp.lang.c on Fri Aug 29 17:10:25 2025
    From Newsgroup: comp.lang.c

    On 29/08/2025 16:54, Keith Thompson wrote:
    Thiago Adams <thiago.adams@gmail.com> writes:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs
    the code. What happens in this case?

    The solution I can think of is emulation when evaluating constant
    expressions. But I don't know if any compiler does it this way.

    For example, 65535u + 1u will evaluate to 0u if the target system
    has 16-bit int, 65536u otherwise. (I picked an example that
    doesn't depend on UINT_MAX or any other macros defined in the
    standard library.)

    Yes, this is the kind of example I had in mind.

    Any compiler, cross- or not, is required to evaluate constant
    expressions correctly for the target system. Whether they do so
    by some sort of emulation is an implementation detail.

    Even a non-cross compiler might not be implemented in exactly
    the same language and configuration as the code it's compiling,
    so evaluating constant expressions locally might not work correctly.

    So in theory it has to be the same result. This may be hard to achieve.


  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Fri Aug 29 20:19:34 2025
    From Newsgroup: comp.lang.c

    On 2025-08-29, Thiago Adams <thiago.adams@gmail.com> wrote:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs
    the code. What happens in this case?

    A constant expression must be evaluated in the way that would happen
    if it were translated to code on the target machine.

    Thus, if necessary, the features of the target machine's arithmetic must
    be simulated on the build machine.

    (Modulo issues not relevant to the debate: if the expression has
    ambiguous evaluation orders that affect the result, or undefined
    behavior, those don't have to play out the same way under different
    modes of processing in the same implementation.)

    The solution I can think of is emulation when evaluating constant
    expressions. But I don't know if any compiler does it this way.

    They have to; if a constant-folding optimization produces a different
    result (in an expression which has no such issues), then that is an
    incorrect optimization.

    GCC uses arbitrary-precision libraries (GNU GMP for integer, and GNU
    MPFR for floating-point), which are in part for this issue, I think.
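
    One way to picture the integer side of that emulation (a simplified
    sketch, not GCC's actual code; TARGET_UINT_BITS is an assumed
    configuration value describing the target, not the host):

        #include <stdint.h>
        #include <stdio.h>

        #define TARGET_UINT_BITS 16   /* assumed target property */

        /* Fold an unsigned addition the way the target would: compute in
           a wide host type, then reduce modulo 2^TARGET_UINT_BITS. */
        static uint64_t fold_uadd(uint64_t a, uint64_t b)
        {
            uint64_t mask = (TARGET_UINT_BITS >= 64)
                                ? UINT64_MAX
                                : (((uint64_t)1 << TARGET_UINT_BITS) - 1);
            return (a + b) & mask;
        }

        int main(void)
        {
            /* Prints 0, matching 65535u + 1u on a 16-bit-int target. */
            printf("%llu\n", (unsigned long long)fold_uadd(65535, 1));
            return 0;
        }
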
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.lang.c on Fri Aug 29 21:54:16 2025
    From Newsgroup: comp.lang.c

    In article <20250829131023.130@kylheku.com>,
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:
    On 2025-08-29, Thiago Adams <thiago.adams@gmail.com> wrote:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs
    the code. What happens in this case?

    A constant expression must be evaluated in the way that would happen
    if it were translated to code on the target machine.

    Thus, if necessary, the features of the target machine's arithmetic must
    be simulated on the build machine.

    (Modulo issues not relevant to the debate: if the expression has
    ambiguous evaluation orders that affect the result, or undefined
    behavior, those don't have to play out the same way under different
    modes of processing in the same implementation.)

    The solution I can think of is emulation when evaluating constant
    expressions. But I don't know if any compiler does it this way.

    They have to; if a constant-folding optimization produces a different
    result (in an expression which has no such issues), then that is an
    incorrect optimization.

    GCC uses arbitrary-precision libraries (GNU GMP for integer, and GNU
    MPFR for floating-point), which are in part for this issue, I think.

    Dealing with integer arithmetic, boolean expressions, character
    manipulation, and so on is often pretty straightforward to handle
    for a given target system at compile time. The thing that throws a
    lot of systems off is floating point: there exist FPUs with
    different hardware characteristics, even within a single
    architectural family, that can yield different results in a way
    that is simply unknowable until runtime. A classic example is
    hardware that uses 80-bit internal representations for
    double-precision FP arithmetic, versus a 64-bit representation. In
    that world, unless you know precisely what microarchitecture the
    program is going to run on, you just can't make a "correct"
    decision at compile time at all in the general case.
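
    A small, hedged illustration of that 80-bit-versus-64-bit effect
    (whether the two values actually differ depends on the compiler,
    flags, and FPU; with x87 extended-precision intermediates the
    run-time result can be 1 where a strict 64-bit fold gives 0):

        #include <stdio.h>

        int main(void)
        {
            /* Folded by the compiler, typically to a strict 64-bit
               double result: 1e16 + 1.0 rounds back to 1e16, giving 0. */
            double folded = 1e16 + 1.0 - 1e16;

            /* volatile defeats folding, so this runs on the target FPU;
               with 80-bit intermediates the + 1.0 can survive. */
            volatile double big = 1e16;
            double runtime = big + 1.0 - big;

            printf("folded = %g, runtime = %g\n", folded, runtime);
            return 0;
        }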

    - Dan C.

  • From James Kuyper@jameskuyper@alumni.caltech.edu to comp.lang.c on Fri Aug 29 20:20:14 2025
    From Newsgroup: comp.lang.c

    On 2025-08-29 16:19, Kaz Kylheku wrote:
    On 2025-08-29, Thiago Adams <thiago.adams@gmail.com> wrote:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs
    the code. What happens in this case?

    A constant expression must be evaluated in the way that would happen
    if it were translated to code on the target machine.

    Thus, if necessary, the features of the target machine's arithmetic must
    be simulated on the build machine.
    The solution I can think of is emulation when evaluating constant
    expressions. But I don't know if any compiler does it this way.

    They have to; if a constant-folding optimization produces a different
    result (in an expression which has no such issues), then that is an
    incorrect optimization.

    Emulation is necessary only if the value of the constant expression
    changes which code is generated. If the value is simply used in
    calculations, then it can be calculated at run time on the target
    machine, as if done before the start of main().
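
    A hypothetical illustration of that split (WRAPPED and TABLE_SIZE
    are made-up names): where the constant steers what gets compiled,
    it must be folded with target semantics; a plain stored value
    could, in principle, be computed on the target itself.

        #include <stdio.h>

        #define WRAPPED (65535u + 1u)   /* 0u on a 16-bit-int target */

        /* Used to select code and to size an array: this must be
           evaluated at translation time with the target's arithmetic. */
        #if WRAPPED == 0u
        #define TABLE_SIZE 1u
        #else
        #define TABLE_SIZE WRAPPED
        #endif

        static unsigned int table[TABLE_SIZE];

        /* Used only as a stored value: a compiler could arrange for
           this to be computed on the target before main() starts. */
        static unsigned int stored = WRAPPED;

        int main(void)
        {
            printf("%zu elements, stored = %u\n",
                   sizeof table / sizeof table[0], stored);
            return 0;
        }
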
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Sat Aug 30 01:00:35 2025
    From Newsgroup: comp.lang.c

    On 2025-08-30, James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
    On 2025-08-29 16:19, Kaz Kylheku wrote:
    On 2025-08-29, Thiago Adams <thiago.adams@gmail.com> wrote:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs
    the code. What happens in this case?

    A constant expression must be evaluated in the way that would happen
    if it were translated to code on the target machine.

    Thus, if necessary, the features of the target machine's arithmetic must
    be simulated on the build machine.
    The solution I can think of is emulation when evaluating constant
    expressions. But I don't know if any compiler does it this way.

    They have to; if a constant-folding optimization produces a different
    result (in an expression which has no such issues), then that is an
    incorrect optimization.

    Emulation is necessary only if the value of the constant expression
    changes which code is generated. If the value is simply used in
    calculations, then it can be calculated at run time on the target
    machine, as if done before the start of main().

    But since the former situation occurs regularly (e.g. dead-code
    elimination based on conditionals with constant test expressions),
    you will need to implement that target evaluation strategy anyway.
    Then, if you have it, why wouldn't you just use it for all constant
    expressions, rather than make arrangements for load-time
    initialization?
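
    A minimal sketch of the kind of constant test meant here (the dead
    branch is assumed to be removed entirely, which already requires
    target-correct folding of the controlling expression):

        #include <stdio.h>

        int main(void)
        {
            /* The controlling expression is a constant expression, so
               the compiler folds it with target semantics and can drop
               the branch that cannot be taken. */
            if (65535u + 1u == 0u)
                puts("unsigned int is 16 bits wide");
            else
                puts("unsigned int is wider than 16 bits");
            return 0;
        }
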
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Mon Sep 1 10:10:17 2025
    From Newsgroup: comp.lang.c

    On 29/08/2025 22:10, Thiago Adams wrote:
    On 29/08/2025 16:54, Keith Thompson wrote:
    Thiago Adams <thiago.adams@gmail.com> writes:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs
    the code. What happens in this case?

    The solution I can think of is emulation when evaluating constant
    expressions. But I don't know if any compiler does it this way.

    For example, 65535u + 1u will evaluate to 0u if the target system
    has 16-bit int, 65536u otherwise. (I picked an example that
    doesn't depend on UINT_MAX or any other macros defined in the
    standard library.)

    Yes, this is the kind of example I had in mind.

    Any compiler, cross- or not, is required to evaluate constant
    expressions correctly for the target system. Whether they do so
    by some sort of emulation is an implementation detail.

    Even a non-cross compiler might not be implemented in exactly
    the same language and configuration as the code it's compiling,
    so evaluating constant expressions locally might not work correctly.

    So in theory it has to be the same result. This may be hard to achieve.


    Yes, it can be hard to achieve in some cases. For things like
    integer arithmetic, it's no serious challenge - floating point is
    the big challenge when it comes to getting the details correct when
    the host and the target are different. (And even if the compiler is
    native, different floating point options can lead to significantly
    different results.)

    Compilers have to make sure they can do compile-time evaluation that is bit-perfect to run-time evaluation before they can use it as an
    optimisation - any risk of an error is a bug in the compiler. I don't
    know about other compilers, but gcc has a /huge/ library that is used to simulate floating point on a wide range of targets and options,
    precisely so that it can get this right.
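
    One hedged way to probe that bit-perfect requirement on a given
    toolchain and set of options (volatile is used only to force one of
    the two evaluations to happen at run time):

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            double folded = 0.1 + 0.2;    /* evaluated by the compiler */

            volatile double a = 0.1, b = 0.2;
            double runtime = a + b;       /* evaluated on the target   */

            /* Compare bit patterns, not just values. */
            printf("compile-time fold %s run-time result\n",
                   memcmp(&folded, &runtime, sizeof folded) == 0
                       ? "matches" : "differs from");
            return 0;
        }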


