• Re: else ladders practice

    From Janis Papanagnou@21:1/5 to Tim Rentsch on Sat Nov 30 03:46:18 2024
    On 30.11.2024 00:29, Tim Rentsch wrote:
    Bart <bc@freeuk.com> writes:
    On 28/11/2024 17:28, Janis Papanagnou wrote:

    But we're speaking about compilation times. [...]

    You can make a similar argument about turning on the light switch
    when entering a room. Flicking light switches is not something you
    need to do every few seconds, but if the light took 5 seconds to
    come on (or even one second), it would be incredibly annoying.

    This analogy sounds like something a defense attorney would say who
    has a client that everyone knows is guilty.

Intentionally or not, it's funny to respond to an analogy with an
analogy. :-}

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Janis Papanagnou on Fri Nov 29 20:40:11 2024
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:

    On 30.11.2024 00:29, Tim Rentsch wrote:

    Bart <bc@freeuk.com> writes:

    On 28/11/2024 17:28, Janis Papanagnou wrote:

    But we're speaking about compilation times. [...]

    You can make a similar argument about turning on the light switch
    when entering a room. Flicking light switches is not something you
    need to do every few seconds, but if the light took 5 seconds to
    come on (or even one second), it would be incredibly annoying.

    This analogy sounds like something a defense attorney would say who
    has a client that everyone knows is guilty.

Intentionally or not, it's funny to respond to an analogy with an
analogy. :-}

    My statement was not an analogy. Similar is not the same as
    analogous.

  • From Tim Rentsch@21:1/5 to Bart on Fri Nov 29 21:03:17 2024
    Bart <bc@freeuk.com> writes:

    On 28/11/2024 05:18, Tim Rentsch wrote:

    Bart <bc@freeuk.com> writes:

    On 26/11/2024 12:29, Tim Rentsch wrote:

    Bart <bc@freeuk.com> writes:

    On 25/11/2024 18:49, Tim Rentsch wrote:

    Bart <bc@freeuk.com> writes:

    It's funny how nobody seems to care about the speed of
    compilers (which can vary by 100:1), but for the generated
    programs, the 2:1 speedup you might get by optimising it is
    vital!

    I think most people would rather take this path (these times
    are actual measured times of a recently written program):

    compile time: 1 second
    program run time: ~7 hours

    than this path (extrapolated using the ratios mentioned above):

    compile time: 0.01 second
    program run time: ~14 hours

    I'm trying to think of some computationally intensive app that
    would run non-stop for several hours without interaction.

    The conclusion is the same whether the program run time
    is 7 hours, 7 minutes, or 7 seconds.

    Funny you should mention 7 seconds. If I'm working on single
    source file called sql.c for example, that's how long it takes for
    gcc to create an unoptimised executable:

    c:\cx>tm gcc sql.c #250Kloc file
    TM: 7.38

    Your example illustrates my point. Even 250 thousand lines of
    source takes only a few seconds to compile. Only people nutty
    enough to have single source files over 25,000 lines or so --
    over 400 pages at 60 lines/page! -- are so obsessed about
    compilation speed. And of course you picked the farthest-most
    outlier as your example, grossly misrepresenting any sort of
    average or typical case.

    It's not atypical for me! [...]

    I can easily accept that it might be typical for you. My
    point is that it is not typical for almost everyone else.

  • From Tim Rentsch@21:1/5 to Michael S on Fri Nov 29 21:25:15 2024
    Michael S <already5chosen@yahoo.com> writes:

    On Wed, 27 Nov 2024 21:18:09 -0800
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Bart <bc@freeuk.com> writes:

    On 26/11/2024 12:29, Tim Rentsch wrote:

    Bart <bc@freeuk.com> writes:

    On 25/11/2024 18:49, Tim Rentsch wrote:

    Bart <bc@freeuk.com> writes:

    It's funny how nobody seems to care about the speed of
    compilers (which can vary by 100:1), but for the generated
    programs, the 2:1 speedup you might get by optimising it is
    vital!

    I think most people would rather take this path (these times
    are actual measured times of a recently written program):

    compile time: 1 second
    program run time: ~7 hours

    than this path (extrapolated using the ratios mentioned above):

    compile time: 0.01 second
    program run time: ~14 hours

    I'm trying to think of some computationally intensive app that
    would run non-stop for several hours without interaction.

    The conclusion is the same whether the program run time
    is 7 hours, 7 minutes, or 7 seconds.

    Funny you should mention 7 seconds. If I'm working on single
    source file called sql.c for example, that's how long it takes for
    gcc to create an unoptimised executable:

    c:\cx>tm gcc sql.c #250Kloc file
    TM: 7.38

    Your example illustrates my point. Even 250 thousand lines of
    source takes only a few seconds to compile. Only people nutty
    enough to have single source files over 25,000 lines or so --
    over 400 pages at 60 lines/page! -- are so obsessed about
    compilation speed.

My impression was that Bart is talking about machine-generated code.
For machine-generated code 250Kloc is not too much. I would think
that in the field of compiled-code HDL simulation people are interested
in compilation of as big sources as they can afford.

    Sure. But Bart is implicitly saying that such cases make up the
    bulk of C compilations, whereas in fact the reverse is true. People
    don't care about Bart's complaint because the circumstances of his
    examples almost never apply to them. And he must know this, even
    though he tries to pretend he doesn't.

    And of course you picked the farthest-most
    outlier as your example, grossly misrepresenting any sort of
    average or typical case.

I remember having a much shorter file (the core of a 3rd-party TCP protocol implementation) where compilation with gcc took several seconds.

Looked at it now - only 22 Klocs.
Text size in .o - 34KB.
Compilation time on a much newer computer than the one I remembered, with
a good SATA SSD and a 4 GHz Intel Haswell CPU - a little over 1 sec. That
was with gcc 4.7.3. I would guess that if I tried gcc 13 it would be 1.5 to 2
times longer.
So, in terms of Kloc/sec it seems to me that the time reported by Bart
is not outrageous. Indeed, gcc is very slow when compiling any source several times above average size.
In this particular case I cannot compare gcc to an alternative, because
for the given target (Altera Nios2) there are no alternatives.

    I'm not disputing his ratios on compilation speeds. I implicitly
    agreed to them in my earlier remarks. The point is that the
    absolute times are so small that most people don't care. For
    some reason I can't fathom Bart does care, and apparently cannot
    understand why most other people do not care. My conclusion is
    that Bart is either quite immature or a narcissist. I have tried
    to explain to him why other people think differently than he does,
    but it seems he isn't really interested in having it explained.
    Oh well, not my problem.

  • From Janis Papanagnou@21:1/5 to Tim Rentsch on Sat Nov 30 11:00:30 2024
    On 30.11.2024 05:40, Tim Rentsch wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:

    On 30.11.2024 00:29, Tim Rentsch wrote:

    Bart <bc@freeuk.com> writes:

    On 28/11/2024 17:28, Janis Papanagnou wrote:

    But we're speaking about compilation times. [...]

    You can make a similar argument about turning on the light switch
    when entering a room. Flicking light switches is not something you
    need to do every few seconds, but if the light took 5 seconds to
    come on (or even one second), it would be incredibly annoying.

    This analogy sounds like something a defense attorney would say who
    has a client that everyone knows is guilty.

Intentionally or not, it's funny to respond to an analogy with an
analogy. :-}

    My statement was not an analogy. Similar is not the same as
    analogous.

It's of course (and obviously) not the same; it's just a
similar term, where the semantics of both terms overlap.

(Not sure why you even bothered to reply and nit-pick here.
But with your habit you seem to have just missed the point:
the comparison of your reply-type with Bart's argumentation.)

    Janis

  • From Bart@21:1/5 to Tim Rentsch on Sat Nov 30 11:26:41 2024
    On 30/11/2024 05:25, Tim Rentsch wrote:
    Michael S <already5chosen@yahoo.com> writes:

    On Wed, 27 Nov 2024 21:18:09 -0800
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Bart <bc@freeuk.com> writes:

    On 26/11/2024 12:29, Tim Rentsch wrote:

    Bart <bc@freeuk.com> writes:

    On 25/11/2024 18:49, Tim Rentsch wrote:

    Bart <bc@freeuk.com> writes:

    It's funny how nobody seems to care about the speed of
    compilers (which can vary by 100:1), but for the generated
    programs, the 2:1 speedup you might get by optimising it is
    vital!

    I think most people would rather take this path (these times
    are actual measured times of a recently written program):

    compile time: 1 second
    program run time: ~7 hours

    than this path (extrapolated using the ratios mentioned above):

    compile time: 0.01 second
    program run time: ~14 hours

    I'm trying to think of some computationally intensive app that
    would run non-stop for several hours without interaction.

    The conclusion is the same whether the program run time
    is 7 hours, 7 minutes, or 7 seconds.

    Funny you should mention 7 seconds. If I'm working on single
    source file called sql.c for example, that's how long it takes for
    gcc to create an unoptimised executable:

    c:\cx>tm gcc sql.c #250Kloc file
    TM: 7.38

    Your example illustrates my point. Even 250 thousand lines of
    source takes only a few seconds to compile. Only people nutty
    enough to have single source files over 25,000 lines or so --
    over 400 pages at 60 lines/page! -- are so obsessed about
    compilation speed.

    My impression was that Bart is talking about machine-generated code.
    For machine generated code 250Kloc is not too much. I would think
    that in field of compiled-code HDL simulation people are interested
in compilation of as big sources as they can afford.

    Sure. But Bart is implicitly saying that such cases make up the
    bulk of C compilations, whereas in fact the reverse is true. People
    don't care about Bart's complaint because the circumstances of his
    examples almost never apply to them. And he must know this, even
    though he tries to pretend he doesn't.

    And of course you picked the farthest-most
    outlier as your example, grossly misrepresenting any sort of
    average or typical case.

    I remember having much shorter file (core of 3rd-party TCP protocol
    implementation) where compilation with gcc took several seconds.

    Looked at it now - only 22 Klocs.
    Text size in .o - 34KB.
    Compilation time on much newer computer than the one I remembered, with
    good SATA SSD and 4 GHz Intel Haswell CPU - a little over 1 sec. That
    with gcc 4.7.3. I would guess that if I try gcc13 it would be 1.5 to 2
    times longer.
So, in terms of Kloc/sec it seems to me that the time reported by Bart
is not outrageous. Indeed, gcc is very slow when compiling any source
several times above average size.
    In this particular case I can not compare gcc to alternative, because
    for a given target (Altera Nios2) there are no alternatives.

    I'm not disputing his ratios on compilation speeds. I implicitly
    agreed to them in my earlier remarks. The point is that the
    absolute times are so small that most people don't care. For
    some reason I can't fathom Bart does care, and apparently cannot
    understand why most other people do not care. My conclusion is
    that Bart is either quite immature or a narcissist. I have tried
    to explain to him why other people think differently than he does,
    but it seems he isn't really interested in having it explained.
    Oh well, not my problem.

EVERYBODY cares about compilation speeds. Except in this newsgroup, where
people try to pretend that it's irrelevant.

    But then at the same time, they strive to keep those compile-times small:

    * By using tools that have themselves been optimised to reduce their
    runtimes, and where considerable resources have been expended to get the
    best possible code, which naturally also benefits the tool

    * By using the fastest possible hardware

    * By trying to do parallel builds across multiple cores

    * By organising source code into artificially small modules so that recompilation of just one module is quicker. So, relying on independent compilation.

    * By going to considerable trouble to define inter-dependencies between modules, so that a make system can AVOID recompiling modules. (Why on
    earth would it need to? Oh, because it would be slower!)

    * By using development techniques involving thinking deeply about what
    to change, to avoid a costly rebuild.

    Etc.

All instead of relying on raw compilation speed, with which a lot of those
points would become less relevant.

    My conclusion is
    that Bart is either quite immature or a narcissist.

I'd never bothered much about compile-speed in the past, except to
ensure that an edit-run cycle was usually a fraction of a second; when
I had to compile all modules of a project it might have been a
few seconds.

    My tools were naturally fast, even though unoptimised, through being
    small and simple. It's only recently that I took advantage of that
    through developing whole-program compilers.

    This normally needs language support (eg. a decent module scheme).
    Applying it to C is harder (if 50 modules of a project each use some
    huge, 0.5Mloc header, then it means processing it 50 times).

I think it is possible without changing the language, but decided it
wasn't worth the effort. I don't use it enough myself, and nobody else
seems to care.

  • From Rosario19@21:1/5 to Dan Purgert on Sat Nov 30 17:35:42 2024
    On Wed, 20 Nov 2024 12:31:35 -0000 (UTC), Dan Purgert wrote:

    On 2024-11-16, Stefan Ram wrote:
    Dan Purgert <dan@djph.net> wrote or quoted:
    if (n==0) { printf ("n: %u\n",n); n++;}
    if (n==1) { printf ("n: %u\n",n); n++;}
    if (n==2) { printf ("n: %u\n",n); n++;}
    if (n==3) { printf ("n: %u\n",n); n++;}
    if (n==4) { printf ("n: %u\n",n); n++;}
    printf ("all if completed, n=%u\n",n);

    above should be equivalent to this

    for(;n>=0&&n<5;++n) printf ("n: %u\n",n);
    printf ("all if completed, n=%u\n",n);


    My bad if the following instruction structure's already been hashed
    out in this thread, but I haven't been following the whole convo!

    I honestly lost the plot ages ago; not sure if it was either!


    In my C 101 classes, after we've covered "if" and "else",
    I always throw this program up on the screen and hit the newbies
    with this curveball: "What's this bad boy going to spit out?".

    Segfaults? :D


    Well, it's a blue moon when someone nails it. Most of them fall
    for my little gotcha hook, line, and sinker.

    #include <stdio.h>

    const char * english( int const n )
    { const char * result;
    if( n == 0 )result = "zero";
    if( n == 1 )result = "one";
    if( n == 2 )result = "two";
    if( n == 3 )result = "three";
    else result = "four";
    return result; }

    void print_english( int const n )
    { printf( "%s\n", english( n )); }

    int main( void )
    { print_english( 0 );
    print_english( 1 );
    print_english( 2 );
    print_english( 3 );
    print_english( 4 ); }

    oooh, that's way better at making a point of the hazard than mine was.

... almost needed to engage my rubber duckie, before I realized I was mentally auto-correcting the 'english()' function while reading it.

  • From Tim Rentsch@21:1/5 to Janis Papanagnou on Sat Nov 30 14:07:49 2024
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:

    On 16.11.2024 16:14, James Kuyper wrote:

    On 11/16/24 04:42, Stefan Ram wrote:
    ...

    [...]

    #include <stdio.h>

    const char * english( int const n )
    { const char * result;
    if( n == 0 )result = "zero";
    if( n == 1 )result = "one";
    if( n == 2 )result = "two";
    if( n == 3 )result = "three";
    else result = "four";
    return result; }

That's indeed a nice example, where you get fooled by the treacherous
"trustiness" of formatting[*]. - In syntax we trust! [**]

    Misleading formatting is the lesser of two problems. A more
    significant bad design choice is writing in an imperative
    style rather than a functional style.

  • From Waldek Hebisch@21:1/5 to Bart on Sun Dec 1 13:04:30 2024
    Bart <bc@freeuk.com> wrote:
    On 28/11/2024 12:37, Michael S wrote:
    On Wed, 27 Nov 2024 21:18:09 -0800
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:


    c:\cx>tm gcc sql.c #250Kloc file
    TM: 7.38

    Your example illustrates my point. Even 250 thousand lines of
    source takes only a few seconds to compile. Only people nutty
    enough to have single source files over 25,000 lines or so --
    over 400 pages at 60 lines/page! -- are so obsessed about
    compilation speed.

    My impression was that Bart is talking about machine-generated code.
    For machine generated code 250Kloc is not too much.

    This file mostly comprises sqlite3.c which is a machine-generated amalgamation of some 100 actual C files.

    You wouldn't normally do development with that version, but in my
    scenario, where I was trying to find out why the version built with my compiler was buggy, I might try adding debug info to it then building
    with a working compiler (eg. gcc) to compare with.

Even in the context of developing a compiler I would not blindly run
many compilations of a large file. At the first stage I would debug the
compiled program, to find out what is wrong with it. That normally
involves several runs of the same executable. A possible trick is
to compile each file separately and link the files in various
combinations, some compiled by gcc, some by my compiler.
Normally that would localize the error to a single file.

After that I would try to minimize the testcase, removing code which
does not contribute to the bug. That involves several compilations
of files with quickly decreasing sizes.

    Tim isn't asking the right questions (or any questions!). WHY does gcc
    take so long to generate indifferent code when the task can clearly be
    done at least a magnitude faster?

The simple answer is: users tolerate long compile times. If users
abandoned 'gcc' for some other compiler due to long compile times,
then 'gcc' developers would notice. But the opposite has happened:
'llvm' was significantly smaller and faster but produced slower code.
'llvm' developers improved optimizations, in the process making
their compiler bigger and slower.

    You need to improve your propaganda for faster C compilers...

    --
    Waldek Hebisch

  • From Waldek Hebisch@21:1/5 to Stefan Ram on Sun Dec 1 12:41:03 2024
    Stefan Ram <ram@zedat.fu-berlin.de> wrote:

    My bad if the following instruction structure's already been hashed
    out in this thread, but I haven't been following the whole convo!

    In my C 101 classes, after we've covered "if" and "else",
    I always throw this program up on the screen and hit the newbies
    with this curveball: "What's this bad boy going to spit out?".

    Well, it's a blue moon when someone nails it. Most of them fall
    for my little gotcha hook, line, and sinker.

    #include <stdio.h>

    const char * english( int const n )
    { const char * result;
    if( n == 0 )result = "zero";
    if( n == 1 )result = "one";
    if( n == 2 )result = "two";
    if( n == 3 )result = "three";
    else result = "four";
    return result; }

    void print_english( int const n )
    { printf( "%s\n", english( n )); }

    int main( void )
    { print_english( 0 );
    print_english( 1 );
    print_english( 2 );
    print_english( 3 );
    print_english( 4 ); }


    That breaks two rules:
    - instructions conditioned by 'if' should have braces,
    - when we have the result we should return it immediately.

Once those are fixed, the code works as expected...

    --
    Waldek Hebisch

  • From Waldek Hebisch@21:1/5 to Bart on Sun Dec 1 13:19:54 2024
    Bart <bc@freeuk.com> wrote:
    On 30/11/2024 05:25, Tim Rentsch wrote:
    Michael S <already5chosen@yahoo.com> writes:

    On Wed, 27 Nov 2024 21:18:09 -0800
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Bart <bc@freeuk.com> writes:

    On 26/11/2024 12:29, Tim Rentsch wrote:

    Bart <bc@freeuk.com> writes:

    On 25/11/2024 18:49, Tim Rentsch wrote:

    Bart <bc@freeuk.com> writes:

    It's funny how nobody seems to care about the speed of
    compilers (which can vary by 100:1), but for the generated
    programs, the 2:1 speedup you might get by optimising it is
    vital!

    I think most people would rather take this path (these times
    are actual measured times of a recently written program):

    compile time: 1 second
    program run time: ~7 hours

than this path (extrapolated using the ratios mentioned above):
    compile time: 0.01 second
    program run time: ~14 hours

    I'm trying to think of some computationally intensive app that
    would run non-stop for several hours without interaction.

    The conclusion is the same whether the program run time
    is 7 hours, 7 minutes, or 7 seconds.

    Funny you should mention 7 seconds. If I'm working on single
    source file called sql.c for example, that's how long it takes for
    gcc to create an unoptimised executable:

    c:\cx>tm gcc sql.c #250Kloc file
    TM: 7.38

    Your example illustrates my point. Even 250 thousand lines of
    source takes only a few seconds to compile. Only people nutty
    enough to have single source files over 25,000 lines or so --
    over 400 pages at 60 lines/page! -- are so obsessed about
    compilation speed.

    My impression was that Bart is talking about machine-generated code.
    For machine generated code 250Kloc is not too much. I would think
    that in field of compiled-code HDL simulation people are interested
in compilation of as big sources as they can afford.

    Sure. But Bart is implicitly saying that such cases make up the
    bulk of C compilations, whereas in fact the reverse is true. People
    don't care about Bart's complaint because the circumstances of his
    examples almost never apply to them. And he must know this, even
    though he tries to pretend he doesn't.

    And of course you picked the farthest-most
    outlier as your example, grossly misrepresenting any sort of
    average or typical case.

    I remember having much shorter file (core of 3rd-party TCP protocol
    implementation) where compilation with gcc took several seconds.

    Looked at it now - only 22 Klocs.
    Text size in .o - 34KB.
    Compilation time on much newer computer than the one I remembered, with
    good SATA SSD and 4 GHz Intel Haswell CPU - a little over 1 sec. That
    with gcc 4.7.3. I would guess that if I try gcc13 it would be 1.5 to 2
    times longer.
So, in terms of Kloc/sec it seems to me that the time reported by Bart
is not outrageous. Indeed, gcc is very slow when compiling any source
several times above average size.
    In this particular case I can not compare gcc to alternative, because
    for a given target (Altera Nios2) there are no alternatives.

    I'm not disputing his ratios on compilation speeds. I implicitly
    agreed to them in my earlier remarks. The point is that the
    absolute times are so small that most people don't care. For
    some reason I can't fathom Bart does care, and apparently cannot
    understand why most other people do not care. My conclusion is
    that Bart is either quite immature or a narcissist. I have tried
    to explain to him why other people think differently than he does,
    but it seems he isn't really interested in having it explained.
    Oh well, not my problem.

EVERYBODY cares about compilation speeds. Except in this newsgroup, where people try to pretend that it's irrelevant.

    But then at the same time, they strive to keep those compile-times small:

    * By using tools that have themselves been optimised to reduce their runtimes, and where considerable resources have been expended to get the
    best possible code, which naturally also benefits the tool

    * By using the fastest possible hardware

    * By trying to do parallel builds across multiple cores

    * By organising source code into artificially small modules so that recompilation of just one module is quicker. So, relying on independent compilation.

    * By going to considerable trouble to define inter-dependencies between modules, so that a make system can AVOID recompiling modules. (Why on
    earth would it need to? Oh, because it would be slower!)

    * By using development techniques involving thinking deeply about what
    to change, to avoid a costly rebuild.

    Etc.

    Those methods are effective and work. And one gets optimized
    binaries as a result.

All instead of relying on raw compilation speed, with which a lot of those
points would become less relevant.

If all other factors were the same, then using a "better" compiler
would be nice. But other factors are not equal. You basically
advocate that people give up features that they want/need to
allow for simpler compilers; this is not going to happen.

    --
    Waldek Hebisch

  • From Bart@21:1/5 to Waldek Hebisch on Sun Dec 1 15:13:35 2024
    On 01/12/2024 13:04, Waldek Hebisch wrote:
    Bart <bc@freeuk.com> wrote:
    On 28/11/2024 12:37, Michael S wrote:
    On Wed, 27 Nov 2024 21:18:09 -0800
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:


    c:\cx>tm gcc sql.c #250Kloc file
    TM: 7.38

    Your example illustrates my point. Even 250 thousand lines of
    source takes only a few seconds to compile. Only people nutty
    enough to have single source files over 25,000 lines or so --
    over 400 pages at 60 lines/page! -- are so obsessed about
    compilation speed.

    My impression was that Bart is talking about machine-generated code.
    For machine generated code 250Kloc is not too much.

    This file mostly comprises sqlite3.c which is a machine-generated
    amalgamation of some 100 actual C files.

    You wouldn't normally do development with that version, but in my
    scenario, where I was trying to find out why the version built with my
    compiler was buggy, I might try adding debug info to it then building
    with a working compiler (eg. gcc) to compare with.

Even in the context of developing a compiler I would not blindly run
many compilations of a large file.

Difficult bugs always occur in larger codebases, but with C these are in a
language that I can't navigate, for programs which are not mine, and
which tend to be badly written, bristling with typedefs and macros.

    It could take a week to track down where the error might be ...

    At first stage I would debug
    compiled program, to find out what is wrong with it.

    ... within the C program. Except there's nothing wrong with the C
    program! It works fine with a working compiler.

The problem will be in the generated code, so in an entirely different
program. So normal debugging tools are of less use when several sets of
source code are involved, in different languages, or when the error occurs
in the second-generation version of either the self-hosted tool, or the
program under test if it is to do with languages.

(For example, I got tcc.c working at one point. My generated tcc.exe
could compile tcc.c, but that second-generation tcc.exe didn't work.)


    After that I would try to minimize the testcase, removing code which
    do not contribute to the bug.

Again, there is nothing wrong with the C program; the bug is in the code
generated for it. The bug can be very subtle, but it usually turns out
to be something silly.

Removing code from 10s of 1000s of lines (or 250Kloc for sql) is not
practical. Still, the aim is to isolate some code which can be used to
recreate the issue in a smaller program.

    Debugging can involve comparing two versions, one working, the other
    not, looking for differences. And here there may be tracking statements
    added.

    If the only working version is via gcc, then that's bad news because it
    makes the process even more of a PITA.

I added an interpreter mode to my IL, because I assumed that would give a
solid, reliable reference implementation to compare against.

It turned out to be even more buggy than the generated native code!

(One problem was to do with my stdarg.h header, which implements VARARGS
used in function definitions. It assumes the stack grows downwards. In
my interpreter, it grows upwards!)

That involves several compilations
of files with quickly decreasing sizes.

    Tim isn't asking the right questions (or any questions!). WHY does gcc
    take so long to generate indifferent code when the task can clearly be
    done at least a magnitude faster?

    The simple answer is: users tolerate long compile time. If users
    abandoned 'gcc' to some other compiler due to long compile time,
    then 'gcc' developers would notice.

    People use gcc. They come to depend on its features, or they might use
    (perhaps unknowingly) some extensions. On Windows, gcc includes some
    headers and libraries that belong to Linux, but other compilers don't
    provide them.

    The result is that if they were to switch to a smaller, faster compiler,
    their program may not work.

    They'd have to use it from the start. But then they may want to use
    libraries which only work with gcc ...


    You need to improve your propaganda for faster C compilers...

    I actually don't know why I care. I get the benefit of my fast tools
    every day; they're a joy to use. So I'm not bothered that other people
    are that tolerant of slow, cumbersome build systems.

    But then, people in this group do like to belittle small, fast products
    (tcc for example as well as my stuff), and that's where it gets annoying.

    So, how long to build LLVM again? It used to be hours. Here's my take on
    it being built from scratch:

    c:\px>tm mm pc
    Compiling pc.m to pc.exe
    TM: 0.08

    This standalone program takes a source file containing an IL program
    rendered as text. It can create EXE, or run it or interpret it.

    Let's try it out:

    c:\cx>cc -p lua # compile a C program to IL
    Compiling lua.c to lua.pcl

    c:\cx>\px\pc -r lua fib.lua # Now compile and run it in-memory
    Processing lua.pcl to lua.(run)
    Running: fib.lua
    1 1
    2 1
    3 2
    4 3
    5 5
    6 8
    7 13
    ...

    Or I can interpret it:

    c:\cx>\px\pc -i lua fib.lua
    Processing lua.pcl to lua.(int)
    Running: fib.lua
    1 1
    ...

    All that from a product that took 80ms to build and comprises a
    self-contained 180KB executable.

    If nobody here can appreciate the benefits of having such a baseline
    product, then there's nothing I can do about that.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Waldek Hebisch on Sun Dec 1 16:34:24 2024
    On 01.12.2024 13:41, Waldek Hebisch wrote:
    Stefan Ram <ram@zedat.fu-berlin.de> wrote:

    My bad if the following instruction structure's already been hashed
    out in this thread, but I haven't been following the whole convo!

    In my C 101 classes, after we've covered "if" and "else",
    I always throw this program up on the screen and hit the newbies
    with this curveball: "What's this bad boy going to spit out?".

    Well, it's a blue moon when someone nails it. Most of them fall
    for my little gotcha hook, line, and sinker.

    #include <stdio.h>

    const char * english( int const n )
    { const char * result;
    if( n == 0 )result = "zero";
    if( n == 1 )result = "one";
    if( n == 2 )result = "two";
    if( n == 3 )result = "three";
    else result = "four";
    return result; }

    void print_english( int const n )
    { printf( "%s\n", english( n )); }

    int main( void )
    { print_english( 0 );
    print_english( 1 );
    print_english( 2 );
    print_english( 3 );
    print_english( 4 ); }


    That breaks two rules:
    - instructions conditioned by 'if' should have braces,

    I suppose you don't mean

    if (n == value) { result = string; }
    else { result = other; }

    which I'd think doesn't change anything. - So what is it?

    Actually, you should just add explicit 'else' to fix the problem.
    (Here there's no need to fiddle with spurious braces, I'd say.)

    - when we have the result we should return it immediately.

    This would suffice to fix it, wouldn't it?


    Once those are fixed code works as expected...

    I find this answer - not wrong, but - problematic for two reasons.
    There's no accepted "general rules" that could get "broken"; it's
    just rules that serve in given languages and application contexts.
    And they may conflict with other "rules" that have been set up to
    streamline code, make it safer, or whatever.

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Waldek Hebisch on Sun Dec 1 16:14:36 2024
    antispam@fricas.org (Waldek Hebisch) writes:
    Stefan Ram <ram@zedat.fu-berlin.de> wrote:

    My bad if the following instruction structure's already been hashed
    out in this thread, but I haven't been following the whole convo!

    In my C 101 classes, after we've covered "if" and "else",
    I always throw this program up on the screen and hit the newbies
    with this curveball: "What's this bad boy going to spit out?".

    Well, it's a blue moon when someone nails it. Most of them fall
    for my little gotcha hook, line, and sinker.

    #include <stdio.h>

    const char * english( int const n )
    { const char * result;
    if( n == 0 )result = "zero";
    if( n == 1 )result = "one";
    if( n == 2 )result = "two";
    if( n == 3 )result = "three";
    else result = "four";
    return result; }

    void print_english( int const n )
    { printf( "%s\n", english( n )); }

    int main( void )
    { print_english( 0 );
    print_english( 1 );
    print_english( 2 );
    print_english( 3 );
    print_english( 4 ); }


    That breaks two rules:
    - instructions conditioned by 'if' should have braces,
    - when we have the result we should return it immediately.

    Three rules
    - don't do something at runtime if you can do it at compile time.

    const static char *english_numbers[] = { "zero", "one", "two", "three", "four" };
    const static size_t num_english_numbers = sizeof(english_numbers)/sizeof(english_numbers[0]);

    const char *english(const int n)
    {
    return (n < num_english_numbers) ? english_numbers[n] : "Out-of-range";
    }

    I was doing a code review just last week where a junior programmer had
    to convert a small integer (0..5) to a text label, so the programmer creates a function to return the corresponding label. That function creates a
    std::map and initializes it with the set of text labels each time the function is called, just to discard the map after looking up the argument.

    Needless to say, it didn't pass review.




    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Waldek Hebisch@21:1/5 to Janis Papanagnou on Sun Dec 1 22:23:55 2024
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    On 01.12.2024 13:41, Waldek Hebisch wrote:
    Stefan Ram <ram@zedat.fu-berlin.de> wrote:

    My bad if the following instruction structure's already been hashed
    out in this thread, but I haven't been following the whole convo!

    In my C 101 classes, after we've covered "if" and "else",
    I always throw this program up on the screen and hit the newbies
    with this curveball: "What's this bad boy going to spit out?".

    Well, it's a blue moon when someone nails it. Most of them fall
    for my little gotcha hook, line, and sinker.

    #include <stdio.h>

    const char * english( int const n )
    { const char * result;
    if( n == 0 )result = "zero";
    if( n == 1 )result = "one";
    if( n == 2 )result = "two";
    if( n == 3 )result = "three";
    else result = "four";
    return result; }

    void print_english( int const n )
    { printf( "%s\n", english( n )); }

    int main( void )
    { print_english( 0 );
    print_english( 1 );
    print_english( 2 );
    print_english( 3 );
    print_english( 4 ); }


    That breaks two rules:
    - instructions conditioned by 'if' should have braces,

    I suppose you don't mean

    if (n == value) { result = string; }
    else { result = other; }

    which I'd think doesn't change anything. - So what is it?

    Actually, you should just add explicit 'else' to fix the problem.
    (Here there's no need to fiddle with spurious braces, I'd say.)

    Lack of braces is a smokescreen hiding the second problem.
    Or to put it differently, due to lack of braces the code
    immediately smells bad.

    - when we have the result we should return it immediately.

    This would suffice to fix it, wouldn't it?

    Yes (but see above).

    Once those are fixed code works as expected...

    I find this answer - not wrong, but - problematic for two reasons.
    There's no accepted "general rules" that could get "broken"; it's
    just rules that serve in given languages and application contexts.
    And they may conflict with other "rules" that have been set up to
    streamline code, make it safer, or whatever.

    No general rules, yes. But every sane programmer has _some_ rules.
    My point was that if you adopt reasonable rules, then whole classes
    of potential problems go away.

    --
    Waldek Hebisch

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Waldek Hebisch on Mon Dec 2 08:29:40 2024
    On 01.12.2024 23:23, Waldek Hebisch wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    On 01.12.2024 13:41, Waldek Hebisch wrote:
    Stefan Ram <ram@zedat.fu-berlin.de> wrote:

    My bad if the following instruction structure's already been hashed
    out in this thread, but I haven't been following the whole convo!

    In my C 101 classes, after we've covered "if" and "else",
    I always throw this program up on the screen and hit the newbies
    with this curveball: "What's this bad boy going to spit out?".

    Well, it's a blue moon when someone nails it. Most of them fall
    for my little gotcha hook, line, and sinker.

    #include <stdio.h>

    const char * english( int const n )
    { const char * result;
    if( n == 0 )result = "zero";
    if( n == 1 )result = "one";
    if( n == 2 )result = "two";
    if( n == 3 )result = "three";
    else result = "four";
    return result; }

    void print_english( int const n )
    { printf( "%s\n", english( n )); }

    int main( void )
    { print_english( 0 );
    print_english( 1 );
    print_english( 2 );
    print_english( 3 );
    print_english( 4 ); }


    That breaks two rules:
    - instructions conditioned by 'if' should have braces,

    I suppose you don't mean

    if (n == value) { result = string; }
    else { result = other; }

    which I'd think doesn't change anything. - So what is it?

    Actually, you should just add explicit 'else' to fix the problem.
    (Here there's no need to fiddle with spurious braces, I'd say.)

    Lack of braces is a smokescreen hiding the second problem.
    Or to put it differently, due to lack of braces the code
    immediately smells bad.

    I know what you mean. Though since in the given example it's not
    the braces that correct the code, and I also think that adding the
    braces doesn't remove the "bad smell" (here). - YMMV, of course. -
    For me the smell stems from the use of sequences of 'if' (instead
    of 'switch'), and the lacking 'else' keywords. - Note that the OP's
    original code *had* braces; it nevertheless had a "bad smell", IMO.

    Spurious braces may even make the code less readable; so it depends.
    And thus a "brace rule" can (IME) only be a "rule of thumb" and any
    "codified rule" (see below) should reflect that.


    - when we have the result we should return it immediately.

    This would suffice to fix it, wouldn't it?

    Yes (but see above).

    Once those are fixed code works as expected...

    I find this answer - not wrong, but - problematic for two reasons.
    There's no accepted "general rules" that could get "broken"; it's
    just rules that serve in given languages and application contexts.
    And they may conflict with other "rules" that have been set up to
    streamline code, make it safer, or whatever.

    No general rules, yes. But every sane programmer has _some_ rules.
    My point was that if you adopt reasonable rules, then whole classes
    of potential problems go away.

    I associated the term "rule" with formal coding standards, so that
    I wouldn't call personal coding habits "rules" but rather "rules of
    thumb" (formal coding standards have both). But personal projects
    (and programmers' habits) are anyway not my major concern, while
    coding standards actually are. When you formulate coding standards
    (and I've done that for a couple languages) you often have to walk
    on the edge of what's possible and what's sensible.

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Bart on Mon Dec 2 06:09:27 2024
    Bart <bc@freeuk.com> writes:

    On 30/11/2024 05:25, Tim Rentsch wrote:

    Michael S <already5chosen@yahoo.com> writes:
    [...]
    I remember having much shorter file (core of 3rd-party TCP protocol
    implementation) where compilation with gcc took several seconds.

    Looked at it now - only 22 Klocs.
    Text size in .o - 34KB.
    Compilation time on much newer computer than the one I remembered, with
    good SATA SSD and 4 GHz Intel Haswell CPU - a little over 1 sec. That
    with gcc 4.7.3. I would guess that if I try gcc13 it would be 1.5 to 2
    times longer.
    So, in terms of Kloc/sec it seems to me that time reported by Bart
    is not outrageous. Indeed, gcc is very slow when compiling any source
    several times above average size.
    In this particular case I can not compare gcc to alternative, because
    for a given target (Altera Nios2) there are no alternatives.

    I'm not disputing his ratios on compilation speeds. I implicitly
    agreed to them in my earlier remarks. The point is that the
    absolute times are so small that most people don't care. For
    some reason I can't fathom Bart does care, and apparently cannot
    understand why most other people do not care. My conclusion is
    that Bart is either quite immature or a narcissist. I have tried
    to explain to him why other people think differently than he does,
    but it seems he isn't really interested in having it explained.
    Oh well, not my problem.

    EVERYBODY cares about compilation speeds. [...]

    No, they don't. I accept that you care about compiler speed. What
    most people care about is not speed but compilation times, and as
    long as the times are small enough they don't worry about it.

    Another difference may be relevant here. Based on other comments of
    yours I have the impression that you frequently invoke compilations interactively. A lot of people never do that (or do it only very
    rarely). In a project I am working on now I do builds often,
    including full builds where every .c file is recompiled. But all
    the compilation times together are only a small fraction of the
    total, because doing a build includes lots of other steps, including
    running regression tests. Even if the total compilation time were
    zero the build process wouldn't be appreciably shorter.

    I understand that you care about compiler speed, and that's fine
    with me; more power to you. Why do you find it so hard to accept
    that lots of other people have different views than you do, and
    those people are not all stupid? Do you really consider yourself
    the only smart person in the room?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bart@21:1/5 to Tim Rentsch on Mon Dec 2 14:44:46 2024
    On 02/12/2024 14:09, Tim Rentsch wrote:
    Bart <bc@freeuk.com> writes:

    On 30/11/2024 05:25, Tim Rentsch wrote:

    EVERYBODY cares about compilation speeds. [...]

    No, they don't. I accept that you care about compiler speed. What
    most people care about is not speed but compilation times, and as
    long as the times are small enough they don't worry about it.

    Another difference may be relevant here. Based on other comments of
    yours I have the impression that you frequently invoke compilations interactively. A lot of people never do that (or do it only very
    rarely). In a project I am working on now I do builds often,
    including full builds where every .c file is recompiled. But all
    the compilation times together are only a small fraction of the
    total, because doing a build includes lots of other steps, including
    running regression tests. Even if the total compilation time were
    zero the build process wouldn't be appreciably shorter.

    But it might be appreciably longer if the compilers you used were a lot
    slower! Or needed to be invoked more. Then even you might start to care
    about it.

    You don't care because in your case it is not the bottleneck, and enough
    work has been put into those compilers to ensure they are not even slower.

    (I don't know why regression tests need to feature in every single build.)


    I understand that you care about compiler speed, and that's fine
    with me; more power to you. Why do you find it so hard to accept
    that lots of other people have different views than you do, and
    those people are not all stupid?

    You might also accept that for many, compilation /is/ a bottleneck in
    their work, or at least it introduces an annoying delay.

    Or are you suggesting that the scenario portrayed here:

    https://xkcd.com/303/

    is a complete fantasy?

    Do you really consider yourself
    the only smart person in the room?

    Perhaps the most impatient.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bart@21:1/5 to Janis Papanagnou on Mon Dec 2 18:48:14 2024
    On 02/12/2024 18:19, Janis Papanagnou wrote:
    On 02.12.2024 15:44, Bart wrote:


    If all you want is to _sequentially_ process each single error in
    a source file you don't need a test; all you need is to get the
    error message, to start the editor, edit, and reiterate the compile
    (to get the next error message, and so on). - Very time consuming.

    But as soon as the errors are [all] fixed in a module... - what
    do you do with it? - ...you should test that what you've changed
    or implemented has been done correctly.

    So edit/compile-iterating a single source is more time-consuming
    than fixing it in, let's call it, "batch-mode". And once it's
    error-free the compile times are negligible in the whole process.

    I've struggled to find a suitable real-life analogy.

    All I can suggest is that people have gone to some lengths to justify
    having a car that can only travel at 3 mph around town, rather than 30
    mph (ie 5 vs 50 kph).

    Maybe their town is only a village, so the net difference is negligible.
    Or they rarely drive, or avoid doing so, another way to downplay the inconvenience of such slow wheels.

    The fact is that driving at 3 mph on a clear road is incredibly
    frustrating even when you're not in a hurry to get anywhere!

    Or are you suggesting that the scenario portrayed here:

    https://xkcd.com/303/

    is a complete fantasy?

    It is a comic. - So, yes, it's fantasy. It's worth a scribbling
    on a WC wall but not suited as a sensible base for discussions.

    I would disagree. The reason those work is that people can identify with
    them from their own experience, even if exaggerated for comic effect.

    Otherwise no one would get them.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Bart on Mon Dec 2 19:19:48 2024
    On 02.12.2024 15:44, Bart wrote:
    On 02/12/2024 14:09, Tim Rentsch wrote:
    Bart <bc@freeuk.com> writes:
    On 30/11/2024 05:25, Tim Rentsch wrote:

    EVERYBODY cares about compilation speeds. [...]

    No, they don't. I accept that you care about compiler speed. What
    most people care about is not speed but compilation times, and as
    long as the times are small enough they don't worry about it.

    Another difference may be relevant here. Based on other comments of
    yours I have the impression that you frequently invoke compilations
    interactively. A lot of people never do that (or do it only very
    rarely). In a project I am working on now I do builds often,
    including full builds where every .c file is recompiled. But all
    the compilation times together are only a small fraction of the
    total, because doing a build includes lots of other steps, including
    running regression tests. Even if the total compilation time were
    zero the build process wouldn't be appreciably shorter.

    Yes, a compiler is no interactive tool. (Even if some, or Bart, use it
    that way.) I've also mentioned that upthread already.

    I want to add that there's also other factors in professional projects
    that makes absolute compilation times not the primary issue. Usually
    we organize our code in modules, components, subsystems, etc.

    The 'make' (or similar tools) will work on small subsets, results will (automatically) be part of a regression on unit-test level. Full builds
    will require more time, but the results will be part of a higher-level
    test (requiring yet more time).

    It just makes little sense to only compile (a single file or a whole
    project) if you don't at least test it.

    But also if you omit the tests, the compile's results are typically
    instantly available, since there's usually only few unit instances
    compiled, where each is comparably small. In case one compiles mostly monolithic software he gets worse response-characteristics, of course.

    Multiple compiles for the same thing, as Bart seem to employ, makes
    sense to fix compile-time (coding-)errors after a significant amount
    of code has changed. That's where habits get relevant; Bart said that
    he likes the (IMO costly) piecewise incremental fix/compile cycles[*],
    I understand that this way to work (with 'make' or triggered by hand)
    will lead to observable delays. Since Bart will likely not change his
    habits (or his code organization) the speed of a single compilation
    is relevant to him. - There's thus nothing we have left to discuss.

    [*] Where I (for example) prefer to fix, if not all, at least a larger
    set of errors in one go.


    But it might be appreciably longer if the compilers you used were a lot slower! Or needed to be invoked more. Then even you might start to care
    about it.

    You don't care because in your case it is not the bottleneck, and enough
    work has been put into those compilers to ensure they are not even slower.

    (I don't know why regression tests need to feature in every single build.)

    Tests are optional, it doesn't need to be done "every time".

    If all you want is to _sequentially_ process each single error in
    a source file you don't need a test; all you need is to get the
    error message, to start the editor, edit, and reiterate the compile
    (to get the next error message, and so on). - Very time consuming.

    But as soon as the errors are [all] fixed in a module... - what
    do you do with it? - ...you should test that what you've changed
    or implemented has been done correctly.

    So edit/compile-iterating a single source is more time-consuming
    than fixing it in, let's call it, "batch-mode". And once it's
    error-free the compile times are negligible in the whole process.


    I understand that you care about compiler speed, and that's fine
    with me; more power to you. Why do you find it so hard to accept
    that lots of other people have different views than you do, and
    those people are not all stupid?

    You might also accept that for many, compilation /is/ a bottleneck in
    their work, or at least it introduces an annoying delay.

    And there are various ways to address that.


    Or are you suggesting that the scenario portrayed here:

    https://xkcd.com/303/

    is a complete fantasy?

    It is a comic. - So, yes, it's fantasy. It's worth a scribbling
    on a WC wall but not suited as a sensible base for discussions.


    Do you really consider yourself
    the only smart person in the room?

    Perhaps the most impatient.

    Don't count on that.

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Janis Papanagnou on Wed Dec 4 17:34:59 2024
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:

    On 30.11.2024 05:40, Tim Rentsch wrote:

    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:

    On 30.11.2024 00:29, Tim Rentsch wrote:

    Bart <bc@freeuk.com> writes:

    On 28/11/2024 17:28, Janis Papanagnou wrote:

    But we're speaking about compilation times. [...]

    You can make a similar argument about turning on the light switch
    when entering a room. Flicking light switches is not something you
    need to do every few seconds, but if the light took 5 seconds to
    come on (or even one second), it would be incredibly annoying.

    This analogy sounds like something a defense attorney would say who
    has a client that everyone knows is guilty.

    Intentionally or not; it's funny to respond to an analogy with an
    analogy. :-}

    My statement was not an analogy. Similar is not the same as
    analogous.

    It's of course (and obviously) not the same; it's just a
    similar term where the semantics of both terms have an overlap.

    (Not sure why you even bothered to reply and nit-pick here.

    It's because you thought it was just a nit-pick that I bothered
    to reply.

    But with your habit you seem to have just missed the point;
    the comparison of your reply-type with Bart's argumentation.)

    If you think they are the same then it is you who has missed the
    point.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Purgert@21:1/5 to All on Thu Dec 5 10:51:51 2024
    On 2024-11-30, Rosario19 wrote:
    On Wed, 20 Nov 2024 12:31:35 -0000 (UTC), Dan Purgert wrote:

    On 2024-11-16, Stefan Ram wrote:
    Dan Purgert <dan@djph.net> wrote or quoted:
    if (n==0) { printf ("n: %u\n",n); n++;}
    if (n==1) { printf ("n: %u\n",n); n++;}
    if (n==2) { printf ("n: %u\n",n); n++;}
    if (n==3) { printf ("n: %u\n",n); n++;}
    if (n==4) { printf ("n: %u\n",n); n++;}
    printf ("all if completed, n=%u\n",n);

    above should be equivalent to this

    for(;n>=0&&n<5;++n) printf ("n: %u\n",n);
    printf ("all if completed, n=%u\n",n);

    Sure, but fir's original posting in
    MID <3deb64c5b0ee344acd9fbaea1002baf7302c1e8f@i2pn2.org>

    was a contrived sequence to the effect of
    if (n==0) { //do something }
    if (n==1) { //do something }
    if (n==2) { //do something }
    if (n==3) { //do something }
    if (n==4) { //do something }

    So, I merely took the contrived sequence, and made "do something" trip
    each condition.

    Stefan's example from a few posts back is better:

    Well, it's a blue moon when someone nails it. Most of them fall
    for my little gotcha hook, line, and sinker.

    #include <stdio.h>

    const char * english( int const n )
    { const char * result;
    if( n == 0 )result = "zero";
    if( n == 1 )result = "one";
    if( n == 2 )result = "two";
    if( n == 3 )result = "three";
    else result = "four";
    return result; }

    void print_english( int const n )
    { printf( "%s\n", english( n )); }

    int main( void )
    { print_english( 0 );
    print_english( 1 );
    print_english( 2 );
    print_english( 3 );
    print_english( 4 ); }

    --
    |_|O|_|
    |_|_|O| Github: https://github.com/dpurgert
    |O|O|O| PGP: DDAB 23FB 19FA 7D85 1CC1 E067 6D65 70E5 4CE7 2860

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Bart on Thu Dec 5 14:41:49 2024
    On 02.12.2024 19:48, Bart wrote:
    On 02/12/2024 18:19, Janis Papanagnou wrote:
    On 02.12.2024 15:44, Bart wrote:


    If all you want is to _sequentially_ process each single error in
    a source file you don't need a test; all you need is to get the
    error message, to start the editor, edit, and reiterate the compile
    (to get the next error message, and so on). - Very time consuming.

    But as soon as the errors are [all] fixed in a module... - what
    do you do with it? - ...you should test that what you've changed
    or implemented has been done correctly.

    So edit/compile-iterating a single source is more time-consuming
    than fixing it in, let's call it, "batch-mode". And once it's
    error-free the compile times are negligible in the whole process.

    I've struggled to find a suitable real-life analogy.

    To argue in the topical domain is always better than making up
    (typically non-fitting) real-life analogies.

    (The same with your light-bulb analogy; I was inclined to answer
    on that level, and could have even affirmed my point by it, but
    decided that it's not the appropriate way to discuss the simple
    processual issue, that I tried to explain you.)


    All I can suggest is that people have gone to some lengths to justify
    having a car that can only travel at 3 mph around town, rather than 30
    mph (ie 5 vs 50 kph).

    (You certainly meant km/h.)

    Since you like analogies, let me tell you that I recently got
    aware that on a city-highway(!) in my city they had introduced
    a speed limit of 30 km/h (about 20mph); for reasons.


    Maybe their town is only a village, so the net difference is negligible.
    Or they rarely drive, or avoid doing so, another way to downplay the inconvenience of such slow wheels.

    The fact is that driving at 3 mph on a clear road is incredibly
    frustrating even when you're not in a hurry to get anywhere!

    There are many more factors than frustration to be considered;
    safety, pollution, noise, and optimal throughput, for example.
    Similar as with development processes; if you have just one
    factor (speed?) on your scale you might miss the overall goals.

    (If you want to quickly get anywhere within the metropolitan
    boundaries you just take the bicycle or the public transport
    facilities. Just BTW. In other countries' cities there may be
    other situations, preconditions and regulations.)

    Janis

    [...]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Tim Rentsch on Thu Dec 5 14:21:41 2024
    On 05.12.2024 02:34, Tim Rentsch wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 30.11.2024 05:40, Tim Rentsch wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 30.11.2024 00:29, Tim Rentsch wrote:
    Bart <bc@freeuk.com> writes:
    On 28/11/2024 17:28, Janis Papanagnou wrote:

    But we're speaking about compilation times. [...]

    You can make a similar argument about turning on the light switch
    when entering a room. Flicking light switches is not something you
    need to do every few seconds, but if the light took 5 seconds to
    come on (or even one second), it would be incredibly annoying.

    This analogy sounds like something a defense attorney would say who
    has a client that everyone knows is guilty.

    Intentionally or not; it's funny to respond to an analogy with an
    analogy. :-}

    My statement was not an analogy. Similar is not the same as
    analogous.

    It's of course (and obviously) not the same; it's just a
    similar term where the semantics of both terms have an overlap.

    (Not sure why you even bothered to reply and nit-pick here.

    It's because you thought it was just a nit-pick that I bothered
    to reply.

    But with your habit you seem to have just missed the point;
    the comparison of your reply-type with Bart's argumentation.)

    If you think they are the same then it is you who has missed the
    point.

    (After the nit-pick level you seem to have now reached the
    Kindergarten niveau of communication. - And no substance as so
    often in contexts where you cannot copy/paste a "C" standard
    text passage.)

    The point was; you were both making comparisons by expressing
    similarities - "a similar argument" [Bart] and "sounds like"
    [Tim]; you both expressed an opinion and backed that up by
    formulating similarities; Bart (unnecessarily leaving the well
    disputable IT context) by his light bulbs, and you (more on a
    personal behavior level, unsurprisingly) comparing his habits
    with [also a prejudice] other professions' habits (attorneys).

    (Again, I wondered why you even bothered to reply. My original
    reply wasn't even meant disrespectful; I was just amused. -
    But meanwhile, given your response habits, I better ignore you
    again, especially since you don't want to contribute but prefer
    playing the troll.)

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Bart on Thu Dec 5 16:24:10 2024
    Bart <bc@freeuk.com> writes:

    On 02/12/2024 14:09, Tim Rentsch wrote:

    Bart <bc@freeuk.com> writes:

    On 30/11/2024 05:25, Tim Rentsch wrote:

    EVERYBODY cares about compilation speeds. [...]

    No, they don't. I accept that you care about compiler speed.
    What most people care about is not speed but compilation times,
    and as long as the times are small enough they don't worry about
    it.

    Another difference may be relevant here. Based on other comments
    of yours I have the impression that you frequently invoke
    compilations interactively. A lot of people never do that (or do
    it only very rarely). In a project I am working on now I do
    builds often, including full builds where every .c file is
    recompiled. But all the compilation times together are only a
    small fraction of the total, because doing a build includes lots
    of other steps, including running regression tests. Even if the
    total compilation time were zero the build process wouldn't be
    appreciably shorter.

    But it might be appreciably longer if the compilers you used were
    a lot slower! Or needed to be invoked more. [...]

    I concede your point. If things were different they wouldn't
    be the same.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Waldek Hebisch@21:1/5 to Bart on Fri Dec 6 23:30:40 2024
    Bart <bc@freeuk.com> wrote:
    On 01/12/2024 13:04, Waldek Hebisch wrote:
    Bart <bc@freeuk.com> wrote:
    On 28/11/2024 12:37, Michael S wrote:
    On Wed, 27 Nov 2024 21:18:09 -0800
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:


    c:\cx>tm gcc sql.c #250Kloc file
    TM: 7.38

    Your example illustrates my point. Even 250 thousand lines of
    source takes only a few seconds to compile. Only people nutty
    enough to have single source files over 25,000 lines or so --
    over 400 pages at 60 lines/page! -- are so obsessed about
    compilation speed.

    My impression was that Bart is talking about machine-generated code.
    For machine generated code 250Kloc is not too much.

    This file mostly comprises sqlite3.c which is a machine-generated
    amalgamation of some 100 actual C files.

    You wouldn't normally do development with that version, but in my
    scenario, where I was trying to find out why the version built with my
    compiler was buggy, I might try adding debug info to it then building
    with a working compiler (eg. gcc) to compare with.

    Even in the context of developing a compiler I would not blindly
    run many compilations of a large file.

    Difficult bugs always occur in larger codebases, but with C these
    are in a language that I can't navigate, and in programs which
    are not mine, and which tend to be badly written, bristling with
    typedefs and macros.

    It could take a week to track down where the error might be ...

    It could be. You could declare that the program is hopeless, or
    do what is needed, which frequently means making effective use of
    the available debugging features. For example, I got a strange
    crash. Looking at the data in the debugger suggested that the
    data was malformed. So I used data breakpoints to figure out
    which instruction initialized the data. That needed several runs
    of the program, in each run looking at what happened to the
    suspected memory location. At the end I localized the problem and
    the rest was easy.

    Some problems are easy, for example a significant percentage of
    segfaults: you have something which is not a valid address,
    and frequently you immediately see why the address is wrong and
    how to fix it. Still, finding this usually takes longer
    than compilation.

    At the first stage I would debug the compiled
    program, to find out what is wrong with it.

    ... within the C program. Except there's nothing wrong with the C
    program! It works fine with a working compiler.

    The problem will be in the generated code, so in an entirely different program.

    Of course the problem is in the generated code. But debug info (I
    had at least _some_ debug info; apparently you do not have it)
    shows you which part of the source is responsible for given
    machine code. And you can see the data, so you can see what is
    happening in the generated program. And you have the C source, so
    you can see what should happen. Once you know the place where
    "what is happening" differs from "what should happen" you can
    normally produce a quite small reproducing example.

    So normal debugging tools are useful when several sets of source
    code are involved, in different languages, or the error occurs
    in the second-generation version of either the self-hosted tool,
    or the program under test if it is to do with languages.

    (For example, I got tcc.c working at one point. My generated tcc.exe
    could compile tcc.c, but that second-generation tcc.c didn't work.)

    Clear, you work in stages: first you find out what is wrong with
    the second-generation tcc.exe. Then you find the piece of tcc.c
    that was miscompiled by the first-generation tcc.exe (producing
    the wrong second-generation compiler). Then you find the piece of
    tcc.c which was responsible for this miscompilation. And finally
    you look at why your compiler miscompiled this piece of tcc.c.

    Tedious, yes. It is easier if you have a good testsuite, that is,
    a collection of small programs that exercise various constructs
    and potentially problematic combinations.

    Anyway, most of the work involves executing programs in the
    debugger and observing critical things. Re-creating executables
    is rare in comparison. The main point where compiler speed
    matters is the time to run the compiler testsuite.

    After that I would try to minimize the testcase, removing code
    which does not contribute to the bug.

    Again, there is nothing wrong with the C program, but in the code
    generated for it. The bug can be very subtle, but it usually turns out
    to be something silly.

    Removing code from 10s of 1000s of lines (or 250Kloc for sql) is
    not practical. But still, the aim is to isolate some code which
    can be used to recreate the issue in a smaller program.

    If you have a "good" version (say one produced by 'gcc' or by an
    earlier working version of your compiler), then you can isolate
    the problem by linking parts produced by different compilers.
    Even if you have one huge file, typically you can split it into
    parts (if it is one huge function normally it is possible to
    split it into smaller ones). Yes, it is work, but getting a
    quality product needs work.

    Debugging can involve comparing two versions, one working, the
    other not, looking for differences. And here there may be
    tracking statements added.

    If the only working version is via gcc, then that's bad news because it
    makes the process even more of a PITA.

    Well, IME tracking statements frequently produce too much or too
    little data. When dealing with C code I tend to depend more on
    the debugger, setting breakpoints in crucial places and examining
    data there. Extra printing functions can help; for example gcc
    has printing functions for its main data structures. Such
    functions can be called from the debugger and give nicer output
    than generic debugger functions. But even if you need extra
    printing functions you can put them in a separate file, compile
    once and use multiple times.

    I added an interpreter mode to my IL, because I assumed that
    would give a solid, reliable reference implementation to compare
    against.

    It turned out to be even more buggy than the generated native code!

    (One problem was to do with my stdarg.h header which implements VARARGS
    used in function definitions. It assumes the stack grows downwards.

    This is true on most machines, but not all.

    In
    my interpreter, it grows downwards!)

    You probably meant upwards? And handling such things is natural
    when you have portability in mind: either you parametrise stdarg.h
    so that it works for both stack directions, or you make sure that
    interpreter and compiler use the same direction (the latter seems
    to be much easier). Actually, I think that the most natural way is
    to have the data structure layout in the interpreter as close as
    possible to the compiler's data layout. Of course, there are some
    unavoidable differences; the interpreter needs registers for its
    operation, so some variables that could be in registers in
    compiled code will end up in the stack frame.

    That involves several compilations
    of files with quickly decreasing sizes.

    Tim isn't asking the right questions (or any questions!). WHY does
    gcc take so long to generate indifferent code when the task can
    clearly be done at least an order of magnitude faster?

    The simple answer is: users tolerate long compile time. If users
    abandoned 'gcc' to some other compiler due to long compile time,
    then 'gcc' developers would notice.

    People use gcc. They come to depend on its features, or they might
    use (perhaps unknowingly) some extensions. On Windows, gcc includes
    some headers and libraries that belong to Linux, but other
    compilers don't provide them.

    The result is that if they were to switch to a smaller, faster
    compiler, their program may not work.

    They'd have to use it from the start. But then they may want to use
    libraries which only work with gcc ...

    Well, you see that there are reasons to use 'gcc'. Long ago I
    produced an image processing DLL for Windows. The first version
    was developed on Linux using 'gcc' and then compiled on Windows
    using Borland C. It turned out that in Borland C 'setjmp/longjmp'
    did not work, so I had to work around this. Not nice, but
    manageable. At that time the C standard did not include a function
    to round floats to integers and that proved to be problematic.
    The C default, that is truncation, produced artifacts that were
    not acceptable. So I used an emulation of rounding based on
    'floor', which worked OK, but turned out to be slow (something
    like 70% of runtime went into rounding). So I replaced this with
    assembler code. With Borland C I had to call a separate assembler
    routine, which had some overhead.

    The next version was cross-compiled on Linux using gcc. This
    version used inline assembly for rounding and was significantly
    faster than what Borland C produced. Note: images to process were
    largish (think of say 12000 by 20000 pixels) and speed was an
    important factor. So using 'gcc'-specific code was IMO justified
    (this code was used conditionally; other compilers would get the
    slow portable version using 'floor').

    You need to improve your propaganda for faster C compilers...

    I actually don't know why I care. I get the benefit of my fast tools
    every day; they're a joy to use. So I'm not bothered that other people
    are that tolerant of slow, cumbersome build systems.

    But then, people in this group do like to belittle small, fast products
    (tcc for example as well as my stuff), and that's where it gets annoying.

    I tried tcc compiling TeX. Long ago it did not work due to limitations
    of tcc. This time it worked. Small comparison on main file (19062
    lines):

    Command         time    code size   data size
    tcc -g         0.017       290521        1188
    tcc            0.015       290521        1188
    gcc -O0 -g     0.440       248467          14
    gcc -O0        0.413       248467          14
    gcc -O -g      1.385       167565           0
    gcc -O         1.151       167565           0
    gcc -Os -g     1.998       142336           0
    gcc -Os        1.724       142336           0
    gcc -O2 -g     2.683       207913           0
    gcc -O2        2.257       207913           0
    gcc -O3 -g     3.510       255909           0
    gcc -O3        2.934       255909           0
    clang -O0 -g   0.302       232755          14
    clang -O0      0.189       232755          14
    clang -O -g    1.996       223410           0
    clang -O       1.683       223410           0
    clang -Os -g   1.693       154421           0
    clang -Os      1.451       154421           0
    clang -O2 -g   2.774       259569           0
    clang -O2      2.359       259569           0
    clang -O3 -g   2.970       280235           0
    clang -O3      2.537       280235           0

    I have duly provided both the time when using '-g' and without.
    Both are supposed to produce the same code (so code and data
    sizes are the same too), but you can see that '-g'
    measurably increases compile time. AFAIK compiler data
    structures contain slots for debug info even if '-g' is
    not given and the compiler generates no debug info. So the
    actual cost of supporting '-g' is higher than the difference:
    you pay part of this cost even if you do not use the
    capability.

    ATM I do not have data handy to compare runtimes (TeX needs
    extra data to do useful work), so I provide code and data
    size as a proxy. As you can see, even at -O0 gcc and clang
    manage to put almost all data into instructions (actually
    in tex.c _all_ initialized data is constant), while tcc
    keeps it as data, which requires extra instructions to
    access. gcc at -O and -Os and clang at -Os produce code
    which is about half the size of the tcc result. Some part
    of this may be due to using smaller instructions, but most
    is likely because the gcc and clang results simply have far
    fewer instructions. At higher optimization levels code
    size grows; this is probably due to inlining and code
    duplication. That usually gives some small speedup at the
    cost of bigger code, but one would have to measure
    (sometimes attempts at optimization backfire and lead
    to slower code).

    Anyway, 19062 lines is much larger than typical file that
    I work with and even for such size compile time is reasonable.
    Maybe less typical is modest use of include files, tex.c
    uses few standard C headers and 1613 lines of project-specific
    headers. Still, there are macros and macro-expanded result
    is significantly bigger than the source.

    In the past TeX execution time correlated reasonably well with
    Dhrystone. On Dhrystone, tcc-compiled code is about 4 times
    slower than gcc/clang, so one can expect tcc-compiled TeX to
    be significantly slower than one compiled by gcc or clang.

    --
    Waldek Hebisch

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Keith Thompson on Sat Dec 7 11:58:49 2024
    On 06.12.2024 00:51, Keith Thompson wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 02.12.2024 19:48, Bart wrote:
    [...]
    All I can suggest is that people have gone to some lengths to justify
    having a car that can only travel at 3 mph around town, rather than 30
    mph (ie 5 vs 50 kph).

    (You certainly meant km/h.)

    Both "kph" and "km/h" are common abbreviations for "kilometers per
    hour". Were you not familiar with "kph"?

    No. Must be a convention depending on the cultural context of the
    locality. ("kph", if anything, is "kilopond-hour", per standard.)

    So thanks for pointing that out. (I forget sometimes that in some
    countries there's a reluctance using the [established] standards,
    and I certainly don't know about all the cultural peculiarities of
    the [many] existing countries, even if they are as dominating as
    the USA is [or other English speaking or influenced countries].)

    We're used to the SI units and metric form, although hereabouts
    some folks also (informally, but wrongly) pronounce it as "k-m-h".

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bart@21:1/5 to Waldek Hebisch on Sat Dec 7 12:40:57 2024
    On 06/12/2024 23:30, Waldek Hebisch wrote:
    Bart <bc@freeuk.com> wrote:

    (For example, I got tcc.c working at one point. My generated tcc.exe
    could compile tcc.c, but that second-generation tcc.c didn't work.)

    Clear, you work in stages: first you find out what is wrong with second-generation tcc.exe.

    Ha, ha, ha!

    While C /can/ be written reasonably clearly, tcc sources are more
    typical. Very dense, mixed-up lower and upper case everywhere,
    apparent over-use of macros, eg:

    for_each_elem(symtab_section, 1, sym, ElfW(Sym)) {
        if (sym->st_shndx == SHN_UNDEF) {
            name = (char *) symtab_section->link->data + sym->st_name;
            sym_index = find_elf_sym(s1->dynsymtab_section, name);

    If I was looking to develop this product then it might be worth spending
    days or weeks learning how it all works. But it's not worth mastering
    this codebase inside out just to discover I wrote 0 instead of 1
    somewhere in my compiler.

    I need whatever error it is to manifest itself in a simpler way.
    Or have two versions (eg. one interpreted, the other native code)
    that give different results. The problem with this app is that
    those different results appear too far down the line; I don't
    want to trace a billion instructions first.

    So, when I get back to it, I'll test other open source C code. (The
    annoying thing though is that either it won't compile for reasons I've
    lost interest in, or it works completely fine.)

    In
    my interpreter, it grows downwards!)

    You probably meant upwards?

    Yes.

    And handling such things is natural
    when you have portablity in mind, either you parametrise stdarg.h
    so that it works for both stack directions, or you make sure that
    interpreter and compiler use the same direction (the later seem to
    be much easier).

    This is quite a tricky one actually. There is currently conditional
    code in my stdarg.h that detects whether the compiler has set a
    flag saying the result will be interpreted. But it doesn't always
    know that.

    For example, the compiler might be told to do -E (preprocess) and the
    result compiled later. The stack direction is baked into the output.

    Or it will do -p (generate discrete IL), where it doesn't know whether
    that will be interpreted.

    But this is not a serious issue; the interpreted option is for either
    debugging or novelty uses.


    Actually, I think that most natural way is to
    have data structure layout in the interpreter to be as close as
    possible to compiler data layout.

    I don't want my hand forced in this. The point of interpreting is
    to be independent of the hardware. A downward-growing stack is
    unnatural.

    They'd have to use it from the start. But then they may want to use
    libraries which only work with gcc ...

    Well, you see that there are reasons to use 'gcc'.

    Self-perpetuating ones, which are the wrong reasons.


    Next version was cross-compiled on Linux using gcc. This version
    used inline assembly for rounding and was significantly faster
    than what Borland C produced. Note: images to process were
    largish (think of say 12000 by 20000 pixels) and speed was
    important factor. So using 'gcc' specific code was IMO justified
    (this code was used conditionally, other compilers would get
    slow portable version using 'floor').

    I have a little image editor written entirely in interpreted code.
    (It was supposed to be a mixed-language project, but that's some
    way off.)

    However it is just about usable. Eg. inverting the colours
    (negative to positive etc) of a 6Mpix colour image takes 1/8th of
    a second. Splitting into separate R,G,B 8-bit planes takes half a
    second. This is with bytecode working on a pixel at a time.

    It uses no optimised code in the interpreter, only a mildly
    accelerated dispatcher.


    You need to improve your propaganda for faster C compilers...

    I actually don't know why I care. I get the benefit of my fast tools
    every day; they're a joy to use. So I'm not bothered that other people
    are that tolerant of slow, cumbersome build systems.

    But then, people in this group do like to belittle small, fast products
    (tcc for example as well as my stuff), and that's where it gets annoying.

    I tried tcc compiling TeX. Long ago it did not work due to limitations
    of tcc. This time it worked. Small comparison on main file (19062
    lines):

    Command         time    code size   data size
    tcc -g         0.017       290521        1188
    tcc            0.015       290521        1188
    gcc -O0 -g     0.440       248467          14
    gcc -O0        0.413       248467          14

    This is demonstrating that tcc is translating C code at over 1
    million lines per second, and generating binary code at 17MB per
    second. You're not impressed by that?

    Here are a couple of reasonably substantial one-file programs that can
    be run, both interpreters:

    https://github.com/sal55/langs/blob/master/lua.c

    This is a one-file Lua interpreter, which I modified to take input from
    a file. (For original, see comment at start.)

    On my machine, these are typical results:

    Compiler      compile time   size    runtime
    gcc -s -O3    14   secs      378KB   3.0 secs
    gcc -s -O0    3.3  secs      372KB   10.0 secs
    tcc           0.12 secs      384KB   8.5 secs
    cc            0.14 secs      315KB   8.3 secs

    The runtime refers to running this Fibonacci test (fib.lua):

    function fibonacci(n)
        if n<3 then
            return 1
        else
            return fibonacci(n-1) + fibonacci(n-2)
        end
    end

    for n = 1, 36 do
        f = fibonacci(n)
        io.write(n, " ", f, "\n")
    end

    This one is a version of my interpreter, minus ASM acceleration,
    transpiled to C, and for Linux:

    https://github.com/sal55/langs/blob/master/qc.c

    Compile using for example:

    gcc qc.c -oqc -fno-builtin -lm -ldl
    tcc qc.c -oqc -fdollars-in-identifiers -lm -ldl

    The input there can be (fib.q):

    func fib(n)=
        if n<3 then
            1
        else
            fib(n-1)+fib(n-2)
        fi
    end

    for i to 36 do
        println i, fib(i)
    od

    Run like this:

    ./qc -nosys fib

    On my Windows machine, the gcc-O3-compiled version takes 4.1
    seconds, and tcc is 9.3 seconds. The gap is narrower than with
    the Lua version, which uses a C style that depends more on
    function inlining. (Note that being in one file allows gcc to do
    whole-program optimisations.)

    My cc-compiled version runs in 5.1 seconds, so only 25% slower than
    gcc-O3. It also produces a 360KB executable, compared with gcc's 467KB,
    even with -s. tcc's code is about the same as gcc-O3.

    (My cc-compiler doesn't yet have the optimising pass that makes
    code smaller. The original-source qc project builds to 266KB with
    that pass enabled, while gcc's -Os on qc.c manages 280KB.)

    But my 266KB version runs faster than gcc's 280KB! And the
    accelerated code runs 5 times as fast (6 secs vs 1.22 secs).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)